| column | type | length / value range |
|---|---|---|
| title | list | lengths 0–18 |
| author | list | lengths 0–4.41k |
| authoraffiliation | list | lengths 0–6.45k |
| venue | list | lengths 0–9 |
| abstract | string | lengths 1–37.6k |
| doi | string (nullable ⌀) | lengths 10–114 |
| pdfurls | list (nullable ⌀) | lengths 1–3 |
| corpusid | int64 | 158–259M |
| arxivid | string | lengths 9–16 |
| pdfsha | string | lengths 40–40 |
| text | string | lengths 66–715k |
| github_urls | list | lengths 0–36 |
[
"A Time-Orbiting Potential Trap for Bose-Einstein Condensate Interferometry",
"A Time-Orbiting Potential Trap for Bose-Einstein Condensate Interferometry"
]
| [
"J M Reeves \nPhysics Department\nUniversity of Virginia\n22904 *CharlottesvilleVA\n",
"O Garcia \nPhysics Department\nUniversity of Virginia\n22904 *CharlottesvilleVA\n",
"B Deissler \nPhysics Department\nUniversity of Virginia\n22904 *CharlottesvilleVA\n",
"K L Baranowski \nPhysics Department\nUniversity of Virginia\n22904 *CharlottesvilleVA\n",
"K J Hughes \nPhysics Department\nUniversity of Virginia\n22904 *CharlottesvilleVA\n",
"C A Sackett \nPhysics Department\nUniversity of Virginia\n22904 *CharlottesvilleVA\n"
]
| [
"Physics Department\nUniversity of Virginia\n22904 *CharlottesvilleVA",
"Physics Department\nUniversity of Virginia\n22904 *CharlottesvilleVA",
"Physics Department\nUniversity of Virginia\n22904 *CharlottesvilleVA",
"Physics Department\nUniversity of Virginia\n22904 *CharlottesvilleVA",
"Physics Department\nUniversity of Virginia\n22904 *CharlottesvilleVA",
"Physics Department\nUniversity of Virginia\n22904 *CharlottesvilleVA"
]
| []
| We describe a novel atom trap for Bose-Einstein condensates of 87 Rb to be used in atom interferometry experiments. The trap is based on a time-orbiting potential waveguide. It supports the atoms against gravity while providing weak confinement to minimize interaction effects. We observe harmonic oscillation frequencies (ω x ,ω y ,ω z ) as low as 2π × (6.0, 1.2, 3.3) Hz. Up to 2 × 10 4 condensate atoms have been loaded into the trap, at estimated temperatures as low as 850 pK.We anticipate that interferometer measurement times of 1 s or more should be achievable in this device. | 10.1103/physreva.72.051605 | [
"https://arxiv.org/pdf/cond-mat/0509691v2.pdf"
]
| 118,998,760 | cond-mat/0509691 | 40662d323b78100ccf0eb7cca51864f441f7d2e5 |
A Time-Orbiting Potential Trap for Bose-Einstein Condensate Interferometry
27 Oct 2005
J M Reeves
Physics Department
University of Virginia
Charlottesville, VA 22904
O Garcia
Physics Department
University of Virginia
Charlottesville, VA 22904
B Deissler
Physics Department
University of Virginia
Charlottesville, VA 22904
K L Baranowski
Physics Department
University of Virginia
Charlottesville, VA 22904
K J Hughes
Physics Department
University of Virginia
Charlottesville, VA 22904
C A Sackett
Physics Department
University of Virginia
Charlottesville, VA 22904
A Time-Orbiting Potential Trap for Bose-Einstein Condensate Interferometry
27 Oct 2005 (Dated: August 20, 2018). PACS numbers: 39.20.+q, 03.75.-b, 03.75.Be. *Electronic address: jmr5p@virginia.edu
We describe a novel atom trap for Bose-Einstein condensates of 87Rb to be used in atom interferometry experiments. The trap is based on a time-orbiting potential waveguide. It supports the atoms against gravity while providing weak confinement to minimize interaction effects. We observe harmonic oscillation frequencies (ω_x, ω_y, ω_z) as low as 2π × (6.0, 1.2, 3.3) Hz. Up to 2 × 10^4 condensate atoms have been loaded into the trap, at estimated temperatures as low as 850 pK. We anticipate that interferometer measurement times of 1 s or more should be achievable in this device.
Atom interferometry with Bose-Einstein condensates has drawn a considerable amount of interest due to the potential for high-precision measurements [1]. The fundamental limit on the sensitivity of an atom-based Sagnac interferometer, for example, exceeds a photon-based interferometer by a factor of 10^11, for middle-weight atoms and optical-wavelength light. Sensor applications are also more numerous for an atom interferometer, given that atoms are affected by electric and magnetic fields while photons are not. This heightened sensitivity also amplifies the effects of environmental noise, imposing practical limits on the sensitivity that can be obtained. Nonetheless, the best gyroscope on record is an atom interferometer [2].
Using a Bose-Einstein condensate for interferometry is appealing because the intrinsic advantages of thermal atoms are considerably increased. Slow velocities and long coherence lengths allow condensates to exhibit greater sensitivity per atom than a thermal cloud with similar number of atoms. Though producing and working with condensates remains challenging, several groups have demonstrated Mach-Zehnder or Michelson interferometers using condensates [3,4,5,6,7,8,9,10]. However, all these experiments have been limited to measurement times of roughly ten milliseconds or less. In this paper we discuss a novel atom waveguide that we expect will permit significantly longer measurement times.
In the design of a condensate interferometer, one must decide how the atoms will be transported through the device. The simplest method is to orient the axis of the device vertically, allowing the atoms to fall freely under the influence of gravity [3,6,8]. While this technique introduces no additional fields or dephasing effects, the measurement time is limited by the speed at which the condensate falls. However, BEC experiments generically suffer from low production rates. This reduces the signal-to-noise ratio, since the statistical fluctuations in phase scale as N^{-1/2} for atom number N. While thermal atomic beam experiments can produce 10^9 atoms/s, condensate production rates are more typically 10^5 atoms/s. In order to make up for these low numbers, long interaction times will be required so that the overall phase is increased. This makes interferometers based on falling atoms unattractive, though some of the difficulties might be circumvented using either a fountain geometry [11] or a magnetic levitation approach [12].
The alternative possibility is to use trapped atoms. Condensate interferometers using atoms confined by either magnetic [5,9,13] or optical [4,7,10] fields have been demonstrated. Measurement times in these devices have been limited for a variety of reasons, but a common concern is the effect of interatomic interactions, which can introduce phase noise and cause spatial distortions in the cloud [4,10,14]. Confinement also imposes severe geometrical constraints due to the need to avoid uncontrolled motional excitations [13,15].
To avoid these problems, one wants a trap capable of holding the atoms against gravity but otherwise as weakly confining as possible. Weak three-dimensional confinement has previously been observed [16], but in this paper we present a novel weakly confining waveguide that is particularly well-suited for the demands of atom interferometry.
The waveguide is illustrated in Fig. 1. It is based on a four-wire linear quadrupole and uses the time-orbiting potential (TOP) technique [17,18,19]. Four current-carrying rods provide a linear quadrupole field, with the zero line at the center. A rotating bias field pushes the zero away from the atoms to prevent Majorana losses. We preferred the TOP to other options because of its noise-reduction effects. Our bias field rotates at about 10 kHz, and the atomic spins follow adiabatically. Because of this, any slowly varying magnetic fields or other environmental noise coupling to the spins will tend to average out. The atoms do become sensitive to noise near 10 kHz, but we have found that most of the magnetic noise in our lab has frequencies well below this.
Conventional TOP traps are not especially weak. The obvious way to reduce the confinement strength is to reduce the magnetic field amplitude, but we cannot lower the force below what is required to counter gravity. The solution we have found is to oscillate not only the bias field, but also the quadrupole field. With the appropriate choice of phase, this causes the field zero to oscillate back and forth above the atoms. The atoms are constantly attracted to the overhead zero, and we can weaken the confinement further than would otherwise be possible.
To understand this mechanism, suppose the oscillating quadrupole field is
$$\mathbf{B}_Q = B'_Q\,(x\,\hat{x} - z\,\hat{z})\cos\Omega t \qquad (1)$$
where $\hat{x}$ and $\hat{z}$ are the transverse directions and $\hat{y}$ is along the axis of the waveguide. The $\hat{z}$ direction is vertical. The bias field is
$$\mathbf{B}_0 = B_0\,(\hat{x}\sin\Omega t + \hat{z}\cos\Omega t) \qquad (2)$$
with rotation frequency Ω = 11.9 kHz. The resulting time-averaged field magnitude, to second order in the coordinates, is
$$|B| = B_0 - \frac{1}{2}B'_Q\,z + \frac{B_Q'^{\,2}}{16 B_0}\left(3x^2 + z^2\right), \qquad (3)$$
providing a total potential energy
$$U = \mu |B| + mgz = \mu B_0 - \frac{1}{2}\mu B'_Q\,z + mgz + \frac{1}{2}m\left(\omega_x^2 x^2 + \omega_z^2 z^2\right) \qquad (4)$$
for atomic mass m, gravitational acceleration g, and magnetic moment μ. We set the gradient $B'_Q = 2mg/\mu$ to support the atoms against gravity and obtain trap frequencies
$$\omega_x = \left(\frac{3mg^2}{2\mu B_0}\right)^{1/2} \quad \mathrm{and} \quad \omega_z = \frac{\omega_x}{\sqrt{3}}. \qquad (5)$$
For a 10 G bias field, this gives trap frequencies ω x = 2π × 7 Hz and ω z = 2π × 4 Hz for Rb atoms in a state with maximum µ.
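As a quick sanity check of Eq. (5), the short Python sketch below plugs in numbers for 87Rb. It assumes the stretched |F = 2, m_F = 2⟩ state mentioned later in the text, for which μ equals one Bohr magneton; the script is only meant to reproduce the magnitudes quoted above, not any detail of the apparatus.

```python
import numpy as np

# Physical constants (SI units)
mu_B = 9.2740100783e-24            # Bohr magneton, J/T
m_Rb = 86.909 * 1.66053906660e-27  # 87Rb mass, kg
g = 9.81                           # m/s^2

# Assumption: stretched |F=2, m_F=2> state, so mu = g_F * m_F * mu_B = mu_B
mu = mu_B
B0 = 10e-4                         # 10 G bias field, in tesla

# Eq. (5): omega_x = sqrt(3 m g^2 / (2 mu B0)), omega_z = omega_x / sqrt(3)
omega_x = np.sqrt(3 * m_Rb * g**2 / (2 * mu * B0))
omega_z = omega_x / np.sqrt(3)

print(f"f_x = {omega_x/(2*np.pi):.1f} Hz, f_z = {omega_z/(2*np.pi):.1f} Hz")
# -> roughly 7.5 Hz and 4.3 Hz, consistent with the quoted 2*pi*(7, 4) Hz
```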
The waveguide is made of machined copper rods held inside a vacuum chamber, as shown in Fig. 1. Each of the four rods is a coaxial pair. As Fig. 2 illustrates, the four outer conductors are connected in one circuit that provides the quadrupole field while the inner conductors form two circuits used to generate the two components of the bias field. The end loops shown on the quadrupole circuit help minimize the axial quadrupole field.
The leads of the circuits do have an appreciable effect on the trap potential. For instance, any residual axial component causes the quadrupole field to become
$$\mathbf{B}_Q = (a\,x\,\hat{x} + c\,y\,\hat{y} - b\,z\,\hat{z})\cos\Omega t \qquad (6)$$
with c = b − a. A more accurate description of the potential requires the inclusion of many other first-and second-order terms in the magnetic fields. Categorizing all such terms is prohibitive, but we have found that in the relevant parameter range, the total average field magnitude is well approximated by
$$|B| = B_0 - \frac{1}{2}b\,z + \left(\frac{3a^2}{16B_0} + \alpha B_0\right)x^2 + \left(\frac{c^2}{4B_0} + \gamma B_0\right)y^2 + \left(\frac{b^2}{16B_0} + \beta B_0\right)z^2 \qquad (7)$$
which yields the trap frequencies
$$\omega_x = \left[\frac{2\mu}{m}\left(\frac{3a^2}{16B_0} + \alpha B_0\right)\right]^{1/2}, \quad \omega_y = \left[\frac{2\mu}{m}\left(\frac{c^2}{4B_0} + \gamma B_0\right)\right]^{1/2}, \quad \omega_z = \left[\frac{2\mu}{m}\left(\frac{b^2}{16B_0} + \beta B_0\right)\right]^{1/2}. \qquad (8)$$
The static spherical quadrupole field remains on to provide tight confinement. We achieve condensation with about 2 × 10^4 atoms at a temperature of 50 nK, using a 3.69 G bias field. From the evaporative cooling, we obtained a more accurate calibration of the bias field as 0.440 G/A times the current amplitude I_0.
Our final atom number is somewhat lower than typical. We believe this is because transferring the atoms from the quadrupole trap to the TOP trap is inefficient, and we are exploring ways to improve this. Once the condensate is made, the waveguide quadrupole field is ramped on and the spherical quadrupole field is ramped off. The centers of the main trap and the waveguide do not exactly coincide, so the fields are ramped over a 7 s period to enable the atoms to move adiabatically to the new local B-field minimum. We do not observe any losses in the transfer. Figure 3 shows snapshots of the cloud as the atoms are being transferred.
We measured the trap frequencies of the waveguide by observing either center-of-mass (for the x and z directions) or breathing-mode (for the y direction) oscillations in the cloud, as described below. The weakest confinement we observed, at B_0 = 20.5 G, had ω_x = 2π × 6.0 Hz, ω_y = 2π × 1.2 Hz and ω_z = 2π × 3.3 Hz. By adiabatically expanding a small condensate into such a weak trap, we were able to obtain very low temperatures. Figure 3(g) shows an image of 1.6 × 10^3 atoms in the trap with this bias field. We estimate the temperature of this cloud to be 850 pK. Although lower temperatures have been observed in Na [16], this is the lowest temperature achieved for Rb atoms of which we are aware.
With the successful demonstration of our trap, we are now preparing to explore condensate interferometry. We plan to conduct experiments similar to those of Wang et al., [9], using a Bragg laser pulse to split and recombine condensate wave packets. The weak confinement of our guide should greatly reduce the limiting effects of interactions. For instance, the phase distortions discussed in [14] should have negligible effect for condensate numbers below about 1.5 × 10 4 , and phase diffusion effects during the wavepacket propagation should not become important for interaction times less than about 1 s [22]. Using the current apparatus, we plan to study these and other limiting effects. With suitable modifications, our waveguide could be used to precisely measure electric polarizability, gravitational forces, rotations, and other phenomena [23]. We are hopeful that the trap design presented here will help condensate interferometry realize this potential.
This work was supported by the US Office of Naval Research, the National Science Foundation, the Research Corporation, and the Alfred P. Sloan Foundation.
We obtained this form by modeling the total field using the Biot-Savart law and the mechanical design of the leads. The model predicts B_0/I_0 = 0.40 G/A, a/I_Q = −0.83 G/(A cm), b/I_Q = −0.86 G/(A cm), α = −0.11 cm^−2, β = −0.061 cm^−2, and γ = 0.019 cm^−2, where I_0 is the bias current amplitude and I_Q is the quadrupole current amplitude. These values were coarsely verified using a gaussmeter, yielding B_0/I_0 ≈ 0.4 G/A and a/I_Q ≈ b/I_Q ≈ 0.8 G/(A cm).

The three trap circuits have similar impedances, presenting a 10 mΩ resistive and 0.3 µH inductive load. The circuits are driven with an actively stabilized commercial audio amplifier, using transformers to match the amplifier's output impedance. The details of this drive circuit will be presented elsewhere. The trap is mounted on several copper blocks that deliver the current and remove heat. The measured thermal coefficient of the trap structure is 2 W/K. The quadrupole field requires a current of 38 A to cancel gravity, and a bias field of 20 G requires I_0 = 50 A in both bias circuits, yielding a total temperature rise of about 16 K.

Our BEC system is based on the scheme described by Lewandowski et al. [20]. We have a single MOT, separated from an ultra-high-vacuum science cell by a tube 30 cm long with diameter 1 cm. Our MOT contains 2 × 10^9 87Rb atoms at roughly 200 µK. We optically pump them into the F = 2, m = 2 ground state for magnetic trapping. We transfer the atoms to a spherical quadrupole trap, obtaining about 1.5 × 10^9 atoms at 900 µK with an axial field gradient of 387 G/cm. The atoms are transported to the science cell by a programmable motor, which moves the electromagnet coils at v = 0.8 m/s. Once in place within the waveguide structure, we evaporatively cool the cloud. The atoms are initially too hot for our TOP trap, so we start evaporating in the quadrupole trap. We evaporate on the spin-state transitions within the F = 2 ground-state manifold. Once the cloud cools below 200 µK, we turn on the waveguide bias field and continue evaporating.
We perturbed the cloud by introducing a sudden change in the confining field and then recorded the subsequent behavior. These tests were done on a noncondensed cloud at temperatures of about 1 µK. From the periods we determined the trap frequencies as a function of the applied currents. We measured the frequencies over a range of bias fields from 3 to 16 G. From this data, we solved for the trap parameters in our model, Eq. (8). Using a multivariable minimization, we found |a|/I_Q = 0.734 G/(A cm), |b|/I_Q = 0.709 G/(A cm), α = 0.17 cm^−2, β = 0.05 cm^−2, and γ = 0.02 cm^−2. The quadratic coefficients are rather different from our model predictions, though the order of magnitude is correct. Using the empirical coefficients, Eq. (8) reproduces the measured frequencies to an accuracy of about 0.1 Hz over the range of bias fields tested.
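As an illustration of how the empirical coefficients feed back into Eq. (8), the sketch below evaluates the three frequencies at B_0 = 20.5 G. It assumes the quadrupole current equals the 38 A value quoted above for cancelling gravity and that μ = μ_B for the stretched state; both are our own assumptions for this estimate, not statements from the text.

```python
import numpy as np

mu_B = 9.2740100783e-24             # J/T
m_Rb = 86.909 * 1.66053906660e-27   # kg
mu = mu_B                           # stretched-state assumption

# Empirical coefficients quoted in the text
I_Q = 38.0                          # A, assumed operating quadrupole current
a = 0.734 * I_Q                     # G/cm
b = 0.709 * I_Q                     # G/cm
c = abs(b - a)                      # G/cm, from c = b - a
alpha, beta, gamma = 0.17, 0.05, 0.02   # cm^-2
B0 = 20.5                           # G

def freq_hz(curv_G_per_cm2):
    """Trap frequency from a field curvature via omega^2 = (2 mu/m) * curvature.
    Note 1 G/cm^2 = 1e-4 T / (1e-2 m)^2 = 1 T/m^2, so no numerical conversion is needed."""
    return np.sqrt(2 * mu / m_Rb * curv_G_per_cm2) / (2 * np.pi)

f_x = freq_hz(3 * a**2 / (16 * B0) + alpha * B0)
f_y = freq_hz(c**2 / (4 * B0) + gamma * B0)
f_z = freq_hz(b**2 / (16 * B0) + beta * B0)
print(f"f_x = {f_x:.1f} Hz, f_y = {f_y:.1f} Hz, f_z = {f_z:.1f} Hz")
# -> approximately (5.9, 1.2, 3.3) Hz, close to the measured 2*pi*(6.0, 1.2, 3.3) Hz
```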
FIG. 1: Scale drawing of the trap structure. The main fields are generated by the four horizontal rods, each of which is a coaxial pair. A pair consists of an outer conductor that is a 5-mm-diameter, 1-mm-wall oxygen-free high-conductivity copper tube, an alumina insulator, and an inner conductor that is a 1.6-mm-diameter copper wire. The rods are held by two boron nitride blocks, which also support the leads and circuit connections. The right block has been depicted as transparent in order to display the arrangement of the conductors. The rod centers form a square 15 mm on a side and the blocks are spaced 5 cm apart. The function of each of the conductors is described in Fig. 2.

FIG. 2: Current flow through the waveguide, indicated by thickened lines with directional arrows. The four rods in Fig. 1 are depicted here as the edges of a rectangular box. Circuits (a) and (b) refer to the current through the inner conductors, which provides the oscillating bias field. Circuit (c) is composed of the outer conductors, which supply the confinement quadrupole field. The end loops on circuit (c) help to minimize the axial quadrupole field.

FIG. 3: Loading a Bose-Einstein condensate into the waveguide. The sequence of pictures shows the trapped condensate as the static quadrupole field is gradually turned off: (a) 29 G/cm, (b) 19 G/cm, (c) 9.7 G/cm, (d) 3.9 G/cm, (e) 1.9 G/cm, (f) 0 G/cm. During the loading, the cloud moves due to the centers of the external quadrupole and the waveguide not being aligned. The bias field here is 3.69 G, and the final trap frequencies are ω_x = 2π × 11 Hz, ω_z = 2π × 6.2 Hz, and ω_y = 2π × 0.6 Hz. Panel (g) shows an atomic cloud after increasing the bias field to 20.5 G, with trap frequencies ω_x = 2π × 6.0 Hz, ω_y = 2π × 1.2 Hz and ω_z = 2π × 3.3 Hz. We estimate the temperature of this cloud to be 850 pK.
[1] K. Bongs and K. Sengstock, Rep. Prog. Phys. 67, 907 (2004).
[2] T. L. Gustavson, A. Landragin, and M. A. Kasevich, Class. Quantum Grav. 17, 2385 (2000).
[3] M. R. Andrews, C. G. Townsend, H. J. Miesner, D. S. Durfee, D. M. Kurn, and W. Ketterle, Science 275, 637 (1997).
[4] Y. Shin, M. Saba, T. A. Pasquini, W. Ketterle, D. E. Pritchard, and A. E. Leanhardt, Phys. Rev. Lett. 92, 050405 (2004).
[5] E. W. Hagley, L. Deng, M. Kozuma, M. Trippenbach, Y. B. Band, M. Edwards, M. Doery, P. S. Julienne, K. Helmerson, S. L. Rolston, et al., Phys. Rev. Lett. 83, 3112 (1999).
[6] Y. Torii, Y. Suzuki, M. Kozuma, T. Sugiura, T. Kuga, L. Deng, and E. W. Hagley, Phys. Rev. A 61, 041602(R) (2000).
[7] K. Bongs, S. Burger, S. Dettmer, D. Hellweg, J. Arlt, W. Ertmer, and K. Sengstock, Phys. Rev. A 63, 031602(R) (2001).
[8] S. Gupta, K. Dieckmann, Z. Hadzibabic, and D. E. Pritchard, Phys. Rev. Lett. 89, 140401 (2002).
[9] Y. J. Wang, D. Z. Anderson, V. M. Bright, E. A. Cornell, Q. Diot, T. Kishimoto, M. Prentiss, R. A. Saravanan, S. R. Segal, and S. Wu, Phys. Rev. Lett. 94, 090405 (2005).
[10] M. Saba, T. A. Pasquini, C. Sanner, Y. Shin, W. Ketterle, and D. E. Pritchard, Science 307, 1945 (2005).
[11] R. Wynands and S. Weyers, Metrologia 42, S64 (2005).
[12] T. Weber, J. Herbig, M. Mark, H.-C. Nägerl, and R. Grimm, Science 299, 232 (2003).
[13] Y. Shin, C. Sanner, G.-B. Jo, T. A. Pasquini, M. Saba, W. Ketterle, D. E. Pritchard, M. Vengalattore, and M. Prentiss, Phys. Rev. A 72, 021604(R) (2005).
[14] M. Olshanii and V. Dunjko, eprint cond-mat/0505358.
[15] S. Wu, E. J. Su, and M. Prentiss, Eur. Phys. J. D 35, 111 (2005).
[16] A. E. Leanhardt, T. A. Pasquini, M. Saba, A. Schirotzek, Y. Shin, D. Kielpinski, D. E. Pritchard, and W. Ketterle, Science 301, 1513 (2003).
[17] W. Petrich, M. H. Anderson, J. R. Ensher, and E. A. Cornell, Phys. Rev. Lett. 74, 3352 (1995).
[18] A. S. Arnold and E. Riis, J. Mod. Optics 49, 5861 (1999).
[19] S. Gupta, K. W. Murch, K. L. Moore, T. P. Purdy, and D. M. Stamper-Kurn, eprint cond-mat/0504749.
[20] H. J. Lewandowski, D. M. Harber, D. L. Whitaker, and E. A. Cornell, J. Low Temp. Phys. 132, 309 (2003).
[21] A. E. Leanhardt, Y. Shin, A. P. Chikkatur, D. Kielpinski, W. Ketterle, and D. E. Pritchard, Phys. Rev. Lett. 90, 100404 (2003).
[22] J. Javanainen and M. Wilkens, Phys. Rev. Lett. 78, 4675 (1997).
[23] P. R. Berman, ed., Cavity Quantum Electrodynamics (Academic Press, San Diego, 1994).
| []
|
[
"Collective dynamics and the Anderson-Higgs mechanism in a bona fide holographic superconductor",
"Collective dynamics and the Anderson-Higgs mechanism in a bona fide holographic superconductor"
]
| [
"Hyun-Sik Jeong [email protected] \nInstituto de Física Teórica UAM/CSIC\nCalle Nicolás Cabrera 13-1528049MadridSpain\n\nDepartamento de Física Teórica\nUniversidad Autónoma de Madrid\nCampus de Cantoblanco28049MadridSpain\n\nSchool of physics & CAS Center for Excellence in Topological Quantum Computation\nUniversity of Chinese Academy of Sciences\nZhongguancun east road 80100049BeijingChina\n\nKavli Institute for Theoretical Sciences\nUniversity of Chinese Academy of Sciences\nZhongguancun east road 80100049BeijingChina\n",
"Matteo Baggioli \nWilczek Quantum Center\nSchool of Physics and Astronomy\nShanghai Jiao Tong University\n200240ShanghaiChina\n\nShanghai Research Center for Quantum Sciences\n201315Shanghai\n",
"Keun-Young Kim \nDepartment of Physics and Photon Science\nGwangju Institute of Science and Technology\n123 Cheomdan-gwagiro61005GwangjuKorea\n\nResearch Center for Photon Science Technology\nGwangju Institute of Science and Technology\n123 Cheomdan-gwagiro61005GwangjuKorea\n",
"Ya-Wen Sun [email protected] \nSchool of physics & CAS Center for Excellence in Topological Quantum Computation\nUniversity of Chinese Academy of Sciences\nZhongguancun east road 80100049BeijingChina\n\nKavli Institute for Theoretical Sciences\nUniversity of Chinese Academy of Sciences\nZhongguancun east road 80100049BeijingChina\n"
]
| [
"Instituto de Física Teórica UAM/CSIC\nCalle Nicolás Cabrera 13-1528049MadridSpain",
"Departamento de Física Teórica\nUniversidad Autónoma de Madrid\nCampus de Cantoblanco28049MadridSpain",
"School of physics & CAS Center for Excellence in Topological Quantum Computation\nUniversity of Chinese Academy of Sciences\nZhongguancun east road 80100049BeijingChina",
"Kavli Institute for Theoretical Sciences\nUniversity of Chinese Academy of Sciences\nZhongguancun east road 80100049BeijingChina",
"Wilczek Quantum Center\nSchool of Physics and Astronomy\nShanghai Jiao Tong University\n200240ShanghaiChina",
"Shanghai Research Center for Quantum Sciences\n201315Shanghai",
"Department of Physics and Photon Science\nGwangju Institute of Science and Technology\n123 Cheomdan-gwagiro61005GwangjuKorea",
"Research Center for Photon Science Technology\nGwangju Institute of Science and Technology\n123 Cheomdan-gwagiro61005GwangjuKorea",
"School of physics & CAS Center for Excellence in Topological Quantum Computation\nUniversity of Chinese Academy of Sciences\nZhongguancun east road 80100049BeijingChina",
"Kavli Institute for Theoretical Sciences\nUniversity of Chinese Academy of Sciences\nZhongguancun east road 80100049BeijingChina"
]
| []
| The holographic superconductor is one of the most popular models in the context of applied holography. Despite what its name suggests, it does not describe a superconductor. On the contrary, the low temperature phase of its dual field theory is a superfluid with a spontaneously broken U(1) global symmetry. As already observed in the previous literature, a bona fide holographic superconductor can be constructed using mixed boundary conditions for the bulk gauge field. By exploiting this prescription, we study the near-equilibrium collective dynamics in the Higgs phase and reveal the characteristic features of the Anderson-Higgs mechanism. We show that second sound disappears from the spectrum and the gauge field acquires a finite energy gap of the order of the plasma frequency. We observe an overdamped to underdamped crossover for the Higgs mode which acquires a finite energy gap below ≈ T c /2, with T c the superconducting critical temperature. Interestingly, the energy gap of the Higgs mode at low temperature is significantly smaller than 2∆, with ∆ the superconducting energy gap. Finally, we interpret our results using Ginzburg-Landau theory and we confirm the validity of previously derived perturbative analytic expressions. | 10.1007/jhep03(2023)206 | [
"https://export.arxiv.org/pdf/2302.02364v3.pdf"
]
| 256,615,189 | 2302.02364 | 81a6877d0ce348bb9fb0660e73d78a8611f7f432 |
Collective dynamics and the Anderson-Higgs mechanism in a bona fide holographic superconductor
22 Mar 2023
Hyun-Sik Jeong [email protected]
Instituto de Física Teórica UAM/CSIC
Calle Nicolás Cabrera 13-15, 28049 Madrid, Spain
Departamento de Física Teórica
Universidad Autónoma de Madrid
Campus de Cantoblanco, 28049 Madrid, Spain
School of physics & CAS Center for Excellence in Topological Quantum Computation
University of Chinese Academy of Sciences
Zhongguancun east road 80, 100049 Beijing, China
Kavli Institute for Theoretical Sciences
University of Chinese Academy of Sciences
Zhongguancun east road 80, 100049 Beijing, China
Matteo Baggioli
Wilczek Quantum Center
School of Physics and Astronomy
Shanghai Jiao Tong University
200240 Shanghai, China
Shanghai Research Center for Quantum Sciences
201315 Shanghai
Keun-Young Kim
Department of Physics and Photon Science
Gwangju Institute of Science and Technology
123 Cheomdan-gwagiro, 61005 Gwangju, Korea
Research Center for Photon Science Technology
Gwangju Institute of Science and Technology
123 Cheomdan-gwagiro, 61005 Gwangju, Korea
Ya-Wen Sun [email protected]
School of physics & CAS Center for Excellence in Topological Quantum Computation
University of Chinese Academy of Sciences
Zhongguancun east road 80100049BeijingChina
Kavli Institute for Theoretical Sciences
University of Chinese Academy of Sciences
Zhongguancun east road 80100049BeijingChina
Collective dynamics and the Anderson-Higgs mechanism in a bona fide holographic superconductor
22 Mar 2023. Prepared for submission to JHEP.
The holographic superconductor is one of the most popular models in the context of applied holography. Despite what its name suggests, it does not describe a superconductor. On the contrary, the low temperature phase of its dual field theory is a superfluid with a spontaneously broken U(1) global symmetry. As already observed in the previous literature, a bona fide holographic superconductor can be constructed using mixed boundary conditions for the bulk gauge field. By exploiting this prescription, we study the near-equilibrium collective dynamics in the Higgs phase and reveal the characteristic features of the Anderson-Higgs mechanism. We show that second sound disappears from the spectrum and the gauge field acquires a finite energy gap of the order of the plasma frequency. We observe an overdamped to underdamped crossover for the Higgs mode which acquires a finite energy gap below ≈ T c /2, with T c the superconducting critical temperature. Interestingly, the energy gap of the Higgs mode at low temperature is significantly smaller than 2∆, with ∆ the superconducting energy gap. Finally, we interpret our results using Ginzburg-Landau theory and we confirm the validity of previously derived perturbative analytic expressions.
Introduction
In the last decade, the holographic correspondence, or gauge-gravity duality, has become an invaluable complementary tool to investigate the many-body dynamics of strongly correlated materials and strongly coupled condensed matter systems [1][2][3][4], with a particular emphasis on the problem of strange metals and high-T c superconductors [5][6][7][8].
The so-called holographic superconductor, or HHH model, introduced by Hartnoll, Herzog and Horowitz [9,10], is one of the most popular models in the context of holography applied to condensed matter and it has received an enormous amount of attention in the last years (see [11,12] for reviews on the topic). Nevertheless, it presents a "small" problem: it does not describe a superconductor. On the contrary, since the U(1) symmetry of the dual field theory is global rather than local, it describes a superfluid. One could argue that for some questions, e.g. the electric conductivity, the difference between the two is not important and hence one could still consider the holographic superfluid model as a weakly gauged holographic superconductor. Unfortunately, for many other features (e.g., the nature and dynamics of vortices, the collective low energy modes, etc.), a superfluid is profoundly different from a superconductor.
In order to investigate these different aspects, it is imperative to construct a bona fide holographic superconductor model. As a matter of fact, that has already been considered by many authors in the past [13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29]. The "trick" to transform a holographic superfluid into a holographic superconductor consists in modifying the boundary conditions for the bulk gauge field from Dirichlet to mixed boundary conditions, as introduced in the seminal works by Witten [30,31] (see also [32][33][34][35]), and described in detail by Marolf and Ross [36]. 1 This procedure, which is equivalent to a Legendre transform of the dual field theory generating functional together with the introduction of a boundary Maxwell kinetic term, allows to gauge the boundary U(1) symmetry and bring in dynamical electromagnetism in the dual description.
The same type of boundary conditions has proven to be important in several other holographic applications, including the study of plasmons [38-48], Friedel oscillations [49], anyons [50-52] and magnetohydrodynamics [53]. 2 An analogous procedure can also be used to make the boundary metric dynamical and obtain semiclassical Einstein equations in the boundary dynamics [56-58].
One fundamental difference between superfluids and superconductors is the spectrum of collective low-energy excitations. Superfluids are characterized by the appearance of an additional sound mode [59], known as second sound 3 . This new excitation is a direct manifestation of the emergent Goldstone mode of the spontaneously broken U(1) global symmetry. The latter coincides with the fluctuations of the phase of the order parameter which cost no energy. On the contrary, the fluctuations of the amplitude of the order parameter, collectively labelled as the Higgs mode, are not hydrodynamic 4 , and they are overdamped close to the critical temperature. At low temperature, the Higgs mode is expected to develop a real energy gap which is proportional to the superconducting gap ∆. This whole dynamics can be directly derived using a phenomenological time-dependent Ginzburg Landau (GL) description [60,61]. At the same time, the late time and long distance dynamics of a superfluid in the broken phase can be consistently described using relativistic superfluid hydrodynamics [62][63][64][65][66][67][68], as a formal extension of the two-fluid Tisza-Landau model [69,70].
In a superconductor, the major difference with what just described is due to the famous Anderson-Higgs mechanism [71][72][73][74]. The massless Nambu-Goldstone mode, which in superfluids corresponds to the fluctuations of the phase of the order parameter, gets "eaten" by the dynamical gauge field and the corresponding photon becomes massive (see cartoon in Fig. 1). The mass of the photon is expected to be order of the plasma frequency ω p and it is a direct effect of the presence of dynamical electromagnetism. In other words, apart from the presence of first sound 5 , in a superconductor, and differently from a superfluid, we do not expect any other gapless excitation. The low-energy spectrum of holographic superfluids has been investigate numerically by computing the quasinormal modes at finite frequency and wave-vector. In the probe limit, this task has been originally achieved in [75][76][77]. 6 More completely, in [80], a fully backreacted analysis has been done and matched 1-to-1 with the expectations from relativistic superfluid hydrodynamics. Perturbative computations near the critical point were originally performed in [81]. More recently, using a more advanced method based on the concept of symplectic current, extended analytical results have been presented [82,83]. Those studies investigated the dynamics of the overdamped order parameter fluctuations [84] (see also [85,86] for earlier studies) and provided a concrete comparison near T c between the holographic superfluid model, time-dependent Ginzburg Landau theory and model F in Hoenberg-Halperin classification [87]. To the best of our knowledge, an underdamped Higgs mode with a real energy gap, obeying the standard effective theory expectations [88,89], has never been observed in holographic superfluids. On the contrary, in [90], the authors observed the emergence of a pair of underdamped complex valued modes at low temperature arising from microscopic degrees of freedom and not related to the dynamics of the order parameter. 7 At the same time, we are not aware of any computation of the low energy collective modes in a bona fide holographic superconductor model. In [18], the authors made an attempt in this direction by considering purely alternative (i.e., Neumann) boundary conditions for the bulk gauge field. As explained in [53], and re-iterated below, those boundary conditions simply perform a Legendre transform of the boundary action but do not introduce any kinetic term for the boundary gauge field. In other words, those boundary conditions correspond to the limit of infinite boundary gauge coupling and miss most of the relevant physics.
The scope of this work is to fill this gap and study in detail the collective dynamics of a bona fide holographic superconductor model at finite frequency and wave-vector. The manuscript is organized as follows. In section 2, following the work of [91], we present a phenomenological time dependent Ginzburg Landau description of the collective dynamics; in section 3, we present the holographic setup and all the details related to it; in section 4, we describe the main thermodynamic and transport properties in the Higgs phase; in section 5, we present the results for the transverse modes; in section 6, we present the results for the longitudinal modes and evidence for the Anderson-Higgs mechanism; finally, in section 7, we conclude with some final remarks and observations for the future.
Ginzburg-Landau phenomenological approach: a review
In this section, we present a brief review of the phenomenological Ginzburg-Landau theory [92] in its different incarnations. Our task is not to construct a complete Ginzburg-Landau description for strongly coupled superconductors nor to exactly match the results from holography to the effective description. On the contrary, we will use the results presented here as a guidance for the interpretation and discussion of the holographic results. For simplicity, we will follow closely the presentation of Ref. [91] (see also [63,93]). In order to avoid clutter, the speed of light and the Planck constant will be set to unity in the rest of the manuscript.
Ginzburg-Landau theory
Let us start from the best-known form of Ginzburg-Landau theory, which is a valid description of the superfluid transition close to the critical point. The starting point is the free energy F[Ψ], which is expressed as a functional of a complex order parameter field Ψ,
$$F[\Psi] = F_n(T) + \int d^3r\, \mathcal{F}[\Psi] = F_n(T) + \int d^3r \left[ a\,|\nabla\Psi|^2 + b\,|\Psi|^2 + \frac{c}{2}|\Psi|^4 \right], \qquad (2.1)$$
where a, b, c are phenomenological parameters. For vanishing order parameter, Ψ = 0, the free energy F coincides with the normal phase free energy, F n (T ). The complex scalar field can be conveniently parameterized as Ψ = |Ψ(r)| e iθ(r) , where |Ψ(r)| is its modulus and θ(r) its phase. In order to implement the spontaneous symmetry breaking of the global U(1) symmetry and the transition to a superfluid phase at small temperature, one phenomenologically assumes that b = β (T − T c ). In this way, for T < T c , the quadratic term in the free energy density becomes negative and the minima of the latter are shifted to a finite value of Ψ. This is the familiar dynamics of the Mexican-hat potential (see Fig.1).
Minimizing the functional in Eq.(2.1), we obtain the classical equation of motion
$$a\,\nabla^2\Psi - b\,\Psi - c\,|\Psi|^2\Psi = 0\,, \qquad (2.2)$$
which, for homogeneous solutions, gives rise to an equilibrium value Ψ 0 :
$$|\Psi| = \sqrt{\frac{|b|}{c}} =: \Psi_0 \sim \sqrt{T_c - T}\,. \qquad (2.3)$$
At the critical temperature, the susceptibility χ diverges, χ −1 ∝ b, and the heat capacity displays a jump [93]. By construction, the order parameter obeys the mean-field scaling behavior with critical exponent 1/2. For later use, we also define the superfluid density n s and the normal density n n as
$$n_s := 2|\Psi|^2\,, \qquad n_n = n - n_s = n - 2\Psi_0^2\,. \qquad (2.4)$$
Here, n indicates the total density and the factor of 2 comes from the comparison with the microscopic theory in which the condensate is formed by a pair of electrons [94].
In order to promote this picture out of equilibrium, different routes can be followed. At first, we will ignore dissipative terms, and just insist on a field theory approach based on a Lagrangian formalism. Later, we will discuss in detail the shortcomings of this picture. The idea is to promote the Ginzburg-Landau functional to an action S defined in Minkowski space with coordinates {vt, r}:
$$S = \int dt\, d^3r\, \mathcal{L}\,, \qquad (2.5)$$
where v is an emergent lightcone velocity which does not depend a priori on temperature.
A simple way to build the Lagrangian L in Eq.(2.5) is to recast the free energy density F in Eq.(2.1) in a relativistic-invariant form using the following substitution
$$\nabla\Psi \to \partial_\mu \Psi\,, \qquad \partial_\mu := \left(\frac{\partial_t}{v},\, \nabla\right). \qquad (2.6)$$
The corresponding Lagrangian can be then written down as
$$\mathcal{L} = a\,(\partial_\mu\Psi)(\partial^\mu\Psi^*) - b\,|\Psi|^2 - \frac{c}{2}|\Psi|^4 = \frac{a}{v^2}\frac{\partial\Psi}{\partial t}\frac{\partial\Psi^*}{\partial t} - \mathcal{F}\,. \qquad (2.7)$$
For stationary solutions, i.e., equilibrium configurations, the dynamics obtained from the action principle in Eq.(2.7) reduces to the standard Ginzburg-Landau theory in Eq.(2.1). Decomposing the complex scalar order parameter into its modulus and phase, the Lagrangian in Eq.(2.7) can be further expressed
$$\mathcal{L} = a\,\partial_\mu|\Psi|\,\partial^\mu|\Psi| + a\,|\Psi|^2\,\partial_\mu\theta\,\partial^\mu\theta - b\,|\Psi|^2 - \frac{c}{2}|\Psi|^4\,. \qquad (2.8)$$
In order to study the dynamics out of equilibrium, let us consider a small deviation of the modulus from its equilibrium value
$$|\Psi| = \Psi_0 + \phi\,, \qquad (\phi \ll \Psi_0)\,, \qquad (2.9)$$
where Ψ 0 is given in Eq.(2.3) and it is real valued. Then, the Lagrangian in Eq.(2.8) reduces to
$$\mathcal{L} = a\,\partial_\mu\phi\,\partial^\mu\phi - 2|b|\,\phi^2 + a\,\Psi_0^2\,\partial_\mu\theta\,\partial^\mu\theta + \frac{b^2}{2c}\,, \qquad (2.10)$$
which consequently yields the two dynamical equations
$$a\left(\frac{1}{v^2}\frac{\partial^2\phi}{\partial t^2} - \nabla^2\phi\right) + 2|b|\,\phi = 0\,, \qquad \frac{1}{v^2}\frac{\partial^2\theta}{\partial t^2} - \nabla^2\theta = 0\,. \qquad (2.11)$$
The former is the equation for the modulus of the complex order parameter, often indicated as the Higgs/amplitude mode, while the latter is that for the phase, which is identified with the Goldstone mode. By going to Fourier space, and solving the above equations, we obtain two different low-energy excitations which are described by
$$\text{Higgs mode:}\quad \omega^2 = \frac{2|b|\,v^2}{a} + v^2 k^2\,, \qquad \text{Goldstone mode:}\quad \omega^2 = v^2 k^2\,. \qquad (2.12)$$
As expected, the Goldstone mode shows a gapless dispersion relation with velocity v. On the contrary, the Higgs mode presents an energy gap
$$\omega_H := \sqrt{\frac{2|b|\,v^2}{a}}\,, \qquad (2.13)$$
which vanishes at the critical temperature as $\sim \sqrt{T_c - T}$. In addition, the Higgs mass in Eq.(2.12) can be obtained using the relativistic formula for the energy, $\omega_H^2 = m_H^2 v^4 + P^2 v^2$, as
$$m_H := \frac{\omega_H}{v^2} = \sqrt{\frac{2|b|}{a\,v^2}} \sim \sqrt{T_c - T}\,. \qquad (2.14)$$
In what follows, we use the word "mass" interchangeably with the term "energy gap". Before continuing, let us emphasize the (many) shortcomings of this first simple approach.
(I) All dissipative effects are neglected. The latter would have several effects on the dispersion relations of the modes discussed. First, they would introduce attenuation in the dispersion of the Goldstone mode. Second, they would make the Higgs mode overdamped close to the critical temperature. (II) The dynamics considered so far is restricted to the order parameter Ψ and completely ignores its coupling to other conserved quantities such as charge density, momentum and energy. Moreover, we ignored the coupling to possible external gauge fields, parameterizing for example an external chemical potential or superfluid velocity. (III) The Lagrangian construction is completely phenomenological and poorly motivated. In particular, it is not able to reproduce the well-known fact that the speed of propagation of the Goldstone mode vanishes at the critical temperature T = T_c. This is simply because the velocity v is introduced by hand through the emergent light-cone structure and is not related to the superfluid density as it should be. Within the standard GL picture, in order to obtain propagating modes, such as second or fourth sound, one needs to include reactive couplings to other conserved quantities such as the charge density (see for example [93]). In the remainder of this section, we describe some of the more advanced alternatives to this method and discuss the possibility of a complete description of the dissipative dynamics.
Anderson-Higgs mechanism
So far, we have considered a system with a global U(1) symmetry and in particular the transition between a normal fluid to a superfluid state. Now, we want to promote the Ginzburg-Landau description to the case of superconductors where the U(1) symmetry is gauged. In order to do that, we perform the following transformation
$$\partial_\mu\Psi \to D_\mu\Psi := (\partial_\mu + i\tilde{q}A_\mu)\,\Psi\,, \qquad (2.15)$$
where we have defined for convenience $\tilde{q} := q/v$. Moreover, we add to the Lagrangian a coupling to an external current $J^\mu_{\rm ext}$ and a kinetic term for the dynamical gauge field:
$$\mathcal{L} \to \mathcal{L} - A_\mu J^\mu_{\rm ext} - \frac{1}{4\lambda}F^2\,. \qquad (2.16)$$
Here, λ parameterizes the strength of the gauge coupling or, in other words, the strength of the electromagnetic interactions. By setting the external sources to zero, $J^\mu_{\rm ext} = 0$, we obtain:
$$\mathcal{L} = a\left[(\partial_\mu + i\tilde{q}A_\mu)\Psi\right]\left[(\partial^\mu - i\tilde{q}A^\mu)\Psi^*\right] - b\,|\Psi|^2 - \frac{c}{2}|\Psi|^4 - \frac{1}{4\lambda}F^2\,, \qquad (2.17)$$
which, following the same steps as before, can be expressed near equilibrium
$$\mathcal{L} = a\,\partial_\mu\phi\,\partial^\mu\phi - 2|b|\,\phi^2 + \frac{b^2}{2c} + a\,\tilde{q}^2\,\Psi_0^2\,A_\mu A^\mu - \frac{1}{4\lambda}F^2\,, \qquad (2.18)$$
where we neglected mixed terms $\sim \Psi_0\,\phi\,A_\mu A^\mu$ (see [91] for details regarding this approximation). In addition, the phase degree of freedom θ has disappeared from the Lagrangian, as it can be simply reabsorbed into a gauge transformation. Comparing the Lagrangian in Eq.(2.18) with that in Eq.(2.10), one can notice that the phase θ (the Goldstone mode) is absorbed into the gauge field $A_\mu$, which has now acquired a finite mass $\propto \Psi_0^2$. This is the famous Anderson-Higgs mechanism [71-74, 95].
Using Eq.(2.18), the equations of motion for the gauge field can be derived as
$$\partial_\mu F^{\mu\nu} + \frac{1}{\lambda_{\rm GL}^2}\,A^\nu = 0\,, \qquad \lambda_{\rm GL}^2 := \frac{1}{2a\,\tilde{q}^2\,\Psi_0^2\,\lambda}\,, \qquad (2.19)$$
i.e., the famous London equation, where λ GL is the London penetration length.
In Fourier space, the dispersion relation for the photon becomes
$$\omega^2 = \omega_A^2 + v^2 k^2\,, \qquad \omega_A := \frac{v}{\lambda_{\rm GL}} = q\,\Psi_0\sqrt{2a\lambda}\,. \qquad (2.20)$$
The gauge field mass in the relativistic form is then given by
$$m_A := \frac{\omega_A}{v^2} = \frac{1}{v\,\lambda_{\rm GL}} \sim \sqrt{T_c - T}\,, \qquad (2.21)$$
where we used the expression for λ GL in (2.19) and Eq.(2.3).
As for the Higgs mode (2.14), the mass of the gauge field (2.21) vanishes at the critical temperature following the mean-field behavior $(T_c - T)^{1/2}$, but with a different multiplicative prefactor. Taking the ratio between the two masses (or energy gaps), we get:
$$\frac{m_H}{m_A} = \frac{\omega_H}{\omega_A} = \sqrt{2}\,\frac{\lambda_{\rm GL}}{\xi_{\rm GL}} =: \sqrt{2}\,\kappa_{\rm GL}\,, \qquad (2.22)$$
where we have defined the GL parameter κ GL , and the correlation length ξ GL
$$\xi_{\rm GL} := \sqrt{\frac{a}{|b|}}\,. \qquad (2.23)$$
This shows that, depending on the type of superconductor, one mass could be larger or smaller than the other. Indeed, for type-I superconductors one has $\kappa_{\rm GL} < 1/\sqrt{2}$ while, for type-II superconductors, $\kappa_{\rm GL} > 1/\sqrt{2}$. In our general scenario, in which the EM coupling is taken to be arbitrary, this distinction depends on the value of λ, since $\kappa_{\rm GL} \sim 1/\sqrt{\lambda}$. By extrapolating these expressions to zero temperature, one obtains an interesting result regarding the mass of the photon field in the zero temperature limit. Let us stress that this extrapolation is a priori not trustworthy, since the GL framework is reliable only close to the critical point, around which the value of the order parameter Ψ is small and the free energy can therefore legitimately be expanded in powers of it. On the contrary, at low temperature the order parameter grows and the GL treatment is not well grounded. Nevertheless, let us abuse this approximation and see what we get. In the limit of zero wave-vector, k = 0, and zero temperature, the dispersion relation in Eq. (2.20) becomes
$$\omega_A(T=0) = \frac{v}{\lambda_{\rm GL}(T=0)} = \sqrt{2a\,q^2\,\Psi_0^2(T=0)\,\lambda} = \sqrt{a\,q^2\,n\,\lambda}\,, \qquad (2.24)$$
where we have used the expression for the London penetration length λ GL in Eq. (2.19). In addition, in the last equality, Ψ 0 is replaced by the total density n using (2.4):
$$n_n = n - 2\Psi_0^2 \;\xrightarrow{\;T=0\;}\; n = 2\Psi_0^2\,. \qquad (2.25)$$
Here, we have assumed that the normal component $n_n$ vanishes at T = 0. Under this assumption, one can see that
$$\omega_A(T=0) = \omega_p\,, \qquad \omega_p := \sqrt{a\,q^2\,n\,\lambda}\,, \qquad (2.26)$$
where $\omega_p$ is the plasma frequency. 8 Following this argument, we find that the "mass" of the photon in the zero temperature limit is determined by the value of the plasma frequency. This can also be thought of as a consequence of the Anderson-Higgs mechanism. In other words, we do expect the sound mode to be pushed by Coulomb interactions up to the plasma frequency value. Since, via the Anderson-Higgs mechanism, the Goldstone mode is absorbed into the gauge field, the "mass" of the gauge field at small temperature is pushed to the plasma frequency value as well.
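The hierarchy between the scales introduced so far, Eqs. (2.13), (2.20), (2.22)-(2.23) and (2.26), can be made concrete with a small numerical sketch. All parameter values below are made up purely for illustration (they are not fitted to any holographic model); the code simply encodes the formulas above and checks that $\omega_A(T\to 0)$ reproduces $\omega_p$ and that $m_H/m_A = \sqrt{2}\,\kappa_{\rm GL}$ at any temperature.

```python
import numpy as np

# Illustrative evaluation of the GL scales; all parameters are made up.
# Units: hbar = c = 1, temperatures in units of Tc.
a, beta, cq = 1.0, 1.0, 1.0       # gradient, quadratic-slope and quartic GL coefficients
q, lam, v = 1.0, 0.3, 0.5         # charge, EM coupling lambda, lightcone velocity
Tc = 1.0
n = 2.0 * (beta * Tc / cq)        # total density fixed by n = 2*Psi0^2 at T = 0, Eq. (2.25)

def gl_scales(T):
    b = beta * (T - Tc)
    psi0 = np.sqrt(abs(b) / cq)                    # Eq. (2.3)
    omega_H = np.sqrt(2.0 * abs(b) * v**2 / a)     # Eq. (2.13)
    omega_A = q * psi0 * np.sqrt(2.0 * a * lam)    # Eq. (2.20)
    xi_GL = np.sqrt(a / abs(b))                    # Eq. (2.23)
    lam_GL = v / omega_A
    kappa = lam_GL / xi_GL                         # kappa_GL of Eq. (2.22)
    return omega_H, omega_A, kappa

for T in (0.9, 0.5, 0.0):
    wH, wA, kappa = gl_scales(T)
    print(f"T/Tc = {T:.1f}: omega_H = {wH:.3f}, omega_A = {wA:.3f}, "
          f"omega_H/omega_A = {wH/wA:.3f}, sqrt(2)*kappa_GL = {np.sqrt(2)*kappa:.3f}")

omega_p = np.sqrt(a * q**2 * n * lam)              # Eq. (2.26)
print(f"omega_A(T=0) = {gl_scales(0.0)[1]:.3f}  vs  omega_p = {omega_p:.3f}")
```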
Dissipative effects
So far, we have completely ignored any dissipative effects coming from the conductivity, the viscosity, etc. For simplicity, we will first follow the treatment in [91] and then discuss possible improvements.
The first effect of dissipation comes from the fact that the material is a conductor, with a finite conductivity σ. Because of this reason, the electric permittivity cannot be assumed to be a constant. On the contrary, in the simplest scheme of approximations, it becomes a complex and frequency dependent quantity given by
$$\epsilon_0 \to \epsilon(\omega) = \epsilon_0\left(1 + i\,\lambda\,\frac{\sigma(\omega)}{\omega}\right). \qquad (2.29)$$
This substitution arises naturally in the standard treatment of electromagnetism in conductors [96], and it can be easily derived from the Maxwell equation:
$$\nabla\cdot\mathbf{D} = \rho\,, \qquad \mathbf{D} = \epsilon\,\mathbf{E}\,, \qquad (2.30)$$
where D is the displacement vector and E the electric field. By using Ohm's law, J = σE, together with the continuity equation ∂ t ρ + ∇ · J = 0, one obtains that:
$$\rho = \frac{\mathbf{k}\cdot\mathbf{J}}{\omega} = \sigma\,\frac{\mathbf{k}\cdot\mathbf{E}}{\omega}\,, \qquad (2.31)$$
which, plugged into the Gauss law for the displacement vector, gives rise to the substitution in Eq. (2.29). Under this simple replacement, the dispersion relation of the massive gauge 8 In the non-relativistic limit, one has:
a = 2 4me , q = 2e ,(2.27)
and then recovers the familiar expression for the plasma frequency [91]:
ω 2 p = λ e 2 nω 2 = ω 2 A + v 2 k 2 − iv 2 λ σ ω , ω A := v λ GL = qΨ 0 √ 2aλ (2.32)
where we have assumed the conductivity σ(ω) to be a constant. Frequency dependent terms in the conductivity are obviously present but will not affect the dispersion relation at leading order in ω.
Notice that for ω A = 0 this equation describes the propagation of electromagnetic waves in a conductor and implies the well-known skin-effect arising from the imaginary term ∝ λ. Usually, this equation is solved by assuming a complex-valued wave-vector and a real-valued frequency. Here, we take the opposite approach and consider the wave-vector real and the frequency complex. At zero wave-vector, the solutions of Eq.(2.32) are given by
$$\omega = -i\,\frac{v^2\lambda\,\sigma}{2} \pm \sqrt{\omega_A^2 - \left(\frac{v^2\lambda\,\sigma}{2}\right)^2}\,. \qquad (2.33)$$
Because of the square root structure, there is clearly a competition between the dissipative effects and the mass term $\omega_A$ arising from the Anderson-Higgs mechanism, which can result in an overdamped or an underdamped mode. More precisely, the excitations of the gauge field show a real mass gap only when:
$$\omega_A > \frac{v^2\lambda\,\sigma}{2}\,. \qquad (2.34)$$
On the contrary, in the limit of strong dissipation, the frequencies are purely imaginary. Let us consider the two limiting cases: (I) the near-critical region T ≈ T c and (II) the low temperature region T ≈ 0. When T → T c , ω A → 0 because of Eq.(2.20) (or equivalently λ GL → ∞), then Eq.(2.33) gives two simple solutions
$$T \to T_c: \qquad \omega = -i\,v^2\lambda\,\sigma\,, \qquad \omega = 0\,, \qquad (2.35)$$
where the decay time of the overdamped mode is τ = 1/(v 2 λ σ). On the other hand, at T → 0, we do expect all dissipative effects, and in particular the conductivity σ, to vanish. The same equation gives rise to a pair of solutions which read
$$T \to 0: \qquad \omega = \pm\,\omega_A - i\,\frac{v^2\lambda\,\sigma}{2} = \pm\,\omega_p - i\,\frac{v^2\lambda\,\sigma}{2}\,. \qquad (2.36)$$
where we used (2.26) in the last equality. In this opposite case, the excitations have a real gap with a small attenuation constant. Assuming that the conductivity vanishes at zero temperature, one would simply get ω = ± ω p at exactly T = 0. The crossover between the overdamped (high T ) and underdamped (low T ) regimes can be approximately found by equating the two terms,
$$\omega_A = \frac{v^2\lambda\,\sigma}{2}\,, \qquad \lambda_{\rm GL}(T)\,\sigma(T) = \frac{2}{v\,\lambda}\,. \qquad (2.37)$$
Using common values for these quantities in weakly coupled superconductors, one obtains that the crossover temperature T * is approximately given by T * /T c ∼ 0.5 [91].
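The overdamped-to-underdamped crossover encoded in Eqs. (2.33)-(2.37) can be visualized with the minimal sketch below. The temperature dependences of $\omega_A(T)$ and $\sigma(T)$ used here are placeholder mean-field-like choices, not derived from any microscopic model, and are tuned only so that the crossover lands near $T/T_c \approx 0.5$, echoing the estimate quoted above; the Higgs mode, Eq. (2.40) below, has the identical square-root structure with $\omega_A \to \omega_H$ and $v^2\lambda\sigma/2 \to \gamma$.

```python
import numpy as np

# Minimal sketch of Eq. (2.33): omega = -i d/2 +/- sqrt(omega_A^2 - (d/2)^2),
# with d = v^2 * lambda * sigma.  All temperature dependences are placeholders.
v, lam, Tc = 1.0, 1.0, 1.0

def omega_A(T):                 # gauge-field gap, mean-field-like ~ sqrt(Tc - T)
    return np.sqrt(max(Tc - T, 0.0))

def sigma(T):                   # placeholder normal conductivity, vanishing at T = 0
    return 3.0 * T / Tc

def gauge_modes(T):
    d = v**2 * lam * sigma(T)
    disc = omega_A(T)**2 - (d / 2.0)**2
    root = np.sqrt(abs(disc))
    if disc >= 0.0:             # underdamped: real gap with attenuation -i d/2
        return (root - 1j * d / 2.0, -root - 1j * d / 2.0)
    return (-1j * (d / 2.0 - root), -1j * (d / 2.0 + root))   # overdamped pair

for T in np.linspace(0.9, 0.1, 9):
    w1, w2 = gauge_modes(T)
    print(f"T/Tc = {T:.1f}:  omega = {w1.real:+.3f} {w1.imag:+.3f}i ,"
          f"  {w2.real:+.3f} {w2.imag:+.3f}i")
# With these placeholder choices the crossover of Eq. (2.37) sits near T/Tc ~ 0.5.
```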
In a similar way, using the Rayleigh dissipation function formalism [97], the effects of the conductivity on the dispersion relation of the Higgs mode, Eq.(2.12), can be incorporated in the attenuation constant γ given by [91]:
$$\gamma \sim \frac{\sigma}{\xi_{\rm GL}^{2}\,\Psi_0^{2}}\,. \qquad (2.38)$$
where ξ GL is given in (2.23). In this approximation, the dispersion relation of the mode is modified into
$$\omega^2 = \omega_H^2 + v^2 k^2 - 2i\,\gamma\,\omega\,, \qquad \omega_H^2 := \frac{2|b|\,v^2}{a}\,, \qquad (2.39)$$
which is of the same form as that for the gauge field fluctuations in Eq.(2.32). Similarly, we consider the zero-momentum solution of Eq.(2.39), which reads
$$\omega = -i\,\gamma \pm \sqrt{\omega_H^2 - \gamma^2}\,. \qquad (2.40)$$
In the near-critical regime, where T → T c , we have that
$$T \to T_c: \qquad \omega_H \sim (T_c - T)^{1/2}\,, \qquad \gamma \sim \frac{\sigma}{\xi_{\rm GL}^{2}\,\Psi_0^{2}} \sim \text{const.} \qquad (2.41)$$
Thus, around the critical point we have $\omega_H \ll \gamma$, which implies the appearance of two overdamped modes of the type
$$\omega = -i\,\frac{\omega_H^2}{2\gamma}\,, \qquad \omega = -2i\,\gamma\,, \qquad (2.42)$$
where the relaxation time of the longest-living excitation is given by
$$\tau_1 = \frac{2\gamma}{\omega_H^2} \sim \frac{1}{|T - T_c|}\,. \qquad (2.43)$$
Note that the strong effects of damping near T c render the observation of the Higgs mode problematic since the latter is strongly overdamped. In the opposite limit of small temperature, we do expect the conductivity to vanish and we therefore expect the effects of dissipation to be negligible compared to the ω H term. In particular, there, we do expect a pair of weakly attenuated modes with a real gap ω H
$$T \to 0: \qquad \omega = \pm\,\omega_H - i\,\gamma\,. \qquad (2.44)$$
Before concluding, let us present some remarks about the introduction of dissipative effects and the coupling to other conserved quantities. A standard way to promote the Ginzburg-Landau framework out of equilibrium and include dissipative effects is the so-called time-dependent complex Ginzburg-Landau theory [98]. Let us sketch the idea quickly by considering the dynamics of the complex order parameter Ψ and the ungauged case (i.e., the superfluid). While the equilibrium solution is given by minimizing the free energy density introduced in Eq.(2.1), the deviations from it are assumed to obey the simple time-dependent equation:
$$\frac{\partial\Psi}{\partial t} = -\Gamma_0\,\frac{\delta F}{\delta\Psi^*}\,, \qquad (2.45)$$
where Γ 0 is a phenomenological parameter which governs the relaxation of the order parameter. In general, the latter is taken to be a complex number. At the linearized level, and neglecting inhomogeneities, this equation also predicts the appearance of an overdamped amplitude (Higgs) mode near the critical point with dispersion:
$$\omega = -i\,{\rm Re}\,[\Gamma_0]\,b + \dots\,. \qquad (2.46)$$
This result is qualitatively analogous to what was obtained before using the Rayleigh dissipation function. Indeed, also in this case, the imaginary gap of the amplitude mode vanishes at the critical temperature. Notice that in this language the mass of the Higgs mode is controlled by the imaginary part of the phenomenological parameter $\Gamma_0$, $\omega_H = {\rm Im}\,[\Gamma_0]\,b$, and also vanishes as expected at the critical point. More generally, in order to describe the near-critical dynamics of a superfluid, and in particular to obtain also propagating modes, one needs to couple the dynamics of the order parameter to the conserved charge density [93]. For superfluids, this procedure automatically leads to the so-called model F in the Hohenberg-Halperin classification [99] (see also [87] for a holographic derivation of this dynamics in the holographic superfluid model, [100,101] for a study of the nonlinear dynamics and [102] for an analysis of the universality class of holographic superconductors). It would be interesting to extend model F in order to account for a dynamical gauge field and the coupling between the different modes.
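A minimal numerical illustration of Eq. (2.45) for a homogeneous configuration is sketched below, with made-up parameters. For uniform Ψ the gradient term drops out and $\delta F/\delta\Psi^* = b\Psi + c|\Psi|^2\Psi$; linearizing around $\Psi_0$, the amplitude deviation relaxes at a rate set by ${\rm Re}[\Gamma_0]\,|b|$ (with an order-one factor, 2 in this homogeneous case), which vanishes at $T_c$ in line with Eq. (2.46).

```python
import numpy as np

# Homogeneous time-dependent GL relaxation, Eq. (2.45), with made-up parameters.
# For uniform Psi:  dPsi/dt = -Gamma0 * (b*Psi + c*|Psi|^2*Psi).
Gamma0 = 1.0            # purely relaxational (real) choice for simplicity
beta, c = 1.0, 1.0
Tc, T = 1.0, 0.8
b = beta * (T - Tc)     # b < 0 below Tc
psi_eq = np.sqrt(-b / c)

dt, nsteps = 1e-3, 20000
psi = psi_eq + 0.05     # small initial amplitude deviation
deviation = np.empty(nsteps)
for i in range(nsteps):
    psi -= dt * Gamma0 * (b * psi + c * abs(psi)**2 * psi)   # forward Euler step
    deviation[i] = abs(psi) - psi_eq

# Exponential fit of the decay; the linearized prediction is 2*Gamma0*|b|,
# and the fitted rate should land close to it.
t = dt * np.arange(1, nsteps + 1)
rate = -np.polyfit(t, np.log(deviation), 1)[0]
print(f"fitted decay rate = {rate:.3f},  2*Gamma0*|b| = {2.0 * Gamma0 * abs(b):.3f}")
```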
Notice that model F does not take into account the dynamics of energy and momentum fluctuations, which will be anyway irrelevant for our holographic model in the probe limit. One could also formally derive a hydrodynamic theory for the superconductor by matching together magneto-hydrodynamics with the spontaneous breaking of the U(1) symmetry. In this case, the most challenging question is how to incorporate in a precise way the presence of non-hydrodynamic modes therein, as it is the case for the fluctuations of the amplitude mode. Near the critical temperature, an approach similar to those used around the QCD critical point [103][104][105] or those employed for pinned charge density waves [5] might work.
We leave the construction of a complete and rigorous effective description of the superconducting critical dynamics in presence of dissipation as a task for the future. We will come back to this discussion in the outlook.
The holographic setup
We consider the four dimensional Abelian-Higgs bulk action [9,10]
$$S_{\rm bulk} = \int d^4x\,\sqrt{-g}\left[R + 6 - \frac{1}{4}F^2 - |D\Phi|^2 - M^2|\Phi|^2\right], \qquad (3.1)$$
in presence of a negative cosmological constant Λ = −3. We have defined the bulk field strength F := dA and the covariant derivative D µ := ∇ µ − iQA µ , with Q the charge of the complex bulk scalar field Φ and M its mass. For simplicity, we work in the probe limit in which the dynamics of the metric fluctuations is kept frozen. The corresponding equations of motion for the matter bulk fields are given by:
$$\nabla_\mu F^{\mu\nu} - iQ\left(\Phi^* D^\nu\Phi - \Phi\,(D^\nu\Phi)^*\right) = 0\,, \qquad (3.2)$$
$$\left(D^2 - M^2\right)\Phi = 0\,, \qquad \left(D^2 - M^2\right)\Phi^* = 0\,. \qquad (3.3)$$
The background metric is chosen as:
$$ds^2 = \frac{1}{z^2}\left[-f(z)\,dt^2 + \frac{dz^2}{f(z)} + dx^2 + dy^2\right], \qquad (3.4)$$
with the emblackening factor which takes the Schwarzschild form:
$$f(z) = 1 - \frac{z^3}{z_h^3}\,. \qquad (3.5)$$
The corresponding temperature and entropy density of the dual field theory are given by:
$$T = \frac{3}{4\pi z_h}\,, \qquad s = \frac{4\pi}{z_h^2}\,. \qquad (3.6)$$
Finally, the ansatz for the bulk matter field is taken as:
$$A = A_t(z)\,dt\,, \qquad \Phi = \psi(z)\,. \qquad (3.7)$$
Note that, with A z = A x = A y = 0, the Maxwell equation of motion (3.2) implies that the phase of the scalar Φ is a constant [9]. Hence, for simplicity, we set the background phase to be zero and take Φ to be a real scalar in the background. Using the aforementioned notations, the bulk equations of motion can be written as
A_t'' - \frac{2 Q^2 \psi^2}{z^2 f}\, A_t = 0 , \qquad \psi'' + \left(\frac{f'}{f} - \frac{2}{z}\right)\psi' + \left(\frac{Q^2 A_t^2}{f^2} - \frac{M^2}{z^2 f}\right)\psi = 0 , \qquad (3.8)
and are solved numerically integrating them from the horizon (z = z h ) to the boundary (z = 0). For the concrete numerical computations, we take (z h , Q, M 2 ) = (1, 1, −2). We assume standard quantization for the bulk scalar field and fix the conformal dimension of the dual operator to be ∆ ψ = 2. At the horizon, we impose the regularity conditions for both the gauge field, A t (z h = 1) = 0, and the scalar field. Near the boundary, the matter fields behave as
A t = µ − ρz + O(z 2 ) , ψ = ψ 1 z + ψ 2 z 2 + O(z 3 ) . (3.9)
Using the holographic dictionary, µ can be interpreted as the chemical potential in the dual field theory and ρ as the charge density. Moreover, using standard quantization for the scalar field, ψ_1 represents the source for the dual scalar operator (the order parameter) and ψ_2 its expectation value, i.e., the scalar condensate O_2. In order to describe the spontaneous symmetry breaking of the dual U(1) symmetry, we always set the source to zero, ψ_1 = 0. We will describe the main physical properties of the broken phase in section 4.
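For readers who want to reproduce the background, the following is a minimal numerical sketch (not the authors' code) of the shooting procedure just described: Eqs.(3.8) are integrated from the horizon to the boundary with the regularity conditions above, and the horizon data are tuned so that the source ψ_1 vanishes. The horizon seed e1 and the scan window for ψ(z_h) below are illustrative assumptions.

```python
# Minimal sketch of the background shooting for (Q, M^2) = (1, -2), z_h = 1.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

Q, M2 = 1.0, -2.0
f  = lambda z: 1.0 - z**3
df = lambda z: -3.0 * z**2

def rhs(z, y):
    at, dat, ps, dps = y
    d2at = 2.0 * Q**2 * ps**2 / (z**2 * f(z)) * at
    d2ps = -(df(z)/f(z) - 2.0/z) * dps \
           - (Q**2 * at**2 / f(z)**2 - M2 / (z**2 * f(z))) * ps
    return [dat, d2at, dps, d2ps]

def integrate(e1, psih, z0=1.0 - 1e-5, zb=1e-4):
    # horizon regularity: A_t ~ e1 (1 - z) and psi'(z_h) = M2 psi(z_h)/f'(z_h) = 2 psi(z_h)/3
    y0 = [e1*(1.0 - z0), -e1, psih + (2.0/3.0)*psih*(z0 - 1.0), (2.0/3.0)*psih]
    sol = solve_ivp(rhs, (z0, zb), y0, rtol=1e-10, atol=1e-12)
    at, dat, ps, dps = sol.y[:, -1]
    z = sol.t[-1]
    psi1 = 2.0*ps/z - dps           # source, to be tuned to zero
    psi2 = (dps - psi1)/(2.0*z)     # condensate <O_2>
    mu, rho = at - z*dat, -dat
    return psi1, psi2, mu, rho

e1 = 6.0                            # illustrative horizon seed (normal phase has mu = e1, mu_c ~ 4.06)
grid = np.linspace(0.1, 10.0, 200)  # illustrative scan window for psi(z_h)
vals = [integrate(e1, p)[0] for p in grid]
i = next(j for j in range(len(grid)-1) if vals[j]*vals[j+1] < 0)
psih = brentq(lambda p: integrate(e1, p)[0], grid[i], grid[i+1])
psi1, psi2, mu, rho = integrate(e1, psih)
T = 3.0/(4.0*np.pi)
print(f"T/mu = {T/mu:.4f}, <O_2> = {psi2:.4f}, rho = {rho:.4f}")
```

Scanning the horizon seed e1 then traces out the condensate curve of Fig. 2 as a function of T/µ.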
Fluctuations and boundary conditions
In order to study the dynamics of the low energy modes in the dual field theory, on top of the background solution Eq.(3.4), we switch on the following bulk field fluctuations:
δA = δa t (t, z, x) dt + δa x (t, z, x) dx + δa y (t, z, x) dy , δΨ = δσ(t, z, x) + i δη(t, z, x) , (3.10)
where the radial gauge A r = 0 is assumed. Importantly, we work in the probe limit in which the fluctuations of the metric are kept frozen. Moreover, we decompose all fluctuations in Fourier space using the notation:
ξ(t, z, x) =ξ(z) e i(kx−ωt) . (3.11)
where for simplicity the wave-vector k is aligned along the x direction and ξ is a collective label denoting a generic bulk field fluctuation. The equations of motion for the fluctuations arising from Eqs.(3.2)-(3.3) decouple into two independent sectors:
Longitudinal sector: {δa t (z), δa x (z), δσ(z), δη(z)} ,
Transverse sector: {δa_y(z)} . (3.12)
Note that the complex scalar field fluctuations (δσ, δη) are only coupled to the longitudinal vector components (δa_t, δa_x). The equations in each sector are as follows. In the longitudinal sector, we have
0 = f\,\delta\eta'' + \left(f' - \frac{2f}{z}\right)\delta\eta' + \left(\frac{Q^2 A_t^2}{f} - \frac{M^2}{z^2} + \frac{\omega^2}{f} - k^2\right)\delta\eta - \frac{2Q\, i\omega A_t}{f}\,\delta\sigma - \frac{iQ\,\omega\psi}{f}\,\delta a_t - iQ\, k\psi\,\delta a_x , \qquad (3.13)
0 = f\,\delta\sigma'' + \left(f' - \frac{2f}{z}\right)\delta\sigma' + \left(\frac{Q^2 A_t^2}{f} - \frac{M^2}{z^2} + \frac{\omega^2}{f} - k^2\right)\delta\sigma + \frac{2Q^2 A_t\psi}{f}\,\delta a_t + \frac{2Q\, i\omega A_t}{f}\,\delta\eta , \qquad (3.14)
0 = f\,\delta a_t'' - \left(k^2 + \frac{2Q^2\psi^2}{z^2}\right)\delta a_t - \omega k\,\delta a_x - \frac{2Q i\omega\psi}{z^2}\,\delta\eta - \frac{4Q^2 A_t\psi}{z^2}\,\delta\sigma , \qquad (3.15)
0 = f\,\delta a_x'' + f'\,\delta a_x' + \left(\frac{\omega^2}{f} - \frac{2Q^2\psi^2}{z^2}\right)\delta a_x + \frac{\omega k}{f}\,\delta a_t + \frac{2Q i k\psi}{z^2}\,\delta\eta , \qquad (3.16)
together with the constraint equation
\frac{\omega}{f}\,\delta a_t' + k\,\delta a_x' = \frac{2Q i}{z^2}\left(\psi'\,\delta\eta - \psi\,\delta\eta'\right) . \qquad (3.17)
In the transverse sector, the dynamics of the fluctuations is controlled by
0 = f\,\delta a_y'' + f'\,\delta a_y' + \left(\frac{\omega^2}{f} - k^2 - \frac{2Q^2\psi^2}{z^2}\right)\delta a_y . \qquad (3.18)
After defining the equations of motion, we need to specify the boundary conditions for the fluctuations and in particular for those of the bulk gauge field. Following Ref. [14] and our more recent work, Ref. [53], we promote the external gauge field in the boundary field theory to be a dynamical field. This is fundamental to describe a superconducting phase rather than a superfluid one.
Let us start by considering the bulk Maxwell action in (3+1) dimension as
S_{bulk} = -\frac{1}{4e^2}\int d^4x\, \sqrt{-g}\, F^2 , \qquad (3.19)
where F = dA is the field strength for the U (1) gauge field A and the EM bulk coupling e is re-introduced for clarity. We then introduce the following boundary terms
S_{boundary} = \int d^3x \left[ -\frac{1}{4\lambda}F_{\mu\nu}^2 + J^\mu_{ext} A_\mu \right] , \qquad (3.20)
where λ parameterizes the strength of Coulomb interactions at the boundary (not to be confused with the bulk coupling e in Eq. (3.19)) and the last term is just a Legendre transform in terms of an external current J µ ext . The variation of the total action, S tot := S bulk + S boundary , reads
\delta_{A_\mu} S_{tot} = \int d^3x \left[ \Pi^\mu - \frac{1}{\lambda}\partial_\nu F^{\mu\nu} + J^\mu_{ext} \right]\delta A_\mu , \qquad (3.21)
where the conjugate momentum of the gauge field, Π^µ, is given by 9
\Pi^\mu = \frac{\delta S_{bulk}}{\delta A_\mu} = \left.\frac{\sqrt{-g}}{e^2}\, F^{z\mu}\right|_{z\to 0} . \qquad (3.22)
Eq.(3.21) is equivalent to the boundary Maxwell equations
∂ ν F µν = λ (Π µ + J µ ext ) ,(3.23)
which implies that the gauge field, A_µ, is now dynamical in the boundary field theory description. Following this prescription, the external sources can be determined as

\delta J^{x\,(L)}_{ext} = -\frac{\omega}{\lambda}\, Z^{(L)}_{A_x} - \frac{1}{e^2}\frac{\omega}{\omega^2-k^2}\, Z^{(S)}_{A_x} , \qquad \delta J^{y\,(L)}_{ext} = -\frac{\omega^2-k^2}{\lambda}\, Z^{(L)}_{A_y} - \frac{1}{e^2}\, Z^{(S)}_{A_y} , \qquad (3.24)

where Z_{A_x} := k\,\delta a_t + \omega\,\delta a_x, Z_{A_y} := \delta a_y, and the labels (L) and (S) respectively stand for the leading and subleading terms. Additionally, the conservation equation \nabla_\mu J^\mu_{ext} = 0 holds and implies that the time component, \delta J^{t\,(L)}_{ext}, is fixed by the others appearing in Eq.(3.24). Near the AdS boundary (z → 0), the fluctuations behave as

\delta a_\mu = \delta a^{(L)}_\mu + \delta a^{(S)}_\mu\, z + \dots , \quad \delta\sigma = \delta\sigma^{(L)} z + \delta\sigma^{(S)} z^2 + \dots , \quad \delta\eta = \delta\eta^{(L)} z + \delta\eta^{(S)} z^2 + \dots , \qquad (3.25)

and the external sources in Eq.(3.24) are constructed accordingly. We will derive the dispersion relations of the low-energy modes using the determinant method [106]. For this purpose, we define the source matrix for the longitudinal/transverse sector as

S_{long} = \begin{pmatrix} \delta J^{x\,(L)(I)}_{ext} & \delta J^{x\,(L)(II)}_{ext} & \delta J^{x\,(L)(III)}_{ext} \\ \delta\eta^{(L)(I)} & \delta\eta^{(L)(II)} & \delta\eta^{(L)(III)} \\ \delta\sigma^{(L)(I)} & \delta\sigma^{(L)(II)} & \delta\sigma^{(L)(III)} \end{pmatrix} , \qquad S_{trans} = \delta J^{y\,(L)}_{ext} , \qquad (3.26)
where the indices I, II, III denote the n-th independent solution. The dispersion relation of the modes are then obtained by imposing the determinant of the source matrix to vanish:
det S long (ω, k) = 0 , S trans (ω, k) = 0 . (3.27)
In what follows, we set e = 1 and keep λ as a free parameter to control the ratio between the strength of Maxwell interactions in the bulk and those at the boundary.
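Schematically, the determinant method of Eq.(3.27) amounts to assembling the boundary sources of three independent bulk solutions into S_long and locating the complex frequencies where its determinant vanishes. The snippet below illustrates only this last step; `boundary_data` is a hypothetical stand-in (here a toy analytic function with a known zero) for the actual integration of Eqs.(3.13)-(3.16) with ingoing horizon conditions.

```python
# Toy illustration of the determinant method: build S_long from three seeds
# and find the complex omega where det S_long(omega, k) = 0.
import numpy as np

def boundary_data(omega, k, seed):
    # hypothetical stand-in with a known quasi-normal mode at omega = 0.8 - 0.05j,
    # used only to exercise the root-finding machinery
    w0 = 0.8 - 0.05j
    cols = [ (omega - w0) * np.array([1.0, 2.0, -1.0j]),
             np.array([1.0, 1j*omega, k], dtype=complex),
             np.array([k, 1.0, omega], dtype=complex) ]
    return cols[seed]

def det_S(omega, k):
    cols = [boundary_data(omega, k, s) for s in range(3)]
    return np.linalg.det(np.column_stack(cols))

def find_mode(k, guess, tol=1e-10, maxit=60):
    # secant iteration in the complex omega plane
    w0, w1 = guess, guess*(1 + 1e-3) + 1e-3
    f0 = det_S(w0, k)
    for _ in range(maxit):
        f1 = det_S(w1, k)
        if abs(f1) < tol:
            break
        w0, f0, w1 = w1, f1, w1 - f1*(w1 - w0)/(f1 - f0)
    return w1

print(find_mode(k=0.3, guess=0.7 - 0.1j))   # converges to ~0.8 - 0.05j
```

In the actual computation, the three seeds correspond to independent choices of ingoing horizon data, and the secant search is repeated for each value of k to trace out the dispersion relations shown below.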
The equilibrium superconducting state
By numerically solving equations (3.8) with the boundary conditions defined in the previous section, one observes the appearance of a bulk solution with a non-trivial profile for the bulk complex scalar field above a certain critical value of the chemical potential. This is the broken phase in which the U (1) symmetry is spontaneously broken. For our choice of parameters, we find µ c z h ≈ 4.062 which corresponds to a critical temperature T c /µ = 0.0587 consistent with the results in Ref. [75]. We plot the profile of the scalar condensate as a function of the reduced temperature in Fig. 2. As expected, close to the critical point we observed the typical mean-field behavior
O_2 \sim \left(1 - T/T_c\right)^{1/2} . \qquad (4.1)
Before moving to the dynamics of the fluctuations at finite frequency and wave-vector, we can study the electric response of the system in the broken phase. The electric conductivity 10 can be defined holographically using
\sigma(\omega) = \frac{1}{i\omega}\,\frac{\delta a^{(S)}_x}{\delta a^{(L)}_x} , \qquad (4.2)
where δa^(L)_x is the leading coefficient of the fluctuation δa_x, while δa^(S)_x is the subleading one. In the absence of coupling to momentum (i.e., in the probe limit), the optical conductivity takes the simple form
\sigma(\omega) = \sigma_0 + \left(\frac{i}{\omega} + \delta(\omega)\right)\frac{\rho_s}{\mu} , \qquad (4.3)
where ρ_s is the superfluid density. The superfluid density approaches the total density at low temperature, as shown in the left panel of Fig. 3. From the formula above, we can also extract the parameter σ_0. In the small temperature regime, T/T_c ≪ 1, it was shown [9,10,12,107] that σ_0 := lim_{ω→0} Re[σ(ω)] is associated with the superconducting energy gap ∆ via

\sigma_0 \sim e^{-\Delta/T} , \qquad \Delta := \frac{\sqrt{\langle O_2\rangle}}{2} , \qquad (4.4)
i.e., the low temperature behavior of the conductivity σ_0 is exponentially suppressed by the condensate O_2. We show this behavior in the right panel of Fig. 3, proving that the formula above works very well. Notice that, as is well known, the energy gap extracted is given by 2∆ ∼ 8T_c and is much larger than the BCS prediction 2∆ ∼ 3.5T_c.
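As a simple illustration of how the gap can be extracted from the low-temperature tail of σ_0 via Eq.(4.4), one can fit log σ_0 against 1/T; the data points below are synthetic placeholders standing in for the numerical σ_0(T) of Fig. 3.

```python
# Extract Delta from sigma_0 ~ exp(-Delta/T): fit log(sigma_0) linearly in 1/T.
import numpy as np

T_over_Tc = np.array([0.15, 0.20, 0.25, 0.30, 0.35])
Delta_true = 4.0                              # in units of T_c, for the fake data only
sigma0 = np.exp(-Delta_true / T_over_Tc)      # synthetic stand-in for the holographic data

slope, intercept = np.polyfit(1.0 / T_over_Tc, np.log(sigma0), 1)
print(f"fitted Delta/T_c = {-slope:.3f}")     # recovers ~4, i.e. 2*Delta ~ 8 T_c
```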
Transverse collective modes
In this section, we study the dispersion relation of the transverse low-energy collective modes. Unless otherwise mentioned, we set λ/T = 0.1.
Massive electromagnetic waves
In order to understand the dynamics in the transverse sector, we utilise the following equation:
ω 2 =ω 2 A +ṽ 2 k 2 − iσ ω ,(5.1)
which is exactly of the same form as the one derived in the dissipative Ginzburg-Landau framework in the previous section, Eq.(2.32), i.e.,
ω A ↔ ω A ,ṽ ↔ v ,σ ↔ v 2 λ σ . (5.2)
Using these notations,ṽ parameterizes the velocity of propagation of EM waves,σ the dissipative effects coming from the conductivity andω A the emergent mass arising because of the Anderson-Higgs mechanism.
Transverse excitations in the normal phase. In the normal phase, T ≥ T c , the equation (5.1) can be formally derived using magnetohydrodynamics [108] and has been verified holographically in [44,53]. In particular, because of the probe limit, in the normal phase we do expect
T ≥ T c :ω A = 0 ,ṽ 2 = 1 − λ χ BB ,σ = σ 0 λ ,(5.3)
together with χ BB = −3/(4πT ), as proved explicitly in [53] (see also appendix A for the derivation ofṽ). Furthermore, let us recall that above T c , the bulk field A t (associated with the chemical potential) is absent in the transverse sector (3.18), which implies that the transverse dispersion relation of the normal phase is independent of the value of µ. As a consequence, the dispersion data shown in red color in Fig. 4 are representative for all the temperatures T ≥ T c (or µ ≤ µ c ). Transverse excitations in the superconducting phase. The dispersion relation of the lowest collective modes in the transverse sector is shown in Fig. 4 for different values of temperature in the superconducting phase. At the critical temperature (red data), we observe the standard behavior for EM waves in a conductor, in which the effects of screening induce a gap in the wave-vector [109]. The dynamics of EM waves displays a crossover between an overdamped diffusive behavior for long wave-lengths to a propagating behavior at short wave-lengths. The crossover between the two regimes is controlled by the conductivity of the system and the value of the electromagnetic coupling λ. We refer to [53] for a complete study of this behavior.
By decreasing the temperature and moving deeper into the superconducting phase (green and blue data), we observe that the critical wave-vector becomes smaller. At a critical value of the temperature, the gap of the dispersion relation changes its nature and becomes a real energy gap, while the imaginary part of the dispersion becomes approximately constant. 11 In Fig. 4, we also display the fitting curves (solid lines) using (5.1), which are in good agreement with the numerical values (symbols). Note that in general there are three fitting parameters (ω A ,ṽ ,σ), while the numerical quasi-normal mode data has only two independent degrees of freedom at a given wave-vector. Therefore, for practical purposes, we fix v 2 = 1 − λ χ BB even in the superconducting phase and we only fit for the two parametersω A ,σ. We then verify a posteriori the validity of this assumption. In the following subsections, we discuss their temperature and EM coupling dependence in detail.
Before continuing, we remind the reader that, in the case of holographic superfluids, the spectrum does not display any transverse hydrodynamic mode (see [75] for details).
Zero wave-vector excitations
We are ready to investigate the dispersion relation of the transverse EM waves in the superconducting phase. For simplicity, we start with the homogeneous case, k = 0. The solutions of Eq.(5.1) at k = 0 read
\omega = -\frac{i}{2}\tilde\sigma \pm \frac{1}{2}\sqrt{4\tilde\omega_A^2 - \tilde\sigma^2} \qquad (5.4)
and will be analyzed in detail below. Let us remind that in the normal phase we haveσ finite andω A = 0. Depending on the value ofω A andσ, the dispersion in Eq.(5.4) can give purely imaginary or complex modes. More precisely, we have three distinct cases. Whenever the dissipative effects are dominating, 4ω 2 A <σ 2 , the modes are purely imaginary, with dispersion relation
\omega_{(\pm)} = -\frac{i}{2}\left(\tilde\sigma \pm \sqrt{\tilde\sigma^2 - 4\tilde\omega_A^2}\right) . \qquad (5.5)
In the smallω A limit, these imaginary poles are just given by
\omega_{(+)} \approx -i\left(\tilde\sigma - \frac{\tilde\omega_A^2}{\tilde\sigma}\right) , \qquad \omega_{(-)} \approx -i\,\frac{\tilde\omega_A^2}{\tilde\sigma} . \qquad (5.6)
At a critical value of the mass, 4ω 2 A =σ 2 , these two poles collide on the imaginary axes at ω collision = − i 2σ . After the collision, they split into two complex poles and move away from the imaginary axes towards the real axes in a symmetric fashion.
In the opposite limit, in which the mass dominates over the dissipative effects, 4ω 2 A > σ 2 , we have the complex poles
\omega = \omega_{(C)} := \pm\,\omega_{(R)} - i\,\omega_{(I)} = \pm\frac{1}{2}\sqrt{4\tilde\omega_A^2 - \tilde\sigma^2} - \frac{i}{2}\tilde\sigma . \qquad (5.7)
As a general rule, dissipative effects become subdominant at low temperature. Therefore, the dynamics just described is what we do expect by decreasing the temperature from the critical point down to zero temperature. This is exactly what we observe in Fig. 5.
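The pole motion described above follows directly from Eq.(5.4) (equivalently, from the k = 0 limit of Eq.(5.1)); the short sketch below, with illustrative values of σ̃ and ω̃_A rather than the actual holographic data, reproduces the collision on the imaginary axis and the subsequent appearance of underdamped complex modes.

```python
# Pole motion of Eq. (5.4): roots of omega^2 + i*sigma*omega - wA^2 = 0 as the
# emergent mass wA grows relative to the dissipative parameter sigma.
import numpy as np

sigma = 0.1                                  # illustrative stand-in for sigma0*lambda
for wA in np.linspace(0.0, 0.2, 9):          # mass grows as T is lowered below T_c
    roots = np.roots([1.0, 1j*sigma, -wA**2])
    print(f"wA = {wA:.3f}:", np.round(roots, 4))
# For 2*wA < sigma both roots are purely imaginary; they collide at -i*sigma/2
# and then split into a complex-conjugate-like pair with a finite real part.
```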
As an interesting observation, we find that, at leading order in the EM coupling λ, the collision between the two modes occurs at the specific value
\left.\frac{\mathrm{Im}[\omega]}{T}\right|_{\text{collision}} = -\frac{1}{2}\frac{\lambda}{T} , \qquad (5.8)
which is confirmed numerically in Fig. 6. As shown explicitly in the right panel, this expression represents only an approximation in the regime of small EM coupling and it fails above λ/T ≈ 0.4. By fitting the data at k = 0, we can extract the temperature dependence of the phenomenological parameters (σ/T,ω A /T ). Their behavior is shown in Fig. 7. Interestingly, we find that the dissipative parameterσ takes the same form as in the normal phase and does not receive corrections in the superconducting state. In particular, for all the values of the temperature, within the probe limit approximation, we find that
\tilde\sigma = \sigma_0\,\lambda , \qquad (5.9)
where:
σ 0 := lim ω→0 Re[σ(ω)] ,(5.10)
and σ(ω) is the conductivity defined in Eq.(4.2) and shown in the right panel of Fig. 3. The dynamics of the other fitting parameter, ω̃_A, is more complex. In the regime of small temperature, T ≪ T_c, we find that this parameter is well fitted by the plasma frequency value

\omega_p := \sqrt{\frac{\lambda\,\rho^2}{\epsilon + p}} , \qquad (5.11)

where ε is the energy density and p the thermodynamic pressure, which can be evaluated using the Smarr relation ε + p = sT + µρ. The low temperature behavior of ω̃_A is shown in the right panel of Fig. 7 using a solid line. On the contrary, near the critical point, the value of ω̃_A strongly deviates from the plasma frequency value, Eq.(5.11), and vanishes at the critical point with a square root behavior,
\tilde\omega_A = \alpha\,\sqrt{1 - T/T_c} , \qquad (5.12)
where α is a λ-dependent constant.
In particular, the mass of the EM waves vanishes at the critical point since λ_GL → ∞ in Eq.(2.21). At the same time, it is expected that in the limit of small temperature, the mass of the gauge field fluctuations approaches the plasma frequency value, see Eq.(2.26). In other words, our holographic results are perfectly compatible with the GL picture reviewed in Section 2. In principle, using perturbative methods, one could extract analytically the value of the parameter α which determines the near-critical behavior of the mass ω̃_A, as done in [87]. We leave this analysis for the future.
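For orientation, the low-temperature estimate in Eq.(5.11) is a one-line computation once the Smarr relation is used; the thermodynamic inputs in the sketch below are illustrative placeholders rather than the actual background data.

```python
# Plasma frequency estimate of Eq. (5.11) using the Smarr relation eps + p = s*T + mu*rho.
import numpy as np

lam, T = 0.1, 3.0/(4.0*np.pi)        # boundary EM coupling and temperature (z_h = 1)
s, mu, rho = 4.0*np.pi, 5.0, 5.5     # s = 4*pi/z_h^2; mu and rho are placeholders
eps_plus_p = s*T + mu*rho
omega_p = np.sqrt(lam * rho**2 / eps_plus_p)
print(f"omega_p/T = {omega_p/T:.3f}")
```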
EM coupling dependence
To find the EM coupling dependence, we have performed the same analysis for different values of λ. The results are shown in Fig. 8. First, we observe that for all the values of the electromagnetic coupling and temperature, the parameter σ̃ obeys the expression in Eq.(5.9). Second, we find that independently of the value of the EM coupling, the mass of the gauge field fluctuations approaches the plasma frequency value at low temperatures. Interestingly, we observe that the mass ω̃_A reaches the plasma frequency value at a larger temperature for smaller values of the EM coupling (see top right panel in Fig. 8). Finally, near the critical temperature, the mass always vanishes following the mean-field behavior in Eq.(5.12). The constant of proportionality α depends on the electromagnetic coupling and, at least for this choice of parameters, it is well approximated by the fitting expression α/T = 5.5 √(λ/T). As already mentioned, the value of this constant should be related to the GL parameters, which can be computed directly from the holographic picture, as done in [87] for the superfluid case.
Comparison with the perturbative analytical results near the critical point. Recently, Ref. [26] studied the holographic Meissner effect in the holographic superconductor model using perturbative analytical techniques valid in the near-critical regime (see [81,87] for similar analyses in the case of holographic superfluids). In particular, a closed form for the London penetration length, λ_holo, was obtained. Using that expression, we can extract an analytical formula for the mass of the gauge field ω̃_A, which is given by
\tilde\omega_A = \frac{1}{\lambda_{holo}} = \sqrt{\frac{2\lambda}{1+\lambda}}\; I , \qquad I := \int_0^1 dz\, \frac{\psi(z)}{z^2} . \qquad (5.13)
In the expression above, ψ(z) is the bulk complex scalar field (see Eq.(3.7)). The limits of integration are the location of the boundary z = 0 and that of the horizon z = 1. The comparison between our numerical data and the expression (5.13) derived in [26] is presented in Fig. 9. The agreement near the critical point, T /T c ≈ 1, is excellent. Interestingly, we notice that the validity of Eq.(5.13) extends to lower temperatures when the EM coupling is small. On the contrary, for large values of the EM coupling λ, the analytical formula approximates well the numerical data only very close to the critical point.
Longitudinal collective modes and the Anderson-Higgs mechanism
We now move to the discussion of the longitudinal sector. Once again, unless otherwise mentioned, we set the value of the EM coupling to λ/T = 0.1.
Collective excitations
For simplicity, let us start with the homogeneous case, k = 0. Given that the dynamics is complicated, we find instructive to first present a schematic description which refers to the top panel of Fig. 10. In the normal phase, above the critical temperature, the fluctuations of the scalar order parameter at zero wave-vector decouple from those of the gauge field. The modes associated with the scalar fluctuations, sometimes referred as critical modes, have both a real and imaginary gap which vanish at the critical temperature. This dynamics is exactly equivalent to that presented in the probe holographic superfluid in [75] (see also [87]) and can be easily derived using the time-dependent Ginzburg-Landau theory.
In addition to the scalar critical modes, there is a non-hydrodynamic mode that corresponds to damped charge diffusion. Here, charge fluctuations are damped (rather than diffusing) because of the effects of dynamical electromagnetism (see [53,108]), i.e., ω = −iσ = −iσ 0 λ. At the critical temperature, T = T c , the two critical modes approach the origin. However, differently from the case of the superfluid, the mode corresponding to the fluctuations of charge does not go to the origin at the critical point as it remains overdamped. As a consequence, just below the critical temperature, no massless propagating degree of freedom appears (cfr. second sound in superfluids), but rather one observes three different modes with a purely imaginary frequency which we denote as Ω, ω (+) , ω (−) . Decreasing further the temperature, two of these three modes, ω (+) and ω (−) , collide on the imaginary axes and create a pair of complex modes which move towards the real axes and become underdamped. We will refer to those (complex) modes as ω (C) . The other third mode, Ω, remains on the imaginary axes, and its (negative) imaginary part increases by decreasing temperature.
Notice that, at zero wave-vector, the dynamics of the pair of modes ω_(±) is the same as in the transverse sector (e.g. Fig. 5). This simply reflects the fact that, when the momentum is zero, the equations of motion for the longitudinal fluctuations, Eq.(3.13), can be decomposed into two decoupled sectors: i) (δσ, δη, δa_t); ii) δa_x. Then, the equation for δa_x is exactly the same as the one in the transverse sector, Eq.(3.18).
We now move to the case of finite wave-vector, k ≠ 0. The dynamics is more complicated as all the fluctuations are now coupled. Phenomenologically, at least in the limit of small wave-vector, k/T ≪ 1, we find that the lowest quasi-normal modes are well approximated by the following equations with six phenomenological parameters (σ̃, ω̃_A, V, Γ, Ω, D_Ω):
\omega\left(\omega + i\tilde\sigma + i\,\Gamma k^2\right) = V^2 k^2 + \tilde\omega_A^2 , \qquad \omega + i\,\Omega + i\, D_\Omega k^2 = 0 . \qquad (6.1)
Solving the two equations above gives the dispersion of the modes in the limit of small wave-vector,
\omega = \pm\frac{1}{2}\sqrt{4\tilde\omega_A^2 - \tilde\sigma^2} - \frac{i}{2}\tilde\sigma + \left(\pm\frac{2V^2 - \Gamma\tilde\sigma}{2\sqrt{4\tilde\omega_A^2 - \tilde\sigma^2}} - \frac{i}{2}\Gamma\right)k^2 , \qquad (6.2)
\omega = -i\,\Omega - i\, D_\Omega k^2 . \qquad (6.3)
As we will see shortly, Eq.(6.2) is related to the second sound of the superfluid case and reduces to ω = ω_(±) or ω_(C) at k = 0. Eq.(6.3) is related to the "Higgs" mode. Let us stress that the first equation of (6.1) is not derived from a formal effective description (e.g., hydrodynamics) but is just an educated guess based on two limiting cases: i) for k = 0, the transverse mode is equivalent to the longitudinal mode; ii) for λ = 0, the dispersion relation of the superfluid (σ̃ = ω̃_A = 0) is recovered. The second equation of (6.1) is the same as in the superfluid case because it is supposed to be the Higgs mode, which is λ-independent. We will come back to these points in the following paragraph. More generally, we do expect the above two modes (Eqs.(6.1)) to couple. Nevertheless, as we will see, at least in the limit of small wave-vector, the decoupled results remain a reasonable approximation. This indicates that the coupling between the two equations above generates corrections to the dispersion relations which are higher-order in k.
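The small wave-vector formulas (6.2)-(6.3) can be checked directly against the exact roots of Eq.(6.1); the snippet below does so for illustrative values of the phenomenological parameters (they are not the fitted holographic values).

```python
# Compare the exact roots of the first relation in Eq. (6.1) with the
# small-k expansion of Eq. (6.2).  The second relation, Eq. (6.3), is linear.
import numpy as np

sig, wA, V, Gam = 0.08, 0.3, 0.6, 0.05     # illustrative (sigma~, omega_A~, V, Gamma)
for k in (0.01, 0.05, 0.1):
    # exact: omega^2 + i(sig + Gam k^2) omega - V^2 k^2 - wA^2 = 0
    exact = np.roots([1.0, 1j*(sig + Gam*k**2), -(V**2*k**2 + wA**2)])
    disc = np.sqrt(4*wA**2 - sig**2)
    approx = np.array([ s*(0.5*disc + (2*V**2 - Gam*sig)*k**2/(2*disc))
                        - 0.5j*(sig + Gam*k**2) for s in (+1, -1) ])
    print(k, np.round(np.sort_complex(exact), 5), np.round(np.sort_complex(approx), 5))
```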
Before continuing with our analysis, let us pause and first discuss the superfluid limit in which the gauge field is not dynamical at the boundary. For superfluids (see e.g. [75]), one finds the second sound waves and the damped charge diffusive mode or amplitude mode. This corresponds to assuming that (I) our parameters (σ̃, ω̃_A) vanish in Eq.(6.1) and (II) Γ and V are exactly the attenuation constant and the speed of propagation of second sound in the superfluid. As we will see, this is indeed the case. In the superfluid limit, the ω_(±) modes combine into second sound and the Ω mode becomes the Higgs mode.
The dispersion relation of the lowest QNMs in the longitudinal spectrum is shown in Fig. 11 from high temperature (top left), in the normal phase, to the lowest temperature accessible (bottom right). 12 In solid/dashed lines, we display the fitting formulas using Eqs.(6.2)-(6.3). In what follows, we discuss in detail the coefficients, (σ,ω A , V, Γ, Ω, D Ω ) appearing in Eqs.(6.1). Finally, we will discuss the similarities and differences with the GL picture presented in Section 2.
The fate of second sound
Let us first analyze the dynamics of the second sound in our holographic superconductor. Its dispersion relation at low k is given by Eq.(6.2). The temperature dependence of the coefficients (σ̃, ω̃_A, V, Γ) is presented in Fig. 12. Interestingly, the coefficients (σ̃, ω̃_A) are exactly the same as the ones appearing in the dispersion relation for EM waves in the transverse sector (cfr. Eqs.(5.9)-(5.12)). At the same time, as perhaps expected, the speed of sound V and the attenuation constant Γ coincide with those of second sound in the holographic superfluid model [75]. Notice how the speed of propagation approaches the conformal sound speed V² = 1/2 at low temperature and vanishes at the critical point. Also notice that as T → T_c, both ω̃_A and V vanish. As a consequence, the dispersion relation therein becomes

\omega = -i\tilde\sigma - i\,\Gamma k^2 , \qquad (6.4)

the damped charge diffusion mode at T ≥ T_c. In other words, the attenuation constant Γ, in the limit T → T_c, becomes the charge diffusion constant of the normal phase, D_c T = 3/(4π) ∼ 0.238. This is consistent with our data in Fig. 12.
Let us also notice that the dynamics of this mode is completely missing in the GL formalism presented in Section 2, since we have not considered the coupling to the conserved charge density nor the corresponding charge fluctuations. In order to include this mode into the EFT framework, one should extend the GL theory and promote it to model F in the Hohenberg-Halperin classification [99] for superfluids. In the context of holographic superfluids, the matching with model F has been proved explicitly in [87]. It would be interesting to repeat this analysis in the case of a superconductor with dynamical Coulomb interactions.
Finally, we want to discuss the effects of the EM coupling on the coefficients appearing in the dispersion relations. The behavior of the various coefficients as a function of temperature for different values of the EM coupling is shown in Fig. 13. Interestingly, we find that the velocity V and the attenuation constant Γ are independent of the EM coupling λ. On the contrary, as expected, the dissipative coefficient σ̃ and the mass ω̃_A depend on the EM coupling λ. Their dependence is shown in Fig. 14 for a value of the temperature close to the critical point. The first shows a linear behavior, while the mass shows a square-root behavior with λ.
The "Higgs" mode and its mass
Higgs mode at zero wave-vector. Next, we discuss the fate of the damped diffusive mode in Eq.(6.3), the Higgs mode. In particular, we focus on its dynamics at zero wavevector, as shown in Fig. 15. Near the critical point, we find that the Higgs mode is well approximated by a dispersion relation as in Eq.(6.3). We find numerically that:
Ω ∼ (1 − T /T c ) . (6.5)
This mode corresponds to the fluctuations of the amplitude of the order parameter. Its behavior is in perfect agreement with the expectation from GL theory, Eq.(2.42), i.e.,
\Omega \leftrightarrow \frac{\omega_H^2}{2\gamma} , \qquad (6.6)

and also with the holographic results for superfluids in [75] and the analysis of [84] (see also [100,101]). Interestingly, by decreasing the temperature, this mode collides with a first non-hydrodynamic higher pole (indicated with blue color in Fig. 15). This collision produces a pair of complex modes with a finite real part, which are displayed in red color in Fig. 15. This behavior is, once more, well described qualitatively by GL theory, see Eq.(2.40). To be precise, the complete dynamics is more complicated than an interaction between two modes as assumed in Eq.(2.40). Indeed, the first non-hydrodynamic mode interacts as well with a second higher-order non-hydrodynamic pole (green dots in Fig. 15) which is not included in Eq.(2.40). Nevertheless, this mode does not strongly affect the low-energy dynamics.
We have also studied the behavior of the Higgs mode in Eq.(6.3) and found that its dynamics is independent of the value of the EM coupling λ. In other words, Fig. 15 does not change with λ. 13 This is consistent with the properties of the Higgs mode as derived in the GL framework. In particular, the amplitude mode remains unaffected by Coulomb interactions. Importantly, this also implies that the position of the collision, and the temperature at which the Higgs mode acquires a real gap, do not depend on the value of the EM coupling λ.
Higgs mode at finite wave-vector. Considering the finite wave-vector case, we can try to push the comparison with GL theory further. For that purpose, we use the data in Fig. 15 with the dispersion relation in Eq.(2.39) obtained from GL theory. Although Eq.(2.39) cannot completely capture the dynamics of our Higgs mode near T = T_c, we can still use this approximation in the lower T regime, i.e., for the red mode in Fig. 15 up to near the collision point between the blue and black modes, T/T_c ∼ 0.6.
From GL theory, Eq.(2.39), we expect a dispersion relation of the form
\omega = \pm\left(\sqrt{\tilde\omega_H^2 - \tilde\gamma^2} + \frac{\tilde v^2}{2\sqrt{\tilde\omega_H^2 - \tilde\gamma^2}}\, k^2\right) - i\tilde\gamma , \qquad (6.7)
whereω H is the mass of the Higgs mode andγ its attenuation constant. Here, as done for the transverse sector before, we use the tilde variables for the holographic quantities:
ω H ↔ ω H ,ṽ ↔ v ,γ ↔ γ . (6.8)
Performing this analysis, we find that the velocity v coincides exactly with the velocityṽ, Eq.(5.3), appearing in the dispersion of the gauge fluctuations mode in Eq.(5.1). This is not surprising, and it is indeed expected from the GL theory (see Eq.(2.20) and Eq.(2.39)). Therefore, we have two fitting parameters (ω H ,γ) which can be extracted from the zero wave-vector analysis. Their temperature behavior for λ/T = 0.1 is shown in Fig. 16. At low temperature, we find thatω H >γ, which is consistent with the GL framework. Additionally, we find the two parameters are of the same order around T * /T c ≈ 0.6. This signals the crossover between the overdamped regime at large temperature and the underdamped one at low temperature and it is consistent with the results presented in Fig. 15. Finally, the dissipative parameterγ does not seem to vanish towards zero temperature. This point deserves further investigation in the model with backreaction.
After describing the dynamics of the Higgs mode at zero wave-vector, we can extend the analysis for k = 0, i.e., once we know (ω H ,γ) together withṽ 2 = 1 − λ χ BB , we can study the dispersion relation at finite wave-vector (6.7). We show the real and imaginary parts of the dispersion relation of the Higgs mode at low temperature in Fig. 17.
Interestingly, we see that the GL prediction fits very well the numerical data at low temperature. This is yet another confirmation that the holographic results are in perfect agreement with the Ginzburg-Landau effective description.
Further comments on the Higgs energy gap. Before closing this section, we discuss another feature related to the Higgs energy gap, ω̃_H. In (s-wave) BCS-type superconductors, under certain specific approximations, the Higgs mode energy gap ω̃_H obeys the following expression [112]:

\tilde\omega_H = 2\Delta , \qquad (6.9)
where ∆ is the superconducting energy gap related to the order parameter as 2∆ = O 2 [9]. Using our data in Fig. 2, we estimate O 2 = 2∆ ≈ 8 T c at T /T c = 0.15, which implies 2∆/T ≈ 53 at T /T c = 0.15. This result is not consistent with the value of the Higgs gap ω H reported in Fig. 16 which isω H /T ≈ 8.6 at approximately the same temperature T /T c = 0.15. Combining these outcomes, we find:
\left.\frac{\tilde\omega_H}{2\Delta}\right|_{T\approx 0.15\, T_c} \approx 0.162 , \qquad (6.10)
which is much smaller than the expected value in Eq.(6.9). We speculate about the origin of this discrepancy. First, from a practical perspective, working in the probe limit does not guarantee complete control on the low-temperature dynamics. Second, to the best of our knowledge, the result in Eq.(6.9) is not of universal validity but rather limited to weakly coupled BCS-type superconductors. As explicitly shown recently in [113,114], holographic superconductors do not fall into that simple class. It is tempting to attribute this novel outcome to the peculiar strongly-coupled and quantum critical nature of holographic superconductors. Further investigation is needed to ascertain the validity of such a statement. Let us also discuss the GL parameter κ_GL in (2.22), which is also associated with the Higgs energy gap via:

\frac{\tilde\omega_H}{\tilde\omega_A} = \sqrt{2}\,\kappa_{GL} , \qquad \kappa_{GL} \approx \frac{1}{\sqrt{\lambda}} . \qquad (6.11)
This parameter was studied recently in [26]. Using our numerical data (ω H ,ω A ), we can discuss the behavior of κ GL . In Fig. 18, we examine the ratio betweenω H andω A as a function of the temperature for different values of the EM coupling. We find a power law dependence of the type κ GL = ζ 1 + ζ 2 T /T c . This has to be contrasted with the logarithmic behavior found in higher dimensions in [26], which reflects the different nature of the EM coupling in different dimensions. Moreover, as expected from Eq.(6.11), the ratio decreases at larger λ (e.g., from red to yellow in Fig. 18). This is consistent with the fact that the λ-dependence in the GL parameter κ GL comes entirely from the propagation length λ GL ∝ 1/ √ λ. Finally, let us comment on the temperature dependence of the GL parameter κ GL . In AdS 5 [26], the GL parameter decreases with temperature. Here, it increases. The difference between the two scenarios is rooted in the dimension of the U(1) coupling λ in 2D and 3D and could be possibly understood analytically by performing holographic perturbative computations near the critical point.
So far, we have focused our analysis on the weak-coupling regime, λ/T ≪ 1. In Fig. 19, we also discuss the λ-dependence of the energy gaps (ω̃_A, ω̃_H) at fixed temperature and for larger values of the U(1) coupling λ. In the left panel, we display ω̃_A for different values of λ at T/T_c = 0.15. We find that (I) ω̃_A is monotonically increasing as we enhance λ; (II) ω̃_A deviates from the plasma frequency value, Eq.(5.11), for large λ. Furthermore, dialing the value of λ/T up to λ/T = 50 at the same temperature T/T_c = 0.15, we also checked that the other energy gap, ω̃_H, is independent of λ/T, which implies that the ratio between ω̃_H and ω̃_A decreases with λ (see the right panel in Fig. 19). Our observation (II) also implies that κ_GL does not follow ≈ 1/√λ in the limit λ → ∞. On the contrary, we numerically find that ω̃_H/ω̃_A = (λ/T)^{−0.34}. This is another distinct feature with respect to the higher dimensional case discussed in [26], where κ_GL remains finite even in the strong EM coupling limit λ → ∞. It would be interesting to understand the large λ limit better. We plan to revisit this question in the near future. Finally, let us comment on the validity of the probe limit and the expectations in the presence of backreaction. In general, we do not expect a qualitative difference in the nature of the low-energy modes, whose structure is mostly dictated by symmetries. Nevertheless, we do expect that the quantitative results, especially in the limit of small temperature, could change radically in the presence of backreaction. This is also the reason why all our data are cut around T ≈ 0.2T_c, where we expect such effects to become important. We leave the investigation of the backreacted model for the near future.
Outlook
All previous studies of collective dynamics in holographic models with spontaneously broken U(1) symmetry (e.g., [75,80]) have focused on the case where the U(1) symmetry is global and the dual field theory describes a superfluid rather than a superconductor. In this work, we have studied the low-energy collective dynamics of a bona fide holographic superconductor model, in which the gauge field in the boundary field theory is dynamical and the broken U(1) symmetry is gauged. We have revealed the characteristic features of the Anderson-Higgs mechanism and showed evidence for the presence of a Higgs mode exhibiting a real mass gap at low temperature. Interestingly, the pattern that gives rise to this mode seems to follow the GL logic. On the contrary, in holographic superfluids, the emergence of a pair of complex underdamped modes at low temperature has been observed to follow a very distinct dynamics [90]. Taking a phenomenological approach, and guided by the predictions of a simple Ginzburg-Landau framework, we have described the dispersion relations of the collective modes in both the transverse and longitudinal sectors as a function of the temperature and the electromagnetic coupling. The agreement between the GL effective description and our holographic results is excellent. Our work proves that a holographic superconductor, following all the rules of superconductivity, including the characteristic dynamical excitations, can be constructed using mixed boundary conditions for the bulk gauge field.
There are several directions which are worth investigating in the future.
• First and foremost, we have not presented a complete and formal effective description of the low-energy dynamics. This task can be performed using two slightly different approaches. From one side, one could try to gauge the model F of Hohenberg and Halperin [99] and perform an analysis similar to that of [87] for the case of holographic superfluids. An alternative approach would be to combine magnetohydrodynamics and superfluid hydrodynamics to construct a hydrodynamic framework for superconductors. This would need an extension of standard hydrodynamics in order to incorporate the dynamics of slowly relaxing non-hydrodynamic modes. Indeed, as emphasized above, ignoring the fluctuations of energy and momentum, a superconductor does not present any hydrodynamic gapless modes in the spectrum. This is very different from the case of superfluids, which present a gapless propagating second sound mode easily described within "standard" hydrodynamics.
• It would be interesting to study in more detail the transport properties of a holographic superconductor. In particular, one would like to understand if any signature of the massive Higgs mode can be observed in the optical conductivity spectrum below the superconducting gap. Naively, one would expect that an underdamped Higgs mode with gap below the SC gap ∆ should leave a clear signature in σ(ω). Here, one must deal with the subtleties regarding the electric response in presence of Coulomb interactions, see, e.g., [47,48].
• Quenches in our holographic superconductor model could be useful tools to explore the collective dynamics beyond linear approximation. In particular, one might think of extending the analysis of [100,101] to this case and use GL theory to interpret the numerical results. Nonlinear response is expected to be an excellent probe for the dynamics of the Higgs mode which is usually undetectable in the linear response regime [115].
• One could generalize our study to the case of multiband superconductors where a hydrodynamic mode, known as Leggett mode, should be present [116,117]. It would be fascinating to study the dynamics of the Leggett mode using holography.
• A different way to promote the gauge field at the boundary as dynamical is by using the dual higher-form description in the bulk [55]. It would be interesting to construct a holographic superconductor model without advocating for any U(1) vector gauge field in the bulk.
• The dynamics and possible observation of the Higgs mode in superconductors has been the topic of a long-standing debate in the condensed matter community [118][119][120][121][122][123][124].
In this work, we found very distinct features in the emergence of the Higgs mode in holographic superconductors with respect to the previous observations in holographic superfluids [90]. In particular, we see that the Higgs mode arises, as expected from the Ginzburg-Landau arguments, from the dynamics of the amplitude of the order parameter. On the contrary, in holographic superfluids, Ref. [90] observed the emergence of an underdamped massive mode at low temperature from the spectrum of microscopic modes. It would be interesting to understand this difference further.
• It has been recently demonstrated that, in presence of a non-zero superflow, the fingerprints of the Higgs mode could be visible already in the linear response regime [125]. One could introduce a non-zero condensate flow (supercurrent) in the holographic model and investigate the dynamics of the amplitude mode therein.
We plan to return to some of these issues in the near future.

A The speed of transverse excitations in the normal phase

Let us remind the reader about the spectrum of transverse excitations in the normal phase. Since we are working in the probe limit, the dynamics of the transverse momentum is kept frozen. For this reason, in the normal phase, the shear diffusion mode does not appear and the whole low-energy dynamics is controlled by the transverse fluctuations of the gauge field. These follow the so-called telegrapher equation:
\omega\left(\omega + \frac{i\,\sigma}{\epsilon_e}\right) = \frac{k^2}{\epsilon_e\,\mu_m} \qquad (A.1)
where ε_e, µ_m are respectively the electric permittivity and the magnetic permeability. Because of the modified boundary conditions and the dynamical gauge field in the boundary description, the normal phase displays a transverse massless mode with diffusive dispersion, ω = −i k²/(σ µ_m), which can be thought of as the diffusion of magnetic field lines. A detailed check of this dynamics has been recently reported in [53].
Furthermore, from standard electrodynamics, we have that ṽ² = 1/(ε_e µ_m), with ε_e, µ_m respectively the electric permittivity and the magnetic permeability. In general, the latter are related to the electric and magnetic susceptibilities via

\chi_{EE} = \epsilon_e - \frac{1}{\lambda} , \qquad \chi_{BB} = \frac{1}{\lambda} - \frac{1}{\mu_m} . \qquad (A.2)
In [53], we found that, at least in the limit of small EM coupling λ/T ≪ 1, χ_EE = 0 to a good approximation. Then, using Eq.(A.2) in such a limit, we immediately find:
\tilde v^2 = 1 - \lambda\,\chi_{BB} \qquad \text{for} \qquad \lambda/T \ll 1 . \qquad (A.3)
Moreover, as shown in [53], for our simple holographic model we have
χ BB = −z h = −3/(4πT ) . (A.4)
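For completeness, the crossover encoded in Eq.(A.1), from the diffusive behavior ω ≈ −ik²/(σµ_m) at long wave-lengths to propagating electromagnetic waves at short ones, can be made explicit with a few lines; the values of (σ, ε_e, µ_m) below are illustrative stand-ins rather than the holographic ones.

```python
# Roots of the telegrapher equation (A.1) for increasing wave-vector.
import numpy as np

sigma, eps_e, mu_m = 1.0, 1.0, 0.9
for k in (0.05, 0.2, 1.0, 5.0):
    # omega (omega + i sigma/eps_e) = k^2/(eps_e mu_m)
    roots = np.roots([1.0, 1j*sigma/eps_e, -k**2/(eps_e*mu_m)])
    print(f"k = {k}:", np.round(roots, 4),
          " diffusive estimate: -i *", round(k**2/(sigma*mu_m), 4))
```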
Figure 1. Left: The typical Mexican-hat potential in the GL phenomenological description of a 2nd-order phase transition. In blue, the fluctuations of the phase of the order parameter, the Nambu-Goldstone mode. In red, the fluctuations of the amplitude of the order parameter, the Higgs mode. Right: The spectrum of low-energy collective modes in a superconductor. The Higgs mode is expected to have an energy gap of the size of the superconducting gap. The NG mode is "eaten" by the gauge field and the photon becomes massive with an energy gap of the order of the plasma frequency.

Figure 2. Order parameter O_2 vs. reduced temperature T/T_c. The critical temperature is T_c/µ = 0.0587. The inset shows the near-critical mean-field behavior: numerical result (solid black), fitting result (4.1) (dashed red).

Figure 3. Left: Total density ρ/µ² (dashed) and superfluid density ρ_s/µ² (solid). The total density ρ is evaluated from Eq.(3.9) while the superfluid density is from the optical conductivity data. Right: σ_0 as a function of the reduced temperature. The inset shows the low temperature behavior: numerical result (solid black), fitting result (4.4) (dashed gray).

Figure 4. The dispersion relation of the lowest collective modes in the transverse sector for different values of the reduced temperature T/T_c = (1, 0.999, 0.998) (red, green, blue). Symbols represent the numerical values and solid lines are fits using Eq.(5.1).

Figure 5. The dynamics of the lowest collective modes in the transverse sector at k = 0 by decreasing the reduced temperature from T/T_c = 1 to 0.3 (in the direction of the arrows). The (red, green, blue) data are the same as in Fig. 4.

Figure 6. Left: The dynamics of the lowest QNMs in the transverse spectrum at k = 0 for T/T_c ∈ [1, 0.3] with λ/T = 0.1 − 0.5 (from red to blue data). The red data corresponds to Fig. 5. Right: The collision frequency, ω_collision, as a function of the EM coupling λ/T. The black solid line is the phenomenological finding in Eq.(5.8).

Figure 7. The temperature dependence of the phenomenological parameters σ̃ and ω̃_A in Eq.(5.4). Dots are evaluated from the numerical fits. Solid lines represent the analytical expression in Eq.(5.9) (left panel) and Eq.(5.11) (right panel). The insets show the data near the critical point, T = T_c. The dashed red line in the right panel is the expression in Eq.(5.12).

Figure 8. The phenomenological parameters appearing in the dispersion relation of the lowest QNMs in the transverse sector at k = 0. Different colors from red to blue correspond to λ/T = 0.1 − 0.5. Top left: the conductivity σ̃ and the expression in Eq.(5.9) (solid lines). Top right: the phenomenological mass ω̃_A together with the plasma frequency value in Eq.(5.11) (solid lines). Bottom left: the behavior of the mass close to the critical point, T ∼ T_c, and the fitting formula in Eq.(5.12) (dashed lines). Bottom right: the phenomenological parameter α as a function of the EM coupling. The dashed line is the fitting formula α/T = 5.5 √(λ/T).

Figure 9. The gauge field mass ω̃_A. Dots are numerically obtained by fitting the dispersion relation. Dashed lines represent the analytical expression in Eq.(5.13). Left: λ/T = 0.1. Right: λ/T = 0.1 − 0.5 (red-blue).

Figure 10. Top panel: Schematic plot of the poles in the longitudinal channel at zero momentum. In the normal phase, the black symbols represent the scalar fluctuations (δσ, δη), while the gray symbol is the damped charge diffusion from (δa_x, δa_t). Below T_c, the scalar sector couples to the gauge sector and three damped poles appear (Ω, ω_(+), ω_(−)). As the temperature is lowered, the ω_(+) pole collides with ω_(−) and generates a couple of complex modes (blue symbols). The other pole Ω becomes more and more overdamped. Bottom panel: the numerical data. Bottom left: the near critical region, T/T_c = 1.001 − 0.999 (red-blue). The ω_(±) poles are represented with circles while the Ω one with stars. The inset shows the behavior of the ω_(+) pole. Bottom right: the collision regime, T/T_c = 0.999 − 0.989 (blue, pink, purple, black). The ω_(±) poles (circles) collide on the real axes, while the Ω pole (star) moves down along the imaginary axes.

Figure 11. Dispersion relation of the low-energy modes in the longitudinal spectrum. From top left to bottom right, the temperature decreases, T/T_c = 1.00141, 0.999438, 0.998946, 0.996497, 0.991633, 0.982048. Each panel is associated with the corresponding imaginary part located below it.

Figure 12. Coefficients appearing in the dispersion relation, Eq.(6.2). Solid black lines are drawn using Eq.(5.9) for σ̃ and Eq.(5.11) for ω̃_A. The red dashed line near T_c is Eq.(5.12). The speed of propagation V and the attenuation constant Γ coincide exactly with those reported for second sound in the holographic superfluid [75].

Figure 13. Coefficients appearing in the dispersion relation (6.2) as a function of the reduced temperature for λ = 0.1, 0.2, 0.3 (red, orange, yellow). Solid lines represent Eq.(5.9) for σ̃ and Eq.(5.11) for ω̃_A. The dashed lines are Eq.(5.12).

Figure 14. The λ dependence of the dissipative coefficient σ̃ and the mass ω̃_A at T/T_c = 0.9. Left: σ̃ vs. λ. The dashed line is the fitting formula σ̃/T = 0.33 λ/T. Right: ω̃_A vs. λ. The dashed line is the fitting formula ω̃_A/T = 1.55 √(λ/T).

Figure 15. The dynamics of the Higgs mode as a function of the reduced temperature. The black dots encode the fluctuations of the amplitude of the order parameter close to the critical point.

Figure 16. The phenomenological parameters of GL theory, (ω̃_H, γ̃): the Higgs frequency ω̃_H and the Higgs attenuation constant γ̃ at λ/T = 0.1.

Figure 18. The ratio ω̃_H/ω̃_A as a function of the reduced temperature for λ/T = (0.1, 0.2, 0.3) (red, orange, yellow). The dashed lines are fits to the function ζ_1 + ζ_2 T/T_c.

Figure 19. The λ-dependence of the energy gaps (ω̃_A, ω̃_H) at T/T_c = 0.15. Left: ω̃_A/T vs. λ/T. The solid line is the plasma frequency value, Eq.(5.11). Right: The ratio ω̃_H/ω̃_A vs. λ/T. The solid line is the fitting curve at large λ: ω̃_H/ω̃_A = (λ/T)^{−0.34}.
Science, ICT & Future Planning (NRF-2021R1A2C1006791) and GIST Research Institute (GRI) grant funded by the GIST in 2022. M.B. acknowledges the support of the Shanghai Municipal Science and Technology Major Project (Grant No.2019SHZDZX01) and the sponsorship from the Yangyang Development Fund. M.B. would like to thank IFT Madrid, NORDITA, GIST and Chulalongkorn University for the warm hospitality during the completion of this work and acknowledges the support of the NORDITA distinguished visitor program and GIST visitor program. H.-S Jeong would like to thank GIST for the warm hospitality during the completion of this work. K.-Y Kim acknowledges the hospitality at APCTP where part of this work was done.
Figure 17. The dispersion relation of the Higgs mode at low temperature: T/T_c = (0.15, 0.21, 0.25) (blue, green, red). Left: Re[ω] vs. k. Right: Im[ω] vs. k. The solid lines are the predictions from GL theory, Eq.(6.7).
In the rest of this manuscript, we will not consider the fluctuations of energy and momentum; therefore, we will not discuss the dynamics of first sound arising from those. 6 See also [78] for the extension to more exotic superfluid phase transitions and [79] for the generalization in the presence of a small explicit breaking of the global U(1) symmetry. 7 We thank Aristomenis Donos for pointing this out to us.
Note that Π µ is the radially conserved bulk current obtained from the Maxwell equation: 0 = ∂z ( √ −g F zµ ) = ∂z Π µ .
Previous studies of the conductivity in presence of Coulomb interactions in holography can be found in[47,48].
The dynamics of the real part of the dispersion relation is reminiscent of what found in[110,111] with the difference that therein no hydrodynamic mode survives.
Let us remind that the probe limit approximation ceases to be trustable at low temperature.
We have explicitly checked for λ/T = 0.1, 0.2, 0.3 .
Acknowledgments
We would like to thank Y. Ahn
[1] S. A. Hartnoll, A. Lucas and S. Sachdev, Holographic quantum matter, [1612.07324].
[2] J. Zaanen, Y.-W. Sun, Y. Liu and K. Schalm, Holographic Duality in Condensed Matter Physics, Cambridge Univ. Press, 2015.
[3] M. Baggioli, Applied Holography: A Practical Mini-Course, SpringerBriefs in Physics, Springer, 2019.
[4] M. Natsuume, AdS/CFT Duality User Guide, vol. 903, 2015.
[5] M. Baggioli and B. Goutéraux, Colloquium: Hydrodynamics and holography of charge density wave phases, Rev. Mod. Phys. 95 (2023) 011001.
[6] R. A. Davison, K. Schalm and J. Zaanen, Holographic duality and the resistivity of strange metals, Phys. Rev. B 89 (2014) 245116, [1311.2451].
[7] J. Zaanen, Planckian dissipation, minimal viscosity and the transport in cuprate strange metals, SciPost Phys. 6 (2019) 061, [1807.10951].
[8] S. Hartnoll, S. Sachdev, T. Takayanagi, X. Chen, E. Silverstein and J. Sonner, Quantum connections, Nature Rev. Phys. 3 (2021) 391-393.
[9] S. A. Hartnoll, C. P. Herzog and G. T. Horowitz, Building a Holographic Superconductor, Phys. Rev. Lett. 101 (2008) 031601, [0803.3295].
[10] S. A. Hartnoll, C. P. Herzog and G. T. Horowitz, Holographic Superconductors, JHEP 0812 (2008) 015, [0810.1563].
[11] R.-G. Cai, L. Li, L.-F. Li and R.-Q. Yang, Introduction to Holographic Superconductor Models, Sci. China Phys. Mech. Astron. 58 (2015) 060401, [1502.00437].
[12] C. P. Herzog, Lectures on Holographic Superfluidity and Superconductivity, J. Phys. A 42 (2009) 343001, [0904.1975].
[13] M. Montull, A. Pomarol and P. J. Silva, The Holographic Superconductor Vortex, Phys. Rev. Lett. 103 (2009) 091601, [0906.2396].
[14] O. Domenech, M. Montull, A. Pomarol, A. Salvio and P. J. Silva, Emergent Gauge Fields in Holographic Superconductors, JHEP 08 (2010) 033, [1005.1776].
[15] K. Maeda, M. Natsuume and T. Okamura, On two pieces of folklore in the AdS/CFT duality, Phys. Rev. D 82 (2010) 046002, [1005.2431].
[16] P. J. Silva, Dynamical gauge fields in holographic superconductors, Fortsch. Phys. 59 (2011) 756-761.
[17] M. Rozali, D. Smyth and E. Sorkin, Holographic Higgs Phases, JHEP 08 (2012) 118, [1202.5271].
[18] X. Gao, M. Kaminski, H.-B. Zeng and H.-Q. Zhang, Non-Equilibrium Field Dynamics of an Honest Holographic Superconductor, JHEP 11 (2012) 112, [1204.3103].
[19] A. Salvio, Holographic Superfluids and Superconductors in Dilaton-Gravity, JHEP 09 (2012) 134, [1207.3800].
. A Salvio, Superconductivity, Holography Superfluidity, 10.1088/1742-6596/442/1/0120401301.0201J. Phys. Conf. Ser. 44212040A. Salvio, Superconductivity, Superfluidity and Holography, J. Phys. Conf. Ser. 442 (2013) 012040, [1301.0201].
Transitions in Dilaton Holography with Global or Local Symmetries. A Salvio, 10.1007/JHEP03(2013)1361302.4898JHEP. 03136A. Salvio, Transitions in Dilaton Holography with Global or Local Symmetries, JHEP 03 (2013) 136, [1302.4898].
Vortices in holographic superfluids and superconductors as conformal defects. O J C Dias, G T Horowitz, N Iqbal, J E Santos, 10.1007/JHEP04(2014)0961311.3673JHEP. 0496O. J. C. Dias, G. T. Horowitz, N. Iqbal and J. E. Santos, Vortices in holographic superfluids and superconductors as conformal defects, JHEP 04 (2014) 096, [1311.3673].
Topological defects as relics of spontaneous symmetry breaking from black hole physics. H.-B Zeng, C.-Y Xia, H.-Q Zhang, 10.1007/JHEP03(2021)1361912.08332JHEP. 03136H.-B. Zeng, C.-Y. Xia and H.-Q. Zhang, Topological defects as relics of spontaneous symmetry breaking from black hole physics, JHEP 03 (2021) 136, [1912.08332].
Universal statistics of vortices in a newborn holographic superconductor: beyond the Kibble-Zurek mechanism. A Campo, F J Gómez-Ruiz, Z.-H Li, C.-Y Xia, H.-B Zeng, H.-Q Zhang, 10.1007/JHEP06(2021)0612101.02171JHEP. 0661A. del Campo, F. J. Gómez-Ruiz, Z.-H. Li, C.-Y. Xia, H.-B. Zeng and H.-Q. Zhang, Universal statistics of vortices in a newborn holographic superconductor: beyond the Kibble-Zurek mechanism, JHEP 06 (2021) 061, [2101.02171].
Holographic topological defects and local gauge symmetry: clusters of strongly coupled equal-sign vortices. Z.-H Li, C.-Y Xia, H.-B Zeng, H.-Q Zhang, 10.1007/JHEP10(2021)1242103.01485JHEP. 10124Z.-H. Li, C.-Y. Xia, H.-B. Zeng and H.-Q. Zhang, Holographic topological defects and local gauge symmetry: clusters of strongly coupled equal-sign vortices, JHEP 10 (2021) 124, [2103.01485].
Holographic Meissner effect. M Natsuume, T Okamura, 10.1103/PhysRevD.106.0860052207.07182Phys. Rev. D. 10686005M. Natsuume and T. Okamura, Holographic Meissner effect, Phys. Rev. D 106 (2022) 086005, [2207.07182].
Inhomogeneous Structures in Holographic Superfluids: II. Vortices. V Keranen, E Keski-Vakkuri, S Nowling, K P Yogendran, 10.1103/PhysRevD.81.126012Phys. Rev. D. 811260120912.4280V. Keranen, E. Keski-Vakkuri, S. Nowling and K. P. Yogendran, Inhomogeneous Structures in Holographic Superfluids: II. Vortices, Phys. Rev. D 81 (2010) 126012, [0912.4280].
Vortex and Droplet Engineering in Holographic Superconductors. T Albash, C V Johnson, 10.1103/PhysRevD.80.126009Phys. Rev. D. 801260090906.1795T. Albash and C. V. Johnson, Vortex and Droplet Engineering in Holographic Superconductors, Phys. Rev. D 80 (2009) 126009, [0906.1795].
Vortex lattice for a holographic superconductor. K Maeda, M Natsuume, T Okamura, 10.1103/PhysRevD.81.026002Phys. Rev. D. 81260020910.4475K. Maeda, M. Natsuume and T. Okamura, Vortex lattice for a holographic superconductor, Phys. Rev. D 81 (2010) 026002, [0910.4475].
Anti-de Sitter space and holography. E Witten, hep-th/9802150Adv. Theor. Math. Phys. 2E. Witten, Anti-de Sitter space and holography, Adv. Theor. Math. Phys. 2 (1998) 253-291, [hep-th/9802150].
Z) action on three-dimensional conformal field theories with Abelian symmetry. E Witten, hep-th/0307041From Fields to Strings: Circumnavigating Theoretical Physics: A Conference in Tribute to Ian Kogan. 7E. Witten, SL(2,Z) action on three-dimensional conformal field theories with Abelian symmetry, in From Fields to Strings: Circumnavigating Theoretical Physics: A Conference in Tribute to Ian Kogan, pp. 1173-1200, 7, 2003. hep-th/0307041.
AdS / CFT correspondence and symmetry breaking. I R Klebanov, E Witten, 10.1016/S0550-3213(99)00387-9hep-th/9905104Nucl. Phys. B. 556I. R. Klebanov and E. Witten, AdS / CFT correspondence and symmetry breaking, Nucl. Phys. B 556 (1999) 89-114, [hep-th/9905104].
SL(2,Z) action on three-dimensional CFTs and holography. R G Leigh, A C Petkou, 10.1088/1126-6708/2003/12/020hep-th/0309177JHEP. 12R. G. Leigh and A. C. Petkou, SL(2,Z) action on three-dimensional CFTs and holography, JHEP 12 (2003) 020, [hep-th/0309177].
A Note on AdS / CFT dual of SL(2,Z) action on 3-D conformal field theories with U(1) symmetry. H.-U Yee, 10.1016/j.physletb.2004.05.082hep-th/0402115Phys. Lett. B. 598H.-U. Yee, A Note on AdS / CFT dual of SL(2,Z) action on 3-D conformal field theories with U(1) symmetry, Phys. Lett. B 598 (2004) 139-148, [hep-th/0402115].
Stability in Gauged Extended Supergravity. P Breitenlohner, D Z Freedman, 10.1016/0003-4916(82)90116-6Annals Phys. 144249P. Breitenlohner and D. Z. Freedman, Stability in Gauged Extended Supergravity, Annals Phys. 144 (1982) 249.
D Marolf, S F Ross, 10.1088/1126-6708/2006/11/085hep-th/0606113Boundary Conditions and New Dualities: Vector Fields in AdS/CFT. 85D. Marolf and S. F. Ross, Boundary Conditions and New Dualities: Vector Fields in AdS/CFT, JHEP 11 (2006) 085, [hep-th/0606113].
W Cottrell, A Hashimoto, A Loveridge, D Pettengill, 1711.01257AdS/CFT with double trace deformations II: Vector Fields. W. Cottrell, A. Hashimoto, A. Loveridge and D. Pettengill, Stability and boundedness in AdS/CFT with double trace deformations II: Vector Fields, 1711.01257.
Holographic Plasmons. U Gran, M Tornsö, T Zingg, 10.1007/JHEP11(2018)1761712.05672JHEP. 11176U. Gran, M. Tornsö and T. Zingg, Holographic Plasmons, JHEP 11 (2018) 176, [1712.05672].
Plasmons in Holographic Graphene. U Gran, M Tornsö, T Zingg, 10.21468/SciPostPhys.8.6.0931804.02284SciPost Phys. 893U. Gran, M. Tornsö and T. Zingg, Plasmons in Holographic Graphene, SciPost Phys. 8 (2020) 093, [1804.02284].
Exotic Holographic Dispersion. U Gran, M Tornsö, T Zingg, 10.1007/JHEP02(2019)0321808.05867JHEP. 0232U. Gran, M. Tornsö and T. Zingg, Exotic Holographic Dispersion, JHEP 02 (2019) 032, [1808.05867].
Holographic Response of Electron Clouds. U Gran, M Tornsö, T Zingg, 10.1007/JHEP03(2019)0191810.11416JHEP. 0319U. Gran, M. Tornsö and T. Zingg, Holographic Response of Electron Clouds, JHEP 03 (2019) 019, [1810.11416].
M. Baggioli, U. Gran, A. J. Alba, M. Torns and T. Zingg, Holographic Plasmon Relaxation with and without Broken Translations, 1905.00804.
Holographic fundamental matter in multilayered media. U Gran, N Jokela, D Musso, A V Ramallo, M Tornsö, 10.1007/JHEP12(2019)0381909.01864JHEP. 1238U. Gran, N. Jokela, D. Musso, A. V. Ramallo and M. Tornsö, Holographic fundamental matter in multilayered media, JHEP 12 (2019) 038, [1909.01864].
M Baggioli, U Gran, M Tornsö, 10.1007/JHEP04(2020)1061912.07321Transverse Collective Modes in Interacting Holographic Plasmas. 106M. Baggioli, U. Gran and M. Tornsö, Transverse Collective Modes in Interacting Holographic Plasmas, JHEP 04 (2020) 106, [1912.07321].
Collective modes of polarizable holographic media in magnetic fields. M Baggioli, U Gran, M Tornsö, 10.1007/JHEP06(2021)0142102.09969JHEP. 0614M. Baggioli, U. Gran and M. Tornsö, Collective modes of polarizable holographic media in magnetic fields, JHEP 06 (2021) 014, [2102.09969].
A. Romero-Bermúdez, Density response of holographic metallic IR fixed points with translational pseudo-spontaneous symmetry breaking, JHEP 07 (2019) 153, [1904.06237].
Screening of Coulomb interactions in Holography. E Mauri, H T C Stoof, 10.1007/JHEP04(2019)0351811.11795JHEP. 0435E. Mauri and H. T. C. Stoof, Screening of Coulomb interactions in Holography, JHEP 04 (2019) 035, [1811.11795].
Anomalous attenuation of plasmons in strange metals and holography. A Romero-Bermúdez, A Krikun, K Schalm, J Zaanen, 10.1103/PhysRevB.99.2351491812.03968Phys. Rev. B. 99235149A. Romero-Bermúdez, A. Krikun, K. Schalm and J. Zaanen, Anomalous attenuation of plasmons in strange metals and holography, Phys. Rev. B 99 (2019) 235149, [1812.03968].
Friedel oscillations and horizon charge in 1D holographic liquids. T Faulkner, N , 10.1007/JHEP07(2013)0601207.4208JHEP. 0760T. Faulkner and N. Iqbal, Friedel oscillations and horizon charge in 1D holographic liquids, JHEP 07 (2013) 060, [1207.4208].
Holographic anyonic superfluidity. N Jokela, G Lifschytz, M Lippert, 10.1007/JHEP10(2013)0141307.6336JHEP. 1014N. Jokela, G. Lifschytz and M. Lippert, Holographic anyonic superfluidity, JHEP 10 (2013) 014, [1307.6336].
Holographic plasma and anyonic fluids. D K Brattan, G Lifschytz, 10.1007/JHEP02(2014)0901310.2610JHEP. 0290D. K. Brattan and G. Lifschytz, Holographic plasma and anyonic fluids, JHEP 02 (2014) 090, [1310.2610].
A strongly coupled anyon material. D K Brattan, 10.1007/JHEP11(2015)2141412.1489JHEP. 11214D. K. Brattan, A strongly coupled anyon material, JHEP 11 (2015) 214, [1412.1489].
Y Ahn, M Baggioli, K.-B Huh, H.-S Jeong, K.-Y Kim, Y.-W Sun, 2211.01760Holography and magnetohydrodynamics with dynamical gauge fields. Y. Ahn, M. Baggioli, K.-B. Huh, H.-S. Jeong, K.-Y. Kim and Y.-W. Sun, Holography and magnetohydrodynamics with dynamical gauge fields, 2211.01760.
Generalized symmetries and 2-groups via electromagnetic duality in AdS/CF T. O Dewolfe, K Higginbotham, 10.1103/PhysRevD.103.026011Phys. Rev. D. 103260112010.06594O. DeWolfe and K. Higginbotham, Generalized symmetries and 2-groups via electromagnetic duality in AdS/CF T , Phys. Rev. D 103 (2021) 026011, [2010.06594].
Generalised global symmetries in holography: magnetohydrodynamic waves in a strongly interacting plasma. S Grozdanov, N Poovuttikul, 10.1007/JHEP04(2019)1411707.04182JHEP. 04141S. Grozdanov and N. Poovuttikul, Generalised global symmetries in holography: magnetohydrodynamic waves in a strongly interacting plasma, JHEP 04 (2019) 141, [1707.04182].
Setting the boundary free in AdS/CFT. G Compere, D Marolf, 10.1088/0264-9381/25/19/195014Class. Quant. Grav. 251950140805.1902G. Compere and D. Marolf, Setting the boundary free in AdS/CFT, Class. Quant. Grav. 25 (2008) 195014, [0805.1902].
Holographic evolution with dynamical boundary gravity. C Ecker, W Van Der Schee, D Mateos, J Casalderrey-Solana, 10.1007/JHEP03(2022)1372109.10355JHEP. 03137C. Ecker, W. van der Schee, D. Mateos and J. Casalderrey-Solana, Holographic evolution with dynamical boundary gravity, JHEP 03 (2022) 137, [2109.10355].
A Ishibashi, K Maeda, T Okamura, 2301.12170Semiclassical Einstein equations from holography and boundary dynamics. A. Ishibashi, K. Maeda and T. Okamura, Semiclassical Einstein equations from holography and boundary dynamics, 2301.12170.
The two-fluid theory and second sound in liquid helium. R J Donnelly, 10.1063/1.3248499Physics Today. 62R. J. Donnelly, The two-fluid theory and second sound in liquid helium, Physics Today 62 (2009) 34-39.
A Larkin, A Varlamov, Theory of Fluctuations in Superconductors. International Series of Monographs on Physics. OxfordA. Larkin and A. Varlamov, Theory of Fluctuations in Superconductors. International Series of Monographs on Physics. OUP Oxford, 2009.
N Kopnin, Theory of Nonequilibrium Superconductivity. International Series of Monographs on Physics. Clarendon PressN. Kopnin, Theory of Nonequilibrium Superconductivity. International Series of Monographs on Physics. Clarendon Press, 2001.
Hydrodynamics of a superfluid condensate. E P Gross, 10.1063/1.1703944Journal of Mathematical Physics. 4E. P. Gross, Hydrodynamics of a superfluid condensate, Journal of Mathematical Physics 4 (1963) 195-207.
Introduction to Superfluidity: Field-theoretical approach and applications. A Schmitt, 1404.1284A. Schmitt, Introduction to Superfluidity: Field-theoretical approach and applications, 1404.1284.
. S J Putterman, Superfluid hydrodynamics. 3S. J. Putterman, Superfluid hydrodynamics, vol. 3. Jan., 1974.
Low-energy effective field theory for finite-temperature relativistic superfluids. A Nicolis, 1108.2513A. Nicolis, Low-energy effective field theory for finite-temperature relativistic superfluids, 1108.2513.
Low-energy quantum effective action for relativistic superfluids. D T Son, hep-ph/0204199D. T. Son, Low-energy quantum effective action for relativistic superfluids, hep-ph/0204199.
A Theory of first order dissipative superfluid dynamics. J Bhattacharya, S Bhattacharyya, S Minwalla, A Yarom, 10.1007/JHEP05(2014)1471105.3733JHEP. 05147J. Bhattacharya, S. Bhattacharyya, S. Minwalla and A. Yarom, A Theory of first order dissipative superfluid dynamics, JHEP 05 (2014) 147, [1105.3733].
Transport in holographic superfluids. C P Herzog, N Lisker, P Surowka, A Yarom, 10.1007/JHEP08(2011)0521101.3330JHEP. 0852C. P. Herzog, N. Lisker, P. Surowka and A. Yarom, Transport in holographic superfluids, JHEP 08 (2011) 052, [1101.3330].
Transport phenomena in helium II. L Tisza, Nature. 141L. Tisza, Transport phenomena in helium II, Nature 141 (1938) 913-913.
Theory of the superfluidity of helium II. L Landau, Physical Review. 60356L. Landau, Theory of the superfluidity of helium II, Physical Review 60 (1941) 356.
Coherent excited states in the theory of superconductivity: Gauge invariance and the meissner effect. P W Anderson, Physical review. 110827P. W. Anderson, Coherent excited states in the theory of superconductivity: Gauge invariance and the meissner effect, Physical review 110 (1958) 827.
Random-phase approximation in the theory of superconductivity. P W Anderson, Physical Review. 1121900P. W. Anderson, Random-phase approximation in the theory of superconductivity, Physical Review 112 (1958) 1900.
Broken symmetries and the masses of gauge bosons. P W Higgs, 10.1103/PhysRevLett.13.508Phys. Rev. Lett. 13P. W. Higgs, Broken symmetries and the masses of gauge bosons, Phys. Rev. Lett. 13 (Oct, 1964) 508-509.
Plasmons, gauge invariance, and mass. P W Anderson, 10.1103/PhysRev.130.439Phys. Rev. 130P. W. Anderson, Plasmons, gauge invariance, and mass, Phys. Rev. 130 (Apr, 1963) 439-442.
I Amado, M Kaminski, K Landsteiner, 10.1088/1126-6708/2009/05/021Hydrodynamics of Holographic Superconductors. 210903.2209I. Amado, M. Kaminski and K. Landsteiner, Hydrodynamics of Holographic Superconductors, JHEP 0905 (2009) 021, [0903.2209].
Holographic Superfluids and the Landau Criterion. I Amado, D Areán, A Jiménez-Alba, K Landsteiner, L Melgar, I Salazar Landea, 10.1007/JHEP02(2014)0631307.8100JHEP. 0263I. Amado, D. Areán, A. Jiménez-Alba, K. Landsteiner, L. Melgar and I. Salazar Landea, Holographic Superfluids and the Landau Criterion, JHEP 02 (2014) 063, [1307.8100].
Holographic Type II Goldstone bosons. I Amado, D Arean, A Jimenez-Alba, K Landsteiner, L Melgar, I S Landea, 10.1007/JHEP07(2013)1081302.5641JHEP. 07108I. Amado, D. Arean, A. Jimenez-Alba, K. Landsteiner, L. Melgar and I. S. Landea, Holographic Type II Goldstone bosons, JHEP 07 (2013) 108, [1302.5641].
Z.-Q. Zhao, X.-K. Zhang and Z.-Y. Nie, Dynamical stability from quasi normal modes in 2nd, 1st and 0th order holographic superfluid phase transition, 2211.14762.
Pseudo-spontaneous U (1) symmetry breaking in hydrodynamics and holography. M Ammon, D Arean, M Baggioli, S Gray, S Grieninger, 10.1007/JHEP03(2022)015JHEP. 03152111.10305M. Ammon, D. Arean, M. Baggioli, S. Gray and S. Grieninger, Pseudo-spontaneous U (1) symmetry breaking in hydrodynamics and holography, JHEP 03 (2022) 015, [2111.10305].
A holographic superfluid symphony. D Arean, M Baggioli, S Grieninger, K Landsteiner, 10.1007/JHEP11(2021)2062107.08802JHEP. 11206D. Arean, M. Baggioli, S. Grieninger and K. Landsteiner, A holographic superfluid symphony, JHEP 11 (2021) 206, [2107.08802].
An Analytic Holographic Superconductor. C P Herzog, 10.1103/PhysRevD.81.1260091003.3278Phys. Rev. D. 81126009C. P. Herzog, An Analytic Holographic Superconductor, Phys. Rev. D 81 (2010) 126009, [1003.3278].
Dissipation in holographic superfluids. A Donos, P Kailidis, C Pantelidou, 10.1007/JHEP09(2021)1342107.03680JHEP. 09134A. Donos, P. Kailidis and C. Pantelidou, Dissipation in holographic superfluids, JHEP 09 (2021) 134, [2107.03680].
Dissipative effects in finite density holographic superfluids. A Donos, P Kailidis, 10.1007/JHEP11(2022)0532209.06893JHEP. 1153A. Donos and P. Kailidis, Dissipative effects in finite density holographic superfluids, JHEP 11 (2022) 053, [2209.06893].
Higgs/amplitude mode dynamics from holography. A Donos, C Pantelidou, 10.1007/JHEP08(2022)2462205.06294JHEP. 08246A. Donos and C. Pantelidou, Higgs/amplitude mode dynamics from holography, JHEP 08 (2022) 246, [2205.06294].
Order parameter fluctuations in the holographic superconductor. N W M Plantz, H T C Stoof, S Vandoren, 10.1088/1361-6455/aa584c1511.05112J. Phys. B. 5064001N. W. M. Plantz, H. T. C. Stoof and S. Vandoren, Order parameter fluctuations in the holographic superconductor, J. Phys. B 50 (2017) 064001, [1511.05112].
Observing the origin of superconductivity in quantum critical metals. J.-H She, B J Overbosch, Y.-W Sun, Y Liu, K Schalm, J A Mydosh, 10.1103/PhysRevB.84.1445271105.5377Phys. Rev. B. 84144527J.-H. She, B. J. Overbosch, Y.-W. Sun, Y. Liu, K. Schalm, J. A. Mydosh et al., Observing the origin of superconductivity in quantum critical metals, Phys. Rev. B 84 (2011) 144527, [1105.5377].
A Donos, P Kailidis, 2210.06513Nearly Critical Holographic Superfluids. A. Donos and P. Kailidis, Nearly Critical Holographic Superfluids, 2210.06513.
Amplitude/higgs modes in condensed matter physics. D Pekker, C Varma, 10.1146/annurev-conmatphys-031214-014350Annual Review of Condensed Matter Physics. 6D. Pekker and C. Varma, Amplitude/higgs modes in condensed matter physics, Annual Review of Condensed Matter Physics 6 (2015) 269-297.
Higgs mode in superconductors. R Shimano, N Tsuji, 10.1146/annurev-conmatphys-031119-050813Annual Review of Condensed Matter Physics. 11R. Shimano and N. Tsuji, Higgs mode in superconductors, Annual Review of Condensed Matter Physics 11 (2020) 103-124.
Holographic Superfluids and the Dynamics of Symmetry Breaking. M J Bhaseen, J P Gauntlett, B D Simons, J Sonner, T Wiseman, 10.1103/PhysRevLett.110.0153011207.4194Phys. Rev. Lett. 11015301M. J. Bhaseen, J. P. Gauntlett, B. D. Simons, J. Sonner and T. Wiseman, Holographic Superfluids and the Dynamics of Symmetry Breaking, Phys. Rev. Lett. 110 (2013) 015301, [1207.4194].
Extended time-dependent ginzburg-landau theory. K V Grigorishin, 10.1007/s10909-021-02580-0Journal of Low Temperature Physics. 203K. V. Grigorishin, Extended time-dependent ginzburg-landau theory, Journal of Low Temperature Physics 203 (May, 2021) 262-308.
On the Theory of superconductivity. V L Ginzburg, L D Landau, 10.1016/B978-0-08-010586-4.50035-3Zh. Eksp. Teor. Fiz. 20V. L. Ginzburg and L. D. Landau, On the Theory of superconductivity, Zh. Eksp. Teor. Fiz. 20 (1950) 1064-1082.
An introduction to the ginzburg-landau theory of phase transitions and nonequilibrium patterns. P Hohenberg, A Krekhov, Physics Reports. 572P. Hohenberg and A. Krekhov, An introduction to the ginzburg-landau theory of phase transitions and nonequilibrium patterns, Physics Reports 572 (2015) 1-42.
M Tinkham, Introduction to Superconductivity. Dover Books on Physics Series. Dover PublicationsM. Tinkham, Introduction to Superconductivity. Dover Books on Physics Series. Dover Publications, 2004.
Broken symmetries, massless particles and gauge fields. P W Higgs, 10.1016/0031-9163(64)91136-9Phys. Lett. 12P. W. Higgs, Broken symmetries, massless particles and gauge fields, Phys. Lett. 12 (1964) 132-133.
Introduction to Electrodynamics. D Griffiths, Pearson EducationD. Griffiths, Introduction to Electrodynamics. Pearson Education, 2014.
Some general theorems relating to vibrations. J W Strutt, http:/arxiv.org/abs/https:/londmathsoc.onlinelibrary.wiley.com/doi/pdf/10.1112/plms/s1-4.1.357s1-4 (1871Proceedings of the London Mathematical Society. J. W. Strutt, Some general theorems relating to vibrations, Proceedings of the London Mathematical Society s1-4 (1871) 357-368, [https://londmathsoc.onlinelibrary.wiley.com/doi/pdf/10.1112/plms/s1-4.1.357].
The world of the complex ginzburg-landau equation. I S Aranson, L Kramer, 10.1103/RevModPhys.74.99Rev. Mod. Phys. 74I. S. Aranson and L. Kramer, The world of the complex ginzburg-landau equation, Rev. Mod. Phys. 74 (Feb, 2002) 99-143.
Theory of dynamic critical phenomena. P C Hohenberg, B I Halperin, 10.1103/RevModPhys.49.435Rev. Mod. Phys. 49P. C. Hohenberg and B. I. Halperin, Theory of dynamic critical phenomena, Rev. Mod. Phys. 49 (Jul, 1977) 435-479.
Critical and near-critical relaxation of holographic superfluids. M Flory, S Grieninger, S Morales-Tejera, 2209.09251M. Flory, S. Grieninger and S. Morales-Tejera, Critical and near-critical relaxation of holographic superfluids, 2209.09251.
Thermalization and prethermalization in the soft-wall AdS/QCD model. X Cao, J Chao, H Liu, D Li, 2204.11604X. Cao, J. Chao, H. Liu and D. Li, Thermalization and prethermalization in the soft-wall AdS/QCD model, 2204.11604.
Universality class of holographic superconductors. K Maeda, M Natsuume, T Okamura, 10.1103/PhysRevD.79.126004Phys. Rev. D. 791260040904.1914K. Maeda, M. Natsuume and T. Okamura, Universality class of holographic superconductors, Phys. Rev. D 79 (2009) 126004, [0904.1914].
Soft pions and transport near the chiral critical point. E Grossi, A Soloviev, D Teaney, F Yan, 10.1103/PhysRevD.104.0340252101.10847Phys. Rev. D. 10434025E. Grossi, A. Soloviev, D. Teaney and F. Yan, Soft pions and transport near the chiral critical point, Phys. Rev. D 104 (2021) 034025, [2101.10847].
Transport and hydrodynamics in the chiral limit. E Grossi, A Soloviev, D Teaney, F Yan, 10.1103/PhysRevD.102.0140422005.02885Phys. Rev. D. 10214042E. Grossi, A. Soloviev, D. Teaney and F. Yan, Transport and hydrodynamics in the chiral limit, Phys. Rev. D 102 (2020) 014042, [2005.02885].
Hydrodynamics with parametric slowing down and fluctuations near the critical point. M Stephanov, Y Yin, 10.1103/PhysRevD.98.0360061712.10305Phys. Rev. D. 9836006M. Stephanov and Y. Yin, Hydrodynamics with parametric slowing down and fluctuations near the critical point, Phys. Rev. D 98 (2018) 036006, [1712.10305].
Holographic Operator Mixing and Quasinormal Modes on the Brane. M Kaminski, K Landsteiner, J Mas, J P Shock, J Tarrio, 10.1007/JHEP02(2010)021JHEP. 1002210911.3610M. Kaminski, K. Landsteiner, J. Mas, J. P. Shock and J. Tarrio, Holographic Operator Mixing and Quasinormal Modes on the Brane, JHEP 1002 (2010) 021, [0911.3610].
Lectures on holographic methods for condensed matter physics. S A Hartnoll, 10.1088/0264-9381/26/22/224002Class.Quant.Grav. 262240020903.3246S. A. Hartnoll, Lectures on holographic methods for condensed matter physics, Class.Quant.Grav. 26 (2009) 224002, [0903.3246].
Relativistic magnetohydrodynamics. J Hernandez, P Kovtun, 10.1007/JHEP05(2017)0011703.08757JHEP. 051J. Hernandez and P. Kovtun, Relativistic magnetohydrodynamics, JHEP 05 (2017) 001, [1703.08757].
Gapped momentum states. M Baggioli, V V Brazhkin, K Trachenko, M Vasin, 10.1016/j.physrep.2020.04.0021904.01419Phys. Rept. 865M. Baggioli, V. V. Brazhkin, K. Trachenko and M. Vasin, Gapped momentum states, Phys. Rept. 865 (2020) 1-44, [1904.01419].
Maxwell interpolation and close similarities between liquids and holographic models. M Baggioli, K Trachenko, 10.1103/PhysRevD.99.1060021808.05391Phys. Rev. D. 99106002M. Baggioli and K. Trachenko, Maxwell interpolation and close similarities between liquids and holographic models, Phys. Rev. D 99 (2019) 106002, [1808.05391].
Low frequency propagating shear waves in holographic liquids. M Baggioli, K Trachenko, 10.1007/JHEP03(2019)0931807.10530JHEP. 0393M. Baggioli and K. Trachenko, Low frequency propagating shear waves in holographic liquids, JHEP 03 (2019) 093, [1807.10530].
Amplitude collective modes in superconductors and their coupling to charge-density waves. P B Littlewood, C M Varma, 10.1103/PhysRevB.26.4883Phys. Rev. B. 26P. B. Littlewood and C. M. Varma, Amplitude collective modes in superconductors and their coupling to charge-density waves, Phys. Rev. B 26 (Nov, 1982) 4883-4893.
Holographic superconductivity of a critical Fermi surface. J Schmalian, 2209.00474J. Schmalian, Holographic superconductivity of a critical Fermi surface, 2209.00474.
Quantum critical Eliashberg theory, the SYK superconductor and their holographic duals. G A Inkof, K Schalm, J Schmalian, 2108.11392G. A. Inkof, K. Schalm and J. Schmalian, Quantum critical Eliashberg theory, the SYK superconductor and their holographic duals, 2108.11392.
Coupling of higgs and leggett modes in non-equilibrium superconductors. H Krull, N Bittner, G S Uhrig, D Manske, A P Schnyder, 10.1038/ncomms11921Nature Communications. 711921H. Krull, N. Bittner, G. S. Uhrig, D. Manske and A. P. Schnyder, Coupling of higgs and leggett modes in non-equilibrium superconductors, Nature Communications 7 (Jun, 2016) 11921.
A J Leggett, 10.1143/PTP.36.901Number-Phase Fluctuations in Two-Band Superconductors. 36A. J. Leggett, Number-Phase Fluctuations in Two-Band Superconductors, Progress of Theoretical Physics 36 (11, 1966) 901-930.
A theoretical description of the new phases of liquid 3 He. A J Leggett, 10.1103/RevModPhys.47.331Rev. Mod. Phys. 47A. J. Leggett, A theoretical description of the new phases of liquid 3 He, Rev. Mod. Phys. 47 (Apr, 1975) 331-414.
The higgs mode in disordered superconductors close to a quantum phase transition. D Sherman, U S Pracht, B Gorshunov, S Poran, J Jesudasan, M Chand, 10.1038/nphys3227Nature Physics. 11D. Sherman, U. S. Pracht, B. Gorshunov, S. Poran, J. Jesudasan, M. Chand et al., The higgs mode in disordered superconductors close to a quantum phase transition, Nature Physics 11 (Feb, 2015) 188-192.
The 'higgs' amplitude mode at the two-dimensional superfluid/mott insulator transition. M Endres, T Fukuhara, D Pekker, M Cheneau, P Schauβ, C Gross, 10.1038/nature11255Nature. 487M. Endres, T. Fukuhara, D. Pekker, M. Cheneau, P. Schauβ, C. Gross et al., The 'higgs' amplitude mode at the two-dimensional superfluid/mott insulator transition, Nature 487 (Jul, 2012) 454-458.
Universal relaxational dynamics near two-dimensional quantum critical points. S Sachdev, 10.1103/PhysRevB.59.14054Phys. Rev. B. 59S. Sachdev, Universal relaxational dynamics near two-dimensional quantum critical points, Phys. Rev. B 59 (Jun, 1999) 14054-14073.
Anomalous fluctuations in phases with a broken continuous symmetry. W Zwerger, 10.1103/PhysRevLett.92.027203Phys. Rev. Lett. 9227203W. Zwerger, Anomalous fluctuations in phases with a broken continuous symmetry, Phys. Rev. Lett. 92 (Jan, 2004) 027203.
Visibility of the amplitude (higgs) mode in condensed matter. D Podolsky, A Auerbach, D P Arovas, 10.1103/PhysRevB.84.174522Phys. Rev. B. 84174522D. Podolsky, A. Auerbach and D. P. Arovas, Visibility of the amplitude (higgs) mode in condensed matter, Phys. Rev. B 84 (Nov, 2011) 174522.
Higgs mode in a two-dimensional superfluid. L Pollet, N Prokof'ev, 10.1103/PhysRevLett.109.010401Phys. Rev. Lett. 10910401L. Pollet and N. Prokof'ev, Higgs mode in a two-dimensional superfluid, Phys. Rev. Lett. 109 (Jul, 2012) 010401.
Spectral functions of the higgs mode near two-dimensional quantum critical points. D Podolsky, S Sachdev, 10.1103/PhysRevB.86.054508Phys. Rev. B. 8654508D. Podolsky and S. Sachdev, Spectral functions of the higgs mode near two-dimensional quantum critical points, Phys. Rev. B 86 (Aug, 2012) 054508.
Amplitude higgs mode and admittance in superconductors with a moving condensate. A Moor, A F Volkov, K B Efetov, 10.1103/PhysRevLett.118.047001Phys. Rev. Lett. 11847001A. Moor, A. F. Volkov and K. B. Efetov, Amplitude higgs mode and admittance in superconductors with a moving condensate, Phys. Rev. Lett. 118 (Jan, 2017) 047001.
HOLOMORPHIC TRIANGLE INVARIANTS AND THE TOPOLOGY OF SYMPLECTIC FOUR-MANIFOLDS

Peter Ozsváth
Zoltán Szabó

8 Jan 2002
arXiv:math/0201049
doi: 10.1215/s0012-7094-04-12111-6
https://export.arxiv.org/pdf/math/0201049v1.pdf

This article analyzes the interplay between symplectic geometry in dimension four and the invariants for smooth four-manifolds constructed using holomorphic triangles introduced in [18]. Specifically, we establish a non-vanishing result for the invariants of symplectic four-manifolds, which leads to new proofs of the indecomposability theorem for symplectic four-manifolds and the symplectic Thom conjecture. As a new application, we generalize the indecomposability theorem to splittings of four-manifolds along a certain class of three-manifolds obtained by plumbings of spheres. This leads to restrictions on the topology of Stein fillings of such three-manifolds.
1. Introduction
In [18], we constructed an invariant for smooth, closed four-manifolds (using holomorphic triangles, and the Floer homology theories defined in [17] and [16]). The aim of the present article is to investigate this invariant in the case where X is a closed, symplectic four-manifold. Our first result is the following:

Theorem 1.1. If $(X,\omega)$ is a closed, symplectic manifold with $b_2^+(X) > 1$, then for the canonical Spin$^c$ structure $k$, we have that
$$\Phi_{X,k} = \pm 1.$$
Moreover, if $s \in \mathrm{Spin}^c(X)$ is any Spin$^c$ structure for which $\Phi_{X,s} \not\equiv 0$, then we have the inequality
$$\langle c_1(k), \omega\rangle \le \langle c_1(s), \omega\rangle,$$
with equality if and only if $k = s$.
The above can be seen as a direct analogue of a theorem of Taubes concerning the Seiberg-Witten invariants for symplectic manifolds, see [22] and [23]. However, the proof (given in Section 5) is quite different in flavor. While Taubes' theorem uses the interplay of the symplectic form with the Seiberg-Witten equations, our approach uses the topology of Lefschetz fibrations, together with general properties of HF + . As such, our proof relies on a celebrated result of Donaldson [4], which constructs Lefschetz pencils on symplectic manifolds, see also [1] and [21].
Combined with the general properties of Φ (see [18]), the above non-vanishing theorem has a number of consequences.
1.1. New proofs of known results. Theorem 1.1 can be used to reprove the indecomposability theorem for symplectic four-manifolds, a theorem whose Kähler version was established by Donaldson using his polynomial invariants [3], and whose symplectic version was established by Taubes using Seiberg-Witten invariants [22]:

Corollary 1.2. (Donaldson: Kähler case; Taubes: symplectic case) If $(X,\omega)$ is a closed symplectic four-manifold, then it admits no smooth decomposition as a connected sum $X = X_1 \# X_2$ into two pieces with $b_2^+(X_1), b_2^+(X_2) > 0$.
Proof. This follows immediately from the non-vanishing result in Theorem 1.1, together with the vanishing result for Φ for a connected sum, Theorem 1.3 of [18] (which in turn follows easily from the definition of Φ).
In the course of proving Theorem 1.1, we establish a certain "adjunction relation", which can be seen as an analogue of an earlier adjunction relation from Seiberg-Witten theory (see [7] and [19]). Together with Theorem 1.1, this relation gives a new proof of the symplectic Thom conjecture. Note that this question has a long history in gauge theory. Various versions were proved in [12], [11], [14], and the general case (which we reprove here) is contained in [19].

Theorem 1.3. If $(X,\omega)$ is a symplectic four-manifold and $\Sigma\subset X$ is an embedded, symplectic submanifold, then $\Sigma$ is genus-minimizing in its homology class.
1.2. Generalized indecomposability. We will generalize the indecomposability theorem for symplectic four-manifolds (Corollary 1.2) to a large class of plumbed three-manifolds, in place of $S^3$.
By a weighted graph we mean a graph G, equipped with an integer-valued function m on the vertices of G. Recall that for each weighted graph, there is a uniquely associated three-manifold Y(G, m), which is the boundary of the associated plumbing of disk bundles over spheres (the integer multiplicities here record the Euler numbers of the disk bundles). The degree of a vertex v in a graph G, denoted d(v), is the number of edges which contain the given vertex.

Theorem 1.4. Let Y = Y(G, m) be the three-manifold associated to a weighted graph (G, m) with the following two properties:
• G is a disjoint union of trees;
• at each vertex v in G, we have that
$$m(v) \ge d(v). \tag{1}$$
Then no closed, symplectic four-manifold $(X,\omega)$ can be decomposed along Y as a union $X = X_1 \cup_Y X_2$ into two pieces with $b_2^+(X_1) > 0$ and $b_2^+(X_2) > 0$.
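For concreteness, here are two simple families of weighted graphs satisfying these hypotheses (our own illustrative examples, using only the standard description of the boundaries of disk bundles over the sphere; they are not taken from the text):
$$\text{(i) a single vertex } v,\ d(v)=0,\ m(v)=m\ge 0:\qquad Y(G,m)\cong \begin{cases} S^2\times S^1 & \text{if } m=0,\\ L(m,1)\ \text{(up to orientation)} & \text{if } m>0; \end{cases}$$
$$\text{(ii) a linear chain } v_1 - v_2 - \cdots - v_k \text{ with every } m(v_i)\ge 2:\qquad d(v_i)\le 2\le m(v_i),\ \text{so Inequality (1) holds at each vertex.}$$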
Note that in the special cases where Y is $S^2\times S^1$ or a lens space, the above theorem was known using Seiberg-Witten theory.

Corollary 1.5. Let G be a weighted graph satisfying the hypothesis of Theorem 1.4. If X is any Stein four-manifold with $\partial X = \pm Y(G)$, then $b_2^+(X) = 0$.
Proof. According to [13], any such Stein manifold W can always be embedded in a surface of general type X′, so that $b_2^+(X' - W) > 0$. Thus, the corollary follows from Theorem 1.4.
Note that $-Y(G)$ always admits a Stein filling with $b_2^+ = 0$, using a theorem of Eliashberg [6], see also [9]. Theorem 1.4 follows from Theorem 1.1, coupled with a vanishing theorem for the invariants of four-manifolds admitting a decomposition along Y(G, m). In turn, this vanishing theorem follows from a Floer homology calculation for plumbings along graphs which satisfy the hypotheses of Theorem 1.4. Of course, it is interesting to consider plumbing diagrams which do not satisfy Inequality (1). For this more general case, one does not expect such a strong vanishing theorem: for instance, any Seifert fibered space with $b_1(Y) = 0$ can be obtained as a plumbing along a tree. We return to the general case of three-manifolds obtained as plumbings along trees in a future paper, [20].
1.3. Organization. This paper is organized as follows. In Section 2 we rapidly review some of the basic notions used throughout this paper, specifically regarding Lefschetz fibrations. We also extend the four-manifold invariant Φ defined in [18] to the case where the four-manifold X has $b_2^+(X) = 1$. In Section 3, we derive the adjunction relation Theorem 3.1 which is used later in the proofs of Theorems 1.1 and 1.3. In Section 4, we calculate Φ for the K3 surface. In Section 5, we prove Theorem 1.1, along with an auxiliary non-vanishing result for the Floer homology groups of a three-manifold which fibers over the circle. One ingredient in this proof is the K3 calculation in the previous section. In Section 6, we deduce Theorem 1.3 from Theorems 1.1 and 3.1. In Section 7, we provide the Floer homology calculations which lead to Theorem 1.4. This paper, of course, is built on the theory developed in [17], [16], and [18], and it is written assuming familiarity with those papers. Important properties of the four-dimensional invariant Φ (which will be used repeatedly here) are summarized in Section 3 of [18]. Moreover, at two important points in the present paper (when calculating the invariant for the K3 surface, and when finding examples of three-manifolds with non-trivial Floer homology which fiber over the circle) we rely on some of the calculations of Floer homology groups given in [15], see especially Section 8 of [15].
1.4. Further remarks. For the purposes of proving Theorem 1.3, we extend the invariant Φ to four-manifolds with $b_2^+(X) = 1$. As one expects from the analogy with gauge theory, the invariant in that case has additional structure. For our purposes, it suffices to construct Φ as the invariant of a four-manifold equipped with a line L inside $H_2(X;\mathbb{Q})$ consisting of vectors with square zero. This line corresponds to a choice of a "chamber at infinity" (compare [2]). We hope to return to this topic in a future paper.
The pseudo-holomorphic triangles in the g-fold symmetric product of the Heegaard surface implicit in the statement of Theorem 1.1 naturally give rise to a locus inside X. It is quite interesting to compare this object with the pseudo-holomorphic curve constructed by Taubes in [24]. This may also provide a link with the work of Donaldson and Smith, see [5].
2. Preliminaries
We collect here some of the preliminaries for the proof of Theorem 1.1. In Subsection 2.1, we review some standard properties of Lefschetz fibrations, mainly to set up the terminology which will be used later. For a thorough discussion of this topic, we refer the reader to [9]. We then return to some properties of HF ± , building on the results from [18].
2.1. Lefschetz fibrations. Let C be an oriented two-manifold (possibly with boundary). A Lefschetz fibration over C is a smooth four-manifold W and a map $\pi\colon W\longrightarrow C$ with finitely many critical points, each of which admits an orientation-preserving chart modeled on $(w,z)\in\mathbb{C}^2$, where the map π is modeled on the map $\mathbb{C}^2\longrightarrow\mathbb{C}$ given by $(w,z)\mapsto w^2+z^2$. Moreover, we will always assume that any two critical points map to different values under π.
If π : W −→ C has no critical points, then the fibration endows W with a canonical almost-complex structure, characterized by the property that the fibers of π are Jholomorphic. Since a Spin c structure over a four-manifold is specified by an almostcomplex structure in the complement of finitely many points, a Lefschetz fibration endows W with a canonical Spin c structure, which we denote by k. We adopt here the conventions of [22]: the first Chern class of the canonical Spin c structure agrees with the first Chern class of the complex tangent bundle (on the locus where the latter is defined).
A Lefschetz fibration is said to be relatively minimal if none of the fibers of π contains exceptional spheres -i.e. spheres whose self-intersection number is −1.
Lefschetz fibrations $\pi\colon W\longrightarrow D$ over the disk D (with n critical points) can be specified by an ordered n-tuple of simple, embedded curves $\tau_1,\ldots,\tau_n$ in F. The space W then has the homotopy type of the two-complex obtained by attaching disks to F along these curves. Homologies between the $[\tau_i]$ give rise to homology classes in W. More precisely, we can identify
$$H_2(W;\mathbb{Z}) \cong \mathbb{Z} \oplus \mathrm{Ker}\big(\mathbb{Z}^n \longrightarrow H_1(F;\mathbb{Z})\big),$$
where the first $\mathbb{Z}$ factor is generated by the homology class of the fiber F, and the map $\mathbb{Z}^n\longrightarrow H_1(F;\mathbb{Z})$ is the map generated by taking multiples of the homology classes of $[\tau_1],\ldots,[\tau_n]$ in $H_1(F;\mathbb{Z})$.
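As a small special case of this identification (our own remark, simply spelling out the formula above): if the vanishing cycles are homologically independent, the kernel summand vanishes and the fiber class generates all of $H_2$:
$$[\tau_1],\ldots,[\tau_n]\ \text{linearly independent in } H_1(F;\mathbb{Z})\;\Longrightarrow\;\mathrm{Ker}\big(\mathbb{Z}^n\to H_1(F;\mathbb{Z})\big)=0\;\Longrightarrow\; H_2(W;\mathbb{Z})\cong\mathbb{Z}\cdot[F].$$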
Relative minimality in this case is equivalent to the condition that none of these distinguished curves in F bound disks in F.

Lemma 2.1. Suppose that $P\subset F$ is a two-dimensional manifold-with-boundary whose boundary is some collection of curves among the $\{\tau_1,\ldots,\tau_n\}$ (each appearing with multiplicity one). Let $\widehat P$ denote the closed surface in W obtained by attaching copies of the vanishing disks to P along its boundary curves. Then,
$$g(\widehat P) = g(P), \qquad \widehat P\cdot\widehat P = -\,(\#\text{ of boundary components of } P), \qquad \langle c_1(k),[\widehat P]\rangle + \widehat P\cdot\widehat P = 2 - 2g(\widehat P).$$

Proof. The equality on the genus is obvious. The self-intersection number of $\widehat P$ follows from the fact that the vanishing cycles are finished off with disks with framing −1. The final equation is a local calculation, in view of the fact that the determinant bundle of the canonical Spin$^c$ structure is identified, in the complement of the singular locus, with the bundle of fiber-wise tangent vectors.
A Lefschetz fibration over a disk bounds a three-manifold which is a surface bundle over the circle. Such a surface bundle is uniquely determined by the mapping class of its monodromy (a mapping class of a two-manifold is an orientation-preserving diffeomorphism, modulo isotopy). Recall that a (right-handed) Dehn twist of the annulus (using the conventions of [9]) is a diffeomorphism Ψ of $[0,1]\times S^1$ which fixes the boundary pointwise, and satisfies the additional property that, for an arc $[0,1]\times\{x\}$, the intersection number with its image is
$$\#\big([0,1]\times\{x\}\,\cap\,\Psi([0,1]\times\{x\})\big) = -1.$$
More generally, a (right-handed) Dehn twist about a curve $\tau\subset F$ is a self-diffeomorphism $D_\tau$ of F whose restriction to some annular neighborhood of τ is a right-handed Dehn twist of the annulus, and which fixes all points in the complement in F of the annular neighborhood. If the Lefschetz fibration has a unique critical point, then its monodromy is a Dehn twist about some curve τ in the fiber F. More generally, if the fibration has critical values $\{x_1,\ldots,x_n\}$, then we can find the tuple of curves $(\tau_1,\ldots,\tau_n)$ by embedding a bouquet of n circles in $D - \{x_1,\ldots,x_n\}$, so that the winding number of the i-th circle around $x_j$ is $\delta_{i,j}$. Then, the monodromy about the i-th circle is a Dehn twist about $\tau_i$. Thus, the monodromy map around the boundary of the disk is given as the product of Dehn
twists $D_{\tau_1}\circ\cdots\circ D_{\tau_n}$.
Note that the curves $(\tau_1,\ldots,\tau_n)$ obtained from a Lefschetz fibration as above depend on the embedding of the bouquet of circles. By changing the homotopy classes of the embedded circles, we can vary the curves $(\tau_1,\ldots,\tau_n)$ by Hurwitz moves, moves which carry the tuple $(\tau_1,\ldots,\tau_i,\tau_{i+1},\ldots,\tau_n)$ to $(\tau_1,\ldots,\tau_{i+1}, D_{\tau_{i+1}}(\tau_i),\ldots,\tau_n)$.
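As a quick consistency check (our own remark, using only the standard conjugation identity for Dehn twists), a Hurwitz move does not change the total monodromy, up to the convention chosen for the order of composition:
$$D_{\Psi(\tau)} = \Psi\circ D_\tau\circ\Psi^{-1} \quad\Longrightarrow\quad D_{D_{\tau_{i+1}}(\tau_i)}\circ D_{\tau_{i+1}} \;=\; D_{\tau_{i+1}}\circ D_{\tau_i}\circ D_{\tau_{i+1}}^{-1}\circ D_{\tau_{i+1}} \;=\; D_{\tau_{i+1}}\circ D_{\tau_i}.$$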
It is well-known that any orientation-preserving automorphism of F arises as the boundary monodromy of a Lefschetz fibration over the disk. Indeed, we find it convenient to formulate this fact as follows:
Theorem 2.2. (see [10]) The mapping class group is generated as a monoid by Dehn twists along finitely many non-separating curves. Indeed, we can choose the generating set $\{\tau_1,\ldots,\tau_m\}$ so that their homology classes span $H_1(\Sigma;\mathbb{Z})$, all homological relations between the curves are generated (over $\mathbb{Z}$) by special relations in which the homology classes of the $[\tau_i]$ appear with multiplicities zero or one, and the curves which appear with non-zero multiplicities in these relations can be chosen to be disjoint from one another.
Proof.
It is a theorem of Humphries (see [10]) that the mapping class group is generated (as a group) by Dehn twists along the 2g+1 curves $\{\alpha_1,\ldots,\alpha_g,\beta_1,\ldots,\beta_g,\delta\}$ which are pictured in Figure 1. Now, it is easy to see that if we include in addition the curve ε, then we can express the inverses of Dehn twists along all of the $\alpha_i$ and $\beta_j$ as positive products of Dehn twists along copies of all the $\alpha_i$, $\beta_j$, and ε. This can be seen, for example, from the identity
$$1 = \Big(\prod_{i=1}^{g} D_{\alpha_i}\cdot D_{\beta_i}\;\cdot\; D_{\epsilon}^{2}\;\cdot\;\prod_{i=1}^{g} D_{\beta_{g-i+1}}\cdot D_{\alpha_{g-i+1}}\Big)^{4},$$
which in turn can be obtained by exhibiting a Lefschetz fibration over the two-sphere whose monodromy representation is given by the above curves. (That Lefschetz fibration is obtained by viewing the elliptic surface E(2g) as a genus 2g fibration over the two-sphere; see Chapter 8 of [9] for an extensive discussion.) It remains to capture $\delta^{-1}$. To this end, we observe that F has a rotational symmetry $\Psi\colon F\longrightarrow F$ with the property that we can introduce a new curve $\alpha_{g+1}$ so that for $i=1,\ldots,g$, $\Psi(\beta_i)=\beta_j$ where $j\equiv i+1 \pmod g$; for $i=2,\ldots,g$, $\Psi(\alpha_i)=\alpha_{i+1}$; $\Psi(\alpha_{g+1})=\alpha_2$; $\Psi(\epsilon)=\alpha_1$; and finally $\Psi(\alpha_1)=\delta$. It is now clear that the mapping class group is generated as a monoid by Dehn twists about the 2g+3 curves $\{\alpha_1,\ldots,\alpha_{g+1},\beta_1,\ldots,\beta_g,\delta,\epsilon\}$. For homological relations between these curves, observe that the homology classes of the $\{\alpha_1,\ldots,\alpha_g,\beta_1,\ldots,\beta_g\}$ span $H_1(\Sigma;\mathbb{Z})$. It follows that the following three relations span all relations:

Recall that a Spin$^c$ structure over a three-manifold Y is a suitable equivalence class of nowhere-vanishing vector fields over Y. A three-manifold which fibers over the circle has a canonical Spin$^c$ structure, induced by a vector field which is everywhere transverse to the fibers. When Y bounds a Lefschetz fibration over a disk, this Spin$^c$ structure is the restriction of the canonical Spin$^c$ structure of the Lefschetz fibration.
2.2. Symplectic manifolds and Lefschetz fibrations.
A symplectic structure on a four-manifold $(X,\omega)$ gives the manifold an isotopy class of almost-complex structures, and hence a canonical Spin$^c$ structure. Symplectic manifolds can be blown up, to construct a new four-manifold $\widetilde X$, which is diffeomorphic to the connected sum of X with the complex projective plane given the opposite of its complex orientation. Symplectically, $\widetilde X$ is obtained by gluing the complement of a ball in X to a neighborhood of a symplectic two-sphere E with self-intersection number −1. Note that the canonical Spin$^c$ structure $\widetilde k$ is the Spin$^c$ structure which agrees with k in the complement of E, and which satisfies
$$\langle c_1(\widetilde k),[E]\rangle = +1.$$
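This evaluation is consistent with the usual blow-up formula for the canonical class (a standard fact, recorded here only as a cross-check and not taken from the text):
$$c_1(\widetilde k) = c_1(k) - \mathrm{PD}[E] \quad\Longrightarrow\quad \langle c_1(\widetilde k),[E]\rangle = -\,[E]\cdot[E] = +1,$$
since [E] maps to zero in $H_2(X)$, so that $\langle c_1(k),[E]\rangle = 0$, while $[E]\cdot[E] = -1$.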
In [4], Donaldson showed that if $(X,\omega)$ is a symplectic four-manifold, then after blowing up X sufficiently many times, one obtains a new symplectic four-manifold $(\widetilde X,\widetilde\omega)$ which admits a Lefschetz fibration
$$\pi\colon \widetilde X\longrightarrow S^2.$$
In fact, the fibers of π are symplectic, and hence the canonical Spin$^c$ structure of the symplectic form agrees with the canonical class of the Lefschetz fibration in the sense of Subsection 2.1.
2.3. Preliminaries on $HF^+$. Let t be a Spin$^c$ structure on an oriented three-manifold Y. If $c_1(t)$ is a torsion class, we simply call t a torsion Spin$^c$ structure. The divisibility of a Spin$^c$ structure t is the quantity defined by
$$d(t) = \gcd_{\xi\in H^1(Y;\mathbb{Z})} \langle c_1(t)\cup\xi,\,[Y]\rangle.$$
Lemma 2.3. Let Y be a three-manifold equipped with a non-torsion Spin$^c$ structure t, and let $d = d(t)$ denote its divisibility. Then
$$(1 - U^{d/2})\cdot HF^\infty(Y,t) = 0.$$
Proof. This is an easy consequence of the material in Section 11 of [16]. Specifically, it is shown there (Theorem 11.3) that the twisted version of $HF^\infty$, $\underline{HF}^\infty(Y,t)$, is a free $\mathbb{Z}[U,U^{-1}]$-module, endowed with the $\mathbb{Z}[H^1(Y;\mathbb{Z})]$-action where $e^h$ (for $h\in H^1(Y;\mathbb{Z})$) acts as multiplication by $U^{\langle h\cup c_1(t),[Y]\rangle/2}$. There is a universal coefficients spectral sequence converging to the untwisted version $HF^\infty(Y,t)$ (as a $\mathbb{Z}[U,U^{-1}]$-module), whose $E_2$ term is given, over the ring $A = \mathbb{Z}[U,U^{-1}]\otimes_{\mathbb{Z}}\mathbb{Z}[H^1(Y;\mathbb{Z})]$, by
$$\mathrm{Tor}^i_{A}\big(\underline{HF}^\infty_j(Y,t),\,\mathbb{Z}[U,U^{-1}]\big),$$
where here $\mathbb{Z}[U,U^{-1}]$ is given a trivial action by $\mathbb{Z}[H^1(Y;\mathbb{Z})]$. Observe that we have a free resolution of $\underline{HF}^\infty_j(Y,t)$ as a module over A, given by
$$\bigoplus_{i=1}^{b_1(Y)} A \xrightarrow{\;e^{h_i} - U^{n_i/2}\;} A,$$
where $\{h_i\}$ is a basis for $H^1(Y;\mathbb{Z})$, and $n_i = \langle c_1(t)\cup h_i,[Y]\rangle$. So, the $E_2$ term of the above sequence is simply calculated by the homology of
$$\bigoplus_{i=1}^{b_1(Y)} \mathbb{Z}[U,U^{-1}] \xrightarrow{\;1 - U^{n_i/2}\;} \mathbb{Z}[U,U^{-1}].$$
Bearing in mind that
$$\frac{\mathbb{Z}[U]}{U^a - 1}\otimes_{\mathbb{Z}[U,U^{-1}]}\frac{\mathbb{Z}[U]}{U^b - 1} \;\cong\; \mathbb{Z}[U]/(U^c - 1) \;\cong\; \mathrm{Tor}^1_{\mathbb{Z}[U,U^{-1}]}\Big(\frac{\mathbb{Z}[U]}{U^a - 1},\,\frac{\mathbb{Z}[U]}{U^b - 1}\Big)$$
(and all higher $\mathrm{Tor}^i$ vanish), where here $c = \gcd(a,b)$, it follows easily that $U^{d/2} - 1$ annihilates this $E_2$ term (in view of the fact that $d/2$ is the greatest common divisor of the integers $\langle c_1(t)\cup h_i,[Y]\rangle/2$ for $i = 1,\ldots,b_1(Y)$), and hence it also annihilates $HF^\infty(Y,t)$ with untwisted coefficients.
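As a simple illustration of the divisibility and of Lemma 2.3 (our own example, not from the text), consider $Y = S^2\times S^1$. Any Spin$^c$ structure $t_m$ on Y has $c_1(t_m)$ equal to an even multiple $2m$ of the generator of $H^2(S^2\times S^1;\mathbb{Z})\cong\mathbb{Z}$, since $c_1$ is characteristic and $S^2\times S^1$ is spin. Taking ξ to be a generator of $H^1(S^2\times S^1;\mathbb{Z})\cong\mathbb{Z}$,
$$d(t_m) = \big|\langle c_1(t_m)\cup\xi,\,[S^2\times S^1]\rangle\big| = 2|m|, \qquad\text{so Lemma 2.3 gives}\qquad (1 - U^{|m|})\cdot HF^\infty(S^2\times S^1, t_m) = 0 \quad (m\neq 0).$$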
Let Y be a closed, oriented three-manifold. It follows from Lemma 2.3 and the finiteness of $HF_{\mathrm{red}}(Y)$ that there is a sufficiently large integer k with the following property: if t is a non-torsion Spin$^c$ structure with divisibility d, then
$$(1 - U^{dk/2})\colon HF^-(Y,t) \longrightarrow HF^-_{\mathrm{red}}(Y,t)$$
defines a projection map of $HF^-(Y,t)$ onto $HF^-_{\mathrm{red}}(Y,t)$. In fact, by composing with the inverse of the coboundary map $\tau\colon HF^+_{\mathrm{red}}(Y,t)\longrightarrow HF^-_{\mathrm{red}}(Y,t)$, this gives a map
$$\Pi^{\mathrm{red}}_Y\colon HF^-(Y,t)\longrightarrow HF^+_{\mathrm{red}}(Y,t).$$
Using a decomposition of W along such a three-manifold N (and using a Spin$^c$ structure s over W whose restriction to N is non-torsion) is analogous to the "admissible cuts" of [18]. Indeed, the comparison with the mixed invariants defined there is given by the following:
Proposition 2.4. Suppose that W is a cobordism from $Y_1$ to $Y_2$ with $b_2^+(W) > 1$, which is separated by a three-manifold N into a pair of cobordisms $W = W_1\cup_N W_2$. Given any pair of Spin$^c$ structures $s_1$ and $s_2$ over $W_1$ and $W_2$ respectively whose restrictions to N agree and are non-torsion, we have:
$$F^+_{W_2,s_2}\circ\Pi^{\mathrm{red}}_N\circ F^-_{W_1,s_1}(\xi) \;=\; \sum_{\{s\in\mathrm{Spin}^c(W)\,|\,s|W_1 = s_1,\ s|W_2 = s_2\}} \pm F^{\mathrm{mix}}_{W,s}(\xi).$$
Proof.
Since $c_1(s)|N$ is non-torsion, we can find an embedded surface $F\subset N$ with $\langle c_1(s),[F]\rangle \neq 0$. We can then cut W in two along
$$N' = Y_1\#(S^1\times F),$$
giving $W = W_1'\cup_{N'} W_2'$. Now, by naturality of the exact sequences (relating $HF^-$, $HF^\infty$, and $HF^+$) and the usual composition laws, we see that
$$F^+_{W_2,s|W_2}\circ\Pi^{\mathrm{red}}_N\circ F^-_{W_1,s|W_1}(\xi) \;=\; \sum_{\eta\in\delta H^1(N)} F^+_{W_2',(s+\eta)|W_2'}\circ\Pi^{\mathrm{red}}_{N'}\circ F^-_{W_1',(s+\eta)|W_1'}(\xi).$$
Next, we find some embedded surface $\Sigma\subset W$ of positive square which is disjoint from F, and let Q denote its tubular neighborhood. Then, $\partial Q\# Y_2$ naturally gives a cut of W which we can arrange to be disjoint from the cut N' used above (by making the tubular neighborhoods sufficiently small). It then follows easily from the composition laws that
$$\sum_{\eta\in\delta H^1(N)} F^+_{W_2',(s+\eta)|W_2'}\circ\Pi^{\mathrm{red}}_{N'}\circ F^-_{W_1',(s+\eta)|W_1'}(\xi) \;=\; \sum_{\eta\in\delta H^1(N)} F^{\mathrm{mix}}_{W,s+\eta}\big((1 - U^{dk/2})\,\xi\big).$$
The equation in the statement of the proposition follows by choosing k large enough that $U^{dk/2}$ annihilates all the mixed invariants of W.
2.4. The case where $b_2^+(X) = 1$. The construction of closed invariants defined in [18] works only in the case where the four-manifold has $b_2^+(X) > 1$. However, Proposition 2.4 suggests a construction which can be used even when $b_2^+(X) = 1$. Rather than setting up the general theory at present, we content ourselves with developing enough of it to allow us to establish Theorem 1.3 in the case where $b_2^+(X) = 1$.

Definition 2.5. Let X be a closed, smooth four-manifold and choose a line $L\subset H_2(X;\mathbb{Q})$ with the property that each vector $v\in L$ has $v\cdot v = 0$. Choose a cut $X = W_1\cup_N W_2$ for which the image of $H_2(N;\mathbb{Q})$ inside $H_2(X;\mathbb{Q})$ is L. Then, for each Spin$^c$ structure $s\in\mathrm{Spin}^c(X)$ for which $c_1(s)$ evaluates non-trivially on L, we can define
$$\Phi_{X,s,L}\colon \mathbb{Z}[U]\otimes\Lambda^*(H_1(X)/\mathrm{Tors})\longrightarrow \mathbb{Z}/\pm 1$$
to be non-zero only on those homogeneous elements of $\mathbb{Z}[U]\otimes\Lambda^*(H_1(X)/\mathrm{Tors})$ whose degree is given by
$$d(s) = \frac{c_1(s)^2 - 2\chi(X) - 3\,\mathrm{sgn}(X)}{4},$$
where $\chi(X)$ denotes the Euler characteristic of X and $\mathrm{sgn}(X)$ denotes the signature of its intersection form. On those elements, the invariant is the coefficient of $\Theta^+\in HF^+(S^3)$ in the expression
$$F^+_{W_2,s|W_2}\circ\Pi^{\mathrm{red}}_N\circ F^-_{W_1,s|W_1}\big(U^n\cdot\Theta^-\otimes\zeta\big).$$

The invariant $\Phi_{X,s,L}$ is independent of the choice of cut used to define it.

Proof. An embedded surface $F\subset X$ whose homology class is in the line L always gives rise to a cut as in Definition 2.5. Specifically, let $F\subset X$ be a smoothly embedded, connected submanifold with $[F]\in L$. Then, we decompose
$$X = \big(X - \mathrm{nd}(F)\big)\cup_{S^1\times F}(F\times D).$$
Next, suppose that F 1 and F 2 are two embedded surfaces whose homology classes lie inside L. Then we claim that there is a third embedded surface F 3 which is disjoint from both F 1 and F 2 , and whose homology class also lies inside L. This is easily constructed by starting with some initial surface Σ, and then adding handles along canceling pairs of intersection points between Σ and F 1 (and then Σ and F 2 ). It follows now from the usual arguments that the invariant calculated by using the cut determined by F 1 (or F 2 ) agrees with the invariant calculated using the cut determined by F 3 ; i.e. the invariant using any such embedded surface is independent of the choice of homology class and surface.
Finally, if $X = W_1\cup_N W_2$ is an arbitrary cut as in Definition 2.5, then we can find an embedded surface $F\subset X$ disjoint from N whose homology class lies in the line L. Indeed, letting $F_0$ be any surface representing an element of $H_2(N;\mathbb{Z})$ with non-trivial image in $H_2(X;\mathbb{Z})$, we let F be a surface obtained by pushing $F_0$ out of N, using some vector field normal to N inside X. Since F is disjoint from N, the usual arguments again show that the invariant calculated using the cut N agrees with the invariant calculated using the cut determined by any embedded surface whose homology class lies in L.
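Before moving on, here is a quick arithmetic check of the degree formula appearing in Definition 2.5 (our own worked example, using the standard characteristic numbers of the K3 surface, whose invariant is computed in Section 4; the same dimension formula governs the closed invariant): for the canonical Spin$^c$ structure of K3 one has $c_1 = 0$, $\chi = 24$ and $\mathrm{sgn} = -16$, so
$$d = \frac{0 - 2\cdot 24 - 3\cdot(-16)}{4} = \frac{-48 + 48}{4} = 0,$$
consistent with the invariant there being evaluated on degree-zero elements.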
3. The adjunction relation
We prove here the following adjunction relation (for the Seiberg-Witten analogues, compare [7] when g = 0, and [19] when g > 0):
Theorem 3.1. For each genus g there is an element ξ ∈ Z[U] ⊗ Z Λ * H 1 (Σ) of degree 2g
with the following significance. Given any smooth, oriented, four-dimensional cobordism W from Y 1 to Y 2 (both of which are connected three-manifolds), any smoothly-embedded connected, oriented submanifold Σ ⊂ W of genus g, and any s ∈ Spin c (W ) satisfying the constraint that
c 1 (s), [Σ] − [Σ] · [Σ] = −2g(Σ),(2)
then we have the relation:
F • W,s (·) = F • W,s+ǫPD[Σ] (i * (ξ(Σ)) ⊗ ·),(3)
where ǫ is the sign of c 1 (s), [Σ] , and i * :
Z[U] ⊗ Z Λ * H 1 (Σ) −→ Z[U] ⊗ Z Λ * H 1 (W )/Tors is the map induced by the inclusion i : Σ −→ W .
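As a simple illustration of the shape of this relation (an added remark, not part of the argument that follows): when g = 0 and Σ = E is a sphere of square −1 with c 1 (s), [E] = −1, the constraint (2) is satisfied, ξ(Σ) has degree 0 (hence is a unit up to sign), and relation (3) simply identifies F • W,s with F • W,s−PD[E] ; since c 1 (s − PD[E]), [E] = +1, this is the familiar symmetry appearing in the blow-up formula.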
Before proceeding to the proof of Theorem 3.1, we make a few general observations. Note that if
c 1 (s), [Σ] ≥ 2g − Σ · Σ,
one can always obtain relations of the form of Equation (3), by reversing the orientation of Σ and adding extra null-homologous handles if necessary, to achieve the hypotheses of Theorem 3.1. It is not important for our present purposes to identify the particular word ξ(Σ). However, it is easy to see that for a genus g surface,
ξ(Σ) ≡ U g (mod Λ * H 1 (Σ)),
by observing that surfaces and Spin c structures satisfying the hypotheses of Theorem 3.1 can be found in a tubular neighborhood of a two-sphere of arbitrary negative self-intersection number, where all the maps on HF ∞ are non-trivial. Indeed, it is natural to expect from the analogy with Seiberg-Witten theory that ξ(Σ) is given by the formula
ξ(Σ) = ∏ g i=1 (U − A i · B i )
(compare [19]).
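As a consistency check (an added remark): with the grading conventions of Theorem 3.1, in which ξ(Σ) ∈ Z[U] ⊗ Λ * H 1 (Σ) is homogeneous of degree 2g, each factor U − A i · B i is homogeneous of degree 2 (U and the product of the two one-dimensional classes A i , B i both contribute 2), so the product of the g factors has degree 2g, and modulo terms involving H 1 (Σ) it reduces to U g , in agreement with the congruence displayed above.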
With these remarks in place, we turn our attention to the proof of Theorem 3.1. One ingredient in this proof is the behavior of HF • under connected sums, as we recall presently. In Section 4 of [15], we defined a product
: HF • (Y, t) ⊗ Z[U ] HF ≤0 (Z, u) −→ HF • (Y #Z, t#u),
which, in the case where Z ∼ = S 3 is an isomorphism (indeed, it is the canonical isomorphism obtained from the diffeomorphism Y #S 3 with Y ). This product is functorial under cobordisms (see Proposition 4.4 of [15]), in the sense that if W is a cobordism from Z 1 to Z 2 equipped with the Spin c structure s, then the following diagram commutes:
HF • (Y ) ⊗ Z[U ] HF ≤0 (Z 1 )  −− ⊗ −−→  HF • (Y #Z 1 , t#u 1 )
      Id⊗F ≤0 W,s ↓                              ↓ F • ([0,1]×Y )#W,t#u
HF • (Y ) ⊗ Z[U ] HF ≤0 (Z 2 )  −− ⊗ −−→  HF • (Y #Z 2 , t#u 2 ).    (4)
In the above diagram, ([0, 1] × Y )#W denotes the boundary connected sum.
Proof of Theorem 3.1. By the blowup formula, it suffices to consider the case where
Σ · Σ = −n,
where n ≥ 2g. Now, let N be a tubular neighborhood of an oriented two-manifold of genus g with self-intersection number −n ≤ −2g, and let u denote the Spin c structure over N with
c 1 (u), [Σ] = −n − 2g
An easy application of the long exact sequence for integral surgeries, together with the adjunction inequality for three-manifolds (see Theorems 10.19 and 8.1 of [16] respectively), gives us that
HF + (Z, u|Z) ∼ = Z[U −1 ] ⊗ Λ * H 1 (Σ g ).
(Details are given in Lemma 9.17 of [15], where the absolute grading on HF • (Z, u|Z) is also calculated.) In particular, HF + red (Z, u|Z) = 0, and hence
HF ≤0 (Z, u|Z) ∼ = Z[U] ⊗ Λ * H 1 (Σ g ). Indeed, since c 1 (u − PD[Σ]) 2 , [N] > c 1 (s ′ ) 2 , [N]
for any s ′ ∈ Spin c (N) with s ′ ≠ u − PD[Σ] and s ′ |Z = u|Z, we have that the map
F ≤0 N,u−PD[Σ] : HF ≤0 (S 3 ) −→ HF ≤0 (Z, u) takes a top-dimensional Θ S 3 of HF ≤0 (S 3 ) to a top-dimensional generator Θ Z of HF ≤0 (Z, u|Z).
Moreover, according to the dimension formula, the grading of F ≤0 N,u (Θ S 3 ) is 2g less than the grading of this element so (since HF ≤0 (Z, u|Z) is generated by Θ Z as a module over the ring Z[U] ⊗ Λ * H 1 (Z)/Tors ∼ = Z[U] ⊗ Λ * H 1 (Σ)) we can find an element ξ(Σ) of degree 2g in the graded algebra ξ(Σ) ∈ Z[U] ⊗ Z Λ * H 1 (Σ) with the property that
F ≤0 N,u (Θ S 3 ) = ξ(Σ) · F ≤0 N,u−PD[Σ] (Θ S 3 ).
Next, suppose that Y 1 is a three-manifold equipped with the Spin c structure t 1 , and W 1 is the connected sum ([0, 1] × Y 1 ) #N, then the naturality of the product map (Diagram (4)) shows that
F • W 1 ,u (ζ) = ζ ⊗ F ≤0 N,u (Θ S 3 ) = ζ ⊗ ξ(Σ) · F ≤0 N,u−PD[Σ] (Θ S 3 ) = F • W 1 ,u−PD[Σ] (ξ(Σ) ⊗ ζ)
. Finally, if W is a cobordism as in the statement of the theorem, we can decompose it into a union of W 1 (the connected sum of a collar neighborhood of Y 1 with a tubular neighborhood N of Σ) and its complement W 2 . Both Spin c structures s and s − PD[Σ] agree over W 2 , so the theorem follows from the above equation, together with the composition law for the cobordism invariant.
The invariant for the K3 surface
In proving the non-vanishing theorem for symplectic four-manifolds in general, it is helpful to have one explicit example. The aim of the present subsection is such a calculation, for the K3 surface. Recall that the K3 surface is the simply-connected smooth four-manifold which can be given the structure of a compact algebraic surface whose canonical class is trivial -i.e. if k is the canonical Spin c structure coming from the almost-complex structure, then c 1 (k) = 0.
Proposition 4.1. The invariants for the K3 surface are given by:
Φ K3,s = 1 if c 1 (s) = 0, and 0 otherwise.
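As a quick degree check (an illustration, not part of the proof): the K3 surface has χ(K3) = 24 and sgn(K3) = −16, so for c 1 (s) = 0 the same degree formula as in Definition 2.5 gives d(s) = (0 − 2 · 24 − 3 · (−16))/4 = (−48 + 48)/4 = 0; since H 1 (K3) = 0, the invariant is read off on the degree-zero element 1 ∈ Z[U], consistent with the statement Φ K3,s = 1.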
We model this calculation on a paper by Fintushel and Stern (see [8]) where they calculate a Donaldson invariant for K3 using Floer's exact triangle. In particular, they employ the following handle decomposition of K3.
Following the notation of [8], let M{p, q, r} denote the three-manifold obtained by surgeries on the Borromean rings, with integer coefficients p, q and r. There is a cobordism X from the Poincaré homology three-sphere Σ(2, 3, 5) ∼ = M{−1, −1, −1} to itself with the opposite orientation, −Σ(2, 3, 5) = M{1, 1, 1}, composed of six twohandles apiece, which we break up as the following composition:
X 1 = S 3 ⇒ M{−1, −1, −1} ⇒ M{−1, −1, 0} ⇒ M{−1, −1, 1} . and X 2 = M{−1, −1, 1} ⇒ M{−1, 0, 1} ⇒ M{−1, 1, 1} ⇒ M{0, 1, 1} ⇒ M{1, 1, 1} ⇒ S 3 .
Our goal now is to determine the maps on Floer homology induced by these twohandle additions. Indeed, the Floer homology groups themselves, as absolutely graded groups, were calculated in Section 8 of [15]. In particular, it is shown there that
HF + k (M{1, 1, 1}) ∼ = Z if k is even and k ≥ 2, and 0 otherwise;
HF + k (M{0, 1, 1}) ∼ = Z if k ≡ 1/2 (mod 2) and k ≥ 3/2, and 0 otherwise;
HF + k (M{−1, 1, 1}) ∼ = Z ⊕ Z if k = 0, Z if k is even and k > 0, and 0 otherwise;
HF + k (M{−1, 0, 1}) ∼ = Z if k ≡ 1/2 (mod Z) and k ≥ 1/2, Z ⊕ Z if k = −1/2, and 0 otherwise.
It is also shown there that the Z[U] action is surjective for the first two examples, while it has a one-dimensional cokernel for the second two. The groups HF − k for these threemanifolds can be immediately deduced by the long exact sequence relating HF − and HF + (see [16]), and the groups for the remaining three-manifolds are determined by the duality of HF ± under orientation reversal, and the observation that −M{p, q, r} ∼ = M{−p, −q, −r}.
In the above statements, we are using the absolute gradings on the Floer homology groups for Y equipped with a torsion Spin c structure t defined in Section 7 of [18]. This absolute grading has the property that if W is a cobordism from Y 1 to Y 2 (endowed with a Spin c structure s whose restrictions t 1 and t 2 respectively are both torsion), then the induced map F W,s shifts degree by the quantity given in Equation (5) below.
Proof of Lemma 4.2. For a negative-definite cobordism between integral homology spheres, the map induced on HF ∞ is always an isomorphism (see Proposition 10.4 of [15]). From the dimension formula (Equation (5)), it follows that the degree is raised by two.
Proof of Lemma 4.3. Let F denote the map given by the cobordism, obtained by summing over all Spin c structures in δH 1 (M{−1, −1, 0}). Dualizing (i.e. applying Theorem 3.5 of [18], and the graded version of the duality isomorphism, c.f. Proposition 7.11 of [18]), we get the following diagram:
HF −1 − (M{−1, 1, 1})  −− F − −−→  HF 0 − (M{1, 1, 1})
        ∼ =                              ∼ =
HF + −1 (M{−1, −1, 1})  −− F + −−→  HF + −2 (M{−1, −1, −1}).
From the calculations of the Floer homology groups restated above, we see that
HF + −1 (M{−1, −1, 1}) ∼ = Z ∼ = HF + −2 (M{−1, −1, −1})
. Indeed, an isomorphism is given by composing the maps in the surgery exact sequence (see the proof of Proposition 8.2 of [15]). But this composition is precisely F + . It follows that the map F − (the map on cohomology) above is an isomorphism, and hence (since there is no torsion present), its dual, the map
F − : HF − 0 (M{−1, −1, −1}) −→ HF − −1 (M{−1, 1, 1}
) induces an isomorphism (between two groups which are isomorphic to Z).
The cobordism W has b 2 (W ) = 2. Indeed, we can find an embedded torus T 1 ⊂ W which generates the image of H 2 (M{−1, −1, 0}; Z) inside W , and another embedded torus T 2 ⊂ W with square zero with T 1 · T 2 = 1. Applying the adjunction inequality for the cobordism invariant (Theorem 1.5 of [18]) to the embedded surface T 2 , it follows that the only Spin c structure s ∈ k + δH 1 (M{−1, −1, 0}) whose associated map F + W,s is non-trivial is the restriction of k itself.
The non-vanishing theorem for symplectic four-manifolds
The aim of the present section is to prove Theorem 1.1. Via Donaldson's construction of Lefschetz pencils, we will reduce this theorem to the following more manifestly topological variant:
Theorem 5.1. Let π : X −→ S 2 be a relatively minimal Lefschetz fibration over the sphere with b + 2 (X) > 1 whose generic fiber F has genus g > 1. Then, for the canonical Spin c structure, we have that
c 1 (k), [F ] = 2 − 2g and Φ X,k = ±1.
Moreover, for any other Spin c structure s ≠ k with Φ X,s ≠ 0, we have that
c 1 (k), [F ] = 2 − 2g < c 1 (s), [F ] .
One ingredient in the above proof is a related result for three-manifolds which fiber over the circle. To state it, recall that a three-manifold Y which admits a fibration π : Y −→ S 1 has a canonical Spin c structure which is obtained as the (integrable) two-plane field, which is the kernel of the differential of π. If F is a fiber of π, then the evaluation
c 1 (ℓ), [F ] = 2 − 2g.
Theorem 5.2. Let Y be a three-manifold which fibers over the circle, with fiber genus g > 1, and let t be a Spin c structure over Y with
c 1 (t), [F ] = 2 − 2g.
Then, for t ≠ ℓ, we have that HF + (Y, t) = 0; while HF + (Y, ℓ) ∼ = Z.
Indeed, we also establish the following result (Theorem 5.3), which bridges the above two theorems: for a relatively minimal Lefschetz fibration π : W −→ D over the disk with fiber genus g > 1 and Y = −∂W , there is a unique Spin c structure s over W with c 1 (s), [F ] = 2 − 2g for which the induced map
F + W −B 4 ,s : HF + (Y, s|Y ) −→ HF + (S 3 )
is non-trivial; and this s is the canonical Spin c structure k. Indeed, the induced map
F + W −B 4 ,k : HF + (Y, k|Y ) −→ HF + 0 (S 3 ) ∼ = Z is an isomorphism.
We prove the above three theorems, in reverse order. In fact, we prove several special cases of these theorems first. It will be convenient to fix some notation. Suppose that W is some four-manifold which admits a Lefschetz fibration π (over some two-manifold possibly with boundary). Then we let
S(W ) = {s ∈ Spin c (W ) | c 1 (s), [F ] = 2 − 2g}.
(This is a slight abuse of notation: S(W ) depends on the Lefschetz fibration π, not just the four-manifold W .) Similarly, if Y is a three-manifold which fibers over the circle, we let
T(Y ) = {t ∈ Spin c (Y ) | c 1 (t), [F ] = 2 − 2g}.
We will also let HF + (Y, T(Y )) denote the direct sum
HF + (Y, T(Y )) = ⊕ t∈T(Y ) HF + (Y, t).
Lemma 5.4. Let π : W −→ [1, 2] × S 1 be a relatively minimal Lefschetz fibration with fiber genus g > 1 over the annulus, which connects a pair of three-manifolds Y 1 and Y 2 (which fiber over the circle), then for some choice of signs, the map
Σ s∈S(W ) ±F + W,s : HF + (Y 1 , T(Y 1 )) −→ HF + (Y 2 , T(Y 2 ))
induces an isomorphism.
Proof. Note that whereas S(W ) can easily by infinite; according to the finiteness properties for the maps associated to cobordisms (Theorem 3.3 of [18]) there are only finitely many s ∈ S(W ) for which F + W,s is non-trivial. First assume that the Lefschetz fibration π has a single node. In this case, W can be viewed as the cobordism obtained by attaching a single two-handle to Y = Y 1 along a curve K in the fiber of π, with framing −1 (with respect to framing K inherits from the fiber F ⊂ Y ); in particular, Y 2 = Y −1 (K). Moreover, since the Lefschetz fibration is relatively minimal, the curve K is homotopically non-trivial as a curve in F . Now, if Y 0 (K) is the three-manifold obtained as zero-surgery along K then the cobordism from Y to Y 0 also maps to the circle (by a map π 0 which is no longer a fibration, but which extends the map π from Y to S 1 ). Clearly, if s is any Spin c structure which extends over W 0 , the restriction of c 1 (s) to a generic fiber of π 0 : Y 0 (K) −→ S 1 is also 2−2g. However, since K is homotopically non-trivial, the Thurston norm of the homology class of this fiber in Y 0 (K) is smaller than 2−2g, so the adjunction inequality for HF + (Theorem 8.1 of [16]) ensures that HF + (Y 0 , s|Y 0 ) = 0. Thus, the lemma follows immediately from the surgery long exact sequence for HF + (see Theorem 10.12 of [16]):
... −→ HF + (Y, T(Y )) −→ HF + (Y −1 (K), T(Y −1 (K))) −→ HF + (Y 0 (K), T(Y 0 )) = 0 −→ ...
In the above sequence, T(Y 0 ) denotes those Spin c structures whose evaluation on the homology class of a fiber of π 0 (which is no longer a fibration) is given by 2 − 2g, where now g still denotes the genus of the fibration for Y .
The case of multiple nodes follows immediately by the composition law.
Lemma 5.5. If π : Y −→ S 1 is a surface bundle over S 1 , with fiber genus g > 1, then there is a unique Spin c structure t ∈ T(Y ) with HF + (Y, t) ≠ 0. In fact,
HF + (Y, t) ∼ = Z.
Proof. Note that the mapping class group is generated as a monoid by (right-handed) Dehn twists. This is equivalent to the claim that if p 1 : Y 1 −→ S 1 and p 2 : Y 2 −→ S 1 any two fibrations over the circle whose fiber has the same genus, then we can extend the two fibrations to form a relatively Lefschetz fibration over the annulus. It follows from Lemma 5.4 that for a genus g fibration over the circle HF + (Y, T(Y )) is independent of the monodromy map, and depends only on the genus g.
Thus, for each g > 1, it suffices to find some fibered three-manifold for which the lemma is known to be true. For this purpose, let Y = Y (g) be the zero-surgery on the torus knot K of type (2, 2g + 1). This is a fibered three-manifold whose fiber has genus g. Writing the symmetrized Alexander polynomial of K as
∆ K (T ) = − Σ g i=−g (−T ) i = a 0 + Σ d i=1 a i (T i + T −i )
it is shown in Proposition 8.1 of [15] that if t is a Spin c structure over Y with c 1 (t), [F ] = 2i, then HF + (Y, t) is a free Abelian group of rank Σ ∞ j=1 j a |i|+j . In particular, when c 1 (t), [F ] = 2 − 2g, it follows immediately that HF + (Y, t) ∼ = Z. Lemma 5.6. Let F be an oriented surface of genus g > 0, and consider the cobordism W from S 3 to F × S 1 obtained by puncturing the product F × D 2 in a single point. Let k denote the Spin c structure over W with c 1 (k), [F ] = 2 − 2g. Then, the induced map
F + W,k : HF + (F × S 1 , ℓ) −→ HF + 0 (S 3 ) ∼ = Z is an isomorphism, as is the induced map (1 − U g−1 )F − W,k : Z ∼ = HF − −2 (S 3 ) −→ HF − red (F × S 1 , ℓ) ∼ = Z ⊂ HF − (F × S 1 , ℓ).
Proof. To see the claim about F + W,k , it suffices to embed the cobordism (W, k) into a closed four-manifold (X, s) with b + 2 (X) > 1, so that s|W = k and Φ X,s = ±1. To see why this suffices, observe that U ·HF + (F ×S 1 , ℓ) = 0, so F + W,s must take HF + (F ×S 1 , ℓ) into HF + 0 (S 3 ) ∼ = Z. In general, the image of such a map consists of multiples of some integer d. Now, take an admissible cut of X = X 1 # N X 2 which is disjoint from F , and so that F ⊂ X 2 (such a cut is found by taking any embedded surface Σ of positive square which is disjoint from F ). It then follows that for each Spin c structure s ∈ Spin c (X) which restricts to W as k, the sum of invariants
Σ n∈Z Φ X,s+nPD[Σ]
is divisible by d. In fact, it is a straightforward consequence of the dimension formula that the part of this sum which is homogeneous of degree zero is the invariant Φ X,s , and this, in turn, forces d = ±1, so that the claimed map is an isomorphism. Now, such four-manifolds can be found for all possible genera g in the blow-ups of the K3 surface, in light of the blow-up formula and the K3 calculation. Specifically, for each genus g, we can find an embedded surface Σ ⊂ K3 with Σ · Σ = 2g − 2, for instance, by taking a single section of an elliptic fibration of K3, which is a sphere of self-intersection number −2, and attaching g copies of the fiber. In the 2g − 2-fold blow-up, Σ has a proper transform Σ with Σ · Σ = 0. Consider the Spin c structure s with
c 1 ( s) = −PD[E 1 ] − ... − PD[E 2g−2 ],
so that c 1 ( s), [ Σ] = 2 − 2g; i.e. the tubular neighborhood of Σ is W , and s is an extension of k. According to Proposition 4.1 and the blowup formula (Theorem 2.4 of [18]), Φ X, s = ±1. The statement about HF − follows similarly, by choosing the cut for X so that the surface F lies in X 1 .
Lemma 5.7. Let π : W −→ D be a Lefschetz fibration over the disk, whose singular fibers are all non-separating nodes. Then, π : W −→ D can be embedded in a Lefschetz fibration V over a larger disk with the property that the canonical Spin c structure k is the only Spin c structure in s ∈ S(V ) for which F + V,s : HF + (∂V, T(∂V )) −→ HF + 0 (S 3 ) ∼ = Z is non-trivial; and indeed, F + V,k is an isomorphism.
Proof. We claim that any Lefschetz fibration over the disk with non-separating fibers can be embedded into a Lefschetz fibration over the disk with nodes corresponding to (isotopic translates) of the standard curves {τ 1 , ..., τ m } described in Theorem 2.2. This is constructed as follows. Suppose that W is described by monodromies which are Dehn twists around curves (C 1 , ..., C n ). Then, we can find automorphisms of F , φ 1 ,...,φ n , so that φ i (τ 1 ) = C i . We then express each φ i = D(τ m i,1 ) · ... · D(τ m i,ℓ i ). We let V be the Lefschetz fibration over the disk with monodromies obtained by juxtaposing τ m i,1 , ..., τ m i,ℓ i , τ 1 for i = 1, ..., n, union as many τ i as it takes to span all of H 1 (Σ; Z). By performing Hurwitz moves, we obtain a subfibration with monodromies (φ 1 (τ 1 ), ..., φ n (τ 1 )); i.e. we have embedded W in V . Next, we argue that V has the required form. According to Lemmas 5.5, 5.6, and 5.4, we see that
Σ s∈S(V ) F + V −B 4 ,s : HF + (∂V, t) ∼ = Z −→ HF + −2 (S 3 )
is an isomorphism. We claim that k is the only Spin c structure in the sum with non-zero contribution.
Note that H 1 (V ; Z) is the quotient of Z 2g by the homology homology classes of the vanishing cycles for V , so we have arranged that H 1 (V ; Z) = 0; in particular, H 2 (V ; Z) has no torsion. It follows that the Spin c structure k is uniquely determined by the evaluation of its first Chern class on the various two-dimensional homology classes in V . Moreover, if we choose the translates of the various τ i carefully, so that parallel copies of the same τ i remain disjoint, then we can find a basis for H 2 (V ; Z) consisting of [F ] and surfaces P obtained by "capping off" submanifolds-with-boundary P ⊂ F whose boundaries consist of copies of the vanishing cycles. Suppose, next, that P 1 is induced from a relation P 1 in F with this form, and let m denote the number of its boundary components. Then, the relation F − P 1 = P 2 also has this form (and has the same number of boundary components), and its closed extension P 2 satisfies the following elementary properties (see Lemma 2.1):
[F ] = [ P 1 ] + [ P 2 ], g(F ) = g( P 1 ) + g( P 2 ) + m − 1, m = −[ P 1 ] 2 = −[ P 2 ] 2 = [ P 1 ] · [ P 2 ]
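A purely numerical illustration of these relations (an added example): if F has genus 2 and P 1 ⊂ F is a planar subsurface with m = 3 boundary circles, so that P 2 = F − P 1 is also planar with 3 boundary circles, then the second relation gives g( P 2 ) = 2 − 0 − 3 + 1 = 0, the third gives [ P 1 ] 2 = [ P 2 ] 2 = −3 and [ P 1 ] · [ P 2 ] = 3, and the Euler characteristics add up correctly: χ(F ) = −2 = (2 − 3) + (2 − 3).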
Now suppose that s ∈ S(V ) is a Spin c structure for which F + W,s is non-trivial. Then, the above equations, and the condition that c 1 (s), [F ] = 2 − 2g say that
c 1 (s), [ P 1 ] − [ P 1 ] · [ P 1 ] + c 1 (s), [ P 2 ] − [ P 2 ] · [ P 2 ] = 2 − 2g( P 1 ) + 2 − 2g( P 2 ) . Now, either c 1 (s), [ P 1 ] − [ P 1 ] · [ P 1 ] = 2 − 2g([ P 1 ])
, in which case (according to Lemma 2.1),
c 1 (s), [ P 1 ] = c 1 (k), [ P 1 ] ,(6)
or, after possibly switching the roles of P 1 and P 2 , we have that
c 1 (s), [ P 1 ] − [ P 1 ] · [ P 1 ] ≤ −2g([ P 1 ]).(7)
Inequality (7) is ruled out by the adjunction relation, Theorem 3.1, as follows. By adding trivial two-handles to P 1 if necessary, we obtain an embedded surface with c 1 (s), [Σ] = −2g + Σ · Σ. There are two cases, according to whether g(Σ) = 0 or g(Σ) > 0. In the latter case, the adjunction relation gives some word ξ(Σ) of degree 2g(Σ) > 0 in A(Σ) with the property that
F + V,s (·) = F + V,s+ǫPD[Σ] (ξ(Σ) ⊗ ·)
. Observe that homology classes in Σ are all homologous to classes in the fiber F in ∂V , so the action by ξ(Σ) appearing above can be interpreted as the action by an element of positive degree in Z[U] ⊗ Z Λ * (H 1 (Y )/Tors) on HF + (∂V, ℓ). But all such elements annihilate HF + (∂V, ℓ) (since it is supported in a single dimension). Thus, the only remaining possibility is that g(Σ) = 0, in which case no handles were added to P 1 . In this case, the adjunction relation ensures that the Spin c structure s − PD[ P 1 ] has non-trivial invariant, while
c 1 (s), [ P 2 ] − [ P 2 ] · [ P 2 ] = 4 − 2g( P 2 ). But then, c 1 (s − PD[ P 1 ]), [ P 2 ] − [ P 2 ] · [ P 2 ] = 4 − 2g( P 2 ) − 2m.
Next, observe that m > 1, since the vanishing cycles for V are all homotopically nontrivial. Moreover, if m = 2, then g( P 2 ) = g(F ). Thus, using P 2 in place of P 1 , and s − PD[ P 1 ] in place of s, we obtain the same contradiction as before.
The contradiction to Inequality (7) leads to the conclusion that Equation (6) holds for all choices of P 1 . But these surfaces, together with [F ], generate the homology of V . Thus, we have shown that s = k, as claimed.
Proof of Theorem 5.2. According to Lemma 5.5, there is a unique t ∈ T(Y ) with HF + (Y, t) ≠ 0, and for t, we have that HF + (Y, t) ∼ = Z. It remains to identify t with the canonical Spin c structure. As in the proof of the lemma, we constructed a Lefschetz fibration over the annulus which connects Y with S 1 × Σ. By attaching D × Σ to the S 1 × Σ boundary component, we obtain a Lefschetz fibration W over the disk. Indeed, since the mapping class group is generated by Dehn twists along non-separating curves, we can choose W so that Lemma 5.7 applies to W . In particular, in this case, the canonical Spin c structure k in S(W ) induces a non-trivial map F + W,s . The result follows, since k|Y = ℓ.
Lemma 5.8. Let W be a relatively minimal Lefschetz fibration over the annulus, all of whose nodes are separating. Then, the only Spin c structure s ∈ S(W ) for which the map F + W,s : HF + (Y 1 , s|Y 1 ) ∼ = Z −→ HF + (Y 2 , s|Y 2 ) ∼ = Z is non-trivial is the canonical Spin c structure. And for that Spin c structure, the induced map is an isomorphism.
Proof. According to Lemmas 5.5,5.6,and 5.4, we see that
Σ s∈S(W ) F + W,s : ⊕ t∈T(Y ) HF + (Y, t) −→ ⊕ t∈T(S 1 ×F ) HF + (S 1 × F, t) ∼ = HF + (S 1 × F, ℓ) ∼ = Z
is an isomorphism. Now, observe that W is a cobordism which is obtained by attaching a sequence of two-handles along null-homologous curves. Thus, a Spin c structure over W is uniquely characterized by its restriction to one of its boundary components, and its evaluations on the two-dimensional homology classes introduced by the two-handles. According to Theorem 5.2, the restriction to the boundary must agree with the canonical Spin c structure. Each node has, as fiber, a union of two surfaces meeting at a point: i.e. we obtain a pair of embedded surfaces g( P 1 ) + g( P 2 ) = g(F ) and P 2 1 = P 2 2 = −1. Moreover, since the fibration is assumed to be relatively minimal, g( P 1 ) > 0 and g( P 2 ) > 0. Thus, applying the adjunction relation as in the proof of Lemma 5.7, we see that
c 1 (s), [ P 1 ] = c 1 (k), [ P 1 ] .
It is easy to see that the homology classes of the form [ P 1 ] (one for each node) generate H 2 (W ; Z)/H 2 (Y ; Z). Thus, it follows that s = k.
Proof of Theorem 5.3. Let π : X −→ D be the Lefschetz fibration. By combining Lemma 5.4, Lemma 5.5, and Lemma 5.6, we see that the map
Σ s∈S(X) F + X,s : ⊕ t∈T(Y ) HF + (Y, t) −→ HF + 0 (S 3 )
induces an isomorphism. We can find a subdisk D 0 ⊂ D which contains all the fibers with non-separating nodes. Let X 0 ⊂ X denote its preimage. According to Lemmas 5.4, 5.5, and 5.6, there must be at least one Spin c structure s ∈ S(X) for which the map
F + X,s : HF + (Y, s|Y ) ∼ = Z −→ HF + (S 3 )
is non-trivial. According to Lemma 5.7, its restriction s|X 0 is the canonical Spin c structure; according to Lemma 5.8, its restriction s|X − X 0 is also the canonical Spin c structure. Now, the map H 1 (X − X 0 ) −→ H 1 (Y ; Z) is an isomorphism, since X − X 0 is obtained from Y × [0, 1] by attaching two-handles along null-homologous curves. Thus, the only Spin c structure whose restrictions to both X − X 0 and X 0 agree with k is the canonical Spin c structure k itself.
Proof of Theorem 5.1. We decompose X = X 1 # S 1 ×Σg X 2 where X 1 is the pre-image of a disk in the Lefschetz fibration which contains no singular points (in particular,
X 1 = D × F ).
According to Proposition 2.4,
F + W 2 ,s 2 • Π red N • F − W 1 ,s 1 = Σ {s∈Spin c (W )|s|W 1 =s 1 ,s|W 2 =s 2 } F mix W,s .
Now, by Lemma 5.6,
Π red • F − W 1 ,s 1 : Z ∼ = HF − −2 (S 3 ) −→ HF + red (S 1 × Σ g ) ∼ = Z is an isomorphism.
Similarly, according to Theorem 5.3,
F + W 2 ,s 2 : HF + red (S 1 × Σ g ) ∼ = Z −→ HF + 0 (S 3 ) ∼ = Z is an isomorphism. Thus, we conclude that 1 = Σ η∈δH 1 (Σ×S 1 ) ±Φ X,s+η .
Observe, however, that δH 1 (Σ × S 1 ) is one-dimensional; in fact, the Spin c structures in the δH 1 (Σ × S 1 )-orbit are of the form k + ZPD[F ]. By the dimension formula, the only such Spin c structure which has degree zero is k (using the adjunction formula and the fact that the fiber genus g > 1).
If k > 0 we see that F mix W,s−kPD[F ] is zero. If F mix W,s+kPD[F ]
were non-zero, the expression F + W 2 ,s 2 • Π red N • F − W 1 ,s 1 (U) would have to be non-zero. But this is impossible, since U annihilates HF + red (S 1 × Σ g , ℓ). Finally, we observe that the usual adjunction inequality for surfaces with square zero (Theorem 1.5 of [18]) ensures that if
c 1 (k), [F ] = 2 − 2g > c 1 (s), [F ] ,
then Φ X,s ≡ 0.
Proof of Theorem 1.1. First, observe that the conditions on ω in Theorem 1.1 are all open conditions, so it suffices to prove the theorem in the case where ω has rational periods. According to Donaldson's theorem, any sufficiently large multiple Nω gives rise to a Lefschetz pencil. Specifically, if we blow up X sufficiently many times, we get a new symplectic manifold ( X, ω) with the property that
Nω − m i=1 PD[E i ]
is Poincaré dual to the fiber of a Lefschetz fibration over S 2 . Here, {E i } m i=1 are the exceptional spheres in X. In particular, for any Spin c structure s ∈ Spin c (X), we have that
c 1 ( s), [F ] = c 1 (s), Nω − m.(8)
Clearly, the canonical Spin c structure of ( X, ω) is the blow-up of the canonical Spin c structure of (X, ω), so according to the blow-up formula for Φ, it follows that Φ X,ℓ = ±1 if and only if Φ X, ℓ = ±1. But the latter equation follows, according to Theorem 5.1.
For suitable choice of N, we can arrange for the Lefschetz fibration to be relatively minimal, see [21] and [1]. In this case, if s ∈ Spin c (X) is any structure with Φ X,s ≠ 0, then its blowup s satisfies Φ X, s ≠ 0. Thus, the inequality stated in this theorem is equivalent to the corresponding inequality from Theorem 5.1, in view of Equation (8).
The genus-minimizing properties of symplectic submanifolds
In the case where b + 2 (X) > 1, Theorem 1.3 is now an easy consequence of Theorem 3.1 and Theorem 1.1. For this implication, we follow [19].
Proof of Theorem 1.3 when b + 2 (X) > 1. If the theorem were false, we could find a symplectic manifold (X, ω) and a pair Σ, Σ ′ ⊂ X of homologous, smoothly-embedded submanifolds, with Σ symplectic, and g(Σ ′ ) < g(Σ). By blowing up X and taking the proper transform of Σ as necessary, we can assume that c 1 (k), [Σ] < 0. By attaching handles to Σ ′ as necessary, we can arrange for g(Σ ′ ) = g(Σ) − 1. Then, the adjunction formula for Σ gives us that
c 1 (k), [Σ ′ ] − [Σ ′ ] · [Σ ′ ] = −2g(Σ ′ ).
Theorem 1.1 says that Φ X,k is non-trivial, so according to Theorem 3.1, Φ X,k−PD[Σ ′ ] is non-trivial, as well. But since
ω, c 1 (k − PD[Σ ′ ]) = ω, c 1 (k) − 2 ω, [Σ] < ω, c 1 (k) ,
we obtain the desired contradiction to Theorem 1.1.
For the case where b + 2 (X) = 1, we appeal directly to the analogue of Theorem 5.1. Specifically, recall that if π : X −→ S 2 is a Lefschetz fibration with genus g > 1, then c 1 (k), [F ] = 2 − 2g ≠ 0, so we have an invariant Φ X,s,L in the sense of Subsection 2.4, where L is the line containing F in H 2 (X; Q). The proof of Theorem 5.1 gives: Theorem 6.1. Let π : X −→ S 2 be a relatively minimal Lefschetz fibration over the sphere with b + 2 (X) = 1 whose generic fiber F has genus g > 1. Then, for the canonical Spin c structure, we have that
c 1 (k), [F ] = 2 − 2g and Φ X,k,L = ±1,
where L denotes the line in H 2 (X; Q) containing [F ]. Moreover, for any other Spin c structure s ≠ k with Φ X,s,L ≠ 0, we have that
c 1 (k), [F ] = 2 − 2g < c 1 (s), [F ] .(9)
Proof of Theorem 1.3 when b + 2 (X) = 1. Once again, if the theorem were false, we would be able to find homologous surfaces Σ and Σ ′ in (X, ω) with Σ symplectic and g(Σ ′ ) = g(Σ) − 1. We claim that for sufficiently large N, we can find a relatively minimal Lefschetz fibration on some blowup X whose fiber F satisfies F · Σ = 0, where Σ is some suitable proper transform of Σ. Specifically, if ω · Σ = c (which we can assume is an integer), then provided that Nω 2 > c, we can let Σ represent the homology class
[ Σ] = [Σ] − [E 1 ] − ... − [E N c ]
inside the Lefschetz fibration obtained by blowing up the Lefschetz pencil for Nω. The homology class of the fiber here is given by
[F ] = N[ω] − [E 1 ] − ... − [E M ],
where M = N 2 ω 2 . Of course, Theorem 6.1 ensures that Φ X,k,L ≡ 0. We can then find a new embedded surface F ′ representing F , but which is disjoint from Σ ′ , and cut X along F ′ × S 1 into two pieces, one of which is a tubular neighborhood of F ′ . For this cut, Theorem 3.1 shows that Φ X,k±PD[Σ],L is also non-trivial. But since
c 1 (k ± PD[Σ]), [F ] = c 1 (k), [F ] ,
this violates Inequality (9).
A class of three-manifolds with HF + red (Y ) = 0
We now prove the following:
Theorem 7.1. Let Y be a three-manifold which can be obtained as a plumbing of spheres specified by a weighted graph (G, m) which satisfies the following conditions:
• G is a disjoint union of trees;
• at each vertex in G, we have that
m(v) ≥ d(v).(10)
Then, HF + red (Y ) = 0. Note that any lens space can be expressed as a plumbing of two-spheres along a graph (G, m) satisfying the above hypotheses. (Indeed, the graph is linear: it is connected, each vertex has degree at most two, and multiplicity at least two.)
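For instance (an added illustration, consistent with the rank count in the proof below): a graph with a single vertex of weight m(v) = p ≥ 1 satisfies (10) trivially since d(v) = 0, the associated plumbed three-manifold is the lens space L(p, 1) up to orientation, |H 1 (Y ; Z)| = p, and HF (Y ) indeed has rank p, as the genus one Heegaard diagram shows.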
Any Seifert fibered space Y with b 1 (Y ) ≤ 1 and which is not a lens space is obtained as a plumbing along a star-like graph: the graph is connected, has a unique vertex (the "central node") with degree n > 2, and all other vertices have degree at most two and multiplicity at least two. The degree of the central node agrees with the number of "singular fibers" of the Seifert fibration, and its multiplicity b is one of the Seifert invariants of the fibration. Thus, a Seifert fibration satisfies the hypotheses of the above theorem when b ≥ n.
Remark 7.2. An easy inductive argument similar to the proof given below also gives the relative grading. Suppose that (G, m) is a weighted graph satisfying the hypotheses of Theorem 7.1, with the additional hypothesis that Y = −Y (G, m) is a rational homology three-sphere (this in turn is equivalent to the hypothesis that each component of G contains at least one vertex for which Inequality (10) is strict), and let W (G, m) be the four-manifold obtained by plumbing two-sphere bundles according to a weighted graph (G, m), and let W = −W (G, m) be the plumbing with negative-definite intersection form. Then for each t ∈ Spin c (Y ), letting K(t) denote the set of characteristic vectors K ∈ H 2 (W ; Z) for which K|Y = c 1 (t), we have that d(Y, t) = min K∈K(t)
(K 2 + |G|)/4,    (11)
where |G| = rk(H 2 (W )) denotes the number of vertices in G. Indeed, Equation (11) remains true even in the case where the graph has a single vertex where Inequality (10) fails, which includes all Seifert fibered rational homology three-spheres. We return to these topics, and the more general issue of determining HF + for trees with arbitrary weights, in a future paper [20].
Proof. In view of the Künneth decomposition for connected sums, see Theorem 12.1 of [16], it suffices to consider the case where G is a connected graph.
We will prove inductively that if there is some vertex v in G where m(v) > d(v), then Y is a rational homology sphere and HF (Y ) has rank given by the number of elements in H 1 (Y ; Z). (Observe that if this is not the case, and equality holds everywhere, then it is easy to see by repeated blow-downs that the three-manifold in question is S 2 × S 1 , and it is easy to see that HF + red (S 2 × S 1 ) = 0, c.f. [16].) Next, we induct on the number of vertices. Clearly, if the number of vertices is one, the three-manifold in question is a lens space; for lens spaces, the conclusion of the theorem follows easily from the genus one Heegaard diagram (c.f. Proposition 8.1 of [17]).
For the inductive step on the number of vertices, we use induction on m(v) where v is some leaf (vertex with d(v) = 1). Suppose that m(v) = 1. In this case, it is easy to see that −Y (G) = −Y (G ′ ), where G ′ is the weighted tree obtained from G by deleting the leaf v, and decreasing the weight of the neighbor of v (thought of as a vertex in G ′ ) by one. Observe that G ′ also satisfies the hypothesis of the theorem. Thus, the case where m(v) = 1 follows from the inductive hypothesis on the number of vertices. More generally, suppose that G 1 is a weighted graph, and we have a leaf v with m(v) = k. In this case, we can form two other weighted graphs G 2 and G 3 , where G 2 is obtained from G 1 by deleting the leaf v, and G 3 which is obtained from G 1 by increasing the weight of v by one. We have then the following long exact sequence (Theorem 10.12 of [16]):
... − −− → HF (−Y (G 2 )) − −− → HF (−Y (G 3 )) − −− → HF (−Y (G 1 )) − −− → ...
By the inductive hypothesis, we know the theorem is true for the weighted graphs G 1 and G 2 . Now cobordisms from −Y (G 2 ) to −Y (G 3 ) and from −Y (G 3 ) and −Y (G 1 ) (which induce two of the maps in the above long exact sequence) are clearly negativedefinite. So it follows that −Y (G 3 ) is a rational homology sphere, with |H 1 (Y (G 3 ); Z)| = |H 1 (Y (G 1 ); Z)| + |H 1 (Y (G 2 ); Z)|.
Moreover, by the induction hypothesis, HF (−Y (G 1 ) and HF (−Y (G 2 )) have no odddimensional generators. Since the map from HF (−Y (G 1 )) to HF (−Y (G 2 )) changes the Z/2Z grading, it follows that this map is zero, so that the above long exact sequence is actually a short exact sequences. This implies that HF (Y (G 3 )) is a free Abelian group with rank rk HF (Y (G 3 )) = rk HF (Y (G 1 )) + rk HF (Y (G 2 )).
The induction hypothesis is equivalent to the statement that for i = 1, 2, HF (Y (G i )) are free and rk HF (Y (G i )) = |H 1 (Y (G i ); Z)|, which in turn gives the corresponding equation for the graph G 3 .
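As a concrete instance of this induction (an added illustration): let G 1 be the linear graph with two vertices of weights (2, k), k ≥ 1, and let v be the weight-k leaf. Then Y (G 1 ), Y (G 2 ) and Y (G 3 ) are lens spaces with |H 1 | equal to 2k − 1, 2 and 2k + 1 respectively, and the rank identity above reads (2k + 1) = (2k − 1) + 2, in agreement with rk HF = |H 1 | for each of them.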
Proof of Theorem 1.4. According to the definition of Φ, if X is a smooth fourmanifold which can be separated along a rational homology three-sphere Y into X = X 1 ∪ Y X 2 so that b + 2 (X i ) > 0, then Y constitutes an admissible cut for the definition of Φ. If HF + red (Y ) = 0, then the invariant Φ must vanish identically. Thus, in this case, the existence of such a decomposition along a graph manifold satisfying the hypotheses of Theorem 7.1 gives a vanishing result which is inconsistent with Theorem 1.1.
In the case where Y is not a rational homology three-sphere, it is formed as a connected sum of a rational homology three-sphere (as in Theorem 7.1) with a collected of copies S 2 × S 1 . It follows from the behaviour of Floer homology under connected sums (c.f. [16]) that HF + red (Y, M) = 0 for any choice of twisted coefficient system M over Y , so we again get a vanishing result for Φ for any smooth four-manifold which admits the hypothesized decomposition along Y .
Theorem 1.4. Let Y = Y (G, m) be a plumbed three-manifold, where (G, m) satisfies the following conditions:
[α 1 ] + [α 2 ] + [δ] = 0, [ǫ] + [α g+1 ] + [α 1 ] = 0, [α 2 ] + ... + [α g+1 ] = 0 (See Figure 2 for an illustration in the case where g = 4.)
Figure 1. Generators of the mapping class group. Dehn twists about the pictured curves {α 1 , ..., α g , β 1 , ..., β g , δ} generate the mapping class group. The additional curve ǫ is discussed in the proof of Theorem 2.2.
Figure 2. Monoid generators of the mapping class group, g = 4. Dehn twists about the pictured curves {α 1 , ..., α 5 , β 1 , ..., β 4 , ǫ, δ} generate the mapping class group as a monoid. The symmetry Ψ described in the proof of Theorem 2.2 is realized by a 90° clockwise rotation of this picture.
Here, Θ + and Θ − are bottom- and top-dimensional generators of HF + (S 3 ) and HF − (S 3 ) respectively. Proposition 2.6. The invariant Φ W,s,L depends on the cut only through the choice of line L ⊂ H 2 (X; Q).
M{−1, −1, −1} ⇒ M{−1, −1, 0} ⇒ M{−1, −1, 1} ⇒ M{−1, 0, 1} ⇒ M{−1, 1, 1} ⇒ M{0, 1, 1} ⇒ M{1, 1, 1}.
The two-handles are attached in the obvious manner: for example, to go from M{p, q, r} to M{p + 1, q, r}, we attach a two-handle along an unknot with framing −1 which links the first ring once. Let E denote the negative-definite manifold obtained as a plumbing of two-spheres according to the E8 Dynkin diagram; then ∂E = M{−1, −1, −1}. There is a decomposition of K3 as K3 ∼ = E#X#E. To obtain an admissible cut of the K3 as required in the definition of Φ (c.f. Definition 8.3 of [18]), we cut the surface along N = M{−1, −1, 1}, to get the decomposition of K3 − B 4 − B 4 as
gr(F W,s (ξ)) − gr(ξ) = (c 1 (s) 2 − 2χ(W ) − 3sgn(W ))/4.    (5)
Lemma 4.2. For the cobordism E − B 4 from S 3 to M{−1, −1, −1}, endowed with the Spin c structure obtained by restricting k, the generator in HF − −2 (S 3 ) is mapped to the generator of HF − 0 (Σ(2, 3, 5)).
Lemma 4.3. For the cobordism W : M{−1, −1, −1} ⇒ M{−1, −1, 0} ⇒ M{−1, −1, 1} (endowed with the Spin c structure obtained by restricting k), the induced map Z ∼ = HF − 0 (M{−1, −1, −1}) −− F − W,s −−→ HF − −1 (M{−1, −1, 1}) ∼ = Z is an isomorphism. Moreover, if we equip W with any other Spin c structure, the induced map is trivial.
Proof of Proposition 4.1. According to Lemmas 4.2 and 4.3, generator of HF − −2 (S 3 ) is mapped to the generator of HF − −1 (M{−1, −1, 1}) ∼ = Z. Now, δ −1 of that generator is the generator of HF + red,0 (M{−1, −1, 1}) ∼ = Z. Investigating the four exact sequences connectingM{−1, −1, 1}, M{−1, 0, 1}, M{−1, ∞, 1} ∼ = S 3 , M{−1, ∞, 1} ∼ = S 3 , M{−1, 0, 1}, M{−1, 1, 1} , M{−1, 1, 1}, M{0, 1, 1}, M{∞, 1, 1} ∼ = S 3 , M{∞, 1, 1} ∼ = S 3 , M{0, 1, 1}, M{1, 1, 1} ,we see that the mapZ ∼ = HF + red,0 (M{−1, −1, 1}) −→ HF + −2 (M{1, 1, 1}) ∼ = Zinduced by summing the maps induced by all Spin c structures on the composite cobordism from X 2 −N is an isomorphism. In fact, by finding square zero tori which intersect the homology classes coming from H 2 (M{−1, 0, 1}; Z) and H 2 (M{0, 1, 1}; Z) in X 2 − N and applying the adjunction inequality (as in the proof of Lemma 4.3), we see that the only Spin c structure which contributes to this sum is the one with trivial first Chern class. Finally, the mapHF + −2 (M{1, 1, 1}) −→ HF + 0 (S 3 )is an isomorphism (for the given Spin c structure) once again, in view of the dimension formula and the fact that N − B 4 has negative-definite intersection form (Proposition 10.4 of[15]).
Theorem 5.3. Let π : W −→ D be a relatively minimal Lefschetz fibration over the disk with fiber genus g > 1, and let Y = −∂W . Then, there is a unique Spin c structure s over W for which c 1 (s), [F ] = 2 − 2g, and the induced map
HF + (Y, t) is a free Abelian group of rank Σ ∞ j=1 j a |i|+j .
Acknowledgements. It is our pleasure to thank András Stipsicz for some very helpful discussions.
References
[1] D. Auroux and L. Katzarkov. The degree doubling formula for braid monodromies and Lefschetz pencils. Preprint, 2000.
[2] S. K. Donaldson. Irrationality and the h-cobordism conjecture. J. Differential Geom., 26(1):141-168, 1987.
[3] S. K. Donaldson. Polynomial invariants for smooth four-manifolds. Topology, 29(3):257-315, 1990.
[4] S. K. Donaldson. Lefschetz pencils on symplectic manifolds. J. Differential Geom., 53(2):205-236, 1999.
[5] S. K. Donaldson and I. Smith. Lefschetz pencils and the canonical class for symplectic 4-manifolds. math.SG/0012067, 2000.
[6] Y. Eliashberg. Topological characterization of Stein manifolds of dimension > 2. Internat. J. of Math., 1:29-46, 1990.
[7] R. Fintushel and R. J. Stern. Immersed spheres in 4-manifolds and the immersed Thom conjecture. Turkish J. Math., 19(2):145-157, 1995.
[8] R. Fintushel and R. J. Stern. Using Floer's exact triangle to compute Donaldson invariants, pages 435-444. Number 133 in Progr. Math. Birkhäuser, 1995.
[9] R. E. Gompf and A. I. Stipsicz. 4-manifolds and Kirby calculus, volume 20 of Graduate Studies in Mathematics. American Mathematical Society, 1999.
[10] S. P. Humphries. Generators for the mapping class group. In Topology of low-dimensional manifolds (Proc. Second Sussex Conf., Chelwood Gate, 1977), number 722 in Lecture Notes in Math., pages 44-47. Springer, 1979.
[11] P. B. Kronheimer and T. S. Mrowka. The genus of embedded surfaces in the projective plane. Math. Research Letters, 1:797-808, 1994.
[12] P. B. Kronheimer and T. S. Mrowka. Embedded surfaces and the structure of Donaldson's polynomial invariants. J. Differential Geometry, pages 573-734, 1995.
[13] P. Lisca and G. Matić. Tight contact structures and the Seiberg-Witten invariants. Invent. Math., 129(3):509-525, 1997.
[14] J. W. Morgan, Z. Szabó, and C. H. Taubes. A product formula for Seiberg-Witten invariants and the generalized Thom conjecture. J. Differential Geometry, 44:706-788, 1996.
[15] P. S. Ozsváth and Z. Szabó. Absolutely graded Floer homologies and intersection forms for four-manifolds with boundary. math.SG/0110170.
[16] P. S. Ozsváth and Z. Szabó. Holomorphic disks and three-manifold invariants: properties and applications. math.SG/0105202.
[17] P. S. Ozsváth and Z. Szabó. Holomorphic disks and topological invariants for rational homology three-spheres. math.SG/0101206.
[18] P. S. Ozsváth and Z. Szabó. Holomorphic triangles and invariants for smooth four-manifolds. math.SG/0110169.
[19] P. S. Ozsváth and Z. Szabó. The symplectic Thom conjecture. Ann. of Math., 151(1):93-124, 2000.
[20] P. S. Ozsváth and Z. Szabó. Floer homology for three-manifolds bounding sphere plumbings. In preparation, 2001.
[21] I. Smith. Lefschetz pencils and divisors in moduli space. Geom. Topol., 5:579-608, 2001.
[22] C. H. Taubes. The Seiberg-Witten invariants and symplectic forms. Math. Research Letters, 1(6):809-822, 1994.
[23] C. H. Taubes. More constraints on symplectic forms from Seiberg-Witten invariants. Math. Research Letters, 2(1):9-13, 1995.
[24] C. H. Taubes. The geometry of the Seiberg-Witten invariants. In Proceedings of the International Congress of Mathematicians, Vol. II, pages 493-504, 1998.
Parabolic isometries of the fine curve graph of the torus
Pierre-Antoine Guihéneuf, Emmanuel Militon
May 17, 2023
Abstract. In this article we finish the classification of actions of torus homeomorphisms on the fine curve graph initiated by Bowden, Hensel, Mann, Militon, and Webb in [BHM + 22]. This is made by proving that if f ∈ Homeo(T 2 ), then f acts elliptically on C † (T 2 ) if and only if f has bounded deviation from some v ∈ Q 2 \ {0}. The proof involves some kind of slow rotation sets for torus homeomorphisms.
1 I.e. non contractible.
2 It follows from [MM99, Proposition 4.6] and the Nielsen-Thurston classification [Thu88].
Introduction
The fine curve graph C † (S) of a closed surface S was introduced by Bowden, Hensel, and Webb [BHW22] to give a counterpart of the classical curve graph adapted to the study of the group of all homeomorphisms of S. More precisely, the classical curve graph C(S) has vertex set the isotopy classes of essential simple closed curves on S, with edges between pairs of isotopy classes that can be realized disjointly (a slight modification is needed for genus 1 surfaces). Note that the natural action of a homeomorphism on curves quotients down to an action of the mapping class group Map(S) on the curve graph C(S) by isometries. The Gromov hyperbolicity (or equivalently δ-hyperbolicity) of C(S), showed by Masur and Minsky [MM99,MM00], then implies numerous geometric and algebraic properties of the mapping class group Map(S) (e.g. [BBF15,BKMM12,BM08,DGO16,Mah11]).
In this paper we will focus on the case of the torus S = T 2 . Let us give the precise definition of the fine curve graph in this context. Definition 1. The fine curve graph on the torus T 2 is the graph C † (T 2 ) whose vertices are essential 1 simple loops. There is an edge between two vertices α and β if and only if the loops α and β have at most one intersection point.
As a consequence of the Gromov hyperbolicity of the classical curve graphs for punctured surfaces, it was proved in [BHW22] that the fine curve graph C † (S) is Gromov hyperbolic. This enables the authors to use large scale geometry techniques to study Homeo(S) via its action on C † (S). As an application, they prove that, for any closed surface S of genus ≥ 1, stable commutator length and fragmentation norm on Homeo 0 (S) are unbounded, answering a question posed by Burago, Ivanov, and Polterovich [BIP08].
In the same way as the mapping class group Map(S) acts on C(S) by isometries, the whole homeomorphism group Homeo(S) acts on C † (S) by isometries. Gromov has classified isometries of Gromov hyperbolic spaces [Gro87,paragraph 8], [BH99], according to the asymptotic translation length, defined for an isometry g of a Gromov hyperbolic space X as
|g| X = lim n→+∞ (1/n) d X (x, g n (x)).
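For instance (a standard illustration, independent of the fine curve graph): for an isometry g of the hyperbolic plane, |g| X equals the translation length along the axis when g is loxodromic, and it vanishes when g is elliptic or parabolic.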
It is a standard exercise to see that this limit exists and is independent of x. This independence immediately implies that the asymptotic translation length is a conjugacy invariant of isometries of X. Gromov classification is then as follows: for g an isometry of a Gromov hyperbolic space, g is
• Hyperbolic if the asymptotic translation length is positive;
• Parabolic if the asymptotic translation length is zero but g has no finite diameter orbits, and
• Elliptic if g has finite diameter orbits.
There is an equivalent reformulation of this trichotomy in terms of fixed points on the Gromov boundary of X, but we do not require this point of view in the present work. While there is no mapping class acting parabolically on 2 C(S), the situation is much richer for the action of homeomorphisms on C † (S): in [BHM + 22], the authors prove that there are homeomorphisms of T 2 acting parabolically on C † (T 2 ). They also initiate a classification of actions of homeomorphisms on C † (T 2 ) in terms of rotational behaviour: they give a criterion of hyperbolicity in terms of rotation set, as well as examples of parabolic and elliptic homeomorphisms. In the present article, we complete their work to give a complete classification of actions of homeormorphisms of T 2 in terms of rotational behaviour.
Definition 2. Let v ∈ R 2 \{0}. We say that f ∈ Homeo(T 2 ) has bounded deviation from direction v if there exists ρ ∈ R 2 and a liftf : R 2 → R 2 of f such that | f n (x)−x−nρ, v | is bounded from above, uniformly in x ∈ R 2 and n ∈ N.
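For instance (elementary examples, not taken from [BHM + 22]): a rigid rotation f = R α has a lift with f n (x) = x + nα, so taking ρ = α the deviation vanishes and f has bounded deviation from every direction; the linear Dehn twist f (x, y) = (x + ky, y), k ∈ Z * , has lift iterates f n (x, y) = (x + nky, y), so its deviation from v = (0, 1) is zero while its deviation from v = (1, 0) is unbounded, in line with the discussion of Dehn twists below.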
Observe that this definition does not depend on the chosen lift f of f . Recall the classification of homotopy classes for torus homeomorphisms: if f ∈ Homeo(T 2 ), then f or f 2 is homotopic either to the identity, or to a Dehn twist (defined in Section 1.1), or to a linear Anosov automorphism.
A homeomorphism homotopic to a linear Anosov automorphism has unbounded deviation in any direction, and a homeomorphism homotopic to a Dehn twist has unbounded deviation in any direction but possibly one. To see this, consider one point of R 2 and some nontrivial integer translate of it, and look at the deviation between these two points (at some point one has to use the fact that the eigendirections of linear Anosov automorphisms have irrational slope).
Note that if f ∈ Homeo(T 2 ) is homotopic to identity and has bounded deviation from two non-collinear directions, then it has bounded deviation from any direction.
The main theorem of this article is the following.
Theorem A. Let f ∈ Homeo(T 2 ). Then f acts elliptically on C † (T 2 ) if and only if f has bounded deviation from some v ∈ Q 2 \ {0}.
Combined with the results of [BHM + 22], this theorem implies a complete classification of the action of homeomorphisms on C † (T 2 ) in terms of rotational behaviour.
We denote by Homeo 0 (T 2 ) the connected component of identity in the group Homeo(T 2 ); it coincides with the set of homeomorphisms that are homotopic to identity. We denote the rotation set of f by ρ(f ). It is a compact convex subset of the plane capturing the rotational behaviour of the homeomorphism (see Section 1.1 for precise definitions).
Corollary 3. Let f ∈ Homeo 0 (T 2 ). Then • f acts hyperbolically on C † (T 2 ) if and only if ρ(f ) has nonempty interior;
• f acts parabolically on C † (T 2 ) if and only if ρ(f ) is a segment of irrational direction, or ρ(f ) is a segment of rational direction not passing through a rational point, or f is a pseudo-rotation 3 with unbounded deviation from any v ∈ Q 2 \ {0};
• f acts elliptically on C † (T 2 ) if and only if ρ(f ) is a segment of rational direction passing through a rational point, or f is a pseudo-rotation with bounded deviation from some v ∈ Q 2 \ {0}.
Let us explain how to deduce this corollary from Theorem A. The first point is [BHM + 22, Theorem 1.3]. Hence it suffices to distinguish homeomorphisms that act elliptically from those that do not.
By Passeggi and Sambarino [PS20], any homeomorphism whose rotation set is a segment of rational slope not passing by a rational point, has unbounded deviation, and hence by Theorem A acts parabolically.
Moreover, by Dávalos [Dáv18], any homeomorphism whose rotation set is a segment of rational direction v passing through a rational point has bounded deviation in v ⊥ , and hence by Theorem A acts elliptically.
If we suppose that the Franks-Misiurewicz conjecture [FM90] holds, the above corollary implies the following improvement of the second point: f acts parabolically on C † (T 2 ) if and only if ρ(f ) is a segment of irrational direction, or f is a pseudo-rotation with unbounded deviation from some v ∈ Q 2 \ {0}. In particular any f whose rotation set is a segment with rational direction acts elliptically on C † (T 2 ).
Remark that there are both pseudo-rotations with bounded displacement in some rational direction (like actual rotations) and pseudo-rotations with unbounded displacement in any rational direction (see for [KT14] for rational pseudo-rotations and [KK09] for irrational ones). Hence one cannot distinguish completely the possible types of actions of homeomorphisms on C † (T 2 ) only in terms of rotation sets. Note that Jäger [J09], Jäger-Tal [JT17] and Kocsard [Koc21] gives criteria of semi-conjugation to a rotation for torus homeomorphisms with bounded displacements.
One can give a statement similar to Corollary 3 from the rotation viewpoint:
Corollary 4. Let f ∈ Homeo 0 (T 2 ). Then
• If ρ(f ) has nonempty interior, then f acts hyperbolically on C † (T 2 );
• If ρ(f ) is a segment with irrational slope, then f acts parabolically on C † (T 2 );
• If ρ(f ) is a segment with rational slope passing through a rational point, then f acts elliptically on C † (T 2 );
• If ρ(f ) is a segment with rational slope not passing through a rational point (a case that should never hold according to Franks-Misiurewicz conjecture [FM90]), then f acts parabolically on C † (T 2 );
• If f is a pseudo-rotation with unbounded deviation from any v ∈ Q 2 \ {0}, then f acts parabolically on C † (T 2 );
• If f is a pseudo-rotation with bounded deviation from some v ∈ Q 2 \ {0}, then f acts elliptically on C † (T 2 ).
Theorem A also allows to give a complete classification of the action of homeomorphisms on C † (T 2 ) for the ones that are not homotopic to identity.
As proved in [BHM + 22], any homeomorphism having an iterate homotopic to an Anosov linear automorphism acts hyperbolically on C † (T 2 ). Thus it remains to classify the actions in the case of a homeomorphism having an iterate homotopic to a Dehn twist.
Corollary 5. Let f ∈ Homeo(T 2 ) such that f r is homotopic to a Dehn twist.
• f acts hyperbolically on C † (T 2 ) if and only if ρ(f r ) has nonempty interior;
• f acts parabolically on C † (T 2 ) if and only if ρ(f r ) is a single number, and f has unbounded displacement in the vertical direction;
• f acts elliptically on C † (T 2 ) if and only if ρ(f r ) is a single number, and f has bounded displacement in the vertical direction.
Note that by Addas-Zanata, Tal, and Garcia [AZTG14], if ρ(f r ) is reduced to a single rational number, then f has bounded displacement in the vertical direction and hence acts elliptically on C † (T 2 ). To our knowledge, the question whether the second case is nonempty (i.e. if there exists homeomorphisms homotopic to Dehn twists acting parabolically on C † (T 2 )) is open.
Remark that by [Pas14], on a open and dense subset of Homeo 0 (T 2 ), the rotation set is a polygon with rational vertices, hence the set of parabolic elements of Homeo 0 (T 2 ) is included in a closed set with empty interior.
A last comment: in [LRW22] Le Roux and Wolff prove that any automorphism of a variant of the fine curve graph is realized by some homeomorphism of the surface. They suggest that the same result holds for our definition of the fine curve graph, hence in some sense our classification of actions of homeomorphisms on the fine curve graph covers the whole automorphism group of the fine curve graph.
3. Are there some torus homeomorphisms homotopic to identity acting parabolically but not properly on C † (T 2 )?
4. More generally, what are the sets of possible good limit values? Can they be classified?
A potential master's student of the first author should start thinking about the last three questions soon.
Rotation sets for torus homeomorphisms
Let f ∈ Homeo 0 (T^2) and fix a lift f̃ : R^2 → R^2. The rotation set of f̃ is the set

ρ(f̃) = { v ∈ R^2 | ∃ (x_k)_k in R^2, (n_k)_k → +∞ : (f̃^{n_k}(x_k) − x_k)/n_k → v as k → +∞ }.

A theorem of Misiurewicz and Ziemian [MZ89] states that it is a compact convex subset of R^2 (see also Lemma 8). Some basic properties are straightforward consequences of the definition: for any k ∈ Z, ρ(f̃^k) = k ρ(f̃), and ρ is a conjugacy invariant: if g ∈ Homeo 0 (T^2) and g̃ is a lift of g to R^2, then ρ(g̃ f̃ g̃^{−1}) = ρ(f̃). It depends on the lift of f in the following way: any other lift of f can be written f̃ + v, with v ∈ Z^2, and then ρ(f̃ + v) = ρ(f̃) + v.
We will be inspired by an equivalent formulation of the rotation set in Section 2 to define good limit values: fix a fundamental domain D ⊂ R 2 of the torus (e.g. D = [0, 1] 2 ), then (see [MZ89])
ρ(f̃) = lim_{n→+∞} f̃^n(D)/n,
where the limit holds in the Hausdorff topology (and in particular, the result states that the limit does exist).
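For instance, for the lift f̃ = Id_{R^2} + v of the rigid rotation by a vector v ∈ R^2, one has f̃^n(x) − x = nv for every x and f̃^n(D) = D + nv, so both formulas give ρ(f̃) = {v}; choosing instead the lift f̃ + w with w ∈ Z^2 shifts this rotation set to {v + w}.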
Now, let f ∈ Homeo(T^2) be homotopic to a Dehn twist. By this, we mean that there exists a basis of the torus in which f is homotopic to the linear automorphism (x, y) ↦ (x + ky, y) for some k ∈ Z^*.
Fix a liftf : R 2 → R 2 , and denote by p 2 : R 2 → R the projection on the second coordinate (according to the basis used to define the Dehn twist).
Following [Doe97], we can define the rotation set of f̃ as

ρ(f̃) = { v ∈ R | ∃ (x_k)_k in R^2, (n_k)_k → +∞ : p_2( f̃^{n_k}(x_k) − x_k )/n_k → v as k → +∞ }.
This set is a segment of R. As for homeomorphisms isotopic to the identity, it follows from the definition that for any g ∈ Homeo 0 (T 2 ), we have ρ(f ) = ρ(gfg −1 ). Moreover, two lifts of f to R 2 have rotation sets which differ by an integral translation of R.
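For instance, if f(x, y) = (x + ky, y + α) mod Z^2 is the linear Dehn twist composed with a vertical rigid translation by α ∈ R, then p_2(f̃^n(z) − z) = nα for every z for the obvious lift f̃(x, y) = (x + ky, y + α), so ρ(f̃) = {α}.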
Outline
One way of Theorem A is easy to prove: if the homeomorphism f has bounded deviation from a rational direction, then f acts elliptically on C † (T 2 ). We write down the proof of this implication.
Proof of the "if" part in Theorem A. Let us say that f has bounded deviation from v ∈ Q 2 \ {0}. Without loss of generality, we can suppose that v = (p, q), where either p and q are relatively prime integers, or (p, q) ∈ {(0, 1), (1, 0)}. Let α : R/Z → T 2 be the loop defined by t → t(q, −p). When γ and γ ′ are isotopic loops of T 2 , we denote by C γ (γ ′ ) the number of lifts of γ that are met by a given liftγ ′ of γ ′ . This number does not depend on the chosen liftγ ′ of γ ′ . Note that (as already said before) a homeomorphism homotopic to a linear Anosov automorphism has unbounded deviation in any direction, and a homeomorphism homotopic to a Dehn twist has unbounded deviation in any direction but possibly one. In the last case, this direction is such that the loops (f n (α)) n∈N are all homotopic one to each other.
As f has bounded deviation from direction v = (p, q), there exist ρ ∈ R^2 and a lift f̃ : R^2 → R^2 of f such that |⟨f̃^n(x) − x − nρ, v⟩| is bounded uniformly in x and n. This implies that the sequence (C_{α+nρ}(f^n(α)))_{n≥0} is bounded, where α + nρ is the loop t → α(t) + nρ. Hence, as the loop α + nρ is either disjoint from α or equal to α, the sequence (C_α(f^n(α)))_{n≥0} is bounded. But, by Lemma 4.5 of [BHM + 22], for any n ≥ 0,

C_α(f^n(α)) + 1 ≥ d_{C†(T^2)}(α, f^n(α)),

so the orbit of α under f in C†(T^2) is bounded.
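To illustrate the counting argument in the simplest case v = (0, 1): the loop α is then the horizontal circle t ↦ (t, 0), the lifts of α are the horizontal lines {y = m}, m ∈ Z, and a lift of f^n(α) whose vertical extent is at most E meets at most ⌊E⌋ + 1 of these lines; bounded deviation from (0, 1) provides exactly such a bound E on the vertical extent of any lift of f^n(α), uniformly in n.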
To prove the direct implication, we suppose that f has unbounded deviation from any rational direction; we want to prove that, in this case, f does not act elliptically on C † (T 2 ).
The first step is to define good limit values (Section 2), that are analogs of rotation sets capturing sublinear speeds: instead of dividing by the time, one divides by the diameter of the iterate of the fundamental domain. An important property is that good limit values are convex, as stated in Lemma 8.
We then state a (non) ellipticity criterion in terms of good limit values. This criterion (Proposition 9) improves a criterion given in [BHM + 22] and relies on branched covering maps of the torus by square tiled surfaces.
The next step is done in Section 4. We set a dichotomy for possible shapes of good limit values: either one of them contains a segment of irrational slope, or there exists a rational line containing any good limit value. In the first case, the criterion set in the previous section applies and shows that the action of the homeomorphism is non elliptic. It then remains to treat the second case.
The final argument is given in Section 5. We show that if any good limit value is contained in a single rational line, and if f has unbounded deviation from any rational direction, then (a modified version of) some good limit value contains a segment of irrational slope. This allows to apply once again the parabolicity criterion.
Good limit values
In this section we define good limit values, that could also be called "slow rotation sets": they capture the rotational behaviour in the case when the rotation speed is sublinear.
For any n ≥ 0, we denote by d_n the diameter of f̃^n(D), where D is the fundamental domain [0, 1]^2 of the torus T^2. The following lemma implies that there is a subsequence of (d_n)_n which tends to +∞. In the sequel we will need a more precise result which is the second part of this lemma.
Lemma 6. If f ∈ Homeo(T 2 ) is not homotopic to a linear Anosov automorphism and has unbounded deviation in some direction, then
sup n∈N d n = +∞.
More precisely, if there exist v ∈ R^2, ρ ∈ ρ(f̃), (n_k) going to infinity, and a sequence of points (x_k) such that for any k ∈ N,

|⟨f̃^{n_k}(x_k) − x_k − n_k ρ, v⟩| ≥ k,

then

lim_{k→+∞} sup_{x,y∈D} ⟨f̃^{n_k}(x) − f̃^{n_k}(y), v⟩ = +∞.   (2.1)
Let us explain this statement. If f ∈ Homeo(T 2 ) has no iterate homotopic to identity, then sup n∈N d n = +∞, so the first part is relevant only in the case f ∈ Homeo 0 (T 2 ). If f is homotopic to a Dehn twist about the horizontal direction, then (2.1) holds for any v / ∈ R(0, 1), so the lemma is only relevant in the case v ∈ R(0, 1), in which case one can define
⟨f̃^{n_k}(x_k) − x_k − n_k ρ, v⟩ := p_2( f̃^{n_k}(x_k) − x_k − n_k ρ ).
In the case where f is homotopic to a linear Anosov automorphism, then (2.1) holds for any direction v, except possibly the stable direction of the automorphism (which is irrational).
Proof. We prove the lemma in the case f ∈ Homeo 0 (T 2 ), the case where f is homotopic to a Dehn twist is identical.
Suppose that f has unbounded deviation in some direction v. In particular, there exist ρ ∈ ρ(f̃), (n_k) going to infinity, and a sequence of points (x_k) such that for any k, |⟨f̃^{n_k}(x_k) − x_k − n_k ρ, v⟩| ≥ k.
Taking the image of each x k under an integral translation if necessary, we can suppose that the points x k all belong to D. Without loss of generality, by taking a subsequence if necessary, we can suppose that each of those scalar products are positive.
If the set { ρ, v | ρ ∈ ρ(f )} is a nontrivial segment, then the conclusion of the lemma is straightforward. If not, then the quantity of (2.1) does not depend on the choice of ρ ∈ ρ(f ).
Suppose that the last limit of the lemma does not hold. Then, by considering a subsequence if necessary, it holds that
sup_{k∈N} sup_{(x,y)∈D^2} ⟨f̃^{n_k}(x) − f̃^{n_k}(y), v⟩ = R < +∞.

Then for any y ∈ D, applying this to x = x_k, we have (considering that n_0 = 0)

⟨f̃^{n_k}(y) − y − n_k ρ, v⟩ = ⟨f̃^{n_k}(y) − f̃^{n_k}(x_k), v⟩ + ⟨f̃^{n_k}(x_k) − x_k − n_k ρ, v⟩ + ⟨x_k − y, v⟩ ≥ k − 2R.

Observe that the left side of the above inequality does not change if we replace the point y by one of its integral translates, so that the inequality actually holds for any y ∈ R^2. By choosing k ≥ 2R + 1, we get ⟨f̃^{n_k}(y) − y − n_k ρ, v⟩ ≥ 1. As this holds for any y ∈ R^2, one can iterate: for any z ∈ R^2 and any ℓ ∈ N, we have ⟨f̃^{ℓ n_k}(z) − z − ℓ n_k ρ, v⟩ ≥ ℓ. In particular, this implies that there exists ρ′ ∈ ρ(f̃) \ (ρ + R v^⊥), a contradiction.
Fix f ∈ Homeo(T^2) and a point x̃_0 ∈ int(D). For any subset A of R^2, any λ > 0 and any v ∈ R^2, let us denote λA + v = {λa + v | a ∈ A}. For any n ≥ 0, let

A_n = (1/d_n) ( f̃^n(D) − f̃^n(x̃_0) ).

More generally, fix a sequence (a_k)_{k≥0} of positive real numbers as well as a sequence (n_k)_{k≥0} of integers with n_k → +∞. For any k ≥ 0, define

B_k = (1/a_k) ( f̃^{n_k}(D) − f̃^{n_k}(x̃_0) ).   (2.2)
Asx 0 ∈ D, observe that, for any n ≥ 0, 0 ∈ A n . Moreover, by definition of (d n ), for any n ≥ 0, the set A n is contained in the closed unit disc of R 2 and has diameter 1. Recall that the set of compact subsets of the closed unit disc, endowed with the Hausdorff topology, is compact.
We endow the set of closed subsets of R 2 with the following topology, which resembles Hausdorff convergence on any (large) ball. Let φ be the map that to any closed subset F of R 2 associates the compact subset F ∪ {∞} of the Alexandroff compactification of R 2 (which is homotopic to S 2 ). The topology on closed subsets of R 2 we consider is then the initial topology associated to φ and Hausdorff topology on the Alexandroff compactification of R 2 . The set of closed subsets of R 2 endowed with this topology is compact.
Definition 7. We call good limit value of the sequence (A n ) any limit value A ∞ of this sequence such that there exists a subsequence (A n k ) k≥0 which converges to A ∞ with lim k→+∞ d n k = +∞.
Note that a good limit value of the sequence (A n ) is a limit value of the sequence (B k ) associated to some sequence (n k ) and with a k = d n k .
By Lemma 6 -combined with the compactness of the set of compact subsets of the unit disk endowed with Hausdorff topology -if f has unbounded deviation in some direction, then it has at least one good limit value. More generally, for any sequences (a k ) k≥0 of positive real numbers and (n k ) k≥0 of integers with n k → +∞, the sequence (B k ) admits a limit value (not necessarily compact).
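For example, if f is the linear Anosov automorphism induced by a hyperbolic matrix A ∈ SL_2(Z) and f̃ = A, then d_n grows like λ^n, where λ > 1 is the unstable eigenvalue of A, while the width of A^n(D) in the stable direction tends to 0; hence every good limit value of the sequence (A_n) is a segment of diameter 1 contained in the unstable eigendirection of A, a direction with irrational slope.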
The following lemma is a direct adaptation of a result of Misiurewicz and Ziemian [MZ89] asserting that rotation sets are convex.
Lemma 8. Suppose that lim k→+∞ a k = +∞. Then any limit value B ∞ of the sequence
(B k ) is a convex subset of R 2 .
Note that this implies that any good limit value A ∞ of the sequence (A n ) is a convex subset of the closed unit disc.
Proof. Let ξ_1 and ξ_2 be two points of a limit value B_∞ of the sequence (B_k)_k. Then, extracting a subsequence if necessary, there exist sequences (p_k)_k and (q_k)_k of points of D such that

lim_{k→+∞} (1/a_k) ( f̃^{n_k}(p_k) − f̃^{n_k}(x̃_0) ) = ξ_1 and lim_{k→+∞} (1/a_k) ( f̃^{n_k}(q_k) − f̃^{n_k}(x̃_0) ) = ξ_2.

Let ξ = λξ_1 + (1 − λ)ξ_2, with 0 < λ < 1, and let us prove that ξ ∈ B_∞. For any k ≥ 0, let z_k = λ f̃^{n_k}(p_k) + (1 − λ) f̃^{n_k}(q_k). By Lemma 3.3 of the article [MZ89] by Misiurewicz and Ziemian, the set f̃^{n_k}(D) is √2-quasi-convex, so that for any k ≥ 0 there exists a point r_k ∈ D such that d(f̃^{n_k}(r_k), z_k) ≤ √2. Then the sequence ( (1/a_k)( f̃^{n_k}(r_k) − f̃^{n_k}(x̃_0) ) )_k has the same limit as the sequence ( (1/a_k)( z_k − f̃^{n_k}(x̃_0) ) )_k and this limit is ξ. Hence the point ξ belongs to B_∞ and the set B_∞ is convex.
A non-ellipticity criterion
Let f ∈ Homeo(T 2 ) and x 0 ∈ T 2 . The goal of this section is to prove the following criterion, which generalizes Section 6 of [BHM + 22].
Proposition 9 (Criterion of non-ellipticity). Suppose that the following holds:
1. lim k→+∞ a k = +∞.
2. There exists a sequence (w k ) k of vectors of R 2 such that the sequence
(B_k + w_k)_{k≥0} = ( (1/a_k) ( f̃^{n_k}(D) − f̃^{n_k}(x̃_0) ) + w_k )_{k≥0}
of compact subsets of R 2 converges to some closed subset B ∞ for the topology defined before Definition 7.
3. The set B ∞ contains a nontrivial segment of irrational slope.
Then f does not act as an elliptic isometry of C † (T 2 ).
Recall that by [BHM + 22, Theorem 1.3], f acts hyperbolically on C † (T 2 ) if and only if int(ρ(f )) = ∅. Hence this criterion can be used to prove that some homeomorphism acts parabolically on C † (T 2 ).
To prove this proposition, we need to introduce some notation and three lemmas. For any integer m > 0 we let
T 2 m = R/mZ × R/mZ.
This space can be seen as a cover of T 2 = R 2 /Z 2 of degree m 2 via the projection T 2 m → T 2 . We denote by p m : R 2 → T 2 m the projection. We endow T 2 m with the translation surface structure which makes the map
T^2 = R/Z × R/Z → T^2_m, (x, y) ↦ (mx, my),

an isomorphism, where T^2 is endowed with the usual translation surface structure. This means that, in comparison to the usual euclidean metric on R^2, distances are multiplied by 1/m on both coordinates (hence the diameter of T^2_m is √2). The first lemma we need is a purely topological lemma. We state it in the case of the torus, which is the case we need, but it is valid on any surface.
Lemma 10. Let γ 1 and γ 2 be two simple paths [0, 1] → T 2 which are homotopic with fixed extremities in T 2 . Then there exists a nonempty open set U ⊂ T 2 such that, for any point p of U , the two paths γ 1 and γ 2 are homotopic with fixed extremities in T 2 \ {p}.
Proof. Define the equivalence relation on T 2 whose equivalence classes are the singletons of points outside γ 1 , and γ 1 . This is the equivalence relation that shrinks the path γ 1 to a point. We denote by T 1 the quotient of T 2 by this equivalence relation and by q 1 the point that is the image of γ 1 in T 1 .
The space T 1 is still a 2-torus and the image α 1 of γ 2 in T 1 is a path. As γ 2 is simple, the path α 1 has only self intersections at the point q 1 and the autointersection points cannot be transverse. Hence the path α 1 is a homotopically trivial loop; it is a union of simple loops based at q 1 which do not meet each other except at q 1 , some of them homotopically trivial and some of them homotopically non trivial (and there is a finite number of such last ones).
The complement of any homotopically trivial simple loop has one component which is homeomorphic to a disk, which we call the interior of such a loop. Take the closure C of the union of the interiors of the homotopically trivial simple loops appearing in the decomposition of α 1 . Observe that C contains the point q 1 , we can shrink C to a point q 2 to obtain a new torus T 2 .
We call α 2 the image of the path α 1 in the torus T 2 . The path α 2 is the concatenation of a finite number of homotopically non trivial simple loops. Moreover, the loop α 2 has auto-intersections only at the point q 2 , none of which are transverse (too see this, use the fact that γ 1 is a simple loop). We then write α 2 as a concatenation of loops β 1 , β 2 , . . . , β r , each of which is a homotopically trivial loop which cannot be written as a concatenation of nontrivial homotopically trivial loops. Fix a liftq 2 of the point q 2 to R 2 and denote byα 2 ,β 1 ,β 2 , . . . ,β r the respective lifts of α 2 , β 1 , β 2 , . . . , β r based atq 2 . Observe that the loopα 2 is the concatenation of the loopsβ 1 ,β 2 , . . . ,β r and that each of the latter loop is simple. Observe that the interior of eachβ i is disjoint from its translates under the group of deck transformations. Hence the interior ofβ i projects injectively to the torus T 2 (because no transverse autointersection is allowed and no deck transformation has a fixed point). We call this projection the interior of the loop β i , and we denote it by I i . Observe also that, for i = j, either I i is contained in I j or I i contains I j or I i and I j are pairwise disjoint. An induction on r shows that the complement of the union of the closures of the I i 's is nonempty. It suffices to take as a set U the preimage in T 2 of this subset of T 2 to prove the lemma.
Remark 11. There is a shorter argument for the proof of this lemma, which uses the folkloric dual function: the curve γ 1 γ −1 2 is homologous to 0. This allows to define a dual function on the complement of its image in the torus, and any point p in which the dual function is equal to 0 suits the conclusion of the lemma.
The three lemmas that follow are essentially proved in [BHM + 22].
Lemma 12. Let α and β be two curves in C † (T 2 ) and m ≥ 1. Then, for any respective liftsα andβ of α and β in T 2 m (via the cover map T 2 m → T 2 ), we have
d_{C†(T^2_m)}(α̃, β̃) ≤ d_{C†(T^2)}(α, β).
Proof. This is a straightforward consequence of Lemma 6.3 in [BHM + 22].
Lemma 13. Fix K > 0. There exists a square-tiled surface Σ(K) such that, for any m > 0 and any p ∈ T 2 m , there exists a branched covering map f m,p : Σ(K) → T 2 m , which is branched only at p, with the following properties.
1. For any two essential simple closed curves α and β of T 2 m with d C † (T 2 m ) (α, β) ≤ K and which do not meet the point p, there exist lifts of α and β to Σ(K) which are disjoint.
2. The map f m,p is a local isomorphism of translation surfaces outside f −1 m,p ({p}).
Proof. The proof is almost identical to the proof of Lemma 6.4 in [BHM + 22].
The following lemma is Lemma 6.5 in [BHM + 22].
Lemma 14. Let ξ ∈ R \ Q and Σ be a square-tiled surface. There exists L ′ > 0 such that any line segment in Σ of slope ξ and of length greater than L ′ meets any horizontal closed curve.
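For instance, when Σ is the torus T^2 itself (a single square), one can take L′ = √(1 + ξ^2)/|ξ|: a segment of slope ξ and length greater than L′ has vertical extent greater than 1, so any of its lifts to R^2 crosses every horizontal line {y = c}, and the segment therefore meets every horizontal closed curve of T^2.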
As a consequence of the above Lemma, we obtain the following corollary.
Corollary 15. Fix ξ and Σ as in the above lemma and take L = 2L ′ , where L ′ is given by the above lemma. Then there exists ε > 0 such that the following property holds. For any line segment S which is ε-close, for the Hausdorff distance, to a line segment of slope ξ and of length greater than L, any path which is homotopic to S with fixed extremities in the complement of singular points in Σ meets any horizontal closed curve.
Proof. Consider the set K consisting of compact connected subsets of Σ(K) which are either segments of length L and slope ξ or a union of two segments of Σ of slope ξ with a common extremity which is a singularity of Σ and whose total length (i.e. the sum of the lengths of those segments) is equal to L. By Lemma 14, any element of K meets any horizontal closed curve of Σ. Moreover, the set K is compact for the Hausdorff topology. For any element S ′ of K, there exists ε S ′ > 0 such that any line segment S which is ε S ′ -close to S ′ meets any horizontal curve. By compactness of K, we can take ε S ′ = ε independent of S ′ . Observe that any such segment S as above has a nonzero algebraic intersection number with any horizontal curve so that Corollary 15 holds.
Proof of Proposition 9. We prove this proposition by contradiction. Suppose that f acts elliptically on C†(T^2). We denote by α the curve t ↦ (t, 0) on T^2 and by α̃ its lift t ↦ (t, 0) to R^2. Then there exists K′ > 0 such that, for any n ≥ 0,

d_{C†(T^2)}(α, f^n(α)) ≤ K′.
Let K = K ′ +1 and observe that, for any essential simple closed curve β which is disjoint from α,
d_{C†(T^2)}(α, f^n(β)) ≤ K.
Apply Lemma 13 to obtain a square tiled surface Σ(K). Take θ ∈ R \ Q such that the set B ∞ contains a nontrivial segment with irrational slope θ. Apply Corollary 15 with this slope and the surface Σ(K). This corollary gives a number L > 0. Fix M > 0 in such a way that the set M B ∞ contains an irrational segment S ∞ with length > L and slope θ. Take a compact subset C whose interior contains this segment S ∞ . Then the sequence of subsets
( (1/⌊a_k/M⌋) ( f̃^{n_k}(D) − f̃^{n_k}(x̃_0) ) + M w_k )_k

converges to the subset M B_∞ for the Hausdorff topology. Take an integer k_0 sufficiently large so that the set

( (1/⌊a_{k_0}/M⌋) ( f̃^{n_{k_0}}(D) − f̃^{n_{k_0}}(x̃_0) ) + M w_{k_0} ) ∩ C is ε-close to the set M B_∞ ∩ C,

where ε is given by Corollary 15, and observe that the set

(1/⌊a_{k_0}/M⌋) ( f̃^{n_{k_0}}(D) − f̃^{n_{k_0}}(x̃_0) ) ∩ (C − M w_{k_0}) is ε-close to the set M (B_∞ − w_{k_0}) ∩ (C − M w_{k_0}). Fix m = ⌊a_{k_0}/M⌋.
Hence there exist two points x, y ∈ D such that the pointf n k 0 (x) −f n k 0 (x 0 ) is mεclose to one extremity of the segment m(S ∞ − M w k 0 ) and the pointf n k 0 (y) −f n k 0 (x 0 ) is mε-close to the other extremity of this segment. Moreover, we choose those points x and y outside any lift of α. In particular, the point p m (f n k 0 (x) −f n k 0 (x 0 )) is ε-close to one extremity of the segment S = p m (m(S ∞ − M w k 0 )) and the point p m (f n k 0 (y) −f n k 0 (x 0 )) is ε-close to the other extremity of this segment. Observe that the length of the segment S is greater than L. Denote by S ′ the line segment in T 2 m joining these two points that remains ε-close to S and let S ′′ = S ′ + p m (f n k 0 (x 0 )). Observe that the segment S ′′ is ε-close to a segment of the same slope and same length as S.
Take a simple pathγ contained in D whose extremities are the points x and y and that does not meet any lift of α. Fix an essential simple closed curve β of T 2 homotopic to α that is disjoint from α and has a lift to R 2 containing the pathγ. Observe that the path p m (f n k 0 (γ)) is homotopic with fixed extremities to S ′′ in T 2 m (because those paths admit lifts to R 2 with the same endpoints). By Lemma 10, there exists a point p of T 2 m , which neither belongs to any lift of α nor to any lift of β, such that those two paths are still homotopic with fixed extremities in T 2 m \ {p}. By Lemma 13, there exists a covering map Σ(K) → T 2 m which is ramified only at the point p. By Corollary 15, any lift of the curve p m (f n k 0 (β)) to the surface Σ(K) meets any horizontal curve of Σ(K). However, the curve p m (f n k 0 (β)) is a lift of the curve f n k 0 (β) to T 2 m so that, by Lemma 12,
d C † (T 2 m ) p m (f n k 0 (β)), p m (α) ≤ K.
Hence the curves p m (f n k 0 (β)) and p m (α) must admit disjoint lifts to Σ(K), which is a contradiction as the latter curve is horizontal.
Possible directions of good limit values
The following proposition gives two possibilities for the possible shapes of the good limit values of the sequence (A n ) n .
Proposition 16. Let f ∈ Homeo 0 (T 2 ). One of the following holds.
1. There exists a good limit value of the sequence (A n ) n which contains a nontrivial segment with irrational slope.
2. There exists a line with rational slope which contains any good limit value of the sequence (A n ) n .
Proof. Suppose that 1. does not hold. Then any good limit value is a convex set with empty interior (any convex set with nonempty interior contains a nontrivial segment with irrational slope). Hence, any good limit value is a segment of rational slope, contained in the closed unit disk, and having diameter 1. We argue by contradiction by supposing that there are at least two different rational directions containing a good limit value. Call these directions θ 1 , θ 2 ∈ P(R 2 ).
We endow P(R 2 ) with a distance δ making it homeomorphic to the circle. For θ ∈ P(R 2 ), denote L θ the line of direction θ passing by 0.
As a first step, we state that if d n is large enough, then A n is very close to a rational segment. This follows from a simple compactness argument.
Claim 17. For any ε > 0, there exists R > 0 such that if d n > R, then there exists θ ∈ P(R 2 ) rational such that the following holds:
∀x ∈ A n , d(x, L θ ) ≤ ε.
(P n,θ,ε )
Proof. Suppose the claim is false. Then there exists ε 0 > 0 and a subsequence (n k ) such that d n k → +∞, and that for any rational direction θ ∈ P(R 2 ), there exists x ∈ A n k with d(x, L θ ) > ε 0 . By taking a subsequence of (n k ) k if necessary, one can suppose that d n k ≥ k and that the sets A n k converge towards some compact set for Hausdorff topology. By the discussion at the beginning of the proof of the proposition, this set can only be a segment with rational slope θ 0 , i.e. contained in some line L θ 0 . Hence, if k is large enough, for any x ∈ A n k we have d(x, L θ 0 ) ≤ ε 0 . This is a contradiction.
It allows to define the following.
Definition 18. A main direction of A n for ε > 0 is a direction θ such that (P n,θ,ε ) holds.
An R-excursion is any integer interval [n 1 , n 2 ] on which any n ∈ [n 1 , n 2 ] satisfies d n ≥ R.
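For instance, for a rigid rotation one has f̃^n(D) = D + nv, so d_n = √2 for every n and there is no R-excursion as soon as R > √2; R-excursions with R arbitrarily large exist only when sup_n d_n = +∞, as provided by Lemma 6 under an unbounded deviation assumption.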
The following says that if R is large enough, then on any R-excursion, the main directions of A n cannot vary a lot.
Lemma 19. For any δ 0 > 0, there exists ε > 0 and R > 0 satisfying: if [n 1 , n 2 ] is an R-excursion, then for any n, n ′ ∈ [n 1 , n 2 ] and any θ, θ ′ such that (P n,θ,ε ) and (P n ′ ,θ ′ ,ε ) hold, we have δ(θ, θ ′ ) < δ 0 .
Note that if the conclusion of the lemma holds for some ε > 0, then it holds for any 0 < ε ′ < ε.
Proof. Suppose it is false. Then there exists δ 0 > 0 such that for any k > 0, there is a kexcursion [n k 1 , n k 2 ] and two directions α k 1 , α k 2 with δ(α k 1 , α k 2 ) ≥ δ 0 such that (P n k 1 ,α k 1 ,1/(2k) ) and (P n k 2 ,α k 2 ,1/(2k) ) hold. By extracting a subsequence, one can suppose that (α k 1 , α k 2 ) converge towards (α 1 , α 2 ), and more precisely that δ(α k 1 , α 1 ) ≤ 1/(2k) and δ(α k 2 , α 2 ) ≤ 1/(2k). In this case, for k large enough, (P n k 1 ,α 1 ,1/k ) and (P n k 2 ,α 2 ,1/k ) hold. We now prove that for n ∈ [n k 1 , n k 2 ], the set of main directions cannot vary a lot between times n and n + 1. Indeed, calling
K_f = 2 max( d(f̃, Id_{R^2}), d(f̃^{−1}, Id_{R^2}) ), we have

d_H( f̃^n(D) − f̃^n(x̃_0), f̃^{n+1}(D) − f̃^{n+1}(x̃_0) ) ≤ K_f.

Hence,

d_H( (f̃^n(D) − f̃^n(x̃_0))/d_n, (f̃^{n+1}(D) − f̃^{n+1}(x̃_0))/d_n ) ≤ K_f/d_n.   (4.1)
But we also have |d n − d n+1 | ≤ K f , so |1 − d n+1 /d n | ≤ K f /d n .
The fact that -given a compact subset A of the unit disc -the map R + ∋ λ → λA is 1-Lipschitz for Hausdorff distance d H implies that
d_H( (f̃^{n+1}(D) − f̃^{n+1}(x̃_0))/d_n, (f̃^{n+1}(D) − f̃^{n+1}(x̃_0))/d_{n+1} ) = d_H( (d_{n+1}/d_n) · (f̃^{n+1}(D) − f̃^{n+1}(x̃_0))/d_{n+1}, (f̃^{n+1}(D) − f̃^{n+1}(x̃_0))/d_{n+1} ) ≤ |1 − d_{n+1}/d_n| ≤ K_f/d_n.
Combined with (4.1), by triangle inequality, this gives
d_H(A_n, A_{n+1}) = d_H( (f̃^n(D) − f̃^n(x̃_0))/d_n, (f̃^{n+1}(D) − f̃^{n+1}(x̃_0))/d_{n+1} ) ≤ 2K_f/d_n.
Using the fact that d n tends to infinity, we deduce that the Hausdorff distance between A n and A n+1 is in O(1/k) (recall that n ∈ [n k 1 , n k 2 ]). Hence, if (θ n ) and (θ ′ n ) are such that (P n,θn,1/k ) and (P n+1,θ ′ n ,1/k ) hold, then δ(θ n , θ ′ n ) tends to 0 as k tends to infinity. This implies for any η > 0, the directions α 1 and α 2 can be joined by an η-chain consisting of accumulation points of sequences (θ n k ), where n k ∈ [n k 1 , n k 2 ], and (P n k ,θn k ,1/k ) holds.
This proves that the set of θ ∈ P(R 2 ) satisfying the following property contains an interval containing both α 1 and α 2 : there exists a sequence n k ∈ [n k 1 , n k 2 ] such that (P n k ,θn k ,1/k ) holds, d n k ≥ k and θ n k → θ. This is a contradiction as such an interval has to contain an irrational direction.
We are now ready to end the proof of Proposition 16. The idea is that the images of the fundamental domain cannot grow in two different directions on two R-excursions of the same length. We first build such two R-excursions, and then use them to get to a contradiction.
Let δ 0 < δ(θ 1 , θ 2 )/3, and consider
α = min{∠(θ, θ ′ ) | δ(θ 1 , θ) < δ 0 , δ(θ 2 , θ ′ ) < δ 0 }.
Take ε > 0 such that sin α ≥ 6ε.
Consider R associated to ε and δ 0 as in Lemma 19. Increasing R if necessary, one can suppose that
R ≥ K_f, with K_f = 2 max( d(f̃, Id_{R^2}), d(f̃^{−1}, Id_{R^2}) ).
We now consider two R-excursions of the same length, one with main directions close to θ 1 , the other with main directions close to θ 2 ; we moreover suppose that on the first one, the diameter increases a lot. More precisely, we first prove that there exists n 1 ≤ n 2 such that:
• [n 1 , n 2 ] is an R-excursion;
• d n 2 ≥ 4R/ε;
• d n 1 ≤ 2R;
• for any n ∈ [n 1 , n 2 ], there exists θ ∈ P(R 2 ) such that δ(θ 1 , θ) < δ 0 and (P n,θ,ε ) holds.
Indeed, by Lemma 19, for any R′ > R, as by hypothesis there are at least two different rational directions containing a good limit value, there exists an infinite number of R′-excursions. Moreover, still by Lemma 19, there exists an infinite number of R′-excursions [n_1, n_2] such that for any n ∈ [n_1, n_2] and any θ such that (P_{n,θ,ε}) holds, we have δ(θ, θ_1) < δ_0. So we can consider such a 4R/ε-excursion [ñ_1, n_2], and n_1 the minimal integer such that [n_1, n_2] is an R-excursion. Trivially, [n_1, n_2] satisfies the first two points and the last point. But, as we have already seen, |d_{n_1} − d_{n_1−1}| ≤ K_f, so d_{n_1} ≤ d_{n_1−1} + K_f ≤ 2R.
Figure 1: Proof of Proposition 16. If f̃^{n_2}(D) and f̃^{n′_2}(D) have more or less the same size but different main directions, then this forces f̃^{n′_2}(D) to have nonempty interior (light red shape): the shape in deep red is impossible for f̃^{n′_2}(D) (recall that n_2 − n_1 = n′_2 − n′_1) because of the shape of the integer translate f̃^{n_2}(D) + w of f̃^{n_2}(D). This forces the relation d_{n′_2} ≫ d_{n_2}, but a symmetric argument implies that d_{n_2} ≫ d_{n′_2}, leading to a contradiction.

As, for any n, |d_{n+1} − d_n| ≤ K_f, observe that there are arbitrarily long R-excursions with one element n of the excursion satisfying (P_{n,θ_2,ε}). As a consequence, using Lemma 19, one can similarly find n′_1 ≤ n′_2 such that:
• [n ′ 1 , n ′ 2 ] is an R-excursion;
• n ′ 2 − n ′ 1 = n 2 − n 1 ;
• d_{n′_1} ≤ 2R;

• for any n ∈ [n′_1, n′_2], there exists θ ∈ P(R^2) such that δ(θ_2, θ) < δ_0 and (P_{n,θ,ε}) holds.
Indeed, consider an (R + (n_2 − n_1)K_f)-excursion [ñ′_1, ñ′_2], consider n′_1 the minimal integer such that [n′_1, ñ′_2] is an R-excursion, and set n′_2 = n′_1 + (n_2 − n_1); the hypothesis on the size of the excursion ensures that [n′_1, n′_2] is an R-excursion.
The contradiction then comes as follows (see Figure 1). Take
x_1 ∈ D such that ‖f̃^{n_2}(x_1) − f̃^{n_2}(x̃_0)‖ ≥ d_{n_2}/2. Note that ‖f̃^{n_1}(x_1) − f̃^{n_1}(x̃_0)‖ ≤ 2R. There exist v_0, v_1 ∈ Z^2 such that f̃^{n_1}(x̃_0) − v_0 and f̃^{n_1}(x_1) − v_1 both belong to the fundamental domain f̃^{n′_1}(D). Hence,

‖v_0 − v_1‖ ≤ ‖(v_0 − f̃^{n_1}(x̃_0)) − (v_1 − f̃^{n_1}(x_1))‖ + ‖f̃^{n_1}(x̃_0) − f̃^{n_1}(x_1)‖ ≤ d_{n′_1} + d_{n_1} ≤ 4R.
Now, take any θ such that δ(θ, θ 2 ) ≤ δ 0 , and consider a unit vector u θ orthogonal to L θ .
dist := d( f̃^{n_2−n_1}(f̃^{n_1}(x̃_0) − v_0) − f̃^{n_2−n_1}(f̃^{n_1}(x_1) − v_1), L_θ )
= |⟨f̃^{n_2}(x̃_0) − f̃^{n_2}(x_1), u_θ⟩ + ⟨v_0 − v_1, u_θ⟩|
≥ ‖f̃^{n_2}(x̃_0) − f̃^{n_2}(x_1)‖ · sin ∠( f̃^{n_2}(x̃_0) − f̃^{n_2}(x_1), θ ) − 4R
≥ (d_{n_2}/2) sin α − ε d_{n_2}   (this is where we use d_{n_2} ≥ 4R/ε)
≥ (d_{n_2}/2) · 6ε − ε d_{n_2} ≥ 2ε d_{n_2}.
As the two pointsf n 1 (x 0 ) − v 0 andf n 1 (x 1 ) − v 1 belong tof n ′ 1 (D), and as (P n ′ 2 ,θ,ε ) holds for some θ such that δ(θ, θ 2 ) ≤ δ 0 , then dist has to be smaller than εd n ′ 2 for some θ such that δ(θ, θ 2 ) ≤ δ 0 . Hence, d n ′ 2 ≥ 2d n 2 . But then one can apply the exact same argument, permuting θ 1 with θ 2 , to deduce that d n 2 ≥ 2d n ′ 2 . This is a contradiction.
Rational case and end of the proof of Theorem A
We now state the last result we need to prove Theorem A.
Proposition 20. Let f ∈ Homeo(T^2). Suppose that any good limit value of the sequence (A_n) is contained in the horizontal axis. Then either f has bounded deviation in the vertical direction, or there exist a sequence (a_n) of positive real numbers tending to infinity and a sequence (w_n) of vectors of R^2 such that some limit value of the sequence

(B_n + w_n)_n = ( (1/a_n) ( f̃^n(D) − f̃^n(x̃_0) ) + w_n )_n

contains B(0, 1).
Let us first show how this proposition implies Theorem A.
Proof of Theorem A. Let f ∈ Homeo 0 (T 2 ), and suppose that in any rational direction, f has no bounded deviation. We want to prove that f does not act elliptically. Apply Proposition 16. In the first case given by this proposition, Proposition 9 implies that f does not act elliptically on C † (T 2 ).
Suppose now that the second case given by Proposition 16 holds: There exists a line with rational slope which contains any good limit value of the sequence (A n ) n . By conjugating with an element of SL 2 (Z) if necessary, we do not lose generality by supposing that this direction is horizontal.
Apply Proposition 20. As f has unbounded deviation in the vertical direction, there exist a sequence (a_k) of positive real numbers tending to infinity and a sequence (w_k) of vectors of R^2 such that some limit value of the sequence (B_k + w_k)_k contains B(0, 1).
The parabolicity criterion (Proposition 9) applies and shows that f does not act elliptically on C † (T 2 ).
Now, suppose that f ∈ Homeo(T^2) has an iterate homotopic to a linear Anosov automorphism A. Then f has unbounded deviation from any direction, and by [BHM + 22, Theorem 5.3], f acts hyperbolically on C†(T^2). Note that we could also use our non-ellipticity criterion (Proposition 9) to conclude that f does not act elliptically: consider (0, 0), (1, 1) ∈ D, then f̃^n(1, 1) − f̃^n(0, 0) = A^n(1, 1); using the fact that these vectors tend to some irrational direction of P(R^2), together with the quasi-convexity of fundamental domains, we conclude that some limit set B_∞ contains a nontrivial segment of irrational slope.
Finally, suppose that f ∈ Homeo(T^2) has an iterate homotopic to a Dehn twist. Conjugating by an element of SL_2(Z) if necessary, we can suppose that this iterate is homotopic to a Dehn twist about the horizontal direction. Suppose that f has unbounded displacement in the vertical direction. Then there is some k ∈ Z^* such that f̃^n(0, 1) − f̃^n(0, 0) = (nk, 1). Hence any good limit value contains a nontrivial horizontal interval. If there is a good limit value which is not included in the horizontal axis, as such a limit value is convex and contains a nontrivial horizontal interval, then Proposition 9 applies and f does not act elliptically on C†(T^2). If not, then any good limit value is included in the horizontal axis, and Proposition 20 allows us to apply Proposition 9 once again to prove that f does not act elliptically on C†(T^2).
Proof of Proposition 20. In this proof, for a set A and r ≥ 0, we denote B(A, r) = {x | d(x, A) ≤ r}.
Suppose that any good limit value of the sequence (A n ) n is included in the horizontal axis, and that f has unbounded displacement in the vertical direction.
Applying the same idea as in the proof of Theorem A (3 paragraphs above), by considering the iterates of (0, 0) and (1, 1), we can see that under these conditions, an iterate of f cannot be isotopic to a linear Anosov automorphism, or a Dehn twist in a direction that is not horizontal (adapting the points (0, 0) and (1, 1) in the latter case if necessary). So some iterate of f is isotopic to the identity, or to a Dehn twist. Hence, replacing f with an iterate of it if necessary, we can suppose that f is homotopic to
1 k 0 0 1 for some k 0 ∈ Z.
Claim 21. Suppose that any good limit value of the sequence (A n ) n is contained in the horizontal axis. Suppose also that for any sequence (a n ) of positive real numbers tending to infinity, and any sequence (w n ) of vectors of R 2 , any limit value of the sequence (B n + w n ) n does not contain B(0, 1).
Then there exists C > 0 and, for any n ≥ 0, a line L n passing through 0 and of direction θ n such that:
• θ n tends to the horizontal direction (1, 0);
• f̃^n(D) − f̃^n(x̃_0) ⊂ B(L_n, C).
If moreover f has unbounded deviation in the vertical direction then, up to taking a subsequence, we can moreover suppose the following:
• the projection off n (D) on the vertical axis has length h n tending to infinity.
Proof. For any n ∈ N, let x n , y n ∈ D such that d(f n (x n ),f n (y n )) = d n . Let L ′ n be the line passing byf n (x n ) andf n (y n ).
Let b n = max{d(f n (z), L ′ n ) | z ∈ D} and z n ∈ D be such that d(f n (z n ), L ′ n ) = b n . If sup n b n = C/2 < +∞, the two first points of the claim are proved, by setting L n = L ′ n −f n (x 0 ): in this case the distance of any point off n (D) −f n (x 0 ) to L n is smaller than 2C/2 = C.
Otherwise, there exists a subsequence (n_k) along which b_{n_k} ≥ k. Let q_k be the orthogonal projection of f̃^{n_k}(z_{n_k}) on L′_{n_k}, and set a_k = √(b_{n_k}). Note that q_k ∈ [f̃^{n_k}(x_{n_k}), f̃^{n_k}(y_{n_k})] because of the definition of x_n and y_n (for a, b ∈ R^2, we denote by [a, b] the affine segment between the points a and b). Let also

w_k = ( f̃^{n_k}(x̃_0) − q_k ) / a_k.
Then the set
B_{n_k} + w_k = (1/a_k) ( f̃^{n_k}(D) − f̃^{n_k}(x̃_0) ) + w_k = (1/a_k) ( f̃^{n_k}(D) − q_k )

contains the two points (1/a_k)( f̃^{n_k}(x_{n_k}) − q_k ) and (1/a_k)( f̃^{n_k}(y_{n_k}) − q_k ), which belong to the line L_{n_k} = L′_{n_k} − q_k passing through 0, which are at distance d_{n_k}/a_k ≥ √k from each other, and such that the segment between these points contains 0; this set also contains the point (1/a_k)( f̃^{n_k}(z_{n_k}) − q_k ), which is at distance ≥ √k from the line L_{n_k} = L′_{n_k} − q_k. By Lemma 8, we deduce that any limit value of the sequence (B_{n_k} + w_k)_k contains a quarter of disk (centered at 0) of radius 10. By modifying a bit the sequence (w_k) to (w′_k), i.e. by applying a translation to each set B_{n_k} + w_k, we get a limit value of the sequence (B_n + w′_n)_n containing B(0, 1).
For the last point of the claim, define e n = sup f n (x) −f n (y), (0, 1) | x, y ∈ D 2 the "diameter off n (D) in the vertical direction". By Lemma 6, using that f has unbounded deviation in the vertical direction, we have a subsequence (n k ) along which lim e n k = +∞. This proves the last point.
Now, up to increasing the constant C of Claim 21 if necessary, suppose C ≥ 2. Let m = 5⌈C⌉ and n 1 ∈ N such that h n 1 ≥ 20m and |θ n 1 | ≤ 1/100 (by Claim 21). We denote by p 1 the projection on the first (horizontal) coordinate, and p 2 the projection on the second (vertical) one.
Let n_2 ≥ n_1 be such that |θ_{n_2}| ≪ |θ_{n_1}|, |tan θ_{n_2}| d_{n_1} ≤ C and d_{n_2} ≫ d_{n_1} + k_0 n_1 C. Let x_2^-, x_2^+, x ∈ f̃^{n_2}(D) be such that p_1(x_2^-) = min(p_1|_{f̃^{n_2}(D)}), p_1(x_2^+) = max(p_1|_{f̃^{n_2}(D)}), and p_1(x) = (1/2)( min(p_1|_{f̃^{n_2}(D)}) + max(p_1|_{f̃^{n_2}(D)}) ) (see Figures 2 and 3). Let v ∈ Z^2 be such that x ∈ f̃^{n_1}(D) + v. We denote by D_0 the integer translate of D satisfying f̃^{n_1}(D) + v = f̃^{n_1}(D_0). Let x_1^-, x_1^+ ∈ f̃^{n_1}(D_0) be such that d(x_1^-, x_1^+) = d_{n_1} and p_1(x_1^-) < p_1(x_1^+). We suppose that

p_{L_{n_1}}(x) ≥ (1/2)( p_{L_{n_1}}(x_1^-) + p_{L_{n_1}}(x_1^+) )   (5.1)

(we identify L_{n_1} with R); the other inequality can be treated identically. Similarly, we suppose that θ_{n_1} > 0. Let n ∈ [m, 2m] ∩ N. Then

f̃^{n_1}( D_0 + (0, n) ) = f̃^{n_1}(D_0) + w_n, with w_n = (k_0 n_1 n, n).
Let γ_1 be a path included in f̃^{n_1}(D_0) linking x_1^- to x_1^+, and γ_2 a path included in f̃^{n_2}(D) linking x_2^- to x_2^+. We want to prove that the paths γ_1 + w_n and γ_2 intersect for any n ∈ [m, 2m]. We first define an affine shear mapping A with linear part of the form (x, y) ↦ (x, y − x tan θ′_{n_2}) for some angle θ′_{n_2}, such that the abscissa of Ax is 0, and that the points Ax_2^-, Ax_2^+ are on the horizontal axis {y = 0}. Note that it forces the angle θ′_{n_2} to be close to θ_{n_2}, in the sense that θ′_{n_2} − θ_{n_2} ≪ θ_{n_2} (because of the bound by C, and the fact that h_{n_2} goes to infinity).
Let us write Ax_1^- = (a_1^-, b_1^-), Ax_1^+ = (a_1^+, b_1^+), Ax_2^- = (−M, 0) and Ax_2^+ = (M, 0). The fact that d_{n_2} ≫ d_{n_1} implies that

max(|a_1^-|, |a_1^+|) ≤ max( d(x, x_1^-), d(x, x_1^+) ) ≤ d_{n_1} ≤ M/2.   (5.2)
Hence, because k_0 n_1 m ≤ 6 k_0 n_1 C ≪ d_{n_2},

max( |a_1^- + k_0 n_1 n|, |a_1^+ + k_0 n_1 n| ) ≤ M;   (5.3)

note that a_1^- + k_0 n_1 n is the abscissa of A(x_1^- + w_n) and a_1^+ + k_0 n_1 n is the abscissa of A(x_1^+ + w_n). The same estimates hold for any point of γ_1 + w_n. We also have

p_2(x_1^- − x) ≤ p_2(P − Q) + 2C = −d(P, Q) sin θ_{n_1} + 2C ≤ −((d_{n_1} − 2C)/2) sin θ_{n_1} + 2C

(see Figure 3 for the notations and the configuration). The last inequality comes from the fact that, by the hypothesis (5.1) on x, we have d(P, Q) ≥ d(P, R)/2. This implies that

b_1^- = p_2(A(x_1^- − x)) ≤ −((d_{n_1} − 2C)/2) sin θ_{n_1} + 2C + |tan θ′_{n_2} · p_1(x_1^- − x)| ≤ −(d_{n_1}/2) sin θ_{n_1} + 3C + |tan θ′_{n_2}| |a_1^-| ≤ −(d_{n_1}/2) sin θ_{n_1} + 3C + 2 |tan θ_{n_2}| d_{n_1},
where the last inequality is a consequence of (5.2). Because we have supposed |tan θ_{n_2}| d_{n_1} ≤ C, we get

b_1^- ≤ −(d_{n_1}/2) sin θ_{n_1} + 5C.
Moreover,
h_{n_1} ≤ p_2(R − P) + 2C = sin θ_{n_1} · d(R, P) + 2C ≤ sin θ_{n_1} · d_{n_1} + 2C + 2C ≤ sin θ_{n_1} · d_{n_1} + 4C,

so b_1^- ≤ −(h_{n_1} − 4C)/2 + 5C ≤ −h_{n_1}/2 + 7C.
Hence, because h n 1 ≥ 20m, 6C ≤ m and n ≤ 2m, b − 1 + n ≤ −10m + 8C + 2m = 8(C − m) ≤ −C.
(5.4)
Using the fact that n ≥ m ≥ 5C and that (because θ_{n_1} ≥ 0) p_2(x_1^+) ≥ −2C (see Figure 3), we get b_1^+ + n ≥ C. The paths γ_1 + w_n and γ_2 then meet, thanks to the following lemma.
Lemma 22. Let M, C ∈ R_+, and γ_2 be a path of R^2 linking the points (−M, 0) and (M, 0) of R^2, that is included in [−M, M] × [−C, C]. Let γ_1 be a path of R^2 linking the points (a_1^-, b_1^-) and (a_1^+, b_1^+) of R^2, that is included in (−M, M) × R, with b_1^- < −C and b_1^+ > C. Then the paths γ_1 and γ_2 intersect.

Proof. It suffices to define the path α_2 by concatenating (−∞, −M) × {0}, γ_2 and (M, +∞) × {0}. This is a Jordan loop of the Alexandroff compactification S^2 of R^2, which is isotopic — with an isotopy with support included in [−M, M] × [−C, C] — to the path R × {0}. It is then easy to see that the points (a_1^-, b_1^-) and (a_1^+, b_1^+) lie in different connected components of S^2 \ α_2, and hence that γ_1 and α_2 intersect. But it is also easy to see that any intersection point cannot belong to (−∞, −M) × {0} or (M, +∞) × {0}; this implies that γ_1 and γ_2 intersect.

From this we deduce that for any n ∈ [m, 2m], the paths γ_1 + w_n and γ_2 intersect. Hence, for any n ∈ [m, 2m] ∩ N, we have

f̃^{n_2}(D) ∩ ( f̃^{n_1}(D_0) + w_n ) ≠ ∅, and hence, applying f̃^{−n_1}, f̃^{n_2−n_1}(D) ∩ ( D_0 + (0, n) ) ≠ ∅.

This implies that there exist two points z_1, z_2 ∈ f̃^{n_2−n_1}(D) such that |p_1(z_1) − p_1(z_2)| ≤ 1 and |p_2(z_1) − p_2(z_2)| ≥ 3C. This contradicts the fact that n_2 − n_1 satisfies the first two points of Claim 21: if θ_{n_2−n_1} is small enough, such a property is incompatible with f̃^{n_2−n_1}(D) − f̃^{n_2−n_1}(x̃_0) ⊂ B(L_{n_2−n_1}, C).

Figure 2: End of proof of Proposition 20 in the case k_0 = 0. In the case k_0 ≠ 0 there is a shear appearing in the translates of D.

Figure 3: End of proof of Proposition 20: estimation of p_2(x_1^- − x). The length d(P, R) of the red segment is bigger than d_{n_1} − 2C.

We call pseudo-rotation a homeomorphism f ∈ Homeo_0(T^2) whose rotation set is reduced to a single point.

The authors have a strategy for a characterization of homeomorphisms isotopic to identity acting hyperbolically on C†(S_g): they should be the ones with nonempty interior homological rotation set, and the ones with a pseudo-Anosov mapping class when removing some periodic orbit. It should be the subject of a future work.

Acknowledgments

The first author was supported by a PEPS-JCJC grant. The second author was supported by the ANR project Gromeov ANR-19-CE40-0007. The authors warmly thank Alejandro Kocsard and Roberta Shapiro for their useful comments about the first version of this article.
References

Salvador Addas-Zanata, Fábio A. Tal, and Bráulio A. Garcia, Dynamics of homeomorphisms of the torus homotopic to Dehn twists, Ergodic Theory Dyn. Syst. 34 (2014), no. 2, 409-422.

Mladen Bestvina, Ken Bromberg, and Koji Fujiwara, Constructing group actions on quasi-trees and applications to mapping class groups, Publ. Math., Inst. Hautes Étud. Sci. 122 (2015), 1-64.

Martin R. Bridson and André Haefliger, Metric spaces of non-positive curvature, Grundlehren Math. Wiss., vol. 319, Berlin: Springer, 1999.

Jonathan Bowden, Sebastian Hensel, Kathryn Mann, Emmanuel Militon, and Richard Webb, Rotation sets and actions on curves, Adv. Math. 408 B (2022), 33.

Jonathan Bowden, Sebastian Hensel, and Richard Webb, Quasi-morphisms on surface diffeomorphism groups, J. Am. Math. Soc. 35 (2022), no. 1, 211-231.

Dmitri Burago, Sergei Ivanov, and Leonid Polterovich, Conjugation-invariant norms on groups of geometric origin, Groups of diffeomorphisms in honor of Shigeyuki Morita on the occasion of his 60th birthday, Tokyo: Mathematical Society of Japan, 2008, pp. 221-250.

Jason Behrstock, Bruce Kleiner, Yair Minsky, and Lee Mosher, Geometry and rigidity of mapping class groups, Geom. Topol. 16 (2012), no. 2, 781-888.

Jason A. Behrstock and Yair N. Minsky, Dimension and rank for mapping class groups, Ann. Math. (2) 167 (2008), no. 3, 1055-1077.

Pablo Dávalos, On annular maps of the torus and sublinear diffusion, J. Inst. Math. Jussieu 17 (2018), no. 4, 913-978.

François Dahmani, Vincent Guirardel, and Denis Osin, Hyperbolically embedded subgroups and rotating families in groups acting on hyperbolic spaces, Mem. Am. Math. Soc., vol. 1156, Providence, RI: American Mathematical Society (AMS), 2016.

H. Erik Doeff, Rotation measures for homeomorphisms of the torus homotopic to a Dehn twist, Ergodic Theory Dyn. Syst. 17 (1997), no. 3, 575-591.

John Franks and Michał Misiurewicz, Rotation sets of toral flows, Proc. Am. Math. Soc. 109 (1990), no. 1, 243-249.

Mikhaïl Gromov, Hyperbolic groups, Essays in group theory, Publ., Math. Sci. Res. Inst. 8, 75-263, 1987.

Tobias Jäger, Linearization of conservative toral homeomorphisms, Invent. Math. 176 (2009), no. 3, 601-616.

T. Jäger and F. Tal, Irrational rotation factors for conservative torus homeomorphisms, Ergodic Theory Dyn. Syst. 37 (2017), no. 5, 1537-1546.

Alejandro Kocsard and Andrés Koropecki, A mixing-like property and inexistence of invariant foliations for minimal diffeomorphisms of the 2-torus, Proc. Am. Math. Soc. 137 (2009), no. 10, 3379-3386.

Alejandro Kocsard, Periodic point free homeomorphisms and irrational rotation factors, Ergodic Theory Dyn. Syst. 41 (2021), no. 10, 2946-2982.

Andres Koropecki and Fabio Armando Tal, Area-preserving irrotational diffeomorphisms of the torus with sublinear diffusion, Proc. Amer. Math. Soc. 142 (2014), no. 10, 3483-3490.

Frédéric Le Roux and Maxime Wolff, Automorphisms of some variants of fine graphs, 2022.

Joseph Maher, Random walks on the mapping class group, Duke Math. J. 156 (2011), no. 3, 429-468.

Howard A. Masur and Yair N. Minsky, Geometry of the complex of curves. I: Hyperbolicity, Invent. Math. 138 (1999), no. 1, 103-149.

Howard A. Masur and Yair N. Minsky, Geometry of the complex of curves. II: Hierarchical structure, Geom. Funct. Anal. 10 (2000), no. 4, 902-974.

Michał Misiurewicz and Krystyna Ziemian, Rotation sets for maps of tori, J. London Math. Soc. (2) 40 (1989), no. 3, 490-506.

Alejandro Passeggi, Rational polygons as rotation sets of generic homeomorphisms of the two torus, J. Lond. Math. Soc., II. Ser. 89 (2014), no. 1, 235-254.

Alejandro Passeggi and Martín Sambarino, Deviations in the Franks-Misiurewicz conjecture, Ergodic Theory Dyn. Syst. 40 (2020), no. 9, 2533-2540.

William P. Thurston, On the geometry and dynamics of diffeomorphisms of surfaces, Bull. Am. Math. Soc., New Ser. 19 (1988), no. 2, 417-431.
| []
|
[
"Worldsheet computation of heavy-light correlators",
"Worldsheet computation of heavy-light correlators"
]
| [
"Davide Bufalini [email protected] \nMathematical Sciences and STAG Research Centre\nUniversity of Southampton\nSO17 1BJSouthamptonUnited Kingdom\n",
"Sergio Iguri [email protected] \nInstituto de Astronomía y Física del Espacio (IAFE)\nCONICET -Universidad de Buenos Aires\nC. C. 67, Suc. 281428Buenos AiresArgentina\n\nMathematics with Computer Science Program\nGuangdong Technion -Israel Institute of Technol-ogy\n515063Shantou, GuangdongPeople's Republic of China\n",
"Nicolas Kovensky [email protected] \nInstitut de Physique Théorique\nUniversité Paris Saclay\nCEA\nCNRS\nOrme des Merisiers\n91191Gif-sur-Yvette CEDEXFrance\n",
"David Turton [email protected] \nMathematical Sciences and STAG Research Centre\nUniversity of Southampton\nSO17 1BJSouthamptonUnited Kingdom\n"
]
| [
"Mathematical Sciences and STAG Research Centre\nUniversity of Southampton\nSO17 1BJSouthamptonUnited Kingdom",
"Instituto de Astronomía y Física del Espacio (IAFE)\nCONICET -Universidad de Buenos Aires\nC. C. 67, Suc. 281428Buenos AiresArgentina",
"Mathematics with Computer Science Program\nGuangdong Technion -Israel Institute of Technol-ogy\n515063Shantou, GuangdongPeople's Republic of China",
"Institut de Physique Théorique\nUniversité Paris Saclay\nCEA\nCNRS\nOrme des Merisiers\n91191Gif-sur-Yvette CEDEXFrance",
"Mathematical Sciences and STAG Research Centre\nUniversity of Southampton\nSO17 1BJSouthamptonUnited Kingdom"
]
| []
| We compute a large collection of string worldsheet correlators describing light probes interacting with heavy black hole microstates. The heavy states consist of NS5 branes carrying momentum and/or fundamental string charge. In the fivebrane decoupling limit, worldsheet string theory on a family of such backgrounds is given by exactly solvable null-gauged WZW models. We construct physical vertex operators in these cosets, including all massless fluctuations. We first compute a large class of novel heavy-light-light-heavy correlators in the AdS 3 limit, where the light operators include those dual to chiral primaries of the holographically dual CFT. We compare a subset of these correlators to the holographic CFT at the symmetric product orbifold point, and find precise agreement in all cases, including for light operators in twisted sectors of the orbifold CFT. The agreement is highly non-trivial, and includes amplitudes that describe the analogue of Hawking radiation for these microstates. We further derive a formula for worldsheet correlators consisting of n light insertions on these backgrounds, and discuss which subset of these correlators are likely to be protected. As a test, we compute a heavy-light five-point function, obtaining precisely the same result both from the worldsheet and the symmetric orbifold CFT. This paper is a companion to and extension of [1]. | 10.1007/jhep03(2023)066 | [
"https://export.arxiv.org/pdf/2210.15313v1.pdf"
]
| 253,157,344 | 2210.15313 | bd10968eeb4a00ed56d1209e7cc983ba28b8ba13 |
Worldsheet computation of heavy-light correlators
27 Oct 2022
Davide Bufalini [email protected]
Mathematical Sciences and STAG Research Centre
University of Southampton
SO17 1BJSouthamptonUnited Kingdom
Sergio Iguri [email protected]
Instituto de Astronomía y Física del Espacio (IAFE)
CONICET -Universidad de Buenos Aires
C. C. 67, Suc. 281428Buenos AiresArgentina
Mathematics with Computer Science Program
Guangdong Technion -Israel Institute of Technol-ogy
515063Shantou, GuangdongPeople's Republic of China
Nicolas Kovensky [email protected]
Institut de Physique Théorique
Université Paris Saclay
CEA
CNRS
Orme des Merisiers
91191Gif-sur-Yvette CEDEXFrance
David Turton [email protected]
Mathematical Sciences and STAG Research Centre
University of Southampton
SO17 1BJSouthamptonUnited Kingdom
Worldsheet computation of heavy-light correlators
27 Oct 2022Prepared for submission to JHEP
We compute a large collection of string worldsheet correlators describing light probes interacting with heavy black hole microstates. The heavy states consist of NS5 branes carrying momentum and/or fundamental string charge. In the fivebrane decoupling limit, worldsheet string theory on a family of such backgrounds is given by exactly solvable null-gauged WZW models. We construct physical vertex operators in these cosets, including all massless fluctuations. We first compute a large class of novel heavy-light-light-heavy correlators in the AdS 3 limit, where the light operators include those dual to chiral primaries of the holographically dual CFT. We compare a subset of these correlators to the holographic CFT at the symmetric product orbifold point, and find precise agreement in all cases, including for light operators in twisted sectors of the orbifold CFT. The agreement is highly non-trivial, and includes amplitudes that describe the analogue of Hawking radiation for these microstates. We further derive a formula for worldsheet correlators consisting of n light insertions on these backgrounds, and discuss which subset of these correlators are likely to be protected. As a test, we compute a heavy-light five-point function, obtaining precisely the same result both from the worldsheet and the symmetric orbifold CFT. This paper is a companion to and extension of [1].
Introduction
String Theory provides a microscopic description of black holes as being bound states of strings and branes with an exponentially large number of internal microstates [2]. Amongst these microstates, there are coherent pure states, large families of which have been shown to be well-described by smooth and horizonless supergravity solutions, see e.g. [3][4][5][6][7][8][9]. Upon taking an appropriate AdS decoupling limit, these solutions are proposed to correspond to specific families of pure states in the holographically dual CFT (HCFT); precision holography has provided sharp evidence supporting this correspondence [10][11][12][13][14]. While supergravity constructions provide valuable insight into the structure of black hole microstates, it is natural to expect that string-theoretic physics beyond supergravity will be necessary to obtain a complete description of black hole microstructure. A fruitful arena in which to investigate such stringy physics is provided by bound states of NS5 branes carrying fundamental string (F1) and/or momentum charge (P). More specifically, we work in Type IIB compactified on S 1 × T 4 , with n 5 NS5 branes wrapped on S 1 × T 4 , n 1 units of F1 winding on S 1 , and n P units of momentum charge along S 1 .
Upon taking the fivebrane decoupling limit, one obtains asymptotically linear dilaton configurations, which are holographically dual to (doubly scaled) Little String Theory [15,16]. In an appropriate region of the parameter space, there is an AdS 3 regime in the IR, and one can take a further AdS 3 decoupling limit [17]. Upon doing so, one obtains the well-studied NS5-F1 instance of AdS 3 /CFT 2 holography [18,19].
The NSNS vacuum of the holographic CFT corresponds to the global AdS 3 × S 3 × T 4 background, whose worldsheet theory involves an SL(2,R)×SU(2) Wess-Zumino-Witten (WZW) model [20][21][22][23][24]. In recent work, a family of gauged WZW models has been constructed and studied involving the same Lie groups, providing an exact worldsheet description of a set of NS5-F1-P black hole microstates [25][26][27][28][29].
Processes in which light probes interact with a heavy background such as a black hole or a black hole microstate give rise to interesting and computable dynamical observables. Mixed heavy-light (HL) correlators have been previously studied in holography, see e.g. [30][31][32][33][34]. In the NS5-F1 system, there is a locus in moduli space at which the holographic CFT is conjectured to be the N = (4, 4) symmetric product orbifold CFT with target space T 4 N /S N , where N = n 1 n 5 . There is now a substantial body of evidence for this conjecture, see e.g. [10][11][12][13][14][35][36][37][38]. For recent discussions of holography in related systems, see [39][40][41].
For instance, heavy-light-light-heavy (HLLH) four-point functions have been computed in the supergravity approximation and/or in the symmetric product orbifold CFT, for particular sets of heavy and light operators [30,33,34]. Having solvable worldsheet models associated to black hole microstates means we can go much further by taking into account α corrections [25]. Given a worldsheet model describing string dynamics on a heavy background, the relevant quantities correspond to (a particular limit of) integrated correlators of light operators in the worldsheet vacuum.
Worldsheet correlators in global AdS 3 were first studied in [24], building in part on [21,42], and the role of the vertex operators associated with spectrally flowed representations was highlighted. Further studies include [43][44][45][46][47]. The spectrum of chiral primaries and their three-point functions in global AdS 3 × S 3 × T 4 were computed in [35][36][37][38], and shown to match those of the symmetric product orbifold CFT, as studied in [48,49].
The supergravity backgrounds we consider are known as NS5-F1 circular supertubes and spectral flows thereof [50][51][52][53][54][55]. This includes non-BPS spectrally flowed supertubes, known as the JMaRT solutions, after the authors of [53]. The associated worldsheet models are null-gauged WZW models, where before gauging one considers a (10+2)-dimensional target space AdS 3 × S 3 × R t × S 1 y × T 4 . Roughly speaking, in the IR AdS 3 regime, the gauging is concentrated mostly along the t and y directions, while in the linear dilaton regime the gauging is concentrated mostly in the time and angular directions of SL(2,R).
These coset models can also be thought of as marginal current-current deformations of the worldsheet theory for strings in AdS 3 . These are instances of a larger class of deformations that undo the decoupling limit with respect to the F1 harmonic function, i.e. they "add back the 1+" in that function, leading to linear dilaton asymptotics; see e.g. [28]. At the level of the dual field theory, a closely related procedure has been argued to correspond to the so-called single-trace TT irrelevant deformation of the original holographic CFT [56,57], flowing towards a non-local Little String Theory.
In this paper we study string correlators in these highly excited backgrounds. To do so, we first compute a large set of physical vertex operators, in both NSNS and RR sectors, building on [26,28]. These describe linearized perturbations of the background configurations. We focus primarily on coset states in discrete series representations, including worldsheet spectral flow, that are dual to chiral primary operator excitations in the HCFT. When the background is BPS, a subset of these are BPS fluctuations.
The currents being gauged in these cosets are linear combinations of the Cartan generators of the symmetry algebra. Therefore the "m-basis" for vertex operators, in which the actions of these currents are diagonalized, is the natural framework to use. In the IR AdS 3 limit, we describe how these operators are related to their global AdS 3 × S 3 counterparts.
We then compute a large set of correlators in the AdS 3 limit. It is well known that in worldsheet models of global AdS 3 , one can define an "x" variable that corresponds to the local coordinate of the holographic CFT [19]. One of the main novelties of our approach is the identification of the analogous x variable in the coset models we study. This identification requires some care due to the gauging. Indeed, the construction of [19] breaks down, because the SL(2,R) raising and lowering operators do not commute with the BRST charge. A considerable amount of interesting physics follows from this step. It leads, for instance, to the combination of seemingly simple m-basis two-point functions into spacetime-local x-basis correlators with highly non-trivial x-dependence.
Our first main result is of a family of HLLH correlators, for which we obtain fully explicit expressions. In doing so, we show that these correlators assume a remarkably simple structure when written in terms of a covering space related only to the heavy states. From this observation, we obtain our second main result: a closed-form expression for a set of HL worldsheet correlators with an arbitrary number n of massless insertions, in terms of a correlator consisting of n light insertions in global AdS 3 × S 3 . For n = 3 this result can be made completely explicit, and we present a particular example in full detail. This constitutes the first correlator in the literature involving three light worldsheet vertices on a black hole microstate background, dual to a heavy-light five-point function of the holographic CFT.
A priori, our worldsheet correlators give predictions for correlators of the dual holographic CFT at strong coupling. Generically, four-point correlators are not protected across moduli space, however, a specific set of HLLH correlators have been shown to precisely agree between supergravity and the symmetric product orbifold CFT [30]. Similarly, the emission spectrum and rate for the unitary analog of Hawking radiation from the JMaRT solutions agrees between supergravity and symmetric product orbifold CFT [55,[58][59][60]. Thus it is natural to investigate more generally which HL correlators are protected (at large N ) between worldsheet and symmetric product orbifold CFT, and which are not.
We carry out this comparison for three sub-families of our worldsheet correlators. Firstly, we compare various sets of HLLH correlators to the symmetric product orbifold CFT, finding exact agreement in all cases for which the orbifold CFT correlator is available in the literature. Importantly, this matching holds at leading order in large N , but exactly in α . This comparison includes a substantial generalization of the supergravity and holographic CFT correlators computed in [30]. Our comparisons notably include an example in which the light operators in the symmetric orbifold CFT are twist-two. In this case, and as shown recently in [61,62], the Lunin-Mathur covering map used in the symmetric orbifold computation is different to the one appearing in the worldsheet computation, making the comparison highly non-trivial. Remarkably, both results agree exactly in the large N limit.
Secondly, we compute the five-point HLLLH symmetric orbifold CFT correlator corresponding to the three-point worldsheet correlator mentioned above, and also find exact agreement. While most of our main results were announced in the short paper [1], this five-point correlator is completely new.
Finally, we compute the analogue of the Hawking radiation rate for the JMaRT solutions. Once again, we find perfect agreement with the dual symmetric product orbifold CFT, extending the supergravity and holographic CFT results of [55,[58][59][60].
A likely explanation for this remarkable agreement is that the heavy states we consider are quite special. Specifically, the heavy backgrounds are related to the global AdS 3 × S 3 vacuum via orbifolding and fractional spectral flow [54,55]. This fact also underlies our general formula for the HL correlators with n light insertions. When n > 3, we do not expect these HL correlators to be generically protected across moduli space; we shall discuss this in detail in due course.
The structure of the paper is as follows. In Section 2 we review the null-gauged WZW models we study. We present the supergravity fields in the fivebrane decoupling limit, take the AdS 3 limit, and describe the dual heavy states of the holographic CFT. In Section 3 we present the light operators we are interested in. We review the chiral primaries of the symmetric product orbifold CFT. We then describe how the corresponding operators are constructed in the worldsheet theory for strings in AdS 3 × S 3 in the RNS formalism, including spectrally flowed states. In Section 4 we construct a large set of vertex operators of the worldsheet cosets we study, both in the NS and in the R sectors. We then examine their AdS 3 limit and relate these vertices to those constructed in Sec. 3.
In Sections 5 and 6, we present our main results. We identify the "x" variable dual to the local coordinate of the holographic CFT, and obtain an extensive set of novel HLLH correlators, including massless insertions with arbitrary spacetime weights and charges. The final results are presented in Eqs. (6.16) and (6.28). We then compare a subset of these results to the symmetric product orbifold CFT, finding exact agreement for all correlators available in the literature. We present a closed formula for a large class of worldsheet correlators with an arbitrary number of massless insertions, Eq. (6.29). We compute a five-point correlator in the symmetric orbifold CFT and find agreement with our general worldsheet formula. Finally, we compute the amplitude describing the unitary analogue of Hawking radiation for the JMaRT microstates. We discuss our results in Section 7.
The heavy background states
In this section we introduce the heavy backgrounds known as NS5-F1 circular supertubes and spectral flows thereof [50][51][52][53][54][55], including the general NS5-F1-P JMaRT solutions and their BPS limits. We work in the fivebrane decoupling limit, and review the worldsheet description of tree-level string theory on these backgrounds in terms of null-gauged WZW models [25,26,29]. In the IR, the backgrounds become asymptotically AdS 3 × S 3 , and we review the corresponding heavy states of the holographically dual CFT at the symmetric orbifold point [55].
JMaRT backgrounds from the worldsheet
We begin by reviewing the family of coset CFT models that describe strings probing the JMaRT backgrounds (and their BPS limits), introduced in [25] and analyzed in [26,29]. We will mostly use the notation and conventions from [29], to which we refer the reader who is interested in more details. We work in units in which α' = 1.
The null-gauged WZW model relevant for the present work has the following coset as a target space:
G/H × T^4 = \frac{SL(2,\mathbb{R}) × SU(2) × \mathbb{R}_t × U(1)_y}{U(1)_L × U(1)_R} × T^4 .   (2.1)
To be precise, globally we work with the universal cover of SL(2,R), and we gauge R×U (1).
The line element and NSNS three-form flux of the 10+2-dimensional "upstairs" model before gauging are given in local coordinates by
ds^2 = n_5 \left[ -\cosh^2ρ\, dτ^2 + dρ^2 + \sinh^2ρ\, dσ^2 + dθ^2 + \cos^2θ\, dψ^2 + \sin^2θ\, dφ^2 \right] - dt^2 + dy^2 ,
H = n_5 \left[ \sinh 2ρ\, dρ ∧ dτ ∧ dσ + \sin 2θ\, dθ ∧ dψ ∧ dφ \right] .   (2.2)
The Killing vectors associated to the group action being gauged are 1
ξ L = (∂ τ − ∂ σ ) − l 2 (∂ ψ − ∂ φ ) + l 3 ∂ t − l 4 ∂ y , ξ R = (∂ τ + ∂ σ ) + r 2 (∂ ψ + ∂ φ ) + r 3 ∂ t − r 4 ∂ y . (2.3)
We could be slightly more general and include similar parameters l 1 , r 1 in (2.3), however we have assumed these to be non-vanishing and have set l 1 = r 1 = 1 by a choice of normalization. The corresponding currents are
J = j^3_{sl} + l_2\, j^3_{su} + l_3\, P^t_L + l_4\, P^y_L ,  \qquad  \bar{J} = \bar{j}^3_{sl} + r_2\, \bar{j}^3_{su} + r_3\, P^t_R + r_4\, P^y_R ,   (2.4)
1 Note that some conventions differ from our letter [1]. In the latter, the following changes must be performed to make contact with our current notation:
s+ → l2, s− → −r2, µ → l3, k+ → l4, k− → r4.
where 2 j 3 sl = n 5 cosh 2 ρ ∂τ + sinh 2 ρ ∂σ ,j 3 sl = n 5 cosh 2 ρ∂τ − sinh 2 ρ∂σ , j 3 su = n 5 cos 2 θ ∂ψ − sin 2 θ ∂φ ,¯j 3 su = −n 5 cos 2 θ∂ψ + sin 2 θ∂φ , (2.5) and
P^t_L = ∂t ,  \quad  P^t_R = \bar{∂}t ,  \quad  P^y_L = ∂y ,  \quad  P^y_R = \bar{∂}y .   (2.6)
For the currents in Eq. (2.4) to be null, we impose the constraints
n_5 (1 - l_2^2) + l_3^2 - l_4^2 = 0 ,  \qquad  n_5 (1 - r_2^2) + r_3^2 - r_4^2 = 0 .   (2.7)
Upon integrating out the gauge fields, the gauging procedure effectively adds a term quadratic in the currents, resulting in an action of the schematic form
S_{WZW} + \frac{2}{π} \int d^2z\; \frac{J \bar{J}}{Σ} ,   (2.8)
with
Σ ≡ -\frac{1}{2}\, ξ_1^i\, G_{ij}\, ξ_2^j ,   (2.9)
where G ij is the metric in Eq. (2.2). One can then read off the resulting line element and B-field of the gauged model. The change in the measure also generates a non-trivial dilaton, which can be obtained by solving for the vanishing of the appropriate worldsheet one-loop beta function, see [29]. The geometry obtained from the WZW model is free of horizons and closed timelike curves (CTCs) if and only if [29] l 3 = r 3 .
(2.10)
To obtain smooth geometries up to orbifold singularities or NS5 sources, we further impose 3
l 2 = m + n ∈ 2Z + 1 , r 2 = m − n ∈ 2Z + 1 , m, n ∈ Z ,(2.11)
and
l 4 = − kR y − p R y , r 4 = kR y + p R y , k, p ∈ Z . (2.12)
Combining these expressions with the null constraints Eq. (2.7) leads to
l_3 = r_3 = -\sqrt{ k^2 R_y^2 + \frac{p^2}{R_y^2} + n_5 (m^2 + n^2 - 1) } ,   (2.13)
and k p = n 5 m n , (2.14)
so that only three of the four integers k, m, n, p are independent. The very same conditions are necessary and sufficient for the consistency of the spectrum of the worldsheet CFT [29]. For the AdS3 limit of these backgrounds to be dual to pure states of the holographic CFT, there is an additional requirement from momentum quantization in the k-twisted sectors that we shall review in Section 2.3 (see e.g. [55]),
\frac{mn}{k} ∈ \mathbb{Z} .   (2.15)
2 Compared to [29] we have implemented the change θ → π/2 − θ, φ ↔ −ψ. This effectively exchanges the sign of r_2.
3 There are also consistent models with l_2 = r_2 = 0, which we do not consider in this work.
Without loss of generality, we work in the range of parameters k ≥ 0, m > n ≥ 0.
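The integer parametrization above can be checked symbolically. The following sympy sketch (our own illustration, not code from any reference) verifies that, once kp = n_5 mn from Eq. (2.14) is imposed, the right-handed null constraint of Eq. (2.7) holds identically for the coefficients (2.11)-(2.13); the left-handed constraint works in the same way.

import sympy as sp

n5, Ry, m, n, k, p = sp.symbols('n5 R_y m n k p', positive=True)

r2 = m - n                                                       # Eq. (2.11)
r4 = k*Ry + p/Ry                                                 # Eq. (2.12)
r3 = -sp.sqrt(k**2*Ry**2 + p**2/Ry**2 + n5*(m**2 + n**2 - 1))    # Eq. (2.13)

# Right null constraint of Eq. (2.7): n5*(1 - r2^2) + r3^2 - r4^2 = 0
null_R = n5*(1 - r2**2) + r3**2 - r4**2

# Impose kp = n5*m*n, Eq. (2.14); the constraint then holds identically
print(sp.simplify(null_R.subs(p, n5*m*n/k)))                     # -> 0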
Supergravity configurations and AdS 3 limit
The family of coset CFTs we have just defined corresponds precisely to the NS5-decoupled JMaRT configurations and their limits [29]. Using the integer parametrization introduced above, the supergravity fields are given by
ds 2 = n 5 (dθ 2 + dρ 2 ) + 1 Σ 0 − sinh 2 ρ + (m 2 − n 2 ) cos 2 θ + 1 − m 2 − p 2 n 5 R 2 y dt 2 + sinh 2 ρ + (m 2 − n 2 ) cos 2 θ + n 2 + p 2 n 5 R 2 y dy 2 − 2 p n 5 R y ∆ dtdy + n 5 sinh 2 ρ + n 5 m 2 + k 2 R 2 y sin 2 θ dφ 2 + n 5 sinh 2 ρ + n 5 n 2 + k 2 R 2 y cos 2 θdψ 2 + 2 m∆dt − m p R y + nkR y dy sin 2 θdφ − 2 n∆dt − n p R y + mkR y dy cos 2 θ dψ , B = 1 Σ 0 − kR y n 5 ∆dt ∧ dy + n 5 sinh 2 ρ + n 5 m 2 + k 2 R 2 y cos 2 θ dφ ∧ dψ + m∆dt − m p R y + nkR y dy ∧ cos 2 θdψ − n∆dt − n p R y + mkR y dy ∧ sin 2 θdφ , (2.16)
where R y is the asymptotic proper radius of the S 1 y circle, and where
Σ 0 = sinh 2 ρ + (m 2 − n 2 ) cos 2 θ + n 2 + k 2 R 2 y n 5 , (2.17) ∆ = n 5 (m 2 + n 2 − 1) + k 2 R 2 y + p 2 R 2 y . (2.18)
We also note the relations between the supergravity charges and integer charge quanta,
Q_1 = \frac{n_1 g_s^2}{V_4} ,  \qquad  Q_p = \frac{n_p}{R_y^2} \frac{g_s^2}{V_4} ,   (2.19)
where (2π) 4 V 4 is the coordinate volume of the T 4 . Note that the three-charge NS5-decoupled JMaRT solutions are specified by the integers n 5 , k, m, n, the modulus R y , and the charge Q 1 appearing in the dilaton.
One can take a further IR AdS3 decoupling limit by taking R_y to be large, keeping fixed the charge Q_1 and the rescaled energy E R_y and momentum P_y R_y. We define rescaled coordinates
\tilde{t} = \frac{t}{R_y} ,  \qquad  \tilde{y} = \frac{y}{R_y} ,   (2.20)
and perform the large-R_y expansion at the level of the coefficients in Eqs. (2.11)-(2.14), such that the leading terms in l_3, r_3, l_4 and r_4 become independent of p. However, the product kp is kept fixed and the relation Eq. (2.14) still holds, defining the momentum per strand for the holographic CFT [55]. Order-by-order in 1/R_y, the coefficients still satisfy the null conditions Eq. (2.7). The six-dimensional fields in (2.16) then become
ds^2 = n_5 \Big[ -\frac{1}{k^2}\cosh^2ρ\, d\tilde{t}^2 + \frac{1}{k^2}\sinh^2ρ\, d\tilde{y}^2 + dρ^2 + dθ^2 + \sin^2θ \left( dφ - \tfrac{n}{k} d\tilde{t} + \tfrac{m}{k} d\tilde{y} \right)^2 + \cos^2θ \left( dψ + \tfrac{m}{k} d\tilde{t} - \tfrac{n}{k} d\tilde{y} \right)^2 \Big] ,   (2.21)
B = n_5 \Big[ \frac{\sinh^2ρ + (m^2-n^2)\cos^2θ}{k^2}\, d\tilde{t} ∧ d\tilde{y} + \cos^2θ\, dφ ∧ dψ + \sin^2θ \left( -\tfrac{n}{k} d\tilde{t} + \tfrac{m}{k} d\tilde{y} \right) ∧ dφ + \cos^2θ \left( \tfrac{m}{k} d\tilde{t} - \tfrac{n}{k} d\tilde{y} \right) ∧ dψ \Big] ,   (2.22)
e^{2Φ} = \frac{n_5}{Q_1} = \frac{Q_5}{Q_1} ,   (2.23)
where a trivial gauge transformation has been performed on the B-field. These solutions are related by a large coordinate transformation to Z_k orbifolds of global AdS3 × S3, which are the decoupling limits of the supertube solutions of [63,64]. This large coordinate transformation is known as spacetime spectral flow, and takes the form
\tilde{ψ} = ψ + \tfrac{m}{k}\tilde{t} - \tfrac{n}{k}\tilde{y} ,  \qquad  \tilde{φ} = φ - \tfrac{n}{k}\tilde{t} + \tfrac{m}{k}\tilde{y} .   (2.24)
For the special case m = n = 0, spectral flow is not relevant and the solutions are already Z_k orbifolds of global AdS3 × S3. When m, n are not both zero, one typically works in the range m > n ≥ 0 without loss of generality. For m = 1, n = 0, the solutions (2.21)-(2.23) are the AdS decoupling limits of the two-charge circular supertube solutions of [63,64]. For m = n + 1 with n > 0, the solutions are the AdS limit of the supersymmetric spectral flowed solutions of [50][51][52][54], and for other values of m, n one obtains the AdS limit of the non-supersymmetric JMaRT solutions [53]. For k = 1 the solutions are smooth; for k > 1 the solutions have orbifold singularities near ỹ = 0, the details of which depend on the common divisors of m, n, k [26,53,55].
Holographic description and boundary spectral flow
As mentioned in the Introduction, the holographic CFT that corresponds to the AdS 3 limit of the system in which we work is an N = (4, 4) symmetric product orbifold CFT with target space T 4 N /S N , where N = n 1 n 5 . To make the presentation self-contained, we now review some aspects of this theory.
Recall that we work in Type IIB compactified on S1 × T4, with n_5 NS5 branes wrapped on S1 × T4, n_1 units of F1 winding on S1, and n_P units of momentum charge along S1. The moduli space is 20-dimensional and the symmetric product orbifold CFT lies at a particular locus of this moduli space [65], see also [6]. The configuration breaks the SO(1,9) Lorentz group to SO(1,1) × SO(4)_E × U(1)^4, where the external R-symmetry SO(4)_E ≅ SU(2)_L × SU(2)_R corresponds to rotations in the spatial R^4 transverse to the branes (in the IR limit, rotations of the S3). It is customary to introduce an approximate internal SO(4)_I ≅ SU(2)_1 × SU(2)_2, which is broken to U(1)^4 by the compactification, but which is useful for classifying states and organizing fields [66,67].
In the symmetric product orbifold theory, for each copy of T 4 there are four free bosons, together with their left and right-moving fermionic superpartners. Indices α,α, A,Ȧ correspond respectively to SU(2) L , SU(2) R , SU(2) 1 , SU(2) 2 . The free fields are denoted as (we use the conventions of [59])
X^{AȦ}_{(r)}(z, \bar z) ,  \qquad  ψ^{αȦ}_{(r)}(z) ,  \qquad  \bar{ψ}^{αȦ}_{(r)}(\bar z) ,   (2.25)
where the subscript (r) denotes the r-th copy of the seed T4 theory. Omitting this copy subscript and focusing on the holomorphic sector, the energy-momentum tensor T(z), the supercurrents G^{αA}(z) and the SU(2)_L currents J^a generate the small (4,4) supersymmetric algebra. We denote holographic CFT conformal weights by h and R-symmetry quantum numbers by (j, m') and (\bar j, \bar m'), respectively. The heavy states we are interested in are obtained by fractional spectral flow [55], see also [54,68]. We start from the NSNS vacuum in the k-twisted sector, |0\rangle^{NS}_k. In order to have a gauge invariant state we consider n_1 n_5/k identical strands of length k. Its dimension is
h = \bar{h} = \frac{c}{24}\left( 1 - \frac{1}{k^2} \right) ,   (2.26)
where the central charge c = 6N . The R-charges of this state are zero. Because all strands are of length k there is an enhancement of the usual spectral flow, such that one can perform spectral flow with fractional parameters,
α = \frac{m+n}{k} = \frac{2s+1}{k} ,  \qquad  \bar{α} = \frac{m-n}{k} = \frac{2\bar{s}+1}{k} ,   (2.27)
where s, \bar{s} ∈ Z and the range m > n ≥ 0 is the range s ≥ \bar{s} ≥ 0. This generates a new state with quantum numbers
h = \frac{c}{24}\left( 1 - \frac{1}{k^2} + α^2 \right) ,  \quad  m' = \frac{α c}{12} ,  \qquad  \bar{h} = \frac{c}{24}\left( 1 - \frac{1}{k^2} + \bar{α}^2 \right) ,  \quad  \bar{m}' = \frac{\bar{α} c}{12} .   (2.28)
These states are "heavy" in the sense that their conformal dimensions and charges scale linearly with the large central charge c = 6N. In the dual theory they correspond to the classical configurations presented in Eqs. (2.21)-(2.23). By contrast, the "light" perturbative string states probing these backgrounds will correspond to holographic CFT states with conformal dimensions that are independent of c.
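The bookkeeping in Eqs. (2.26)-(2.28) is easy to verify. As a small illustration (ours; the variable names are not from the text), the following sympy lines check that with c = 6N the state's total momentum h − h̄ equals N mn/k², i.e. mn/k units on each of the N/k strands, which is the origin of the quantization condition (2.15):

import sympy as sp

N, k, m, n = sp.symbols('N k m n', positive=True)
c = 6*N                              # central charge of the symmetric orbifold CFT

alpha  = (m + n)/k                   # fractional spectral flow parameters, Eq. (2.27)
alphab = (m - n)/k

# Weights after fractional spectral flow, Eq. (2.28)
h  = c/24*(1 - 1/k**2 + alpha**2)
hb = c/24*(1 - 1/k**2 + alphab**2)

momentum = sp.simplify(h - hb)
print(momentum)                                        # -> N*m*n/k**2
print(sp.simplify(momentum - (N/k)*(m*n/k)))           # -> 0: m*n/k units per strand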
We conclude this section by summarizing how the bulk, the worldsheet, and the dual CFT encode in different ways the same information about the heavy state.
• In the worldsheet model, the heavy state defines the theory itself by means of the gauging parameters l i , r i appearing in Eq. (2.4) and the radius R y .
• In supergravity, the information about the heavy state is contained in the integers m, n, k parameterizing the fields in Eq. (2.16), together with R y , which gets scaled out in the AdS limit.
• In the symmetric orbifold CFT, the information about the heavy states is contained in the spectral flow parameters α andᾱ, and the twist index k.
The map between the three descriptions, in the AdS limit, is then
-R_y \frac{l_2}{l_4} \;\xrightarrow{R_y → ∞}\; \frac{m+n}{k} = α ,  \qquad  R_y \frac{r_2}{r_4} \;\xrightarrow{R_y → ∞}\; \frac{m-n}{k} = \bar{α} .   (2.29)
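As a quick check of Eq. (2.29) (again an illustrative sketch in our own notation), one can take the R_y → ∞ limit directly; the subleading p/R_y terms of Eq. (2.12) drop out, so the result is insensitive to their precise sign:

import sympy as sp

Ry, m, n, k, p = sp.symbols('R_y m n k p', positive=True)

l2, r2 = m + n, m - n                      # Eq. (2.11)
l4, r4 = -(k*Ry + p/Ry), k*Ry + p/Ry       # Eq. (2.12), as printed

print(sp.limit(-Ry*l2/l4, Ry, sp.oo))      # -> (m + n)/k = alpha
print(sp.limit( Ry*r2/r4, Ry, sp.oo))      # -> (m - n)/k = alpha-bar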
The light probe states
We now introduce the light states that we will study. These correspond to chiral primary operators of the boundary theory. We focus first on fluctuations around the global AdS 3 × S 3 vacuum. We review the dictionary between holographic CFT operators and their counterparts in the worldsheet theory, following [37,38].
Chiral primaries in the D1D5 CFT
We first briefly review the construction of chiral primary operators in the symmetric orbifold CFT [49]. We focus primarily on the holomorphic sector in the following; the antiholomorphic sector is entirely analogous. In the untwisted sector, on each copy of the seed T 4 theory, the chiral primary operators correspond to the states (suppressing the copy (r) label)
|0\rangle_{NS} ,  \quad  ψ^{+Ȧ}_{-\frac{1}{2}} |0\rangle_{NS} ,  \quad  J^+_{-1} |0\rangle_{NS} = ψ^{+1}_{-\frac{1}{2}} ψ^{+2}_{-\frac{1}{2}} |0\rangle_{NS} ,   (3.1)
where |0 NS is the NS vacuum. The corresponding weights and R-charges are h = m = 0, 1 2 , 1, respectively. Physical configurations in the orbifold theory are obtained by symmetrizing the states in (3.1) over the different copies of the seed theory.
By including the antiholomorphic sector we can obtain, for instance, the dimension ( 1 2 , 1 2 ) operator (see e.g. [69])
O^{++} = \sum_{r=1}^{N} O^{++}_{(r)} = \frac{-i}{\sqrt{2}} \sum_{r=1}^{N} ψ^{+Ȧ}_{(r)}\, ε_{ȦḂ}\, \bar{ψ}^{+Ḃ}_{(r)} ,  \qquad  (O^{++})^\dagger = O^{--} .   (3.2)
We will use this operator in an explicit example later in the paper.
In order to construct more general chiral primaries one needs to consider the twisted sectors of the theory. Consider the 'bare' twist operators σ n , defined on the cylinder, that impose the following boundary conditions corresponding to a single-cycle permutation,
X (1) → X (2) → · · · → X (n) → X (1) , ψ (1) → ψ (2) → · · · → ψ (n) → −ψ (1) ,(3.3)
and likewise for the antiholomorphic fermions. The bare twist operators are defined to be the lowest-dimension twist operators that impose the above boundary conditions; they have dimension h = \bar{h} = \frac{1}{4}\left( n - \frac{1}{n} \right) and zero R-charge. Chiral operators are obtained by exciting the bare twist operators to add R-charge. The lowest-dimension chiral operators have h = m' = \frac{n-1}{2}. For n odd, these operators are obtained by acting with modes of the SU(2) currents, which are bilinears in the free fermions. Due to the twist operator, the SU(2) currents are fractional-moded in units of 1/n. The relation between these modes and those of free fermions on the n copies of the seed theory can be found in [49]. To construct the chiral operators, one acts with the currents J^+_{-l/n} for which l is odd and l < n, n odd :
σ^-_n = \prod_{p=1}^{(n-1)/2} J^+_{-\frac{2p-1}{n}}\, σ_n = J^+_{-\frac{n-2}{n}} \cdots J^+_{-\frac{3}{n}} J^+_{-\frac{1}{n}}\, σ_n .   (3.4)
For n even, one first acts with a spin field S^+_n, which has weight \frac{1}{4n} and charge \frac{1}{2}, putting the fermions into the Ramond sector (i.e. their boundary conditions are similar to Eq. (3.3) but with the final sign being +ψ_{(1)}). One then acts with the currents J^+_{-l/n} for which l is even and l < n, n even :
σ^-_n = \prod_{p=1}^{(n-2)/2} J^+_{-\frac{2p}{n}}\, S^+_n σ_n = J^+_{-\frac{n-2}{n}} \cdots J^+_{-\frac{4}{n}} J^+_{-\frac{2}{n}}\, S^+_n σ_n .   (3.5)
As in the untwisted case, for both odd and even n we can act with ψ^{+Ȧ}_{-1/2} ≡ \sum_{r=1}^{n} ψ^{+Ȧ}_{-1/2\,(r)} to obtain a chiral operator ψ^{+Ȧ}_{-1/2} σ^-_n which has h = m' = \frac{n}{2}. Similarly we can act with J^+_{-1} to obtain a chiral operator J^+_{-1} σ^-_n which has h = m' = \frac{n+1}{2}. Together with the analogous antiholomorphic operators, this exhausts the single-cycle chiral operators. Indeed, by making use of anti-commutators of the supercurrent modes G^{±A}_{-m/n} in the corresponding twisted sectors, one can show that chiral weights are bounded by [67]
\frac{n-1}{2} \;≤\; h \;≤\; \frac{n+1}{2} .   (3.6)
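The weight and charge bookkeeping behind Eqs. (3.4)-(3.5) can be checked with elementary arithmetic: starting from the bare twist with h = (n − 1/n)/4 and adding the fractional current modes (plus the spin field contribution for even n), one lands on h = m' = (n − 1)/2 in every twisted sector, saturating the lower end of the bound (3.6). A short Python check (our own sketch):

from fractions import Fraction as F

def sigma_minus(n):
    """Weight h and R-charge m' of sigma^-_n, assembled as in Eqs. (3.4)-(3.5)."""
    h, q = F(1, 4)*(F(n) - F(1, n)), F(0)        # bare twist sigma_n
    if n % 2 == 1:                                # odd n: modes J^+_{-(2p-1)/n}
        for p in range(1, (n - 1)//2 + 1):
            h += F(2*p - 1, n); q += 1
    else:                                         # even n: spin field S^+_n, then J^+_{-2p/n}
        h += F(1, 4*n); q += F(1, 2)
        for p in range(1, (n - 2)//2 + 1):
            h += F(2*p, n); q += 1
    return h, q

for n in range(1, 13):
    h, q = sigma_minus(n)
    assert h == q == F(n - 1, 2), (n, h, q)
print("h = m' = (n-1)/2 for all twists checked")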
In the twisted sectors, it is often convenient to work in a basis that diagonalizes the twisted boundary conditions. We shall make use of this basis in Section 6.4. One defines
\tilde{ψ}^{αȦ}_ρ = \frac{1}{\sqrt{n}} \sum_{r=1}^{n} e^{α \frac{2πi r ρ}{n}}\, ψ^{αȦ}_{(r)} ,  \qquad  ρ = 0, \ldots, n-1 ,   (3.7)
where α = ± should not be confused with the spectral flow parameter in Eq. (2.27). These are mutually orthogonal, and diagonalize the twisted boundary conditions as
\tilde{ψ}^{αȦ}_ρ(e^{2πi} z) = e^{-α \frac{2πiρ}{n}}\, \tilde{ψ}^{αȦ}_ρ(z) .   (3.8)
These fermions can be bosonized to construct an explicit expression for the spin fields mentioned above. Note that the fieldsψ αȦ ρ=0 are invariant under the twisting. For further discussion, see [37].
We now combine the above holomorphic construction with its antiholomorphic counterpart and define the complete list of scalar left-right chiral primaries we will be interested in:
O^{--}_n = σ^{--}_n ,  \quad  O^{ȦḂ}_n = \tilde{ψ}^{+Ȧ}_{ρ=0}\, \bar{\tilde{ψ}}^{+Ḃ}_{ρ=0}\, σ^{--}_n ,  \quad  O^{++}_n = \tilde{ψ}^{+1}_{ρ=0} \tilde{ψ}^{+2}_{ρ=0}\, \bar{\tilde{ψ}}^{+1}_{ρ=0} \bar{\tilde{ψ}}^{+2}_{ρ=0}\, σ^{--}_n ,   (3.9)
where σ^{--}_n
is defined similarly to Eqs. (3.4), (3.5) but now also with the same construction in the antiholomorphic sector. The operators in (3.9) are normalized such that they have unit two-point functions.
For later reference, we note that in each case the respective weights and twist numbers can be written in terms of j = \frac{n+1}{2} as
h_{O^{--}_n} = j - 1 ,  \qquad  h_{O^{ȦḂ}_n} = j - \tfrac{1}{2} ,  \qquad  h_{O^{++}_n} = j .   (3.10)
An analogous list of anti-chiral primaries (which have h = −m ) is obtained by acting on the bare twist fields with current and fermion modes with opposite charge, i.e. J − −l/n and ψ −Ȧ . As we will shortly review, and up to a shift related to spectral flow, this j will be identified with the principal quantum number of the bosonic (global) SL(2,R) algebra of the worldsheet theory, to which we now turn.
Superstring theory on AdS3 × S3 × T4
We now review the basics of superstring theory on AdS 3 × S 3 × T 4 using the RNS formalism with BRST quantization. We first discuss the bosonic SL(2, R) and SU(2) WZW models and then present their supersymmetric counterparts. We present the current algebra and review the spectrum, including states arising from worldsheet spectral flow.
Bosonic WZW model for SL(2,R)
The SL(2,R) WZW model was studied in detail in [22][23][24]. In what follows we will mostly follow the notation of [37,38,46], and normal ordering will be implicitly assumed. The holomorphic SL(2,R) currents will be denoted j a (z). They satisfy the OPEs
j^a(z)\, j^b(w) \sim \frac{\frac{k}{2}\, η^{ab}}{(z-w)^2} + \frac{f^{ab}_{\;\;c}\, j^c(w)}{z-w} ,   (3.11)
where k is the level of the affine algebra, and where
− 2η 33 = η +− = 2 , f +− 3 = −2 , f 3+ + = −f 3− − = 1 . (3.12)
The holomorphic stress tensor and the central charge follow from the Sugawara construction, and are given by (likewise for the antiholomorphic sector)
T_{sl}(z) = \frac{1}{k-2} \left( -j^3(z) j^3(z) + \tfrac{1}{2} j^+(z) j^-(z) + \tfrac{1}{2} j^-(z) j^+(z) \right) ,  \qquad  c_{sl} = \frac{3k}{k-2} .   (3.13)
We denote bosonic SL(2, R) primary vertex operators by V_{j,m,\bar m}(z, \bar z). Their zero-mode wavefunctions do not factorize between holomorphic and antiholomorphic sectors, however as is often done we shall work primarily with the holomorphic sector, and suppress the \bar m and \bar z dependence. The relevant representations of the holomorphic zero-mode algebra are as follows. The principal series discrete representations of lowest (highest) weight are spanned by
D^±_j = \{ |j, m\rangle ,\; m = ±j, ±j±1, ±j±2, \cdots \} ,   (3.14)
respectively, where j^3_0 |j, m\rangle = m |j, m\rangle. These are unitary representations for any positive real j, and one is the charge conjugate of the other (we will restrict the range of j momentarily). There are also the principal continuous series representations, spanned by
C^{α}_j = \{ |j, α, m\rangle ,\; 0 ≤ α < 1 ,\; j = \tfrac{1}{2} + is ,\; s ∈ \mathbb{R} ,\; m = α, α±1, α±2, \cdots \} .   (3.15)
The particular case α = 1/2 = j is actually reducible. It was shown in [22] that the spectrum of the model is built out of continuous and lowest weight representations with
\frac{1}{2} < j < \frac{k-1}{2} ,   (3.16)
together with their spectrally flowed images, to be introduced below. The allowed range (3.16) follows from L 2 normalization conditions, no-ghost theorems and spectral flow considerations. Before considering worldsheet spectral flow (we refer to this as the "unflowed" sector), the action of the currents on the primary states is given by
j^3_0 |j, m\rangle = m |j, m\rangle ,   (3.17a)
j^±_0 |j, m\rangle = (m ∓ (j-1))\, |j, m±1\rangle  if  m ≠ ∓j ,  and  0  if  m = ∓j ,   (3.17b)
j^a_n |j, m\rangle = 0  ∀ n > 0 .   (3.17c)
These vertex operators can be obtained from those of the Euclidean counterpart of the model, namely the H + 3 WZW model [21,70,71] (see also [72]), as follows. One introduces a set of operators depending on a complex label x, written as V j (x|z), and having conformal weight
Δ = -\frac{j(j-1)}{k-2} .   (3.18)
The action of the currents on V j (x, z) is given by
j^a(z)\, V_j(x, w) \sim \frac{D^a_j V_j(x, w)}{z-w} ,   (3.19)
where
D^+_j = ∂_x ,  \qquad  D^3_j = x∂_x + j ,  \qquad  D^-_j = x^2 ∂_x + 2jx .   (3.20)
The two-point function is given by [21]
\langle V_{j_1}(x_1, z_1)\, V_{j_2}(x_2, z_2) \rangle = \frac{1}{|z_{12}|^{4Δ_1}} \left[ δ^2(x_1 - x_2)\, δ(j_1 + j_2 - 1) + \frac{B(j_1)}{|x_{12}|^{4j_1}}\, δ(j_1 - j_2) \right] ,   (3.21)
with
B(j) = \frac{2j-1}{π} \frac{Γ[1 - b^2(2j-1)]}{Γ[1 + b^2(2j-1)]}\, ν^{1-2j} ,  \qquad  ν = \frac{Γ[1 - b^2]}{Γ[1 + b^2]} ,  \qquad  b^2 = (k-2)^{-1} .   (3.22)
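As written, the normalization (3.22) satisfies B(j) B(1 − j) = −(2j − 1)²/π², with the b- and ν-dependence cancelling; this is a useful internal consistency check on the formula, and can be confirmed in a couple of lines (our own sketch):

import sympy as sp

j, b, nu = sp.symbols('j b nu', positive=True)

def B(jj):
    """Reflection coefficient of Eq. (3.22)."""
    x = sp.expand(b**2*(2*jj - 1))
    return (2*jj - 1)/sp.pi * sp.gamma(1 - x)/sp.gamma(1 + x) * nu**sp.expand(1 - 2*jj)

# B(j) B(1-j) = -(2j-1)^2 / pi^2, independently of b and nu
print(sp.simplify(B(j)*B(1 - j)*sp.pi**2 + (2*j - 1)**2))   # -> 0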
The operators V j,m (z) are related to V j (x|z) by means of the following Mellin-like transform:
V_{j,m}(z) = \int_{\mathbb{C}} d^2x\; x^{j-m-1}\, \bar{x}^{j-\bar m-1}\, V_j(x, z) .   (3.23)
In the Euclidean H_3^+ model, j takes values j = 1/2 + is. To obtain the unflowed V_{j,m} for Lorentzian AdS3, one assumes a well-defined analytic continuation to real values of j. This procedure was discussed in [24], which identified the physical origin of the different divergences arising in correlation functions. For related work, see [73]. The two-point functions in the m-basis then follow from (3.21), (3.23). Using the shorthand
V_i ≡ V_{j_i, m_i}, one finds
\langle V_1 V_2 \rangle = \frac{δ^2(m_1 + m_2)}{|z_{12}|^{4Δ_1}} \left[ δ(j_1 + j_2 - 1) + δ(j_1 - j_2)\, \frac{π B(j_1)}{γ(2j_1)} \frac{γ(j_1 + m_1)}{γ(1 - j_1 + m_1)} \right] ,   (3.24)
where γ(x) = Γ(x)/Γ(1 −x), and where δ 2 (m) is a Dirac delta in m +m times a Kroenecker delta in m −m. At first sight, the complex variable x may appear simply as an SL(2,R) version of the isospin variables defined for SU (2) in [20]. However, given that the integrated zero modes of the currents realize the spacetime Virasoro modes L 0 and L ±1 , and by examining the expressions of the associated differential operators (3.20), one is led to interpret x as the local coordinate on the boundary theory [18]. According to (3.21), in the bosonic theory a z-integrated vertex operator V j (x) is identified with a local operator on the boundary theory with weight j. Conversely, the corresponding boundary modes are given by the m-basis operators. Indeed, for states in the discrete sector, the transform in Eq. (3.23) can be inverted, giving
V_j(x, z) = \sum_{m = j+n,\; n ∈ \mathbb{N}_0} x^{m-j}\, \bar{x}^{\bar m-j}\, V_{j,m,\bar m}(z) .   (3.25)
The vertex V j (x, z) is realized via Eq. (3.25) as V j,j (z) translated from the origin to x. Poles in the integrand of (3.23) coming from the expansion around x = 0 (x = ∞) are associated to states in the D + j (D − j ) representations [46,74]. Spectral flow automorphisms of the current algebra (3.11) are defined as
j^±(z) → \tilde{j}^±(z) = z^{±ω}\, j^±(z) ,  \qquad  j^3(z) → \tilde{j}^3(z) = j^3(z) - \frac{kω}{2} \frac{1}{z} ,   (3.26)
where the so-called spectral flow charge ω is an integer. Analogous formulas hold for the antiholomorphic sector. We work with the universal cover of SL(2, R), which imposes that the holomorphic and antiholomorphic spectral flow parameters must be equal, \bar{ω} = ω. The action of (3.26) on the above representations defines in general inequivalent representations that must be considered in order to generate a consistent spectrum. This holds up to the so-called series identifications due to the fact that the affine modules \hat{D}^{+,ω}_j and \hat{D}^{-,ω+1}_{k/2-j} are isomorphic. Thus, as mentioned above, the discrete series spectrum is constructed solely upon lowest weight representations with j restricted to the range (3.16).
At the level of vertex operators and for ω > 0, the spectral flow operation introduced in (3.26) defines the so-called flowed primaries, whose OPEs with the currents take the form
j^+(z)\, V^ω_{j,m}(w) = \frac{(m+1-j)\, V^ω_{j,m+1}(w)}{(z-w)^{ω+1}} + \sum_{n=1}^{ω} \frac{(j^+_{n-1} V^ω_{j,m})(w)}{(z-w)^{n}} + \ldots ,   (3.27a)
j^3(z)\, V^ω_{j,m}(w) = \frac{\left( m + \frac{k}{2}ω \right) V^ω_{j,m}(w)}{z-w} + \ldots ,   (3.27b)
j^-(z)\, V^ω_{j,m}(w) = (z-w)^{ω-1} (m-1+j)\, V^ω_{j,m-1}(w) + \ldots ,   (3.27c)
where the ellipses indicate higher-order terms. Similar expressions hold for ω < 0, with the roles of j + and j − inverted. The operators V ω j,m (z) are not affine primaries. They are, however, Virasoro primaries, with worldsheet conformal weight
Δ = -\frac{j(j-1)}{k-2} - mω - \frac{k}{4}ω^2 .   (3.28)
Note that for ω > 0 (ω < 0), independently of the characteristics of the original state, these correspond to lowest (highest) weight states, with SL(2,R) spin
h = m + \frac{k}{2}ω ,   (3.29)
(h = −m − kω/2, respectively). The notation h anticipates that the SL(2,R) spin is identified with the holographic CFT conformal weight [19] (see also e.g. [37]), as we shall see in Eq. (3.31). The flowed affine modules alluded above are built by acting freely with the currents on flowed primary states. In particular, the remaining states in the zero-mode algebra, which are obtained by acting with j − 0 , are not flowed primaries. Nevertheless, one can proceed as done for the unflowed states, and combine them into a local operator, defined initially for
ω > 0 as
V^ω_{j,h}(x, z) = \sum_{n ∈ \mathbb{N}_0} x^{n}\, \bar{x}^{\bar n}\, V^ω_{j,\,h+n,\,\bar h+\bar n}(z) .   (3.30)
Moreover, by inverting x → 1/x in the expansion, one also obtains the states in the highestweight representation with the same spin and opposite ω and m. This shows that the resulting x-basis states are actually defined in terms of the absolute value of ω, its sign being irrelevant. A direct x-basis definition for spectrally flowed vertex operators was recently derived in [75], extending the original proposal of [24] valid only for the singly flowed case. The classical analog of the spectral flow operation (3.26) maps space-like geodesics of point-like strings into solutions in which a long string wound around the AdS 3 angular direction at large radius comes in to the centre of global AdS 3 , collapses to a point, and then re-expands to large radial distance [22]. The spectral flow parameter ω is thus sometimes referred to in the literature as a "winding" number. Note that since the AdS 3 angular direction is contractible in the interior of global AdS 3 , the parameter ω is not a conserved quantity. However, the m-basis two-point functions are diagonal in ω: it was shown in [24] that the m-basis two-point function of flowed primaries is as in (3.24) with an extra factor of δ ω 1 ,−ω 2 and the worldsheet conformal weight ∆ 1 replaced by∆ 1 given in Eq. (3.28). On the other hand, in the x-basis one finds
\langle V^{ω_1}_{j_1,h_1}(x_1, z_1)\, V^{ω_2}_{j_2,h_2}(x_2, z_2) \rangle = \frac{1}{|x_{12}|^{4h_1}}\, \langle V^{ω_1}_{j_1,m_1} V^{ω_2}_{j_2,m_2} \rangle\, V_{conf} .   (3.31)
Thus, as mentioned above, the SL(2,R) spin h is identified with the holographic CFT conformal weight [19], even though in the flowed sectors the spin is independent of the value of j of the corresponding unflowed operator. The factor V conf stands for the divergent volume of the conformal group; it reflects the fact we are picking up the contribution from a pole, and it will cancel in the relevant computations that follow.
Bosonic WZW model for SU(2)
The bosonic WZW model based on the SU(2) group manifold was studied in [20,76]. We denote the generators of the current algebra by k a , and for most quantities we use primes to distinguish them from their SL(2, R) counterparts. The currents satisfy the OPEs
k^a(z)\, k^b(w) \sim \frac{\frac{k'}{2}\, δ^{ab}}{(z-w)^2} + \frac{f^{ab}_{\;\;c}\, k^c(w)}{z-w} ,   (3.32)
where k' is the level of the affine Lie algebra, δ^{ab} is the Killing form, and f^{abc} are the corresponding structure constants,
2δ 33 = δ +− = 2 , f +− 3 = 2 , f 3+ + = −f 3− − = 1 . (3.33)
The energy-momentum tensor and central charge are
T_{su}(z) = \frac{1}{k'+2} \left( k^3(z) k^3(z) + \tfrac{1}{2} k^+(z) k^-(z) + \tfrac{1}{2} k^-(z) k^+(z) \right) ,  \qquad  c_{su} = \frac{3k'}{k'+2} .   (3.34)
We denote SU(2) vertex operators by V_{j',m',\bar m'}(z, \bar z). Again, their zero-mode wavefunctions do not factorize into holomorphic and antiholomorphic parts, however we shall mostly work holomorphically and suppress antiholomorphic quantities (\bar m', \bar z). For SU(2), the unitary representations of the zero-mode algebra are labeled by
0 ≤ j' ≤ \frac{k'}{2} ,  \qquad  j' ∈ \mathbb{Z}/2 ,   (3.35)
and their states are |j', m'\rangle with m' = -j', -j'+1, \ldots, j'-1, j'. Using conventions that mimic those used above for SL(2, R), we have
k^3_0 |j', m'\rangle = m' |j', m'\rangle ,   (3.36a)
k^±_0 |j', m'\rangle = (j' + 1 ± m')\, |j', m' ± 1\rangle  if  m' ≠ ±j' ,  and  0  if  m' = ±j' ,   (3.36b)
k^a_n |j', m'\rangle = 0  ∀ n > 0 ,   (3.36c)
and
Δ' = \frac{j'(j'+1)}{k'+2} .   (3.37)
Unlike SL(2, R), in the SU(2) WZW model spectral flow is not necessary for constructing a consistent spectrum due to the compactness of the group manifold. Indeed, the spectral flow automorphisms merely reshuffle primary and descendant fields, and they do not introduce new inequivalent representations. Nevertheless, for superstring theory applications it is of practical use to include it in the discussion [26,29,38]. We will discuss this in more detail shortly.
For SU (2), spectral flow is defined as
k^±(z) → \tilde{k}^±(z) = z^{∓ω'}\, k^±(z) ,  \qquad  k^3(z) → \tilde{k}^3(z) = k^3(z) - \frac{k'ω'}{2} \frac{1}{z} .   (3.38)
In this case, however, it is possible to have \bar{ω}' ≠ ω'. As before, spectrally flowed primaries V^{ω'}_{j',m'}(z) are Virasoro primaries, with weight
Δ' = \frac{j'(j'+1)}{k'+2} + m'ω' + \frac{k'}{4}ω'^2 ,   (3.39)
but they are not affine primaries, and for ω' > 0 they are defined in terms of the OPEs
k^+(z)\, V^{ω'}_{j',m'}(w) = (z-w)^{ω'-1} (j' - m')\, V^{ω'}_{j',m'+1}(w) + \ldots ,   (3.40a)
k^3(z)\, V^{ω'}_{j',m'}(w) = \frac{\left( m' + \frac{k'}{2}ω' \right) V^{ω'}_{j',m'}(w)}{z-w} + \ldots ,   (3.40b)
k^-(z)\, V^{ω'}_{j',m'}(w) = \frac{(j' + m')\, V^{ω'}_{j',m'-1}(w)}{(z-w)^{ω'+1}} + \sum_{n=0}^{ω'} \frac{(k^-_{n-1} V^{ω'}_{j',m'})(w)}{(z-w)^{n}} + \ldots .   (3.40c)
The corresponding two-point functions are, again, the unflowed ones times δ ω 1 ,−ω 2 , with the appropriate powers of z 12 .
Superstrings in AdS3 × S3 × T4
We now review supersymmetric generalizations of the bosonic WZW models discussed above. We introduce fermions ψ a and χ a which are superpartners of the SL(2, R) and SU(2) currents J a and K a respectively. The appropriate N = 1 supersymmetric extensions of the affine sl(2,R) k and su(2) k algebras are generated by the supercurrents ψ a + θJ a and χ a +θK a , where θ is a Grassmann variable. The currents J a and K a satisfy the OPEs (3.11) and (3.32) respectively, with level n 5 in both cases, and the OPEs involving the fermions ψ a and χ a are
J^a(z)\, ψ^b(w) \sim \frac{f^{ab}_{\;\;c}\, ψ^c(w)}{z-w} ,  \qquad  K^a(z)\, χ^b(w) \sim \frac{f^{ab}_{\;\;c}\, χ^c(w)}{z-w} ,   (3.41a)
ψ^a(z)\, ψ^b(w) \sim \frac{n_5}{2} \frac{η^{ab}}{z-w} ,  \qquad  χ^a(z)\, χ^b(w) \sim \frac{n_5}{2} \frac{δ^{ab}}{z-w} .   (3.41b)
One can split the currents into two independent contributions via
J^a = j^a - \frac{1}{n_5} f^{a}_{\;bc}\, ψ^b ψ^c ,  \qquad  K^a = k^a - \frac{1}{n_5} f^{a}_{\;bc}\, χ^b χ^c .   (3.42)
The "bosonic" currents j a and k a commute with the free fermions, and are currents of bosonic WZW models as described in Section 3.2, with levels k = n 5 + 2 and k = n 5 − 2 respectively. In the fermionic sector, the spectral flow automorphisms act as
ψ ± (z) = z ∓ ψ ± (z) ,ψ 3 (z) = ψ 3 (z) ,χ ± (z) = z ∓ χ ± (z) ,χ 3 (z) = χ 3 (z) . (3.43)
The remaining flat compact directions are treated as usual. For the T 4 , we simply have four (canonically normalized) free bosons Y i and their fermionic partners λ i (i = 6, . . . , 9), with OPEs
Y^i(z)\, Y^j(w) \sim -δ^{ij} \log(z-w) ,  \qquad  λ^i(z)\, λ^j(w) \sim \frac{δ^{ij}}{z-w} .   (3.44)
We can now write down the energy-momentum tensor T and the supercurrent G of the worldsheet theory for type II superstrings in AdS3 × S3 × T4. The matter contributions read
T = \frac{1}{n_5} \left( j^a j_a - ψ^a ∂ψ_a + k^a k_a - χ^a ∂χ_a \right) + \frac{1}{2} \left( ∂Y^i ∂Y_i - λ^i ∂λ_i \right) ,   (3.45)
G = \frac{2}{n_5} \left( ψ^a j_a - \frac{1}{3 n_5} f_{abc} ψ^a ψ^b ψ^c + χ^a k_a - \frac{1}{3 n_5} f_{abc} χ^a χ^b χ^c \right) + i\, λ_j ∂Y^j ,   (3.46)
and the resulting central charge is compensated by the usual bc and βγ ghost systems, leading to the BRST charge
Q = \oint dz \left[ c\, (T + T_{βγ}) - γ\, G + c(∂c)b - \frac{1}{4} b γ^2 \right] .   (3.47)
Here T βγ is the energy-momentum tensor of the βγ system, which is bosonized as
β = e^{-ϕ} ∂ξ ,  \qquad  γ = η\, e^{ϕ} ,   (3.48)
where ϕ(z)ϕ(w) \sim -\ln(z-w), the boson ϕ has background charge 2, and ξ(z)η(w) \sim (z-w)^{-1}.
For computational purposes it is useful to also bosonize the rest of the fermions [18,37]. We thus define (canonically normalized) bosonic fields H_I with I = 1, \ldots, 5, and write
\hat{H}_I = H_I + π \sum_{J<I} N_J ,  \qquad  N_J ≡ i∂H_J ,   (3.49)
where the number operators N I are introduced in order to keep track of the cocycle factors, namely
e^{ia\hat{H}_I}\, e^{ib\hat{H}_J} = e^{ib\hat{H}_J}\, e^{ia\hat{H}_I}\, e^{iπab} ,  \quad  if  I > J .   (3.50)
We bosonize as
ψ^± = \sqrt{n_5}\, e^{±i\hat{H}_1} ,  \quad  χ^± = \sqrt{n_5}\, e^{±i\hat{H}_2} ,  \quad  λ^6 ± iλ^7 = e^{±i\hat{H}_4} ,  \quad  λ^8 ± iλ^9 = e^{±i\hat{H}_5} ,   (3.51a)
ψ^3 = \frac{\sqrt{n_5}}{2} \left( e^{i\hat{H}_3} - e^{-i\hat{H}_3} \right) ,  \qquad  χ^3 = \frac{\sqrt{n_5}}{2} \left( e^{i\hat{H}_3} + e^{-i\hat{H}_3} \right) ,   (3.51b)
where \hat{H}_I^\dagger = \hat{H}_I for I ≠ 3 and \hat{H}_3^\dagger = -\hat{H}_3. Then we have
i∂\hat{H}_1 = \frac{1}{n_5} ψ^+ ψ^- ,  \qquad  i∂\hat{H}_2 = \frac{1}{n_5} χ^+ χ^- ,  \qquad  i∂\hat{H}_3 = \frac{2}{n_5} ψ^3 χ^3 ,   (3.52a)
i∂\hat{H}_4 = iλ^6 λ^7 ,  \qquad  i∂\hat{H}_5 = iλ^8 λ^9 .   (3.52b)
The phases in (3.50) ensure that bosonized fermions anticommute, and will be important when working with states in the Ramond sector. From now on we will simply omit the hats, and explicitly include the phase factors when they are needed. The spacetime supercharges can be written as:
Q_ε = \oint dz\; e^{-ϕ/2}\, S_ε ,  \qquad  S_ε = \exp\left( \frac{i}{2} \sum_{I=1}^{5} ε_I H_I \right) ,   (3.53)
where S ε are spin fields and ε I = ±1. Imposing BRST invariance -where the relevant contributions come from the f abc ψ a ψ b ψ c and f abc χ a χ b χ c pieces of G in (3.46) -and mutual locality (chiral GSO) leads to the conditions
\prod_{I=1}^{3} ε_I = \prod_{I=1}^{5} ε_I = 1 .   (3.54)
In the holomorphic sector this gives the expected four 'ordinary' supercharges and four 'superconformal' supercharges. The same applies in the antiholomorphic sector, giving the total 16 real supercharges of global AdS 3 × S 3 [18]. For later use, let us also recall that the R-symmetry of the boundary theory is generated on the worldsheet by the SU(2) currents. More precisely, the zero modes of the spacetime R-currents are given by the integrated worldsheet currents [18], i.e.
J^a_0 = \oint dz\; K^a(z) .   (3.55)
Consequently, the holomorphic R-charge in the holographic CFT is identified with m', the eigenvalue of K^3. It is for this reason that we used the notation m' in Sections 2.3 and 3.1.
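The counting implied by Eq. (3.54) is easily made explicit: out of the 32 candidate spin fields S_ε, the two conditions leave 8 holomorphic supercharges, and together with the antiholomorphic sector one recovers the 16 real supercharges of global AdS3 × S3 quoted above. A trivial enumeration (our own sketch):

from itertools import product
from math import prod

# Spin fields S_epsilon with epsilon_I = +-1, I = 1..5, subject to Eq. (3.54):
# prod(eps_1..eps_3) = 1 and prod(eps_1..eps_5) = 1.
surviving = [eps for eps in product((+1, -1), repeat=5)
             if prod(eps[:3]) == 1 and prod(eps) == 1]

print(len(surviving))        # -> 8 holomorphic supercharges
print(2*len(surviving))      # -> 16 real supercharges in total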
Vertex operators and two-point functions
We now discuss physical vertex operators and their two-point functions, both in NS and R sectors. This section is largely review, though we also give explicit expressions for some R sector operators that to our knowledge have not appeared before in the literature.
Our main interest is in worldsheet operators that correspond to chiral primaries of the holographic CFT. We thus focus on states belonging to the discrete representations of SL(2,R). We also discuss the role of SL(2,R) and SU(2) spectral flows in the string theoretical construction. These vertex operators and their two-point functions will be used as building blocks for constructing the vertex operators and two-point functions of the null-gauged models in Section 4.
NS sector
The unflowed NS-NS sector was considered in [77], see also [37]. We continue to suppress antiholomorphic parts of the SL(2, R) and SU(2) vertex operators V j,m,m and V j ,m ,m . The complete NSNS vertex is obtained by including the antiholomorphic fermions and ghosts.
We work in the canonical "−1" ghost picture, and consider only states with vanishing momentum in the T4 directions. Then the (holomorphic part of the) BRST invariant states with up to a single fermionic excitation include the tachyon (which is projected out by GSO),
T_{j,m,j',m'} = e^{-ϕ}\, V_{j,m} V_{j',m'} ,   (3.56)
and the spacetime vectors (ε = ±1, and recall i = 6, \ldots, 9)
V^i_{j,m,j',m'} = e^{-ϕ}\, λ^i\, V_{j,m} V_{j',m'} ,   (3.57a)
W^ε_{j,m,j',m'} = e^{-ϕ}\, (ψ V_j)_{j+ε,m}\, V_{j',m'} ,   (3.57b)
X^ε_{j,m,j',m'} = e^{-ϕ}\, V_{j,m}\, (χ V_{j'})_{j'+ε,m'} ,   (3.57c)
where we have introduced the linear combinations
(ψ V_j)_{j+ε,m} = c^r_ε\, ψ^r\, V_{j,m-r} ,  \qquad  (χ V_{j'})_{j'+ε,m'} = d^r_ε\, χ^r\, V_{j',m'-r} ,   (3.58)
where a summation over r = +1, −1, 0 is implicit, "0" corresponding to the "3" direction of the respective algebras. These combine the products of bosonic primaries and free fermions into fields of total spins J = j + ε and J' = j' + ε under the action of the supersymmetric currents J^a and K^a [77]. The Clebsch-Gordan coefficients are given in our conventions by
c^r_- = \left( \tfrac{1}{2},\; \tfrac{1}{2},\; -1 \right) ,  \qquad  d^r_+ = \left( -\tfrac{1}{2},\; \tfrac{1}{2},\; 1 \right) ,
c^r_+ = \left( \tfrac{1}{2}(j+m)(j+m-1),\; \tfrac{1}{2}(j-m)(j-m-1),\; (j+m)(j-m) \right) ,   (3.59)
d^r_- = \left( \tfrac{1}{2}(j'-m')(j'-m'+1),\; -\tfrac{1}{2}(j'+m')(j'+m'+1),\; (j'-m')(j'+m') \right) .
\frac{1}{2} + \frac{1}{2} - \frac{j(j-1)}{n_5} + \frac{j'(j'+1)}{n_5} = 1 ,   (3.60)
and is solved by j = j' + 1 (or its reflection under j → 1 − j), thus implying that we are dealing with bosonic primaries in the discrete representations of SL(2,R)_k.
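Solving the mass-shell condition (3.60) is elementary, but it is a useful sanity check to do it symbolically (an illustrative sketch, with jp standing for j'):

import sympy as sp

j, jp, n5 = sp.symbols("j jp n5", positive=True)

# Virasoro condition (3.60) for the unflowed NS vertex operators
condition = sp.Rational(1, 2) + sp.Rational(1, 2) - j*(j - 1)/n5 + jp*(jp + 1)/n5 - 1

# Roots: j' = j - 1, plus the image under the reflection j -> 1 - j
print(sp.solve(condition, jp))     # -> [j - 1, -j] (up to ordering)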
Let us briefly discuss the worldsheet two-point functions involving these operators. The different bosonic sectors factorize and the fermions are free, so we can express the results directly in terms of the non-trivial contributions coming from the bosonic SL(2,R) WZW model, namely Eq. (3.24). By construction, the only non-vanishing two-point functions are the diagonal ones. For the 6D scalars coming from the NS-NS sector polarizations on the
T^4, (3.57a), we have
\langle V^{i\bar i}_1 V^{j\bar j}_2 \rangle = \langle V_1 V_2 \rangle \langle V'_1 V'_2 \rangle\, \langle e^{-ϕ_1} e^{-ϕ_2} \rangle\, \langle λ^i_1 λ^j_2 \rangle × c.c. = \langle V_1 V_2 \rangle \langle V'_1 V'_2 \rangle × \frac{δ^{ij} δ^{\bar i \bar j}}{|z_{12}|^{4}} .   (3.61)
Since we are dealing with discrete representations, the contact term in (3.24) vanishes, thus imposing j_1 = j_2 ≡ j. As discussed above Eq. (3.25), the conformal weight in the holographic CFT is to be identified with the SL(2,R) spin, i.e. h = j. On the other hand, the R-charge is given by m', with |m'| ≤ j' = j − 1. Thus h > |m'|, so V^i cannot correspond to a chiral primary of the HCFT. We now turn to the operators introduced in the second and third line of (3.57). When computing correlators of two W states, we must deal with expressions of the form
\langle (ψ_1 V_{j_1})_{j_1+ε_1,m_1}\, (ψ_2 V_{j_2})_{j_2+ε_2,m_2} \rangle = \sum_{r_1, r_2} c^{r_1}_{ε_1} c^{r_2}_{ε_2}\, \langle ψ^{r_1}_1 ψ^{r_2}_2 \rangle\, \langle V_{j_1,m_1-r_1} V_{j_2,m_2-r_2} \rangle .   (3.62)
We use the action of the bosonic currents (3.17) to express \langle V_{j_1,m_1-r_1} V_{j_2,m_2-r_2} \rangle in terms of \langle V_{j_1,m_1} V_{j_2,m_2} \rangle, insert the coefficients (3.59), and perform the sum. Recalling the shorthand V_i ≡ V_{j_i,m_i}, V'_i ≡ V_{j'_i,m'_i}, we obtain
\langle W^{ε_1} W^{ε_2} \rangle = \frac{n_5^2}{4|z_{12}|^4} \langle V_1 V_2 \rangle \langle V'_1 V'_2 \rangle × \begin{cases} j_1 (1-2j_1)(j_1^2 - m_1^2) × c.c. & ε_1 = ε_2 = 1 \\ (j_1-1)(1-2j_1)\left( (j_1-1)^2 - m_1^2 \right) × c.c. & ε_1 = ε_2 = -1 \\ 0 & ε_1 = -ε_2 \end{cases}   (3.63)
From Eq. (3.24) we find that the coefficients are exactly those needed to produce the shift j → j + ε in the two-point function. Hence, Eq. (3.23) shows that the weight of the corresponding holographic dual is h = j + ε [36]. In particular, the operator W^- with maximal SU(2) charge has h = j − 1 = m', and thus corresponds to a chiral primary operator of the holographic CFT. The computation of the \langle X X \rangle correlators is analogous; we obtain
\langle X^{ε_1} X^{ε_2} \rangle = \frac{n_5^2}{4|z_{12}|^4} \langle V_1 V_2 \rangle \langle V'_1 V'_2 \rangle × \begin{cases} (j'_1+1)(1+2j'_1)\left( (j'_1+1)^2 - m'^2_1 \right) × c.c. & ε_1 = ε_2 = 1 \\ j'_1 (1+2j'_1)(j'^2_1 - m'^2_1) × c.c. & ε_1 = ε_2 = -1 \\ 0 & ε_1 = -ε_2 = ±1 \end{cases}   (3.64)
For X^+ at highest SU(2) weight we have h = j = j' + 1 = m', leading to a second family of spacetime chiral states. We will discuss the corresponding operators in the holographic CFT and fix their normalization below.
So far, we have constructed chiral operators whose boundary weights h = j − 1 and h = j are bounded from above by h < n 5 +1 2 , see Eq. (3.16). However, in the D1D5 CFT one can have chiral primaries in n-twisted sectors with n up to n 1 n 5 , and where h grows linearly with n, as discussed around Eq. (3.6). Thus, it seems that so far we are missing most of the heavier chiral operators. However, as discussed in [38], such states lie in the sectors of the worldsheet theory with non-trivial spectral flow charges, as we now review.
In the supersymmetric theory, spectrally flowed primary operators are built by combining the bosonic flowed primaries introduced in Eqs. (3.27) and (3.40) with fermionic excitations. The bosons H I allow us to express the spectral flow operation in the fermionic sectors of SL(2,R) and SU(2), Eq. (3.43), in the following form,
ψ^±_ω = ψ^±\, e^{-iωH_1} ,  \qquad  χ^±_{ω'} = χ^±\, e^{iω'H_2} ,   (3.65)
while the other fermions remain unchanged. Indeed, the OPEs between the operators in (3.65) and the fermionic currents are analogous to those in (3.27) and (3.40). Once factors of e^{-iωH_1} and e^{iω'H_2} are included, the corresponding weights take the form in (3.28) and (3.39), with k − 2 = n_5 = k' + 2.
In principle, one could simply ignore the possibility of including spectral flow in SU(2) since it does not give any new representations. However, for discrete series states it is useful to do so in order to solve the modified Virasoro condition, as discussed in [26,29]. We use the spectral flow operator with equal amount of spectral flow in SL(2,R) and SU (2),
exp −iωH 1 + ω n 5 + 2 2 φ + iωH 2 + iω n 5 − 2 2 φ , (3.66)
which is mutually local with the supercharges, thus producing flowed singly-excited states which will also survive the GSO projection. The corresponding Virasoro condition for the flowed vertex operators is
\frac{1}{2} + \frac{1}{2} - \frac{j(j-1)}{n_5} - mω - \frac{n_5}{4}ω^2 + \frac{j'(j'+1)}{n_5} + m'ω + \frac{n_5}{4}ω^2 = 1 .   (3.67)
We seek to solve this for general n_5. We thus impose j = j' + 1 as well as m = m'. The latter constraint is quite restrictive, since by definition we have
V-type operators:  |m| ≥ j ,  |m'| ≤ j' = j - 1 ,
W-type operators:  |m| ≥ j - 1 ,  |m'| ≤ j' = j - 1 ,
X-type operators:  |m| ≥ j ,  |m'| ≤ j' + 1 = j .
Consequently, our only candidates are highest/lowest-weight W^−-type operators with m = m' = j' = j − 1 and X^+-type operators with m = m' = j' + 1 = j. Their explicit expressions are given by
W ω j = e −ϕ ψ − e −iωH 1 e iωH 2 V ω j,j V ω j−1,j−1 , (3.68a) X ω j = e −ϕ e −iωH 1 χ + e iωH 2 V ω j,j V ω j−1,j−1 . (3.68b)
These flowed states are also BRST-invariant [38] since the supercurrent G can be written in the flowed frame as
G(z) = \tilde{G}(z) + \frac{ω}{z} \left( χ^3 - ψ^3 \right) ,   (3.69)
such that the extra terms on the RHS of this equation act trivially on highest/lowest weight states.
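It is also straightforward to verify symbolically that the flowed mass-shell condition (3.67) is solved identically, for any ω and n_5, once j' = j − 1 and m' = m are imposed (again a sketch in our own notation, with jp and mp standing for j' and m'):

import sympy as sp

j, m, w, n5 = sp.symbols('j m omega n5', positive=True)

def flowed_virasoro(j, m, jp, mp, w, n5):
    """LHS minus RHS of the flowed Virasoro condition, Eq. (3.67)."""
    return (sp.Rational(1, 2) + sp.Rational(1, 2)
            - j*(j - 1)/n5 - m*w - n5*w**2/4
            + jp*(jp + 1)/n5 + mp*w + n5*w**2/4 - 1)

# With j' = j - 1 and m' = m the condition holds for arbitrary omega and n5
print(sp.simplify(flowed_virasoro(j, m, j - 1, m, w, n5)))   # -> 0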
The two-point functions of these spectrally flowed operators can be determined straightforwardly from the corresponding bosonic ones. This is because the latter impose ω 2 = −ω 1 , such that the charge conservation rules for the H I exponentials are automatically satisfied. We define the conjugate operatorsŴ ω ,X ω , and obtain
\langle W^{ω_1}_1 \hat{W}^{ω_2}_2 \rangle = \langle X^{ω_1}_1 \hat{X}^{ω_2}_2 \rangle = \langle V^{ω_1}_{j_1,j_1} V^{ω_2}_{j_1,-j_1} \rangle\, \langle V^{ω_1}_{j_1-1,j_1-1} V^{ω_2}_{j_1-1,1-j_1} \rangle\, \frac{n_5^2}{|z_{12}|^{4(1+ω_1+ω_1^2)}} .   (3.70)
Spectral flowed primaries are always annihilated by J − 0 , and are thus lowest-weight with respect to the SL(2,R) zero mode algebra. After the supersymmetric spectral flow (3.66), similarly to the bosonic transformations (3.29), (3.40b), the spectral flowed primaries have quantum numbers h and m that have increased by n 5 2 ω from their values for the unflowed vertex operators. We thus conclude that these vertex operators correspond exactly to the additional chiral operators we were looking for, with
W^ω_j :  h = m' = j - 1 + \frac{n_5}{2}ω ,  \qquad  X^ω_j :  h = m' = j + \frac{n_5}{2}ω .   (3.71)
These quantum numbers extend to large values, by raising ω. In the holographic CFT there are states with conformal weight of order n 1 n 5 , however in our worldsheet models n 1 is of order g −2 s (c.f. (2.19)) and we work in perturbation theory in g s , so finite n 1 physics is not accessible. Moreover, when considering holographic CFT operators with conformal weight of order n 1 n 5 , the dual bulk configuration is not a light probe on the original background, but rather a different background. The rest of the modes associated to such boundary operators are obtained by acting with the global current J + 0 as in Eq. (3.30), and do not have simple expressions in the m-basis since they are not flowed primaries.
Ramond sector
We now review the Ramond sector physical operators of the worldsheet theory, in the m-basis. To our knowledge, this construction has only been carried out explicitly in the literature for the case of highest/lowest-weight states [37,38]; we shall present explicit expressions for more general Ramond sector operators.
We will make use of the spin fields introduced in (3.53), and distinguish the slightly more involved AdS 3 × S 3 sector, for which we write the relevant factors as
S_{ε_1 ε_2 ε_3} = e^{\frac{i}{2}(ε_1 H_1 + ε_2 H_2 + ε_3 H_3)} .   (3.72)
We denote the AdS3 × S3 chirality by ε ≡ ε_1 ε_2 ε_3. We shall implement this by considering ε_3 to be fixed as ε_3 = εε_1ε_2. We impose the chiral GSO projection via the mutual locality condition \prod_{I=1}^{5} ε_I = 1, and we implement this by fixing ε_5 = εε_4. We introduce a generic linear combination of bosonic primaries and spin fields of AdS3 × S3 of fixed chirality,
(SVV)^ε_{J,m,J',m'} = \sum_{ε_1,ε_2 = ±1} f^ε_{ε_1ε_2}\, S_{ε_1ε_2ε_3}\, V_{j,m-\frac{ε_1}{2}}\, V_{j',m'-\frac{ε_2}{2}} ,  \qquad  ε_3 = εε_1ε_2 ,   (3.73)
where the total spins (J, J') will be related to (j, j') in various ways momentarily. Note that for highest/lowest weight states, there may be only one allowed choice of ε_1 and/or ε_2, as we shall see in an example below. In the canonical "−1/2" picture, the Ramond sector vertex operators then take the form
Y^{ε,ε_4}_{J,m,J',m'} = e^{-\frac{ϕ}{2}}\, (SVV)^ε_{J,m,J',m'}\, e^{\frac{iε_4}{2}(H_4 + εH_5)} .   (3.74)
The Clebsch-Gordan coefficients f^ε_{ε_1ε_2} are computed by requiring that the Y operators transform appropriately under the action of the currents J^±, K^±. In our conventions, this gives four linear combinations. In the equations below, the first bracket specifies how (J, J') are related to j and j'; for instance, for case A, (J = j − 1/2, J' = j' + 1/2). For each case we write the coefficients as a list, f^ε_{ε_1ε_2} = \left( f^ε_{++}, f^ε_{+-}, f^ε_{-+}, f^ε_{--} \right). We obtain
A : (j - \tfrac{1}{2}, j' + \tfrac{1}{2}) ,  \quad  f^{ε,A}_{ε_1ε_2} = \left( 1,\; i,\; ε,\; εi \right) ,   (3.75a)
B : (j + \tfrac{1}{2}, j' + \tfrac{1}{2}) ,  \quad  f^{ε,B}_{ε_1ε_2} = \left( f^B_1,\; i f^B_1,\; ε f^B_2,\; εi f^B_2 \right) ,   (3.75b)
C : (j - \tfrac{1}{2}, j' - \tfrac{1}{2}) ,  \quad  f^{ε,C}_{ε_1ε_2} = \left( f^C_1,\; -i f^C_2,\; ε f^C_1,\; ε(-i) f^C_2 \right) ,   (3.75c)
D : (j + \tfrac{1}{2}, j' - \tfrac{1}{2}) ,  \quad  f^{ε,D}_{ε_1ε_2} = \left( f^B_1 f^C_1,\; (-i) f^B_1 f^C_2,\; ε f^B_2 f^C_1,\; ε(-i) f^B_2 f^C_2 \right) ,   (3.75d)
where
f^B_1 = m + j - \tfrac{1}{2} ,  \quad  f^B_2 = m - j + \tfrac{1}{2} ,  \quad  f^C_1 = j' - m' + \tfrac{1}{2} ,  \quad  f^C_2 = j' + m' + \tfrac{1}{2} .   (3.76)
We note that in all cases we have f^+_{ε_1ε_2} = ε_1 f^-_{ε_1ε_2}.
In addition, and using j = j' + 1, BRST-invariance gives four equations for each chirality, out of which only two are linearly independent, namely
f ε −+ = 1 j + m − 1 2 f ε ++ ε m + m − i f ε +− j − m + 1 2 , (3.77a) f ε −− = 1 j + m − 1 2 i f ε ++ j + m + 1 2 + f ε +− ε m − m . (3.77b)
These are satisfied by only half of the states in Eq. (3.75). The physical states in the "− 1 2 " picture are given by the A and D states with ε = 1 (and either choice of ε 4 = ±1), plus the B and C states with ε = −1 (and again either sign of ε 4 ), making the correct eight physical polarizations.
The full list of expressions in (3.75) is useful in order to construct the representatives of such operators in the "− 3 2 " ghost picture, necessary for computing two-point functions. To obtain the "− 3 2 " picture operators, we make an educated guess for their expressions, and then apply the picture raising operator, i.e.
Φ^{(-\frac{1}{2})}(w) = \lim_{z→w} \left( e^{ϕ} G \right)(z)\, Φ^{(-\frac{3}{2})}(w) .   (3.78)
In order to get a non-trivial propagator, and up to an overall constant, the appropriate guess is that they are given by the states with the same spins but opposite chirality. Explicitly, we have
Y ε,ε 4 (− 3 2 ) J,m,J ,m = ± √ n 5 2j − 1 e − 3ϕ 2 SV V −ε J,m,J ,m e iε 4 2 (H 4 +εH 5 ) , (3.79)
where the negative (positive) sign holds for the cases A and B (C and D).
We can now compute the two-point functions in the unflowed Ramond sector. Only diagonal pairings are non-zero, by construction. Denoting the antiholomorphic sector contributions by "c.c.", we obtain
Y ε 4 ,(− 1 2 ) [A] Y −ε 4 ,(− 3 2 ) [A] = n 5 |z 12 | 4 V 1 V 2 V 1 V 2 (2j − 1) (j − m − 1 2 )(j + m + 1 2 ) × c.c. , (3.80a) Y ε 4 ,(− 1 2 ) [B] Y −ε 4 ,(− 3 2 ) [B] = n 5 |z 12 | 4 V 1 V 2 V 1 V 2 (2j − 1)(j + m − 1 2 ) (j + m + 1 2 ) × c.c. , (3.80b) Y ε 4 ,(− 1 2 ) [C] Y −ε 4 ,(− 3 2 ) [C] = n 5 |z 12 | 4 V 1 V 2 V 1 V 2 (2j − 1)(j − m + 1 2 ) (m − j + 1 2 ) × c.c. , (3.80c) Y ε 4 ,(− 1 2 ) [D] Y −ε 4 ,(− 3 2 ) [D] = n 5 |z 12 | 4 V 1 V 2 V 1 V 2 (2j − 1)(m + j − 1 2 )(j − m + 1 2 ) × c.c. , (3.80d) where here V 1 V 2 = V m 1 −1/2 V −m 1 +1/2 and V 1 V 2 = V m 1 −1/2 V −m 1 +1/2 .
As expected, the coefficients resulting from the linear combinations effectively shift the spins j → J and j′ → J′ in the gamma functions coming from the bosonic correlators. Among the states described above, the only chiral one corresponds to the SU(2) highest-weight operator of type A. To simplify notation and for later convenience, we suppress the SU(2) labels and use the label j rather than J (here J = j − 1/2). This operator has quantum numbers
Y^{+,ε_4}_{j,m[A]} :    h = J = j − 1/2 = j′ + 1/2 = J′ = m′ .   (3.81)
The explicit form of this operator is simpler than the generic Ramond sector operator, and is given by
Y^{+,ε_4}_{j,m[A]} = e^{−ϕ/2} ( S_{+++} V_{j,m−1/2} + S_{−+−} V_{j,m+1/2} )_{j−1/2, m} V′_{j−1,j−1} e^{iε_4 (H_4 + H_5)/2} .   (3.82)
As in the NS sector, the rest of the chiral operators belong to the spectrally flowed sectors. These are obtained by acting with the spectral flow operator (3.66). From the flowed Virasoro condition, similar to Eq. (3.67), the resulting operators must have m = m (together with the relations in (3.81)), and so only the second term in (3.82) is non-vanishing, giving rise to the flowed Ramond operators (we now suppress also the label m = J = j − 1/2)
Y +,ε 4 ,ω j [A] = e − ϕ 2 S ω −+− V ω j,j V ω j−1,j−1 e iε 4 2 (H 4 +H 5 ) , (3.83) where S ω −+− ≡ e i 2 [(1+2ω)(−H 1 +H 2 )−H 3 ] . (3.84)
The corresponding two-point function is equivalent to the highest-weight case of (3.80a), up to the usual additional δ ω 1 ,−ω 2 factor.
Holographic dictionary for light chiral primaries
We have reviewed three sets of m-basis vertex operators corresponding to chiral primaries of the holographic CFT. Two sets are in the NS sector: W − j,m and X + j,m , together with the corresponding spectral flowed operators W ω j and X ω j . The third set is in the Ramond sector, Y +,ε 4 j,m[A] and its spectral flow, Y +,ε 4 ,ω j [A] . From now on we shall omit the label A and the AdS 3 × S 3 chirality = +, denoting this operator by Y ω,ε 4 j . Recall that in the spectral flowed sectors, the remaining states in the zero-mode algebra are obtained by acting with J + 0 , as discussed around Eqs. (3.30) and (3.71).
In order to reconstruct the corresponding local operators of the spacetime CFT, we need to combine such modes by going to the x-basis, as done in Eqs. (3.25) and (3.30) in the bosonic SL(2,R) model. 4 For the operators at hand, the sum over m in the analog of Eqs. (3.25) and (3.30) factorizes between fermionic and bosonic contributions, leading to expressions of the following form:
W ω j (x) = e −ϕ ψ ω (x)e iωH 2 V ω j (x)V ω j−1,j−1 , (3.85a) X ω j (x) = e −ϕ ψ ω−1 (x)e i(ω+1)H 2 V ω j (x)V ω j−1,j−1 , (3.85b) Y ω,ε 4 j (x) = e − ϕ 2 S ω (x)V ω j (x)V ω j−1,j−1 e iε 4 2 (H 4 +H 5 ) . (3.85c)
Here ψ^ω(x) and S^ω(x) are defined as follows. First, note that the fermions ψ^a introduced in (3.41), which generate an affine sl(2,R)_{−2} algebra with level k_ψ = −2, constitute affine primaries with spin J_ψ = −1, on which, however, the zero-mode currents act as in (3.17) but with J → 1 − J,^5 i.e. J^±_0 |J, m⟩ = (m ± J) |J, m ± 1⟩. As a consequence, and in contrast with what happens with bosonic primaries, the action of J^+_0 on ψ^−, the lowest-weight state, is truncated. Identifying ψ^{ω=0}(0) = ψ^−, the resulting x-basis operator has only three terms:
ψ ω=0 (x) ≡ ψ − (x) = e xJ + 0 ψ − e −xJ + 0 = ψ − − 2xψ 3 + x 2 ψ + . (3.86)
Of course, we already knew the action of the currents on ψ a from (3.41), but the advantage of the above discussion is that it extends to the spectrally flowed sectors. Indeed,
ψ ω (0) = √ n 5 e −i(1+ω)H 1 is the lowest-weight component of a spin J ω ψ = −1 − ω field. The corresponding x-basis operator is of the form ψ ω (x) ≡ √ n 5 e xJ + 0 e −i(1+ω)H 1 e −xJ + 0 ,(3.87)
and contains 1 − 2J^ω_ψ = 2ω + 3 terms.

^4 In this paper we are interested in operators of fixed R-charge. Hence, and in contrast to [37,38], we do not introduce isospin variables in the SU(2) sector.
^5 This convention is perhaps more natural from the SL(2,R) point of view, but we have decided to employ the conventions used in the most relevant literature for us, i.e. [22,24,37,38].
Similarly, the spin field (3.84) is the lowest-weight component of a representation with SL(2,R) and SU(2) spins (J^ω_S, J′^ω_S) = (−1/2 − ω, 1/2 + ω), such that the x-basis operator is

S^ω(x) ≡ e^{x J^+_0} S^ω_{−+−} e^{−x J^+_0} ,   (3.88)
and contains 2(1 + ω) terms. We thus have three types of vertices, W ω j (x, z), X ω j (x, z) and Y ω,± j (x, z), which correspond to local chiral primary operators of the boundary theory. As mentioned before, these should be completed with analogous antiholomorphic excitations, which have been omitted in the presentation. As discussed around Eqs. (3.71) and (3.81), their boundary weights are given by
h W ω j = j ω − 1 , h Y ω,± j = j ω − 1 2 , h X ω j = j ω ,(3.89)
where j = j + 1 and so
j ω = j + n 5 2 ω, j = 1, 3 2 , . . . , n 5 2 , ω = 0, 1, . . . . (3.90)
Up to normalization, which will be fixed shortly, these operators are identified with the chiral primaries of the holographic CFT listed in Eq. (3.10). In the Y^{ω,ε_4}_j tower, ε_4 = ± is identified with the boundary quantum number Ȧ in (3.9). Note also that the W^ω_j tower starts with the identity operator of the boundary theory.^6 The dictionary is summarized in Table 1.

Table 1. Dictionary between worldsheet vertex operators and chiral primaries of the holographically dual CFT. Here j_ω = j + (n_5/2)ω, j = 1, 3/2, ..., n_5/2, and ω = 0, 1, ...

  Worldsheet      Weight h       Twist n       Dual Operator
  W^ω_j           j_ω − 1        2j_ω − 1      O^−_n
  Y^{ω,ε_4}_j     j_ω − 1/2      2j_ω − 1      O^Ȧ_n
  X^ω_j           j_ω            2j_ω − 1      O^+_n
Although most of the chiral primaries of the holographic CFT are accounted for by considering the ranges given in (3.90), it is known that those belonging to the n-twisted sectors with n = p n_5, p ∈ N, are still missing [37,38]. These would correspond to operators sitting at the boundary of the allowed range of j in Eq. (3.16), at which the spectrum becomes degenerate and the continuous representations appear [16,70]. The absence of these states in the worldsheet spectrum has been related to the fact that the NS5-F1 model sits at a singular point in the moduli space where all RR modes are turned off [79].

^6 As discussed in [19], this is subtle, since the operator is actually a spectral-flow-sector dependent constant. This subtlety is related to the fact that spectral flow charge is not conserved in n-point functions with n ≥ 3, and was resolved in [78] by performing a Legendre transform to the microcanonical ensemble, in which the total number of fundamental strings is fixed.
The twist n of the holographic CFT operators is identified as [37,38]

n = 2j − 1 + n_5 ω .   (3.91)
Let us make a side comment regarding the limit in which there is only a single NS5 brane sourcing the background, n 5 = 1. This model is special in that it corresponds to the tensionless limit of the theory. It has to be treated with care since the usual RNS formalism outlined above breaks down due to the fact that the bosonic SU(2) level would become negative. It was shown in [80,81] that for n 5 = 1 the worldsheet theory is exactly dual to the supersymmetric symmetric orbifold (T 4 ) n 1 /S n 1 . In this model, the discrete series is absent, the spectrum truncates to j = 1/2, physical states have ω > 0, and the spectral flow charge is identified with n, i.e. n = ω. Eq. (3.91) is the known generalization of this relation for n 5 > 1.
In order to fix the normalization of the operators, we compute their two-point functions. Making use of
ψ ω 1 (x 1 )ψ ω 2 (x 2 ) × c.c. = x 2(ω 1 +1) 12 e −i(1+ω 1 )H 1 e i(1−ω 2 )H 1 × c.c. = δ ω 1 ,−ω 2 |x 12 | 4(ω 1 +1) |z 12 | 2(ω 1 +1) 2 , S ω 1 ,+ (x 1 )S ω 2 ,− (x 2 ) × c.c. = x 2ω 1 +2 12 z 12 S ω 1 −+− S ω 2 +−+ × c.c. = δ ω 1 ,−ω 2 |x 12 | 4ω 1 +2 |z 12 | 4ω 1 (ω 1 +1)+ 5 2 ,
we obtain
W ω 1 j 1 (x 1 , z 1 )W ω 2 j 2 (x 2 , z 2 ) = n 2 5 B(j 1 ) 16 δ(j 1 − j 2 )δ ω 1 ,−ω 2 |x 12 | 4(j 1 −1)+2n 5 ω 1 |z 12 | 4 , (3.92) X ω 1 j 1 (x 1 , z 1 )X ω 2 j 2 (x 2 , z 2 ) = n 2 5 B(j 1 ) 16 δ(j 1 − j 2 )δ ω 1 ,−ω 2 |x 12 | 4j 1 +2n 5 ω 1 |z 12 | 4 , (3.93) Y ω 1 ,± j 1 (x 1 , z 1 )Y ω 2 ,∓ j 2 (x 2 , z 2 ) = n 5 B(j 1 ) (2j 1 − 1 + n 5 ω 1 ) 2 δ(j 1 − j 2 )δ ω 1 ,−ω 2 |x 12 | 4j 1 +2(n 5 ω 1 −1) |z 12 | 4 . (3.94)
Here we have used that in the spectrally flowed R sectors the denominator in the extra factor of the corresponding vertex operator in the "− 3 2 " picture is shifted as 2j − 1 → 2j − 1 + n 5 ω as compared to (3.79). Moreover, the V conf factor in Eq. (3.31) is cancelled by the pole appearing in (3.24) upon setting m 1 = j 1 .
The string two-point function is then obtained by including an extra factor g −2 s ∼ n 1 /n 5 as usual in string perturbation theory, fixing z 1 = 0 and z 2 = 1, and dividing by a volume of the conformal group that leaves such worldsheet insertions fixed. As discussed in [19,24], this cancels the divergence coming from δ(j 1 − j 2 ), leaving a constant j-dependent factor of the form (2j − 1 + n 5 ω). As a consequence, the holographic dictionary reads
O −− n (x,x) ↔ A NS (j, ω)W ω j (x,x), O ++ n (x) ↔ A NS (j, ω)X ω j (x,x), (3.95) OȦḂ n ↔ A R (j, ω)Y ω,ȦḂ j (x,x),(3.96)
with n related to the worldsheet quantum numbers as in (3.91) and where [36,37]

A_NS(j, ω) = 4 g_s / √( n_5^2 B(j) (2j − 1 + n_5 ω) ) ,    A_R(j, ω) = g_s √( (2j − 1 + n_5 ω) / ( n_5 B(j) ) ) .   (3.97)
Of course, this identification is only expected to hold at small string coupling, i.e. for n_1 ≫ n_5. Analysis and comparison of boundary and worldsheet three-point functions was carried out in [36,37,82].
Null-gauged description and worldsheet spectrum
We now proceed to describe massless excitations in the worldsheet theories associated with the heavy backgrounds we study. We describe in detail how to obtain the low-lying physical states via BRST quantization in these null-gauged models, both in the NSNS and RR sectors. In the subset of the backgrounds that preserve some supersymmetry, we discuss the BPS light excitations.
In the full null-gauged models, these massless vertex operators describe linearized fluctuations around the full asymptotically linear dilaton solutions describing the heavy states, and so can be thought of as worldsheet representatives of light states belonging to the Little String Theory living on the NS5 branes.
Our main interest in this work will be computing correlators in the IR AdS 3 limit, in which we have reviewed the fact that the backgrounds are related to orbifolded AdS 3 × S 3 × T 4 via a spacetime spectral flow large coordinate transformation. We describe how the AdS 3 limit can be taken on the worldsheet vertex operators. This leads to states that can be understood holographically, in the spacetime (fractionally) spectrally flowed frame defining the heavy background, as discussed around Eq. (2.27).
BRST quantization
We start by reviewing the quantization of the class of worldsheet coset models introduced in Section 2, which describe the propagation of superstrings in the JMaRT backgrounds and their (BPS and/or two-charge) limits [25,26,28,29].
Before gauging, we have the WZW model associated to the (10+2)-dimensional group manifold SL(2,R) × SU(2) × R t × U(1) y × U(1) 4 as introduced in (2.1). This is described simply by adding the extra time direction t and spatial circle y to the matter content employed in the previous section, together with the corresponding fermionic partners λ t and λ y . The latter are bosonized using a canonically normalised scalar H 6 as
λ t = 1 2 e iH 6 − e −iH 6 , λ y = 1 2 e iH 6 + e −iH 6 , (4.1) i∂H 6 = 2 λ t λ y , H † 6 = −H 6 . (4.2)
The holomorphic parts of their OPEs are
−t(z)t(w) ∼ y(z)y(w) ∼ − 1 2 log(z−w) , −λ t (z)λ t (w) ∼ λ y (z)λ y (w) ∼ 1 2 1 (z − w)
, Introducing P t,L = i∂t , P t,R = i∂t , P y,L = i∂y , and P y,R = i∂y, we gauge the chiral null currents 7 J = iJ = J 3 + l 2 K 3 + l 3 P t,L + l 4 P y,L ,J = iJ =J 3 + r 2K 3 + r 3 P t,R + r 4 P y,R , (4.5)
which are the quantum operator versions of the classical currents in Eq. (2.4). The supersymmetric partners of the currents J andJ are
λ = ψ 3 + l 2 χ 3 + l 3 λ t + l 4 λ y ,λ =ψ 3 + r 2χ 3 + r 3λ t + r 4λ y . (4.6)
To perform the null gauging, one introduces additional fermionic and bosonic first-order ghosts, denoted by (b̃, c̃) and (β̃, γ̃), with conformal weights ∆[c̃] = 0 and ∆[γ̃] = 1/2 [28]. The central charges c_{b̃c̃} = −2 and c_{β̃γ̃} = −1 cancel the additional matter contribution c_{ty} = 3. The (β̃, γ̃) system has no background charge and is bosonized via

β̃ = e^{−φ̃} ∂ξ̃ ,    γ̃ = η̃ e^{φ̃} .   (4.7)
We shall momentarily introduce a modified BRST charge that imposes invariance under the action of the null currents (4.5) and their supersymmetric partners (4.6). Physical operators in 9+1 dimensions will be given by states of the ungauged (10+2)-dimensional WZW model that survive the gauging procedure [28]. Of course, and as we shall see shortly, the Virasoro conditions and the expressions of the BRST-exact states will be modified accordingly. We consider a set of mutually local operators before the gaugings, i.e. we perform the analog of the GSO projection in the (10+2)-dimensional model. We thereby obtain a tachyon-free spectrum in the gauged models.
Underlying this procedure is the fact that for the case of chiral null gaugings, the Polyakov-Wiegmann identity allows one to rewrite the gauged action of the downstairs model into a form identical to that of the upstairs model in terms of new gauge-invariant variables [83,84], see also [29]. This is achieved at the level of the path integral by means of a field redefinition with a Jacobian that is almost trivial except for a factor which, when exponentiated, gives rise to the additional ghost fields described above.
Explicitly, physical operators in the coset model are defined by the cohomology classes of the BRST charge [28]

Q = ∮ dz [ c ( T + T_{βγ} + T_{β̃γ̃} ) + γ G + c̃ J + γ̃ λ ] + ghosts ,   (4.8)

where the last two terms implement the null-gauging procedure. Whether the resulting spectrum is supersymmetric or not depends on whether some linear combination(s) of the following supercharges are BRST-invariant [28],

Q_ε = ∮ dz e^{−(ϕ−φ̃)/2} S_ε ,    S_ε = exp( (i/2) Σ_{I=1}^{6} ε_I H_I ) .   (4.9)

We shall discuss the conditions for spacetime supersymmetry after we have analyzed more general Ramond sector vertex operators, around Eq. (4.30). For now, we emphasize that only a subset of the backgrounds we consider preserve some spacetime supersymmetry.
The unflowed NS sector
We now analyze physical NS sector states of the gauged models, focusing on states with no spectral flow charges in SL(2,R) or SU (2), and no winding charge around the y-circle. As usual, the lightest physical operators come with a single fermionic excitation on top of the tachyon state
T j,m,j ,m = e −ϕ V j,m V j ,m e i(−E t+Pyy) . (4.11)
Note that since t is a non-compact direction and ω y = 0, both E and P y are identical on the left and on the right sectors. For massless states, the L 0 andL 0 Virasoro constraints both read
0 = − j(j − 1) n 5 + j (j + 1) n 5 − 1 4 E 2 + 1 4 P 2 y . (4.12)
Moreover, operators are uncharged with respect to the null-currents J,J in (4.5) if and only if their quantum numbers are related by
0 = m + l 2 m + l 3 2 E + l 4 2 P y , 0 =m + r 2m + r 3 2 E + r 4 2 P y . (4.13)
We will work in the canonical "−1" picture for the ϕ ghost. On the other hand, the fact thatφ has background charge Qφ = 0 allows us to build NS states directly atφ-picture zero. BRST-closed operators must then have a vanishing second-order pole in their OPE with the supercurrent G, and vanishing first-order pole in their OPE with the fermionic current λ given in (4.6). As can be expected from the fact that the T 4 is untouched by the gaugings, the simplest solutions are the 6D scalars
V i j,m,j ,m = e −ϕ λ i V j,m V j ,m e i(−Et+Pyy) , i = 6, . . . , 9.
(4.14)
These are direct analogs of the global AdS 3 states defined in Eq. (3.57a). They were considered in detail in [26], and their energies were matched with those of the minimally coupled scalar perturbations on top of the JMaRT background as computed in supergravity. The remaining massless vertex operators will constitute the beginning of the main new results of this work. They are slightly more involved to construct, due to the fact that their polarization lies in a direction in which the null currents act non-trivially. An important consequence is that the raising/lowering operators J ± 0 and K ± 0 do not commute with the BRST charge Q anymore. So, unlike in global AdS 3 × S 3 × T 4 as reviewed in Section 3.2, physical states need not have definite SL(2,R) and SU(2) spins. They will, however, have definite projections m,m, m andm , and also well-defined energy E and momentum P y .
This situation is a consequence of the fact that the AdS 3 × S 3 isometries are absent in the asymptotically linear dilaton geometry. Nevertheless, these isometries are restored in the IR, by taking R y large while keeping ER y and P y R y fixed, see Eqs. (2.20)-(2.23). In this regime, the vertex operators of the gauged models will reduce to the AdS 3 × S 3 expressions in Eqs. (3.57b) and (3.57c).
Let us consider a generic linear combination of NS sector vertex operators,
e −ϕ c r ψ r V j,m−r V j ,m + d r ψ r V j,m V j ,m −r + c t λ t + c y λ y V j,m V j ,m e i(−Et+Pyy) , (4.15)
where the notation mirrors that of the AdS 3 × S 3 expressions in Eq. (3.58), in particular summation over r = +1, −1, 0 is implicit, with "0" corresponding to the "3" direction of the respective algebras. Of these eight degrees of freedom, two are removed by the conditions arising from the G and λ terms in the BRST charge, which respectively read
0 = mc 3 + (m − j)c + + (m + j)c − + m d 3 + (j + m )d + + (j − m )d − + c t E 2 + c y P y 2 ,(4.16)
and
0 = n 5 −c 3 + l 2 d 3 − l 3 c t + l 4 c y . (4.17)
This leaves six states, out of which two turn out to be BRST-exact. The first exact state comes, as usual, from the action of G on the tachyon operator (4.11), while the second one has no global AdS 3 counterpart and appears due to the action of λ on the same state. Their explicit expressions are
Φ G = e −ϕ 2 n 5 1 2 V j ,m (m − j + 1)ψ − V j,m+1 + (m + j − 1)ψ + V j,m−1 − 2mψ 3 V j,m + 2 n 5 1 2 V j,m (j + m + 1)χ − V j ,m +1 + (j − m + 1)χ + V j ,m −1 + 2m χ 3 V j ,m + 1 2 −Eλ t + P y λ y V j,m V j ,m e i(−E t+Pyy) ,(4.18)
and
Φ λ = e −ϕ ψ 3 + l 2 χ 3 + l 3 λ t + l 4 λ y V j,m V j ,m e i(−E t+Pyy) ,(4.19)
respectively. Such states are trivially BRST invariant since G and λ square to the Virasoro constraint (4.12) and the null condition (2.7), and the relevant term in their product is G(z) λ(0) ∼ λ(z) G(0) ∼ J(0)/z, whose action vanishes by means of the condition (4.13).
In the end, we are left with four physical vertex operators to add to the four from the T 4 directions to give the correct eight polarizations in the holomorphic sector in 9+1 dimensions.
We choose a basis for these four physical vertex operators such that, in the AdS 3 limit, they reduce to the basis of global AdS 3 vertex operators described around Eq. (3.58). We thus obtain
W ε = e −ϕ (ψV j ) j+ε,m V j ,m + c t ε λ t + c y ε λ y V j,m V j ,m e i(−E t+Pyy) , (4.20a) X ε = e −ϕ V j,m (χV j ) j +ε,m + d t ε λ t + d y ε λ y V j,m V j ,m e i(−E t+Pyy) , (4.20b)
where the SL(2,R) and SU(2) coefficients are those given in (3.58)-(3.59), while the novel ones are 8
c t ε = −c 3 ε n 5 P y l 4 E + l 3 P y , c y ε = c 3 ε n 5 E l 4 E + l 3 P y , (4.21) d t ε = d 3 ε n 5 l 2 P y l 4 E + l 3 P y , d y ε = −d 3 ε n 5 l 2 E l 4 E + l 3 P y . (4.22)
By construction, the resulting states are polarized transverse to the gauge directions. As anticipated, they are built out of a linear combination of terms of spin j and j + ε (j and j + ε). Moreover, at leading order in the large R y expansion, the coefficients in the t, y directions go to zero, since E, P y ∼ O(1/R y ), l 3,4 ∼ O(R y ), and l 2 ∼ O(1).
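The statement that these polarizations are transverse to the gauge directions can be illustrated with a small sympy sketch (ours). It assumes that the flattened Eq. (4.17) reads 0 = √n_5(−c³ + l_2 d³) − l_3 c^t + l_4 c^y, and that Eqs. (4.21)–(4.22) carry an overall factor √n_5, as written in the comments; the conclusion is insensitive to the relative normalization of c³_ε and d³_ε.

# Sympy sketch (assumptions: Eq. (4.17) reads 0 = sqrt(n5)*(-c3 + l2*d3) - l3*ct + l4*cy,
# and Eqs. (4.21)-(4.22) carry an overall sqrt(n5)): the W and X coefficients solve the
# fermionic null constraint identically.
import sympy as sp

n5, l2, l3, l4, E, Py, c3, d3 = sp.symbols('n5 l2 l3 l4 E P_y c3 d3')
den = l4*E + l3*Py

# W-type polarization: SL(2,R) fermions excited, d3 = 0, coefficients of Eq. (4.21)
ct, cy = -c3*sp.sqrt(n5)*Py/den, c3*sp.sqrt(n5)*E/den
print(sp.simplify(sp.sqrt(n5)*(-c3) - l3*ct + l4*cy))          # expect 0

# X-type polarization: SU(2) fermions excited, c3 = 0, coefficients of Eq. (4.22)
dt, dy = d3*sp.sqrt(n5)*l2*Py/den, -d3*sp.sqrt(n5)*l2*E/den
print(sp.simplify(sp.sqrt(n5)*(l2*d3) - l3*dt + l4*dy))        # expect 0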
The unflowed R sector
We now describe the physical states in the R sector of the null-gauged model. The computation turns out to be more involved than in the NS sector, since the spin fields necessarily involve all ε-chiralities. As a consequence, we will not find a situation akin to (4.20) in which a subset of coefficients are exactly those of the global AdS 3 × S 3 operators. However, we will again show that in the AdS 3 limit the vertex operators will reduce to their global AdS 3 × S 3 counterparts. We introduce AdS 3 × S 3 and R t × S 1 y × T 4 spin fields, 9
S ε 1 ε 2 ε 3 = e i 2 (ε 1 H 1 +ε 2 H 2 +ε 3 H 3 ) , S ε 6 ε 4 ε 5 = e i 2 (ε 6 H 6 +ε 4 H 4 +ε 5 H 5 ) . (4.23)
Recalling the definition of the AdS 3 chirality ε and the mutual locality / chiral GSO projection in (10+2) dimensions, (4.10), we substitute away ε 3 and ε 6 via ε 3 = εε 1 ε 2 , ε 6 = εε 4 ε 5 .
(4.24)
The H 4,5 exponentials are spectators under the action of Q, so the parameters ε 4 , ε 5 will label the vertex operators. For fixed ε 4 , ε 5 , we consider ε 6 to be controlled by ε through the second equation in (4.24), and we will form linear combinations of different values over ε 1 , ε 2 , ε.
We work with vertex operators in ghost pictures (q_ϕ, q_φ̃) = (−1/2, +1/2), for which the λ-constraint is non-trivial, while there is no need to worry about BRST-exact states. We thus make an ansatz for R sector vertices of the following form:
Y ε 4 ,ε 5 = e −(ϕ−φ)/2 ε 1 ,ε 2 ,ε F ε ε 1 ε 2 ε 4 ε 5 S ε 1 ε 2 ε 3 S ε 6 ε 4 ε 5 V j,m− ε 1 2 V j ,m − ε 2 2 e i(−E t+Py y) ,(4.25)
Note that the coefficients F^ε_{ε_1ε_2ε_4ε_5} are not determined by the representation theory of SL(2,R)×SU(2), since the states will not in general have definite spin.

^8 The coefficients c^{t,y} and d^{t,y} were reported in the Letter [1] with a slightly different notation, related by c^{t,y}_{there} = c^{t,y}_{ε,here}/c^3_ε, and likewise for d^{t,y}.
^9 The order of the spin fields in S̃_{ε_6ε_4ε_5} has been chosen for convenience in order to reduce clutter in computations involving cocycle factors.
The cT andcJ terms of the BRST operator Q (4.8) act as in the NS sector. Hence the unflowed, non-winding states of the R sector also satisfy both the Virasoro condition (4.12) and the bosonic null-gauge constraint (4.13).
Next, the e^{φ̃}λ term in Q leaves ε_{1,2,4,5} unchanged, so for this term we can treat ε_{1,2,4,5} as fixed and focus on the sum over ε = ±. The resulting constraints on F^±_{ε_1ε_2ε_4ε_5} form a two-dimensional homogeneous linear system, which is degenerate due to the null condition on the gauge parameters, Eq. (2.7). For each choice of ε_{1,2,4,5}, we have

(l_3 + ε_4ε_5 l_4) F^−_{ε_1ε_2ε_4ε_5} − i√n_5 (1 − ε_1ε_2 l_2) F^+_{ε_1ε_2ε_4ε_5} = 0 ,
i√n_5 (1 + ε_1ε_2 l_2) F^−_{ε_1ε_2ε_4ε_5} − (l_3 − ε_4ε_5 l_4) F^+_{ε_1ε_2ε_4ε_5} = 0 .   (4.26)

These constraints halve the degrees of freedom. When |l_2| = 1 (and so |l_3| = |l_4|), some of the F^ε_{ε_1ε_2ε_4ε_5} are set to zero. For a given ε_{1,2,4,5}, when neither of F^±_{ε_1ε_2ε_4ε_5} is set to zero, their ratio F^−_{ε_1ε_2ε_4ε_5}/F^+_{ε_1ε_2ε_4ε_5} becomes determined. So the 32 d.o.f. remaining after imposing GSO in (10+2) dimensions have now become 16, corresponding to ε_{1,2,4,5} in our parameterization.
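The degeneracy of this system can be made explicit with a short sympy sketch (ours). It assumes that the null condition Eq. (2.7) takes the form l_3² − l_4² = n_5(l_2² − 1), as suggested by the OPE normalizations of the gauged currents; with that assumption the determinant of the 2×2 system (4.26) vanishes identically.

# Sympy sketch (assumption: Eq. (2.7) reads l3**2 - l4**2 = n5*(l2**2 - 1)): the 2x2
# homogeneous system (4.26) for (F^-, F^+) is degenerate precisely on the null locus.
import sympy as sp

n5, l2, l3, l4, e12, e45 = sp.symbols('n5 l2 l3 l4 e12 e45')   # e12 = eps1*eps2, e45 = eps4*eps5

M = sp.Matrix([[l3 + e45*l4, -sp.I*sp.sqrt(n5)*(1 - e12*l2)],
               [sp.I*sp.sqrt(n5)*(1 + e12*l2), -(l3 - e45*l4)]])
det = sp.expand(M.det()).subs({e12**2: 1, e45**2: 1})
print(det)                                          # n5*l2**2 - n5 - l3**2 + l4**2
print(det.subs(l3**2, l4**2 + n5*(l2**2 - 1)))      # 0 once the null condition is imposed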
Let us pause to discuss how Eq. (4.26) behaves in the large R y limit. We have l 2 ∼ O(1) while l 3 + l 4 ∼ O(R y ) and l 3 − l 4 ∼ O(1/R y ), from (2.11)-(2.13). When ε 4 ε 5 = +1, we obtain F + ∼ O(1) and F − ∼ O(1/R y ), so at leading order in large R y we obtain a purely positive chirality operator. Similarly, when ε 4 ε 5 = −1, at leading order in large R y we obtain a purely negative chirality operator. So we obtain operators of definite AdS 3 × S 3 chirality ε, with ε 4 ε 5 = ε, exactly as in Section 3.2, see Eq. (3.74). As before, one of ε 4 or ε 5 remains unfixed, say ε 4 .
We now examine the action of e ϕ G on the R vertex operator ansatz (4.25). This will reduce the remaining 16 degrees of freedom to the correct 8 physical polarizations in the holomorphic sector. It leads to the following set of equations (we suppress the ε 4 , ε 5 subscripts on the RHS for ease of notation):
B ε ε 1 ε 2 ε 4 ε 5 ≡ m + ε 1 j − ε 1 2 F ε (−ε 1 )ε 2 + i ε 1 ε 2 j − ε 2 m + 1 2 F ε ε 1 (−ε 2 ) − (εm + ε 1 ε 2 m )F ε ε 1 ε 2 + i √ n 5 2 (ε 4 ε 5 P − ε E) F (−ε) ε 1 ε 2 = 0 . (4.27)
Comparing to the AdS 3 ×S 3 BRST condition, the only new term is the fourth and final one, proportional to F (−ε) ε 1 ε 2 , which has the effect of mixing the ε chiralities. The first three terms are unchanged from the AdS 3 × S 3 BRST condition, so the AdS 3 × S 3 limit of this condition is simply to drop the fourth term. Note that Eq. (4.26) implies that half of these equations are redundant, and allows us to decouple the F + from the F − coefficients. Moreover, by using the Virasoro constraint (4.12), the bosonic null-gauge condition (4.13), and the null constraint on the gauge parameters (2.7), one can show that for fixed ε 4 and ε 5 , actually only two equations are linearly independent. For generic values of quantum numbers, such that all denominators appearing below are nonzero, the linearly independent equations can be taken to be
F + −+ = −i j − m + 1 2 j + m − 1 2 F + +− + −j(j − 1) + j (j + 1) + m 2 − m 2 (j + m − 1 2 ) m − m + l 4 −ε 4 ε 5 l 3 2(l 2 −1) (ε 4 ε 5 E − P y ) F + ++ , F + −− = m − m + l 4 −ε 4 ε 5 l 3 2(l 2 −1) (ε 4 ε 5 E − P y ) j + m − 1 2 F + +− + i j + m + 1 2 j + m − 1 2 F + ++ .
(4.28)
Alternatively, the two linearly independent equations can generically be taken to be
F − −+ = −i j − m + 1 2 j + m − 1 2 F − +− + −j(j − 1) + j (j + 1) + m 2 − m 2 (j + m − 1 2 ) −m − m + n 5 (l 2 −1) 2(l 4 −ε 4 ε 5 l 3 ) (ε 4 ε 5 E + P y ) F − ++ , F − −− = −m − m + n 5 (l 2 −1) 2(l 4 −ε 4 ε 5 l 3 ) (ε 4 ε 5 E + P y ) j + m − 1 2 F − +− + i j + m + 1 2 j + m − 1 2 F − ++ . (4.29)
Let us pause again to check consistency with the AdS 3 × S 3 limit. Setting ε 4 ε 5 = ε and taking the large R y limit of Eqs. (4.28), we indeed find that a solution is given by setting (the AdS 3 × S 3 limit of) F ε ε 1 ε 2 to be equal to the values f ε ε 1 ε 2 specified in Eqs. (3.75)-(3.77). Similarly to AdS 3 × S 3 , Eqs. (4.28) are two equations for four unknowns, so for each ε 4 , ε 5 there is a two-parameter family of solutions, which we take to be parameterized by the values of F + +± . If working with Eqs. (4.29), we take the two-parameter family of solutions to be parameterized by the values of F − +± . Together with ε 4 , ε 5 , this gives 8 physical polarizations.
In Section 3.2, for AdS 3 × S 3 these unfixed coefficients were chosen such that the vertex operators transform appropriately under the action of the currents J ± and K ± . However, in the null-gauged worldsheet theory associated to the full asymptotically linear dilaton geometry this need not necessarily be the case.
In the cosets we fix these coefficients by requiring a reasonable IR limit. We treat the different ε chiralities separately. For ε = 1 we set the particular components F + +± equal to their values in the AdS limit, F + +± = f + +± . The rest of the coefficients are then obtained using Eqs. (4.28) and (4.26). Alternatively, for ε = −1 we set F − +± = f − +± and again solve for the remaining coefficients using Eqs. (4.29) and (4.26).
We now turn to the analysis of the spacetime supercharges preserved by the null gauging, following on from the initial discussion around Eq. (4.9) (see also [28]). The supercharge analysis corresponds to the limit of the Ramond vertex operator analysis in which we take j = j′ = E = P_y = 0, m = ε_1/2, m′ = ε_2/2, as can be seen by comparing Eqs. (4.9) and (4.25). In this limit, the center-of-mass wavefunction trivializes and we are left with integrated vertex operators involving only the spin fields. As before, we parameterize the ε_i according to (4.24); ε_4 and ε_5 are spectators that will label the supercharges; and we will sum over ε, ε_1, ε_2 as in (4.25). The J constraint (4.13) reduces to

ε_1 + ε_2 l_2 = 0   ⇒   ε_1 ε_2 = −l_2 ,

which admits solutions only when |l_2| = 1. Combining this analysis with the corresponding one in the antiholomorphic sector, we observe consistency with the passage below Eq. (2.24) describing which subset of the backgrounds are supersymmetric. In terms of the spacetime spectral flow parameters s, s̄ introduced in Eq. (2.27), we have l_2 = 2s + 1, r_2 = 2s̄ + 1. The circular supertube backgrounds of [63,64] have s = s̄ = 0 and preserve supersymmetry in both holomorphic and antiholomorphic sectors; the backgrounds of [50–52,54] have s̄ = 0, s ≠ 0 and so preserve supersymmetry only in the antiholomorphic sector; the general JMaRT backgrounds [53] have s and s̄ both nonzero, and preserve no supersymmetry.
Picture changing in the R sector
In order to compute two-point functions of operators Y in the Ramond sector of the gauged model, we need to define their picture-changed versions. Propagators will be non-vanishing only if the total ghost charges add up to −Q ϕ = −2 and −Qφ = 0. The picture-changing operators are given by P +1 ∼ e ϕ G andP +1 ∼ eφλ. One possible natural choice would be to compute the two-point function (superscripts denote (q ϕ , qφ) charges)
Y (− 3 2 ,− 1 2 ) (z) Y (− 1 2 ,+ 1 2 ) (w) .
(4.33)
However, it turns out that looking for an explicit expression for the state Y^{(−3/2,−1/2)} is not the simplest way to go. This is due to the fact that such a state is automatically BRST closed, so that it must be determined by the somewhat cumbersome procedure of removing all the BRST-exact contributions. To avoid this issue, one can distribute the ghost charges in a different way inside the correlator, and consider instead the equivalent two-point function

⟨ Y^{(−3/2,+1/2)}(z) Y^{(−1/2,−1/2)}(w) ⟩ ,

where Y^{(−1/2,−1/2)} is in the canonical ϕ-picture. Thus, although this forces us to compute two additional R-sector operators instead of only one, these are constrained by the γ̃λ and γG BRST constraints, respectively. The procedure is then similar to that employed above to construct Y^{(−1/2,+1/2)}. We thus make the Ansätze

Y^{(−3/2,+1/2),ε_4ε_5} = e^{−(3ϕ/2 − φ̃/2)} Σ_{ε,ε_1,ε_2} L^ε_{ε_1ε_2ε_4ε_5} S_{ε_1ε_2ε_3} S̃_{ε_6ε_4ε_5} V_{j,m−ε_1/2} V′_{j′,m′−ε_2/2} e^{i(−E t + P_y y)} ,
Y^{(−1/2,−1/2),ε_4ε_5} = e^{−(ϕ/2 + φ̃/2)} Σ_{ε,ε_1,ε_2} G^ε_{ε_1ε_2ε_4ε_5} S_{ε_1ε_2ε_3} S̃_{ε_6ε_4ε_5} V_{j,m−ε_1/2} V′_{j′,m′−ε_2/2} e^{i(−E t + P_y y)} ,
where again ε_3, ε_6 are substituted away using (4.24). These must satisfy the corresponding γ̃λ and γG BRST constraints, Eqs. (4.35) and (4.36). By solving the above constraints, all the coefficients L^ε and G^ε can be expressed explicitly in terms of the F^ε coefficients in Eq. (4.28).
In addition, one can explicitly check that in the AdS limit they correctly reproduce the expected behaviour. From the definition of the corresponding coset states Y (− 3 2 ,+ 1 2 ) and Y (− 1 2 ,− 1 2 ) , one might reasonably expect that they would reduce to the states Y (− 3 2 ) and Y (− 1 2 ) of Section 3.3.2 respectively. However, care is needed when comparing both chiralities and normalisations in the UV and IR. To explain this, let us consider for instance Y (− 1 2 ) A , whose ε-chirality is ε = +1. (Analogous comments hold for other operators.) First of all, recall that already in the case of AdS 3 × S 3 × T 4 , the picture-changing operator induces a change in chirality of the state. Indeed, in case "A" of the analysis in (3.75), the physical states in the "− 3 2 " picture have negative ε-chirality. In the full coset models, an analogous pattern occurs with the two picture-changing operators P +1 ,P +1 . The coset state Y (− 3 2 ,+ 1 2 ) that correctly reduces to Y (− 3 2 ),ε=−1 A in the AdS limit indeed has positive ε-chirality. This means that, in our case under study, the
coefficients L + ε 1 ε 2 reduce to − √ n 5 j+j f − ε 1 ε 2 . Similarly, the coset state Y (− 1 2 ,− 1 2 ) with negative ε-chirality reduces to the Y (− 1 2 ) A
state. However, in the latter case there is a normalisation factor (i kR y ) −1 to account for. This is removed by taking into account the normalisation of the picture-changing operator, which indeed contains a term which is dominant in the AdS limit, l 3 λ t + l 4 λ y ∼ (kR y )(λ t + λ y ).
We illustrate the example of the G − ++ coefficient, appearing in the coset state Y (− 1 2 ,− 1 2 ) . The argument holds analogously for all the other coefficients. The explicit expression of G − ++ in terms of positive coefficients F + ε 1 ε 2 is obtained by solving the constraints Eq. (4.35) and Eq. (4.36), without using Eq. (4.12) and Eq. (4.13). For generic quantum numbers such that the denominators below are non-zero, one finds
(l 3 + l 4 ) 2 i(E − P y )n 5 (1 + l 2 ) G − ++ (4.37) = (m + j − 1 2 )F + −+ + i(j − m + 1 2 )F + +− − 2(j−j +1)(j+j )(l 3 +l 4 ) (E−Py)n 5 (1+l 2 ) + (−1+l 2 ) (1+l 2 ) (m − m ) F + ++ −j(j − 1) + j (j + 1) + n 5 (m+l 2 m )(E−Py) l 3 +l 4 + n 5 n 5 (−1+l 2 2 )(E−Py) 2 4(l 3 +l 4 ) 2 ,
and thus for R_y ≫ 1 one has G^−_{++} ≃ (ikR_y)^{−1} f^+_{++} + O(R_y^{−2}), as claimed above. The expressions for the coefficients G, L are quite lengthy, and we leave the computation of correlators in the full coset model for future work. Nevertheless, it is easy to check that in all cases the coefficients reduce to the expected expressions when going into the IR regime. Consequently, we find that, to leading order in R_y, the coset two-point functions in the RR sector reproduce the m-basis expressions in Eq. (3.80), as they should. As will become clear in Section 5 below, this does not mean that the physics in the IR regime of the coset model is that of global AdS_3 × S^3 × T^4; the bosonic null gauge condition (4.13) will lead to substantially different correlators in the appropriately defined x-basis.
Flowed/winding sectors
We now briefly discuss the states with non-trivial spectral flow we will be interested in, that is, those corresponding to the description of the higher-weight chiral primaries described in Section 3.2.
A generic state with excitation numbers (1/2, 1/2) in the null-gauged worldsheet theory must satisfy the L_0 and L̄_0 Virasoro constraints
0 = j (j + 1) − j(j − 1) n 5 − mω + m ω + n 5 4 ω 2 − ω 2 − 1 4 E 2 − P 2 y,L , (4.38a) 0 = j (j + 1) − j(j − 1) n 5 −mω +m ω + n 5 4 ω 2 − ω 2 − 1 4 E 2 − P 2 y,R , (4.38b)
where (we reuse the notation P y,L , P y,R for the eigenvalues of the corresponding operators)
P y,L/R = n y R y ± ω y R y , (4.39)
with ω_y ∈ Z the winding on the y-circle. The level-matching L_0 − L̄_0 constraint thus reads

0 = ω(m̄ − m) + m′ω′ − m̄′ω̄′ + (n_5/4)(ω̄′² − ω′²) + n_y ω_y .   (4.40)

Regarding the gauge constraints, recall that, both in SL(2,R) and in SU(2), the different modes of the spectrally flowed operators are obtained by acting with the raising/lowering operators J^±_0 and K^±_0 on the flowed primary. Although this does not lead to operators that can be expressed in a simple way in the m-basis, the presence of these different modes is crucial in order to obtain the set of physical modes that satisfy the gauge constraints. Focusing on (lowest-weight) discrete states corresponding to operators of spacetime weight h in the chiral multiplets, this gives modes described by worldsheet operators with projections m_ω = J + (n_5/2)ω + n = h + n and m′_{ω′} = J′ + (n_5/2)ω′ − n′, with n, n′ ∈ N_0, and similarly in the antiholomorphic sector. The bosonic gauge constraints now read

0 = m_ω + l_2 m′_{ω′} + (l_3/2) E + (l_4/2) P_y ,    0 = m̄_ω̄ + r_2 m̄′_{ω̄′} + (r_3/2) E + (r_4/2) P_y .   (4.41)
As discussed below Eq. (4.14), this implies that J ± 0 and K ± 0 do not commute with the BRST charge. Consequently and importantly, only the subset of modes satisfying Eqs. (4.41) will be physical. The implications will be discussed at length in the following section.
Before concluding this section, let us review the fact that there is a residual discrete gauge symmetry in these models, which implies that operators related by shifts of the following form describe the same physical state [26], see also [29]. Parameterizing (l_4, r_4) through p and k as in (2.12), the symmetry is

δ( ω, ω′, ω̄′, E, n_y, ω_y ) = ( 1, −l_2, −r_2, l_3, −p, k ) .   (4.42)
In particular, one can trade a unit of SL(2,R) spectral flow for −k units of winding ω_y, together with corresponding shifts in ω′ and ω̄′. The energy also acquires a term linear in R_y, namely δE = −kR_y + O(1/R_y). The interpretation of the factor k relating ω and ω_y can be traced back, for instance, to the Z_k orbifold appearing in the IR, see Eq. (2.21). It reflects the fact that the CFT state associated with the background lives in the k-twisted sector of the D1D5 CFT. The operators discussed in this section do not exhaust the spectrum of the worldsheet model; for instance we have not discussed operators that do not satisfy ω_y ≡ 0 mod k, which were analyzed in [26]. However, the operators described above comprise a large set of light operators in spectral flowed sectors, in parallel to the analysis of global AdS_3 × S^3, which will be general enough for our purposes in the present work.
Novel heavy-light correlators from the worldsheet
In this section we describe the computation of two-point correlators in the null gauged models, corresponding to HLLH correlators of the holographic CFT. To do so, we take a set of physical coset operators derived in the previous section and flow them to the IR, in which the geometry is locally an orbifold of AdS 3 × S 3 . We develop a proposal to define coset operators in an appropriate x-basis corresponding to local operators of the holographic CFT. We then use this definition to compute a large set of HLLH correlators. We observe precise agreement between a subset of these and known results computed in supergravity and holographic CFT, and significantly extend these results.
Light states in the AdS 3 regime
We begin by describing in more detail the vertex operators of the null-gauged model in the AdS 3 limit. As discussed around Eq. (2.20), we send R y → ∞, keepingt = t/R y and y = y/R y fixed. After choosing the gauge τ = σ = 0, this leads to a geometry described by the six-dimensional metric (2.21), which is related to Z k -orbifolded AdS 3 × S 3 × T 4 by the large coordinate transformation Eq. (2.24).
We focus initially on light states with no winding or worldsheet spectral flow. As we have argued in the previous section, the different polarizations and the associated coefficients simply reduce to those described in Section 3.2 in the AdS 3 limit. Here we further describe what happens to their quantum numbers in the regime of interest. In general, the Virasoro condition (4.12) determines j via the solution
j = 1 2 + j + 1 2 2 + n 5 4 P 2 y − E 2 , (5.1)
where we have fixed the sign in order to have j in the range (3.16). As R y → ∞, we hold fixed the rescaled energy and momentum E = ER y , n y = P y R y .
(5.2)
Hence, the second term inside the square root in (5.1) is O(1/R_y²), and at large R_y the solution becomes j = j′ + 1 + O(1/R_y²), which to leading order is the usual AdS_3 × S^3 relation. The O(1/R_y²) corrections to j are non-zero when |E| ≠ |n_y|, which is generically the case. To see this, note that at large R_y the gauging parameters associated to the t and y directions become
l 3 = r 3 = l 4 = −r 4 = −kR y + O (1/R y ) . (5.3)
On the other hand, those associated to the S 3 angular directions do not scale with R y , and remain l 2 = 2s + 1 and r 2 = 2s + 1. Hence, at leading order at large R y , Eqs. (4.13) take the form
0 = m + (2s + 1) m − k 2 (E + n y ) , 0 =m + (2s + 1)m − k 2 (E − n y ) , (5.4)
which fix E and n_y in terms of m, m′, m̄, m̄′, such that indeed generically E ≠ ±n_y. Although for simplicity we restricted to light states with no winding or worldsheet spectral flow in Eqs. (5.1) and (5.4), the present discussion and the computations in the rest of this section are analogous for winding states after replacing the projections m → m_ω, etc, where m_ω was defined above Eq. (4.41). Our results will be valid for the full set of chiral primaries that can be described within the usual AdS_3 × S^3 × T^4 worldsheet theory, as well as their descendants under the global part of the chiral algebra, that fill out the short multiplet. We shall comment further on states with non-trivial winding and/or worldsheet spectral flow in due course.
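The statements above about the large-R_y behaviour of j can be checked with a short sympy expansion (ours), using the rescaled charges of Eq. (5.2); the symbol curlyE below stands for the rescaled energy E R_y.

# Sympy check of the expansion of Eq. (5.1) at large R_y, with E = curlyE/R_y and
# P_y = n_y/R_y held fixed as in Eq. (5.2). We expand in eps = 1/R_y; the O(eps^2)
# correction is proportional to (n_y**2 - curlyE**2), so it vanishes only when
# |curlyE| = |n_y|.
import sympy as sp

jprime, n5, curlyE, ny = sp.symbols('jprime n5 curlyE n_y', positive=True)
eps = sp.symbols('epsilon', positive=True)          # eps = 1/R_y
j = sp.Rational(1, 2) + sp.sqrt((jprime + sp.Rational(1, 2))**2
                                + sp.Rational(1, 4)*n5*(ny**2 - curlyE**2)*eps**2)
expansion = sp.series(j, eps, 0, 3).removeO()
print(sp.simplify(expansion))
# j = jprime + 1 + n5*(n_y**2 - curlyE**2)/(8*(jprime + 1/2)) * eps**2 + ...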
Identifying the spacetime modes
Let us discuss the identification of the spacetime modes. We shall work in a gauge in which the upstairs SL(2,R) time τ and angular direction σ are fixed. Then, importantly, the asymptotic boundary of the physical downstairs AdS 3 is parameterized by t/R y and y/R y , at a fixed point on the S 3 . We therefore define
m y = 1 2 (E + n y ) ,m y = 1 2 (E − n y ) ,(5.5)
and we interpret these as the asymptotic mode labels. We will see that this gives rise to a rich set of correlators that agree with and extend previous results. Let us first consider k = 1, and continue to focus mainly on the holomorphic sector. Given a holographic CFT chiral primary with spacetime weight h and definite (left) R-charge m′, we wish to construct the dual worldsheet operator by summing over the corresponding modes. As reviewed in Section 3, this is done by passing to the x-basis, leading to the definition in Eq. (5.6), where we have temporarily included the antiholomorphic dependence to emphasize the coupling between the left- and right-moving sectors due to the null gauge constraints, Eq. (4.13). The normalization factors of k have been introduced for later convenience and will be discussed below Eq. (6.16). We emphasize that, from the point of view of the worldsheet theory, x is an auxiliary complex variable, while t̃ and ỹ are scalar fields.
Note that combining the bosonic null constraint (5.4) and the definition of the modes (5.5), we obtain m y = m + (2s + 1)m . This relation parallels the supergravity spectral flow large gauge transformation (2.24), and we will make use of this observation when discussing the relation to the holographic CFT in due course.
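For concreteness, the following small helper (ours; the function name and the example values are purely illustrative) implements the identifications (5.4)–(5.5) and their k > 1 analogue (5.7).

# Illustration of Eqs. (5.4), (5.5) and (5.7): given the worldsheet projections and the
# background parameters (s, sbar, k), solve the null constraints for the boundary modes.
from fractions import Fraction

def boundary_modes(m, mp, mbar, mbarp, s, sbar, k):
    # Eq. (5.4): (E + n_y)/2 = [m + (2s+1) m']/k ,  (E - n_y)/2 = [mbar + (2 sbar+1) mbar']/k
    my    = Fraction(m + (2*s + 1)*mp) / k
    mbary = Fraction(mbar + (2*sbar + 1)*mbarp) / k
    return my, mbary, my + mbary, my - mbary          # m_y, mbar_y, E, n_y  (cf. Eq. (5.5))

# h = 1/2 operator used in the first example below: m = 1/2 + n, m' = -1/2 (and barred
# analogues), with sbar = 0.
half = Fraction(1, 2)
for n, nbar in [(0, 0), (3, 1)]:
    print(boundary_modes(half + n, -half, half + nbar, -half, s=2, sbar=0, k=3))
# Reproduces m_y = (n - s)/k and mbar_y = nbar/k, cf. Eq. (5.11); n_y is an integer only
# when nbar - n is congruent to a mod k, as in Eq. (5.14).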
For k > 1 we follow the same logic, and make the same definition (5.6). This time however, combining Eqs. (5.4) and (5.5) gives

m_y = (1/k) [ m + (2s + 1) m′ ] ,    m̄_y = (1/k) [ m̄ + (2s̄ + 1) m̄′ ] ,   (5.7)

so that the modes are generically fractional. We shall shortly see that this gives rise to an important technical complication relating the holomorphic and antiholomorphic sectors. Before computing our first example of a HLLH correlator, let us briefly return to operators with non-zero spectral flow and/or winding charge. In Section 4.4 we analyzed a set of coset vertex operators in sectors with non-zero worldsheet spectral flow, corresponding to chiral primaries of higher twists, similar to those in global AdS_3 × S^3. In that section we worked with ω_y = 0, and reviewed that the large gauge spectral flow transformation (4.42) relates these to operators with ω_y ∈ kZ.
When ω y = 0, in general one should be careful both when examining whether the states survive the AdS 3 limit, and also when defining the AdS 3 energy E and angular momentum n y , since fundamental string y-winding charge can be exchanged for background flux [26]. However, for operators which have ω y ∈ kZ, the situation is straightforward: we shall simply use gauge spectral flow to always work in a frame in which ω y = 0, and then use the definitions of E and n y in (5.2) and the modes m y ,m y in (5.5).
The discussion becomes more complicated when considering global SL(2,R)×SU(2) descendants of the spectrally flowed primaries. The isomorphism between the affine modules gives a simple identification for the highest/lowest weight states, but this structure becomes more complicated for the rest of the multiplet. Indeed, global descendants of the spectrally flowed affine primary state are identified with affine/Virasoro descendants of the corresponding unflowed equivalent state with non-trivial y-winding, such that one needs to include string oscillator excitations. The situation is similar to what happens for the usual series identification V^{ω=0}_{j,j} ∼ V^{ω=1}_{k/2−j, j−k/2} in bosonic SL(2,R), where, for instance, one has V^{ω=0}_{j,m} ∼ (J^+_0)^{m−j} V^{ω=0}_{j,j} ∼ (J^+_{−1})^{m−j} V^{ω=1}_{k/2−j, j−k/2}, see [28]. We leave a more detailed exploration of these features in the coset models for future work. In the remainder of this paper we will work with operators that have ω_y = 0.
HLLH correlators: first example
We now compute a first explicit example of a worldsheet two-point function in the cosets corresponding to the heavy backgrounds under consideration. For this purpose, we focus on a particular light operator probing the backgrounds withs = 0 (hence r 2 = 1), but general s. These describe supersymmetric spectral flowed supertubes [50][51][52][53][54]. We shall demonstrate that this worldsheet correlator agrees with both the supergravity and symmetric product orbifold CFT HLLH four-point functions computed in [30]. We will also significantly extend beyond the set of correlators computed in [30]. The light operator in this first example is a massless RR operator of Y [A] type with h = m = 1/2 and hence j = 1, see Eq. (3.81). We shall denote this operator by Y 1 2 . Together with the analogous antiholomorphic part, this vertex operator is dual to a particular (h,h) = ( 1 2 , 1 2 ) chiral primary of the HCFT denoted by O ++ , which we introduced in Eq. (3.2). In the six-dimensional supergravity arising from reduction on T 4 , this corresponds to a particular combination of fluctuations of a scalar and anti-self-dual two-form potential, in a tensor multiplet that is not turned on in the backgrounds we consider. In type IIB supergravity, in the present NS5-F1-P duality frame, these fields correspond to a particular combination of supergravity fluctuations of the RR axion and certain components of the RR two-form and four-form potentials. (In the D1-D5-P frame, the fluctuations are of the RR axion and certain components of the RR four-form and NS-NS two-form potentials). The light operator corresponds to a particular scalar spherical harmonic of these fields [13,14,30].
In the large R_y limit, we have shown that the Y operators simply reduce to their AdS_3 × S^3 cousins of Section 3.2, times the additional exponentials in t and y. These exponentials give trivial contributions to the two-point functions of the Y operators. To illustrate this more explicitly and also more generally, we introduce the following notation: O_{h,m′} will denote a generic massless worldsheet vertex operator in the AdS limit of the null-gauged model with spacetime weight h and spacetime R-charge m′. When m′ = h, we shall suppress the label m′ and write O_h. The corresponding holographic CFT chiral primaries will be denoted by O_h.
s,s, k|O h 1 ,m 1 (x 1 , z 1 )O h 2 ,m 2 (x 2 , z 2 ) |s,s, k ≡ 1 k 2h 1 +2h 2 m y,i ,m y,i i=1,2 x m y,i −h i ixm y,i −h i i lim Ry→∞ V 1 (z 1 )V 2 (z 2 ) ,(5.8)
where V denotes a generic m-basis massless vertex operator of the full coset model. The spacetime correlator corresponds to the worldsheet-integrated version of (5.8).
As discussed below (3.94), the normalization of the vertex operators is chosen so that the overall factors coming from the worldsheet integration procedure cancel out. After setting h 1 = h 2 = h, x 1 = 1 and x 2 = x, computing the free-field correlators, and imposing the various charge conservations, the integrated correlator becomes
s,s, k| O h,m (1)O † h,m (x) |s,s, k = 1 k 4h my,my x my−hxmy−h lim Ry→∞ V 1 V 2 , (5.9)
where V 1 V 2 stands for the m-basis two-point function with the z-dependence stripped out, c.f. Eqs. (3.63), (3.80). Note that on the right-hand side, the sum over m y involves the R-charge and other quantum numbers of the operator located at x. In turn, the remaining correlator is particularly simple in our context. As argued around Eq. (3.80a), the m-basis two-point functions of worldsheet chiral primary operators reduce to the Gamma functions expression in the bulk term of (3.24) with the replacement j → J = h. These are simply the coefficients obtained by Mellin-transforming the usual propagator |1 − x| −4h , which further all become equal to one when h = 1 2 . Thus, for Y 1 2 in thes = 0 backgrounds, we obtain
s, k| Y 1 2 (1)Y † 1 2 (x) |s, k = 1 k 2 my,my x my− 1 2xm y − 1 2 . (5.10)
To be fully explicit, we take the operator at x_2 = x to be a discrete series state D^+_j corresponding to an anti-chiral primary which, having set J = h = h̄, has m = h + n, m̄ = h + n̄ and m′ = m̄′ = −h, where n, n̄ are non-negative integers. For the supersymmetric backgrounds with s̄ = 0 and setting h = 1/2, the relation (5.7) becomes
m y = n − s k ,m y =n k . (5.11)
Hence, the correlator takes the form
s, k| Y 1 2 (1)Y † 1 2 (x) |s, k = 1 k 2 n,n x n−s k − 1 2xn k − 1 2 ,(5.12)
where we have denoted the sum with a prime because the range of summation over n,n is constrained. We now determine this constraint. Subtracting the two equations in (5.11) and comparing with (5.5), we obtain m y −m y = 1 k (n −n − s) = n y ⇒ n −n = kn y + s .
(5.13)
Thus the allowed values of n − n̄ are constrained by n_y ∈ Z. To see this in detail, let us write ŝ ≡ s mod k, with 0 ≤ ŝ < k. For convenience here and later, we define a ≡ k − ŝ, so that s = kp − a for some p ∈ N, with 1 ≤ a ≤ k. Then the sum in (5.12) is restricted to be over non-negative integers n, n̄ satisfying

n̄ − n ≡ a mod k .   (5.14)
We now demonstrate that the correlator (5.12), (5.14) agrees precisely with the supergravity and orbifold CFT expressions derived in [30]. In the holographic CFT, we have a HLLH four-point function ⟨ O_H(x_3) O_L(x_1) O†_L(x_2) O†_H(x_4) ⟩. Using Möbius symmetry we set x_3 = 0 and x_4 → ∞, in which case the heavy operators are interpreted as in/out states which we similarly denote as |s, k⟩, such that the four-point function becomes a two-point function in the heavy background. We further set x_1 = 1, while x_2 = x parametrizes the usual cross-ratio. Then the supergravity result of [30, Eq. (4.25)] is
⟨s, k| O_{1/2}(1) O†_{1/2}(x) |s, k⟩ = ( x^{(ŝ−s)/k} / ( |x| |1 − x|² ) ) [ 1 − |x|^{2(1−ŝ/k)} + x̄ ( |x|^{−2ŝ/k} − 1 ) ] / ( 1 − |x|^{2/k} ) ,   (5.15)
where the overall normalization of the supergravity amplitude was not fixed in [30]. Eq. (5.15) was further shown to coincide with the corresponding symmetric orbifold CFT calculation for the particular cases ŝ = 0 and ŝ = k − 1.
To demonstrate agreement between (5.12) and (5.15), it is easier to work from (5.15) towards our expression (5.12). We start by rewriting (5.15) in terms of sums akin to those involved in the definition of the x-basis, i.e. the mode expansion of local operators in spacetime as seen from the worldsheet theory. Recalling thatŝ = k − a, the correlator becomes
⟨s, k| O_{1/2}(1) O†_{1/2}(x) |s, k⟩ = ( x^{1−p} / ( |x| |1 − x|² ) ) [ 1 − |x|^{2a/k} + x̄ ( |x|^{2a/k−2} − 1 ) ] / ( 1 − |x|^{2/k} )
   = ( x^{−p} / |x| ) [ (1/|1 − x|²) (1 − |x|²)/(1 − |x|^{2/k}) − (1/(1 − x̄)) (1 − |x|^{2a/k})/(1 − |x|^{2/k}) ] .   (5.16)
Assuming |x| < 1, the RHS can then be expressed as double sums of the form appearing in Eqs. (5.17)–(5.19). Indeed, for a = k we simply parametrize n = kn′ + δ and n̄ = kn̄′ + δ, which enforces the mod k condition (5.19), so that the sum gives the first term in (5.17). On the other hand, for a < k we can take n = kn′ + δ − a and n̄ = kn̄′ + δ, so that the factors of a coming from s and n cancel each other out in the exponent. However, in this case we need to explicitly subtract the contribution of all pairs (n̄′, δ) which lead to n < 0, thus again giving (5.17). Therefore we observe agreement between (5.12), (5.14) and (5.15), up to the overall normalization that was not fixed in the supergravity calculation of [30]. Here and throughout the paper, we shall not keep track of overall normalization factors. For ŝ = 0 and ŝ = k − 1, since Eq. (5.15) matches the corresponding symmetric orbifold CFT correlator, we have demonstrated an explicit match between the worldsheet and symmetric orbifold CFT correlators. This is striking, as it is an agreement across moduli space for a correlator that a priori is not covered by an existing non-renormalization theorem.
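This resummation can also be verified numerically. The sketch below (ours, with truncated sums) compares the restricted sum (5.12), (5.14) with the closed form (5.16); the ratio should be an x-independent constant, consistent with the unfixed overall normalization of the supergravity result.

# Numerical cross-check (truncated sums): the restricted sum of Eq. (5.12) subject to
# Eq. (5.14), versus the closed form (5.16), for sample values of k, s and |x| < 1.
def restricted_sum(x, s, k, nmax=400):
    a = k - (s % k) if s % k else k              # a in {1,...,k}, with s = k*p - a
    xb, tot = x.conjugate(), 0j
    for n in range(nmax):
        for nbar in range((n + a) % k, nmax, k):  # nbar - n = a mod k, Eq. (5.14)
            tot += x**((n - s)/k - 0.5) * xb**(nbar/k - 0.5)
    return tot

def closed_form(x, s, k):
    a = k - (s % k) if s % k else k
    p = (s + a)//k
    xb, ax = x.conjugate(), abs(x)
    bracket = ((1 - ax**2)/abs(1 - x)**2 - (1 - ax**(2*a/k))/(1 - xb)) / (1 - ax**(2/k))
    return x**(-p)/ax * bracket

for x in (0.3 + 0.4j, -0.45 + 0.2j):
    print(restricted_sum(x, s=5, k=3) / closed_form(x, s=5, k=3))
# Both ratios agree (an x-independent constant, up to truncation error).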
This agreement is almost certainly due to the special nature of the heavy states we consider. Indeed, let us compare the worldsheet x-basis operator Eq. (5.6) with the discussion of holographic CFT spectral flow in [59, App. A]. Spectral flow in the holographic CFT is an automorphism of the small (4, 4) superconformal algebra, that is a useful tool to relate different states and operators. For instance, the heavy backgrounds we consider are related by fractional spectral flow to the k-orbifolded NSNS vacuum, as discussed around Eq. (2.27).
Given a symmetric orbifold CFT correlator, one can perform spectral flow on both the operators and the background states. The value of the correlator is invariant under this operation. One can use this to map the correlator in one of our heavy backgrounds to a correlator in the k-orbifolded NSNS vacuum. Of course, for k = 1, after undoing the spectral flows one obtains a vacuum correlator.
Taking for simplicity k = 1 and s̄ = 0, the transformation of a chiral primary operator under this operation is [59, App. A]

O_h(x) → x^{(2s+1)m′} O_h(x) .   (5.20)
The exponent of x directly parallels the x factors appearing in Eq. (5.6). This observation generalizes straightforwardly tos = 0 and to k > 1, whereupon operators have fractional modes taking values in Z/k. We will comment further on the relation between worldsheet and symmetric product orbifold CFT correlators in due course.
Non-BPS HLLH correlators for h = 1/2
The correlator presented in the previous subsection can be readily generalized to compute a set of novel HLLH correlators involving the same light operators, but probing the more general class of non-supersymmetric backgrounds given by the JMaRT solutions, in which both spacetime (fractional) spectral flow parameters s ands are non-trivial. Note that the parameters s,s, k defining the background must satisfy s(s + 1) −s(s + 1) ∈ k Z, from combining Eqs. (2.15) and (2.27). The same steps as described in the previous subsection lead directly to the following generalization of Eq. (5.12):
s,s, k| Y 1 2 (1)Y † 1 2 (x) |s,s, k = 1 k 2 n,n x n−s k − 1 2xn −s k − 1 2 . (5.21)
To make precise the restricted summation, analogously to s = kp − a we write s̄ = kp̄ − ā, with 1 ≤ ā ≤ k. We parametrize n = kn′ + δ − a and n̄ = kn̄′ + δ − ā in order to satisfy the condition coming from the subtracted gauge constraint generalizing Eq. (5.14), namely

n̄ − n ≡ (a − ā) mod k .   (5.22)

Carrying out the sums as in the previous subsection, we obtain

⟨s, s̄, k| Y_{1/2}(1) Y†_{1/2}(x, x̄) |s, s̄, k⟩ = (1/k²) ( x^{−p} x̄^{−p̄} / |x| ) ×   (5.23)
   [ (1/|1 − x|²) (1 − |x|²)/(1 − |x|^{2/k}) − (1/(1 − x̄)) (1 − |x|^{2a/k})/(1 − |x|^{2/k})
     − (1/(1 − x)) (1 − |x|^{2ā/k})/(1 − |x|^{2/k}) + (1 − |x|^{2b/k})/(1 − |x|^{2/k}) ] ,

with b ≡ min(a, ā).
As before, the second and third terms remove contributions for which either n orn become negative, while the fourth one compensates for the over-counting of cases in which both n andn are negative.
At first sight, Eq. (5.23) may seem to depend on the values of a andā separately, in apparent contradiction with the fact that, as is implied by (5.22), only their difference matters. However, the RHS of Eq. (5.23) can be rewritten as
(1/k²) ( x^{−s/k} x̄^{−s̄/k} / ( |x| |1 − x|² ) ) ( 1/(1 − |x|^{2/k}) ) (x/x̄)^{(ā−a)/2k} ×   (5.24)
   [ (1 − x) |x|^{(a−ā)/k} + (1 − x̄) |x|^{−(a−ā)/k} − |1 − x|² |x|^{−|a−ā|/k} ] ,
which explicitly depends only on the orbifold parameter k, the spectral flow parameters s and s̄, and the difference a − ā ≡ s̄ − s mod k, as expected. Note that (5.24) is symmetric under the simultaneous replacements x ↔ x̄ and s ↔ s̄.
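As an independent check (ours, with truncated sums), one can compare the restricted sum (5.21)–(5.22) with the four-term form (5.23), taking b = min(a, ā), and also test numerically the x ↔ x̄, s ↔ s̄ symmetry just noted.

# Numerical check (truncated sums): the restricted sum (5.21)-(5.22) versus the four-term
# closed form (5.23) with b = min(a, abar), plus a spot-check of the x <-> xbar, s <-> sbar
# symmetry of the correlator.
def jmart_sum(x, s, sbar, k, nmax=400):
    a, abar = [k - (q % k) if q % k else k for q in (s, sbar)]
    xb, tot = x.conjugate(), 0j
    for n in range(nmax):
        for nbar in range((n + a - abar) % k, nmax, k):     # nbar - n = (a - abar) mod k
            tot += x**((n - s)/k - 0.5) * xb**((nbar - sbar)/k - 0.5)
    return tot / k**2

def jmart_closed(x, s, sbar, k):
    a, abar = [k - (q % k) if q % k else k for q in (s, sbar)]
    p, pbar, b = (s + a)//k, (sbar + abar)//k, min(a, abar)
    xb, ax = x.conjugate(), abs(x)
    q2k = 1 - ax**(2/k)
    bracket = ((1 - ax**2)/abs(1 - x)**2 - (1 - ax**(2*a/k))/(1 - xb)
               - (1 - ax**(2*abar/k))/(1 - x) + (1 - ax**(2*b/k)))/q2k
    return x**(-p)*xb**(-pbar)/(k**2*ax)*bracket

x, s, sbar, k = 0.35 - 0.25j, 4, 6, 3
print(jmart_sum(x, s, sbar, k), jmart_closed(x, s, sbar, k))          # should agree
print(jmart_closed(x, s, sbar, k), jmart_closed(x.conjugate(), sbar, s, k))   # symmetry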
The worldsheet correlators (5.23)-(5.24) are one of the main results of this paper. Unlike thes = 0 supersymmetric example of the previous subsection, generically the corresponding supergravity or holographic CFT correlators have not been computed in the literature.
Since the backgrounds are non-supersymmetric when s ands are both non-zero, again a priori there is no obvious reason to expect the correlators to be protected across moduli space.
However, better-than-expected agreement between supergravity and holographic CFT has already been observed for a closely related observable describing the analog of the Hawking radiation process [55,58,60]; we shall comment further on this in due course.
Moreover, for the particular cases where s ands are congruent to either 0 or k − 1 mod k, the holographic CFT correlator follows straightforwardly from the techniques in App. A of [30], providing another exact match even for the non-supersymmetric backgrounds. We shall generalize this further in the next section.
More general heavy-light correlators
In this section we present HLLH correlators for generic chiral primaries of conformal weights h > 1/2. We then progress to describe higher-point heavy-light correlators. As an application, we compute the analogue of the Hawking radiation process from the backgrounds under consideration. We also compute a five-point HLLLH correlator of the symmetric product orbifold CFT, and demonstrate precise agreement with the corresponding worldsheet correlator.
HLLH correlators for general h
We now consider HLLH correlators where the light operators are chiral primaries with h ≥ 1. For these correlators the m-basis SL(2, R) two-point functions do not trivialize, and the resulting sums become more complicated.
By following the method outlined in the previous sections, the worldsheet computation leads to restricted sums of the form
\[
\frac{1}{k^{4h}} \sum_{n,\bar n} x^{\frac{n-2hs}{k}-h}\; \bar x^{\frac{\bar n-2h\bar s}{k}-h}\; \frac{\Gamma(2h+n)\,\Gamma(2h+\bar n)}{\Gamma(2h)^2\,\Gamma(n+1)\,\Gamma(\bar n+1)} \,. \tag{6.1}
\]
For $k = 1$, there are no restrictions on the allowed values of the mode numbers $n$ and $\bar n$, and so the sum can be performed straightforwardly to obtain
\[
\langle s,\bar s, k=1|\, O_h(1)\, O_h^{\dagger}(x,\bar x)\, |s,\bar s, k=1\rangle \;=\; \frac{x^{-2hs}\,\bar x^{-2h\bar s}}{|1-x|^{4h}} \,. \tag{6.2}
\]
This expression agrees with the corresponding HLLH correlator of the symmetric product orbifold CFT, as a direct consequence of the discussion around Eq. (5.20).
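A minimal numerical check of the resummation behind Eq. (6.2) is straightforward: for $k=1$ the sum over each chirality is just the binomial series. The values of $h$ and $x$ in the sketch below are illustrative.

```python
# Minimal numerical check of the binomial resummation behind Eq. (6.2) at k = 1:
# sum_n Gamma(2h+n)/(Gamma(2h) n!) x^n = (1-x)^(-2h). Values of h and x are illustrative.
from math import lgamma, exp, log

def partial_sum(x, h, N=800):
    return sum(exp(lgamma(2*h + n) - lgamma(2*h) - lgamma(n + 1) + n*log(x)) for n in range(N))

for h, x in [(0.5, 0.3), (1.5, 0.5), (2.0, 0.2)]:
    print(h, x, partial_sum(x, h), (1 - x)**(-2*h))   # each pair should agree
```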
On the other hand, for $k > 1$ the sum becomes more difficult to carry out explicitly, since $n$ and $\bar n$, which appear in the arguments of the Gamma functions, must satisfy the constraint
\[
\bar n - n \;\equiv\; 2h\,(\bar s - s) \!\!\mod k \,, \tag{6.3}
\]
which is the direct generalization of Eq. (5.22). We shall first briefly describe a method to construct these correlators iteratively, starting from the $h = 1/2$ case obtained above, then present an improved method.
Our iterative construction operates by expressing the additional coefficients in the sum in Eq. (6.1) in terms of differential operators acting on results for lower values of $h$. Let us illustrate how this works for the simplest non-trivial case $h = 1$: the coefficient $\Gamma(2+n)/\Gamma(n+1) = n+1$ is generated by acting with the first-order operator $(k x\partial_x + 1)$ on $x^{n/k}$, and similarly for the antiholomorphic sector. Hence, the differential operators act on a sum similar to the one analyzed in the previous section. For the general case, the procedure iterates. We redefine $a$ and $\bar a$ to be generalizations of the $a$ and $\bar a$ used in the $h = 1/2$ correlators (see above Eqs. (5.14) and (5.22)), where we replace $s \to 2hs$, $\bar s \to 2h\bar s$, such that $a - \bar a \equiv 2h(\bar s - s) \!\mod k$. Then we obtain
\[
\langle s,\bar s, k|\, O_h(1)\, O_h^{\dagger}(x,\bar x)\, |s,\bar s, k\rangle \;=\; \frac{1}{k^{4h}}\, \frac{x^{-\frac{2hs}{k}}\,\bar x^{-\frac{2h\bar s}{k}}}{|x|^{2h}}\, \frac{\mathcal{D}_{h,k}\,\bar{\mathcal{D}}_{h,k}}{\Gamma(2h)^2} \sum_{n,\bar n} x^{\frac{n}{k}}\,\bar x^{\frac{\bar n}{k}} \,, \tag{6.5}
\]
where we have introduced differential operators of order $2h-1$ defined as
\[
\mathcal{D}_{h,k} \;\equiv\; \left(k x \partial_x + 2h - 1\right)\cdots\left(k x \partial_x + 1\right) , \tag{6.6}
\]
and where
\[
\sum_{n,\bar n} x^{\frac{n}{k}}\,\bar x^{\frac{\bar n}{k}} \;=\; \frac{(1-x)\,|x|^{(a-\bar a)/k} + (1-\bar x)\,|x|^{-(a-\bar a)/k} - |1-x|^2\,|x|^{-|a-\bar a|/k}}{|1-x|^2\left(1-|x|^{2/k}\right)} \left(\frac{x}{\bar x}\right)^{\!(\bar a-a)/2k} . \tag{6.7}
\]
Although it leads to correct results, the procedure outlined above quickly becomes cumbersome, and leads to seemingly complicated expressions for higher values of $h$. In addition, it does not appear to give any insight into whether the results are likely to match with computations in the symmetric product orbifold CFT. However, we can improve on both these points with a different method. We now describe this method by first rederiving the $h = 1/2$ correlators of the previous section, and then generalizing the improved method to arbitrary values of $h$, and also to higher-point heavy-light correlators.
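As a quick sanity check of the iterative construction, one can verify symbolically that $\mathcal{D}_{h,k}$ acting on $x^{n/k}$ reproduces the Gamma-function coefficients of Eq. (6.1). The sympy sketch below does this for illustrative integer values of $h$, $k$, $n$ (integer $h$ so that the operator order $2h-1$ is an integer).

```python
# Check that D_{h,k} of Eq. (6.6) generates the coefficients of Eq. (6.1):
# D_{h,k} x^(n/k) = [Gamma(2h+n)/Gamma(n+1)] x^(n/k). Values of h, k, n are illustrative.
import sympy as sp

x = sp.symbols('x', positive=True)

def D(expr, h, k):
    for j in range(1, 2*h):                     # factors (k x d/dx + j), j = 1, ..., 2h-1
        expr = k*x*sp.diff(expr, x) + j*expr
    return sp.expand(expr)

n, k = 5, 3
for h in (1, 2):
    lhs = D(x**sp.Rational(n, k), h, k)
    rhs = sp.gamma(2*h + n)/sp.gamma(n + 1) * x**sp.Rational(n, k)
    print(h, sp.simplify(lhs - rhs))            # expected: 0
```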
Let us thus re-examine the general expression Eq. (6.1), and consider the case in which $a - \bar a = 0$ for simplicity. We can take into account the restriction on the allowed values of $n$ and $\bar n$ by considering an unrestricted sum over arbitrary non-negative integers, making use of a "Kronecker comb". In other words, we impose that $n - \bar n \equiv 0 \!\mod k$ by including extra coefficients of the form
\[
\delta^{(k)}_{n-\bar n} \;\equiv\; \sum_{\ell\in\mathbb{Z}} \delta_{n-\bar n,\,\ell k} \;=\; \frac{1}{k} \sum_{r=0}^{k-1} e^{2\pi i\, r\, (n-\bar n)/k} \,, \tag{6.8}
\]
where the final equality is obtained by Fourier transformation, and represents a simple form of the discrete Poisson summation formula. The RHS of Eq. (6.8) is interesting, because the exponentials can be absorbed into terms involving powers of $x$ and $\bar x$. Explicitly, we can rewrite the expression (6.1) with $a - \bar a = 0$ as
\[
\frac{1}{k^{4h+1}}\, \frac{x^{-\frac{2hs}{k}}\,\bar x^{-\frac{2h\bar s}{k}}}{|x|^{2h}} \sum_{r=0}^{k-1}\, \sum_{n,\bar n}\, u_r^{\,n}\, \bar u_r^{\,\bar n}\; \frac{\Gamma(2h+n)\,\Gamma(2h+\bar n)}{\Gamma(2h)^2\,\Gamma(n+1)\,\Gamma(\bar n+1)} \,, \tag{6.9}
\]
where $u_r$, $\bar u_r$ are the $k$-th roots of $x$ and $\bar x$, respectively; writing
\[
x^{\frac1k} \equiv |x|^{\frac1k}\, e^{\frac{2\pi i\, \mathrm{Arg}(x)}{k}} \,, \qquad u_r \equiv x^{\frac1k}\, e^{\frac{2\pi i r}{k}} \,, \qquad \bar u_r \equiv \bar x^{\frac1k}\, e^{-\frac{2\pi i r}{k}} \,. \tag{6.10}
\]
Thus, inside the convergence region $|x| < 1$ we can exchange the order of the sums, such that the unrestricted sum over integers $n$ and $\bar n$ leads to the usual expression for the CFT two-point function. However, it is evaluated at the different values of $u_r$, instead of the insertion point itself. Thus, the expression (6.9) becomes
\[
\frac{1}{k^{4h+1}}\, \frac{x^{-\frac{2hs}{k}}\,\bar x^{-\frac{2h\bar s}{k}}}{|x|^{2h}} \sum_{r=0}^{k-1} \frac{1}{|1-u_r|^{4h}} \,. \tag{6.11}
\]
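A numerical sketch of this resummation, comparing the truncated restricted sum of Eq. (6.1) (with $a = \bar a$) against the closed form (6.11) for real $0 < x < 1$, is given below; the parameter values and cutoff are illustrative choices.

```python
# Kronecker-comb resummation check: restricted sum of Eq. (6.1) (case a = abar) vs. Eq. (6.11),
# for real 0 < x < 1. Parameter values and the cutoff N are illustrative.
import numpy as np
from math import lgamma, exp

def coeff(h, n):
    return exp(lgamma(2*h + n) - lgamma(2*h) - lgamma(n + 1))

def restricted_sum(x, h, s, sb, k, N=600):
    total = 0.0
    for n in range(N):
        for nb in range(n % k, N, k):            # nbar = n mod k
            total += (coeff(h, n)*coeff(h, nb)
                      * x**((n - 2*h*s)/k - h) * x**((nb - 2*h*sb)/k - h))
    return total / k**(4*h)

def comb_form(x, h, s, sb, k):
    u = x**(1.0/k) * np.exp(2j*np.pi*np.arange(k)/k)        # the roots u_r of Eq. (6.10)
    return (x**(-2*h*s/k - h) * x**(-2*h*sb/k - h) / k**(4*h + 1)
            * np.sum(np.abs(1 - u)**(-4*h)))

for h, s, sb, k in [(0.5, 2, 0, 2), (1.0, 3, 3, 3)]:
    print(h, s, sb, k, restricted_sum(0.4, h, s, sb, k), comb_form(0.4, h, s, sb, k))
```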
In fact, we can rewrite this in a slightly more general form. Indeed, it is easy to "unfix" the first insertion point and write the full expression of the HLLH correlator in terms of $x_1$ and $x_2$. To do so, we introduce $k$-th roots of $x_1$, $x_2$, and $x_2/x_1$ via $u_{1,r_1}^k = x_1$, $u_{2,r_2}^k = x_2$, and $u_{21,r}^k = x_2/x_1$, and then make use of the identity
\[
\frac{1}{|x_1|^{4h/k}} \sum_{r=0}^{k-1} \frac{1}{|1-u_{21,r}|^{4h}} \;=\; \frac{1}{k} \sum_{r_1,r_2=0}^{k-1} \frac{1}{|u_{1,r_1}-u_{2,r_2}|^{4h}} \,. \tag{6.12}
\]
This gives
\[
\langle s,\bar s, k|\, O_h(x_1,\bar x_1)\, O_h^{\dagger}(x_2,\bar x_2)\, |s,\bar s, k\rangle \Big|_{2h(s-\bar s)\,\equiv\, 0 \bmod k} = \frac{1}{k^{4h+2}} \left(\frac{x_2}{x_1}\right)^{\!-\frac{h(2s+1)}{k}} \left(\frac{\bar x_2}{\bar x_1}\right)^{\!-\frac{h(2\bar s+1)}{k}} |x_1 x_2|^{2h\left(\frac1k-1\right)} \sum_{r_1,r_2=0}^{k-1} \frac{1}{|u_{1,r_1}-u_{2,r_2}|^{4h}} \,. \tag{6.13}
\]
The general case is computed entirely analogously. We must simply replace $n - \bar n \to n - \bar n + (a - \bar a)$ in Eq. (6.8), which induces some extra phases. The appropriate generalization of Eq. (6.12) is given by
\[
\frac{1}{|x_1|^{4h/k}} \sum_{r=0}^{k-1} \frac{e^{2\pi i\, r\,(a-\bar a)/k}}{|1-u_{21,r}|^{4h}} \;=\; \frac{1}{k} \sum_{r_1,r_2=0}^{k-1} \frac{e^{2\pi i\,(r_2-r_1)(a-\bar a)/k}}{|u_{1,r_1}-u_{2,r_2}|^{4h}} \,. \tag{6.14}
\]
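The identity (6.14) (and hence (6.12) when $a = \bar a$) is easy to test numerically; in the following sketch the insertion points, weight $h$ and difference $d = a - \bar a$ are arbitrary test values, chosen so that no branch wrapping occurs when taking $k$-th roots.

```python
# Numerical test of the identity (6.14) (and of (6.12) when d = a - abar = 0).
# Insertion points, h and d are arbitrary test values (chosen to avoid branch wrapping).
import numpy as np

def kth_roots(z, k):
    return abs(z)**(1.0/k) * np.exp(1j*(np.angle(z) + 2*np.pi*np.arange(k))/k)

def both_sides(x1, x2, h, k, d):
    u1, u2, u21 = kth_roots(x1, k), kth_roots(x2, k), kth_roots(x2/x1, k)
    r = np.arange(k)
    lhs = np.sum(np.exp(2j*np.pi*r*d/k)/np.abs(1 - u21)**(4*h)) / abs(x1)**(4*h/k)
    r1, r2 = np.meshgrid(r, r)
    rhs = np.sum(np.exp(2j*np.pi*(r2 - r1)*d/k)/np.abs(u1[r1] - u2[r2])**(4*h)) / k
    return lhs, rhs

print(both_sides(0.3 + 0.4j, -0.2 + 0.1j, h=1.0, k=3, d=1))   # the two entries should agree
```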
Then the HLLH correlator with generic values of the orbifold parameter $k$, the spectral flow parameters $s$ and $\bar s$, and the weight of the light chiral primary operator $h$, takes the form
\[
\langle s,\bar s, k|\, O_h(x_1,\bar x_1)\, O_h^{\dagger}(x_2,\bar x_2)\, |s,\bar s, k\rangle = \frac{1}{k^{4h+2}} \left(\frac{x_2}{x_1}\right)^{\!-\frac{h(2s+1)}{k}} \left(\frac{\bar x_2}{\bar x_1}\right)^{\!-\frac{h(2\bar s+1)}{k}} |x_1 x_2|^{2h\left(\frac1k-1\right)} \sum_{r_1,r_2=0}^{k-1} \frac{e^{2\pi i\,(r_2-r_1)(a-\bar a)/k}}{|u_{1,r_1}-u_{2,r_2}|^{4h}} \,, \tag{6.15}
\]
where the sum is over the $k$-th roots of the insertion points $x_1$ and $x_2$, as defined above (6.12), and where $2h(\bar s - s) \equiv a - \bar a \!\mod k$. Note that we can relax the chiral primary condition and consider operators for which $m' \neq \pm h$. We shall continue to focus on massless vertex operators; however, this could be generalized further. In addition, by making use of the phases and the $x_{1,2}$ powers on the RHS of (6.15), we can rewrite the result in a cleaner form,
\[
\langle s,\bar s, k|\, O_{h,m'}(x_1,\bar x_1)\, O_{h,m'}^{\dagger}(x_2,\bar x_2)\, |s,\bar s, k\rangle = \frac{1}{k^2} \sum_{r_1,r_2=0}^{k-1} \left(\frac{u_{2,r_2}}{u_{1,r_1}}\right)^{\!-m'(2s+1)} \left(\frac{\bar u_{2,r_2}}{\bar u_{1,r_1}}\right)^{\!-\bar m'(2\bar s+1)} \frac{|u_{1,r_1} u_{2,r_2}|^{2h(1-k)}}{k^{4h}\, |u_{1,r_1}-u_{2,r_2}|^{4h}} \,. \tag{6.16}
\]
Matching between worldsheet and symmetric product orbifold
The appearance of the k th roots of the physical insertions x 1,2 in Eq. (6.16) is related to the fact that the holographic description of the heavy backgrounds involves heavy states in k-twisted sectors of the boundary CFT. The same feature appears in certain computations performed using the Lunin-Mathur covering space technique [49], specifically when there are operators of twist k inserted at the origin and infinity of the CFT plane, and when there are other untwisted operators in the correlator. Then the coordinate transformation to the k-fold covering space is precisely x = u k . Thus, when the light worldsheet operators correspond to untwisted operators of the symmetric product orbifold CFT, it is natural to identify u with the coordinate on the k-fold covering space that trivializes the twist operators involved in the definition of the heavy states.
The sum over the different roots generates the usual phases included in the definition of fractional modes by summing over the different copies of the theory [49],
\[
O_{\frac{m}{k}} \;=\; \oint \frac{dx}{2\pi i} \sum_{r=1}^{k} O^{(r)}(x)\, e^{\frac{2\pi i m}{k}(r-1)}\, x^{\,h+\frac{m}{k}-1} \,. \tag{6.17}
\]
Moreover, the fractional spectral flow defining the background, when mapped to a $k$-fold covering space, becomes integer spectral flow with parameters $2s+1$ and $2\bar s+1$ [54,55]. Hence, one can generalize the discussion around Eq. (5.20) and simply consider the appropriate powers of $u_{i,r_i}$ to arise from performing spacetime spectral flow on the operators, in the $k$-fold covering space. Finally, the last factor on the RHS of (6.16) corresponds to the usual two-point function evaluated at the roots, including the necessary Jacobian factors arising from mapping to the $k$-fold covering space, $|\partial u/\partial x|^{2h}$. Obtaining precisely this Jacobian is the justification for the factors of $k$ introduced in the definition of the $x$-basis operators in Eq. (5.6). Thus, we see that symmetric orbifold CFT HLLH correlators for which the covering space is $x = u^k$ agree in both structure and value with the worldsheet correlator (6.16).
By contrast, for twisted operators, the interpretation of our worldsheet result (6.16) is more involved: the Lunin-Mathur covering map for such correlators is not $x = u^k$. To understand the precise relation, we focus on light operators of twist two, and show that Eq. (6.16) nevertheless matches with the symmetric orbifold CFT also in this case. The relevant four-point function was studied recently in [61,62,85] in the Sym$^N$(T$^4$) CFT. At leading order in large $N$, the correlator is dominated by a contribution from a covering space with genus zero, where the copy indices of the light twist-two operator act on different $k$-strands corresponding to the heavy state. One of the light insertions effectively joins together two $k$-strands into a $2k$-strand, and the other light insertion effectively cuts the $2k$-strand back to two strands of length $k$. For this process, and setting for simplicity $s = \bar s = 0$ as done in [61], the relation between the physical-space cross-ratio $x$ and the covering-space cross-ratio $v$ is (note that in [61,62] the base, i.e. physical, space coordinates are denoted by $z$ or $u$ rather than our $x$, while the covering space coordinates are denoted by $t$ or $x$ rather than our $v$)
\[
x(v) \;=\; \left(\frac{v+1}{v-1}\right)^{2k} \,. \tag{6.18}
\]
The correlator of interest involves the function
\[
\frac{(v+1)^{2+2k}\,(v-1)^{2-2k}}{v^2} \,, \tag{6.19}
\]
where v(x) is defined through Eq. (6.18). The correlator itself is obtained by summing over the 2k pre-images of x and including an N -and k-dependent overall factor. However, due to the v → 1/v symmetry of the map (6.18), there are actually only k distinct contributions [62], corresponding to distinct ramified coverings of the base space. When normalisation factors are taken into account, this corresponds to what in [62] is called a 'Hurwitz block function'. Upon inserting the explicit solutions
\[
v_r(x) \;=\; \frac{x^{\frac{1}{2k}}\, e^{\frac{i\pi r}{k}} + 1}{x^{\frac{1}{2k}}\, e^{\frac{i\pi r}{k}} - 1} \,, \qquad r = 0, \ldots, k-1 \,, \tag{6.20}
\]
where as before $x^{\frac{1}{2k}}$ stands for a particular $(2k)$-th root of $x$, the final expression remarkably coincides with the $s = \bar s = 0$ case of Eq. (6.16). The analysis for the JMaRT states and for more general light insertions can be carried out analogously.
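A small numerical check that the $v_r(x)$ of Eq. (6.20) are indeed pre-images of $x$ under the covering map (6.18) is immediate; the values of $k$ and $x$ below are arbitrary.

```python
# Check that the v_r(x) of Eq. (6.20) are pre-images of x under the covering map (6.18),
# x(v) = ((v+1)/(v-1))^(2k). The values of k and x are arbitrary.
import numpy as np

k, x = 3, 0.2 + 0.5j
x_2k = abs(x)**(1/(2*k)) * np.exp(1j*np.angle(x)/(2*k))        # a chosen (2k)-th root of x
for r in range(k):
    w = x_2k*np.exp(1j*np.pi*r/k)
    v = (w + 1)/(w - 1)
    print(r, ((v + 1)/(v - 1))**(2*k))                          # each line should reproduce x
```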
Recall that, as reviewed in Section 3.1, at a generic spacetime dimension h there is a degeneracy in the twist n of light states in the symmetric product orbifold CFT. An interesting feature of the worldsheet correlator (6.16) is that it is independent of this twist n. Recall also that, for untwisted light operators, the worldsheet correlator has the same structure as the covering space method of the symmetric product orbifold CFT. The fact that the agreement of HLLH correlators extends to (at least some) twisted light operators is thus remarkable from the point of view of the holographic CFT. Despite the more complicated covering map, the above discussion demonstrates how, for these correlators, the end result agrees with an expression whose structure is that of the simple map x = u k .
Higher-point heavy-light correlators
Our general expression for HLLH correlators, Eq. (6.16), together with the matching to the symmetric product orbifold CFT that we have observed so far, motivate a deeper exploration. Thus, we now describe how local x-basis operators are seen from the spectrally flowed frame as indicated by our null-gauged worldsheet models. This will allow us to extract consequences for worldsheet three-point and higher-point functions, corresponding to holographic CFT correlators with two heavy states and three or more light operators.
The AdS$_3$ limit of the holomorphic gauge condition (5.4), upon using the definition of $m_y$ in Eq. (5.5), reads
\[
0 \;=\; m + (2s+1)\, m' - k\, m_y \,. \tag{6.21}
\]
We wish to re-interpret this constraint in the local coordinate basis of the holographic CFT. A priori, it is perhaps not obvious that this is a useful thing to do, since the usual x-basis operators are constructed by resumming the action of J ± 0 , which does not commute with the BRST charge in the coset theory. However we shall see that it will be very useful.
Let us observe that there are two notions of x-type local coordinates in the worldsheet model. The one used so far in Section 5 and the present section is the physical x coordinate of the gauged models. However, before gauging, there is an analogous coordinate for the upstairs SL(2,R) algebra. We will denote the associated coordinate by the complex variable u; we will see momentarily that u k = x, so that there will be no clash with the u used above.
The differential operator $x\partial_x + h$ corresponds to the quantity $m_y$, as can be seen by comparing Eqs. (3.20), (3.25), (5.6). On the other hand, the upstairs SL(2,R) projection $m$ corresponds to an analogous operator in the $u$ variable: we write this as $u\partial_u + h - \beta$, where we have allowed for a shift $\beta$, whose precise form will become clear shortly, as will the reason for its existence. Then Eq. (6.21) can be expressed in terms of these differential operators as
\[
k\, x\,\partial_x \;=\; u\,\partial_u + (2s+1)\, m' + h(1-k) - \beta \,. \tag{6.22}
\]
In order that this condition is solved by $u^k = x$, we choose
\[
\beta \;=\; h(1-k) + (2s+1)\, m' \,. \tag{6.23}
\]
Thus the role of β is two-fold. On the one hand, the first term in (6.23) effectively replaces the weight by h → h u ≡ kh, which further supports the discussion above about u corresponding to a covering space coordinate in the holographic CFT. It also generates the Jacobian factor obtained in Eq. (6.16). On the other hand, the second term in (6.23) takes into account the shift arising from spacetime spectral flow. We now use this to obtain an improved construction of gauge-invariant operators directly in the x-basis, built upon u-basis operators of the upstairs SL(2,R), i.e. without relying on their spacetime Virasoro mode expansion as in (5.6). Although such a construction gives equivalent results at the level of worldsheet two-point functions (6.16), its importance for higher-point functions was highlighted recently in [46]. The construction proceeds as follows:
1. We consider an operator whose upstairs SL(2,R) part is expressed in the usual local SL(2,R) basis, $V_h(u,\bar u)$, where for simplicity we set $h = \bar h$. We multiply this by an SU(2) vertex operator $V'_{j',m',\bar m'}$. We suppress the exponentials of $t$ and $y$, since they have weight zero in the AdS$_3$ limit, and their only effect is taken into account through (6.21) and its antiholomorphic counterpart. We introduce the notation
\[
\hat O_{h,m',\bar m'}(u,\bar u) \;\equiv\; V_h(u,\bar u)\, V'_{j',m',\bar m'} \,. \tag{6.24}
\]
2. We introduce the above $\beta$-shift by multiplying by an extra factor $u^{\beta}\,\bar u^{\bar\beta}$.
3. We sum the resulting operator over all insertion points $u$ such that $u^k = x$.
Explicitly, we define
\[
O_{h,m',\bar m'}(x,\bar x) \;\equiv\; \frac{1}{k^{2h+1}} \sum_{u^k=x} u^{\beta}\,\bar u^{\bar\beta}\; \hat O_{h,m',\bar m'}(u,\bar u) \,. \tag{6.25}
\]
Comparing with Eq. (5.6), using the Kronecker comb (6.8) to impose the constraints as above, we indeed have
\[
O_{h,m',\bar m'}(x,\bar x) \;\equiv\; \frac{1}{k^{h+\bar h}} \sum_{m_y,\bar m_y} x^{m_y-h}\,\bar x^{\bar m_y-\bar h}\; V_{j,m,\bar m}\, V'_{j',m',\bar m'}
\]
\[
= \frac{1}{k^{2h}} \sum_{m,\bar m} x^{\frac1k[m+(2s+1)m']-h}\,\bar x^{\frac1k[\bar m+(2\bar s+1)\bar m']-h}\; V_{j,m,\bar m}\, V'_{j',m',\bar m'}
\]
\[
= \frac{1}{k^{2h+1}} \sum_{u^k=x}\; \sum_{m,\bar m} u^{\,m-h+(2s+1)m'+h(1-k)}\;\bar u^{\,\bar m-h+(2\bar s+1)\bar m'+h(1-k)}\; V_{j,m,\bar m}\, V'_{j',m',\bar m'}
\]
\[
= \frac{1}{k^{2h+1}} \sum_{u^k=x} u^{\beta}\,\bar u^{\bar\beta}\; \hat O_{h,m',\bar m'}(u,\bar u) \,. \tag{6.26}
\]
We note that in the symmetric product orbifold CFT, when mapping the $S_k$-invariant untwisted operators $O(x) = \sum_{r=1}^{k} O^{(r)}(x)$ to the $k$-fold covering space, using the inverse relation to (6.17),
\[
O^{(r)}(x) \;=\; \frac{1}{k} \sum_{m} O_{\frac{m}{k}}\; x^{-\frac{m}{k}-h}\; e^{-\frac{2\pi i m}{k}(r-1)} \,, \tag{6.27}
\]
one obtains an expression closely analogous to Eq. (6.25). We now exploit the expression (6.26) to study higher-point functions. We first rewrite the HLLH correlator (6.16) in the simple form
\[
\langle s,\bar s, k|\, O_{h,m'}(x_1,\bar x_1)\, O_{h,m'}^{\dagger}(x_2,\bar x_2)\, |s,\bar s, k\rangle \;=\; \frac{1}{k^{4h+2}} \sum_{u_i^k=x_i} \frac{u_1^{\beta_1}\,\bar u_1^{\bar\beta_1}\; u_2^{\beta_2}\,\bar u_2^{\bar\beta_2}}{|u_1-u_2|^{4h}} \,, \tag{6.28}
\]
with $\beta_i = h_i(1-k) + (2s+1)m'_i$, $\bar\beta_i = h_i(1-k) + (2\bar s+1)\bar m'_i$, and where the charge conservation $m'_1 + m'_2 = 0$ is understood. We then observe that the worldsheet correlator with $n$ light insertions with weights $h_i$ and charges $m'_i$, $\bar m'_i$ is given by the following straightforward generalization of (6.28) (in which we partially suppress antiholomorphic quantities):
\[
\langle s,\bar s, k|\, O_{h_1,m'_1}(x_1) \cdots O_{h_n,m'_n}(x_n)\, |s,\bar s, k\rangle \;=\; \frac{1}{k^{H+\bar H+n}} \sum_{u_i^k=x_i}\; \prod_{\ell=1}^{n} u_\ell^{\beta_\ell}\,\bar u_\ell^{\bar\beta_\ell}\;\; \Big\langle \hat O_{h_1,m'_1}(u_1) \cdots \hat O_{h_n,m'_n}(u_n) \Big\rangle \,, \tag{6.29}
\]
where $H = h_1 + \cdots + h_n$, and $\langle \hat O_{h_1,m'_1}(u_1) \cdots \hat O_{h_n,m'_n}(u_n)\rangle$ stands for the global AdS$_3 \times$S$^3$ vacuum $n$-point function evaluated at the roots of the original insertion points. The expression (6.29), which holds for generic values of $s, \bar s, k$ and generic light weights and charges $h_i, m'_i, \bar m'_i$, constitutes one of the main results of this paper. In Eq. (6.29), the $n = 3$ case can be made quite explicit, as we shall do in the next subsection.
The above result can straightforwardly be seen to include spectrally-flowed vertex operators, as follows. Setting $\omega = \bar\omega = \omega'$ for simplicity, the bosonic null-gauge condition Eq. (4.41) in the AdS limit becomes
\[
0 \;=\; m_\omega + (2s+1)\, m'_\omega - k\, m_y \,, \qquad 0 \;=\; \bar m_\omega + (2\bar s+1)\, \bar m'_\omega - k\, \bar m_y \,, \tag{6.30}
\]
where, for discrete states in the lowest weight representation, $m_\omega = h_\omega + n$, $h_\omega = J + n_5\omega/2$ and $m'_\omega = h'_\omega + n_5\omega/2 - n'$. As a consequence, the exponent $\beta$ of the covering space coordinate $u$ gets replaced by $\beta \to \beta_\omega = h_\omega(1-k) + (2s+1)m'_\omega$, and the power of $k$ in the normalisation factor is modified accordingly. Thus, for the vertex operators in the coset models, the net effects of the spectral flow procedure are the replacements $h \to h_\omega$, $m' \to m'_\omega$. This is understood by the fact that when a boundary light operator has the spacetime dimension $h = J$ that renders the SL(2,R) spin above the unitary bound Eq. (3.16), it corresponds holographically to a spectrally-flowed worldsheet vertex operator [22]. This implies that the structure of the correlator in Eq. (6.29) is not drastically modified when $\omega \neq 0$.
It is important to note, however, that the entire computational complication due to worldsheet spectral flow remains present in the resulting vacuum correlator. Indeed, we see that the $n$-point function on the heavy state is now written in terms of a vacuum $n$-point function of spectrally-flowed states. It is thus natural to expect that the AdS$_3$ selection rules carry over to $n$-point functions in the JMaRT microstates. We conclude that the generalisation of Eq. (6.29) to the case of worldsheet spectrally-flowed states reads
\[
\langle s,\bar s, k|\, O^{\omega_1}_{h_1,m'_1}(x_1) \cdots O^{\omega_n}_{h_n,m'_n}(x_n)\, |s,\bar s, k\rangle \;=\; \frac{1}{k^{H_\omega+\bar H_\omega+n}} \sum_{u_i^k=x_i}\; \prod_{\ell=1}^{n} u_\ell^{\beta_{\omega,\ell}}\,\bar u_\ell^{\bar\beta_{\omega,\ell}}\;\; \Big\langle \hat O^{\omega_1}_{h_1,m'_1}(u_1) \cdots \hat O^{\omega_n}_{h_n,m'_n}(u_n) \Big\rangle \,, \tag{6.31}
\]
where $H_\omega = \sum_i h_{\omega,i}$ and the light operators $O^{\omega_i}_{h_i,m'_i}$ are $x$-basis spectrally-flowed worldsheet vertex operators.
We emphasize that the construction we have outlined in this section only holds in the IR AdS 3 × S 3 limit. In the full asymptotically linear dilaton geometry, the identification of the modes m y andm y as defined in (5.5) breaks down, and the t and y exponentials can no longer be ignored. This is consistent with the fact that in the UV the dual holographic theory is not a CFT, but is instead a little string theory. Since little string theories are non-local, it is correct that the above definition of local operators does not apply. Note, however, that the mode correlators computed in the m-basis still make perfect sense, and carry information about string perturbation theory in the full geometry.
Let us speculate on which subset of the above correlators can be expected to agree with those of the symmetric product orbifold theory. Since our expressions for the general correlators (6.29), (6.31) involve vacuum correlators, it is natural to conjecture that for these particular heavy backgrounds, the heavy-light correlator is protected whenever the global AdS 3 × S 3 vacuum correlator appearing in (6.29), (6.31) is protected. Recall that, in the global AdS 3 ×S 3 vacuum, two-point and three-point correlation functions of chiral primaries are protected [86], while four-point and higher-point functions are generically renormalized. So heavy-light correlators with two or three light insertions on these backgrounds may be protected between worldsheet and symmetric product orbifold CFT. It may even be possible to prove a non-renormalization theorem generalizing [86]; work in this direction is in progress. For now however, we next compute a heavy-light correlator with three light insertions in both worldsheet and holographic CFT.
An HLLLH correlator in worldsheet and holographic CFT
We now investigate the general expression for our worldsheet correlator (6.29), in a particular example with three light insertions, and compare it to the symmetric product orbifold CFT. We shall observe another highly non-trivial agreement.
We consider three light insertions living in the untwisted sector of the holographic CFT, with weights $(h_1, h_2, h_3) = (\tfrac12, \tfrac12, 1)$. In the dual CFT notation, we are then interested in computing the correlator $\langle O_{\frac12}(x_1)\, O_{\frac12}(x_2)\, O_1^{\dagger}(x_3)\rangle_H$.
We further focus on heavy backgrounds with $s = kp$, $p \in \mathbb{Z}$, and $\bar s = 0$.
We start by evaluating the general expression (6.29) for this particular worldsheet correlator. In the worldsheet theory associated to the global AdS$_3 \times$S$^3$, the $O_{\frac12}$ correspond to two RR states, while $O_1$ is an NSNS state polarized on the S$^3$ directions. The (integrated) vacuum three-point functions for these chiral primaries were studied in [37]. In our notation, they take the form
\[
\big\langle O^{\rm RR}_{h_1}(x_1)\, O^{\rm RR}_{h_2}(x_2)\, O^{{\rm NSNS}\,\dagger}_{h_3}(x_3) \big\rangle \;=\; \frac{1}{N^{1/2}}\, \frac{\big[(2j_1-1)(2j_2-1)(2j_3-1)\big]^{-1}}{|x_{12}|^{2(h_1+h_2-h_3)}\, |x_{13}|^{2(h_1+h_3-h_2)}\, |x_{23}|^{2(h_2+h_3-h_1)}} \,, \tag{6.32}
\]
where $j_1 = h_1 + \tfrac12$, $j_2 = h_2 + \tfrac12$, and $j_3 = h_3$. The relevant values for us are simply $j_i = 1$, and upon a global SL(2,C) transformation to set $x_3 = 1$, we have
\[
\big\langle O_{\frac12}(x_1)\, O_{\frac12}(x_2)\, O_1^{\dagger}(x_3) \big\rangle \;=\; \frac{1}{N^{1/2}}\, \frac{1}{|1-x_1|^2\, |1-x_2|^2} \,. \tag{6.33}
\]
In order to compute the HLLLH correlator in the worldsheet coset models corresponding to the JMaRT backgrounds, we must sum (6.33) evaluated at all $k$-th roots of the insertion points. An explicit expression can be obtained following the arguments of Sec. 5.3, using the following generalisation of Eq. (6.12) and Eq. (6.14) to the case of three insertions,
\[
\frac{k}{|x_3|^{2(\alpha+\beta)/k}} \sum_{r_1,r_2=0}^{k-1} \frac{1}{|1-u_{13,r_1}|^{2\alpha}\, |1-u_{23,r_2}|^{2\beta}} \;=\; \sum_{r_1,r_2,r_3=0}^{k-1} \frac{1}{|u_{3,r_3}-u_{1,r_1}|^{2\alpha}\, |u_{3,r_3}-u_{2,r_2}|^{2\beta}} \,, \tag{6.34}
\]
where $u_{i,r_i} = x_i^{1/k}\, e^{2\pi i r_i/k}$ and $u_{j\ell,r} = (x_j/x_\ell)^{1/k}\, e^{2\pi i r/k}$. With $\alpha = \beta = 1$, one obtains
\[
\big\langle O_{\frac12}(x_1)\, O_{\frac12}(x_2)\, O_1^{\dagger}(x_3) \big\rangle_H \;=\; \frac{1}{k^7}\, \frac{(x_1 x_2)^{p}\, |x_1 x_2|^{\frac2k-1}}{|1-x_1|^2\, |1-x_2|^2}\; \frac{1-|x_1|^2}{1-|x_1|^{2/k}}\; \frac{1-|x_2|^2}{1-|x_2|^{2/k}} \,. \tag{6.35}
\]
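The roots-of-unity identity (6.34) used above can be verified numerically; in the sketch below the insertion points are arbitrary test values and $\alpha = \beta = 1$ as in the text.

```python
# Numerical check of the three-insertion roots-of-unity identity (6.34), with
# alpha = beta = 1; the insertion points are arbitrary test values.
import numpy as np

def kth_roots(z, k):
    return abs(z)**(1.0/k) * np.exp(1j*(np.angle(z) + 2*np.pi*np.arange(k))/k)

k, al, be = 3, 1, 1
x1, x2, x3 = 0.2 + 0.3j, -0.1 + 0.4j, 0.5 - 0.2j
u1, u2, u3 = kth_roots(x1, k), kth_roots(x2, k), kth_roots(x3, k)
u13, u23 = kth_roots(x1/x3, k), kth_roots(x2/x3, k)

lhs = (k/abs(x3)**(2*(al + be)/k)
       * np.sum(1/np.abs(1 - u13)**(2*al)) * np.sum(1/np.abs(1 - u23)**(2*be)))
rhs = sum(1/(np.abs(u3[r3] - u1[r1])**(2*al)*np.abs(u3[r3] - u2[r2])**(2*be))
          for r1 in range(k) for r2 in range(k) for r3 in range(k))
print(lhs, rhs)   # the two numbers should agree
```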
The result (6.35) constitutes the first computation of a heavy-light worldsheet correlator with three light insertions probing a black hole microstate. We now show that the same result can be obtained from the HCFT at the symmetric orbifold point. We follow the method used in [30, App. A] for the HLLH correlator reviewed in Section 5.3. The heavy states we use indicate that we should work in the $k$-twisted sector of the theory. The operators can be written in terms of the fermions introduced in Eq. (3.7). For the $h = 1/2$ chiral primaries, and for each strand of $k$ copies of the theory, this simply reads
\[
O_{\frac12} \;=\; \sum_{r=1}^{k} O_{\frac12,(r)} \;=\; -\frac{i}{\sqrt2} \sum_{r=1}^{k} \psi^{+\dot A}_{(r)}\, \bar\psi^{+\dot B}_{(r)}\, \epsilon_{\dot A\dot B} \;=\; -\frac{i}{\sqrt2} \sum_{\rho=0}^{k-1} \psi^{+\dot A}_{\rho}\, \bar\psi^{+\dot B}_{\rho}\, \epsilon_{\dot A\dot B} \,, \tag{6.36}
\]
while for the $h = 1$ operator we find
\[
O_1^{\dagger} \;=\; \sum_{r=1}^{k} O_{1,(r)} \;=\; \frac14 \sum_{r=1}^{k} \psi^{-\dot A}_{(r)}\, \psi^{-\dot B}_{(r)}\, \bar\psi^{-\dot C}_{(r)}\, \bar\psi^{-\dot D}_{(r)}\, \epsilon_{\dot A\dot B}\, \epsilon_{\dot C\dot D} \;=\; \frac{1}{4k} \sum_{\rho_i=0}^{k-1} \delta_{\rho_1+\rho_2,\,\rho_3+\rho_4}\; \psi^{-\dot A}_{\rho_1}\, \psi^{-\dot B}_{\rho_2}\, \bar\psi^{-\dot C}_{\rho_3}\, \bar\psi^{-\dot D}_{\rho_4}\, \epsilon_{\dot A\dot B}\, \epsilon_{\dot C\dot D} \,. \tag{6.37}
\]
We will work in the bosonized language, in which
\[
\psi^{+1}_{\rho} = i\, e^{i H_\rho} \,, \qquad \psi^{-2}_{\rho} = i\, e^{-i H_\rho} \,, \qquad \psi^{+2}_{\rho} = e^{i K_\rho} \,, \qquad \psi^{-1}_{\rho} = e^{-i K_\rho} \,. \tag{6.38}
\]
Here $H_\rho$ and $K_\rho$ are canonically normalized bosonic fields, in terms of which the (unit normalized) heavy states take the form [30]
\[
|H\rangle \;=\; |s=kp,\, k\rangle \;=\; \Sigma_k\, \bar\Sigma_k \prod_{\rho=0}^{k-1} e^{\,i\left(p+\frac12-\frac{\rho}{k}\right)(H_\rho+K_\rho)}\; e^{\,i\left(\frac12-\frac{\rho}{k}\right)(\bar H_\rho+\bar K_\rho)}\; \mathcal{N}_k\, |0\rangle \,, \tag{6.39}
\]
where $\Sigma_k$ and $\bar\Sigma_k$ are the twist operators. Note that the contribution of $\Sigma_k$ and $\bar\Sigma_k$ to the correlators will simply factorize, since the $\psi_\rho$ fermions diagonalize the twisted boundary conditions. Choosing the labelling of the insertion points for later convenience, the correlator to be computed is then
\[
\big\langle O_{\frac12}(x_1)\, O_{\frac12}(x_2)\, O_1^{\dagger}(x_3)\, O_H(x_4)\, O_H^{\dagger}(x_5) \big\rangle \;=\; \frac{1}{k} \sum_{\rho,\rho'=0}^{k-1}\; \sum_{\rho_i=0}^{k-1}\; \delta_{\rho_3+\rho_4,\,\rho_5+\rho_6}\; \times \tag{6.40}
\]
\[
\Big\langle \big(\psi^{+\dot A_1}_{\rho_1}\bar\psi^{+\dot B_1}_{\rho_1}\epsilon_{\dot A_1\dot B_1}\big)(x_1)\; \big(\psi^{+\dot A_2}_{\rho_2}\bar\psi^{+\dot B_2}_{\rho_2}\epsilon_{\dot A_2\dot B_2}\big)(x_2)\; \big(\psi^{-\dot A_3}_{\rho_3}\psi^{-\dot B_3}_{\rho_4}\bar\psi^{-\dot C_3}_{\rho_5}\bar\psi^{-\dot D_3}_{\rho_6}\epsilon_{\dot A_3\dot B_3}\epsilon_{\dot C_3\dot D_3}\big)(x_3)
\]
\[
\quad e^{\,i\left(p+\frac12-\frac{\rho}{k}\right)(H_\rho+K_\rho)}(x_4)\;\; e^{-i\left(p+\frac12-\frac{\rho'}{k}\right)(H_{\rho'}+K_{\rho'})}(x_5)\;\; e^{\,i\left(\frac12-\frac{\rho}{k}\right)(\bar H_\rho+\bar K_\rho)}(x_4)\;\; e^{-i\left(\frac12-\frac{\rho'}{k}\right)(\bar H_{\rho'}+\bar K_{\rho'})}(x_5) \Big\rangle \,.
\]
Clearly, charge conservation implies $\rho = \rho'$. For the same reason, the correlator vanishes unless $\rho_1 = \rho_3$ and $\rho_2 = \rho_4$, or $\rho_1 = \rho_4$ and $\rho_2 = \rho_3$, or both. An analogous statement holds with $\rho_3, \rho_4$ replaced by $\rho_5, \rho_6$, hence all contributions trivially satisfy the $\rho_3 + \rho_4 = \rho_5 + \rho_6$ constraint. Consequently, we only really need to sum over all possible values of, say, $\rho_1$ and $\rho_2$, and also compute the product over $\rho$. In this way, up to an irrelevant numerical factor, the free field contractions give
\[
\sum_{\rho_1,\rho_2=0}^{k-1} \frac{1}{\big|x_{45}^{2h_H}\, x_{13}\, x_{23}\big|^{2}} \left(\frac{x_{41}x_{35}}{x_{51}x_{34}}\right)^{\!p+\frac12-\frac{\rho_1}{k}} \left(\frac{x_{42}x_{35}}{x_{52}x_{34}}\right)^{\!p+\frac12-\frac{\rho_2}{k}} \left(\frac{\bar x_{41}\bar x_{35}}{\bar x_{51}\bar x_{34}}\right)^{\!\frac12-\frac{\rho_1}{k}} \left(\frac{\bar x_{42}\bar x_{35}}{\bar x_{52}\bar x_{34}}\right)^{\!\frac12-\frac{\rho_2}{k}} \,, \tag{6.41}
\]
where $h_H$ is the weight of the heavy state.
We can now take x 3 → 1, x 4 → 0 and x 5 → ∞, and perform the sums over ρ 1 and ρ 2 explicitly. Upon doing so, we find that the structure of this orbifold CFT correlator Eq. (6.41) precisely matches the worldsheet correlator (6.35).
Hawking radiation from the worldsheet
As a final application of our results, we now use the HLLH correlator (6.16) to compute the amplitude that describes the analogue of the Hawking radiation process for the JMaRT backgrounds [55,[58][59][60]. In the bulk, this process is ergoregion radiation, which is a feature of the full asymptotically flat JMaRT solutions [87]. The ergoregion does not survive the fivebrane decoupling limit [26] or AdS 3 decoupling limit, however aspects of the process can still be studied quantitatively in those limits. This process has been interpreted as an enhanced analogue of Hawking radiation, since both are described by the same microscopic process in the holographic CFT [58]. Indeed, acting on a thermal state, this vertex operator gives precisely the spectrum and rate of Hawking radiation of the corresponding black hole, while acting on the states dual to the JMaRT solutions yields their characteristic spectrum and rate of emission [55,[58][59][60].
The emission spectrum and rate for general $k, s, \bar s$ were computed in supergravity and symmetric product orbifold CFT in [55], building on the results of [58-60]. We will reproduce these results from the worldsheet CFT.
We start with a specific HLLH correlator in which the light operators are given by minimally coupled scalars in six dimensions, after reducing on the T 4 . The corresponding vertex operators were defined in Eq. (4.14). These are not chiral primaries of the boundary theory, but are their superdescendants within the short multiplet, so the holographic correlator arising in the AdS 3 limit is easily computed by using the techniques outlined in the previous sections. The amplitude of interest involves an initial state consisting of a probe excitation on top of the JMaRT background, a vertex operator V associated to a light insertion, and a final state given by the black hole microstate. Schematically we have
\[
\langle s,\bar s, k|\, O(x)\, |s,\bar s, k + \text{probe}\rangle \;=\; \langle s,\bar s, k|\, O(x)\, O^{\dagger}(0)\, |s,\bar s, k\rangle \,. \tag{6.42}
\]
To begin with, we work with $k = 1$. Up to an overall sign, and considering the lowest energy state, the holomorphic part of the amplitude for the Hawking emission of a single quantum of dimension $h = \frac{l}{2} + 1$, and whose corresponding vertex operator has charge $m' = k - \frac{l}{2}$, reads [59]
\[
A_L(x) \;=\; \frac{1}{x^{\,(1+\alpha)\frac{l}{2} - \alpha k + 1}} \;=\; \frac{1}{x^{\,\frac{l}{2}+1-\alpha\left(k-\frac{l}{2}\right)}} \,. \tag{6.43}
\]
Here $\frac{l}{2}$ denotes the total angular momentum of the probe on the S$^3$ part of the geometry, while $k$ is the number of $J^+_0$ operators acting on the state with the lowest projection, appearing in the definition of the vertex operator. To compare their computation with our worldsheet result, one uses the following (notation) map:
\[
k - \tfrac{l}{2} \;\to\; m' \,, \qquad \tfrac{l}{2} + 1 \;\to\; h \,, \qquad \alpha \;\to\; l_2 = m + n = 2s + 1 \,. \tag{6.44}
\]
Taking care of the cylinder-to-plane conversion factor $x^{-\frac{l}{2}-1}$, one obtains
\[
A_L(x) \;=\; \frac{1}{x^{\,2h - m'(2s+1)}} \,. \tag{6.45}
\]
We now perform the analogous computation in the worldsheet cosets. From Eq. (6.42), in the worldsheet formalism all we need to do is to insert the second operator at the boundary origin, i.e. to take the $x_2 \to 0$ limit in Eq. (6.16) (with $k = 1$ for now). This gives
\[
\frac{1}{x^{\,2h - m'(2s+1)}} \,, \tag{6.46}
\]
in agreement with (6.45) upon including the antiholomorphic contribution. The procedure is analogous for general $k, s, \bar s$. We again evaluate the amplitude for $x_2 \to 0$ by including the appropriate Jacobian factor for the light state, and obtain
\[
\lim_{x_2\to 0}\; k^{2h}\, x_2^{\frac{m'(2s+1)}{k}+h\left(1-\frac1k\right)}\, \bar x_2^{\frac{\bar m'(2\bar s+1)}{k}+h\left(1-\frac1k\right)}\, \big\langle s,\bar s, k\big|\, O^{(m',\bar m')}_h(x_1)\, O^{(m',\bar m')\,\dagger}_h(x_2)\, \big|s,\bar s, k\big\rangle
\]
\[
= \frac{1}{k^{2h+2}} \sum_{r_1=0}^{k-1} e^{2\pi i \frac{r_1}{k}\left(m'(2s+1)-\bar m'(2\bar s+1)\right)}\; \frac{1}{x_1^{\,h\left(1+\frac1k\right)-\frac{m'(2s+1)}{k}}\; \bar x_1^{\,h\left(1+\frac1k\right)-\frac{\bar m'(2\bar s+1)}{k}}}\; \sum_{r_2=0}^{k-1} e^{2\pi i \frac{r_2}{k}\left(-m'(2s+1)+\bar m'(2\bar s+1)\right)}
\]
\[
= \frac{1}{k^{2h}}\; \frac{1}{x^{\,h\left(1+\frac1k\right)-\frac{m'(2s+1)}{k}}\; \bar x^{\,h\left(1+\frac1k\right)-\frac{\bar m'(2\bar s+1)}{k}}}\; \sum_{\ell\in\mathbb Z} \delta_{m'(2s+1)-\bar m'(2\bar s+1),\,\ell k} \,, \tag{6.47}
\]
where in the first equality we have exchanged the finite sum with the limit, and $x = x_1$, $m' = m'_1 = -m'_2$. When $k = 1$, this reduces to Eq. (6.46). The Kronecker comb enforces the constraint $(2s+1)m' - (2\bar s+1)\bar m' \in k\,\mathbb{Z}$, which is a direct consequence of the difference between left and right null-gauge constraints (5.4) in the regime of interest. Moreover, by first multiplying the correlator in Eq. (6.16) by $x^n \bar x^{\bar n}$ we can also consider descendant insertions. This condition is in agreement with the results present in [55,60] (see also [26]), where our $n_y$ has to be identified with their $\lambda$ from the supergravity analysis. When considering the case of multi-particle emission, the above amplitude must be multiplied by a combinatorial factor, as explained in [58-60]. To obtain the emission rate, one needs to consider the unit amplitude evaluated at $(x,\bar x) = (1,1)$, implying that the spatial dependence trivialises. Nevertheless, the crucial feature related to the presence of the prefactor $k^{-2h}$, which enters the final expression of the emission rate (to compare with the final results of [55,60] one must include the additional factor $\sqrt{k}\,\nu$, where $\nu$ is related to the Bose enhancement, which is not visible for a single-particle process), is reproduced by (6.47).
Even though the spatial dependence of the two-point function Eq. (6.47) plays a trivial role in the emission rate, the power of $x$ has a precise meaning in terms of the energy spectrum of the nearly unstable Hawking quanta [55,59]. Indeed, consider the holomorphic part of the energy of these modes. In the conventions of [55], the corresponding spectrum reads
\[
\omega\, k\, R_y \;=\; \tfrac12\, \alpha\, k\, (m_\phi - m_\psi) \;-\; \tfrac12\, \bar\alpha\, k\, (m_\phi + m_\psi) \;-\; 2\left(\tfrac{l}{2}+1\right) . \tag{6.48}
\]
In our notation, $(m_\phi - m_\psi) = 2m'$, $(m_\phi + m_\psi) = -2\bar m'$, and $\frac{l}{2} + 1 = h$, so this becomes
\[
-\,\omega R_y \;=\; \frac{2h}{k} \;-\; \alpha\, m' \;-\; \bar\alpha\, \bar m' \,, \tag{6.49}
\]
where $\alpha, \bar\alpha$ are the same as in Eq. (2.27). Finally, taking care of the cylinder-to-plane conversion factor for a field of spacetime conformal dimension $h$, we obtain
\[
-\,\omega R_y \;=\; 2h\left(1+\frac1k\right) \;-\; \alpha\, m' \;-\; \bar\alpha\, \bar m' \,. \tag{6.50}
\]
The RHS is exactly the sum of the exponents of $x$ and $\bar x$ in Eq. (6.47). Furthermore, we note that this relation is precisely the sum of the left and right bosonic null gauge constraints Eq. (5.4) for discrete states with $n = \bar n = 0$.
The emission takes place when the energy is positive, ω > 0, and corresponds to quanta leaving the AdS region; in a near-decoupling limit, these quanta escape into the asymptotically flat region. Indeed, the exponent of x in Eq. (6.47) becomes positive and the amplitude diverges at large x, such that the energy indeed turns from negative to positive. This is consistent with the description of the ergoregion radiation process as pair creation [88].
Discussion and outlook
In this paper we have computed a large set of worldsheet correlators describing the dynamics of light modes probing a class of highly-excited supergravity backgrounds, the JMaRT solutions, in the fivebrane decoupling limit. The results are exact in $\alpha'$ and were obtained by exploiting the solvability of the null-gauged WZW models corresponding to these backgrounds.
These coset models provide a powerful method to calculate HLLH correlators, since the heavy states are already taken into account in the worldsheet CFT itself. Thus spacetime HLLH correlators are two-point functions on the worldsheet, which can be computed once the vertex operators have been constructed.
We constructed physical vertex operators in both NS and R sectors, and then computed several families of correlators in the full coset models. We primarily focused on short strings belonging to discrete representations of the affine SL(2,R) algebra, as well as a tower of modes generated by worldsheet spectral flow. Our main techniques can also be employed in more general sectors of the theory.
In the IR AdS 3 limit, due to the non-trivial gauging, the identification of the x variable dual to the local coordinate of the holographic CFT requires some care. Once we made this identification, we computed several non-trivial HLLH correlators explicitly, and analyzed them in the context of AdS 3 /CFT 2 .
Vertex operators that are local on the AdS 3 boundary are constructed by summing over all allowed values of the spacetime modes. An important step in our analysis consists of identifying these modes. We chose a gauge in which the IR AdS 3 boundary coordinates are (t, y) of the timelike R and spacelike S 1 directions of the (10+2)-dimensional model before gauging. We therefore identified the spacetime mode indices with the quantum numbers m y andm y defined in (5.5). Then the gauge constraints (4.13) satisfied by the physical states imply that the m y mode numbers take values in Z/k. This is how the worldsheet coset models capture the fact that when k > 1, the heavy background states of the symmetric product orbifold CFT are in the k-twisted sector [54,55].
We observed that, at large $N$, several correlators agree exactly between worldsheet and symmetric product orbifold CFT. The fact that our correlators are exact in $\alpha'$ significantly strengthens previous results that compared HLLH correlators between the separate supergravity and symmetric product orbifold CFT regimes.
To demonstrate our method, we presented a detailed example with an $(h,\bar h) = (\tfrac12, \tfrac12)$ chiral primary. The worldsheet correlator involves a non-trivial structure in terms of the boundary coordinate $x$, Eq. (5.15). When the background is BPS, the correlator agrees precisely with the supergravity and symmetric product orbifold CFT correlators computed in [30]. The non-BPS JMaRT backgrounds were not considered in [30], however we demonstrated that the agreement extends also to those backgrounds.
Similarly to correlators on the background of the global AdS$_3$ vacuum, the holomorphic and antiholomorphic sectors are related through the constraint $m_y - \bar m_y = n_y$, where $n_y$ is the quantized momentum on the $y$ circle. Thus, while the spacetime modes $m_y, \bar m_y$ are fractional, their difference must be an integer. This mirrors the $m - \bar m \in \mathbb{Z}$ condition in the SL(2,R)/U(1) cigar coset and in global AdS$_3$, which ensures that the wavefunctions are single-valued. In our models, the difference of the left and right gauge constraints leads to the mod $k$ condition in Eq. (6.3), constraining which of the SL(2,R) modes can contribute. The HLLH correlator is then obtained by summing over a specific linear combination of $m$-basis worldsheet two-point functions.
Our analysis of these correlators involving the h = 1/2 light operator indicated a way to obtain similar expressions for more general correlators. We considered general massless vertex operators, which correspond to symmetric product orbifold CFT operators in short multiplets whose top component is a chiral primary of arbitrary weight h, including those that live in twisted sectors. We computed all HLLH correlators where the light operators are massless, and where the heavy states correspond to any of the general family of orbifolded JMaRT configurations, including their BPS limits. The result assumes a remarkably simple form, presented in Eq. (6.16). It is built from three distinct factors: (1) the global AdS 3 ×S 3 vacuum two-point function of the light operators inserted at the k-th roots of the original insertion points x i , (2) the Jacobian factor associated with the corresponding change of coordinates, and (3) an additional factor coming from the way in which operators of definite R-charge transform under spectral flow. The product of these factors is then summed over all such roots. This structure reflects that one can formulate the computation in a k-fold covering space of the target space.
We then obtained a similar expression for all higher-point functions of the schematic form $\langle H|O_1(x_1,\bar x_1) \cdots O_n(x_n,\bar x_n)|H\rangle$, with heavy JMaRT states, and $n$ massless insertions. This is presented in Eq. (6.29). We expect this to be valid for an arbitrary number of massless insertions of weights $h_i$ and charges $m'_i$ and $\bar m'_i$, and also arbitrary parameters $(k, s, \bar s)$ for which a consistent background exists. In this way, we have provided a recipe for computing such $(n+2)$-point heavy-light correlators in terms of $n$-point global AdS$_3 \times$S$^3$ vacuum correlation functions of the corresponding light insertions.
It is known that vacuum two- and three-point functions of chiral primary operators are protected [86]. We therefore conjectured that heavy-light correlators in JMaRT heavy states are protected whenever the corresponding vacuum correlator in our general formula (6.29) is protected. We have investigated a particular HLLLH five-point function, the first of its kind in the literature, finding that worldsheet and symmetric product results agree. We leave a more general investigation of this proposal to future work.
As an application, we have shown that our results describe the analog of the Hawking radiation process for the general family of non-BPS JMaRT black hole microstates, generalizing the analysis in [55,59,67].
In addition to these main results, our work has clarified some important technical details. For instance, the full asymptotically linear dilaton JMaRT backgrounds do not have AdS 3 × S 3 isometries. Correspondingly, in the worldsheet cosets, the SL(2,R) and SU(2) raising and lowering operators J ± , K ± of the (10+2)-dimensional ungauged model do not commute with the gauging. Thus the NS sector vertex operators of the cosets do not have well-defined SL(2,R) or SU(2) spins, see for instance Eq. (4.20). The same holds for the chirality quantum number ε in the R sector, as discussed around Eq. (4.25). The absence of the SL(2,R) spin has important implications also from the holographic point of view. It underlines the fact that the construction of x-basis operators is only appropriate in the AdS 3 limit, and breaks down otherwise. The breakdown of the x coordinate is a signal of the non-locality of the non-gravitational little string theory that lives on the worldvolume of the NS5 branes, dual to the full asymptotically linear-dilaton models. Thus the states we have constructed contain valuable information about the dual LST and, more generally, about non-AdS holography [56,57,89]. We have nevertheless demonstrated how, in the AdS 3 limit, our vertex operators acquire definite spins and reduce to the appropriate expressions.
Our results suggest several directions for future investigations. First, it would be interesting to compute more general worldsheet correlators, both in the AdS 3 limit and in the full models. Our correlators are likely to generalize to a larger set of worldsheet vertex operators that correspond to operators in the symmetric product orbifold CFT that transform nicely under spacetime spectral flow [90]. In global AdS 3 , correlators are known to involve a highly non-trivial structure related to the non-conservation of the spectral flow number [24]. More generally, one would like to describe the physics of long/winding strings and their correlators in these backgrounds. A number of interesting techniques recently developed in [46,47] (for the bosonic case) are likely to have interesting implications for computations in the coset theories, for which SL(2,R) constitutes a crucial building block.
It would also be interesting to study such correlators by using conformal perturbation theory on top of a putative dual CFT explicitly associated to the NSNS singular point [79] of the moduli space, defined along the lines of [39,41]. Doing so would require an understanding of how to define the JMaRT heavy states in such a theory. Separately, it would be interesting to investigate the case n 5 = 1, which would require going beyond the RNS formalism, as done in related recent developments [80,91,92]. Here one should go though the coset construction starting with the supergroup PSU(1,1|2).
In the full asymptotically linear dilaton models, more general correlators can be computed by using the vertex operators constructed in Section 4. However, a shift in perspective will be needed, since the $x$ coordinate seems unlikely to be of any use in this regime. Although a priori in our case it is more natural to work in the $m$-basis, it seems plausible to relate our results to the momentum-space correlators studied in [56,57], see also [93]. In those papers, the authors work with a related null-gauged model, and further interpret their holographic LST correlators in terms of an irrelevant (single-trace) $T\bar T$-deformation of the IR CFT$_2$.
Separately, it will be interesting to investigate our proposal for the subset of heavylight correlators that we expect to be protected by considering the dual computations in the symmetric product orbifold CFT.
Last, but not least, one would like to explore further how these correlation functions encode more detailed information about the physics of the microstate backgrounds we are working with. For instance, two-point functions are expected to probe the multipole ratios of the geometry [94,95], while certain worldsheet three-point functions should be related to the Penrose process in the JMaRT backgrounds [96].
Although the JMaRT backgrounds are atypical microstates, the HLLH correlators we have computed approach black-hole-like behaviour at large k, reflecting the properties of the backgrounds in this limit. We expect that the techniques developed in this work will help further the study of more typical black hole microstates in string theory.
Acknowledgments
For discussions, we thank I. Bena
Worldsheet Correlators in Black Hole Microstates. D Bufalini, S Iguri, N Kovensky, D Turton, 10.1103/PhysRevLett.129.121603Phys. Rev. Lett. 1291216032203.13828D. Bufalini, S. Iguri, N. Kovensky and D. Turton, Worldsheet Correlators in Black Hole Microstates, Phys. Rev. Lett. 129 (2022) 121603 [2203.13828].
Microscopic Origin of the Bekenstein-Hawking Entropy. A Strominger, C Vafa, 10.1016/0370-2693(96)00345-0hep-th/9601029Phys. Lett. 37999A. Strominger and C. Vafa, Microscopic Origin of the Bekenstein-Hawking Entropy, Phys. Lett. B379 (1996) 99 [hep-th/9601029].
AdS / CFT duality and the black hole information paradox. O Lunin, S D Mathur, 10.1016/S0550-3213(01)00620-4hep-th/0109154Nucl. Phys. B. 623342O. Lunin and S. D. Mathur, AdS / CFT duality and the black hole information paradox, Nucl. Phys. B 623 (2002) 342 [hep-th/0109154].
O Lunin, J M Maldacena, L Maoz, hep-th/0212210Gravity solutions for the D1-D5 system with angular momentum. O. Lunin, J. M. Maldacena and L. Maoz, Gravity solutions for the D1-D5 system with angular momentum, hep-th/0212210.
Smooth horizonless geometries deep inside the black-hole regime. I Bena, S Giusto, E J Martinec, R Russo, M Shigemori, D Turton, 10.1103/PhysRevLett.117.2016011607.03908Phys. Rev. Lett. 117201601I. Bena, S. Giusto, E. J. Martinec, R. Russo, M. Shigemori, D. Turton et al., Smooth horizonless geometries deep inside the black-hole regime, Phys. Rev. Lett. 117 (2016) 201601 [1607.03908].
Asymptotically-flat supergravity solutions deep inside the black-hole regime. I Bena, S Giusto, E J Martinec, R Russo, M Shigemori, D Turton, 10.1007/JHEP02(2018)014JHEP. 02141711.10474I. Bena, S. Giusto, E. J. Martinec, R. Russo, M. Shigemori, D. Turton et al., Asymptotically-flat supergravity solutions deep inside the black-hole regime, JHEP 02 (2018) 014 [1711.10474].
. N Ceplak, R Russo, M Shigemori, Supercharging Superstrata, 10.1007/JHEP03(2019)0951812.08761JHEP. 0395N. Ceplak, R. Russo and M. Shigemori, Supercharging Superstrata, JHEP 03 (2019) 095 [1812.08761].
P Heidmann, N P Warner, 10.1007/JHEP09(2019)0591903.07631Superstratum Symbiosis. 59P. Heidmann and N. P. Warner, Superstratum Symbiosis, JHEP 09 (2019) 059 [1903.07631].
Elliptical and purely NS superstrata. B Ganchev, A Houppe, N P Warner, 10.1007/JHEP09(2022)0672207.04060JHEP. 0967B. Ganchev, A. Houppe and N. P. Warner, Elliptical and purely NS superstrata, JHEP 09 (2022) 067 [2207.04060].
Holographic anatomy of fuzzballs. I Kanitscheider, K Skenderis, M Taylor, 10.1088/1126-6708/2007/04/023hep-th/0611171JHEP. 0423I. Kanitscheider, K. Skenderis and M. Taylor, Holographic anatomy of fuzzballs, JHEP 04 (2007) 023 [hep-th/0611171].
Fuzzballs with internal excitations. I Kanitscheider, K Skenderis, M Taylor, 10.1088/1126-6708/2007/06/056JHEP. 06560704.0690I. Kanitscheider, K. Skenderis and M. Taylor, Fuzzballs with internal excitations, JHEP 06 (2007) 056 [0704.0690].
AdS 3 holography for 1/4 and 1/8 BPS geometries. S Giusto, E Moscato, R Russo, 10.1007/JHEP11(2015)0041507.00945JHEP. 114S. Giusto, E. Moscato and R. Russo, AdS 3 holography for 1/4 and 1/8 BPS geometries, JHEP 11 (2015) 004 [1507.00945].
Ads 3 holography at dimension two. S Giusto, S Rawash, D Turton, 10.1007/JHEP07(2019)1711904.12880JHEP. 07171S. Giusto, S. Rawash and D. Turton, Ads 3 holography at dimension two, JHEP 07 (2019) 171 [1904.12880].
Supercharged AdS 3 Holography. S Rawash, D Turton, 10.1007/JHEP07(2021)178JHEP. 071782105.13046S. Rawash and D. Turton, Supercharged AdS 3 Holography, JHEP 07 (2021) 178 [2105.13046].
Little string theory in a double scaling limit. A Giveon, D Kutasov, 10.1088/1126-6708/1999/10/034hep-th/9909110JHEP. 1034A. Giveon and D. Kutasov, Little string theory in a double scaling limit, JHEP 10 (1999) 034 [hep-th/9909110].
A Giveon, D Kutasov, 10.1016/S0550-3213(01)00573-9hep-th/0106004Notes on AdS. 621303A. Giveon and D. Kutasov, Notes on AdS(3), Nucl. Phys. B621 (2002) 303 [hep-th/0106004].
The Large N limit of superconformal field theories and supergravity. J M Maldacena, 10.1023/A:1026654312961,10.4310/ATMP.1998.v2.n2.a1hep-th/9711200Int. J. Theor. Phys. 381113J. M. Maldacena, The Large N limit of superconformal field theories and supergravity, Int. J. Theor. Phys. 38 (1999) 1113 [hep-th/9711200].
A Giveon, D Kutasov, N Seiberg, 10.4310/ATMP.1998.v2.n4.a3hep-th/9806194Comments on string theory on AdS. 2733A. Giveon, D. Kutasov and N. Seiberg, Comments on string theory on AdS(3), Adv. Theor. Math. Phys. 2 (1998) 733 [hep-th/9806194].
. D Kutasov, N Seiberg, 10.1088/1126-6708/1999/04/008hep-th/9903219More comments on string theory on AdS. 38JHEPD. Kutasov and N. Seiberg, More comments on string theory on AdS(3), JHEP 04 (1999) 008 [hep-th/9903219].
Operator Algebra and Correlation Functions in the Two-Dimensional Wess-Zumino SU(2) x SU(2) Chiral Model. A Zamolodchikov, V Fateev, Sov. J. Nucl. Phys. 43657A. Zamolodchikov and V. Fateev, Operator Algebra and Correlation Functions in the Two-Dimensional Wess-Zumino SU(2) x SU(2) Chiral Model, Sov. J. Nucl. Phys. 43 (1986) 657.
Operator product expansion and factorization in the H+(3) WZNW model. J Teschner, 10.1016/S0550-3213(99)00785-3hep-th/9906215Nucl. Phys. 571555J. Teschner, Operator product expansion and factorization in the H+(3) WZNW model, Nucl. Phys. B571 (2000) 555 [hep-th/9906215].
R) WZW model 1.: The Spectrum. J M Maldacena, H Ooguri, 10.1063/1.1377273hep-th/0001053Strings in AdS(3) and SL. 422929J. M. Maldacena and H. Ooguri, Strings in AdS(3) and SL(2,R) WZW model 1.: The Spectrum, J. Math. Phys. 42 (2001) 2929 [hep-th/0001053].
Strings in AdS(3) and the SL(2,R) WZW model. Part 2. Euclidean black hole. J M Maldacena, H Ooguri, J Son, 10.1063/1.1377039hep-th/0005183J. Math. Phys. 422961J. M. Maldacena, H. Ooguri and J. Son, Strings in AdS(3) and the SL(2,R) WZW model. Part 2. Euclidean black hole, J. Math. Phys. 42 (2001) 2961 [hep-th/0005183].
R) WZW model. Part 3. Correlation functions. J M Maldacena, H Ooguri, 10.1103/PhysRevD.65.106006hep-th/0111180Strings in AdS(3) and the SL. 2106006J. M. Maldacena and H. Ooguri, Strings in AdS(3) and the SL(2,R) WZW model. Part 3. Correlation functions, Phys. Rev. D65 (2002) 106006 [hep-th/0111180].
String Theory of Supertubes. E J Martinec, S Massai, 10.1007/JHEP07(2018)1631705.10844JHEP. 07163E. J. Martinec and S. Massai, String Theory of Supertubes, JHEP 07 (2018) 163 [1705.10844].
String dynamics in NS5-F1-P geometries. E J Martinec, S Massai, D Turton, 10.1007/JHEP09(2018)0311803.08505JHEP. 0931E. J. Martinec, S. Massai and D. Turton, String dynamics in NS5-F1-P geometries, JHEP 09 (2018) 031 [1803.08505].
Little Strings, Long Strings, and Fuzzballs. E J Martinec, S Massai, D Turton, 10.1007/JHEP11(2019)019JHEP. 11191906.11473E. J. Martinec, S. Massai and D. Turton, Little Strings, Long Strings, and Fuzzballs, JHEP 11 (2019) 019 [1906.11473].
Stringy Structure at the BPS Bound. E J Martinec, S Massai, D Turton, 10.1007/JHEP12(2020)135JHEP. 12135E. J. Martinec, S. Massai and D. Turton, Stringy Structure at the BPS Bound, JHEP 12 (2020) 135 [2005.12344].
Black hole microstates from the worldsheet. D Bufalini, S Iguri, N Kovensky, D Turton, 10.1007/JHEP08(2021)0112105.02255JHEP. 0811D. Bufalini, S. Iguri, N. Kovensky and D. Turton, Black hole microstates from the worldsheet, JHEP 08 (2021) 011 [2105.02255].
| []
|
[
"PalmTree: Learning an Assembly Language Model for Instruction Embedding",
"PalmTree: Learning an Assembly Language Model for Instruction Embedding"
]
| [
"Xuezixiang Li \nUniversity of California Riverside Riverside\n92521CAUSA\n",
"Yu Qu \nUniversity of California Riverside Riverside\n92521CAUSA\n",
"Heng Yin \nUniversity of California Riverside Riverside\n92521CAUSA\n"
]
| [
"University of California Riverside Riverside\n92521CAUSA",
"University of California Riverside Riverside\n92521CAUSA",
"University of California Riverside Riverside\n92521CAUSA"
]
| []
| Deep learning has demonstrated its strengths in numerous binary analysis tasks, including function boundary detection, binary code search, function prototype inference, value set analysis, etc. When applying deep learning to binary analysis tasks, we need to decide what input should be fed into the neural network model. More specifically, we need to answer how to represent an instruction in a fixed-length vector. The idea of automatically learning instruction representations is intriguing, but the existing schemes fail to capture the unique characteristics of disassembly. These schemes ignore the complex intra-instruction structures and mainly rely on control flow in which the contextual information is noisy and can be influenced by compiler optimizations.In this paper, we propose to pre-train an assembly language model called PalmTree for generating general-purpose instruction embeddings by conducting self-supervised training on large-scale unlabeled binary corpora. PalmTree utilizes three pre-training tasks to capture various characteristics of assembly language. These training tasks overcome the problems in existing schemes, thus can help to generate high-quality representations. We conduct both intrinsic and extrinsic evaluations, and compare PalmTree with other instruction embedding schemes. PalmTree has the best performance for intrinsic metrics, and outperforms the other instruction embedding schemes for all downstream tasks.CCS CONCEPTS• Security and privacy → Software reverse engineering; • Theory of computation → Program analysis; • Computing methodologies → Knowledge representation and reasoning. | 10.1145/3460120.3484587 | [
"https://arxiv.org/pdf/2103.03809v3.pdf"
]
| 232,134,887 | 2103.03809 | 7d0c1cb43e8b398ad5b064e74f00802d4d585be6 |
PalmTree: Learning an Assembly Language Model for Instruction Embedding
Xuezixiang Li
University of California Riverside Riverside
92521CAUSA
Yu Qu
University of California Riverside Riverside
92521CAUSA
Heng Yin
University of California Riverside Riverside
92521CAUSA
PalmTree: Learning an Assembly Language Model for Instruction Embedding
Deep Learning, Binary Analysis, Language Model, Representation Learning
Deep learning has demonstrated its strengths in numerous binary analysis tasks, including function boundary detection, binary code search, function prototype inference, value set analysis, etc. When applying deep learning to binary analysis tasks, we need to decide what input should be fed into the neural network model. More specifically, we need to answer how to represent an instruction in a fixed-length vector. The idea of automatically learning instruction representations is intriguing, but the existing schemes fail to capture the unique characteristics of disassembly. These schemes ignore the complex intra-instruction structures and mainly rely on control flow in which the contextual information is noisy and can be influenced by compiler optimizations.In this paper, we propose to pre-train an assembly language model called PalmTree for generating general-purpose instruction embeddings by conducting self-supervised training on large-scale unlabeled binary corpora. PalmTree utilizes three pre-training tasks to capture various characteristics of assembly language. These training tasks overcome the problems in existing schemes, thus can help to generate high-quality representations. We conduct both intrinsic and extrinsic evaluations, and compare PalmTree with other instruction embedding schemes. PalmTree has the best performance for intrinsic metrics, and outperforms the other instruction embedding schemes for all downstream tasks.CCS CONCEPTS• Security and privacy → Software reverse engineering; • Theory of computation → Program analysis; • Computing methodologies → Knowledge representation and reasoning.
INTRODUCTION
Recently, we have witnessed a surge of research efforts that leverage deep learning to tackle various binary analysis tasks, including function boundary identification [37], binary code similarity detection [23,31,40,42,43], function prototype inference [5], value set analysis [14], malware classification [35], etc. Deep learning has shown noticeably better performance than traditional program analysis and machine learning methods.
When applying deep learning to these binary analysis tasks, the first design choice that should be made is: what kind of input should be fed into the neural network model? Generally speaking, there are three choices: we can either directly feed raw bytes into a neural network (e.g., the work by Shin et al. [37], Diff [23], DeepVSA [14], and MalConv [35]), or feed manually-designed features (e.g., Gemini [40] and Instruction2Vec [41]), or automatically learn to generate a vector representation for each instruction using some representation learning models such as word2vec (e.g., In-nerEye [43] and EKLAVYA [5]), and then feed the representations (embeddings) into the downstream models.
Compared to the first two choices, automatically learning instruction-level representation is more attractive for two reasons: (1) it avoids manually designing efforts, which require expert knowledge and may be tedious and error-prone; and (2) it can learn higherlevel features rather than pure syntactic features and thus provide better support for downstream tasks. To learn instruction-level representations, researchers adopt algorithms (e.g., word2vec [28] and PV-DM [20]) from Natural Language Processing (NLP) domain, by treating binary assembly code as natural language documents.
Although recent progress in instruction representation learning (instruction embedding) is encouraging, there are still some unsolved problems which may greatly influence the quality of instruction embeddings and limit the quality of downstream models:
First, existing approaches ignore the complex internal formats of instructions. For instance, in x86 assembly code, the number of operands can vary from zero to three; an operand could be a CPU register, an expression for a memory location, an immediate constant, or a string symbol; some instructions even have implicit operands, etc. Existing approaches either ignore this structural information by treating an entire instruction as a word (e.g., Inner-Eye [43] and EKLAVYA [5]) or only consider a simple instruction format (e.g., Asm2Vec [10]). Second, existing approaches use Control Flow Graph (CFG) to capture contextual information between instructions (e.g., Asm2Vec [10], InnerEye [43], and the work by Yu et al. [42]). However, the contextual information on control flow can be noisy due to compiler optimizations, and cannot reflect the actual dependency relations between instructions.
Moreover, in recent years, pre-trained deep learning models [33] have been attracting increasing attention in different fields such as Computer Vision (CV) and Natural Language Processing (NLP). The intuition behind pre-training is that model sizes in deep learning are growing rapidly, so a much larger dataset is needed to fully train the model parameters and to prevent overfitting. Thus, pre-trained models (PTMs) using large-scale unlabeled corpora and self-supervised training tasks have become very popular in some fields such as NLP. Representative deep pre-trained language models in NLP include BERT [9], GPT [34], RoBERTa [24], ALBERT [19], etc. Considering the naturalness of programming languages [1,16] including assembly language, it has great potential to pre-train an assembly language model for different binary analysis tasks.
To solve the existing problems in instruction representation learning and capture the underlying characteristics of instructions, in this paper, we propose a pre-trained assembly language model called PalmTree 1 for general-purpose instruction representation learning. PalmTree is based on the BERT [9] model but pre-trained with newly designed training tasks exploiting the inherent characteristics of assembly language.
We are not the first to utilize the BERT model in binary analysis. For instance, Yu et al. [42] proposed to take CFG as input and use BERT to pre-train the token embeddings and block embeddings for the purpose of binary code similarity detection. Trex [31] uses one of BERT's pre-training tasks -Masked Language Model (MLM) to learn program execution semantics from functions' micro-traces (a form of under-constrained dynamic traces) for binary code similarity detection.
In contrast to the existing approaches, our goal is to develop a pretrained assembly language model for general-purpose instruction representation learning. Instead of only using MLM on control flow, PalmTree uses three training tasks to exploit special characteristics of assembly language such as instruction reordering introduced by compiler optimizations and long-range data dependencies. The three training tasks work at different granularity levels to effectively train PalmTree to capture internal formats, contextual control flow dependency, and data flow dependency of instructions.
Experimental results show that PalmTree can provide high quality general-purpose instruction embeddings. Downstream applications can directly use the generated embeddings in their models. A static embedding lookup table can be generated in advance for common instructions. Such a pre-trained, general-purpose language model scheme is especially useful when computing resources are limited such as on a lower-end or embedded devices.
We design a set of intrinsic and extrinsic evaluations to systematically evaluate PalmTree and other instruction embedding models. In intrinsic evaluations, we conduct outlier detection and basic block similarity search. In extrinsic evaluations, we use several downstream binary analysis tasks, which are binary code similarity detection, function type signatures analysis, and value set analysis, to evaluate PalmTree and the baseline models. Experimental results show that PalmTree has the best performance in intrinsic evaluations compared with the existing models. In extrinsic evaluations, PalmTree outperforms the other instruction embedding models and also significantly improves the quality of the downstream applications. We conclude that PalmTree can effectively generate high-quality instruction embedding which is helpful for different downstream binary analysis tasks.
In summary, we have made the following contributions:
• We lay out several challenges in the existing schemes in instruction representation learning. • We pre-train an assembly language model called PalmTree to generate general-purpose instruction embeddings and overcome the existing challenges.
• We propose to use three pre-training tasks for PalmTree embodying the characteristics of assembly language such as reordering and long range data dependency. • We conduct extensive empirical evaluations and demonstrate that PalmTree outperforms the other instruction embedding models and also significantly improves the accuracy of downstream binary analysis tasks. • We plan to release the source code of PalmTree, the pretrained model, and the evaluation framework to facilitate the follow-up research in this area.
To facilitate further research, we have made the source code and pre-trained PalmTree model publicly available at https://github.com/palmtreemodel/PalmTree.
BACKGROUND
In this section, we firstly summarize existing approaches and background knowledge of instruction embedding. Then we discuss some unsolved problems of the existing approaches. Based on the discussions, we summarize representative techniques in this field.
Existing Approaches
Based on the embedding generation process, existing approaches can be classified into three categories: raw-byte encoding, manuallydesigned encoding, and learning-based encoding.
2.1.1 Raw-byte Encoding. The most basic approach is to apply a simple encoding on the raw bytes of each instruction, and then feed the encoded instructions into a deep neural network. One such encoding is "one-hot encoding", which converts each byte into a 256-dimensional vector. One of these dimensions is 1 and the others are all 0. MalConv [35] and DeepVSA [14] take this approach to classify malware and perform coarse-grained value set analysis, respectively.
One instruction may be several bytes long. To strengthen the sense of an instruction, DeepVSA further concatenates the one-hot vectors of all the bytes belonging to an instruction, and forms a vector for that instruction.
Shin et al. [37] take a slightly different approach to detect function boundaries. Instead of a one-hot vector, they encode each byte as an 8-dimensional vector, in which each dimension represents the corresponding bit in the binary representation of that byte. For instance, the byte 0x90 will be encoded as
[ 1 0 0 1 0 0 0 0 ]
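As a concrete illustration of these two raw-byte schemes, the sketch below (Python, written for this discussion rather than taken from any of the cited systems) encodes a byte both as a 256-dimensional one-hot vector and as the 8-dimensional bit vector used by Shin et al.:

# Sketch of the two raw-byte encodings described above (illustrative only).

def one_hot_byte(b: int) -> list:
    # 256-dimensional one-hot vector: position b is 1, all others 0.
    v = [0] * 256
    v[b] = 1
    return v

def bit_vector_byte(b: int) -> list:
    # 8-dimensional vector holding the binary digits of the byte,
    # most significant bit first, e.g. 0x90 -> [1, 0, 0, 1, 0, 0, 0, 0].
    return [(b >> i) & 1 for i in range(7, -1, -1)]

print(bit_vector_byte(0x90))   # [1, 0, 0, 1, 0, 0, 0, 0]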
In general, this kind of approach is simple and efficient, because it does not require disassembly, which can be computationally expensive. Its downside, however, is that it does not provide any semantic level information about each instruction. For instance, we do not even know what kind of instruction it is, and what operands it operates on. While the deep neural networks can probably learn some of this information by itself, it seems very difficult for the deep neural networks to completely understand all the instructions.
Manual Encoding of Disassembled Instructions.
Knowing that disassembly carries more semantic information about an instruction, this approach first disassembles each instruction and encodes some features from the disassembly.
Li et al. [21] proposed a very simple method, which only extracts opcode to represent an instruction, and encodes each opcode as a one-hot vector. Unfortunately, this method completely ignores the information from operands. Instruction2Vec [41] makes use of both opcode and operand information. Registers, addresses, and offsets are encoded in different ways, and then concatenated to form a vector representation. Each instruction is encoded as a nine-dimensional feature vector. An instruction is divided into tokens, and tokens are encoded as unique index numbers. While an opcode takes one token, a memory operand takes up to four tokens, including base register, index register, scale, and displacement.
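To make the idea of such manually designed encodings concrete, the following sketch builds a small token-index feature vector in the spirit described above (one slot for the opcode, the rest for operand tokens, padded to a fixed width). This is a simplified illustration with a hypothetical vocabulary, not the exact layout used by Instruction2Vec:

# Simplified sketch of a manually designed, token-index based encoding.
# The vocabulary and layout are hypothetical; Instruction2Vec's real scheme differs in detail.

TOKEN_INDEX = {"<pad>": 0, "mov": 1, "add": 2, "rax": 3, "rbx": 4, "rsp": 5, "0x58": 6}

def encode(opcode: str, operand_tokens: list, width: int = 9) -> list:
    # One slot for the opcode, the rest for operand tokens, padded to a fixed width.
    tokens = [opcode] + operand_tokens
    tokens = (tokens + ["<pad>"] * width)[:width]
    return [TOKEN_INDEX.get(t, 0) for t in tokens]

# e.g. "mov rax, [rsp+0x58]" with operand tokens already split out:
print(encode("mov", ["rax", "rsp", "0x58"]))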
While this approach is able to reveal more information about opcode and operands for each instruction than raw-byte encoding, it does not carry higher-level semantic information about each instruction. For instance, it treats each opcode instruction equally unique, without knowing that add and sub are both arithmetic operations thus they are more similar to each other than call, which is a control transfer operation. Although it is possible to manually encode some of the higher-level semantic information about each instruction, it requires tremendous expert knowledge, and it is hard to get it right.
Learning-based Encoding.
Inspired by representation learning in other domains such as NLP (e.g., word2vec [27,28]), we would like to automatically learn a representation for each instruction that carries higher-level semantic information. Then this instructionlevel representation can be used for any downstream binary analysis tasks, achieving high analysis accuracy and generality.
Several attempts have been made to leverage word2vec [28] to automatically learn instruction-level representations (or embeddings), for code similarity detection [26,43] and function type inference [5], respectively. The basic idea of this approach is to treat each instruction as a word, and each function as a document. By applying a word2vec algorithm (Skip-gram or CBOW [27,28]) on the disassembly code in this way, we can learn a continuous numeric vector for each instruction.
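A minimal sketch of this instruction-as-a-word scheme is shown below, using the gensim library (assumed available; parameter names follow gensim 4.x). Each function is treated as a "document" whose "words" are whole instructions:

# Sketch: learn instruction embeddings by treating each instruction as one word.
from gensim.models import Word2Vec

# Each inner list is one function's disassembly, in control-flow order (toy data).
corpus = [
    ["push rbp", "mov rbp, rsp", "mov rax, rdi", "pop rbp", "ret"],
    ["xor eax, eax", "cmp rdi, rsi", "jne 0x401000", "ret"],
]

model = Word2Vec(sentences=corpus, vector_size=128, window=2,
                 min_count=1, sg=1, epochs=50)      # sg=1 selects Skip-gram
vec = model.wv["mov rbp, rsp"]                      # 128-dimensional embedding
print(vec.shape)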
In order to detect similar functions in binary code, Asm2Vec [10] makes use of the PV-DM model [20] to generate instruction embeddings and an embedding for the function containing these instructions simultaneously. Unlike the above approach that treats each instruction as a word, Asm2Vec treats each instruction as one opcode and up to two operands and learns embeddings for opcodes and operands separately.
Challenges in Learning-based Encoding
While the learning-based encoding approach seems intriguing, there exist several challenges. In x86, an instruction can have between zero and three operands. An operand can be a CPU register, an expression for a memory location, an immediate constant, or a string symbol. A memory operand is calculated by an expression of "base+index×scale+displacement". While base and index are CPU registers, scale is a small constant number and displacement can be either a constant number or a string symbol. All these fields are optional. As a result, memory expressions vary a lot. Some instructions have implicit operands. Arithmetic instructions change EFLAGS implicitly, and conditional jump instructions take EFLAGS as an implicit input.
A good instruction-level representation must understand these internal details about each instruction. Unfortunately, the existing learning-based encoding schemes do not cope with these complexities very well. Word2vec, adopted by some previous efforts [5,26,43], treats an entire instruction as one single word, totally ignoring these internal details about each instruction.
Asm2Vec [10] looks into instructions to a very limited degree. It considers an instruction having one opcode and up to two operands. In other words, each instruction has up to three tokens, one for opcodes, and up to two for operands. A memory operand with an expression will be treated as one token, and thus it does not understand how a memory address is calculated. It does not take into account other complexities, such as prefix, a third operand, implicit operands, EFLAGS, etc.

1  ; prepare the third argument for function call
2  mov rdx, rbx
3  ; prepare the second argument for function call
4  mov rsi, rbp
5  ; prepare the first argument for function call
6  mov rdi, rax
7  ; call memcpy() function
8  call memcpy
9  ; test rbx register (this instruction is reordered)
10 test rbx, rbx
11 ; store the return value of memcpy() into rcx register
12 mov rcx, rax
13 ; conditional jump based on EFLAGS from test instruction
14 je 0x40adf0

Listing 2: Instructions can be reordered

2.2.2 Noisy Instruction Context. The context is defined as a small number of instructions before and after the target instruction on the control-flow graph. These instructions within the context often have certain relations with the target instruction, and thus can help infer the target instruction's semantics.
While this assumption might hold in general, compiler optimizations tend to break this assumption to maximize instruction level parallelism. In particular, compiler optimizations (e.g., "-fschedule-insns", "-fmodulo-sched", "-fdelayed-branch" in GCC) seek to avoid stalls in the instruction execution pipeline by moving the load from a CPU register or a memory location further away from its last store, and inserting irrelevant instructions in between. Listing 2 gives an example. The test instruction at Line 10 has no relation with its surrounding call and mov instructions. The test instruction, which will store its results into EFLAGS, is moved before the mov instruction by the compiler, such that it is further away from the je instruction at Line 14, which will use (load) the EFLAGS computed by the test instruction at Line 10. From this example, we can see that contextual relations on the control flow can be noisy due to compiler optimizations.

Note that instructions also depend on each other via data flow (e.g., lines 8 and 12 in Listing 2). Existing approaches only work on control flow and ignore this important information. On the other hand, it is worth noting that most existing PTMs cannot deal with sequences longer than 512 tokens [33] (PTMs that can process longer sequences, such as Transformer XL [8], will require more GPU memory); as a result, even if we directly train these PTMs on instruction sequences with MLM, it is hard for them to capture long-range data dependencies, which may happen among different basic blocks. Thus a new pre-training task capturing data flow dependency is desirable.

Summary of Existing Approaches

Table 1 summarizes and compares the existing approaches, with respect to which encoding scheme or algorithm is used, whether disassembly is required, whether instruction internal structure is considered, and what context is considered for learning.

Table 1: Summary of existing approaches

Name | Encoding scheme or algorithm | Internal structure | Context | Disassembly required
DeepVSA [14] | 1-hot encoding on raw-bytes | no | no | no
Instruction2Vec [41] | manually designed | yes | no | yes
InnerEye [43] | word2vec | no | control flow | yes
Asm2Vec [10] | PV-DM | partial | control flow | yes
PalmTree (this work) | BERT | yes | control flow & data flow | yes

In summary, raw-byte encoding and manually-designed encoding approaches are too rigid and unable to convey higher-level semantic information about instructions, whereas the existing learning-based encoding approaches cannot address challenges in instruction internal structures and noisy control flow.
DESIGN OF PALMTREE
Overview
To meet the challenges summarized in Section 2, we propose PalmTree, a novel instruction embedding scheme that automatically learns a language model for assembly code. PalmTree is based on BERT [9], and incorporates the following important design considerations.
First of all, to capture the complex internal formats of instructions, we use a fine-grained strategy to decompose instructions: we consider each instruction as a sentence and decompose it into basic tokens.
Then, in order to train the deep neural network to understand the internal structures of instructions, we make use of a recently proposed training task in NLP to train the model: Masked Language Model (MLM) [9]. This task trains a language model to predict the masked (missing) tokens within instructions.
Moreover, we would like to train this language model to capture the relationships between instructions. To do so, we design a training task, inspired by word2vec [28] and Asm2Vec [10], which attempts to infer the word/instruction semantics by predicting two instructions' co-occurrence within a sliding window in control flow. We call this training task Context Window Prediction (CWP), which is based on Next Sentence Prediction (NSP) [9] in BERT. Essentially, if two instructions I_1 and I_2 fall within a sliding window in control flow and I_1 appears before I_2, we say I_1 and I_2 have a contextual relation. Note that this relation is more relaxed than NSP, where two sentences have to be next to each other. We make this design decision based on our observation described in Section 2.2.2: instructions may be reordered by compiler optimizations, so adjacent instructions might not be semantically related.
Furthermore, unlike natural language, instruction semantics are clearly documented. For instance, the source and destination operands for each instruction are clearly stated. Therefore, the data dependency (or def-use relation) between instructions is clearly specified and will not be tampered by compiler optimizations. Based on these facts, we design another training task called Def-Use Prediction (DUP) to further improve our assembly language model. Essentially, we train this language model to predict if two instructions have a def-use relation. Figure 1 presents the design of PalmTree. It consists of three components: Instruction Pair Sampling, Tokenization, and Language Model Training. The main component (Assembly Language Model) of the system is based on the BERT model [9]. After the training process, we use mean pooling of the hidden states of the second last layer of the BERT model as instruction embedding. The Instruction Pair Sampling component is responsible for sampling instruction pairs from binaries based on control flow and def-use relations.
Then, in the second component, the instruction pair is split into tokens. Tokens can be opcodes, registers, immediate numbers, strings, symbols, etc. Special tokens such as strings and memory offsets are encoded and compressed in this step. After that, as introduced earlier, we train the BERT model using the following three tasks: MLM (Masked Language Model), CWP (Context Window Prediction), and Def-Use Prediction (DUP). After the model has been trained, we use the trained language model for instruction embedding generation. In general, the tokenization strategy and MLM will help us address the first challenge in Section 2.2, and CWP and DUP can help us address the second challenge.
In Section 3.2, we introduce how we construct two kinds of instruction pairs. In Section 3.3, we introduce our tokenization process. Then, in Section 3.4, we introduce how we design different training tasks to pre-train the assembly language model.
Input Generation
We generate two kinds of inputs for PalmTree. First, we disassemble binaries and extract def-use relations. We use Binary Ninja (https://binary.ninja/) in our implementation, but other disassemblers should work too. With the help of Binary Ninja, we consider dependencies among registers, memory locations, and function call arguments, as well as implicit dependencies introduced by EFLAGS. For each instruction, we retrieve data dependencies of each operand, and identify def-use relations between the instruction and its dependent instructions. Second, we sample instruction pairs from control flow sequences, and also sample instruction pairs based on def-use relations. Instruction pairs from control flow are needed by CWP, while instruction pairs from def-use relations are needed by DUP. MLM can take both kinds of instruction pairs.
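The sketch below illustrates how such instruction pairs could be sampled once the disassembler has produced basic blocks and def-use edges. It is a simplified stand-in for our Binary Ninja based pipeline; basic_blocks and def_use_edges are assumed inputs, and the window size w anticipates the context window used by CWP:

import random

def sample_pairs(basic_blocks, def_use_edges, w=2):
    """basic_blocks: list of instruction lists in control-flow order.
       def_use_edges: list of (defining_ins, using_ins) tuples."""
    cwp_pairs, dup_pairs = [], []
    all_ins = [ins for bb in basic_blocks for ins in bb]
    # Control-flow pairs: instructions within w steps in the same basic block.
    for bb in basic_blocks:
        for i, ins in enumerate(bb):
            for j in range(i + 1, min(i + 1 + w, len(bb))):
                cwp_pairs.append((ins, bb[j], 1))                   # contextual pair
                cwp_pairs.append((ins, random.choice(all_ins), 0))  # random negative
    # Data-flow pairs: def before use is a positive sample, swapped order a negative one.
    for d, u in def_use_edges:
        dup_pairs.append((d, u, 1))
        dup_pairs.append((u, d, 0))
    return cwp_pairs, dup_pairs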
Tokenization
As introduced earlier, unlike Asm2Vec [10] which splits an instruction into opcode and up to two operands, we apply a more fine-grained strategy. For instance, given an instruction "mov rax, qword [rsp+0x58]", we divide it into "mov", "rax", "qword", "[", "rsp", "+", "0x58", and "]". In other words, we consider each instruction as a sentence and decompose the operands into more basic elements.
We use the following normalization strategy to alleviate the OOV (Out-Of-Vocabulary) problem caused by strings and constant numbers. For strings, we use a special token [str] to replace them. For constant numbers, if the constants are large (at least five digits in hexadecimal), the exact value is not that useful, so we normalize it with a special token [addr]. If the constants are relatively small (less than four digits in hexadecimal), these constants may carry crucial information about which local variables, function arguments, and data structure fields are accessed. Therefore we keep them as tokens, and encode them as one-hot vectors.
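A rough sketch of this tokenization and normalization step is given below. The regular expression and the exact digit threshold are our own simplifications for illustration; the real implementation works on the disassembler's operand structure rather than on raw strings:

import re

def tokenize(ins: str) -> list:
    # "mov rax, qword [rsp+0x58]" -> ["mov", "rax", "qword", "[", "rsp", "+", "0x58", "]"]
    raw = re.findall(r"0x[0-9a-fA-F]+|\w+|[\[\]\+\-\*]", ins)
    tokens = []
    for t in raw:
        if t.startswith("0x") and len(t) - 2 >= 5:
            tokens.append("[addr]")   # large constants lose their exact value
        elif t.startswith('"'):
            tokens.append("[str]")    # string operands (if present) collapse to one token
        else:
            tokens.append(t)          # small constants and registers are kept as-is
    return tokens

print(tokenize("mov rax, qword [rsp+0x58]"))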
Assembly Language Model
In this section we introduce how we apply the BERT model to our assembly language model for instruction embedding, and how we pre-train the model and adopt the model to downstream tasks.
3.4.1 PalmTree model. Our model is based on BERT [9], the state-of-the-art PTM in many NLP tasks. The proposed model is a multi-layer bidirectional transformer encoder. Transformer, firstly introduced in 2017 [39], is a neural network architecture solely based on the multi-head self-attention mechanism. In PalmTree, transformer units are connected bidirectionally and stacked into multiple layers. We treat each instruction as a sentence and each token as a word. Instructions from control flow and data flow sequences are concatenated and then fed into the BERT model. As shown in Figure 2, the first token of this concatenated input is a special token, [CLS], which is used to identify the start of a sequence. Secondly, we use another token [SEP] to separate concatenated instructions. Furthermore, we add position embedding and segment embedding to token embedding, and use this mixed vector as the input of the bi-directional transformer network, as shown in Figure 2. Position embedding represents different positions in the input sequence, while segment embedding distinguishes the first and second instructions. Position embedding and segment embedding will be trained along with token embeddings. These two embeddings can help dynamically adjust token embeddings according to their locations.
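A minimal sketch of how one instruction pair is turned into model input is shown below (token-to-id mapping and padding omitted; the special tokens and segment ids follow the description above):

def build_input(tokens_a, tokens_b):
    # Concatenate two tokenized instructions with [CLS]/[SEP], as in Figure 2.
    tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    # Segment ids distinguish the first instruction (0) from the second (1).
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    # Position ids simply enumerate the token positions.
    position_ids = list(range(len(tokens)))
    return tokens, segment_ids, position_ids

tokens, seg, pos = build_input(["mov", "ebx", "0x1"], ["mov", "rdx", "rbx"])
print(tokens)
print(seg)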
3.4.2 Training task 1: Masked Language Model. The first task we use to pre-train PalmTree is Masked Language Model (MLM), which was firstly introduced in BERT [9]. Here is an example shown in Figure 3. Assuming that x_i denotes a token and instruction I = x_1, x_2, x_3, ..., x_n consists of a sequence of tokens. For a given input instruction I, we first randomly select 15% of the tokens to replace. For the chosen tokens, 80% are masked by [MASK] (masked-out tokens), 10% are replaced with another token in the vocabulary (corrupted tokens), and 10% of the chosen tokens are unchanged. Then, the transformer encoder learns to predict the masked-out and corrupted tokens, and outputs a probability for predicting a particular token x_i = [MASK] with a softmax layer located on the top of the transformer network:

p(\hat{x}_i \mid I) = \frac{\exp(w_i \Theta(I)_i)}{\sum_{k=1}^{K} \exp(w_k \Theta(I)_i)}    (1)

where x̂_i denotes the prediction of x_i, Θ(I)_i is the i-th hidden vector of the transformer network Θ in the last layer when having I as input, and w_i is the weight of label i. K is the number of possible labels of token x_i. The model is trained with the Cross Entropy loss function:

\mathcal{L}_{MLM} = -\sum_{x_i \in m(I)} \log p(\hat{x}_i \mid I)    (2)

where m(I) denotes the set of tokens that are masked.

Take the instruction pair "mov ebx, 0x1" and "mov rdx, rbx" in Figure 3 as an example. Then we randomly select some tokens for replacement. Here we select ebx and rbx. The token ebx is replaced by the [MASK] token (the yellow box). The token rbx is replaced by the token jz (another token in the vocabulary, the red box). Next, we feed this modified instruction pair into the PalmTree model. The model will make a prediction for each token. Here we care about the predictions of the yellow and red boxes, which are the green boxes in Figure 3. Only the predictions of those two special tokens are considered in calculating the loss function.
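The 15% selection with the 80/10/10 replacement rule can be sketched as follows (vocabulary handling and random seeding omitted; this mirrors BERT's standard masking recipe rather than our exact implementation):

import random

def mask_tokens(tokens, vocab, mask_rate=0.15):
    labels = [None] * len(tokens)                 # None = not selected, no loss computed
    masked = list(tokens)
    for i, tok in enumerate(tokens):
        if tok in ("[CLS]", "[SEP]") or random.random() >= mask_rate:
            continue
        labels[i] = tok                           # the model must recover this token
        r = random.random()
        if r < 0.8:
            masked[i] = "[MASK]"                  # 80%: mask out
        elif r < 0.9:
            masked[i] = random.choice(vocab)      # 10%: corrupt with a random token
        # remaining 10%: keep the original token unchanged
    return masked, labels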
3.4.3 Training task 2: Context Window Prediction.
We use this training task to capture control flow information. Many downstream tasks [5,14,40,43] rely on the understanding of contextual relations of code sequences in functions or basic blocks. Instead of predicting the whole following sentence (instruction) [18,38], we perform a binary classification to predict whether the two given instructions co-occur within a context window or not, which makes it a much easier task compared to the whole sentence prediction. However, unlike natural language, control flows do not have strict dependencies and ordering. As a result, strict Next Sentence Prediction (NSP), firstly proposed by BERT [9], may not be suitable for capturing contextual information of control flow. To tackle this issue, we extend the context window, i.e., we treat each instruction w steps before and w steps after the target instruction in the same basic block as contextually related, where w is the context window size. In Section C.3, we evaluate the performance of different context window sizes, and pick w = 2 accordingly. Given an instruction I and a candidate instruction I_cand as input, the candidate instruction can be located in the contextual window of I, or it can be a negative sample randomly selected from the dataset. ŷ denotes the prediction of this model. The probability that the candidate instruction I_cand is a context instruction of I is defined as
p(\hat{y} \mid I, I_{cand}) = \frac{1}{1 + \exp(\Theta(I \parallel I_{cand})_{cls})}    (3)
where I_cand ∈ C, and C is the candidate set including negative and positive samples. Θ(·)_cls is the first output of the transformer network in the last layer. And "∥" means a concatenation of two instructions. Suppose all instructions belong to the training set D; then the loss function is:

\mathcal{L}_{CWP} = -\sum_{I \in \mathcal{D}} \log p(\hat{y} \mid I, I_{cand})    (4)

Here is an example in Figure 4. We use the input mentioned above. We feed the unchanged instruction pairs into the PalmTree model and pick the first output vector. We use this vector to predict whether the input instructions are located in the same context window or not. In this case, the two instructions are next to each other. Therefore the correct prediction would be "true".
3.4.4 Training task 3: Def-Use Prediction.
To further improve the quality of our instruction embedding, we need not only control flow information but also data dependency information across instructions.
Sentence Ordering Prediction (SOP), first introduced by Lan et al. [19], is a very suitable choice. This task can help the PalmTree model to understand the data relation through DFGs, and we call it Def-Use Prediction (DUP).
Given an instruction pair I_1 and I_2 as input, we feed I_1 ∥ I_2 as a positive sample and I_2 ∥ I_1 as a negative sample. ŷ denotes the prediction of this model. The probability that the instruction pair is swapped or not is defined as
p(\hat{y} \mid I_1, I_2) = \frac{1}{1 + \exp(\Theta(I_1 \parallel I_2)_{cls})}    (5)
where Θ(·)_cls is the first output of the transformer network in the last layer. The Cross Entropy loss function is:

\mathcal{L}_{DUP} = -\sum_{I \in \mathcal{D}} \log p(\hat{y} \mid I_1, I_2)    (6)

Figure 5: Def-Use Prediction (DUP)
We show an example in Figure 5. We still use the instruction pair discussed in Figure 4, but here we swap the two instructions. So the sequence is "[CLS] mov rdx rbx [SEP] mov ebx 0x1 [SEP]". We feed it into PalmTree and use the first output vector to predict whether this instruction pair remains unswapped or not. In this case, it should be predicted as "false" (which means this pair is swapped).
The loss function of PalmTree is the combination of three loss functions:
\mathcal{L} = \mathcal{L}_{MLM} + \mathcal{L}_{CWP} + \mathcal{L}_{DUP}    (7)
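In a PyTorch-style training loop, Equation (7) simply amounts to summing the three per-task losses before a single backward pass through the shared encoder. The sketch below assumes the three scalar loss tensors have already been computed by (unshown) task-specific heads; it is an illustration of the design, not our exact training code:

def train_step(model, optimizer, losses):
    # losses: the three scalar tensors L_MLM, L_CWP, L_DUP produced by the
    # task-specific heads (not shown) on top of the shared transformer encoder.
    optimizer.zero_grad()
    total = sum(losses)          # Equation (7): L = L_MLM + L_CWP + L_DUP
    total.backward()
    optimizer.step()
    return total.item()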
3.4.5 Instruction Representation. The transformer encoder produces a sequence of hidden states as output. There are multiple ways to generate instruction embeddings from the output, for instance, applying a max/mean pooling. We use mean pooling of the hidden states of the second last layer to represent the whole instruction. This design choice has the following considerations. First, the transformer encoder encodes all the input information into the hidden states. A pooling layer is a good way to utilize the information encoded by the transformer. Second, results in BERT [9] also suggest that hidden states of previous layers before the last layer offer more generalizability than the last layer for some downstream tasks. We evaluated different layer configurations and reported the results in Section C.2.
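With a HuggingFace-style BERT encoder (assumed here purely for illustration; our implementation is in plain PyTorch), this pooling step can be sketched as follows:

import torch

def instruction_embedding(model, input_ids, attention_mask):
    # Request all hidden states and take the second-to-last layer.
    out = model(input_ids=input_ids, attention_mask=attention_mask,
                output_hidden_states=True)
    hidden = out.hidden_states[-2]                       # (batch, seq_len, hidden_dim)
    # Mean-pool over the non-padding token positions to get one vector per instruction.
    mask = attention_mask.unsqueeze(-1).float()          # (batch, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (batch, hidden_dim)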
3.4.6 Deployment of the model. There are two ways of deploying PalmTree for downstream applications: instruction embedding generation, where the pre-trained parameters are frozen, and finetuning, where the pre-trained parameters can be further adjusted.
In the first way (instruction embedding generation), PalmTree is used as an off-the-shelf assembly language model to generate high-quality instruction embeddings. Downstream applications can directly use the generated embeddings in their models. Our evaluation results show that PalmTree without fine-tuning can still outperform existing instruction embedding models such as word2vec and Asm2Vec. This scheme is also very useful when computing resources are limited, such as on lower-end or embedded devices. In this scenario, we can further improve the efficiency by generating a static embedding lookup table in advance. This lookup table contains the embeddings of the most common instructions. A trade-off should be made between the model accuracy and the available resources when choosing the lookup table size. A larger lookup table will consume more space but can alleviate the OOV problem (which happens when the encountered instruction is not in the table) and improve the accuracy.
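The static lookup table mentioned above can be as simple as a dictionary from normalized instruction text to a precomputed vector, with a fallback for instructions that are not in the table. A sketch (the encoder callable and the zero-vector fallback are our own assumptions; one could also fall back to the full model):

import numpy as np

def build_lookup_table(encoder, common_instructions):
    # Precompute embeddings once, offline, for the most frequent instructions.
    # `encoder` is any callable mapping an instruction string to a vector.
    return {ins: encoder(ins) for ins in common_instructions}

def lookup(table, ins, dim=128):
    # OOV fallback: return a zero vector of the embedding dimension.
    return table.get(ins, np.zeros(dim, dtype=np.float32))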
In the second way (fine-tuning), PalmTree is fine-tuned and trained together with the downstream model. This scheme will usually provide extra benefits when enough computing resources and training budget are available. There are several fine-tuning strategies [33], e.g., two-stage fine-tuning, multi-task fine-tuning.
EVALUATION
Previous binary analysis studies usually evaluate their approaches by designing specific experiments in an end-to-end manner, since their instruction embeddings are only for individual tasks. In this paper, we focus on evaluating different instruction embedding schemes. To this end, we have designed and implemented an extensive evaluation framework to evaluate PalmTree and the baseline approaches. Evaluations can be classified into two categories: intrinsic evaluation and extrinsic evaluation. In the remainder of this section, we first introduce our evaluation framework and experimental configurations, then report and discuss the experimental results.
Evaluation Methodology
Intrinsic Evaluation. In the NLP domain, intrinsic evaluation refers to the evaluations that compare the generated embeddings with human assessments [2]. Hence, for each intrinsic metric, manually organized datasets are needed. This kind of dataset could be collected either in a laboratory on a limited number of examinees or through crowd-sourcing [25] using web platforms or offline surveys [2]. Unlike the evaluations in the NLP domain, programming languages including assembly language (instructions) do not necessarily rely on human assessments. Instead, each opcode and operand in instructions has clear semantic meanings, which can be extracted from instruction reference manuals. Furthermore, debug information generated by different compilers and compiler options can also indicate whether two pieces of code are semantically equivalent. More specifically, we design two intrinsic evaluations: instruction outlier detection based on the knowledge of semantic meanings of opcodes and operands from instruction manuals, and basic block search by leveraging the debug information associated with source code.
Extrinsic Evaluation. Extrinsic evaluation aims to evaluate the quality of an embedding scheme along with a downstream machine learning model in an end-to-end manner [2]. So if a downstream model is more accurate when integrated with instruction embedding scheme A than the one with scheme B, then A is considered better than B. In this paper, we choose three different binary analysis tasks for extrinsic evaluation, i.e., Gemini [40] for binary code similarity detection, EKLAVYA [5] for function type signatures inference, and DeepVSA [14] for value set analysis. We obtained the original implementations of these downstream tasks for this evaluation. All of the downstream applications are implemented based on TensorFlow 3 . Therefore we choose the first way of deploying PalmTree in extrinsic evaluations (see Section 3.4.6). We encoded all the instructions in the corresponding training and testing datasets and then fed the embeddings into downstream applications.
Experimental Setup
Baseline Schemes and PalmTree Configurations. We choose Instruction2Vec, word2vec, and Asm2Vec as baseline schemes. For fair comparison, we set the embedding dimension as 128 for each model. We performed the same normalization method as PalmTree on word2vec and Asm2Vec. We did not set any limitation on the vocabulary size of Asm2Vec and word2vec. We implemented these baseline embedding models and PalmTree using PyTorch [30]. PalmTree is based on BERT but has fewer parameters. While BERT uses 12 layers, 12 attention heads, and a hidden dimension of 768, we set the number of layers to 12, the number of attention heads to 8, and the hidden dimension to 128 in PalmTree, for the sake of efficiency and training costs. The ratio between the positive and negative pairs in both CWP and DUP is 1:1.
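For reference, an encoder of this size can be instantiated with the HuggingFace transformers library roughly as follows; the library, the vocabulary size, and the feed-forward size are assumptions for illustration, since our actual implementation is written directly in PyTorch:

from transformers import BertConfig, BertModel

config = BertConfig(
    vocab_size=5000,            # assumed size of the assembly token vocabulary
    hidden_size=128,            # embedding / hidden dimension used in PalmTree
    num_hidden_layers=12,       # 12 transformer layers
    num_attention_heads=8,      # 8 attention heads (128 / 8 = 16 dims per head)
    intermediate_size=512,      # feed-forward size; an assumption, not stated in the text
)
model = BertModel(config)
print(sum(p.numel() for p in model.parameters()))   # rough parameter count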
Furthermore, to evaluate the contributions of the three training tasks of PalmTree, we set up three configurations: PalmTree-M (trained with MLM only), PalmTree-MC (trained with MLM and CWP), and PalmTree (trained with all three tasks: MLM, CWP, and DUP).

Hardware Configuration. All the experiments were conducted on a dedicated server with a Ryzen 3900X CPU @ 3.80GHz × 12, one GTX 2080Ti GPU, 64 GB memory, and 500 GB SSD.
Intrinsic Evaluation
Outlier Detection.
In this intrinsic evaluation, we randomly create a set of instructions, one of which is an outlier. That is, this instruction is obviously different from the rest of the instructions in this set. To detect this outlier, we calculate the cosine distance between any two instructions' vector representations (i.e., embeddings), and pick whichever is most distant from the rest. We designed two outlier detection experiments, one for opcode outlier detection, and one for operand, to evaluate whether the instruction embeddings are good enough to distinguish different types of opcodes and operands respectively.
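Concretely, the outlier in each set can be picked as the instruction whose embedding has the largest average cosine distance to the others, e.g.:

import numpy as np

def find_outlier(embeddings):
    # embeddings: (n, d) array, one row per instruction in the set.
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T                                  # pairwise cosine similarities
    dist = 1.0 - sim
    avg_dist = dist.sum(axis=1) / (len(x) - 1)     # mean distance to the other instructions
    return int(np.argmax(avg_dist))                # index of the most distant instruction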
We classify instructions into 12 categories based on their opcode, according to the x86 Assembly Language Reference Manual [29].
More details about this process can be found in Table 8 in the Appendix. We prepared 50,000 instruction sets. Each set consists of four instructions from the same opcode category and one instruction from a different category. Similarly, we classify instructions based on their operands. Table 9 in the Appendix provides details about this process. Essentially, we classify operand lists, according to the number of operands as well as the operand types. We created another 50,000 sets of instructions covering 10 categories, and each set contains four instructions coming from the same category, and one from a different category. The first and second columns of Table 2 present the accuracy distributions for opcode outlier detection and operand outlier detection respectively. We can make the following observations: (1) word2vec performs poorly in both experiments, because it does not take into account the instruction internal structures; (2) Instruction2Vec, as a manually-designed embedding, performs generally well in both experiments, because this manual design indeed takes different opcodes and operands into consideration; (3) Asm2Vec performs slightly better than Instruction2Vec in opcode outlier detection, but considerably worse in operand outlier detection, because its modeling for operands is not fine-grained enough; (4) Even though PalmTree-M and PalmTree-MC do not show obvious advantages over Asm2Vec and Instruction2Vec, PalmTree has the best accuracy in both experiments, which demonstrates that this automatically learned representation can sufficiently capture semantic differences in both opcodes and operands; and (5) All the three pre-training tasks contribute positively to PalmTree in both outlier detection experiments. Particularly, the DUP training task considerably boosts the accuracy in both experiments, demonstrating that the def-use relations between instructions indeed help learn the assembly language model. A complete result of outlier detection can be found in Figure 6 and Figure 7.
Basic Block Search.
In this intrinsic evaluation, we compute an embedding for each basic block (a sequence of instructions with only one entry and one exit), by averaging the instruction embeddings in it. Given one basic block, we use its embedding to find semantically equivalent basic blocks based on the cosine distance between two basic block embeddings.
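The corresponding search procedure can be sketched as follows: average the instruction embeddings of each block and rank candidate blocks by cosine similarity to the query block:

import numpy as np

def block_embedding(instruction_embeddings):
    # Average the embeddings of the instructions inside one basic block.
    return np.mean(instruction_embeddings, axis=0)

def search(query_block, candidate_blocks, top_k=5):
    q = block_embedding(query_block)
    scores = []
    for idx, cand in enumerate(candidate_blocks):
        c = block_embedding(cand)
        cos = float(np.dot(q, c) / (np.linalg.norm(q) * np.linalg.norm(c)))
        scores.append((cos, idx))
    return sorted(scores, reverse=True)[:top_k]    # highest-similarity blocks first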
We use openssl-1.1.0h and glibc-2.29.1 as the testing set, which is not included in our training set. We compile them with O1, O2, and O3 optimization levels. We use the same method used in DeepBinDiff [11], which relies on the debug information from the program source code as the ground truth. Figure 8 shows the ROC curves of Instruction2Vec, word2vec, Asm2Vec, and PalmTree for basic block search. Table 2 further lists the AUC (Area Under the Curve) score for each embedding scheme. We can observe that (1) word2vec, once again, has the worst performance; (2) the manually-designed embedding scheme, Instruction2Vec, is even better than word2vec, an automatically learned embedding scheme; (3) Asm2Vec performs reasonably well, but still worse than three configurations of PalmTree; and (4) The three PalmTree configurations have better AUC than other baselines, while consecutive performance improvements are observed.
PalmTree ranks first in all intrinsic evaluation experiments, demonstrating the strength of the automatically learned assembly language model. Moreover, the performance improvements across the different PalmTree configurations show the positive contribution of each individual training task.
Extrinsic Evaluation
An extrinsic evaluation reflects the ability of an instruction embedding model to serve as input to downstream machine learning algorithms for one or several specific tasks [2]. As introduced earlier, we select three downstream tasks in the binary analysis field: binary code similarity detection, function type signature inference, and value set analysis.

4.4.1 Binary Code Similarity Detection. Gemini [40] is a neural network-based approach for cross-platform binary code similarity detection. The model is based on Structure2Vec [7] and takes an ACFG (Attributed Control Flow Graph) as input. In an ACFG, each node is a manually formed feature vector for each basic block. Table 3 shows the attributes (i.e., features) of a basic block in the original implementation. In this experiment, we evaluate the performance of Gemini when given Instruction2Vec, word2vec, Asm2Vec, PalmTree-M, PalmTree-MC, and PalmTree as input, respectively. Moreover, we also used one-hot vectors with an embedding layer as another kind of instruction embedding baseline (denoted as "one-hot"); this embedding layer is trained along with Gemini. Figure 9 shows how we adopt the different instruction embedding models in Gemini. Since Gemini takes a feature vector for each basic block, we use mean pooling to generate basic block embeddings from the embeddings of the instructions in the corresponding basic block. The architectures of our modified model and the original model are both shown in Figure 9. We also included the original basic block features as an additional baseline (denoted as "Gemini") for comparison.

The accuracy of the original Gemini is reported to be very high (with an AUC of 0.971). However, this might be due to overfitting, since the training and testing sets are from OpenSSL compiled by the same compiler, Clang. To properly evaluate the generalizability (i.e., the ability to adapt to previously unseen data) of the trained models under different inputs, we use binutils-2.26, binutils-2.30, and coreutils-8.30 compiled by Clang as the training set (237 binaries in total), and openssl-1.1.0h, openssl-1.0.1, and glibc-2.29.1 compiled by GCC as the testing set (14 binaries). In other words, the training and testing sets are completely different, and the compilers are different too. Table 4 gives the AUC values of Gemini when different models are used to generate its input, and Figure 10 shows the corresponding ROC curves. Based on Table 4, we can make the following observations:
(1) Although the original paper [40] reported very encouraging performance of Gemini, we can observe that the original Gemini model does not generalize very well to completely new testing data. (2) The manually designed embedding schemes, Instruction2Vec and one-hot vector, perform poorly, signifying that manually selected features might be only suitable for specific tasks.

Figure 11: Instruction embedding models and EKLAVYA

4.4.2 Function Type Signature Inference. Function type signature inference is the task of inferring the number and primitive types of the arguments of a function. To evaluate the quality of instruction embeddings in this task, we select EKLAVYA, an approach proposed by Chua et al. [5]. It is based on a multi-layer GRU (Gated Recurrent Unit) network and uses word2vec as the instruction embedding method. According to the original paper, word2vec was pre-trained on the whole training dataset. Then, they trained a GRU network to infer function type signatures.
In this evaluation, we test the performance of different types of embeddings using EKLAVYA as the downstream application. Since the original model is not an end-to-end model, we do not need an embedding layer between the instruction embeddings and the GRU network. We replaced the original word2vec in EKLAVYA with one-hot encoding, Instruction2Vec, Asm2Vec, PalmTree-M, PalmTree-MC, and PalmTree, as shown in Figure 11.
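To make the setup concrete, the following PyTorch sketch shows the kind of GRU classifier that consumes pre-computed instruction embeddings in this experiment. It is our own simplified stand-in, not EKLAVYA's implementation; the hidden size, layer count, and the 16 type-signature classes are illustrative assumptions.

import torch
import torch.nn as nn

class TypeSignatureGRU(nn.Module):
    # a multi-layer GRU that reads the sequence of instruction embeddings of one
    # function and predicts a type-signature label
    def __init__(self, embed_dim=128, hidden_dim=256, num_layers=3, num_classes=16):
        super().__init__()
        self.gru = nn.GRU(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, instr_embeddings):
        # instr_embeddings: (batch, seq_len, embed_dim), produced offline by
        # word2vec, Asm2Vec, PalmTree, etc., so no trainable embedding layer is needed
        _, h_n = self.gru(instr_embeddings)
        return self.classifier(h_n[-1])  # logits over type-signature classes

# toy usage with random vectors standing in for a function's instruction embeddings
logits = TypeSignatureGRU()(torch.randn(8, 40, 128))
print(logits.shape)  # torch.Size([8, 16])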
Similarly, in order to evaluate the generalizability of the trained downstream models, we used very different training and testing sets (the same datasets described in Section 4.4.1). Table 5 and Figure 12 present the accuracy of EKLAVYA on the testing dataset, while Figure 15 and Figure 16 show the loss value and accuracy of EKLAVYA during training and testing. From the results we can make the following observations:
(1) PalmTree and Asm2Vec can achieve higher accuracy than word2vec, which is the original choice of EKLAVYA. (2) PalmTree has the best accuracy on the testing dataset, demonstrating that EKLAVYA, when fed with PalmTree instruction embeddings, achieves the best generalizability. Moreover, CWP contributes more (see PalmTree-MC), which implies that control-flow information plays a more significant role in EKLAVYA. (3) Instruction2Vec performs very poorly in this evaluation, signifying that, when not done correctly, manual feature selection may disturb and mislead a downstream model. (4) The poor results of one-hot encoding show that a good instruction embedding model is indeed necessary. At least in this task, it is very difficult for the deep neural network to learn instruction semantics through end-to-end training.

4.4.3 Value Set Analysis. DeepVSA [14] makes use of a hierarchical LSTM network to conduct a coarse-grained value set analysis, which characterizes memory references into regions like global, heap, stack, and other. It feeds instruction raw bytes as input into a multi-layer LSTM network to generate instruction embeddings. It then feeds the generated instruction representations into another multi-layer bi-directional LSTM network, which is supposed to capture the dependency between instructions and eventually predict the memory access regions.
In our experiment, we use different kinds of instruction embeddings to replace the original instruction embedding generation model in DeepVSA. We use the original training and testing datasets of DeepVSA and compare prediction accuracy of different kinds of embeddings. The original datasets contain raw bytes only, thus we need to disassemble these raw bytes. After that we tokenize and encode these disassembled instructions for training and testing. We add an embedding layer before the LSTM network to further adjust instruction embeddings, as shown in Figure 13.
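The sketch below illustrates this modified pipeline: pre-computed instruction embeddings pass through an additional trainable layer and a bi-directional LSTM that labels each instruction's memory access region. It is a hedged approximation of the setup in Figure 13, not DeepVSA's code; in particular, modeling the extra embedding layer as a linear projection and the specific dimensions are our assumptions.

import torch
import torch.nn as nn

class RegionClassifier(nn.Module):
    # instruction embeddings -> adaptation layer -> bi-LSTM -> per-instruction
    # logits over the four memory regions (global, heap, stack, other)
    def __init__(self, embed_dim=128, adapt_dim=128, hidden_dim=256, num_regions=4):
        super().__init__()
        self.adapt = nn.Linear(embed_dim, adapt_dim)   # stand-in for the added embedding layer
        self.bilstm = nn.LSTM(adapt_dim, hidden_dim, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_regions)

    def forward(self, instr_embeddings):
        # instr_embeddings: (batch, seq_len, embed_dim)
        x = torch.relu(self.adapt(instr_embeddings))
        out, _ = self.bilstm(x)
        return self.head(out)  # (batch, seq_len, num_regions)

print(RegionClassifier()(torch.randn(4, 32, 128)).shape)  # torch.Size([4, 32, 4])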
We use part of the dataset provided by the authors of DeepVSA. The whole dataset provided by the authors has 13.8 million instructions for training and 10.1 million for testing. Our dataset has 9.6 million instructions for training and 4.8 million for testing, due to the disassembly time costs. As explained in their paper [14], their dataset also used Clang and GCC as compilers and had no overlapping instructions between the training and testing datasets. Table 6 lists the experimental results; we use Precision (P), Recall (R), and F1 scores to measure the performance. Figure 14 depicts the loss values of DeepVSA during training when different instruction embedding schemes are used as its input. From these results, we have the following observations:
(1) PalmTree has visibly better results than the original DeepVSA and the other baselines in Global and Heap, and has slightly better results in Stack and Other, where the other baselines also have scores greater than 0.9. (2) The three training tasks of PalmTree indeed contribute to the final result. It indicates that PalmTree indeed captures the data flows between instructions. In comparison, the other instruction embedding models are unable to capture data dependency information very well. (3) PalmTree converged faster than the original DeepVSA (see Figure 14), indicating that an instruction embedding model can accelerate the training phase of downstream tasks.

In summary, PalmTree outperforms the other instruction embedding approaches in each extrinsic evaluation. Also, PalmTree can speed up training and further improve downstream models by providing high-quality instruction embeddings. In contrast, word2vec and Instruction2Vec perform poorly in all three downstream tasks, showing that the poor quality of an instruction embedding will adversely affect the overall performance of downstream applications.
Runtime Efficiency
In this section, we conduct an experiment to evaluate the runtime efficiency of PalmTree and the baseline approaches. First, we test the runtime efficiency of different instruction embedding approaches. Second, we test the runtime efficiency of PalmTree with different embedding sizes. We use 64, 128, 256, and 512 as embedding sizes, with 128 being the default setting. In the transformer encoder of PalmTree, the width of each feed-forward hidden layer is fixed and related to the size of the final output layer: it is 4 times the embedding size [19]. We use Coreutils-8.30 as the dataset. It includes 107 binaries and 1,006,169 instructions. We disassembled the binaries with Binary Ninja and fed them into the baseline models. Due to the limitation of GPU memory, we treated 5,000 instructions as a batch. Table 7 shows the encoding time and throughput of the different models when encoding the 107 binaries in Coreutils-8.30. From the results, we can make several observations. First, PalmTree is much slower than previous embedding approaches such as word2vec and Asm2Vec. This is expected, since PalmTree has a deep transformer network. However, with GPU acceleration, PalmTree can finish encoding the 107 binaries in about 70 seconds, which is acceptable. Furthermore, as an instruction-level embedding approach, PalmTree can also use an embedding lookup table to store frequently used embeddings. Such a lookup table works as fast as word2vec and can further boost the efficiency of PalmTree. Last but not least, we observed that encoding becomes 1.7 to 1.9 times slower each time the embedding size is doubled.
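A simple way to reproduce this kind of measurement is sketched below. The batch size of 5,000 matches the setting above, while the timing helper itself is ours and makes no assumption about which model implements encode_fn.

import time

def measure_throughput(encode_fn, instructions, batch_size=5000):
    # encode_fn takes a list of instructions and returns their embeddings
    start = time.perf_counter()
    for i in range(0, len(instructions), batch_size):
        encode_fn(instructions[i:i + batch_size])
    elapsed = time.perf_counter() - start
    return elapsed, len(instructions) / elapsed  # encoding time (s), instructions/sec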
Hyperparameter Selection
To further study the influences of different hyperparameter configurations of PalmTree, we trained PalmTree with different embedding sizes (64, 128, 256, and 512) and different context window sizes (1, 2, 3, and 4). We also evaluated different output layer configurations when generating instruction embeddings. Interested readers are referred to the Appendix for more details.
RELATED WORK
Representation Learning in NLP. Over the past several years, representation learning techniques have made significant impacts in the NLP domain. The Neural Network Language Model (NNLM) [4] was the first work that used neural networks to model natural language and learn distributed representations for words. In 2013, Mikolov et al. introduced word2vec and proposed the Skip-gram and Continuous Bag-Of-Words (CBOW) models [28]. The limitation of word2vec is that its embedding is frozen once trained, while words might have different meanings in different contexts. To address this issue, Peters et al. introduced ELMo [32], which is a deep bidirectional language model. In this model, word embeddings are generated from the entire input sentence, which means that the embeddings can be dynamically adjusted according to different contextual information.
In 2017, Vaswani et al. introduced the transformer [39] to replace RNN networks (e.g., LSTM). Devlin et al. proposed BERT [9] in 2019, which is a bi-directional transformer encoder. They designed the transformer network using a fully connected attention architecture, so that the model can leverage both forward and backward information. Clark et al. [6] proposed ELECTRA and further improved BERT by using a more sample-efficient pre-training task called Replaced Token Detection, which is an adversarial learning process [13].
Representation Learning for Instructions. Programming languages, including low-level assembly instructions, have clear grammar and syntax, and can thus be treated as natural language and processed by NLP models.
Instruction representation plays a significant role in binary analysis tasks. Many techniques have been proposed in previous studies.
Instruction2Vec [41] is a manually designed instruction representation approach. InnerEye [43] uses Skip-gram, which is one of the two models of word2vec [28], to encode instructions for code similarity search. Each instruction is treated as a word while a code snippet as a document. Massarelli et al. [26] introduced an approach for function-level representation learning, which also leveraged word2vec to generate instruction embeddings. DeepBindiff [11] also used word2vec to generate representations for instructions with the purpose of matching basic blocks in different binaries. Unlike InnerEye, they used word2vec to learn token embeddings and generate instruction embeddings by concatenating vectors of opcode and operands.
Although word2vec has been widely used in instruction representation learning, it has the following shortcomings: first, using word2vec at the instruction level loses the internal information of instructions, while using word2vec at the token level may fail to capture instruction-level semantics. Second, the model has to handle the OOV problem. InnerEye [43] and DeepBinDiff [11] provided good practices by applying normalization; however, normalization also results in losing some important information. Asm2Vec [10] generates embeddings for instructions and functions simultaneously by using the PV-DM model [20]. Unlike previous word2vec-based approaches, Asm2Vec exploits a token-level language model for training and does not break the boundaries of instructions, which is a problem of token-level word2vec models. Coda [12] is a neural program decompiler based on a Tree-LSTM autoencoder network. It is an end-to-end deep learning model that was specifically designed for decompilation; it cannot generate generic representations for instructions and thus cannot meet our goals.
Representation Learning for Programming Languages. NLP techniques are also widely used to learn representations for programming languages. Harer et al. [15] used word2vec to generate token embeddings of C/C++ programs for vulnerability prediction. The generated embeddings are fed into a TextCNN network for classification. Li et al. [22] introduced a bug detection technique using word2vec to learn token (node) embeddings from the Abstract Syntax Tree (AST). Ben-Nun et al. [3] introduced a new representation learning approach for LLVM IR in 2018. They generated a conteXtual Flow Graph (XFG) for this IR, which leverages both data dependency and control flow. Karampatsis et al. [17] proposed a new method to reduce the vocabulary size of huge source code datasets. They introduced word splitting, subword splitting with Byte Pair Encoding (BPE) [36], caching, and dynamic adaptation to solve the OOV problem in source code embedding.
DISCUSSION
In this paper, we focus on training an assembly language model for one instruction set or one architecture. We particularly evaluated x86. The technique described here can be applied to other instruction sets as well, such as ARM and MIPS.
However, in this paper, we do not intend to learn a language model across multiple CPU architectures. Cross-architecture means that semantically similar instructions from different architectures can be mapped to nearby regions in the embedding space. A cross-architecture assembly language model can be very useful for cross-architecture vulnerability/bug search. We leave it as future work.
It is worth noting that instead of feeding a pair of instructions into PalmTree, we can also feed code segment pairs, or even basic block and function pairs, which may better capture long-term relations between instructions (currently we use sampling in the context window and the data flow graph to capture long-term relations) and has the potential to further improve the performance of PalmTree. We leave this as future work.
CONCLUSION
In this paper, we have summarized the unsolved problems and existing challenges in instruction representation learning. To solve these problems and capture the underlying characteristics of instructions, we have proposed a pre-trained assembly language model called PalmTree for generating general-purpose instruction embeddings.
PalmTree can be pre-trained by performing self-supervised training on large-scale unlabeled binary corpora. PalmTree is based on the BERT model but pre-trained with newly designed training tasks exploiting the inherent characteristics of assembly language. More specifically, we have used the following three pre-training tasks to train PalmTree: MLM (Masked Language Model), CWP (Context Window Prediction), and DUP (Def-Use Prediction). We have designed a set of intrinsic and extrinsic evaluations to systematically evaluate PalmTree and other instruction embedding models. Experimental results show that PalmTree has the best performance in intrinsic evaluations compared with the existing models. In extrinsic evaluations that involve several downstream applications, PalmTree outperforms all the baseline models and also significantly improves downstream applications' performance. We conclude that PalmTree can effectively generate high-quality instruction embeddings which are helpful for different downstream binary analysis tasks.

Table 8 shows how we categorize different opcodes by referring to [29]. Table 9 shows how we categorize different operand types. The first column shows the type of operand combination: "none" means the instruction has no operand, such as retn; "tri" means the instruction has three operands; the other ones are instructions that have two operands. For instance, "reg-reg" means both operands are registers. The type of each operand is listed in the second and third columns.

B MORE FIGURES IN EVALUATIONS

Figure 15 and Figure 16 show the results of EKLAVYA in the Function Type Signature Inference task: Figure 15 shows the loss value curves and Figure 16 shows the accuracy curves during training.

In this experiment, we evaluate the performance of PalmTree with different embedding sizes. Here we use 64, 128, 256, and 512 as embedding sizes, the same as in the previous experiment, and test these 4 models on our intrinsic evaluation tasks. Table 10 shows all of the results of the intrinsic evaluation when having different embedding sizes. From the results, we can observe a clear trend that the performance becomes better as the embedding size increases. The largest embedding size has the best performance in all three metrics. However, considering efficiency, we recommend choosing a suitable embedding size according to the hardware capacities. For example, we only have a single GPU (GTX 2080Ti) in our server, thus we chose 128 as the embedding size.
C.2 Output layer configurations
In this experiment, we evaluate the performance of PalmTree with different output layer configurations, i.e., we select a different layer of the transformer model as the output of PalmTree. By default, PalmTree uses the second-last layer as the output layer. We evaluate four different settings, namely the last layer, the second-last layer, the third-last layer, and the fourth-last layer, on our intrinsic evaluation tasks. The embedding size in this experiment is set to 128. Table 11 shows all of the results of the intrinsic metrics when a different layer is used as the output layer. There is no obvious advantage to choosing any particular layer as the output layer; however, the second-last layer has the best results in opcode outlier detection and basic block similarity search. Thus we chose the second-last layer as the output layer in this paper.
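A minimal sketch of how such a configuration can be realized is shown below. It assumes the encoder exposes its per-layer hidden states (as BERT-style models typically do) and that token hidden states are mean-pooled into one vector per instruction; both of these details, and the helper name, are our assumptions rather than PalmTree's exact implementation.

import torch

def instruction_embedding(hidden_states, layer_from_end=2):
    # hidden_states: list/tuple of per-layer tensors, each (batch, seq_len, dim);
    # layer_from_end=2 selects the second-last layer, the default discussed above
    layer = hidden_states[-layer_from_end]
    return layer.mean(dim=1)  # mean-pool over tokens -> (batch, dim)

# toy usage: 13 layers of random hidden states (embedding layer + 12 blocks)
states = [torch.randn(8, 10, 128) for _ in range(13)]
print(instruction_embedding(states).shape)  # torch.Size([8, 128])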
C.3 Context window for CWP

In this experiment, we evaluate the performance of PalmTree with different context window sizes in the CWP task. For instance, if the context window size is 2, the instructions at positions i-2, i-1, i+1, and i+2 are considered contextual instructions for a given instruction at position i. We evaluate 1, 2, 3, and 4 as four different context window sizes in this experiment. Table 12 shows all of the results of the intrinsic metrics when training PalmTree with different context window configurations. We can observe that context window sizes 1 and 2 have similar performance on the three intrinsic evaluation metrics, but context window size 2 has the best performance on the downstream task EKLAVYA. Further increasing the context window size to 3 and 4 leads to worse results. Based on these results, we choose the context window size to be 2.
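The sketch below enumerates the positive instruction pairs implied by a given context window size; it is a simple illustration written by us, not the actual sampling code used during pre-training.

def cwp_pairs(instructions, window=2):
    # every instruction within `window` positions of instruction i is treated
    # as a positive context for instruction i
    pairs = []
    for i, ins in enumerate(instructions):
        for off in range(-window, window + 1):
            j = i + off
            if off != 0 and 0 <= j < len(instructions):
                pairs.append((ins, instructions[j]))
    return pairs

print(cwp_pairs(["mov ebx, 0x1", "mov rdx, rbx", "add rax, rdx"], window=1))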
2.2.1 Complex and Diverse Instruction Formats. Instructions (especially those in CISC architectures) are often in a variety of formats, with additional complexities. Listing 1 gives several examples of instructions in x86.
Figure 1: System design of PalmTree. Trm is the transformer encoder unit, C is the hidden state of the first token of the sequence (classification token), and T_i (i = 1 ... N) are the hidden states of the other tokens of the sequence.
Figure 2: Input Representation
Figure 3: Masked Language Model (MLM)

Figure 3 shows an example. Given an instruction pair "mov ebx, 0x1; mov rdx, rbx", we first add special tokens [CLS] and [SEP].
Figure 4: Context Window Prediction (CWP)
• PalmTree-M: PalmTree trained with MLM only
• PalmTree-MC: PalmTree trained with MLM and CWP
• PalmTree: PalmTree trained with MLM, CWP, and DUP

Datasets. To pre-train PalmTree and evaluate its transferability and generalizability, and evaluate baseline schemes in different downstream applications, we used different binaries from different compilers. The pre-training dataset contains different versions of Binutils 4, Coreutils 5, Diffutils 6, and Findutils 7 on the x86-64 platform, compiled with Clang 8 and GCC 9 with different optimization levels. The whole pre-training dataset contains 3,266 binaries and 2.25 billion instructions in total. There are about 2.36 billion positive and negative sample pairs during training. To make sure that training and testing datasets do not have much code in common in extrinsic evaluations, we selected a completely different testing dataset from different binary families, compiled by different compilers. Please refer to the following sections for more details about dataset settings.
Figure 6: Accuracy of Opcode Outlier Detection
Figure 7: Accuracy of Operands Outlier Detection
Figure 8: ROC curves for Basic Block Search
Figure 9: Instruction embedding models and the downstream model Gemini
Figure 10: ROC curves of Gemini
Figure 12: Accuracy of EKLAVYA
Figure 13: Instruction embedding models and the downstream model DeepVSA
Figure 14: Loss value of DeepVSA during training
Figure 15: Loss value of EKLAVYA during training
Figure 16: Accuracy of EKLAVYA during training
Table 1: Summary of Approaches

Name   Encoding   Internal Structure   Context   Disassembly Required
Table 2: Intrinsic Evaluation Results. Avg. denotes the average of accuracy scores, and Stdev. denotes the standard deviation.

Model             opcode outlier (Avg. / Stdev.)   operand outlier (Avg. / Stdev.)   basicblock sim search (AUC)
Instruction2Vec   0.863 / 0.0529                   0.860 / 0.0363                    0.871
word2vec          0.269 / 0.0863                   0.256 / 0.0874                    0.842
Asm2Vec           0.865 / 0.0426                   0.542 / 0.0238                    0.894
PalmTree-M        0.855 / 0.0333                   0.785 / 0.0656                    0.910
PalmTree-MC       0.870 / 0.0449                   0.808 / 0.0435                    0.913
PalmTree          0.871 / 0.0440                   0.944 / 0.0343                    0.922
Table 3: Attributes of Basic Blocks in Gemini [40]

Type                     Attribute name
Block-level attributes   String Constants; Numeric Constants; No. of Transfer Instructions; No. of Calls; No. of Instructions; No. of Arithmetic Instructions
Inter-block attributes   No. of offspring; Betweenness
Table 4: AUC values of Gemini

Model             AUC     Model         AUC
one-hot           0.745   Gemini        0.866
Instruction2Vec   0.738   PalmTree-M    0.864
word2vec          0.826   PalmTree-MC   0.866
Asm2Vec           0.823   PalmTree      0.921
Table 5: Accuracy and Standard Deviation of EKLAVYA

Model             Accuracy   Standard Deviation
one-hot           0.309      0.0338
Instruction2Vec   0.311      0.0407
word2vec          0.856      0.0884
Asm2Vec           0.904      0.0686
PalmTree-M        0.929      0.0554
PalmTree-MC       0.943      0.0476
PalmTree          0.946      0.0475
Table 6: Results of DeepVSA (P / R / F1 per memory region)

Embeddings        Global                  Heap                    Stack                   Other
one-hot           0.453 / 0.670 / 0.540   0.507 / 0.716 / 0.594   0.959 / 0.866 / 0.910   0.953 / 0.965 / 0.959
Instruction2Vec   0.595 / 0.726 / 0.654   0.512 / 0.633 / 0.566   0.932 / 0.898 / 0.914   0.948 / 0.946 / 0.947
word2vec          0.147 / 0.535 / 0.230   0.435 / 0.595 / 0.503   0.802 / 0.420 / 0.776   0.889 / 0.863 / 0.876
Asm2Vec           0.482 / 0.557 / 0.517   0.410 / 0.320 / 0.359   0.928 / 0.894 / 0.911   0.933 / 0.964 / 0.948
DeepVSA           0.961 / 0.738 / 0.835   0.589 / 0.580 / 0.584   0.974 / 0.917 / 0.944   0.943 / 0.976 / 0.959
PalmTree-M        0.845 / 0.732 / 0.784   0.572 / 0.625 / 0.597   0.963 / 0.909 / 0.935   0.956 / 0.969 / 0.962
PalmTree-MC       0.910 / 0.755 / 0.825   0.758 / 0.675 / 0.714   0.965 / 0.897 / 0.929   0.958 / 0.988 / 0.972
PalmTree          0.912 / 0.805 / 0.855   0.755 / 0.678 / 0.714   0.974 / 0.929 / 0.950   0.959 / 0.983 / 0.971
Table 7: Efficiency of PalmTree and baselines

Model / embedding size   Encoding time (s)   Throughput (#ins/sec)
Instruction2vec          6.684               150,538
word2vec                 0.421               2,386,881
Asm2Vec                  17.250              58,328
PalmTree-64              41.682              24,138
PalmTree-128             70.202              14,332
PalmTree-256             135.233             7,440
PalmTree-512             253.355             3,971
Table 8: Types of Opcodes

Types                              Opcodes
Data Movement                      mov, push, pop, cwtl, cltq, cqto, cqtd
Unary Operations                   inc, dec, neg, not
Binary Operations                  lea, leaq, add, sub, imul, xor, or, and
Shift Operations                   sal, sar, shr, shl
Special Arithmetic Operations      imulq, mulq, idivq, divq
Comparison and Test Instructions   cmp, test
Conditional Set Instructions       sete, setz, setne, setnz, sets, setns, setg, setnle, setge, setnl, setl, setnge, setle, setng, seta, setnbe, setae, setnb, setbe, setna
Jump Instructions                  jmp, je, jz, jne, jnz, js, jns, jg, jnle, jge, jnl, jl, jnge, jle, jng, ja, jnbe, jae, jnb, jb, jnae, jbe, jna
Conditional Move Instructions      cmove, cmovz, cmovne, cmovenz, cmovs, cmovns, cmovg, cmovnle, cmovge, cmovnl, cmovnge, cmovle, cmovng, cmova, cmovnbe, cmovae, cmovnb, cmovb, cmovnae, cmovbe, cmovna
Procedure Call Instructions        call, leave, ret, retn
String Instructions                cmps, cmpsb, cmpsl, cmpsw, lods, lodsb, lodsl, lodsw, mov, movsb, movsl, movsw
Floating Point Arithmetic          fabs, fadd, faddp, fchs, fdiv, fdivp, fdivr, fdivrp, fiadd, fidivr, fimul, fisub, fisubr, fmul, fmulp, fprem, fpreml, frndint, fscale, fsqrt, fsub, fsubp, fsubr, fsubrp, fxtract
Table 9: Types of Operands

Type       Operand 1          Operand 2          # of Operands
none       -                  -                  0
addr       address            -                  1
ref        memory reference   -                  1
reg-reg    register           register           2
reg-addr   register           address            2
reg-cnst   register           constant value     2
reg-ref    register           memory reference   2
ref-cnst   memory reference   constant value     2
ref-reg    memory reference   register           2
tri        -                  -                  3
[Plot: accuracy (0.0-1.0) versus training iterations (0-1000) for one-hot, Instruction2Vec, word2vec, Asm2Vec, PalmTree-M, PalmTree-MC, and PalmTree.]
Table 10: Embedding sizes

Embedding Size   opcode outlier detection (Avg. / Stdev.)   operand outlier detection (Avg. / Stdev.)   basicblock sim search (AUC)
64               0.836 / 0.0588                             0.940 / 0.0387                              0.917
128              0.871 / 0.0440                             0.944 / 0.0343                              0.922
256              0.848 / 0.0560                             0.954 / 0.0343                              0.929
512              0.878 / 0.0525                             0.957 / 0.0335                              0.929
Table 11: Output layer configurations

Layers     opcode outlier detection (Avg. / Stdev.)   operand outlier detection (Avg. / Stdev.)   basicblock sim search (AUC)
last       0.862 / 0.0460                             0.982 / 0.0140                              0.915
2nd-last   0.871 / 0.0440                             0.944 / 0.0343                              0.922
3rd-last   0.868 / 0.0391                             0.956 / 0.0287                              0.918
4th-last   0.866 / 0.0395                             0.961 / 0.0248                              0.913
Table 12: Context Window Sizes

Size   opcode outlier (Avg. / Stdev.)   operand outlier (Avg. / Stdev.)   bb sim search (AUC)   EKLAVYA (Avg. / Stdev.)
1      0.864 / 0.0467                   0.962 / 0.0168                    0.923                 0.930 / 0.0548
2      0.871 / 0.0440                   0.944 / 0.0343                    0.922                 0.945 / 0.0476
3      0.849 / 0.0444                   0.873 / 0.0514                    0.916                 0.908 / 0.0633
4      0.864 / 0.0440                   0.957 / 0.0238                    0.914                 0.916 / 0.0548
PalmTree stands for Pre-trained Assembly Language Model for InsTRuction EmbEdding
https://www.tensorflow.org/
4 https://www.gnu.org/software/binutils/
5 https://www.gnu.org/software/coreutils/
6 https://www.gnu.org/software/diffutils/
ACKNOWLEDGEMENT

We would like to thank the anonymous reviewers for their helpful and constructive comments. This work was supported in part by National Science Foundation under grant No. 1719175, and Office of Naval Research under Award No. N00014-17-1-2893. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agencies.
A survey of machine learning for big code and naturalness. Miltiadis Allamanis, T Earl, Premkumar Barr, Charles Devanbu, Sutton, ACM Computing Surveys (CSUR). 51Miltiadis Allamanis, Earl T Barr, Premkumar Devanbu, and Charles Sutton. 2018. A survey of machine learning for big code and naturalness. ACM Computing Surveys (CSUR) 51, 4 (2018), 1-37.
A Survey of Word Embeddings Evaluation Methods. Amir Bakarov, arXiv:1801.09536Amir Bakarov. 2018. A Survey of Word Embeddings Evaluation Methods. CoRR abs/1801.09536 (2018). arXiv:1801.09536 http://arxiv.org/abs/1801.09536
Neural code comprehension: a learnable representation of code semantics. Tal Ben-Nun, Alice Shoshana Jakobovits, Torsten Hoefler, Proceedings of the 32nd International Conference on Neural Information Processing Systems. the 32nd International Conference on Neural Information Processing SystemsTal Ben-Nun, Alice Shoshana Jakobovits, and Torsten Hoefler. 2018. Neural code comprehension: a learnable representation of code semantics. In Proceedings of the 32nd International Conference on Neural Information Processing Systems. 3589-3601.
A neural probabilistic language model. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, Christian Jauvin, Journal of machine learning research. 3Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research 3, Feb (2003), 1137-1155.
Neural nets can learn function type signatures from binaries. Zheng Leong Chua, Shiqi Shen, Prateek Saxena, Zhenkai Liang, 26th {USENIX} Security Symposium ({USENIX} Security 17. Zheng Leong Chua, Shiqi Shen, Prateek Saxena, and Zhenkai Liang. 2017. Neural nets can learn function type signatures from binaries. In 26th {USENIX} Security Symposium ({USENIX} Security 17). 99-116.
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. Kevin Clark, Minh-Thang Luong, V Quoc, Christopher D Le, Manning, International Conference on Learning Representations. Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2019. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. In International Conference on Learning Representations.
Discriminative Embeddings of Latent Variable Models for Structured Data. Hanjun Dai, Bo Dai, Le Song, Proceedings of the 33rd International Conference on International Conference on Machine Learning. the 33rd International Conference on International Conference on Machine LearningNew York, NY, USA48ICML'16). JMLR.orgHanjun Dai, Bo Dai, and Le Song. 2016. Discriminative Embeddings of Latent Variable Models for Structured Data. In Proceedings of the 33rd International Conference on International Conference on Machine Learning -Volume 48 (New York, NY, USA) (ICML'16). JMLR.org, 2702-2711.
Transformer-XL: Attentive Language Models beyond a Fixed-Length Context. Zihang Dai, Zhilin Yang, Yiming Yang, G Jaime, Quoc Carbonell, Ruslan Le, Salakhutdinov, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsZihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive Language Models beyond a Fixed-Length Context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2978-2988.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesLong and Short Papers1Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 4171-4186.
Asm2vec: Boosting static representation robustness for binary clone search against code obfuscation and compiler optimization. H H Steven, Ding, C M Benjamin, Philippe Fung, Charland, 2019 IEEE Symposium on Security and Privacy (SP). IEEESteven HH Ding, Benjamin CM Fung, and Philippe Charland. 2019. Asm2vec: Boosting static representation robustness for binary clone search against code obfuscation and compiler optimization. In 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 472-489.
Yue Duan, Xuezixiang Li, Jinghan Wang, Heng Yin, DEEPBINDIFF: Learning Program-Wide Code Representations for Binary Diffing. NDSS. Yue Duan, Xuezixiang Li, Jinghan Wang, and Heng Yin. 2020. DEEPBINDIFF: Learning Program-Wide Code Representations for Binary Diffing. NDSS (2020).
Coda: An end-to-end neural program decompiler. Cheng Fu, Huili Chen, Haolan Liu, Xinyun Chen, Yuandong Tian, Farinaz Koushanfar, Jishen Zhao, Advances in Neural Information Processing Systems. Cheng Fu, Huili Chen, Haolan Liu, Xinyun Chen, Yuandong Tian, Farinaz Koushanfar, and Jishen Zhao. 2019. Coda: An end-to-end neural program decom- piler. In Advances in Neural Information Processing Systems. 3703-3714.
Generative adversarial nets. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Advances in neural information processing systems. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems. 2672-2680.
{DEEPVSA}: Facilitating Value-set Analysis with Deep Learning for Postmortem Program Analysis. Wenbo Guo, Dongliang Mu, Xinyu Xing, Min Du, Dawn Song, 28th {USENIX} Security Symposium. {USENIX} Security 19Wenbo Guo, Dongliang Mu, Xinyu Xing, Min Du, and Dawn Song. 2019. {DEEPVSA}: Facilitating Value-set Analysis with Deep Learning for Postmortem Program Analysis. In 28th {USENIX} Security Symposium ({USENIX} Security 19). 1787-1804.
Automated software vulnerability detection with machine learning. A Jacob, Harer, Y Louis, Rebecca L Kim, Onur Russell, Ozdemir, Akshay Leonard R Kosta, Rangamani, H Lei, Gabriel I Hamilton, Jonathan R Centeno, Paul M Key, Ellingwood, arXiv:1803.04497arXiv preprintJacob A Harer, Louis Y Kim, Rebecca L Russell, Onur Ozdemir, Leonard R Kosta, Akshay Rangamani, Lei H Hamilton, Gabriel I Centeno, Jonathan R Key, Paul M Ellingwood, et al. 2018. Automated software vulnerability detection with machine learning. arXiv preprint arXiv:1803.04497 (2018).
On the naturalness of software. Abram Hindle, T Earl, Zhendong Barr, Mark Su, Premkumar Gabel, Devanbu, 34th International Conference on Software Engineering (ICSE). IEEEAbram Hindle, Earl T Barr, Zhendong Su, Mark Gabel, and Premkumar Devanbu. 2012. On the naturalness of software. In 2012 34th International Conference on Software Engineering (ICSE). IEEE, 837-847.
Big code!= big vocabulary: Open-vocabulary models for source code. Rafael-Michael Karampatsis, Hlib Babii, Romain Robbes, Charles Sutton, Andrea Janes, 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE). IEEERafael-Michael Karampatsis, Hlib Babii, Romain Robbes, Charles Sutton, and An- drea Janes. 2020. Big code!= big vocabulary: Open-vocabulary models for source code. In 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE). IEEE, 1073-1085.
Skip-thought vectors. Ryan Kiros, Yukun Zhu, R Russ, Richard Salakhutdinov, Raquel Zemel, Antonio Urtasun, Sanja Torralba, Fidler, Advances in neural information processing systems. 28Ryan Kiros, Yukun Zhu, Russ R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. Advances in neural information processing systems 28 (2015), 3294-3302.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut, International Conference on Learning Representations. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In International Conference on Learning Representations.
Distributed representations of sentences and documents. Quoc Le, Tomas Mikolov, International conference on machine learning. Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International conference on machine learning. 1188-1196.
Graph Matching Networks for Learning the Similarity of Graph Structured Objects. Yujia Li, Chenjie Gu, Thomas Dullien, Oriol Vinyals, Pushmeet Kohli, Proceedings of the 36th International Conference on Machine Learning. the 36th International Conference on Machine Learning97Yujia Li, Chenjie Gu, Thomas Dullien, Oriol Vinyals, and Pushmeet Kohli. 2019. Graph Matching Networks for Learning the Similarity of Graph Structured Ob- jects. In Proceedings of the 36th International Conference on Machine Learning, Vol. 97. 3835-3845.
Improving bug detection via context-based code representation learning and attention-based neural networks. Yi Li, Shaohua Wang, N Tien, Son Nguyen, Van Nguyen, Proceedings of the ACM on Programming Languages. 3OOPSLAYi Li, Shaohua Wang, Tien N Nguyen, and Son Van Nguyen. 2019. Improving bug detection via context-based code representation learning and attention-based neural networks. Proceedings of the ACM on Programming Languages 3, OOPSLA (2019), 1-30.
Diff: Cross-version Binary Code Similarity Detection with DNN. Bingchang Liu, Wei Huo, Chao Zhang, Wenchao Li, Feng Li, Aihua Piao, Wei Zou, Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. the 33rd ACM/IEEE International Conference on Automated Software EngineeringASEBingchang Liu, Wei Huo, Chao Zhang, Wenchao Li, Feng Li, Aihua Piao, and Wei Zou. 2018. Diff: Cross-version Binary Code Similarity Detection with DNN. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering (ASE 2018).
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, arXiv:1907.11692Roberta: A robustly optimized bert pretraining approach. arXiv preprintYinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019).
An improved crowdsourcing based evaluation technique for word embedding methods. Liza Farhana Ferdousi, Marek Grześ, Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP. the 1st Workshop on Evaluating Vector-Space Representations for NLPFarhana Ferdousi Liza and Marek Grześ. 2016. An improved crowdsourcing based evaluation technique for word embedding methods. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP. 55-61.
Safe: Self-attentive function embeddings for binary similarity. Luca Massarelli, Giuseppe Antonio , Di Luna, Fabio Petroni, Roberto Baldoni, Leonardo Querzoni, International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment. SpringerLuca Massarelli, Giuseppe Antonio Di Luna, Fabio Petroni, Roberto Baldoni, and Leonardo Querzoni. 2019. Safe: Self-attentive function embeddings for binary similarity. In International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment. Springer, 309-329.
Efficient estimation of word representations in vector space. Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, arXiv:1301.3781arXiv preprintTomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013).
Distributed representations of words and phrases and their compositionality. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, Jeff Dean, Advances in neural information processing systems. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. 3111-3119.
ORACLE. 2019. x86 Assembly Language Reference Manual. ORACLE. 2019. x86 Assembly Language Reference Manual. https://docs.oracle. com/cd/E26502_01/html/E28388/ennbz.html.
Pytorch: An imperative style, high-performance deep learning library. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Advances in neural information processing systems. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in neural information processing systems. 8026-8037.
Kexin Pei, Junfeng Zhou Xuan, Suman Yang, Baishakhi Jana, Ray, arXiv:2012.08680TREX: Learning Execution Semantics from Micro-Traces for Binary Similarity. arXiv preprintKexin Pei, Zhou Xuan, Junfeng Yang, Suman Jana, and Baishakhi Ray. 2020. TREX: Learning Execution Semantics from Micro-Traces for Binary Similarity. arXiv preprint arXiv:2012.08680 (2020).
Deep contextualized word representations. E Matthew, Mark Peters, Mohit Neumann, Matt Iyyer, Christopher Gardner, Kenton Clark, Luke Lee, Zettlemoyer, Proceedings of NAACL-HLT. NAACL-HLTMatthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT. 2227-2237.
Pre-trained models for natural language processing: A survey. Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang, 10.1007/s11431-020-1647-3Science China Technological Sciences. 63Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. Science China Technological Sciences 63, 10, 1872-1897. https://doi.org/10.1007/s11431- 020-1647-3
Improving language understanding by generative pre-training. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training (2018). URL http://openai-assets.s3.amazonaws.com/research-covers/language- unsupervised/language_understanding_paper.pdf (2018).
Malware Detection by Eating a Whole EXE. Edward Raff, Jon Barker, Jared Sylvester, Robert Brandon, Bryan Catanzaro, Charles Nicholas, AAAI-2018 Workshop on Artificial Intelligence for Cyber Security. Edward Raff, Jon Barker, Jared Sylvester, Robert Brandon, Bryan Catanzaro, and Charles Nicholas. 2018. Malware Detection by Eating a Whole EXE. In AAAI-2018 Workshop on Artificial Intelligence for Cyber Security.
Neural Machine Translation of Rare Words with Subword Units. Rico Sennrich, Barry Haddow, Alexandra Birch, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsLong Papers1Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 1715-1725.
Recognizing functions in binaries with neural networks. 24th {USENIX} Security Symposium ({USENIX} Security 15. Eui Chul Richard Shin, Dawn Song, and Reza MoazzeziEui Chul Richard Shin, Dawn Song, and Reza Moazzezi. 2015. Recognizing functions in binaries with neural networks. In 24th {USENIX} Security Symposium ({USENIX} Security 15). 611-626.
Sequence to sequence learning with neural networks. Ilya Sutskever, Oriol Vinyals, Quoc V Le, Advances in neural information processing systems. 27Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. Advances in neural information processing systems 27 (2014), 3104-3112.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, Advances in neural information processing systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems. 5998-6008.
Neural network-based graph embedding for cross-platform binary code similarity detection. Xiaojun Xu, Chang Liu, Qian Feng, Heng Yin, Le Song, Dawn Song, Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. the 2017 ACM SIGSAC Conference on Computer and Communications SecurityXiaojun Xu, Chang Liu, Qian Feng, Heng Yin, Le Song, and Dawn Song. 2017. Neural network-based graph embedding for cross-platform binary code similarity detection. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 363-376.
Learning Binary Code with Deep Learning to Detect Software Weakness. Lee Young Jun, Choi Sang-Hoon, Kim Chulwoo, Lim Seung-Ho, Park Ki-Woong, KSII The 9th International Conference on Internet (ICONI) 2017 Symposium. Lee Young Jun, Choi Sang-Hoon, Kim Chulwoo, Lim Seung-Ho, and Park Ki- Woong. 2017. Learning Binary Code with Deep Learning to Detect Software Weakness. In KSII The 9th International Conference on Internet (ICONI) 2017 Symposium.
Order Matters: Semantic-Aware Neural Networks for Binary Code Similarity Detection. Zeping Yu, Rui Cao, Qiyi Tang, Sen Nie, Junzhou Huang, Shi Wu, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Zeping Yu, Rui Cao, Qiyi Tang, Sen Nie, Junzhou Huang, and Shi Wu. 2020. Order Matters: Semantic-Aware Neural Networks for Binary Code Similarity Detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 1145-1152.
Neural Machine Translation Inspired Binary Code Similarity Comparison beyond Function Pairs. Fei Zuo, Xiaopeng Li, Zhexin Zhang, Patrick Young, Lannan Luo, Qiang Zeng, NDSS. Fei Zuo, Xiaopeng Li, Zhexin Zhang, Patrick Young, Lannan Luo, and Qiang Zeng. 2019. Neural Machine Translation Inspired Binary Code Similarity Comparison beyond Function Pairs. In NDSS.
| []
|
[
"Self-aware Social Learning over Graphs",
"Self-aware Social Learning over Graphs"
]
| [
"Konstantinos Ntemos ",
"Virginia Bordignon ",
"Stefan Vlaski ",
"Ali H Sayed "
]
| []
| []
| In this paper we study the problem of social learning under multiple true hypotheses and self-interested agents which exchange information over a graph. In this setup, each agent receives data that might be generated from a different hypothesis (or state) than the data other agents receive. In contrast to the related literature in social learning, which focuses on showing that the network achieves consensus, here we study the case where every agent is self-interested and wants to find the hypothesis that generates its own observations. However, agents do not know which ones of their peers wants to find the same state with them and as a result they do not know which agents they should cooperate with. To this end, we propose a scheme with adaptive combination weights and study the consistency of the agents' learning process. The scheme allows each agent to identify and collaborate with neighbors that observe the same hypothesis, while excluding others, thus resulting in improved performance compared to both non-cooperative learning and cooperative social learning solutions. We analyze the asymptotic behavior of agents' beliefs under the proposed social learning algorithm and provide sufficient conditions that enable all agents to correctly identify their true hypotheses. The theoretical analysis is corroborated by numerical simulations. | 10.1109/tit.2023.3276214 | [
"https://arxiv.org/pdf/2110.13292v1.pdf"
]
| 239,885,511 | 2110.13292 | 609e08b8774f518d175fb67f83a42f94841f3c79 |
Self-aware Social Learning over Graphs
Konstantinos Ntemos
Virginia Bordignon
Stefan Vlaski
Ali H Sayed
Self-aware Social Learning over Graphs
Index Terms—social learning, self-interested agents, information diffusion
In this paper we study the problem of social learning under multiple true hypotheses and self-interested agents which exchange information over a graph. In this setup, each agent receives data that might be generated from a different hypothesis (or state) than the data other agents receive. In contrast to the related literature in social learning, which focuses on showing that the network achieves consensus, here we study the case where every agent is self-interested and wants to find the hypothesis that generates its own observations. However, agents do not know which ones of their peers wants to find the same state with them and as a result they do not know which agents they should cooperate with. To this end, we propose a scheme with adaptive combination weights and study the consistency of the agents' learning process. The scheme allows each agent to identify and collaborate with neighbors that observe the same hypothesis, while excluding others, thus resulting in improved performance compared to both non-cooperative learning and cooperative social learning solutions. We analyze the asymptotic behavior of agents' beliefs under the proposed social learning algorithm and provide sufficient conditions that enable all agents to correctly identify their true hypotheses. The theoretical analysis is corroborated by numerical simulations.
I. INTRODUCTION
Social learning [1], [2], [3], [4], [5], [6], [7], [8], [9] refers to the distributed hypothesis testing problem where agents exchange information over a graph and aim at learning an unknown hypothesis of interest. Every agent has access to its own data (observations), as well as to information provided by its neighbors. Furthermore, every agent has access to likelihood functions that provide the probability of every observation being generated by every possible hypothesis. Under the social learning paradigm, every agent utilizes its own observations along with their likelihood functions to perform a Bayesian update of its belief vector (a probability distribution over the possible hypotheses). Moreover, every agent uses a fusion rule (e.g., the linear rule [4], [6], or the log-linear rule [1], [2], [3]) to incorporate the belief vectors exchanged with its neighbors. By using this procedure, and under some assumptions, every agent's beliefs converge to the hypothesis that best explains all agents' models (i.e., likelihood functions). In this way, network consensus is achieved (i.e., all agents' beliefs converge to the same hypothesis).
The setup we consider in this work is close to the conflicting hypotheses setup considered in [3], where the observations of each agent are generated according to some unknown distribution and the authors show that all agents' beliefs converge to the hypothesis that best "explains" all agents' models. In contrast, in this work we are interested in studying the problem where each agent wants to find its individual true hypothesis, instead of converging to a consensus.
There are many reasons for which this problem is interesting, as in many cases consensus does not describe the system's behavior. Real-life social networks constitute one example, where there are disparate opinions among the various interacting parties. Another example is the scenario where a network of communicating classifiers uses the social learning protocol, as in [10], to classify scenes from different classes. Finally, sensor networks where the agents receive observations generated from different sources are another example of interest.
One main challenge in this problem is the fact that the agents are unaware of which other agents want to find the same state as them. As a result, agents do not know which agents they should cooperate with and which agents' shared information they should disregard. To tackle the problem, we use the idea that agents' cooperative beliefs are driven by the agents' private information. More specifically, every agent uses its private information (i.e., its own observations) to form a local belief about its true state and exchanges this local belief with its neighbors. Based on these exchanged local beliefs, the combination weights are formed in a way that is proportional to the probability that the agents want to find the same state. Our contributions are the following.
1) We propose a scheme with adaptive combination weights that utilizes the agents' private information and helps agents in identifying other agents that aim at finding the same hypotheses. In this way we extend the social learning algorithm proposed in [1], [2] to the problem of multiple true hypotheses and self-interested agents.
2) We analyze the asymptotic behavior of agents' beliefs and characterize the agents' belief evolution at steady-state.
3) We provide sufficient conditions under which the agents in the network manage to learn their true hypotheses.
The problem we study is close to the problem of multitask learning over networks studied in [11], [12], [13], [14], [15]. In [11], [12], every agent aims at estimating its true parameter vector, which might be different from the target vector of other agents. The authors devise an adaptive combination policy to correctly identify the neighbors with which agents should cooperate to correctly estimate their true parameter vectors. The agents adapt their combination weights based on a Mean Square Deviation (MSD) criterion and a diffusion Least Mean Squares (LMS) algorithm is developed. A different approach was followed in [14], where every agent keeps a stand-alone LMS estimate (updated based only on the agent's own signals and not on information from neighbors). At every time instant, every agent performs a binary hypothesis test to decide whether each of its neighbors is interested in the same parameter vector. Related formulations followed in [13], [15], [16], [17], [18], [19], [20] and references therein.
The aforementioned works focus on parameter estimation tasks, with agents either aware or unaware that they aim at identifying different parameter vectors. Here, we focus on the distributed hypothesis testing problem where every agent aims at identifying an underlying hypothesis of interest and is not aware of which agents aim at finding the same hypothesis and which aim at finding a different one. Thus, our work is closer to the multi-task decision problem studied in [21]. In [21], an LMS-type algorithm is devised. In contrast, here we study the social learning problem where every agent performs local Bayesian updates before exchanging information with its neighbors. Thus, our results neither imply, nor are implied by, the results in [21]. An interesting result that emerges from our analysis is that identifiability (i.e., the ability of an agent to correctly distinguish among the different hypotheses) plays a crucial role in the outcome of the learning process over the network.
A. Notation
We use the notation $\overset{\text{a.s.}}{\longrightarrow}$ and $\overset{\text{P}}{\longrightarrow}$ to denote almost sure convergence and convergence in probability, respectively. $\mathbb{I}_s$ denotes the indicator function, which is equal to 1 if the statement $s$ is true and 0 otherwise. $\mathrm{blockdiag}\{A_1, \ldots, A_n\}$ denotes the block-diagonal matrix composed of the matrices $A_1, \ldots, A_n$, and $\mathbb{1}$ denotes the all-ones vector. $|\cdot|$ denotes the cardinality of a set.
II. PROBLEM FORMULATION
We assume a set $\mathcal{N} = \{1, \ldots, N\}$ of agents interacting over a network, which is represented by an undirected graph $\mathcal{G} = (\mathcal{N}, \mathcal{E})$, where $\mathcal{E}$ includes bidirectional links between agents. The set of neighbors of an agent $k$ (including agent $k$) is denoted by $\mathcal{N}_k$. In contrast to the usual setup, here we assume a heterogeneous setting, where there exist multiple true hypotheses that agents want to retrieve. The set of all possible hypotheses is denoted by $\Theta = \{\theta_1, \ldots, \theta_M\}$.
We assume that each agent k has access to observations ζ k,i ∈ Z k at every time i ≥ 1. Agent k also has access to the likelihood functions L k (ζ k,i |θ), θ ∈ Θ. The signals ζ k,i are independent and identically distributed (i.i.d.) over time. In this work, the sets Z k are assumed to be finite. We will use the notation L k (θ) instead of L k (ζ k,i |θ) whenever it is clear from the context. Every agent k's true hypothesis θ (k) is drawn according to some probability P(θ (k) ) initially and remains unchanged throughout the process. Agent k's observations are generated according to the model
ζ k,i ∼ L k (ζ k,i |θ (k) = θ (k) ), θ (k) ∈ Θ(1)
and the states θ (k) are independent across agents, meaning that P(θ (k) , θ ( ) ) = P(θ (k) )P(θ ( ) ).
Agents' observations are possibly generated by different hypotheses and each agent k aims at finding the realization θ (k) of its true hypothesis θ (k) ∈ Θ according to which ζ k,i are created.
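To make the observation model (1) concrete, the following minimal Python sketch (all sizes, names, and the random likelihoods are illustrative choices of ours, not part of the paper) draws each agent's true hypothesis and then samples a private observation stream from the corresponding likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N agents, M hypotheses, |Z_k| = Z possible observations per agent.
N, M, Z = 3, 3, 5

# likelihoods[k][m] is the pmf L_k(. | theta_m) over the Z possible observations.
likelihoods = rng.dirichlet(np.ones(Z), size=(N, M))

# Each agent's true hypothesis theta^(k), drawn independently across agents.
true_state = rng.integers(0, M, size=N)

def sample_observation(k):
    """Draw zeta_{k,i} ~ L_k(. | theta^(k)) for agent k, as in model (1)."""
    return rng.choice(Z, p=likelihoods[k, true_state[k]])

stream = [[sample_observation(k) for _ in range(10)] for k in range(N)]
```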
Agents share information with their neighbors in a distributed fashion. This information can be utilized to find the underlying true hypothesis by forming beliefs, which are probability distributions over the set of hypotheses Θ. We consider the log-linear social learning rule [1], [5], where the agents update their beliefs, denoted by ν_{k,i}, in the following manner:
ϕ_{k,i}(θ) = L_k(ζ_{k,i}|θ) ν_{k,i−1}(θ) / Σ_{θ′} L_k(ζ_{k,i}|θ′) ν_{k,i−1}(θ′)   (2)
ν_{k,i}(θ) = Π_{ℓ∈N_k} (ϕ_{ℓ,i}(θ))^{a_{ℓk}} / Σ_{θ′} Π_{ℓ∈N_k} (ϕ_{ℓ,i}(θ′))^{a_{ℓk}},   k ∈ N   (3)
where a_{ℓk} denotes the static (time-invariant) combination weight assigned by agent k to neighboring agent ℓ, satisfying 0 < a_{ℓk} ≤ 1 for all ℓ ∈ N_k, a_{ℓk} = 0 for all ℓ ∉ N_k, and Σ_{ℓ∈N_k} a_{ℓk} = 1. Let A denote the combination matrix, which consists of all combination weights, with [A]_{ℓk} = a_{ℓk}. Clearly, A is left-stochastic.
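For illustration, a minimal NumPy sketch of one iteration of the update (2)-(3) is shown below; the array layout and function name are our own assumptions, and `A[l, k]` plays the role of a_{ℓk}.

```python
import numpy as np

def social_learning_step(nu, obs, likelihoods, A, neighbors):
    """One step of (2)-(3): Bayesian adaptation followed by weighted geometric averaging.

    nu:          (N, M) array; nu[k] is agent k's belief over the M hypotheses.
    obs:         length-N sequence; obs[k] is the observation zeta_{k,i}.
    likelihoods: (N, M, Z) array; likelihoods[k, m, z] = L_k(z | theta_m).
    A:           (N, N) left-stochastic matrix with A[l, k] = a_{lk}.
    neighbors:   neighbors[k] lists N_k (including k itself).
    """
    N, M = nu.shape
    # Adaptation step (2): multiply by the likelihood of the new private observation.
    phi = np.array([likelihoods[k, :, obs[k]] * nu[k] for k in range(N)])
    phi /= phi.sum(axis=1, keepdims=True)
    log_phi = np.log(phi + 1e-300)          # guard against log(0)
    # Combination step (3): geometric average of the neighbors' intermediate beliefs.
    new_nu = np.zeros_like(nu)
    for k in range(N):
        log_mix = sum(A[l, k] * log_phi[l] for l in neighbors[k])
        w = np.exp(log_mix - log_mix.max())
        new_nu[k] = w / w.sum()
    return new_nu
```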
It is known that if agents use the above algorithm, under the assumption of a strongly connected network (information flows from every agent to any other agent in the network and at least one agent has a self-loop, a kk > 0) [22], then the network achieves consensus [1], [3], [4], [2], thus ruling out the possibility for agents with different true states to correctly identify them.
In order for agents to find their true state, they should evaluate over time whether the information received from the neighborhood is beneficial to them or not. This means that they have to decide whether the information received from their neighbors should be taken into account in the information aggregation step (3). One way to do so is to dynamically adjust the combination weights according to whether they believe each neighbor aims at finding the same state or not.
III. ADAPTIVE COMBINATION WEIGHTS
The idea is that the weights assigned by agent k should be zero towards every neighbor that tries to find a different state than θ^{(k)}. However, this information is not known beforehand. If an agent can identify its true state alone, then it might be better for that agent not to cooperate and just perform stand-alone Bayesian learning. In this way, it will be guaranteed to converge to its true hypothesis without being misled by other agents. However, some agents might not be able to find their true states alone. This happens when, for an agent k, its true state θ^{(k)} is observationally equivalent to some other θ ≠ θ^{(k)}. In that case this agent will be unable to find its true state without other agents' help. We define the set of states that are observationally equivalent to θ^{(k)} as follows.
Definition 1. (Observationally equivalent states). The set
Θ_k ≜ { θ ∈ Θ : L_k(ζ_k|θ) = L_k(ζ_k|θ^{(k)}), ∀ζ_k ∈ Z_k }   (4)
is comprised of all states that are observationally equivalent to θ (k) for an agent k ∈ N .
Note that θ^{(k)} is always contained in Θ_k. Before we introduce the adaptive combination weights mechanism, we provide a motivating example. In the network example presented in Fig. 1, the set of possible hypotheses is Θ = {θ_1, θ_2, θ_3} and the true hypotheses of agents 1, 2, 3 are θ_2, θ_2, θ_3, respectively. However, agent 1 cannot distinguish between hypotheses θ_1 and θ_2. Since agent 1 communicates with both agents 2 and 3, it may not converge to hypothesis θ_2. However, if agent 1 over time realizes that agent 3's true hypothesis is θ_3 (i.e., different from agent 1's true hypothesis), then it can cut off the link with agent 3 and find its true hypothesis with the aid of agent 2 (which can find θ_2 alone, as Θ_2 = {θ_2}), provided that agent 2 also realizes that its true hypothesis differs from agent 3's and cuts off its link to agent 3 as well. Our goal is to devise an adaptive mechanism that enables agents to discriminate over time between neighbors that aim at finding the same hypothesis as them and neighbors that do not. To this end, each agent k at every time i can form a local belief about the unknown hypothesis θ^{(k)} based only on its own observations ζ_{k,1:i} = (ζ_{k,1}, . . . , ζ_{k,i}) up to time i. These local beliefs do not contain any misleading information from other agents and are given by
π k,i (θ) = P(θ (k) = θ|ζ k,1:i ), θ ∈ Θ(5)
where π k,i is the posterior belief over θ (k) given the sequence of private observations of agent k. The belief π k,i can be computed given π k,i−1 and ζ k,i recursively according to Bayes' rule:
π_{k,i}(θ) = L_k(ζ_{k,i}|θ) π_{k,i−1}(θ) / Σ_{θ′∈Θ} L_k(ζ_{k,i}|θ′) π_{k,i−1}(θ′).   (6)
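A short NumPy rendering of recursion (6) could look as follows (the names are illustrative, not from the paper):

```python
import numpy as np

def local_bayes_update(pi_prev, obs, lik_k):
    """Recursion (6): pi_{k,i}(theta) is proportional to L_k(zeta_{k,i}|theta) * pi_{k,i-1}(theta).

    pi_prev: length-M belief vector pi_{k,i-1}.
    obs:     integer observation zeta_{k,i} in {0, ..., Z-1}.
    lik_k:   (M, Z) array with lik_k[m, z] = L_k(z | theta_m).
    """
    post = lik_k[:, obs] * pi_prev
    return post / post.sum()
```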
Now, we can design a scheme based on the local beliefs so that the weights assigned to every neighbor by agent k evolve according to the probability that the two agents are trying to find the same hypothesis (i.e., θ (k) = θ ( ) ). Let us denote the event that the two agents have the same hypothesis by
S_{kℓ} ≜ {θ^{(k)} = θ^{(ℓ)}} = ∪_{θ∈Θ} S^θ_{kℓ},   k ≠ ℓ   (7)
where
S^θ_{kℓ} ≜ {θ^{(k)} = θ ∩ θ^{(ℓ)} = θ},   k ≠ ℓ   (8)
is the event that both agent k's and agent ℓ's true state is θ. Since the S^θ_{kℓ} are disjoint events for different θ,
P( ∪ θ∈Θ S θ k ) = θ∈Θ P(S θ k )(9)
Obviously, the probability that agent k has the same state as itself is 1. Then, the weight that each agent k may assign to its neighbor ℓ can be set to
a_{ℓk,i} = P(S_{kℓ} | ζ_{k,1:i}, ζ_{ℓ,1:i}) / σ_{k,i},  if ℓ ∈ N_k \ {k};   1/σ_{k,i},  if ℓ = k;   0,  otherwise,   (10)
where N_k \ {k} denotes the set of neighbors of agent k excluding k itself, and σ_{k,i} is a normalizing factor that ensures that A_i is left-stochastic.
Construction (10) ensures that agent k incorporates information from agent ℓ in a manner that is proportional to the probability that agents k and ℓ are observing the same state. As agents gain confidence in their true state over time, this allows them to exclude inconsistent information and collaborate only with agents who observe data that do not conflict with their local models.
In the sequel, we first show that agents are able to efficiently compute P(S_{kℓ}|ζ_{k,1:i}, ζ_{ℓ,1:i}) and then establish formally that the resulting learning process is consistent.
Lemma 1. (Conditional probability of two agents sharing the same hypothesis). The probability of two distinct agents k, ℓ having the same state, conditioned on the joint observations ζ_{k,1:i}, ζ_{ℓ,1:i}, is given by
P(S_{kℓ}|ζ_{k,1:i}, ζ_{ℓ,1:i}) = Σ_θ π_{k,i}(θ) π_{ℓ,i}(θ)   (11)
Proof. See Appendix A.
Utilizing Lemma 1, the normalizing factor is given by
σ_{k,i} = 1 + Σ_{ℓ∈N_k\{k}} Σ_{θ∈Θ} π_{k,i}(θ) π_{ℓ,i}(θ).   (12)
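The construction (10)-(12) can be sketched in a few lines of Python (our own illustrative rendering; the inner product implements the identity (11) of Lemma 1):

```python
import numpy as np

def adaptive_weights(pi, neighbors, k):
    """Weights a_{lk,i} of (10) for agent k, using (11)-(12).

    pi:        (N, M) array of local beliefs pi_{l,i}.
    neighbors: neighbors[k] = N_k, including k itself.
    Returns a length-N vector summing to 1 (column k of A_i).
    """
    N = pi.shape[0]
    a = np.zeros(N)
    # P(S_kl | data) = sum_theta pi_k(theta) pi_l(theta), cf. (11).
    same_state_prob = {l: float(pi[k] @ pi[l]) for l in neighbors[k] if l != k}
    sigma = 1.0 + sum(same_state_prob.values())      # normalization (12)
    for l, p in same_state_prob.items():
        a[l] = p / sigma
    a[k] = 1.0 / sigma
    return a
```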
Note that according to construction (10), we have that a kk,i > 0 for all i ≥ 1 and for all k ∈ N . In order to account for the information from local beliefs, agents perform two parallel updates. A non-cooperative update, in which the local belief π k,i is formed by using (6), which is then shared with every neighbor of k; and a social learning update. The novel part introduced in the social learning algorithm (2), (3) is that the combination step utilizes the adaptive combination weights A i instead of static weights. More specifically, every agent k ∈ N updates its cooperative belief µ k,i according to the following procedure:
ψ_{k,i}(θ) = L_k(ζ_{k,i}|θ) µ_{k,i−1}(θ) / Σ_{θ′} L_k(ζ_{k,i}|θ′) µ_{k,i−1}(θ′),   k ∈ N   (13)
µ_{k,i}(θ) = Π_{ℓ∈N_k} ψ_{ℓ,i}(θ)^{a_{ℓk,i}} / Σ_{θ′} Π_{ℓ∈N_k} ψ_{ℓ,i}(θ′)^{a_{ℓk,i}},   k ∈ N.   (14)
We call µ k,i cooperative beliefs. For simplicity, and since agents do not have any prior evidence on their true state, we impose the following assumption on the prior local beliefs π k,0 (θ) and prior cooperative beliefs µ k,0 (θ).
Assumption 1. (Uniform prior beliefs). The prior beliefs of all agents are uniform
π k,0 (θ) = µ k,0 (θ) = 1/|Θ|, k ∈ N , θ ∈ Θ. (15)
A. Behavior of Adaptive Weights
The behavior of the adaptive weights depends on the evolution of local beliefs. The next result characterizes the behavior of local beliefs π k,i over time. Before presenting the result, we impose the following technical assumption [23].
Assumption 2. (Likelihood functions with full support). L k (ζ|θ) > α for some α > 0 for all ζ ∈ Z k and for all θ ∈ Θ.
From (11), it follows that the ability of agent k to correctly reject inconsistent information from its neighbor ℓ is driven by its ability to reject inconsistent states θ ∉ Θ_k through π_{k,i}(θ). We begin by studying its evolution.
Proposition 1. (Rate of rejection of false hypotheses for local beliefs). Under Assumptions 1 and 2, for all θ ∉ Θ_k the following is true:
P( π_{k,i}(θ) ≥ exp(x_k i) ) ≤ exp(−y_k i)   (16)
where
x_k ≜ −(1/2) min_{θ∉Θ_k} d_k(θ)   (17)
y_k ≜ min_{θ∉Θ_k} d²_k(θ) / (8 (log α)²)   (18)
and α is given by Assumption 2 and
d k (θ) D KL L k (θ (k) )||L k (θ)(19)
denotes the KL divergence for an agent k ∈ N between L k (θ (k) ) and L k (θ).
Proof. See Appendix B.
Based on the evolution of the local beliefs π k,i (θ), we can now investigate the behavior of the adaptive weights.
The following result characterizes the asymptotic behavior of the adaptive combination weights.
Theorem 1. (Limiting behavior of the adaptive combination weights). The adaptive combination weights exhibit the following limiting behavior as i → ∞ for every agent k ∈ N :
a_{ℓk,i} a.s.−→ η_{kℓ} / (1 + Σ_{ℓ∈N_k\{k}} η_{kℓ}),  if ℓ ∈ N_k \ {k};   1 − (Σ_{ℓ∈N_k\{k}} η_{kℓ}) / (1 + Σ_{ℓ∈N_k\{k}} η_{kℓ}),  if ℓ = k;   0,  otherwise,   (20)
where η_{kℓ} ≜ |Θ_k ∩ Θ_ℓ| / (|Θ_k||Θ_ℓ|). Proof. See Appendix C.
We observe that if an agent k can identify its true hypothesis alone (i.e., Θ_k = {θ^{(k)}}), then it will asymptotically assign positive weights only to the neighbors ℓ ∈ N_k for which θ^{(k)} is within their optimal hypothesis set (i.e., θ^{(k)} ∈ Θ_ℓ). We see here the implications of the identifiability capabilities of the agents. For example, if all agents can identify their true hypothesis alone (i.e., Θ_k = {θ^{(k)}} for all k ∈ N), then the network will (asymptotically) decompose into components where every agent communicates only with the neighbors that aim at finding the same hypothesis as it.
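As a quick numerical illustration of (20), the sketch below computes the limiting weights for the three-agent network of Fig. 1, under the assumption (taken from the motivating example) that Θ_1 = {θ_1, θ_2}, Θ_2 = {θ_2}, Θ_3 = {θ_3}; the resulting columns should coincide with the limiting matrix reported in (31) below. The helper name and data layout are ours.

```python
from fractions import Fraction

# Illustrative identifiability sets for the fully connected example of Fig. 1.
Theta_k = {1: {"t1", "t2"}, 2: {"t2"}, 3: {"t3"}}
neighbors = {1: [2, 3], 2: [1, 3], 3: [1, 2]}   # N_k without k itself

def limiting_column(k):
    """Weights agent k assigns asymptotically, i.e., column k of A_inf per (20)."""
    eta = {l: Fraction(len(Theta_k[k] & Theta_k[l]),
                       len(Theta_k[k]) * len(Theta_k[l]))
           for l in neighbors[k]}
    denom = 1 + sum(eta.values())
    col = {l: eta[l] / denom for l in neighbors[k]}
    col[k] = 1 - sum(col.values())
    return col

for k in (1, 2, 3):
    print(k, limiting_column(k))
# Agent 1 keeps 2/3 for itself, gives 1/3 to agent 2 and 0 to agent 3;
# agent 2 mirrors this; agent 3 becomes isolated, matching (31).
```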
IV. ANALYSIS OF THE ALGORITHM
In this section we examine whether the adaptive combination scheme is sufficient to drive the agents' cooperative beliefs µ_{k,i} to the individually correct hypotheses. First, we observe from Theorem 1 that the combination matrix A_i converges to a limiting matrix A_∞ with elements [A_∞]_{ℓk} = a_{ℓk,∞}, defined as
A_∞ ≜ lim_{i→∞} A_i.   (21)
In order to study the evolution of the cooperative beliefs µ k,i generated by our proposed algorithm with adaptive combination weights (13)-(14), we will show that they track the evolution of beliefs generated by the algorithm (2)-(3) with the steady-state combination matrix A ∞ , which is much simpler to analyze. More specifically, we will show that asymptotically µ k,i tracks µ c k,i which is given by
ψ^c_{k,i}(θ) = L_k(ζ_{k,i}|θ) µ^c_{k,i−1}(θ) / Σ_{θ′} L_k(ζ_{k,i}|θ′) µ^c_{k,i−1}(θ′),   k ∈ N   (22)
µ^c_{k,i}(θ) = Π_{ℓ∈N_k} (ψ^c_{ℓ,i}(θ))^{a_{ℓk,∞}} / Σ_{θ′} Π_{ℓ∈N_k} (ψ^c_{ℓ,i}(θ′))^{a_{ℓk,∞}},   k ∈ N.   (23)
The evolution of the beliefs generated by (22)-(23) has been analyzed for a time-invariant combination matrix for both strongly-connected [1], [5] and weakly-connected networks [24]. First we prove a useful lemma that characterizes the structure of A_∞.
Lemma 2. (Structure of A_∞). A_∞ is comprised of S ∈ N disjoint strongly-connected components, meaning
A_∞ = blockdiag{A_∞,1, . . . , A_∞,S}.   (24)
Then, we have
Ā^T_∞ ≜ lim_{t→∞} (A^T_∞)^t = blockdiag{p_1 1^T, . . . , p_S 1^T},   (25)
where p_s, s ∈ {1, . . . , S}, is the Perron eigenvector of A_∞,s. Proof. See Appendix D.
Let us define the set N̄_s, s ∈ {1, . . . , S}, as the set of agents whose combination weights comprise A_∞,s. Furthermore, let us define for N̄_s the sub-network confidence for a state θ ∈ Θ as
C_s(θ) ≜ − Σ_{k∈N̄_s} p_s(k) D_KL(L_k(θ^{(k)}) || L_k(θ)),   (26)
where p_s(k) is the k-th element of p_s, and let
Θ̄_s ≜ { θ_s ≜ arg max_{θ∈Θ} C_s(θ) }.   (27)
This set is comprised of the hypotheses that best describe the sub-network agents' observation models, weighted by their centrality. Now, we can provide the main result, which characterizes the evolution of the cooperative beliefs µ_{k,i}.
Theorem 2. (Cooperative beliefs convergence and consistent learning). For any agent k ∈ N̄_s:
1) The cooperative beliefs converge to 0 in probability, meaning µ_{k,i}(θ) P.−→ 0, for every θ ∉ Θ̄_s.
2) Agent k learns its true state, meaning µ_{k,i}(θ^{(k)}) P.−→ 1, if Θ̄_s = {θ^{(k)}}.
Proof. See Appendix E.
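The quantities (26)-(27) that appear in the theorem can be computed directly from the likelihoods. The following sketch is our own rendering (it assumes strictly positive likelihoods, consistent with Assumption 2, and an externally supplied Perron vector) and returns C_s(θ) together with the maximizing set Θ̄_s for one component.

```python
import numpy as np

def subnetwork_confidence(p_s, lik, true_state, members):
    """Confidence C_s(theta) of (26) and the set of maximizers (27).

    p_s:        dict agent -> Perron weight p_s(k) of the component.
    lik:        (N, M, Z) array of likelihoods L_k(. | theta_m), strictly positive.
    true_state: length-N array of indices theta^(k).
    members:    list of agents in the component (the set N_bar_s).
    """
    M = lik.shape[1]
    C = np.zeros(M)
    for theta in range(M):
        for k in members:
            p_true = lik[k, true_state[k]]
            q = lik[k, theta]
            kl = np.sum(p_true * np.log(p_true / q))   # D_KL(L_k(theta^(k)) || L_k(theta))
            C[theta] -= p_s[k] * kl
    best = np.flatnonzero(np.isclose(C, C.max()))       # indices forming Theta_bar_s
    return C, best
```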
There is one more question of interest to answer. We see from the result above that whether an agent is able to learn its true state depends on the structure of the sub-network N̄_s. However, from Theorem 1 we see that this structure depends on the identifiability capabilities of the agents (i.e., on the sets Θ_k) and on the graph topology given by G. The following result provides conditions that guarantee that every agent in the network will find its true state.
Corollary 1. (Globally consistent learning). Under the proposed adaptive combination scheme, every agent k ∈ N learns its true state, meaning
µ k,i (θ (k) ) P. −→ 1, ∀k ∈ N(28)
if both of the following hold:
Θ_k ∩ Θ_ℓ = ∅,  ∀k ∈ N, ∀ℓ ∈ N_k such that θ^{(k)} ≠ θ^{(ℓ)}   (29)
∩_{ℓ∈N̄_s} Θ_ℓ = {θ^{(k)}},  ∀s ∈ {1, . . . , S} such that k ∈ N̄_s.   (30)
Proof. If (29) holds for two neighbors k, ℓ with different states, then we have from Theorem 1 that a_{kℓ,∞} = a_{ℓk,∞} = 0, which means that two agents with different states do not exchange information directly. Since this holds for every two neighbors across the network, there cannot be two agents with different true states in the same sub-network N̄_s. This further implies that in every sub-network all agents have the same true hypothesis, because θ^{(k)} ∈ Θ_k for all k ∈ N.
Then, if (30) holds, we have that for every θ ≠ θ^{(k)} there is at least one agent ℓ ∈ N̄_s such that d_ℓ(θ) > 0. This implies that C_s(θ) < 0 for all θ ≠ θ^{(k)}. Then, from (26) we have that C_s(θ^{(k)}) = 0, which implies that θ_s = θ^{(k)}. Thus, Θ̄_s = {θ^{(k)}} and part 2 of Theorem 2 applies.
Condition (29) ensures that for any two neighboring agents k, ℓ with different states, both agents can rule out the state of the other agent based on their own observations (i.e., θ^{(k)} ∉ Θ_ℓ and θ^{(ℓ)} ∉ Θ_k). This condition ensures that in every formed sub-network all agents share the same true state. Then, condition (30) is needed to ensure that in every sub-network the agents can collectively identify their true state. We illustrate the results by means of some numerical examples.
Example 1. (Consistent learning for all agents). We refer to Figure 1 for which from Theorem 1 we have
A_∞ = [ 2/3 1/3 0 ; 1/3 2/3 0 ; 0 0 1 ]   (31)
where the order of agents' labeling is {1, 2, 3}. As we observe, A ∞ is comprised of two strongly connected components. The limiting matrix in this case is
Ā^T_∞ = [ 1/2 1/2 0 ; 1/2 1/2 0 ; 0 0 1 ].
Then, because (29) and (30) hold for all three agents, all of them converge to their true hypotheses, as we can see in the left plot of Fig. 2. Example 2. (Inconsistent learning). However, in case we have Θ_1 = {θ_1, θ_2, θ_3} for the same example, then (A^T_∞)^i converges to a rank-1 matrix. Thus, the network achieves consensus and all agents' beliefs converge to the same hypothesis (θ_3 in this example). As a result, agents 1 and 2 fail to identify their true states, as we observe in the right plot of Fig. 2. Also, note that in this example (29), (30) are violated.
As we see from Example 2, consistent learning is not always possible. When an agent cannot find its true hypothesis alone, its beliefs will be determined by the information received from its neighborhood.
Another interesting remark is the following. In Example 2, one agent (agent 1) cannot rule out any of the three states. This makes information flow from agent 2 to agent 3, and their cooperative beliefs achieve consensus. Because of that, agent 2's cooperative belief converges to θ_3, despite the fact that agent 2 can find its true state (θ_2) alone. This case has to be taken into account. If an agent's private belief indicates that a particular state θ is unlikely to be its true hypothesis (i.e., π_{k,i}(θ) → 0), then this information should be taken into account to rule out cooperative beliefs that suggest the opposite. One way to do that is by setting a low threshold ε > 0. If for a state θ we have π_{k,i}(θ) < ε and simultaneously µ_{k,i}(θ) > ε, then the cooperative belief µ_{k,i} should be disregarded and the agent should limit itself to using its own local belief for inferring its true hypothesis. To do so, we assume that every agent keeps a global belief vector, which is given by
µ̄_{k,i} = π_{k,i},  if ∃θ : π_{k,i}(θ) < ε and µ_{k,i}(θ) > ε;   µ_{k,i},  otherwise.   (35)
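In code, the safeguard (35) reduces to a simple elementwise test; the sketch below is our own rendering, with an arbitrary illustrative threshold value.

```python
import numpy as np

def global_belief(pi_k, mu_k, eps=1e-3):
    """Rule (35): fall back to the local belief when the cooperative belief places
    non-negligible mass on a state that the private data has already ruled out."""
    conflict = np.any((pi_k < eps) & (mu_k > eps))
    return pi_k if conflict else mu_k
```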
Finally, the proposed social learning algorithm is summarized as follows.
Algorithm. Self Aware Social learning (SASL).
Initialize µ_{k,0}(θ) = π_{k,0}(θ) for all k ∈ N, θ ∈ Θ. For all k ∈ N and i ≥ 1:
1) Obtain ζ_{k,i}.
2) Update for all θ ∈ Θ: ψ_{k,i}(θ) = L_k(ζ_{k,i}|θ) µ_{k,i−1}(θ) / Σ_{θ′} L_k(ζ_{k,i}|θ′) µ_{k,i−1}(θ′).
3) Update for all θ ∈ Θ: π_{k,i}(θ) = L_k(ζ_{k,i}|θ) π_{k,i−1}(θ) / Σ_{θ′∈Θ} L_k(ζ_{k,i}|θ′) π_{k,i−1}(θ′).
4) Exchange π_{k,i} with every ℓ ∈ N_k.
5) Update a_{ℓk,i} via (10).
6) Update for all θ ∈ Θ: µ_{k,i}(θ) = Π_{ℓ∈N_k} ψ_{ℓ,i}(θ)^{a_{ℓk,i}} / Σ_{θ′} Π_{ℓ∈N_k} ψ_{ℓ,i}(θ′)^{a_{ℓk,i}}.
7) Compute the global belief µ̄_{k,i} via (35).
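For completeness, a compact end-to-end NumPy sketch of steps 1)-7) is given below; the network, likelihoods, horizon, and threshold are illustrative assumptions of ours, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, Z, T, eps = 3, 3, 6, 300, 1e-3
lik = rng.dirichlet(np.ones(Z), size=(N, M))          # L_k(. | theta_m), full support
true_state = np.array([1, 1, 2])                      # agents 0 and 1 share a true state
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}         # N_k without k itself

pi = np.full((N, M), 1.0 / M)                         # local beliefs
mu = np.full((N, M), 1.0 / M)                         # cooperative beliefs

for i in range(T):
    obs = [rng.choice(Z, p=lik[k, true_state[k]]) for k in range(N)]
    # Steps 2)-3): Bayesian updates of the cooperative (psi) and local (pi) beliefs.
    psi = np.array([lik[k, :, obs[k]] * mu[k] for k in range(N)])
    psi /= psi.sum(axis=1, keepdims=True)
    pi = np.array([lik[k, :, obs[k]] * pi[k] for k in range(N)])
    pi /= pi.sum(axis=1, keepdims=True)
    # Steps 4)-5): exchange local beliefs and form the adaptive weights (10)-(12).
    A = np.zeros((N, N))
    for k in range(N):
        probs = {l: float(pi[k] @ pi[l]) for l in neighbors[k]}
        sigma = 1.0 + sum(probs.values())
        for l, p in probs.items():
            A[l, k] = p / sigma
        A[k, k] = 1.0 / sigma
    # Step 6): combination with the adaptive weights.
    log_psi = np.log(psi + 1e-300)
    new_mu = np.zeros_like(mu)
    for k in range(N):
        log_mix = A[k, k] * log_psi[k]
        for l in neighbors[k]:
            log_mix = log_mix + A[l, k] * log_psi[l]
        w = np.exp(log_mix - log_mix.max())
        new_mu[k] = w / w.sum()
    mu = new_mu
    # Step 7): global belief via the safeguard (35).
    mu_bar = np.where(np.any((pi < eps) & (mu > eps), axis=1, keepdims=True), pi, mu)

print(np.round(mu_bar, 3))   # each row should concentrate on that agent's true state
```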
V. EXPERIMENTS
In the following experiments we illustrate the agents' belief evolution for a network of 10 agents. The network is depicted in Fig. 3. To facilitate the illustration of our results in a simple way, we assume that |Z_k| = 10 for all k ∈ N and the set of possible hypotheses is Θ = {θ_1, . . . , θ_10}. In order to highlight the need for an adaptive combination mechanism in updating the cooperative beliefs, we compare the asymptotic beliefs of our proposed scheme to the classic cooperative social learning solution (2)-(3) with a static (time-invariant) combination matrix A (every agent assigns uniform combination weights to its neighbors), as well as to non-cooperative learning (beliefs are given by (6)).
First, we explore a simple setup where every agent has a distinct true state, meaning θ (k) = θ k , for all k ∈ N . Furthermore, none of the agents faces an identification problem and agents' likelihood functions are given by the following expression:
L_k(ζ^y_{k,i}|θ_x) = q_k ∈ (0, 1),  if y = x;   (1 − q_k)/(|Z_k| − 1),  otherwise,   (36)
for all k ∈ N and for x, y, k = 1, . . . , 10, where ζ^y_{k,i} ∈ Z_k denotes the y-th observation of agent k. We set q_k = 0.28 for all k ∈ N. Note that for the likelihood functions given by (36) and for this value of q_k, we have Θ_k = {θ^{(k)}} for all k ∈ N. In Fig. 4 (third row) we observe that each agent k's cooperative beliefs converge to its true hypothesis θ^{(k)} for all k ∈ N. We use a light-green-to-orange colormap to indicate the magnitude of agents' beliefs on their true state: light green indicates beliefs close to 0, while orange indicates beliefs close to 1. Also, note that the conditions given by Corollary 1 are satisfied. On the contrary, the cooperative social learning algorithm leads all agents, except for agent 1, to inconsistent learning, as expected, since the network achieves consensus and the agents' beliefs converge to θ_1, which is the hypothesis maximizing (26) in this example. Also note that non-cooperative learning (second row) is consistent, since every agent can identify its true hypothesis alone. Finally, note that for SASL the network decomposes in steady state into isolated agents, as all adaptive weights assigned by every agent to its neighbors go to 0 and thus no information is exchanged across the network. This is depicted in the third row of Fig. 4, where there are no edges, as we set the edge width between any two connected agents to be proportional to the sum of the respective combination weights between them. The same rationale is followed in the other experiments as well.
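The likelihood family (36) with q_k = 0.28 can be generated, for instance, as follows (an illustrative helper of ours, not the authors' code):

```python
import numpy as np

def make_likelihood(q_k=0.28, num_obs=10, num_hyp=10):
    """Likelihood family of (36): observation y has mass q_k under theta_x when
    y == x, and the remaining mass is spread uniformly over the other outcomes."""
    L = np.full((num_hyp, num_obs), (1.0 - q_k) / (num_obs - 1))
    np.fill_diagonal(L, q_k)
    return L   # rows: hypotheses theta_x, columns: observations zeta^y

L = make_likelihood()
assert np.allclose(L.sum(axis=1), 1.0)
```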
We consider next a more interesting scenario where some agents share the same true hypothesis and some agents face an identification problem. More specifically, the agents' true hypotheses are assigned as follows:
θ (k) = θ 1 , if k ∈ {1, . . . , 5} θ 6 , if k ∈ {6, . . . , 10}.(37)
The agents' likelihood functions are constructed as follows, so that some agents cannot discriminate among some states. For agents 1 and 6, the likelihood functions are given by (36) (they can identify their true hypotheses alone). For agents 2, 3, 4, 5, L_k(ζ^y_{k,i}|θ_x) is given by (36) for x ≥ 6 and by
L k (ζ y k,i |θ x ) = 1 |Z k | , ∀y = 1, . . . , |Z k |(38)
for x ≤ 5. For agents 7, 8, 9, 10 L k (ζ y k,i |θ x ) is given by (36) for x ≤ 5 and by (38) for x ≥ 6.
In this case we see that (29) is satisfied and the network decomposes into two strongly connected components, one consisting of agents 1, 2, 3, 4, 5 and one consisting of agents 6, 7, 8, 9, 10 (see the third row in Fig. 5). We can also verify from (38) that condition (30) holds. As we see in the third row of Fig. 5, all agents converge to their true hypotheses, as expected by Corollary 1, while both the cooperative and non-cooperative solutions lead to inconsistent learning for some of the agents.
Finally, we examine a scenario where the proposed solution fails to achieve consistent learning for all agents. In this setup, some agents again face an identification problem, but conditions (29) and (30) are not satisfied. The true hypotheses are given again by (37), the likelihood functions of agents 1 and 6 are given by (36), but now the likelihood functions for the remaining agents are given by
L_k(ζ^y_{k,i}|θ_x) = 1/|Θ|,  for all y, x, and for k ≠ 1, 6.   (39)
We see that in this case all agents except for 1 and 6 cannot distinguish between any two hypotheses, meaning Θ_k = Θ for all k ≠ 1, 6. Moreover, we can verify that condition (29) does not hold and, as a result, the network does not decompose into two disconnected components (see the third row in Fig. 6). As a result, the network achieves consensus and all agents' beliefs converge to hypothesis θ_1, except for agent 6, which can identify its true hypothesis alone. Of course, the cooperative and non-cooperative solutions lead to inconsistent learning for some agents as well, with the results being strictly worse compared to SASL (agent 6 does not converge to θ_6 with the cooperative solution, while under the non-cooperative solution agents 2, 3, 4, 5 do not converge to their true hypothesis θ_1).
VI. CONCLUSIONS
In this work the problem of social learning with multiple true hypotheses and self-interested agents was investigated. Contrary to previous works that aim at showing that the network achieves consensus, here we investigated the scenario where every agent wants to converge to its true hypothesis. For this reason, we devised an adaptive combination weights scheme based on agents' private information and studied the performance of the proposed social learning algorithm. We provided conditions under which every agent in the network successfully learns its true hypothesis and we illustrated the learning behavior of the agents via computer simulations.
APPENDIX A PROOF OF LEMMA 1
Let us define for convenience the following:
p k,i|θ P(ζ k,1:i |θ (k) = θ), θ ∈ Θ (40) p ,i|θ P(ζ ,1:i |θ ( ) = θ), θ ∈ Θ (41) qθ(k) P(θ (k) =θ (k) ),θ (k) ∈ Θ (42) qθ( ) P(θ ( ) =θ ( ) ),θ ( ) ∈ Θ.(43)
For every two agents k = let us consider the joint conditional probability P(S k |ζ k,1:i , ζ ,1:i )
(9) = θ∈Θ P(θ (k) = θ, θ ( ) = θ|ζ k,1:i , ζ ,1:i ) (a) = θ∈Θ p k,i|θ p ,i|θ P(θ (k) = θ, θ ( ) = θ) P(ζ k,1:i , ζ ,1:i ) (b) = θ∈Θ p k,i|θ p ,i|θ P(θ (k) = θ)P(θ ( ) = θ) θ(k) ,θ ( ) p k,i|θ (k) p ,i|θ ( ) qθ(k) qθ( ) = θ∈Θ p k,i|θ p ,i|θ P(θ (k) = θ)P(θ ( ) = θ) θ(k) p k,i|θ (k) qθ(k) θ( ) p ,i|θ ( ) qθ( ) = θ∈Θ π k,i (θ)π ,i (θ)(44)
In step (a) Bayes rule was utilised along with the fact that ζ k,1:i and ζ ,1:i are conditionally independent given θ (k) and θ ( ) , respectively.
Step (b) is true due to the assumption that the hypotheses are independent across agents, i.e., P(θ (k) , θ ( ) ) = P(θ (k) )P(θ ( ) ).
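The identity (11) proved above can also be verified numerically by enumerating the pair (θ^{(k)}, θ^{(ℓ)}) exactly as in steps (a)-(b). The short sketch below, with arbitrary illustrative likelihoods and observation records, checks that the two sides agree.

```python
import numpy as np

rng = np.random.default_rng(3)
M, Z, i = 4, 5, 6
L_k = rng.dirichlet(np.ones(Z), size=M)     # L_k(. | theta), rows indexed by theta
L_l = rng.dirichlet(np.ones(Z), size=M)
prior = np.full(M, 1.0 / M)                 # uniform priors on theta^(k), theta^(l)

zk = rng.integers(0, Z, size=i)             # arbitrary observation records
zl = rng.integers(0, Z, size=i)

pk = np.array([np.prod(L_k[t, zk]) for t in range(M)])   # P(zeta_{k,1:i} | theta)
pl = np.array([np.prod(L_l[t, zl]) for t in range(M)])

# Left-hand side of (11): condition on both records by enumerating (theta_k, theta_l).
joint = np.outer(pk * prior, pl * prior)
lhs = np.trace(joint) / joint.sum()

# Right-hand side of (11): inner product of the two local posteriors.
pi_k = pk * prior / (pk * prior).sum()
pi_l = pl * prior / (pl * prior).sum()
rhs = float(pi_k @ pi_l)

assert np.isclose(lhs, rhs)
```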
APPENDIX B PROOF OF PROPOSITION 1
For convenience for a given hypothesis θ ∈ Θ we define the state-specific weights as
a k,i (θ) P(S θ k |ζ k,1:i , ζ ,1:i ) σ k,i = π k,i (θ)π ,i (θ) σ k,i ,(45)
where ℓ ∈ N_k. Then, the following holds:
a k,i = θ∈Θ a k,i (θ)(46)
We prove the result by following the techniques used in Theorem 2 in [3]. We define the log ratio of local beliefs:
λ k,i (θ) log π k,i (θ) π k,i (θ (k) ) , θ / ∈ Θ k , k ∈ N .(47)
Using (6) we have
λ_{k,i}(θ) = Σ_{t=1}^{i} L_{k,t}(θ) + λ_{k,0}(θ),   θ ∉ Θ_k   (48)
where the last term on the right-hand side (RHS) in (48) is equal to 0 due to Assumption 1 and
L k,i (θ) log L k (ζ k,i |θ) L k (ζ k,i |θ (k) ) , θ / ∈ Θ k , k ∈ N . (49)
Taking expectations in (48) we have for every θ / ∈ Θ k
E{λ k,i (θ)} = E i t=1 L k,t (θ) = −id k (θ) ≤ −i min θ =θ (k) d k (θ).(50)
Next, by Assumption 2, we have
log α ≤ d k (θ) ≤ log 1 α , ∀θ / ∈ Θ k .(51)
Let us consider the sequence of random variables ζ k,1:i = (ζ k,1 , . . . , ζ k,i ). We want to establish that λ k,i (θ), which is a function of ζ k,1:i , has bounded differences. For all t such that
1 ≤ t ≤ i we have max ζ k,tλ k,i (θ) − min ζ k,tλ k,i (θ) = max ζ k,t log L k (ζ k,t |θ) L k (ζ k,t |θ (k) ) − min ζ k,t log L k (ζ k,t |θ) L k (ζ k,t |θ (k) ) ≤ log 1 α − log α = 2 log 1 α (52)
where we utilized (51). Thus, λ_{k,i}(θ) has bounded differences and, as a result, we can apply McDiarmid's inequality [25], which states the following. Consider a sequence of random variables ζ_{k,1:i} = (ζ_{k,1}, . . . , ζ_{k,i}) and a function g : Z^i_k → R of bounded differences for all 1 ≤ t ≤ i, meaning
sup_{ζ_{k,t}∈Z_k} g(. . . , ζ_{k,t}, . . .) − inf_{ζ_{k,t}∈Z_k} g(. . . , ζ_{k,t}, . . .) ≤ ρ_t   (53)
for some ρ_t < ∞. Then, for any ε > 0 and all i ≥ 1
P g(ζ k,1:i ) − E{g(ζ k,1:i )} ≥ ≤ exp − 2 2 i t=1 ρ 2 t .(54)
Thus, from (54) for ρ t = 2 log 1 α (the bound from (52)) we have
P λ k,i (θ) − E{λ k,i (θ)} ≥ ≤ exp − 2 2 4i(log 1 α ) 2 .(55)
Then, since π k,i (θ) ∈ (0, 1) we have for all θ / ∈ Θ k
π k,i (θ) ≤ π k,i (θ) π k,i (θ (k) ) = exp λ k,i (θ) .(56)
Thus, for an arbitrary ε we have P π k,i (θ) ≥ exp(ε) ≤ P exp(λ k,i (θ)) ≥ exp(ε)
= P λ k,i (θ) ≥ ε (50) ≤ P λ k,i (θ) − E{λ k,i } ≥ ε + i min θ / ∈Θ k d k (θ)(57)
By utilizing (55), (57) and by setting
ε = − i 2 min θ / ∈Θ k d k (θ)(58)
we obtain the result.
APPENDIX C PROOF OF THEOREM 1
From Proposition 1 we have for every agent k ∈ N :
π k,i (θ) a.s. −→ 0, ∀θ / ∈ Θ k .(59)
By utilizing (45), the above implies that
a_{ℓk,i}(θ) a.s.−→ 0,   ∀θ ∉ Θ_k, ∀ℓ ∈ N_k.   (60)
Moreover, due to Assumption 1 and the fact that all θ ∈ Θ_k are observationally equivalent, we have
π_{k,i}(θ) = π_{k,i}(θ′),   ∀θ, θ′ ∈ Θ_k, θ ≠ θ′, ∀i ≥ 1.   (61)
Utilizing the above, since π k,i is a probability vector we have
π k,i (θ) a.s. −→ 1 |Θ k | , ∀θ ∈ Θ k .(62)
Similarly, for a neighbor ℓ ∈ N_k we have that
π_{ℓ,i}(θ) a.s.−→ 0,   ∀θ ∉ Θ_ℓ   (63)
a_{nℓ,i}(θ) a.s.−→ 0,   ∀θ ∉ Θ_ℓ, ∀n ∈ N_ℓ   (64)
π_{ℓ,i}(θ) a.s.−→ 1/|Θ_ℓ|,   ∀θ ∈ Θ_ℓ.   (65)
Utilizing the above, for an agent k:
P(θ (k) = θ, θ ( ) = θ|ζ k,1:i , ζ ,1:i ) a.s. −→ 1 |Θ k | 1 |Θ | , θ ∈ Θ k ∩ Θ , ∈ N k 0, otherwise.(66)
Then, (12) yields
σ k,i a.s. −→ 1 + ∈N k θ∈Θ k ∩Θ 1 |Θ k | 1 |Θ | = 1 + ∈N k |Θ k ∩ Θ | |Θ k ||Θ |(67)
Then, from (45) we obtain the result.
APPENDIX D PROOF OF LEMMA 2
A strongly-connected component is a set of connected agents where information can flow from every agent to every other agent in that set and at least one agent has a self-loop (i.e., there is at least one k ∈ N such that a kk > 0) [22].
Since G is undirected, we have that ℓ ∈ N_k ⇐⇒ k ∈ N_ℓ. Moreover, due to (20), we observe that
a_{ℓk,∞} > 0 ⇐⇒ a_{kℓ,∞} > 0,   ∀k, ℓ ∈ N, k ≠ ℓ   (68)
a_{ℓk,∞} = 0 ⇐⇒ a_{kℓ,∞} = 0,   ∀k, ℓ ∈ N, k ≠ ℓ   (69)
since a_{kℓ,∞}, a_{ℓk,∞} ∝ |Θ_k ∩ Θ_ℓ| / (|Θ_k||Θ_ℓ|). This implies that information can flow from k to ℓ if and only if information can flow from ℓ to k as well. This means that there is no one-directional flow of information between any two agents in the network, and hence there cannot be any path in the network that allows information to flow from an agent k to another agent ℓ without information from ℓ also flowing to k. As a result, the whole network is decomposed into disjoint strongly-connected components. Then, let N̄_1, . . . , N̄_S denote the distinct strongly connected components. Then, (25) follows from the Perron-Frobenius Theorem.
APPENDIX E PROOF OF THEOREM 2
First we prove a useful Lemma that provides a bound on the difference between the log-belief ratios formed by algorithm (13)- (14) and (22)- (23). Let us first define for every possible pair θ, θ ∈ Θ such that θ = θ the aforementioned log-belief ratios as follows:
λ k,i (θ, θ ) log µ k,i (θ) µ k,i (θ ) , k ∈ N (70) λ c k,i (θ, θ ) log µ c k,i (θ) µ c k,i (θ ) , k ∈ N .(71)
and the log-likelihood ratio:
L k,i (θ, θ ) log L k (ζ k,i |θ) L k (ζ k,i |θ ) , k ∈ N(72)
Then, by utilizing (13)- (14) and (22)-(23), respectively, (70) and (71) yield:
λ k,i (θ, θ ) = ∈N k a k,i L ,i (θ, θ ) + ∈N k a k,i λ ,i−1 (θ, θ ) (73) λ c k,i (θ, θ ) = ∈N k a k,∞ L ,i (θ, θ ) + ∈N k a k,∞ λ c ,i−1 (θ, θ ).(74)
The above are written in matrix-vector notation as
λ i (θ, θ ) = A T i L i (θ, θ ) + A T i λ i−1 (θ, θ ) (75) λ c i (θ, θ ) = A T ∞ L i (θ, θ ) + A T ∞ λ c i−1 (θ, θ )(76)
where
Li(θ, θ ) = log L1,i(ζ 1,i |θ) L1,i(ζ 1,i |θ ) , . . . , log L |N |,i (ζ |N |,i |θ) L |N |,i (ζ |N |,i |θ ) T(77)
and
λ i (θ, θ ) = [λ 1,i (θ, θ ), . . . , λ |N |,i (θ, θ )] T and λ c i (θ, θ ) = [λ c 1,i (θ, θ ), . . . , λ c |N |,i (θ, θ )] T .
Lemma 3. (Bounded difference of log-belief ratios).
The difference between the log-belief ratios is bounded for all i ≥ 1 and for all θ = θ , θ, θ ∈ Θ, meaning
lim i→∞ sup E{||λ i (θ, θ ) − λ c i (θ, θ )|| ∞ } ≤ y < ∞. (78)
Proof. By utilizing (75) and (76), the difference in the log-belief ratios between the two algorithms for any two distinct θ, θ ∈ Θ is given by
λi(θ, θ ) − λ c i (θ, θ ) = (A T i − A T ∞ )Li(θ, θ ) + A T i λi−1(θ, θ ) − A T ∞ λ c i−1 (θ, θ ) + A T i λ c i−1 (θ, θ ) − A T i λ c i−1 (θ, θ ) = (A T i − A T ∞ )Li(θ, θ ) + A T i (λi−1(θ, θ ) − λ c i−1 (θ, θ )) + (A T i − A T ∞ )λ c i−1 (θ, θ )(79)
Taking the L ∞ -norm we have
||λi(θ, θ ) − λ c i (θ, θ )||∞ (a) ≤ ||(A T i − A T ∞ )||∞||Li(θ, θ )||∞ + ||A T i ||∞||(λi−1(θ, θ ) − λ c i−1 (θ, θ ))||∞ + ||(A T i − A T ∞ )||∞||λ c i−1 (θ, θ )||∞ (b) = ||(A T i − A T ∞ )||∞||Li(θ, θ )||∞ + ||λi−1(θ, θ ) − λ c i−1 (θ, θ )||∞ + ||(A T i − A T ∞ )||∞||λ c i−1 (θ, θ )||∞(80)
where ||x|| ∞ corresponds to the maximum (absolute) row sum norm (induced by L ∞ vector norm) if x is a matrix.
(a) is true by the sub-multiplicative property of vector induced norms (see [26] Theorem 5.6.2 property b) and (b) due to the fact that A T is right-stochastic and as a result ||A T i || ∞ = 1 for all i. Expanding the above we get
||λi(θ, θ ) − λ c i (θ, θ )||∞ ≤ i t=1 ||(A T t − A T ∞ )||∞||Lt(θ, θ )||∞ + ||λ0(θ, θ ) − λ c 0 (θ, θ )||∞ + i t=1 ||(A T t − A T ∞ )||∞||λ c t−1 (θ, θ )||∞ (a) = i t=1 ||(A T t − A T ∞ )||∞||Lt(θ, θ )||∞ + i t=1 ||(A T t − A T ∞ )||∞||λ c t−1 (θ, θ )||∞ (b) ≤ L i t=1 ||(A T t − A T ∞ )||∞ + i t=1 ||(A T t − A T ∞ )||∞||λ c t−1 (θ, θ )||∞ (81)
where (a) is true due to Assumption 1 and (b) because of the fact that |L t (θ, θ )| is bounded by L = | log α|.
Regarding the second term in (81), by iterating (76) and using the sub-multiplicative property, we have
i t=1 ||(A T t − A T ∞ )|| ∞ ||λ c t−1 (θ, θ )|| ∞ ≤ i t=1 ||(A T t − A T ∞ )|| ∞ t−1 t =1 ||(A T ∞ ) t || ∞ ||L t−t (θ, θ )|| ∞ + ||(A T ∞ ) t−1 || ∞ ||λ 0 (θ, θ )|| ∞ (a) = i t=1 ||(A T t − A T ∞ )|| ∞ t−1 t =1 ||L t (θ, θ )|| ∞ (b) ≤ i t=1 ||(A T t − A T ∞ )|| ∞ t−1 t =1 L = L i t=1 ||(A T t − A T ∞ )|| ∞ (t − 1)(82)
where (a) is true due to Assumption 1 and the fact that ||(A^T_∞)^{t′}||_∞ = 1 for all t′, and (b) holds for the same reason as in (81). Combining (81) and (82) we get
||λ i (θ, θ ) − λ c i (θ, θ )|| ∞ ≤ L i t=1 ||(A T t − A T ∞ )|| ∞ + i t=1 ||(A T t − A T ∞ )|| ∞ i t =t ||L t || ∞ ≤ L i t=1 ||(A T t − A T ∞ )|| ∞ + L i t=1 ||(A T t − A T ∞ )|| ∞ (t − 1)(83)
Let us bound E{ i t=1 |a k,t − a k,∞ |}. We have for every k ∈ N , ∈ N k and for all θ /
∈ Θ k ∩ Θ E i t=1 |a k,t (θ) − 0| = E i t=1 |a k,t (θ)I {a k,t ≥exp(c k t)} | + i t=1 |a k,t (θ)I {a k,t (θ)<exp(c k t)} | (a) ≤ i t=1 1 × exp(−d k t) + i t=1 exp(−c k t) × 1(84)
where in (a) we utilized part 1) of Lemma 5 to upper bound the value of |a k,t (θ) − 0| = a k,t (θ). Moreover, following the same rationale as above, from part 2) of Lemma 5 we have for every k ∈ N , ∈ N k and for all
θ ∈ Θ k ∩ Θ E i t=1 |a k,t (θ) − a k,∞ (θ)| = E i t=1 |a k,t (θ) − a k,∞ |I {|a k,t (θ)|≥α k exp(−c k t)} + i t=1 |a k,t (θ) − a k,∞ |I {|a k,t (θ)|<α k exp(−c k t)} ≤ i t=1 1 × b k exp(−d k t) + i t=1 α k exp(−c k t) × 1.(85)
Moreover, we have
i t=1 ||(A T t − A T ∞ )||∞ ≤ i t=1 max k∈N =k |a k,t − a k,∞ | + |1 − =k a k,t − 1 + =k a k,∞ | ≤ 2 i t=1 max k∈N =k |a k,t − a k,∞ | ≤ 2 i t=1 max k∈N =k ( θ / ∈Θ k ∩Θ |a k,t (θ) − 0| + θ∈Θ k ∩Θ |a k,t (θ) − a k,∞ (θ)|) (a) ≤ 2 i t=1 k∈N =k θ / ∈Θ k ∩Θ |a k,t (θ) − 0| + θ∈Θ k ∩Θ |a k,t (θ) − a k,∞ (θ)| .(86)
where (a) is true due to the fact that all the summation terms are positive and thus the total sum of the elements is greater than the maximum element. By taking expectation, we have
E i t=1 ||(A T t − A T ∞ )||∞ ≤ 2 i t=1 k∈N =k θ / ∈Θ k ∩Θ exp(−d k t) + exp(c k t) + θ∈Θ k ∩Θ b k,t exp(−d k t) + α k,t exp(−c k t) = ξi(87)
We have
ξ ∞ lim i→∞ ξ i < ∞.(88)
because all series appearing in (87) are convergent geometric series. Then, (83) yields
E{||λi(θ, θ ) − λ c i (θ, θ )||∞} ≤E L i t=1 ||(A T t − A T ∞ )||∞+L i t=1 ||(A T t − A T ∞ )||∞(t − 1) ≤ Lξi + 2L i t=1 k∈N =k θ / ∈Θ k ∩Θ exp(−d k t)(t − 1) + exp(−c k t)(t − 1) + θ∈Θ k ∩Θ α k exp(−c k t)(t − 1) + b k exp(−d k t)(t − 1) .(89)
Let us study the series appearing in the second term on the RHS of the above inequality. Performing the ratio test for each one of them we have
r = lim t→∞ exp(−v(t + 1))t exp(−vt)(t − 1) = exp(−v) < 1, ∀v > 0.(90)
Thus, all series on LHS of (89) converge, implying
lim i→∞ sup E{||λ i (θ, θ ) − λ c i (θ, θ )|| ∞ } ≤ y < ∞. (91)
Now we prove Theorem 2. We characterize the asymptotic behavior of µ c k,i and then we use Lemma 3 to characterize the behavior of µ k,i . Expanding (76) yields
λ c i (θ, θ ) = i t=1 (A T ∞ ) i−t+1 L t (θ, θ ) + (A T ∞ ) i λ 0 (θ, θ ), θ = θ(92)
Since the prior beliefs are uniform (Assumption 1), the second term in (92) is equal to 0. Then, by adding and subtracting Ā^T_∞ (defined in (25)), dividing by i, and taking the limit as i → ∞, (92) yields
lim i→∞ 1 i λ c i (θ, θ )= lim i→∞ 1 i i t=1 (A T ∞ ) i−t+1 −Ā T ∞ L t (θ, θ ) + lim i→∞ 1 i i t=1Ā T ∞ L t (θ, θ )(93)
Following the same arguments used in the proof of Lemma 8 in [27] we can show that the first term on the RHS of the above expression goes to 0 a.s. Then, from the strong law of large numbers we have that
lim i→∞ 1 i λ c i (θ, θ ) a.s. −→Ā T ∞ E{L t (θ, θ )}(94)
From Lemma 2, the above implies that for every k ∈N s
lim i→∞ 1 i λ c k,i (θ, θ ) a.s. −→ ∈Ns p s ( )E{L ,t (θ, θ )} = ∈Ns p s ( )(d (θ ) − d (θ))(95)
Observing the above, we conclude that for any θ ∉ Θ̄_s and for any θ′ ∈ Θ̄_s we have
lim_{i→∞} (1/i) λ^c_i(θ, θ′) a.s.−→ −(C_s(θ′) − C_s(θ)),   (96)
which is C s (θ s ) − C s (θ) > 0 from the definition of θ s in (27). Also, note that C s (θ ) − C s (θ) is finite due to Assumption 2.
Since λ^c_{k,i}(θ, θ′)/i converges to a finite negative value, we have that λ^c_{k,i}(θ, θ′) diverges to −∞, which in turn implies that µ^c_{k,i}(θ) a.s.−→ 0 for all θ ∉ Θ̄_s.
Then, we have
E λ i (θ, θ ) i − λ c i (θ, θ ) i ∞ = 1 i E{||λ i (θ, θ ) − λ c i (θ, θ )|| ∞ }(97)
From Lemma 3 by taking the limit as i goes to ∞ we get
lim i→∞ E λ i (θ, θ ) i − λ c i (θ, θ ) i ∞ = lim i→∞ 1 i E{||λ i (θ, θ ) − λ c i (θ, θ )|| ∞ } ≤ lim i→∞ y i = 0(98)
which implies that
λ i (θ, θ ) i P. −→ λ c i (θ, θ ) i(99)
Then, since
λ c i (θ, θ ) i a.s. −→ −(C s (θ ) − C s (θ))(100)
we have that
λ i (θ, θ ) i P. −→ −(C s (θ ) − C s (θ)).(101)
This implies
µ k,i (θ) P. −→ 0, ∀θ / ∈Θ s .(102)
For part 2) of the Theorem, we have that µ_{k,i} is a probability vector and thus its entries must sum up to 1. Then, if Θ̄_s = {θ^{(k)}}, from the first part of the Theorem we have that µ_{k,i}(θ) P.−→ 0 for all θ ≠ θ^{(k)}, because Θ̄_s contains only θ^{(k)}, and thus µ_{k,i}(θ^{(k)}) P.−→ 1.
APPENDIX F AUXILIARY RESULTS
Lemma 4. Let two random variables x i , y i such that:
P (|x i − x ∞ | ≥ c x exp(−a x i)) ≤ d x exp(−b x i) (103) P (|y i − y ∞ | ≥ c y exp(−a y i)) ≤ d y exp(−b y i) (104)
for some a x , b x , a y , b y > 0 and x ∞ , y ∞ ∈ R. Then:
P (|x i +y i −x ∞ −y ∞ |≥c exp(−āi)) ≤d exp(−bi) (105) P (|x i y i − x ∞ y ∞ | ≥c exp(−āi)) ≤d exp(−bi) (106) for someā,b,c,d > 0.
If further x i , x ∞ ≥ 1, then:
P |x −1 i − x −1 ∞ | ≥c exp(−āi) ≤d exp(−bi) (107) for someā,b,c,d > 0.
Proof. Let us define the following events:
A {|x i − x ∞ | ≥ c x exp(−a x i)} (108) B {|y i − y ∞ | ≥ c y exp(−a y i)} (109) C(ā,c) {|x −1 i − x −1 ∞ | ≥c exp(−āi)} (110) D(ā,c) {|x i + y i − x ∞ − y ∞ | ≥c exp(−āi)} (111) E(ā,c) {|x i y i − x ∞ y ∞ | ≥c exp(−āi)}(112)
for some a x , a y , c x , c y ,ā,c > 0.
We prove first (107). We have
|x −1 i − x −1 ∞ | = |x i − x ∞ | |x i x ∞ | ≤ |x i − x ∞ | (113) because |x i x ∞ | ≥ 1. The above implies A ⇒C(a x , c x )(114)
whereĀ stands for the complement of an event A. The above in turn implies P(C(a x , c x )) ≥ P(Ā)
⇔ P(C(a x , c x )) ≤ P(A) (103) ≤ d x exp(−b x i). (115)
The above implies that (107) holds for x_i, x_∞ ≥ 1 and for ā = a_x, b̄ = b_x, c̄ = c_x, d̄ = d_x.
We move on to prove (105). From the triangle inequality, we have
|x i + y i − x ∞ − y ∞ | ≤ |x i − x ∞ | + |y i − y ∞ |(116)
The above implies that
A ∩B ⇒ |x i + y i − x ∞ − y ∞ | < c x exp(−a x i) + c y exp(−a y i) < (c x + c y ) exp(− min{a x , a y }i).(117)
The above implies
A ∩B ⇒D(− min{a x , a y }i, c x + c y )(118)
which in turn implies P(D(min{a x , a y }, c x + c y )) ≥ P(Ā ∩B) ⇔ P(D(min{a x , a y }i, c x + c y ))
(a) ≤ P(A ∪ B) (b) ≤ P(A) + P(B) (103),(104) ≤ d x exp(−a x i) + d y exp(−a y i)(119)
where in (a) De Morgan's law [28] was utilized and in (b) we used the union bound. The above implies that (105) holds for ā = min{a_x, a_y}, b̄ = min{b_x, b_y}, c̄ = c_x + c_y, d̄ = d_x + d_y.
Finally, we prove (106). We have.
|x i y i − x ∞ y ∞ | = |(x i − x ∞ )(y i − y ∞ ) + x ∞ (y i − y ∞ ) + y ∞ (x i − x ∞ )| ≤ |x i − x ∞ ||y i − y ∞ | + |x ∞ ||y i − y ∞ | + |y ∞ ||x i − x ∞ |(120)
By working in the same way as in (118), (119), from the above we obtain
A ∩B ⇒Ē(min{a x , a y }, c x c y + x ∞ c y + y ∞ c x )(121)
which implies
P(Ē(min{a x , a y }, c x c y + x ∞ c y + y ∞ c x )) ≥ P(Ā ∩B) ⇔ P(E(min{a x , a y }, c x c y + x ∞ c y + y ∞ c x )) ≤ P(A ∪ B) ≤ P(A) + P(B) ≤ d x exp(−a x i) + d y exp(−a y i)(122)
The above implies that (106) holds forā = min{a x , a y },b = min{b x , b y },c = c x c y + x ∞ c y + y ∞ c x ),d = d x + d y .
The following auxiliary lemma characterizes the evolution of the adaptive weights.
Lemma 5. (Rate of convergence of the combination weights). Under Assumptions 1 and 2 the following hold:
1) For every θ ∉ Θ_k ∩ Θ_ℓ and ℓ ∈ N_k the following holds:
P( a_{ℓk,i}(θ) ≥ exp(−c_{kℓ} i) ) ≤ exp(−d_{kℓ} i)   (123)
where
c_{kℓ} ≜ (1/2) min_{θ∉Θ_k∩Θ_ℓ} Σ_{m∈{k,ℓ}} d_m(θ)   (124)
d_{kℓ} ≜ min_{θ∉Θ_k∩Θ_ℓ} Σ_{m∈{k,ℓ}} d²_m(θ) / (32 (log α)²)   (125)
2) For every θ ∈ Θ_k ∩ Θ_ℓ and for all k ∈ N, ℓ ∈ N_k, the following is true:
P (|a k,i (θ) − a k,∞ (θ)| ≥ α k exp(−c k i)) ≤ b k exp(−d k i)(126)
for some α k , c k , b k , d k > 0 and for all i ≥ 1.
Proof. We will prove first part 1) of the Lemma. Since σ k,i ≥ 1, ∀i ≥ 1, k ∈ N , ∈ N k , for all θ / ∈ Θ k ∩ Θ we have from (45):
a k,i (θ) ≤ π k,i (θ)π ,i (θ) ≤ min{π k,i (θ), π ,i (θ)}(127)
where the last inequality follows from the fact that 0 ≤ π k,i (θ), π ,i (θ) ≤ 1. Then, from (10) we have
a k,i = θ∈Θ a k,i (θ) ≤ θ∈Θ π k,i (θ)π ,i (θ)(128)
Let us denote λ k,i (θ) log π k,i (θ)π ,i (θ) π k,i (θ (k) )π ,i (θ ( ) )
, θ / ∈ Θ k ∩ Θ .
Now, we follow the same rationale as in the proof of Proposition 1 and as in [3]. Using (6) we have λ i (θ) = i t=1 log L k (ζ k,i |θ) L k (ζ k,i |θ (k) ) + i t=1 log L (ζ ,i |θ) L (ζ ,i |θ ( ) ) + λ k,0 (θ) + λ ,0 (θ)
where the last two terms λ k,0 (θ) = λ ,0 (θ) = 0 due to Assumption 1. Taking expectations in (130) we have
E{ λ i (θ)} = −i m∈{k, } d m (θ) ≤ −i min θ / ∈Θ k ∩Θ m∈{k, } d m (θ).(131)
Next, by Assumption 2, we have for all θ / ∈ Θ k ∩ Θ
2 log α ≤ m∈{k, } d m (θ) ≤ 2 log 1 α .(132)
Consider the sequence of random variables ζ 1:i = (ζ k,1 , ζ ,1 , . . . , ζ k,i , ζ ,i ). We want to establish that λ k,i (θ), which is a function of ζ 1:i , has bounded differences. For all t such that 1 ≤ t ≤ i we have max ζ k,t ,ζ ,t λ i (θ) − min ζ k,t ,ζ ,t λ i (θ) = max ζ k,t ,ζ ,t log L k (ζ k,t |θ) L k (ζ k,t |θ (k) ) + log L (ζ ,t |θ) L (ζ ,t |θ ( ) ) − min ζ k,t ,ζ ,t log L k (ζ k,t |θ) L k (ζ k,t |θ (k) ) + log L (ζ ,t |θ) L (ζ ,t |θ ( ) ) ≤ 2 log 1 α − 2 log α = 4 log 1 α
where we utilized (132). Thus, λ i (θ) has bounded differences and as a result, we can apply McDiarmid's inequality. Then, by utilizing (54) for ρ t = 4 log 1 α (the bound from (133)) we obtain
P λ i (θ) − E{ λ i (θ)} ≥ ≤ exp − 2 2 16i(log 1 α ) 2 .(134)
Then, since π k,i (θ), π ,i (θ) ∈ (0, 1) for all θ / ∈ Θ k ∩ Θ we have a k,i (θ) = π k,i (θ)π ,i (θ) ≤ π k,i (θ)π ,i (θ) π k,i (θ (k) )π ,i (θ ( ) ) = exp λ i (θ) .
Thus, for an arbitrary ε we have P a k,i (θ) ≥ exp(ε)
≤ P exp( λ i (θ)) ≥ exp(ε) = P λ i (θ) ≥ ε ≤ P λ i (θ) − E{ λ i } (131) ≥ ε + i min θ / ∈Θ k ∩Θ m∈{k, } d m (θ) (136)
Utilizing (134), (136) and setting
ε = −(i/2) min_{θ∉Θ_k∩Θ_ℓ} Σ_{m∈{k,ℓ}} d_m(θ)   (137)
we have
P( a_{ℓk,i}(θ) ≥ exp(ε) ) ≤ P( λ_i(θ) − E{λ_i(θ)} ≥ (i/2) min_{θ∉Θ_k∩Θ_ℓ} Σ_{m∈{k,ℓ}} d_m(θ) ) ≤ exp( −2( (i/2) min_{θ∉Θ_k∩Θ_ℓ} Σ_{m∈{k,ℓ}} d_m(θ) )² / (16 i (log(1/α))²) ) = exp( −( min_{θ∉Θ_k∩Θ_ℓ} Σ_{m∈{k,ℓ}} d²_m(θ) / (32 (log α)²) ) i ),   (138)
which establishes part 1) of the Lemma. Now we will prove part 2) of the Lemma. From Proposition 1 we have that P(π_{k,i}(θ) ≥ exp(−x_k i)) ≤ exp(−y_k i)
for all θ / ∈ Θ k for some x k , y k > 0. Consider the events Y θ = {π k,i (θ) < exp(−x k i)} and
Y = { θ / ∈Θ k π k,i (θ) ≥ θ / ∈Θ exp(−x k i) = |Θ \ Θ k | exp(−x k i)}. Then, we have ∩ θ∈Θ Y θ ⇒ ¬Y(140)
which implies
Y ⇒ ∪ θ∈Θ ¬Y θ(141)
where ¬ stands for negation. Then, (141) implies
P θ / ∈Θ k π k (θ) ≥ |Θ \ Θ k | exp(−x k i) ≤ P ∪ θ / ∈Θ k {π k,i (θ) ≥ exp(−x k i)} ≤ |Θ \ Θ k | exp(−y k i)(142)
where in the last inequality the union bound was utilized. Then, since π k,i is a probability vector, we have θ∈Θ π k,i (θ) = θ∈Θ k π k,i (θ) + θ / ∈Θ k π k,i (θ) = 1.
Then, from (142) and (143) we obtain
P θ∈Θ k π k,i (θ) ≤ 1 − |Θ \ Θ k | exp(−x k i) ≤ |Θ \ Θ k | exp(−y k i)(144)
Utilizing (61), (144) yields
P π k,i (θ) ≤ 1 |Θ k | − |Θ \ Θ k | |Θ k | exp(−x k i) ≤ |Θ \ Θ k | exp(−y k i), θ ∈ Θ k .(145)
Also, due to (61) and the fact that π k,i is a probability vector, the probability mass placed on every θ ∈ Θ k cannot exceed 1/|Θ k |. Thus, the following is true:
P π k,i (θ) > 1 |Θ k | + |Θ \ Θ k | |Θ k | exp(−x k i) = 0 ≤ |Θ k | exp(−y k i), θ ∈ Θ k .(146)
Combining (145) and (146), we obtain
P |π k,i (θ) − 1 |Θ k | | ≥ |Θ \ Θ k | |Θ k | exp(−x k i) ≤ |Θ \ Θ k | exp(−y k i), θ ∈ Θ k ,(147)
for all k ∈ N . Also, for all θ / ∈ Θ k we can write (see Proposition 1):
P (|π k,i (θ) − 0| ≥ exp(−x k i)) ≤ exp(−y k i) (148)
Now, we can use the properties shown in Lemma 4 (see Appendix F) to show that a k,i (θ) converges exponentially fast to a k,∞ (θ) for all θ. From (45), we see that the numerator of a k,i (θ) converges exponentially fast to π k,∞ (θ)π ,∞ (θ) from (106). The denominator (i.e., σ k,i ) also converges exponentially fast because we can repeatedly apply (105) and (106), as σ k,i is comprised of products and sums of random variables satisfying (105), (106). The inverse of the denominator also converges exponentially fast due to (107) (note that σ k,i ≥ 1 for all i ≥ 1). Finally, we apply (106) by setting x i = π k,i (θ)π ,i (θ) and y i = σ −1 k,i . Thus, we get the statement of the Lemma.
Fig. 1: A network example with three agents. The true state of agents 1 and 2 is θ_2, while the true state of agent 3 is θ_3.
Fig. 2: Agents' evolution of the beliefs on each agent's true hypothesis (i.e., µ_{k,i}(θ^{(k)}), k ∈ {1, 2, 3}) for (left) Example 1 and (right) Example 2.
Fig. 3: Network topology.
Fig. 4: Steady-state beliefs of every agent on its true hypothesis θ^{(k)} for cooperative social learning (i.e., ν_{k,i}(θ^{(k)})) (first row), non-cooperative learning (i.e., π_{k,i}(θ^{(k)})) (second row) and the SASL algorithm (i.e., µ_{k,i}(θ^{(k)})) (third row). Colormap: agents colored in orange have beliefs on their true state close to 1, while light green denotes beliefs close to 0.
Fig. 5: Steady-state beliefs of every agent on its true hypothesis θ^{(k)} for cooperative social learning (i.e., ν_{k,i}(θ^{(k)})) (first row), non-cooperative learning (i.e., π_{k,i}(θ^{(k)})) (second row) and the SASL algorithm (i.e., µ_{k,i}(θ^{(k)})) (third row) for all agents.
Fig. 6: Steady-state beliefs of every agent on its true hypothesis θ^{(k)} for cooperative social learning (i.e., ν_{k,i}(θ^{(k)})) (first row), non-cooperative learning (i.e., π_{k,i}(θ^{(k)})) (second row) and the SASL algorithm (i.e., µ_{k,i}(θ^{(k)})) (third row).
REFERENCES
[1] A. Lalitha, A. Sarwate, and T. Javidi, "Social learning and distributed hypothesis testing," in Proc. IEEE International Symposium on Information Theory, Honolulu, Hawaii, 2014, pp. 551-555.
[2] A. Lalitha, T. Javidi, and A. D. Sarwate, "Social learning and distributed hypothesis testing," IEEE Transactions on Information Theory, vol. 64, no. 9, pp. 6161-6179, 2018.
[3] A. Nedić, A. Olshevsky, and C. A. Uribe, "Fast convergence rates for distributed non-Bayesian learning," IEEE Transactions on Automatic Control, vol. 62, no. 11, pp. 5538-5553, 2017.
[4] X. Zhao and A. H. Sayed, "Learning over social networks via diffusion adaptation," in Proc. Asilomar Conference on Signals, Systems and Computers, 2012, pp. 709-713.
[5] V. Bordignon, V. Matta, and A. H. Sayed, "Social learning with partial information sharing," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 2020, pp. 5540-5544.
[6] A. Jadbabaie, P. Molavi, A. Sandroni, and A. Tahbaz-Salehi, "Non-Bayesian social learning," Games and Economic Behavior, vol. 76, no. 1, pp. 210-225, 2012.
[7] P. Molavi, A. Tahbaz-Salehi, and A. Jadbabaie, "A theory of non-Bayesian social learning," Econometrica, vol. 86, no. 2, pp. 445-490, 2018.
[8] A. Mitra, J. A. Richards, and S. Sundaram, "A new approach to distributed hypothesis testing and non-Bayesian learning: Improved learning rate and Byzantine-resilience," IEEE Transactions on Automatic Control, vol. 66, no. 9, pp. 4084-4100, 2020.
[9] K. Ntemos, V. Bordignon, S. Vlaski, and A. H. Sayed, "Social learning under inferential attacks," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021, pp. 5479-5483.
[10] V. Bordignon, S. Vlaski, V. Matta, and A. H. Sayed, "Network classifiers based on social learning," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, Canada, 2021, pp. 5185-5189.
[11] J. Chen, C. Richard, and A. H. Sayed, "Adaptive clustering for multitask diffusion networks," in Proc. European Signal Processing Conference (EUSIPCO), Nice, France, 2015, pp. 200-204.
[12] J. Chen, C. Richard, and A. H. Sayed, "Diffusion LMS over multitask networks," IEEE Transactions on Signal Processing, vol. 63, no. 11, pp. 2733-2748, 2015.
[13] J. Plata-Chaves, A. Bertrand, M. Moonen, S. Theodoridis, and A. M. Zoubir, "Heterogeneous and multitask wireless sensor networks: algorithms, applications, and challenges," IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 3, pp. 450-465, 2017.
[14] X. Zhao and A. H. Sayed, "Distributed clustering and learning over networks," IEEE Transactions on Signal Processing, vol. 63, no. 13, pp. 3285-3300, 2015.
[15] R. Nassif, S. Vlaski, C. Richard, J. Chen, and A. H. Sayed, "Multitask learning over graphs: An approach for distributed, streaming machine learning," IEEE Signal Processing Magazine, vol. 37, no. 3, pp. 14-25, 2020.
[16] F. K. Teklehaymanot, M. Muma, B. Béjar, P. Binder, A. Zoubir, and M. Vetterli, "Robust diffusion-based unsupervised object labelling in distributed camera networks," in Proc. AFRICON 2015, Addis Ababa, Ethiopia, 2015, pp. 1-6.
[17] A. Bertrand and M. Moonen, "Distributed adaptive node-specific signal estimation in fully connected sensor networks, Part I: Sequential node updating," IEEE Transactions on Signal Processing, vol. 58, no. 10, pp. 5277-5291, 2010.
[18] A. Bertrand and M. Moonen, "Distributed adaptive node-specific signal estimation in fully connected sensor networks, Part II: Simultaneous and asynchronous node updating," IEEE Transactions on Signal Processing, vol. 58, no. 10, pp. 5292-5306, 2010.
[19] J. Plata-Chaves, N. Bogdanović, and K. Berberidis, "Distributed diffusion-based LMS for node-specific adaptive parameter estimation," IEEE Transactions on Signal Processing, vol. 63, no. 13, pp. 3448-3460, 2015.
[20] N. Bogdanović, J. Plata-Chaves, and K. Berberidis, "Distributed incremental-based LMS for node-specific adaptive parameter estimation," IEEE Transactions on Signal Processing, vol. 62, no. 20, pp. 5382-5397, 2014.
[21] S. Marano and A. H. Sayed, "Decision learning and adaptation over multi-task networks," IEEE Transactions on Signal Processing, vol. 69, pp. 2873-2887, 2021.
[22] A. H. Sayed, "Adaptation, learning, and optimization over networks," Foundations and Trends in Machine Learning, vol. 7, no. 4-5, pp. 311-801, 2014.
[23] A. Nedić, A. Olshevsky, and C. A. Uribe, "Nonasymptotic convergence rates for cooperative learning over time-varying directed graphs," in Proc. 2015 American Control Conference (ACC), Chicago, Illinois, 2015, pp. 5884-5889.
[24] V. Matta, A. Santos, and A. H. Sayed, "Graph learning with partial observations: Role of degree concentration," in Proc. IEEE International Symposium on Information Theory (ISIT), Paris, France, 2019, pp. 1312-1316.
[25] J. L. Doob, "Regularity properties of certain families of chance variables," Transactions of the American Mathematical Society, vol. 47, no. 3, pp. 455-486, 1940.
[26] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge University Press, 2012.
[27] V. Bordignon, V. Matta, and A. H. Sayed, "Social learning with partial information sharing," arXiv:2006.13659, 2020.
[28] P. J. Hurley, A Concise Introduction to Logic. Cengage Learning, 2014.
Center for Future Media
School of Computer Science and Engineering
University of Electronic Science and Technology of China
China
Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack
Defense models against adversarial attacks have grown significantly, but the lack of practical evaluation methods has hindered progress. Evaluation can be defined as looking for defense models' lower bound of robustness given a budget number of iterations and a test dataset. A practical evaluation method should be convenient (i.e., parameter-free), efficient (i.e., fewer iterations) and reliable (i.e., approaching the lower bound of robustness). Towards this target, we propose a parameter-free Adaptive Auto Attack (A$^3$) evaluation method which addresses the efficiency and reliability in a test-time-training fashion. Specifically, by observing that adversarial examples to a specific defense model follow some regularities in their starting points, we design an Adaptive Direction Initialization strategy to speed up the evaluation. Furthermore, to approach the lower bound of robustness under the budget number of iterations, we propose an online statistics-based discarding strategy that automatically identifies and abandons hard-to-attack images. Extensive experiments on nearly 50 widely-used defense models demonstrate the effectiveness of our A$^3$. By consuming much fewer iterations than existing methods, i.e., 1/10 on average (10× speed up), we achieve lower robust accuracy in all cases. Notably, we won first place out of 1681 teams in CVPR 2021 White-box Adversarial Attacks on Defense Models competitions with this method. Code is available at: https://github.com/liuye6666/adaptive_auto_attack
Introduction
Despite breakthroughs across a wide range of fields, deep neural networks (DNNs) [21,23,43,53,58] have been shown to be highly vulnerable to adversarial examples. For instance, inputs added with human-imperceptible perturbations can deceive DNNs into outputting unreasonable predictions [4, 11, 14-16, 30, 33, 51, 61-63]. To tackle this issue, various adversarial defense methods [9,17,18,50] have been proposed to resist malicious perturbations. Unfortunately, these defense methods can be broken by more advanced attack methods [7,12,45,46], making it difficult to identify the state-of-the-art. Therefore, we urgently need a practical evaluation method to judge the adversarial robustness of different defense strategies.
Robustness evaluation can be defined as looking for defense models' lower bound of robustness given a budget number of iterations and a test dataset [7]. White-box adversarial attacks on defense models are crucial for testing adversarial robustness. Among these methods, widely-used random sampling has been proven effective in generating diverse starting points for attacks in a large-scale study [7,20,31,44]. In general, there are two kinds of random sampling strategies. From the perspective of the input space, given an original image $x$ with label $y$ and a random perturbation $\zeta$ sampled from a uniform distribution, the starting point is $x_{st} = x + \zeta$, e.g., in Projected Gradient Descent (PGD) [31]. From the perspective of the output space, given a classifier $f$ and a randomly sampled direction of diversification $w_d$, evaluators generate starting points by maximizing the change of the output, i.e., $w_d^\top f(x)$. Intuitively, the random sampling strategy is sub-optimal since it is model-agnostic.
Comprehensive statistics on random sampling are conducted to verify whether it is sub-optimal. In other words, we want to study whether adversarial examples, i.e., images that successfully fool the victim's models, to a specific defense model follow some regularities in their starting points. Statistical results on the input space show that the starting points of adversarial examples are actually random. This is reasonable because starting points are highly dependent on their corresponding input images, and input images are randomly distributed in high-dimensional space. Different from the input space, statistical results in the output space show that the direction of diversification $w_d$ to a specific defense model follows some regularities. Specifically, $w_d$ is not uniformly distributed but carries a model-specific bias in the positive/negative direction. Therefore, random sampling in the output space cannot obtain a good starting point, which may slow down the evaluation. To speed up the evaluation, we propose an Adaptive Direction Initialization (ADI) strategy in this paper. ADI firstly adopts an observer to record the direction of diversification of adversarial examples at the first restart. Then, based on these directions, ADI introduces a novel way to generate better starting points than random sampling for the following restarts.
In addition to using ADI to accelerate robustness evaluation, we design another strategy named online statistics-based discarding to improve the reliability of existing methods. Currently, the naïve iterative strategy that treats all images evenly and allocates them the same number of iterations is widely applied to robustness evaluation [6,7,20,31,32,44,52]. However, this strategy is unreasonable because it pays unnecessary effort to perturbing hard-to-attack images. Intuitively, given the budget number of iterations, the more examples we successfully attack, the closer the obtained robustness is to the lower bound. Therefore, the number of iterations assigned to hard-to-attack images is a lower priority. Based on our observation that loss values can roughly distinguish the difficulty of attacks, we propose an online statistics-based discarding strategy that automatically identifies and abandons hard-to-attack images. Specifically, we stop perturbing images with considerable difficulties at the beginning of every restart. For the remaining images, the same number of iterations is allocated to each of them. Obviously, the online statistics-based discarding strategy makes full use of the number of iterations and increases the chance of perturbing images into adversarial examples. We can further approach the lower bound of robustness based on this reliable strategy. Essentially, speeding up the evaluation is also closely related to improving the reliability, because the saved iterations can be used to attack easy-to-attack examples, resulting in lower robust accuracy. By incorporating the above two strategies, a practical evaluation method, Adaptive Auto Attack (A$^3$), is proposed.
To sum up, our main contributions are three-fold: 1) Based on comprehensive statistics, we propose an adaptive direction initialization (ADI) strategy which generates better starting points than random sampling to speed up the robustness evaluation. 2) We propose an online statistics-based discarding strategy that automatically identifies and abandons hard-to-attack images to further approach the lower bound of robustness under the budget number of iterations. 3) Extensive experiments demonstrate the effectiveness and reliability of the method. In particular, we apply A$^3$ to nearly 50 widely-used defense models and, without parameter adjustment, our method achieves lower robust accuracy and a faster evaluation.
Related Works
Many white-box attacks against defense models have been proposed for robustness evaluation. The Fast Adaptive Boundary Attack (FAB) [6] aims at finding the minimal perturbation necessary to change the class of a given input. Projected Gradient Descent (PGD) [31] further improves the evaluation performance by assigning random perturbations as the starting point at every restart. Based on PGD, Gowal et al. [20] propose a MultiTargeted attack (MT) that picks a new target class at each restart. Tashiro [44] provides a more effective initialization strategy to generate diverse starting points. Unfortunately, most of these promising methods overestimate the robustness [2,12,45,46]. Potential reasons for this are improper hyper-parameter tuning and gradient masking [7]. Therefore, we urgently need a practical evaluation method that is convenient (i.e., parameter-free), efficient (i.e., fewer iterations), and reliable (i.e., approaching the lower bound of robustness).
To address this issue, Croce et al. [7] propose Auto Attack (AA) by integrating four attack methods. Large-scale studies have shown that AA achieves lower robust test accuracy than existing methods. However, AA is inefficient since it requires a massive number of iterations for robustness evaluation. Instead of adopting an ensemble strategy, Yu et al. [52] use latent features and introduce a unified $\ell_\infty$-norm white-box attack algorithm, LAFEAT, to evaluate robustness more reliably. However, the time and space complexity of LAFEAT is unacceptable, as it requires training new modules for each defense model. Generally speaking, this is impractical since training sets are often inaccessible in practical applications.
Methodology
Preliminaries
In this section, we give the background knowledge of adversarial attacks. Given a $C$-class classifier $f: x \in [0,1]^D \to \mathbb{R}^C$, where $x$ is an original image with label $y$, the model prediction is calculated by:
$$h(x) = \arg\max_{c=1,\dots,C} f_c(x), \tag{1}$$
where $f_c(x)$ refers to the output logit of $x$ on the $c$-th class. In this paper, we mainly focus on untargeted attacks. The goal of an untargeted adversarial attack is to fool $f$ into misclassifying a human-imperceptible adversarial example $x_{adv} = x + \delta$, i.e., $h(x_{adv}) \neq y$. By adopting the $\ell_\infty$ distance to evaluate the imperceptibility of $\delta$, i.e., $\|\delta\|_\infty \le \epsilon$, the constrained optimization problem is defined as:
$$\arg\max\ L\big(f(x_{adv}), y\big) \quad \text{s.t.} \quad \|x_{adv} - x\|_\infty \le \epsilon. \tag{2}$$
In the scenario of white-box adversarial attacks, attackers can access all information of the victim's model. In this case, one of the most popular methods is PGD [31]. Specifically, PGD calculates the gradient at iteration $t$:
$$g_t = \nabla_{x^t_{adv}} L\big(f(x^t_{adv}), y\big), \tag{3}$$
where $x^t_{adv}$ is the adversarial example at iteration $t$ and the starting point $x_{st}$ is generated as:
$$x_{st} = x + \zeta, \tag{4}$$
where $\zeta$ is a random perturbation sampled from a uniform distribution $U(-\epsilon, \epsilon)^D$. Then, PGD generates an adversarial example by performing the iterative update:
$$x^{t+1}_{adv} = P_{x,\epsilon}\big(x^t_{adv} + \eta_t \cdot \mathrm{sign}(g_t)\big), \tag{5}$$
where $\eta_t$ is the step size at iteration $t$, $x^0_{adv} = x_{st}$, and the function $P_{x,\epsilon}(\cdot)$ clips the input to the $\epsilon$-ball of $x$. To accurately evaluate the robustness of defense models, PGD usually adopts multiple restarts. At each restart, starting points are randomly sampled from the perturbation space.
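As a concrete illustration, the following is a minimal PyTorch-style sketch of the PGD update in Eqs. (3)-(5). The function name, the assumption that `loss_fn` reduces to a scalar over the batch, and the precomputed `step_sizes` list are our own choices, not taken from the authors' code.

```python
import torch

def pgd_attack(model, x, y, loss_fn, eps, n_iter, step_sizes):
    """Minimal PGD sketch: random start (Eq. (4)), signed-gradient ascent (Eqs. (3), (5)),
    and projection onto the eps-ball around x plus the valid pixel range [0, 1]."""
    # Starting point x_st = x + zeta, zeta ~ U(-eps, eps)^D
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for t in range(n_iter):
        x_adv.requires_grad_(True)
        # loss_fn is assumed to reduce to a scalar (e.g., summed margin or cross-entropy loss)
        loss = loss_fn(model(x_adv), y)
        g = torch.autograd.grad(loss, x_adv)[0]                    # gradient of Eq. (3)
        with torch.no_grad():
            x_adv = x_adv + step_sizes[t] * g.sign()               # signed ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # projection P_{x,eps}
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```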
To further improve the diversity of starting points, Tashiro et al. [44] propose ODI to find starting points by maximizing the change in the output space. To be specific, given a random direction of diversification $w_d$ sampled from the uniform distribution $U(-1, 1)^C$, ODI firstly calculates a normalized perturbation vector as follows:
$$\upsilon(x, f, w_d) = \frac{\nabla_x\, w_d^\top f(x)}{\big\|\nabla_x\, w_d^\top f(x)\big\|}, \tag{6}$$
and then generates starting points by maximizing the output change via the following iterative update:
$$x^{t+1}_{adv} = x^t_{adv} + \eta_{odi} \cdot \mathrm{sign}\big(\upsilon(x^t_{adv}, f, w_d)\big), \qquad x^{t+1}_{adv} = P_{x,\epsilon}\big(x^{t+1}_{adv}\big), \tag{7}$$
where $\eta_{odi}$ is the step size for ODI, which is usually set to $\epsilon$; in this paper we keep the same setting. $x^0_{adv}$ is calculated by Eq. (4). After $N_{odi}$ iterations, i.e., the number of iterations for initialization, ODI obtains the starting point $x_{st}$:
$$x_{st} = x^{N_{odi}}_{adv}. \tag{8}$$
Like PGD, ODI performs multiple restarts to achieve lower robust accuracy.
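Below is a similar hedged sketch of the ODI initialization in Eqs. (6)-(8), again with names of our own choosing. Because only the sign of the perturbation vector is used, the normalization in Eq. (6) does not change the update.

```python
import torch

def odi_starting_point(model, x, eps, n_odi, eta_odi=None, w_d=None):
    """Sketch of ODI initialization: push the output f(x) along a (random or given)
    direction w_d for n_odi signed steps, staying inside the eps-ball around x."""
    eta_odi = eps if eta_odi is None else eta_odi
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(n_odi):
        x_adv.requires_grad_(True)
        logits = model(x_adv)                              # shape (B, C)
        if w_d is None:                                    # random direction, U(-1, 1)^C
            w_d = torch.empty(logits.shape[-1], device=x.device).uniform_(-1, 1)
        out = (logits * w_d).sum()                         # w_d^T f(x), summed over the batch
        g = torch.autograd.grad(out, x_adv)[0]             # numerator of Eq. (6)
        with torch.no_grad():
            x_adv = x_adv + eta_odi * g.sign()             # Eq. (7); sign(.) absorbs the norm
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()                                  # x_st of Eq. (8)
```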
Motivations
A practical evaluation method should be convenient (i.e., parameter-free; following [7], 'parameter-free' indicates that we do not need the fine-tuning of parameters for every new defense), efficient (i.e., fewer iterations), and reliable (i.e., approaching the lower bound of robustness). Although using a large number of iterations, most of the existing methods usually overestimate the robustness. There are two potential reasons for this: a) despite the effectiveness of generating diverse starting points for attacks, random sampling is sub-optimal since it is model-agnostic, and exploiting random sampling to generate starting points will slow down the robustness evaluation; and b) the widely adopted naïve iterative strategy, i.e., assigning the same number of iterations to all test examples, is unreasonable, since it intuitively pays unnecessary effort to perturbing hard-to-attack images. To verify the above two points, we perform a comprehensive statistical analysis of several white-box adversarial attack methods against defense models.

Random sampling is sub-optimal. Considering Eq. (8) and Eq. (4), random sampling is widely used to generate diverse starting points for attacks in the input space (e.g., PGD) and the output space (e.g., ODI). It is worth noting that there is no need to do statistics on random sampling in the input space, since the starting points are highly dependent on the corresponding input images, and the input images are randomly distributed in a high-dimensional input space. Therefore, we mainly focus on the output space.
As shown in Eq. (7), the noise in the output space is determined by $w_d$. Therefore, to verify the unreasonableness of random sampling in the output space, we analyze the $w_d$ of adversarial examples (i.e., examples that were successfully attacked) and explore what kind of $w_d$ is conducive. Specifically, we adopt ODI parameterized by number of restarts $R = 50$, $N_{odi} = 7$ and $\eta_{odi} = \epsilon$. The number of iterations for attacks at each restart, $N_{atk}$, is set to 30, and the step size at the $t$-th iteration is formulated as follows:
$$\eta_t = \frac{1}{2} \cdot \left(1 + \cos\left(\frac{t \bmod N_{atk}}{N_{atk}} \pi\right)\right). \tag{9}$$
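A small helper implementing the schedule of Eq. (9) could look as follows; whether the returned value is further scaled by the perturbation budget (or another base step size) is an implementation choice not fixed by the text, so we leave that multiplication to the caller.

```python
import math

def cosine_step_size(t, n_atk):
    """Cosine-annealed factor of Eq. (9): starts near 1 and decays towards 0
    within each restart of n_atk attack iterations."""
    return 0.5 * (1.0 + math.cos((t % n_atk) / n_atk * math.pi))
```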
Then we use ODI to attack 11 defense models, including Geometry [60], FBTF [48], HYDRA [40], AT HE [35], MART [47], Pre-training [22], Robustness [13], TRADES [57], Interpolation [56], Feature Scatter [55], and Regularization [25]. Among the adversarial examples against different models, we summarize the statistical results of $w_d$ in Fig. 1. The 1st and 2nd columns give the mean values of $w_d$ at the $\hat{y}$-th entry (the misclassification label), i.e., $w_d^{\hat{y}}$, and the $y$-th entry (the ground truth), i.e., $w_d^{y}$. From Fig. 1, we have the following observations. Firstly, for adversarial examples, their direction of diversification $w_d$ disobeys the uniform distribution. Secondly, there is a model-specific positive/negative bias in the direction of diversification. Mainly, there are three kinds of biases: a) $w_d^{y} < 0,\ w_d^{\hat{y}} > 0$; b) $w_d^{y} > 0,\ w_d^{\hat{y}} > 0$; and c) $w_d^{y} > 0,\ w_d^{\hat{y}} < 0$. Considering Eq. (7), for case a, $f_y(x_{st})$ and $f_{\hat{y}}(x_{st})$ will decrease and increase respectively, which satisfies the goal of adversarial attacks. For case b, both $f_y(x_{st})$ and $f_{\hat{y}}(x_{st})$ will increase. For case c, $f_y(x_{st})$ will increase and $f_{\hat{y}}(x_{st})$ will decrease. Obviously, cases b and c are counter-intuitive; a potential reason is that these defense models adopt gradient masks [34]. Based on these observations, random sampling is sub-optimal since it is model-agnostic. Adopting it to generate starting points hinders the algorithms from approaching the lower bound of robustness rapidly. For more detailed statistical results of $w_d$, please refer to Appendix B.

Limitations of the naïve iterative strategy. Most of the existing methods adopt the naïve iterative strategy, i.e., they treat all images evenly. However, this strategy is impractical based on two intuitions: (1) images vary in the difficulty of perturbing them into adversarial examples, and (2) the higher the difficulty, the more iterations are needed to perturb them. Consequently, the naïve iterative strategy pays unnecessary effort to perturbing hard-to-attack images. In order to successfully attack more images and get closer to the lower bound of robustness with the budget number of iterations, the number of iterations assigned to hard-to-attack images is a lower priority. Therefore, we need a method that can roughly distinguish hard-to-attack images from easy-to-attack images to allocate the budget number of iterations reasonably.
Intuitively, loss function values can roughly reflect the difficulty of perturbing an image into an adversarial example. Multiple loss functions $L(\cdot)$ can be used for attacks, including the cross-entropy loss and the margin loss defined as $\max_{c \neq y} f_c(x) - f_y(x)$. In this paper, we use the margin loss and define hard-to-attack images as images that cannot be successfully attacked even after 2000 iterations. The other images, which are successfully attacked, we define as easy-to-attack images.
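A per-example margin loss of this form can be written as the following sketch; the helper is our own and not the authors' implementation.

```python
import torch

def margin_loss(logits, y):
    """Per-example margin loss max_{c != y} f_c(x) - f_y(x): positive values mean the
    image is already misclassified, large negative values suggest a hard-to-attack image."""
    true_logit = logits.gather(1, y.view(-1, 1)).squeeze(1)
    # Mask out the true class before taking the maximum over the wrong classes.
    masked = logits.clone()
    masked.scatter_(1, y.view(-1, 1), float('-inf'))
    return masked.max(dim=1).values - true_logit
```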
To verify that loss values can distinguish between hard-to-attack and easy-to-attack images, given 2000 iterations for attacks, we first use ODI to attack all images of 5 models (including AWP [49], FAT [59], Proxy [39], OAAT [1] and RLPE [42]), and then identify and mark the easy-to-attack images of each model. Finally, we attack the 5 models again with ODI and record the percentile of loss values, in descending order, for the easy-to-attack images during the attack. We visualize the statistical results in Fig. 2.
From this figure, we have the following observations. As the number of iterations increases, the loss percentiles of easy-to-attack images continually decrease. Besides, the loss percentiles of easy-to-attack images always take a higher position than those of most hard-to-attack images. Take an easy-to-attack image as an example: when the number of iterations reaches 100, the percentile of its loss ranks in the top 60%, and as the number of iterations increases, it decreases to the top 5% or even the top 0.1%. In other words, the loss values of easy-to-attack and hard-to-attack images are not randomly distributed, and the loss values of hard-to-attack images are more likely to be small. We can therefore distinguish between these images according to loss values. Based on these observations, to make full use of the budget number of iterations, we can automatically abandon hard-to-attack images with an increasing proportion according to loss values during the attack.
Adaptive Direction Initialization
Inspired by the above analysis of random sampling in Sec. 3.2, we propose a method named Adaptive Direction Initialization (ADI) to generate better directions than random sampling to initialize the attacks. Specifically, ADI has two steps: useful directions observer and adaptive directions generation.
For the useful directions observer step, ADI first adopts random sampling to generate the direction of diversification, i.e., $w_d \sim U(-1, 1)^C$. Then ADI uses the starting points obtained by Eq. (8) to initialize PGD attacks and obtains the adversarial examples crafted by PGD. We denote $W$ as the set containing the $w_d$ of all adversarial examples.

Motivated by Sec. 3.2, for the adaptive directions generation step, ADI adopts the sign of the summed $w_d$ in $W$ as prior knowledge to generate the adaptive direction $w_a$:
$$\kappa_c(W) = \mathrm{sign}\Big(\sum_{w_d \in W} w_d^c\Big), \tag{10}$$
where $\kappa_c(W)$ is the prior knowledge for generating $w_a^c$, i.e., the $c$-th dimension of $w_a$. With the help of $\kappa(\cdot)$, ADI generates the $y$-th component of $w_a$ as:
$$w_a^y \sim \begin{cases} U(-0.5,\ 0.1), & \kappa_y(\cdot) < 0, \\ U(-0.1,\ 0.5), & \kappa_y(\cdot) > 0. \end{cases} \tag{11}$$
To improve the effectiveness of $w_a$, ADI also randomly selects another label $\bar{y}$ and follows the sign of $\kappa_{\bar{y}}(\cdot)$:
$$w_a^{\bar{y}} = \begin{cases} -0.8, & \kappa_{\bar{y}}(\cdot) < 0, \\ \phantom{-}0.8, & \kappa_{\bar{y}}(\cdot) > 0. \end{cases} \tag{12}$$
Notably, our method is experimentally insensitive to the value of $w_a^{\bar{y}}$. For simplicity, we set $w_a^{\bar{y}} = \pm 0.8$. For the remaining dimensions of the adaptive direction $w_a$, ADI calculates them as:
$$w_a^i \sim U(-1, 1)^{C-2}, \quad i \in \mathcal{T}, \tag{13}$$
where the set $\mathcal{T} = \{1, \dots, C\} \setminus \{y, \bar{y}\}$ contains all classes other than $y$ and $\bar{y}$. Compared with the random sampling adopted by ODI, the adaptive direction generated by ADI is guided by prior knowledge, i.e., the direction of diversification of adversarial examples. Furthermore, ADI randomly generates the remaining $C-2$ dimensions of the adaptive direction to improve the diversity of starting points.
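The following sketch illustrates Eqs. (10)-(13) for a single image. The tensor `W`, the name `y_bar` for the randomly chosen second label, and the handling of a zero-valued `kappa` entry are our own assumptions.

```python
import torch

def adaptive_direction(W, y, num_classes, y_bar=None):
    """Sketch of adaptive direction generation.
    W: (M, C) tensor stacking the diversification directions w_d of the adversarial
    examples observed at the first restart; y: ground-truth class index."""
    kappa = torch.sign(W.sum(dim=0))                       # Eq. (10), one sign per class
    # Random values for the remaining C-2 dimensions (Eq. (13)); y and y_bar overwritten below.
    w_a = torch.empty(num_classes).uniform_(-1, 1)
    # Ground-truth component biased by the sign of kappa_y (Eq. (11)); kappa == 0 treated as > 0.
    if kappa[y] < 0:
        w_a[y] = torch.empty(1).uniform_(-0.5, 0.1).item()
    else:
        w_a[y] = torch.empty(1).uniform_(-0.1, 0.5).item()
    # Randomly pick another label and follow the sign of its kappa entry (Eq. (12)).
    if y_bar is None:
        candidates = [c for c in range(num_classes) if c != y]
        y_bar = candidates[torch.randint(len(candidates), (1,)).item()]
    w_a[y_bar] = -0.8 if kappa[y_bar] < 0 else 0.8
    return w_a
```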
Online Statistics-based Discarding Strategy
To approach the lower bound of robustness within the budget number of iterations, we propose a novel iterative strategy named Online Statistics-based Discarding (OSD). According to the observation in Sec. 3.2, OSD adopts loss values to distinguish between hard-to-attack and easy-to-attack images. OSD first sorts the test images in descending order by their corresponding loss values at the beginning of every restart, and then discards hard-to-attack images, i.e., it stops perturbing images with small loss values. Particularly, given an initial discarding rate $\phi$ and a discarding increment $\iota$, the discarding rate at the $r$-th restart is formulated as follows:
$$\varsigma_r = \phi + r \times \iota. \tag{14}$$
For the remaining images, OSD assigns the same number of iterations to each of them. Intuitively, to further increase the attack success rate, OSD allocates more iterations to the remaining images at restart $r$ than at the previous restart. Concretely, given an initial number of iterations $\gamma$ for attacks and an iteration increment $\nu$, the number of iterations for attacks at the $r$-th restart is computed as follows:
$$N^r_{atk} = \gamma + r \times \nu. \tag{15}$$

Algorithm 1: Adaptive Auto Attack (A$^3$)
Inputs: norm bound $\epsilon$, the number of iterations for initialization $N$, step sizes $\eta$, the number of iterations for attacks at the $r$-th restart $N^r_{atk}$, attack iteration step sizes $\eta_{atk}$, number of restarts $R$, test dataset $I$
Outputs: adversarial example $x^{t+1}_{adv}$
for $r = 0 \to R$ do
    Update the test dataset $I$ by OSD
    for $x$ in $I$ do
        Sample $\zeta$ from $U(-\epsilon, \epsilon)^D$; $x_{st} = x + \zeta$
        if $r = 0$ then sample $w_d$ from $U(-1, 1)^C$ else $w_d \leftarrow w_a$   /* ADI */
        for $n = 0$ to $N$ do
            Compute $x^{n+1}_{adv}$ by Eq. (7)
        for $t = 0$ to $N^r_{atk}$ do
            Compute $x^{t+1}_{adv}$ by Eq. (5)
            if $r = 0$ then compute $w_a$ by Eqs. (11) to (13)
            if $x^{t+1}_{adv}$ is an adversarial example then return $x^{t+1}_{adv}$
Compared with the naïve iterative strategy, OSD makes full use of the budget number of iterations by automatically identifying and abandoning hard-to-attack images. In addition, by allocating a varying number of iterations for attacks at different restarts, OSD helps to further approach the lower bound of adversarial robustness.
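A minimal sketch of the two schedules in Eqs. (14)-(15) and of the discarding step, assuming the per-image margin losses of the still-unattacked images are available, could look as follows (function names are ours; the caps on the rate and iteration count from Sec. 4.1 are omitted for brevity).

```python
import torch

def osd_schedule(r, phi=0.0, iota=0.1, gamma=25, nu=5):
    """Discarding rate of Eq. (14) and per-restart attack iterations of Eq. (15)."""
    return phi + r * iota, gamma + r * nu

def discard_hard_images(indices, losses, discard_rate):
    """Keep only the still-unattacked images whose margin loss is largest (easiest to attack);
    drop the lowest-loss fraction `discard_rate` before the next restart."""
    n_keep = max(1, int(round(len(indices) * (1.0 - discard_rate))))
    order = torch.argsort(losses, descending=True)      # sort by loss, high to low
    return [indices[i] for i in order[:n_keep].tolist()]
```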
Adaptive Auto Attack
We integrate the above two strategies to form a practical evaluation method, Adaptive Auto Attack (A$^3$). Firstly, A$^3$ is convenient, since we do not need to fine-tune parameters for every new defense model. Secondly, A$^3$ is efficient: for the gradient-based optimization used to craft adversarial examples, unlike random sampling, our method generates adaptive directions for each model and provides better starting points to speed up the evaluation. Thirdly, A$^3$ is reliable: by discarding hard-to-attack images online and adjusting the iterations for attacks adaptively, our method makes full use of a budget number of iterations and further approaches the lower bound of adversarial robustness.
Compared with the mainstream method AA, our parameter-free A$^3$ is a more efficient and reliable protocol for robustness evaluation. The algorithm of Adaptive Auto Attack is summarized in Algorithm 1.
Experiments
We conduct comprehensive experiments to evaluate the practicability of our method. Specifically, five baselines are included: PGD [31], ODI [44], MT-PGD [20], I-FGSM [29] and AA [7]. Nearly 50 $\ell_\infty$-defense models with 8 different architectures are chosen from recent conferences. To be specific, the evaluation is performed on 35 and 12 defense models trained on the CIFAR-10 and CIFAR-100 [27] datasets, respectively. Notably, for fair comparisons, the step size is calculated by Eq. (9) for all attack methods in the experiments. We use the margin loss for all attack methods.
Following AA, we adopt robust accuracy (acc) to reflect evaluation reliability. A robustness evaluation method is considered reliable if it can better downgrade the model's classification accuracy. In our experiments, we assume the computational complexity of all methods in each iteration is similar, so the evaluation efficiency of each method can be reflected by the total number of iterations. For simplicity, we count the numbers of forward propagations ("→") and backward propagations ("←") to indicate the evaluation efficiency of different methods.
Comparisons with State-of-the-Art Attacks
To comprehensively validate the efficiency and reliability of our method, we compare AA, PGD and ODI with our A$^3$ on nearly 50 defense models. The evaluation results are shown in Tab. 1.

Setup. Following the setup of ODI and PGD, 100 iterations are allocated for each image (4 restarts, 25 iterations for attacks at each restart) and $N_{odi} = 2$. For AA, the standard version (https://github.com/fra31/auto-attack) is adopted. For our A$^3$, the initial number of iterations for attacks $\gamma$ is set to 25, and the iteration increment $\nu$ is 5. The number of iterations for initialization is $N = 7$. When the number of iterations reaches 50, we keep it unchanged to save the budget number of iterations. The initial discarding rate is $\phi = 0$ and the discarding increment is $\iota = 0.1$; when the discarding rate reaches 0.9, we gradually increase it to 0.97 at an interval of $\iota = 0.035$.

Results. As can be seen in Tab. 1, A$^3$ uses the same parameters for all defense models and achieves lower robust accuracy than AA in all cases, downgrading acc by 0.1% on average. Besides, our method achieves a faster evaluation: on average, a 10.4× speed up for forward propagation and 5.4× for backward propagation. In general, the nature of being parameter-free, reliable and efficient enables A$^3$ to be a practical method for robustness evaluation. For the results on other datasets, network architectures and metrics (i.e., the $\ell_2$-norm), please refer to Appendix C.
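For reference, the Sec. 4.1 hyper-parameters can be gathered into a single configuration; the key names below are our own and only mirror the values stated above.

```python
# Hyper-parameters of A3 as described in Sec. 4.1 (key names are ours, not the authors').
A3_CONFIG = {
    "n_init": 7,          # N: initialization iterations per restart (ADI/ODI steps)
    "gamma": 25,          # initial number of attack iterations per restart
    "nu": 5,              # iteration increment per restart
    "max_atk_iters": 50,  # per-restart attack iterations are kept fixed once they reach 50
    "phi": 0.0,           # initial discarding rate
    "iota": 0.1,          # discarding-rate increment (0.035 once the rate reaches 0.9)
    "max_discard": 0.97,  # maximum discarding rate
}
```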
Ablation Study
Efficacy of adaptive direction initialization. To evaluate the effectiveness of ADI, we design another variant called Reverse Adaptive Direction Initialization (R-ADI), which adopts the reverse direction of the adaptive direction. For all methods, we allocate 150 iterations for each image (5 restarts and 30 iterations at each restart), and $N = 10$.
The comparison results among R-ADI, ODI and ADI are reported in Tab. 2. As can be seen, compared to ODI, our ADI achieves better attack performance in all cases. This result indicates that the initial direction does affect the performance. Compared with uniformly generated initial directions, our ADI can generate model-specific initial directions and thus obtain better performance. In general, R-ADI achieves the worst performance in all cases. One possible reason is that R-ADI chooses a bad initial direction, which hinders the performance.

Efficacy of online statistics-based discarding strategy. To verify the effect of OSD, we compare the robust accuracy curves of the defense models under the attack of ADI, ADI+OSD (A$^3$), and AA. For our methods ADI and ADI+OSD, the setup is the same as in Sec. 4.1.
Note that ADI+OSD denotes that the Online Statistics-based Discarding strategy is applied on top of ADI. The result is shown in Fig. 3. As can be observed, it costs AA more iterations to achieve the same robust accuracy as ADI and ADI+OSD in all cases. Meanwhile, to achieve higher attack performance, ADI and AA require a large number of additional iterations, while ADI+OSD requires far fewer. The curves reveal the efficiency of our ADI+OSD, especially for reliable attacks.

Efficacy on other robustness evaluation methods. In this section, we study the effect of integrating ADI and OSD into different robustness evaluation methods. For all methods, we allocate 500 iterations for each image (5 restarts and 100 iterations for attacks at each restart); for the ODI method, $N_{odi} = 10$; for MT-PGD, the number of multiple targets is 3. Tab. 3 shows the robust accuracy (%) of defense models under the attack of different attack methods [20,29,31,44] integrated with our modules (ADI and OSD). It shows that ADI and OSD can be integrated into multiple attack methods and effectively improve their performance.
Result of Competition
With our proposed A$^3$, we participated in the CVPR 2021 White-box Adversarial Attacks on Defense Models competition launched by Alibaba Group and Tsinghua University. To evaluate performance fairly, all codes were tested with the official adversarial robustness evaluation platform ARES. The competition had three stages. Stages 1 and 2 employed 13 and 2 defense models trained on the CIFAR-10 and ImageNet datasets, respectively. The $\ell_\infty$ constraint was set to 8/255 and 4/255 on CIFAR-10 and ImageNet, respectively. Methods from all participants were tested on the CIFAR-10 dataset and on 1,000 random images of the ImageNet validation set to evaluate the final score. The evaluation score is defined as the average misclassification rate. Finally, we obtained a score of 51.10% in the final stage and achieved first place among 1681 teams.
Conclusion
In this paper, we find that model-agnostic initial directions drawn from uniform distributions result in unsatisfactory robust accuracy, and that the naïve iterative strategy leads to unreliable attacks. We propose a novel approach called Adaptive Auto Attack (A$^3$) to address these issues, which adopts adaptive direction initialization and an online statistics-based discarding strategy to achieve efficient and reliable robustness evaluation. Extensive experiments demonstrate the effectiveness of A$^3$. Particularly, we achieve lower robust accuracy in all cases by consuming much fewer iterations than existing methods, e.g., 1/10 on average (10× speed up). We won first place out of 1,681 teams in the CVPR 2021 White-box Adversarial Attacks on Defense Models competitions with this method.
Broader Impacts
Performing reliable robustness evaluation helps distinguish good from bad defenses to resist against the widespread adversarial examples. Our research proposes a practical robustness evaluation method. On the positive side, our method enables us to identify advanced defenses to defend against adaptive attacks, preventing critical safety systems from crashing. On the negative side, malicious users can exploit our method to attack the system, raising a security risk. We will continue to expand the scope of our evaluation for advanced defenses in the future.
A. Introduction
Due to the page limitation of the paper, we further illustrate our method in this supplementary material, which contains the following sections: 1). Detailed quantitative results of the diversified direction w d ; 2). The results of the proposed A 3 attack across various defense strategies, datasets, network architectures and metrics.
B. Detailed quantitative results of the diversified direction w d
In Section 3.2 of the main paper, to illustrate that random sampling is sub-optimal, we use ODI [44] to attack 11 defense models and only give the mean values of $w_d$ at the $\hat{y}$-th entry (the misclassification label) and the $y$-th entry (the ground truth).
In order to observe the detailed quantitative results of the diversified direction $w_d$, in this section we use ODI [44] to attack 12 defense models, including AWP [49], Proxy [39], Fast [48], Feature Scatter [55], Geometry [60], HYDRA [40], Hypersphere [35], Interpolation [56], Regular [25], MART [47], MMA [10] and Pre-training [22]. The experiment settings are the same as in Section 3.2 of the main paper. The CIFAR-10 dataset is used in this experiment; there are a total of 10 categories, with 9 error categories and one ground truth.
Among adversarial examples against different models, we summarize detailed statistic results of the direction of diversification w d in Fig. 4 and Fig. 5. For each model, there are 9 rows, representing 9 error categories, where "1st" is the error category with the largest output logits, "9th" is the error category with the ninth largest output logits, and so on. There are 10 columns, representing 10 classes (9 error categories and 1 ground truth.), from "1st" to "9th" representing the 9 error categories and "GT" representing the ground truth. For the error categories, we arrange the error categories in descending order according to the output logits of each error category, where the output logits refer to the output logits of the clean example corresponding to the adversarial example we counted. The "i" row and "j" column represent the mean values of w d on the "j" class when the adversarial example is misclassified as the error category with the "i" largest output logits. For all rows, we initialize their values to 0. We add up the w d of all adversarial examples that are misclassified as the same row and average them. If none of the adversarial examples are misclassified as a error category, then the values of the corresponding row are 0.
From Fig. 4 and Fig. 5, we have the same observations as section 3.2 of the main paper: 1). The diversified direction w d disobeys uniform distribution in all cases. 2). The diversified direction for each model has a model-specific bias in the positive/negative direction, specifically, as follows:
(a). The output logits of the error category increases, while the output logits of the ground truth decreases. For most models (e.g., AWP [49], Proxy [39], Fast [48], Geometry [60], HYDRA [40], Hypersphere [35], MART [47], Pre-training [22]), when an adversarial example is misclassified as an error category, the w d for the error category is mostly positive, i.e., the output logits of the error category increases, while the w d for the ground truth is mostly negative, i.e., the output logits of the ground truth decreases. This is intuitive because when the output logits of the error category of adversarial examples are greater than the output logits of the ground truth, then the examples are successfully attacked. (b). The output logits of the error category increases, and the output logits of the ground truth also increases. However, there are some models whose w d is counterintuitive, such as Feature Scatter [55], Interpolation [56] and MMA [10]. When adversarial examples are misclassified as an error category, the w d for the error category is positive, i.e., the output logits of the error category increases, and the w d for the ground truth is also positive, i.e., the output logits of the ground truth also increases. Although this model has good adversarial robustness against weaker adversarial attack ( i.e., PGD [31]), it is poor in adversarial robustness against stronger attacks ( i.e., AA and A 3 ). A potential reason is that these defense models use gradient masks [34], and PGD chooses a bad starting point, which hinders the performance. (c). The output logits of the error category decreases, and the output logits of the ground truth also increases. The most counter-intuitive is Regular [25], when an adversarial example is misclassified as an error category, the w d for the error category is negative, i.e., the output logits of the error category decreases, and the w d for the ground truth is positive, i.e., the output logits of the ground truth increases. This model also uses gradient masks, which leads to extremely poor adversarial robustness of this model against stronger attacks.
Since the diversified initialization directions of models have some bias, and are not uniformly distributed, generating model-specific initial directions is very important and helps to obtain better performance.
C. Results of A 3 across various datasets, network architectures and metrics.
In this section, we show the results of the proposed A 3 attack across various defense strategies, datasets, network architectures and metrics. The setup is the same as section 4.1 of the main paper.
Results. As can be seen in Tab. 4, we show the effectiveness of the proposed A$^3$ across more datasets (e.g., MNIST, CIFAR10, and ImageNet), network architectures (e.g., VGG16, DenseNet161, ShuffleNet, etc.) and metrics (e.g., $L_\infty$ and $L_2$). The experimental results show that A$^3$ is better than AA on various datasets, model architectures and metrics.

Table 4. The results of the proposed A$^3$ attack across various defense strategies, datasets, network architectures and metrics. The "acc" column shows the robust accuracies of different models. The "→" column shows the iteration number of forward propagation (million), while the "←" column shows the iteration number of backward propagation (million). The "acc" column of A$^3$ shows the difference between the robust accuracies of AA and A$^3$; the "←" and "→" columns of A$^3$ show the speedup factors of A$^3$ relative to AA.
Figure 1. Quantitative statistical results of the diversified direction $w_d$ of adversarial examples on 11 models. The diversified direction $w_d$ for all models disobeys the uniform distribution.
Figure 2. Quantitative statistical results of the loss percentile of easy-to-attack images. As the number of iterations increases, the loss percentile of easy-to-attack images continually decreases.
For our A 3 , N = 10 . The other experimental settings are the same as in Sec. 4.1.
Figure 3. Comparisons of the performance of ADI+OSD, ADI and AA on 4 defenders. For each defender, we separately record the number of forward propagations and backward propagations by columns. The horizontal axes show the number of backward propagations (top) and forward propagations (bottom). The vertical axes show the percentage of remaining unsuccessful examples. The iteration numbers needed for ADI+OSD and ADI to defeat AA are also marked.
Figure 4. Quantitative statistical results of the diversified direction $w_d$ of adversarial examples on multiple defense models (i.e., AWP [49], Proxy [39], Fast [48], Feature Scatter [55], Geometry [60] and HYDRA [40]). The diversified direction of each model has a model-specific bias in the positive/negative direction. In other words, random sampling is sub-optimal.
Figure 5. Quantitative statistical results of the diversified direction $w_d$ of adversarial examples on multiple defense models (i.e., Hypersphere [35], Interpolation [56], Regular [25], MART [47], MMA [10] and Pre-training [22]). The diversified direction of each model has a model-specific bias in the positive/negative direction. In other words, random sampling is sub-optimal.
CIFAR-10
Defense Method | Model | Clean acc | Nominal acc | PGD acc | ODI acc | AA acc | AA → | AA ← | A3 acc (↓ vs. AA) | A3 → (speedup) | A3 ← (speedup) | ∆ acc
ULAT [19] †
WRN-70-16
91.10
65.87
66.75 66.06 65.88 51.20 12.90 65.78 ↓ 0.10 4.49(11.40×) 2.20(5.86×) ↓ 0.09
Fixing Data [36] WRN-70-16
88.54
64.20
65.10 64.46 64.25 50.82 12.59 64.19 ↓ 0.06 4.41(11.52×) 2.17(5.81×) ↓ 0.01
ULAT [19] †
WRN-28-10
89.48
62.76
63.63 63.01 62.80 49.62 12.30 62.70 ↓ 0.10 4.28(11.58×) 2.10(5.85×) ↓ 0.05
Fixing Data [36] WRN-28-10
87.33
60.73
61.64 61.09 60.75 47.98 11.91 60.66 ↓ 0.09 4.14(11.59×) 2.04(5.83×) ↓ 0.07
RLPE [42] †
WRN-34-15
86.53
60.41
61.25 60.69 60.41 47.53 11.82 60.31 ↓ 0.10 4.12(11.52×) 2.02(5.84×) ↓ 0.10
AWP [49] †
WRN-28-10
88.25
60.04
60.55 60.23 60.04 47.20 11.70 59.98 ↓ 0.06 4.09(11.54×) 2.01(5.82×) ↓ 0.06
RLPE [42] †
WRN-28-10
89.46
59.66
60.78 59.88 59.66 47.09 11.72 59.51 ↓ 0.15 4.10(11.49×) 2.00(5.85×) ↓ 0.15
Geometry [60] † ‡
WRN-28-10
89.36
59.64
60.17 59.59 59.64 47.10 11.67 59.53 ↓ 0.11 4.10(11.49×) 2.00(5.85×) ↓ 0.11
RST [5] †
WRN-28-10
89.69
62.50
60.64 59.44 59.53 47.10 11.70 59.42 ↓ 0.11 4.10(11.49×) 2.01(5.82×) ↓ 3.08
Proxy [39] †
WRN-34-10
85.85
59.09
60.51 59.94 59.09 46.70 11.60 58.99 ↓ 0.10 4.04(11.56×) 1.98(5.86×) ↓ 0.10
OAAT [1] WRN-34-10
85.32
58.04
58.84 58.25 58.04 45.64 11.34 57.98 ↓ 0.06 3.99(11.43×) 1.96(5.76×) ↓ 0.06
HYDRA [40] †
WRN-28-10
88.98
59.98
58.27 57.60 57.14 45.20 11.20 57.06 ↓ 0.08 3.91(11.56×) 1.92(5.83×) ↓ 2.92
ULAT [19] WRN-70-16
85.29
57.20
57.90 57.48 57.20 45.20 11.20 57.08 ↓ 0.12 3.90(11.59×) 1.92(5.83×) ↓ 0.12
ULAT [19] WRN-34-20
85.64
56.82
57.40 57.00 56.86 44.96 11.18 56.76 ↓ 0.10 3.88(11.60×) 1.90(5.89×) ↓ 0.10
MART [47] †
WRN-28-10
87.50
65.04
58.09 56.80 56.29 44.60 11.10 56.20 ↓ 0.09 3.86(11.55×) 1.89(5.93×) ↓ 8.84
Pre-training [22] †
WRN-34-10
87.11
57.40
56.43 55.32 54.92 43.40 10.80 54.76 ↓ 0.16 3.73(11.64×) 1.83(5.90×) ↓ 2.64
Proxy [39]
ResNet-18
84.38
55.60
56.31 54.98 54.43 43.21 10.71 54.35 ↓ 0.08 3.75(11.52×) 1.84(5.81×) ↓ 1.25
AT HE [35] WRN-34-20
85.14
62.14
55.33 54.21 53.74 43.00 10.69 53.67 ↓ 0.07 3.68(11.68×) 1.81(5.91×) ↓ 8.47
LBGAT [8] ‡
WRN-34-20
88.70
53.57
54.69 53.90 53.57 43.11 10.58 53.46 ↓ 0.11 3.69(11.63×) 1.81(5.80×) ↓ 0.11
FAT [59] WRN-34-10
84.52
53.51
54.46 53.83 53.51 42.94 10.54 53.42 ↓ 0.09 3.68(11.72×) 1.81(5.83×) ↓ 0.09
Overfitting [37] WRN-34-20
85.34
58.00
55.21 53.95 53.42 42.10 10.50 53.33 ↓ 0.09 3.66(11.50×) 1.80(5.83×) ↓ 4.67
Self-adaptive [24] ‡
WRN-34-10
83.48
58.03
54.39 53.62 53.33 42.10 10.50 53.20 ↓ 0.13 3.66(11.50×) 1.80(5.83×) ↓ 4.83
TRADES [57] ‡
WRN-34-10
84.92
56.43
54.02 53.31 53.08 42.00 10.40 53.01 ↓ 0.07 3.63(11.57×) 1.78(5.75×) ↓ 3.42
LBGAT [8] ‡
WRN-34-10
88.22
52.86
54.37 53.26 52.86 41.80 10.30 52.76 ↓ 0.10 3.64(11.48×) 1.79(5.79×) ↓ 0.10
OAAT [1]
ResNet-18
80.24
51.06
51.69 51.28 51.06 40.54 10.21 51.02 ↓ 0.04 3.51(11.53×) 1.72(5.93×) ↓ 0.04
SAT [41] WRN-34-10
86.84
50.72
52.95 51.38 50.72 40.14 10.01 50.62 ↓ 0.10 3.50(11.46×) 1.72(5.81×) ↓ 0.10
Robustness [13]
ResNet-50
87.03
53.29
52.19 50.14 49.21 39.10 9.80 49.16 ↓ 0.05 3.42(11.43×) 1.68(5.83×) ↓ 4.13
YOPO [54] WRN-34-10
87.20
47.98
47.11 45.57 44.83 35.60 9.00 44.77 ↓ 0.06 3.09(11.52×) 1.52(5.92×) ↓ 3.21
MMA [10]
WRN-28-4
84.36
47.18
47.78 42.42 41.51 33.30 8.60 41.27 ↓ 0.24 3.17(10.50×) 1.66(5.19×) ↓ 5.85
DNR [28]
ResNet-18
87.32
40.41
42.15 41.01 40.41 32.81 8.72 40.26 ↓ 0.15 2.81(11.67×) 1.38(6.32×) ↓ 5.93
CNL [3] ‡
ResNet-18
81.30
79.67
40.26 40.23 40.22 32.70 8.70 39.83 ↓ 0.39 2.74(11.93×) 1.34(6.49×) ↓ 39.84
Feature Scatter [55] WRN-28-10
89.98
60.60
54.63 42.91 36.62 30.00 8.20 36.31 ↓ 0.33 11.02(2.72×) 5.44(1.51×) ↓ 24.33
Interpolation [56] WRN-28-10
90.25
68.70
66.72 49.35 36.45 30.00 8.50 36.21 ↓ 0.24 11.21(2.64×) 5.52(1.54×) ↓ 32.32
Sensible [26] WRN-34-10
91.51
57.23
56.04 43.15 34.22 28.20 7.80 34.00 ↓ 0.22 10.66(2.65×) 5.25(1.49×) ↓ 23.23
Regularization [25]
ResNet-18
90.84
77.68
52.77 19.73 1.35
3.10
2.30
0.89 ↓ 0.46
2.24(1.38×) 1.09(2.11×) ↓ 76.79
CIFAR-100
Defense Method | Model | Clean acc | Nominal acc | PGD acc | ODI acc | AA acc | AA → | AA ← | A3 acc (↓ vs. AA) | A3 → (speedup) | A3 ← (speedup) | ∆ acc
ULAT [19] †
WRN-70-16
69.15
36.88
38.64 37.41 36.88 29.84 7.42 36.86 ↓ 0.02 2.56(11.64×) 1.25(5.92×) ↓ 0.02
Fixing Data [36] WRN-70-16
63.56
34.64
35.95 34.98 34.64 28.02 6.96 34.55 ↓ 0.09 2.38(11.76×) 1.16(6.00×) ↓ 0.04
Fixing Data [36] WRN-28-10
62.41
32.06
33.39 32.36 32.06 25.53 6.48 32.00 ↓ 0.06 2.24(11.38×) 1.10(5.90×) ↓ 0.06
OAAT [1] WRN-34-10
65.73
30.35
31.62 30.93 30.35 24.34 6.11 30.31 ↓ 0.04 2.18(11.14×) 1.07(5.70×) ↓ 0.04
LBGAT [8] ‡
WRN-34-20
62.55
30.20
31.65 30.49 30.20 23.97 6.10 30.12 ↓ 0.08 2.16(11.11×) 1.05(5.80×) ↓ 0.08
ULAT [19] WRN-70-16
60.86
30.03
31.03 30.41 30.03 23.93 6.09 29.99 ↓ 0.04 2.13(11.23×) 1.04(5.86×) ↓ 0.04
LBGAT [8] ‡
WRN-34-10
60.64
29.33
30.56 29.63 29.33 23.21 5.94 29.18 ↓ 0.15 2.11(11.00×) 1.03(5.77×) ↓ 0.15
AWP [49] WRN-34-10
60.38
28.86
30.70 29.45 28.86 23.01 5.84 28.78 ↓ 0.08 2.10(10.96×) 1.02(5.72×) ↓ 0.08
Pre-training [22] WRN-28-10
59.23
28.42
30.56 29.13 28.42 22.74 5.73 28.31 ↓ 0.11 2.08(10.93×) 1.02(5.61×) ↓ 0.11
OAAT [1]
ResNet18
62.02
27.14
27.90 27.47 27.14 21.74 5.61 27.09 ↓ 0.05 2.34(9.29×) 1.15(4.88×) ↓ 0.05
SAT [41] WRN-34-10
62.82
24.57
26.69 25.43 24.57 19.70 5.10 24.51 ↓ 0.06 1.90(10.36×) 0.93(5.48×) ↓ 0.06
Overfitting [37] PAResNet-18 53.83
18.95
20.15 19.39 18.95 15.28 4.00 18.90 ↓ 0.05 1.64(9.32×) 0.80(5.00×) ↓ 0.05
Table 1. Comparison of robust accuracy (%) under the attack of A$^3$, PGD, ODI, and AutoAttack (AA) across various defense strategies. The "acc" column shows the robust accuracies of different models. The "Nominal" column shows the robust accuracies reported by the defense models. The "∆" column shows the difference between the robust accuracies of "Nominal" and A$^3$. The "→" column shows the iteration number of forward propagation (million), while the "←" column shows the iteration number of backward propagation (million). Models marked with † were additionally trained with unlabeled datasets. We used $\epsilon = 8/255$ except for models marked with ‡, which used $\epsilon = 0.031$ as originally reported by the authors. Notably, the "acc" column of A$^3$ shows the difference between the robust accuracies of AA and A$^3$, and the "←" and "→" columns of A$^3$ show the speedup factors of A$^3$ relative to AA.
Table 3. Effectiveness of the two modules ADI and OSD on 4 different attack methods.
Defense Method | Dataset (number of test samples) | Metrics | Model | Clean acc | AA acc | AA → | AA ← | A3 acc | A3 → | A3 ←
Undefended
ImageNet(5000)
L ∞ ( = 4/255)
ResNet50
76.74
0.0
0.40
0.39
0.0
0.02(20.0×) 0.005(78.0×)
DARI [38]
ImageNet(5000)
L ∞ ( = 4/255) WideResNet-50-2
68.46 38.14 15.15 3.82 38.12 ↓ 0.02 2.67(5.67×)
1.31(2.90×)
DARI [38]
ImageNet(5000)
L ∞ ( = 4/255)
ResNet50
64.10 34.66 13.78 3.49 34.64 ↓ 0.02 2.47(5.58×)
1.22(2.86×)
DARI [38]
ImageNet(5000)
L ∞ ( = 4/255)
ResNet18
52.90 25.30 10.10 2.58 25.16 ↓ 0.14 1.96(5.15×)
0.96(2.69×)
DARI [38]
ImageNet(5000)
L 2 ( = 3.0)
DenseNet161
66.14 36.52 14.51 3.67 36.50 ↓ 0.02 2.59(5.60×)
1.28(2.87×)
DARI [38]
ImageNet(5000)
L 2 ( = 3.0)
VGG16-BN
56.24 29.62 11.79 2.99 29.62 ↓ 0.00 2.20(5.36×)
1.08(2.77×)
DARI [38]
ImageNet(5000)
L 2 ( = 3.0)
ShuffleNet
43.16 17.64 7.08
1.85 17.56 ↓ 0.08 1.58(4.48×)
0.78(2.37×)
DARI [38]
ImageNet(5000)
L 2 ( = 3.0)
MobileNet-V2
49.62 24.78 9.89
2.52 24.74 ↓ 0.04 1.94(5.10×)
0.95(2.65×)
Fixing Data [36]
CIFAR10(10000)
L 2 ( = 0.5)
WideResNet-28-10 91.79 78.80 62.00 15.20 78.79 ↓ 0.01 5.35(11.59×) 2.63(5.78×)
Robustness [13]
CIFAR10(10000)
L 2 ( = 0.5)
ResNet50 90.83 69.23 54.56 13.45 69.21 ↓ 0.02 4.72(11.56×) 2.32(5.80×)
Proxy [39]
CIFAR10(10000)
L 2 ( = 0.5)
WideResNet-34-10 90.31 76.11 59.89 14.69 76.10 ↓ 0.01 5.18(11.56×) 2.55(5.76×)
Overfitting [37]
CIFAR10(10000)
L 2 ( = 0.5)
ResNet18
88.67 67.68 53.34 13.15 67.64 ↓ 0.04 4.61(11.57×) 2.27(5.79×)
ULAT [19]
MNIST(10000)
L ∞ ( = 0.3)
WideResNet-28-10 99.26 96.34 76.05 18.44 96.31 ↓ 0.03 6.53(11.64×) 3.22(5.71×)
TRADES [57]
MNIST(10000)
L ∞ ( = 0.3)
SmallCNN
99.48 92.76 73.12 17.88 92.71 ↓ 0.05 6.33(11.55×) 3.12(5.73×)
[1] Sravanti Addepalli, Samyak Jain, Gaurang Sriramanan, Shivangi Khare, and Venkatesh Babu Radhakrishnan. Towards achieving adversarial robustness beyond perceptual limits. In ICML 2021 Workshop on Adversarial Machine Learning, 2021.
[2] Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In ICML, 2018.
[3] Matan Atzmon, Niv Haim, Lior Yariv, Ofer Israelov, Haggai Maron, and Yaron Lipman. Controlling neural level sets. In NeurIPS, 2019.
[4] Nicholas Carlini and David A. Wagner. Towards evaluating the robustness of neural networks. In SP, 2017.
[5] Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C. Duchi, and Percy Liang. Unlabeled data improves adversarial robustness. In NeurIPS, 2019.
[6] Francesco Croce and Matthias Hein. Minimally distorted adversarial examples with a fast adaptive boundary attack. In ICML, 2020.
[7] Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In ICML, pages 2206-2216. PMLR, 2020.
[8] Jiequan Cui, Shu Liu, Liwei Wang, and Jiaya Jia. Learnable boundary guided adversarial training. In ICCV, pages 15721-15730, October 2021.
[9] Guneet S. Dhillon, Kamyar Azizzadenesheli, Zachary C. Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, and Animashree Anandkumar. Stochastic activation pruning for robust adversarial defense. In ICLR, 2018.
[10] Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, and Ruitong Huang. MMA training: Direct input space margin maximization through adversarial training. In ICLR, 2020.
[11] Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In CVPR, 2018.
[12] Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. Evading defenses to transferable adversarial examples by translation-invariant attacks. In CVPR, 2019.
[13] Logan Engstrom, Andrew Ilyas, Hadi Salman, Shibani Santurkar, and Dimitris Tsipras. Robustness (python library), 2019.
[14] Lianli Gao, Yaya Cheng, Qilong Zhang, Xing Xu, and Jingkuan Song. Feature space targeted attacks by statistic alignment. In IJCAI, 2021.
[15] Lianli Gao, Qilong Zhang, Jingkuan Song, Xianglong Liu, and Heng Tao Shen. Patch-wise attack for fooling deep neural network. In ECCV, 2020.
[16] Lianli Gao, Qilong Zhang, Jingkuan Song, and Heng Tao Shen. Patch-wise++ perturbation for adversarial targeted attacks. CoRR, abs/2012.15503, 2020.
[17] Zhitao Gong, Wenlu Wang, and Wei-Shinn Ku. Adversarial and clean data are not twins. arXiv preprint arXiv:1704.04960, 2017.
[18] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.
[19] Sven Gowal, Chongli Qin, Jonathan Uesato, Timothy A. Mann, and Pushmeet Kohli. Uncovering the limits of adversarial training against norm-bounded adversarial examples. CoRR, abs/2010.03593, 2020.
[20] Sven Gowal, Jonathan Uesato, Chongli Qin, Po-Sen Huang, Timothy A. Mann, and Pushmeet Kohli. An alternative surrogate loss for pgd-based adversarial testing. arXiv preprint arXiv:1910.09338, 2019.
[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
[22] Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. In ICML, 2019.
[23] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In CVPR, 2017.
[24] Lang Huang, Chao Zhang, and Hongyang Zhang. Self-adaptive training: beyond empirical risk minimization. In NeurIPS, 2020.
[25] Charles Jin and Martin Rinard. Manifold regularization for adversarial robustness. CoRR, abs/2003.04286, 2020.
[26] Jungeum Kim and Xiao Wang. Sensible adversarial learning. 2019.
[27] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
[28] Souvik Kundu, Mahdi Nazemi, Peter A. Beerel, and Massoud Pedram. DNR: A tunable robust pruning framework through dynamic network rewiring of dnns. In ASPDAC, pages 344-350, 2021.
[29] Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In ICLR Workshop, 2017.
[30] Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, and John E. Hopcroft. Nesterov accelerated gradient and scale invariance for adversarial attacks. In ICLR, 2020.
Towards deep learning models resistant to adversarial attacks. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu, ICLR. 611Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018. 1, 2, 3, 6, 11
Composite adversarial attacks. Xiaofeng Mao, Yuefeng Chen, Shuhui Wang, Hang Su, Yuan He, Hui Xue, AAAI. Xiaofeng Mao, Yuefeng Chen, Shuhui Wang, Hang Su, Yuan He, and Hui Xue. Composite adversarial attacks. In AAAI, 2021. 2
Deepfool: A simple and accurate method to fool deep neural networks. Alhussein Seyed-Mohsen Moosavi-Dezfooli, Pascal Fawzi, Frossard, CVPR. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: A simple and accurate method to fool deep neural networks. In CVPR, 2016. 1
A learning and masking approach to secure learning. Linh Nguyen, Sky Wang, Arunesh Sinha, In GameSec. 411Linh Nguyen, Sky Wang, and Arunesh Sinha. A learning and masking approach to secure learning. In GameSec, 2018. 4, 11
Boosting adversarial training with hypersphere embedding. Tianyu Pang, Xiao Yang, Yinpeng Dong, Taufik Xu, Jun Zhu, Hang Su, NeurIPS. 13Tianyu Pang, Xiao Yang, Yinpeng Dong, Taufik Xu, Jun Zhu, and Hang Su. Boosting adversarial training with hy- persphere embedding. In NeurIPS, 2020. 4, 7, 11, 13
Fixing data augmentation to improve adversarial robustness. Sven Sylvestre-Alvise Rebuffi, Dan A Gowal, Florian Calian, Olivia Stimberg, Timothy A Wiles, Mann, abs/2103.01946CoRR714Sylvestre-Alvise Rebuffi, Sven Gowal, Dan A. Calian, Flo- rian Stimberg, Olivia Wiles, and Timothy A. Mann. Fixing data augmentation to improve adversarial robustness. CoRR, abs/2103.01946, 2021. 7, 14
Overfitting in adversarially robust deep learning. Leslie Rice, Eric Wong, J Zico Kolter, ICML, 2020. 714Leslie Rice, Eric Wong, and J. Zico Kolter. Overfitting in adversarially robust deep learning. In ICML, 2020. 7, 14
Do adversarially robust imagenet models transfer better? In NeurIPS. Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, Aleksander Madry, 2020. 14Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do adversarially robust im- agenet models transfer better? In NeurIPS, 2020. 14
Improving adversarial robustness using proxy distributions. Vikash Sehwag, Saeed Mahloujifar, Tinashe Handina, Sihui Dai, Chong Xiang, Mung Chiang, Prateek Mittal, abs/2104.09425CoRR1214Vikash Sehwag, Saeed Mahloujifar, Tinashe Handina, Si- hui Dai, Chong Xiang, Mung Chiang, and Prateek Mittal. Improving adversarial robustness using proxy distributions. CoRR, abs/2104.09425, 2021. 4, 7, 11, 12, 14
HYDRA: pruning adversarially robust neural networks. Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana, NeurIPS. 1112Vikash Sehwag, Shiqi Wang, Prateek Mittal, and Suman Jana. HYDRA: pruning adversarially robust neural net- works. In NeurIPS, 2020. 3, 7, 11, 12
Wagner. Improving adversarial robustness through progressive hardening. CoRR, abs. Chawin Sitawarin, Supriyo Chakraborty, David A , Chawin Sitawarin, Supriyo Chakraborty, and David A. Wag- ner. Improving adversarial robustness through progressive hardening. CoRR, abs/2003.09347, 2020. 7
Robust learning via persistency of excitation. CoRR, abs. Kaustubh Sridhar, Oleg Sokolsky, Insup Lee, James Weimer, 47Kaustubh Sridhar, Oleg Sokolsky, Insup Lee, and James Weimer. Robust learning via persistency of excitation. CoRR, abs/2106.02078, 2021. 4, 7
Rethinking the inception architecture for computer vision. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna, CVPR. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the in- ception architecture for computer vision. In CVPR, 2016. 1
Diversity can be transferred: Output diversification for white-and black-box attacks. Yusuke Tashiro, Yang Song, Stefano Ermon, arXiv:2003.06878611arXiv preprintYusuke Tashiro, Yang Song, and Stefano Ermon. Diver- sity can be transferred: Output diversification for white-and black-box attacks. arXiv preprint arXiv:2003.06878, 2020. 1, 2, 3, 6, 11
On adaptive attacks to adversarial example defenses. Florian Tramer, Nicholas Carlini, Wieland Brendel, Aleksander Madry, NeurIPS, 2020. 1Florian Tramer, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. On adaptive attacks to adversarial ex- ample defenses. In NeurIPS, 2020. 1, 2
Adversarial risk and the dangers of evaluating against weak attacks. Jonathan Uesato, O Brendan, Pushmeet 'donoghue, Aaron Kohli, Oord, ICML. 1Jonathan Uesato, Brendan O'donoghue, Pushmeet Kohli, and Aaron Oord. Adversarial risk and the dangers of evalu- ating against weak attacks. In ICML, 2018. 1, 2
Improving adversarial robustness requires revisiting misclassified examples. Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, Quanquan Gu, ICLR, 2020. 4. 713Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, and Quanquan Gu. Improving adversarial robustness requires revisiting misclassified examples. In ICLR, 2020. 4, 7, 11, 13
Fast is better than free: Revisiting adversarial training. Eric Wong, Leslie Rice, J Zico Kolter, ICLR, 2020. 3. 1112Eric Wong, Leslie Rice, and J. Zico Kolter. Fast is better than free: Revisiting adversarial training. In ICLR, 2020. 3, 11, 12
Adversarial weight perturbation helps robust generalization. Dongxian Wu, Shu-Tao Xia, Yisen Wang, NeurIPS, 2020. 4. 712Dongxian Wu, Shu-Tao Xia, and Yisen Wang. Adversarial weight perturbation helps robust generalization. In NeurIPS, 2020. 4, 7, 11, 12
Mitigating adversarial effects through randomization. Cihang Xie, Jianyu Wang, Zhishuai Zhang, Alan L Zhou Ren, Yuille, ICLR. Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan L. Yuille. Mitigating adversarial effects through ran- domization. In ICLR, 2018. 1
Improving transferability of adversarial examples with input diversity. Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Alan L Zhou Ren, Yuille, CVPR. Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, and Alan L. Yuille. Improving transferabil- ity of adversarial examples with input diversity. In CVPR, 2019. 1
Lafeat: Piercing through adversarial defenses with latent features. Yunrui Yu, Xitong Gao, Cheng-Zhong Xu, CVPR. Yunrui Yu, Xitong Gao, and Cheng-Zhong Xu. Lafeat: Piercing through adversarial defenses with latent features. In CVPR, 2021. 2
. Sergey Zagoruyko, Nikos Komodakis, arXiv:1605.07146Wide residual networks. arXiv preprintSergey Zagoruyko and Nikos Komodakis. Wide residual net- works. arXiv preprint arXiv:1605.07146, 2016. 1
You only propagate once: Accelerating adversarial training via maximal principle. Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, NeurIPS. Zhanxing Zhu, and Bin DongDinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, and Bin Dong. You only propagate once: Accelerat- ing adversarial training via maximal principle. In NeurIPS, pages 227-238, 2019. 7
Defense against adversarial attacks using feature scattering-based adversarial training. Haichao Zhang, Jianyu Wang, NeurIPS. 1112Haichao Zhang and Jianyu Wang. Defense against adversar- ial attacks using feature scattering-based adversarial training. In NeurIPS, 2019. 4, 7, 11, 12
Adversarial interpolation training: A simple approach for improving model robustness. Haichao Zhang, Wei Xu, 13Haichao Zhang and Wei Xu. Adversarial interpolation train- ing: A simple approach for improving model robustness. 2019. 4, 7, 11, 13
Theoretically principled trade-off between robustness and accuracy. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P Xing, Laurent El Ghaoui, Michael I Jordan, ICML. 414Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. In ICML, pages 7472-7482, 2019. 4, 7, 14
Curriculum-based meta-learning. Ji Zhang, Jingkuan Song, Yazhou Yao, Lianli Gao, ACM MM. Ji Zhang, Jingkuan Song, Yazhou Yao, and Lianli Gao. Curriculum-based meta-learning. In ACM MM, pages 1838- 1846, 2021. 1
Attacks which do not kill training make adversarial learning stronger. Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan S Kankanhalli, ICML. 47Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, and Mohan S. Kankanhalli. Attacks which do not kill training make adversarial learning stronger. In ICML, pages 11278-11287, 2020. 4, 7
Geometry-aware instance-reweighted adversarial training. Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan S Kankanhalli, ICLR. 1112Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, and Mohan S. Kankanhalli. Geometry-aware instance-reweighted adversarial training. In ICLR, 2021. 3, 7, 11, 12
Beyond imagenet attack: Towards crafting adversarial examples for black-box domains. Qilong Zhang, Xiaodan Li, Yuefeng Chen, Jingkuan Song, Lianli Gao, Yuan He, Hui Xue, ICLR. 2022Qilong Zhang, Xiaodan Li, Yuefeng Chen, Jingkuan Song, Lianli Gao, Yuan He, and Hui Xue. Beyond imagenet at- tack: Towards crafting adversarial examples for black-box domains. In ICLR, 2022. 1
Practical no-box adversarial attacks with training-free hybrid image transformation. Qilong Zhang, Chaoning Zhang, Chaoqun Li, Jingkuan Song, Lianli Gao, Heng Tao Shen, abs/2203.04607, 2022. 1CoRRQilong Zhang, Chaoning Zhang, Chaoqun Li, Jingkuan Song, Lianli Gao, and Heng Tao Shen. Practical no-box ad- versarial attacks with training-free hybrid image transforma- tion. CoRR, abs/2203.04607, 2022. 1
Towards large yet imperceptible adversarial image perturbations with perceptual color distance. Zhengyu Zhao, Zhuoran Liu, Martha A Larson, CVPR. 2020Zhengyu Zhao, Zhuoran Liu, and Martha A. Larson. To- wards large yet imperceptible adversarial image perturba- tions with perceptual color distance. In CVPR, 2020. 1
| [
"https://github.com/liuye6666/adaptive",
"https://github.com/fra31/auto-attack"
]
|
[
"Intermediate Service Facility Planning in a Stochastic and Competitive Market: Incorporating Agent-infrastructure Interactions over Networks",
"Intermediate Service Facility Planning in a Stochastic and Competitive Market: Incorporating Agent-infrastructure Interactions over Networks"
]
| [
"Sina Baghali \nDepartment of Civil\nEnvironmental and Construction Engineering\nUniversity of Central Florida\n\n",
"Zhaomiao Guo [email protected] \nDepartment of Civil\nEnvironmental and Construction Engineering\nUniversity of Central Florida\n\n",
"Julio Deride \nDepartment of Mathematics\nUniversidad Técnica Federico Santa María Santiago\nChile\n",
"Yueyue Fan \nDepartment of Civil and Environmental Engineering\nUniversity of California Davis\n\n"
]
| [
"Department of Civil\nEnvironmental and Construction Engineering\nUniversity of Central Florida\n",
"Department of Civil\nEnvironmental and Construction Engineering\nUniversity of Central Florida\n",
"Department of Mathematics\nUniversidad Técnica Federico Santa María Santiago\nChile",
"Department of Civil and Environmental Engineering\nUniversity of California Davis\n"
]
| []
| This paper presents a network-based multi-agent optimization model for the strategic planning of service facilities in a stochastic and competitive market. We focus on the type of service facilities that are of intermediate nature, i.e., users may need to deviate from the shortest path to receive/provide services in between the users' planned origins and destinations. This problem has many applications in emerging transportation mobility, including dynamic ride-sharing hub design and competitive facility location and allocation problems for alternative fuel vehicle refueling stations. The main contribution of this paper is establishing a new multi-agent optimization framework considering decentralized decision makings of facility investors and users over a transportation network and providing rigorous analyses of its mathematical properties, such as uniqueness and existence of system equilibrium. In addition, we develop an exact convex reformulation of the original multi-agent optimization problems to overcome computational challenges brought by non-convexity. Extensive analysis on case studies showed how the proposed model can capture the complex interaction between different stakeholders in an uncertain environment. Additionally, our model allowed quantifying the value of stochastic modeling and information availability by exploring stochastic metrics, including value of stochastic solution (VSS) and expected value of perfect information (EVPI), in a multi-agent framework. | null | [
"https://export.arxiv.org/pdf/2304.00669v1.pdf"
]
| 257,913,624 | 2304.00669 | 0cd5fc14fb7d2a7e09ec8dc249fb85bf74c93bff |
Intermediate Service Facility Planning in a Stochastic and Competitive Market: Incorporating Agent-infrastructure Interactions over Networks
Sina Baghali
Department of Civil
Environmental and Construction Engineering
University of Central Florida
Zhaomiao Guo [email protected]
Department of Civil
Environmental and Construction Engineering
University of Central Florida
Julio Deride
Department of Mathematics
Universidad Técnica Federico Santa María Santiago
Chile
Yueyue Fan
Department of Civil and Environmental Engineering
University of California Davis
Intermediate Service Facility Planning in a Stochastic and Competitive Market: Incorporating Agent-infrastructure Interactions over Networks
Preprint submitted to Transportation Research Part C, April 4, 2023. arXiv:2304.00669v1 [math.OC] 3 Apr 2023. Keywords: Intermediate Service Facility; Competitive Facility Location; Multi-agent Optimization; Convex Reformulation. *(Corresponding Author) Assistant Professor, Department of Civil, Environmental and Construction Engineering; Resilient, Intelligent, and Sustainable Energy Systems Cluster; University of Central Florida, Orlando, FL 32816. Phone: 407-823-6215
This paper presents a network-based multi-agent optimization model for the strategic planning of service facilities in a stochastic and competitive market. We focus on the type of service facilities that are of intermediate nature, i.e., users may need to deviate from the shortest path to receive/provide services in between the users' planned origins and destinations. This problem has many applications in emerging transportation mobility, including dynamic ride-sharing hub design and competitive facility location and allocation problems for alternative fuel vehicle refueling stations. The main contribution of this paper is establishing a new multi-agent optimization framework considering decentralized decision makings of facility investors and users over a transportation network and providing rigorous analyses of its mathematical properties, such as uniqueness and existence of system equilibrium. In addition, we develop an exact convex reformulation of the original multi-agent optimization problems to overcome computational challenges brought by non-convexity. Extensive analysis on case studies showed how the proposed model can capture the complex interaction between different stakeholders in an uncertain environment. Additionally, our model allowed quantifying the value of stochastic modeling and information availability by exploring stochastic metrics, including value of stochastic solution (VSS) and expected value of perfect information (EVPI), in a multi-agent framework.
Introduction
Facility location-allocation problems (FLAPs), which seek the best strategy for locating facilities and allocating demands to the facilities, have wide applications in transportation science, supply chain and logistics, and infrastructure systems (Cornuéjols et al., 1983; Melo et al., 2009a; Hekmatfar, 2009). In this paper, we focus on the type of service facilities that are of intermediate nature, i.e., users may need to deviate from a predefined shortest path to receive/provide services in between the users' planned origins and destinations. The stakeholders we model include facility investors and facility users. Facility investors decide investment capacity and provide services to facility users to maximize their own profit. Facility users make facility selection and routing decisions to receive services. This problem has many applications in emerging transportation mobility, including competitive facility location problems for alternative fuel vehicle refueling stations and dynamic ride-sharing hub design.
Despite variations tailored to specific domain applications, there are some common features shared in the facility planning for alternative fuel vehicles and emerging mobility that have not been systematically studied in a unified framework. First, service demand could appear at travel origins, destinations, and/or intermediate locations. For example, plug-in electric vehicles (PEVs) can charge at home, workplaces, or public charging stations; ride-sharing/crowdsourced drivers may prefer to pick up and drop off riders/goods with minimum deviation from their planned route. These behaviors require additional modeling capabilities to provide flexibility in capturing node-based or/and link-based demand with possible deviation from pre-defined paths in an endogenous manner. The second feature is that the system involves multiple competitive stakeholders from both supply and demand sides, who are driven by self-interests. For example, individual facility providers invest in service facilities to maximize their own profits. Individual travelers accessing facilities aim to optimize their utility, which could include travel time and/or service costs/revenue. This feature requires a modeling framework that could capture different decision entities' interests and enable analysis at the system level, where performance is shaped collectively by all. The third feature concerns complex agent-infrastructure interactions, i.e., the close coupling between the users' choices of facilities and travel routes, facility providers' facility location decisions, and the resulting link travel time and locational service prices, that need to be studied over a network structure.
This paper aims to develop a generalized modeling framework and efficient computational algorithms to model and analyze intermediate service facility planning (ISFP) in a competitive and stochastic market. Specifically, we make contributions in the following three aspects: (1) we propose a unified system modeling framework to model the decentralized decision-making on the supply and demand sides of a competitive and stochastic market, with rigorous analysis of its mathematical properties (including equilibrium existence and uniqueness); (2) we extended the Combined Distribution and Assignment (CDA) model (Evans, 1976) to capture the coupling between route choice and intermediate facility choices over a congested transportation network; and (3) from a computational perspective, we tailored an exact convex reformulation to our proposed equilibrium model, which significantly improves the computation efficiency with guaranteed global convergence.
The remainder of the paper is organized as follows. Section 2 discusses relevant literature regarding competitive ISFP. Section 3 presents the proposed modeling framework and the mathematical formulations. The solution properties and computational strategies are provided in Section 4. Numerical experiments on the Sioux Falls test network are presented in Section 5 to provide analytical and numerical insights. Section 6 concludes the paper with a discussion and potential future extensions.
Literature Review
Given the ever-growing body of literature on FLAPs, we will focus on the discussion of the literature from a perspective that highlights what distinguishes our work from the past studies.
The readers may refer to (Owen and Daskin, 1998;Hale and Moberg, 2003;Snyder, 2006;Melo et al., 2009b;Daskin, 2011) for more comprehensive reviews on classic facility location models.
Most facility location-allocation models are built with a central planner's perspective, assuming that the location choices and sizes of different facilities can all be controlled by a single spatial monopoly/planner. Covering (Farahani et al., 2012), p-center (Lin and Lin, 2018), p-median (Hansen and Mladenović, 1997), and flow-capturing (Hodgson, 1990) are classic in this category.
In reality, however, an infrastructure system often involves multiple facility developers driven by self interests (Plastria, 2001).
To capture competitive nature of the supply side, competitive FLAPs, pioneered by Hotelling (1929), have been proposed and developed in facility location literature (Hakimi, 1983;Eiselt et al., 1993;Miller et al., 1996;Aboolian et al., 2007;Friesz, 2007;Drezner, 2009;Smith et al., 2009;Kress and Pesch, 2012). A competitive FLAP concerns the problem of deciding the locations and/or capacity of competing facilities, such as shopping centers, charging stations, ride-sharing hubs, restaurants, and others. In contrast to the classic facility location problem, the configurations of competing facilities are decided by a set of competitors who aim to optimize their own benefits. Competitive facility location models can be broadly categorized based on how competition (e.g. static/dynamic), demand (e.g. fix/elastic, discrete/continuous, deterministic/probabilistic), and decision space (e.g. discrete/network/continuous) are formulated. These studies typically focus on decentralized decision-making from the supply side while simplifying demand-side modeling. For example, most existing studies on competitive FLAPs consider nodal demand (i.e., demand appearing at discrete locations) (Klose and Drexl, 2005) or demand continuous in space (Li and Ouyang, 2010) without considering endogenous demand that could be influenced by the facility locations. In addition, they typically do not model the travel and routing behavior of facility users over transportation network. A few competitive FLAPs studies consider flow-based demand and user travel routes (e.g., (Berman and Krass, 1998;Wu and Lin, 2003)), but adopt a central planner's perspective, which may undermine the capability to forecast and analyze the facility network collectively shaped by multiple investors.
In addition, even though studies (Yang and Wong, 2000;Ouyang et al., 2015) have shown that transportation congestion and user's facility choice are closely coupled, existing studies typically consider congestion at the facility level (Guo et al., 2016;Luo et al., 2015) and assume exogenously given traffic congestion, travel routes, and facility service prices.
The flow-based FLAPs have been actively studied in the last decade, especially in the context of charging stations for EVs. Based on Flow Intercepting Location Model (FILM) (Hodgson, 1990;Berman et al., 1992), Shukla et al. (2011);Wen et al. (2014) developed mathematical programming models to determine the cost-effective charging station locations to maximize the intercepted traffic flow. In those studies, potential travel path deviations are not considered.
Several versions of FILM with detours have been proposed by Berman et al. (1995), including maximizing O-D flows intercepted subject to maximum detour allowance and minimizing total detours subject to covering all O-D flows. Building upon Berman et al. (1995), deviated paths were considered in (Li and Huang, 2014;Zockaie et al., 2016). This school of literature may not explicitly consider the EV driving range. Kuby and Lim (2005) has proposed the flow refueling location model (FRLM) to take into account driving range limitations for alternative fuel vehicles, which are further developed in (Kuby et al., 2009;Lim and Kuby, 2010;Capar and Kuby, 2012;MirHassani and Ebrazi, 2013;Kim and Kuby, 2013;de Vries and Duijzer, 2017;Wang et al., 2018;Guo et al., 2018;He et al., 2018;Boujelben and Gicquel, 2019).
Intermediate FLAPs have been applied in the context of EVs en-route charging to faciliate EV adoption (Kchaou-Boujelben, 2021). Wang et al. (2019) designed a charging station capacity and location problem for intra-city travels of EVs and developed a facility location problem for battery swapping of EVs to reduce the range anxiety of drivers. Both studies focused on fulfilling the charging requirements of drivers during their trips and did not consider the transportation network congestion and drivers' facility location and routing choice modeling. Xu and Meng (2020) considered the elastic demand of drivers along with their path deviation in FLAP. Li et al. (2022) proposed a metanetwork-based approach to model en-route charging station planing to improve the network-based algorithm in the branch-andbound framework. A bi-level optimization approach is proposed by Tran et al. (2021) to locate charging stations by minimizing the total travel time and installation costs at the upper level and captures re-routing behaviours of travellers with their driving ranges at the lower level. Authors use an iterative algorithm to solve the bi-level problem which does not guarantee finding the global optimal point. Schoenberg et al. (2022) developed a charging station siting and sizing problem with coordinated charging to facilitate both en-route and destination charging where the focus is on the charging scheduling instead of transportation network modeling. All of the above studies take a central planner's perspective, where all charging facilities are deployed by a single decision-maker.
Modeling decentralized decision-makers in facility location problems has gained more attention in recent studies (Guo et al., 2016;Zhao et al., 2020;Bao and Xie, 2021;Chen et al., 2020).
For example, Zhao et al. (2020) studied the optimal location of new charging stations among the existing competitive stations to maximize the profit of private investors. In that study, probabilistic modeling is developed to model the decision-making of the drivers where the congestion of the transportation network did not play a role in the charging station selection of the drivers. Neither study considered path deviation and en-route charging, which are essential in this context. Guo et al. (2016) proposed a network-based multi-agent optimization modeling framework to explicitly capture the decentralized behaviors of multiple facility investors and users in the context of public fast charger planning. That study modeled service demand only at travel destinations in deterministic market conditions. In addition, the formulation was nonconvex in (Guo et al., 2016). This paper aims to generalize (Guo et al., 2016) in the following three aspects. First, we relax the assumption that travelers receive facility services only at trip destinations by modeling both node-and flow-base facility service demand. Second, we consider a stochastic market where parameters, such as OD travel demand, link travel time, and operational costs, could be uncertain.These extended modeling capabilities improve the realism of the studied problem setting. Third, when the multi-agent optimization problem is coupled with high-dimensional stochastic parameters, the combined problem becomes too complex to be solved by solution approaches based on lopsided convergence of bivariate functions as proposed in (Guo et al., 2016). To overcome that challenge, we establish an exact convex reformulation of the proposed modeling framework, which leads to significantly improved computational efficiency.
Methodology
Problem Description and Modeling Framework
Our goal is to investigate the long-term equilibrium patterns of intermediate service facilities, considering the interactions between stakeholders from both facility supply and demand sides.
On the facility supply side, we consider multiple investors, each of whom makes facility deployment and operational decisions to maximize its own profits. We assume each facility provider does not have the sufficient market power to strategically influence the locational service prices through its own decision-making (i.e., service providers are perfectly competitive) 1 . On the demand side, there are (many) potential facility service users who make individual choices both for facilities and travel routes in order to maximize their utilities, which may depend on facility service prices, locational preference, and the travel time to access the facility. Locational facility service prices and travel time are endogenously determined through the interactions between service supply and demand over the transportation network.
We model this problem in the framework of network-based multi-agent optimization problem with equilibrium constraints (N-MOPEC) (Guo et al., 2016), which reflects the "selfish" nature of each decision entity while simultaneously capturing the interactions among all over a complex 1 We acknowledge that some markets may not fall into the perfect competition category, such as US electricity wholesale market, where the entry barriers and capacity constraint may lead to imperfect competition, especially during contingency (Guo and Fan, 2017). For those markets involving noticeable market power, an oligopolistic model, such as Cournot, Bertrand, or Hotelling model, would be more appropriate. These market settings are beyond the scope of this paper, and we shall leave the investigation of alternative market structures in the future. network structure. MOPEC is originally proposed by (Ferris and Wets, 2013), which includes a wide variety of variational problems as special cases: variational inequalities, complementarity problems, fixed points problems, etc. MOPEC has wide applications in economics (Deride et al., 2015), coupled transportation/power systems (Guo et al., 2021;Baghali et al., 2022), and ride-sourcing mobility systems (Afifah and Guo, 2022).
Consider a collection of agents A whose decisions are denoted as x_A = (x_a, a ∈ A). A MOPEC model, in its general form, can be expressed as:

$$x_a \in \operatorname*{argmax}_{x \in X_{p,\,x_{-a}} \subset \mathbb{R}^{n_a}} f_a(p, x, x_{-a}), \quad a \in \mathcal{A}, \tag{1}$$
where x represents the vector of agent a's decision variables and X_{p, x_−a} is the feasible set for agent a's problem, which may depend on the system parameters p and the other agents' decisions x_−a (where −a denotes A \ a). ℝ^{n_a} represents the domain of the set, with n_a being the dimension of agent a's decision variables. f_a is agent a's objective function, which depends on the decisions of the other agents and on system parameters p that may be endogenously determined by the system, such as prices. The parameters p and the decisions x_A resulting from the multi-agent optimization problem typically need to satisfy global equilibrium constraints, which can be formulated as a functional variational inequality (2):
$$D(p, x_{\mathcal{A}}) \in \partial g(p), \tag{2}$$

where g : ℝ^d → ℝ is a proper, lower semicontinuous, and convex function, and D is a set-valued mapping from ℝ^d × ℝ^{Σ_{a∈A} n_a} to ℝ^d.
The proposed N-MOPEC modeling framework in the context of competitive ISFP is illustrated in Figure 1. We consider two categories of stakeholders: (1) individual investor i (∈ I) decides the location, facility service capacity, and supply quantity to maximize his/her own profits;
(2) individual service user j (∈ J ) travels from a specific origin and destination. User j chooses facility service locations and travel routes to maximize his/her own utility. Even though the decisions of these agents are made individually, they are interdependent due to the shared market, infrastructure, and resources. To ensure an equilibrium state is reached, market clearing conditions, i.e., supply equals demand at every facility location, also need to be imposed.
Network-based Multi-agent Optimization Problems with Equilibrium Constraints (N-MOPEC)
Detailed Formulation for Each Agent
Modeling the Decisions of Facility Investors
Although this paper focuses on facility layout in the long run from investors' perspective, the effectiveness of planning decisions can not be properly evaluated without considering the performance in the operational stage. Therefore, we adopt a two-stage stochastic programming framework to distinguish between two types of decisions a facility investor has to make: (1) during the planning stage, each investor decides the capacities of facilities to invest facing future uncertainties, such as demand, access time, and marginal operational costs.
(2) during the operational stage, uncertain parameters are revealed, and each investor will choose its supply quantities based on market locational prices and operational costs. Because of the assumption of a perfectly competitive market, without loss of generality, we can aggregate the decision-making of all investors into a representative one and use aggregate investment and operational cost functions to capture their collective decisions. The detailed formulation for the representative investor is presented in the model (3).
$$\begin{aligned}
\underset{c^k,\ g^k_\xi \in \mathbb{R}_+,\ k \in K,\ \xi \in \Xi}{\text{maximize}}\quad & \mathbb{E}_\xi\Big[\sum_{k \in K}\big(\rho^k_\xi\, g^k_\xi - \phi_g(g^k_\xi)\big)\Big] - \sum_{k \in K}\phi_c(c^k) && \text{(3a)}\\
\text{subject to}\quad & g^k_\xi - c^k \le 0, \qquad \forall k \in K,\ \xi \in \Xi && \text{(3b)}
\end{aligned}$$
where:
K: set of candidate investment locations, indexed by k;
Ξ: vector set of uncertain parameters, indexed by ξ;
c^k: investment capacity allocated at location k;
g^k_ξ: total supply at location k in scenario ξ;
ρ^k_ξ: unit service price at location k in scenario ξ, endogenously determined by the market;
E_ξ: expectation with respect to the uncertain parameters ξ;
φ_c(·): aggregate capital cost function with respect to facility capacity;
φ_g(·): aggregate operational cost function with respect to supply quantity.
The objective function (3a) maximizes the expected net profit, calculated as the expectation of the total revenue Σ_{k∈K} ρ^k_ξ g^k_ξ minus the operating cost Σ_{k∈K} φ_g(g^k_ξ), minus the total investment cost incurred during the planning stage, Σ_{k∈K} φ_c(c^k). Total investment costs could include costs associated with land acquisition, construction, and equipment purchase. Constraint (3b) is the capacity constraint that ensures the supplied quantity at each location k in each scenario ξ does not exceed the total capacity. The remaining constraints are non-negativity restrictions. Note that throughout the paper, we denote vectors in lowercase bold font.
The interpretation of uncertainties ξ is two-fold. First, investors can not predict the future service demand due to uncertain factors, such as total demand (e.g., EV adoption), market competition, travel/charging time/costs, etc. In this case, the interpretation of the probability of ξ is the probability of uncertain parameters. Second, the state of the systems (e.g., facility service demand) may change over time, which can be grouped into homogeneous time segments (e.g., peak and off-peak hours). In this case, the probability of ξ measures the duration percentage of certain homogeneous time segments in the studied horizon. In other words, ξ can represent a realization of uncertain parameters and/or a specific homogeneous time segment.
Note that φ_c(·) and φ_g(·) are aggregate capital and operational cost functions at each location. In this paper, we assume φ_c(·) and φ_g(·) to be convex functions, e.g., linear functions or quadratic forms with positive leading coefficients. Besides mathematical convenience, a convex production cost function implies two desired properties: (1) as service demand at a location increases, it may cause congestion in the upstream supply chain, which leads to a higher marginal cost; (2) as demand increases, higher-cost production resources may start to be utilized due to capacity limitations of lower-cost resources. For example, due to space limitations, the earlier investment can be made at a location with cheaper rent and/or construction costs, whereas later investment may have to be built at a more expensive location. Capacity cost functions with increasing marginal costs are widely used to model the cost of charging stations (Ghamami et al., 2016, 2020; Guo et al., 2018). In terms of operational costs, when the facility needs certain resources to operate and the resource supply is limited, the marginal production cost is usually monotone increasing with production quantity. For example, for charging facilities, energy prices increase with the demand quantity because cheaper energy resources will be dispatched first. For shared mobility, to attract an additional unit of drivers, transportation network companies usually need to pay higher prices.
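To make the two-stage structure of model (3) concrete, the following is a minimal sketch of the representative investor's problem for exogenously given locational prices, written with the cvxpy modeling package. The quadratic cost coefficients follow the base case used later in Section 5; the prices, probabilities, and problem sizes are hypothetical placeholders rather than values from the paper.

```python
import cvxpy as cp
import numpy as np

K, S = 2, 3                                    # candidate locations, scenarios (toy sizes)
pi = np.array([0.5, 0.3, 0.2])                 # scenario probabilities
rho = np.array([[350.0, 330.0],                # hypothetical locational prices rho^k_xi
                [360.0, 345.0],
                [340.0, 325.0]])

c = cp.Variable(K, nonneg=True)                # first-stage capacities c^k
g = cp.Variable((S, K), nonneg=True)           # second-stage supplies g^k_xi

phi_c = 0.1 * cp.sum_squares(c) + 170 * cp.sum(c)             # aggregate capital cost
expected_profit = 0
for s in range(S):
    phi_g = 0.1 * cp.sum_squares(g[s]) + 130 * cp.sum(g[s])   # operating cost in scenario s
    expected_profit += pi[s] * (rho[s] @ g[s] - phi_g)

prob = cp.Problem(cp.Maximize(expected_profit - phi_c),
                  [g[s] <= c for s in range(S)])               # capacity constraint (3b)
prob.solve()
print("capacities:", np.round(c.value, 1))
print("supplies:\n", np.round(g.value, 1))
```

In the full model, the prices are not inputs but are determined jointly with the users' choices through the market clearing condition introduced below.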
Modeling the Decisions of Facility Users in a Congested Transportation Network
Facility users' behaviors (facility choice and route choice) are affected by not only the characteristics of facilities but also the transportation network. The combined Distribution and Assignment (CDA) model (see, e.g., (Sheffi, 1985;Lam and Huang, 1992)) has been demonstrated effective in terms of integrating discrete choices (e.g., mode choices and destination choices) and traffic assignment in the context of charging infrastructure planning (He et al., 2013;Guo et al., 2016). In this study, instead of restricting the service location to be at the travel destinations, we propose a Generalized Combined Distribution and Assignment (GCDA) model, in which the facility location can be at either origin, destination, or anywhere in between.
Since all decisions from the demand side are operational decisions and scenario dependent, we omit the notation ξ for brevity throughout this subsection. A multinomial logit model is used to describe the choice of different facility locations k from origins r to destination s, with the utility function defined in (4).
$$U^{rsk} = \beta_0^k - \beta_1 t^{rsk} - \beta_2 \rho^k e^{rs} + \epsilon^{rsk} \tag{4}$$
where:
U^{rsk}: utility measure of a user who travels from r to s and receives service at k;
β: utility function parameters (model input);
t^{rsk}: equilibrium travel time from r to s, with a detour to service location k;
e^{rs}: average service demand from r to s (model input);
ε^{rsk}: error term of the utility from r to s, with a detour to service location k; ε^{rsk} follows an extreme value distribution.
The utility function of a traveler from origin node r to destination node s choosing facility k is assumed to be the summation of four parts: a location-specific attractiveness factor (β_0^k), travel time (weighted by β_1), service cost (weighted by β_2), and an error term (ε^{rsk}). The term β_2 ρ^k e^{rs} in the utility function (4) is the monetary service cost, where ρ^k e^{rs} represents the price that drivers pay for the service at facility k.
This price is calculated as the unit cost of service at station k (i.e., ρ^k) multiplied by the average charging quantity (e^{rs}). β_2 is a utility coefficient representing the disutility of each unit of money spent. Here, we have considered vehicles to have a similar facility demand e^{rs} for each O-D pair. However, this assumption can be easily relaxed by categorizing drivers from each origin node based on their different levels of facility needs. In the case of EVs, for example, e^{rs} can be categorized into different homogeneous groups to model EVs with different charging demand levels. Different exogenous utility factors can be included in the utility function without affecting the key modeling and computational strategies proposed in this study; service time, for instance, is not explicitly modeled in the utility function (4). The resulting GCDA model is formulated in (5).
$$\begin{aligned}
\underset{\hat{x}, \bar{x}, x, q \ge 0}{\text{minimize}}\quad & \sum_{a \in A}\int_0^{v_a} t_a(u)\,du + \frac{1}{\beta_1}\sum_{r \in R}\sum_{s \in S}\sum_{k \in K^{rs}} q^{rsk}\big(\ln q^{rsk} - 1 + \beta_2 \rho^k e^{rs} - \beta_0^k\big) && \text{(5a)}\\
\text{subject to}\quad & v_a = \sum_{r \in R}\sum_{s \in S}\sum_{k \in K^{rs}}\big(\hat{x}^{rsk}_a + \bar{x}^{rsk}_a\big), \quad \forall a \in A && \text{(5b)}\\
(\gamma)\quad & \hat{x}^{rsk} + \bar{x}^{rsk} = \sum_{p \in P^{rsk}}\big(B_{\hat{p}} + B_{\bar{p}}\big)x_p, \quad \forall r \in R,\ s \in S,\ k \in K^{rs} && \text{(5c)}\\
(\hat{\lambda})\quad & A\hat{x}^{rsk} = q^{rsk}E^{rk}, \quad \forall r \in R,\ s \in S,\ k \in K^{rs} && \text{(5d)}\\
(\bar{\lambda})\quad & A\bar{x}^{rsk} = q^{rsk}E^{ks}, \quad \forall r \in R,\ s \in S,\ k \in K^{rs} && \text{(5e)}\\
(\mu^{rs})\quad & \sum_{k \in K^{rs}} q^{rsk} = d^{rs}, \quad \forall r \in R,\ s \in S && \text{(5f)}
\end{aligned}$$
where:
q^{rsk}: traffic flow from r to s receiving service at k;
x_p: traffic flow on path p;
x̂^{rsk}_a: traffic flow on link a that belongs to the travel from r to k associated with the origin-service-destination triple rks; x̂^{rsk} is the vector of x̂^{rsk}_a over all links;
x̄^{rsk}_a: traffic flow on link a that belongs to the travel from k to s associated with the origin-service-destination triple rks; x̄^{rsk} is the vector of x̄^{rsk}_a over all links;
A: node-link incidence matrix of the network, with 1 at the starting node and −1 at the ending node of each link;
p̂: sub-path of path p ∈ P^{rsk} that connects r to k;
p̄: sub-path of path p ∈ P^{rsk} that connects k to s.

Constraint (5b) calculates the aggregate link flow v_a from the link flows associated with each rsk, x̂^{rsk}_a and x̄^{rsk}_a. Constraint (5c) guarantees there is always a feasible path flow solution x_p (p ∈ P) that can yield a given link flow pattern. Constraints (5d)-(5e) ensure flow conservation at each node, including the origin, intermediate stop, and destination nodes. Constraint (5f) guarantees that the sum of the flows to all facilities equals the total travel demand between each origin-destination pair. Note that the OD demand that does not need access to facility service can be considered as background traffic in model (5). In addition, the total demand d^{rs} may be elastic, and our modeling framework can be naturally extended to consider elastic travel demand depending on travel distance, time, service congestion, and costs. Those who are interested in elastic demand can refer to (Berman and Kaplan, 1987; Aboolian et al., 2012; Berman and Drezner, 2006). The rest of the constraints set non-negativity restrictions on path/link flows and trip distribution.
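As a small illustration of how constraints (5d)-(5e) encode flow conservation, the sketch below builds the node-link incidence matrix A for a toy four-node network and checks A x̂ = q E for one origin-to-facility movement. The interpretation of E^{rk} as an origin-destination indicator vector (+1 at r, −1 at k) is an assumption made for this illustration, since its formal definition is not reproduced here.

```python
import numpy as np

# Toy 4-node, 4-link network: links are (tail, head) pairs, 0-indexed.
links = [(0, 1), (1, 2), (0, 2), (2, 3)]
n_nodes = 4

# Node-link incidence matrix A: +1 at the starting node, -1 at the ending node.
A = np.zeros((n_nodes, len(links)))
for a, (i, j) in enumerate(links):
    A[i, a], A[j, a] = 1.0, -1.0

# Suppose q^{rsk} = 10 users travel from r = node 0 to facility k = node 2
# (the r->k leg of an rks triple), routed via node 1.
x_hat = np.array([10.0, 10.0, 0.0, 0.0])

# E^{rk} taken as the indicator vector (+1 at r, -1 at k); an assumption for illustration.
E_rk = np.zeros(n_nodes)
E_rk[0], E_rk[2] = 1.0, -1.0

print(np.allclose(A @ x_hat, 10.0 * E_rk))   # constraint (5d) holds -> True
```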
In the objective function (5a), the first term corresponds to the total user cost as modeled in a conventional static traffic equilibrium model; the second term involving q ln q corresponds to the entropy of the trip distribution, and the remaining terms correspond to the utility measure (4) of the travelers. This objective function does not have a physical interpretation, but at its optimum it guarantees the first Wardrop principle (Wardrop, 1952) together with the logit-based facility choice (Lemma 1).

Proof. See Appendix A.
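To make the logit facility-choice component concrete, the following sketch shows the demand split that an optimal solution of (5) reproduces, for fixed detour times and prices. The numbers are hypothetical, and in the full model both t^{rsk} and ρ^k are endogenous rather than fixed inputs.

```python
import numpy as np

beta0 = np.zeros(3)                      # locational attractiveness (0 in the base case)
beta1, beta2, e_rs = 1.0, 0.06, 1.0      # base-case utility parameters
t_rsk = np.array([14.0, 17.0, 20.0])     # equilibrium detour times r->k->s (hypothetical)
rho = np.array([140.0, 135.0, 150.0])    # locational service prices (hypothetical)
d_rs = 200.0                             # total OD demand requiring service

u = beta0 - beta1 * t_rsk - beta2 * rho * e_rs   # systematic utility, Eq. (4)
p = np.exp(u - u.max())
p /= p.sum()                                     # multinomial logit probabilities
q_rsk = d_rs * p                                 # facility split q^{rsk}
print(np.round(q_rsk, 1))
```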
The GCDA model proposed here can include three special cases, denoted as Intermediate Facility Service , Origin/Destination Facility Service, and Round-trip Facility Service, see Figure 2. Intermediate Facility Service case represents when facility users access the facility service on their way to a destination, such as refueling, banking, and convenience store services. In the Destination Facility Service case, drivers choose their destinations and, in the mean time, receive service at their destinations. For example, EV drivers may choose a restaurant and charge their vehicles at the same time. This case is precisely the conventional CDA model (Sheffi, 1985). In the Round-trip Facility Service case, a user who starts from the origin will make a dedicated trip to a facility location and will need to go back to the same origin after receiving service.
For example, employees who have lunch and need to go back to the workplace afterward fall into this case. To reduce our model to the Destination Facility Service case, we can specify s = k, while the Round-trip Facility Service case can be incorporated in our model by specifying r = s.
Figure 2: Three special cases of the GCDA model: Round-trip Facility Service, Intermediate Facility Service, and Destination Facility Service.

Note that the framework is not a stochastic user equilibrium (SUE) per the definition by Daganzo and Sheffi (1977), and we do not need to generate a predefined set of paths in advance.
The "stochasticity" in this paper refers to the uncertainties facility investors face when they make long-term planning decisions. Model (5) is scenario dependent, and we aim to model the equilibrium traffic pattern given each specific realization of a scenario. In other words, travelers do not face uncertainties when they make facility and routing choices. Model (5) is an extension of the standard Beckmann formulation of Wardrop user equilibrium (Wardrop, 1952) that additionally considers intermediate facility location choice using a logit model. A closely related category of models is the combined distribution and assignment (CDA) model (Sheffi, 1985; Lam and Huang, 1992), where travelers choose destinations instead of intermediate facility locations.
Market Clearing Conditions
Lastly, to ensure the market is stabilized, we formulate the market clearing conditions, which require that the total demand (Σ_{r∈R} Σ_{s∈S} e^{rs} q^{rsk}_ξ) equal the total supply (g^k_ξ) at each facility location in each scenario, as in (6). Recall that q^{rsk} represents the travel demand from r to s that receives service at location k, and e^{rs} represents the average quantity of service demand per user from r to s; therefore, the total service demand at k is Σ_{r∈R} Σ_{s∈S} e^{rs} q^{rsk}_ξ. Locational service prices ρ^k_ξ can be interpreted as the dual variables of (6). Locational service prices ρ^k_ξ influence the decision making of both the supply and demand sides (i.e., problems (3) and (5)) as they optimize their own objectives; on the other hand, any supply-demand imbalance will influence the locational service prices. We focus on estimating the locational service prices that lead to market clearing in an equilibrium state. Note that due to network congestion and accessibility costs, the prices may vary by location even if the services offered at each location are identical.
$$(\rho^k_\xi)\qquad g^k_\xi - \sum_{r \in R}\sum_{s \in S} e^{rs} q^{rsk}_\xi = 0, \qquad \forall k \in K,\ \xi \in \Xi. \tag{6}$$
System Equilibrium
The decisions of all participants in this system are interdependent and should be modeled and solved simultaneously as a whole system. Following the notion of Nash equilibrium, at system equilibrium, a unilateral decision change of one agent given the market clearing price ρ and other agents' decisions would diminish his/her pay-off. We state the system equilibrium more formally in Definition 1.
Definition 1 (system equilibrium). The equilibrium state of the system is one in which all facility providers achieve their own optimality (i.e., problem (3)) and facility users achieve their own optimality (i.e., problem (5)), given the market clearing prices ρ and all other agents' decisions; in addition, the market at each location is cleared by condition (6). Thus, a system equilibrium is defined by an investor strategy (c*, g*), a GCDA traffic pattern (x̂*, x̄*, x*, q*), and a vector of prices ρ*, such that (c*, g*) solves the investor problem (3) for prices ρ*,

(x̂*_ξ, x̄*_ξ, x*_ξ, q*_ξ) solves the GCDA problem (5) for prices ρ*_ξ, for each ξ ∈ Ξ,   (7)

and (g*, q*) satisfies the market clearing conditions in (6).
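For intuition only, the sketch below searches for an equilibrium in the sense of Definition 1 on a drastically simplified instance (two facilities, one OD pair, one scenario, fixed capacities, congestion-free detour times) by naive price adjustment: raise ρ^k where demand exceeds supply and lower it otherwise, with each side best-responding to prices. This is not the solution method used in this paper, which instead relies on the exact convex reformulation of Section 4; the cost coefficients follow the Section 5 base case and everything else is hypothetical.

```python
import numpy as np

beta1, beta2, e_rs, d_rs = 1.0, 0.06, 1.0, 100.0
t_rsk = np.array([10.0, 12.0])       # fixed detour times (no congestion in this toy)
cap = np.array([80.0, 80.0])         # fixed facility capacities

rho = np.array([150.0, 150.0])       # initial price guess
step = 0.05
for it in range(5000):
    # demand side: logit split of OD demand among facilities (best response of (5))
    u = -beta1 * t_rsk - beta2 * rho * e_rs
    p = np.exp(u - u.max()); p /= p.sum()
    demand = e_rs * d_rs * p
    # supply side: price taker supplies until marginal cost 0.2*g + 130 equals the price
    g = np.clip((rho - 130.0) / 0.2, 0.0, cap)
    excess = demand - g
    if np.abs(excess).max() < 1e-6:
        break
    rho = np.maximum(rho + step * excess, 0.0)   # raise price where demand exceeds supply
print("prices:", np.round(rho, 2), "supply:", np.round(g, 1))
```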
Solution Methods
The proposed modeling framework in Section 3 is a highly non-convex problem due to the complementarity conditions, which is challenging to solve, especially with a large number of scenarios. In this section, we propose an exact convex reformulation for the original N-MOPEC problem, which can lead to further scenario decomposition for scalability. In addition, leveraging the convex reformulation, we prove the existence and uniqueness of the equilibrium solution to the original N-MOPEC problem.
Exact Convex Reformulation
Solving for the system equilibrium state per Definition 1 is non-trivial due to the complementary nature of the model formulation. We propose an exact convex reformulation to compute the system equilibrium, which is shown in model (10). A similar approach is described in (Dvorkin, 2020) and has been proposed to convexify multi-agent system equilibrium problems in coupled transportation and power systems (Guo et al., 2021; Baghali et al., 2022) and ride-sourcing systems (Afifah and Guo, 2022). In the proposed exact convex reformulation (i.e., model (10)), the objective function minimizes the combined social non-transactional costs, i.e., the linear combination of the investors' costs and the normalized CDA objective function without the terms associated with the price vector ρ. The constraints of model (10) include the investors' constraint (3b), the GCDA constraints (5b)-(5f), and the market clearing condition (6).
$$\begin{aligned}
\underset{(c,\, g,\, \hat{x},\, \bar{x},\, x,\, q)}{\text{minimize}}\quad & \sum_{k\in K}\phi_c(c^k) + \mathbb{E}_\xi\Big[\sum_{k\in K}\phi_g(g^k_\xi)\Big] + \mathbb{E}_\xi\Big[\frac{\beta_1}{\beta_3}\sum_{a\in A}\int_0^{v_{a,\xi}} t_a(u)\,du + \frac{1}{\beta_3}\sum_{r\in R}\sum_{s\in S}\sum_{k\in K^{rs}} q^{rsk}_\xi\big(\ln q^{rsk}_\xi - 1 - \beta_0^k\big)\Big] && \text{(10a)}\\
\text{subject to}\quad & (\lambda^k_\xi)\quad g^k_\xi - \sum_{r\in R}\sum_{s\in S} e^{rs} q^{rsk}_\xi = 0, \quad \forall k\in K,\ \xi\in\Xi && \text{(10b)}\\
& (c, g) \text{ satisfies constraint (3b)} && \text{(10c)}\\
& (\hat{x}_\xi, \bar{x}_\xi, x_\xi, q_\xi) \text{ satisfies constraints (5b)--(5f) for scenario } \xi, \quad \forall \xi\in\Xi && \text{(10d)}
\end{aligned}$$
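The sketch below assembles a drastically simplified single-scenario instance of model (10) in cvxpy and reads the market-clearing dual back as the price, in the spirit of Lemma 3 below. It assumes β_0 = 0 and congestion-free (constant) travel times so that the Beckmann integral is linear, and it takes the normalization constant β_3 in (10a) equal to β_2; the last point is an assumption, since the footnote defining the normalization is not reproduced here.

```python
import cvxpy as cp
import numpy as np

beta1, beta2, e_rs, d_rs = 1.0, 0.06, 1.0, 100.0
t = np.array([10.0, 12.0])        # fixed detour times for the two candidate facilities

c = cp.Variable(2, nonneg=True)   # capacities
g = cp.Variable(2, nonneg=True)   # supplies
q = cp.Variable(2, nonneg=True)   # OD demand assigned to each facility

phi_c = 0.1 * cp.sum_squares(c) + 170 * cp.sum(c)
phi_g = 0.1 * cp.sum_squares(g) + 130 * cp.sum(g)
beckmann = (beta1 / beta2) * (t @ q)              # linear here, since t is flow-independent
entropy = cp.sum(-cp.entr(q) - q) / beta2         # (1/beta_2) * sum q (ln q - 1)

clearing = [g == e_rs * q]                        # market clearing (10b)
cons = clearing + [g <= c, cp.sum(q) == d_rs]
prob = cp.Problem(cp.Minimize(phi_c + phi_g + beckmann + entropy), cons)
prob.solve()

rho = clearing[0].dual_value                      # Lemma 3: price = dual / pi (pi = 1 here),
print("capacity:", np.round(c.value, 1))          # up to the solver's sign convention
print("prices:  ", np.round(rho, 1))
```

In the full Sioux Falls instance, the Beckmann term is a nonlinear function of link flows and constraints (5b)-(5f) are imposed for every scenario, but the convex structure of the program is unchanged.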
To investigate the existence and uniqueness of the system equilibrium and its relationship with the solutions of model (10), we first establish the convexity of model (10).

Lemma 2 (convexity of model (10)). Model (10) is convex if φ_c(·) and φ_g(·) are convex functions and t_a(·) is monotone increasing. Furthermore, if φ_c(·) and φ_g(·) are strictly convex functions and t_a(·) is strictly monotone increasing, model (10) is strictly convex and has a unique solution.
Proof. See Appendix A.
Lemma 3 (solutions of model (10) and their relationship with system equilibria). Assume φ_c(·) and φ_g(·) are convex functions and t_a(·) is monotone increasing. Then (c*, g*, x̂*, x̄*, x*, q*; λ) is a primal-dual solution of model (10) if and only if (c*, g*, x̂*, x̄*, x*, q*) is a system equilibrium (Definition 1), with equilibrium price vector ρ^k_ξ = λ^k_ξ / π_ξ for every ξ and k, where {π_ξ : ξ ∈ Ξ} is the probability distribution of ξ.
Proof. See Appendix A.
Lemma 3 establishes the equivalency between our convex reformulation and the equilibrium model (i.e., model (3), (5), and (6)). Furthermore, based on Lemma 2, we can further discuss the uniqueness of the system equilibrium under strict convexity conditions of the exact reformulation, as shown in Theorem 1.
Theorem 1 (existence and uniqueness of system equilibrium). If φ_c(·) and φ_g(·) are strictly convex functions and t_a(·) is strictly monotone increasing, the system has a unique equilibrium (c*, g*, x̂*, x̄*, x*, q*, ρ*) satisfying Definition 1, where (c*, g*, x̂*, x̄*, x*, q*) is the solution of model (10) and ρ^k_ξ = λ^k_ξ / π_ξ for every ξ and k.
Proof. See Appendix A.
The convex reformulation (10) can be directly solved by standard nonlinear solvers (e.g., IPOPT), which provides an effective way of finding a system equilibrium as described in Definition 1. As the dimension of the uncertainties increases, the problem may become more challenging to solve; but since model (10) is convex, it can be handled by classic scenario decomposition approaches, such as the progressive hedging (PH) algorithm.
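As an illustration of the scenario decomposition mentioned above, the following is a minimal progressive hedging loop on a toy two-stage convex surrogate of model (10), with a single first-stage capacity variable and a linear stand-in for the user-side benefit. The penalty parameter, demand limits, and benefit coefficient are hypothetical, and the sketch is not the implementation used for the experiments below.

```python
import cvxpy as cp
import numpy as np

pi = np.array([0.5, 0.5])              # scenario probabilities
demand_cap = np.array([100.0, 120.0])  # scenario limits on supply (hypothetical)
penalty = 5.0                          # PH penalty parameter
w = np.zeros(2)                        # PH multipliers
c_bar = 0.0                            # consensus first-stage capacity

for it in range(500):
    c_s = np.zeros(2)
    for s in range(2):
        c = cp.Variable(nonneg=True)
        g = cp.Variable(nonneg=True)
        obj = (0.1 * cp.square(c) + 170 * c               # capital cost
               + 0.1 * cp.square(g) + 130 * g - 400 * g   # operating cost minus benefit
               + w[s] * c + penalty / 2 * cp.square(c - c_bar))
        cp.Problem(cp.Minimize(obj), [g <= c, g <= demand_cap[s]]).solve()
        c_s[s] = c.value
    c_new = float(pi @ c_s)            # implementable (non-anticipative) capacity
    w += penalty * (c_s - c_new)       # multiplier update
    done = abs(c_new - c_bar) < 1e-3 and np.max(np.abs(c_s - c_new)) < 1e-3
    c_bar = c_new
    if done:
        break
print("consensus capacity:", round(c_bar, 2))
```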
Numerical Examples
We use the Sioux Falls network, a widely used benchmark network, as shown in Figure 3, to test the numerical performance of our solution methods and draw practical insights. The network consists of 24 nodes and 76 directed links. The number on each node/link in Figure 3 is the node/link index; the network parameters are listed in Table B.1 in Appendix B. For illustration purposes, the parameters in the travelers' utility function are assumed to be β_0 = 0, β_1 = 1, β_2 = 0.06, and e = 1. These utility parameter settings represent the case when users consider travel time and service prices when they choose facilities and routes, do not have a particular locational preference, and all have a homogeneous service demand and value of time. For the facility providers, we select a quadratic form for the investment and operational cost functions: φ_c(c) = 0.1c² + 170c and φ_g(g) = 0.1g² + 130g. We refer to the above specifications as the base case, on which sensitivity analyses will be further conducted. Note that the magnitudes of the parameters are arbitrary and for illustration purposes only. All the numerical experiments presented in this section were run on a 3.5 GHz Intel Core i5 processor with 8 GB of RAM under the Mac OS X operating system.
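For reference, the base-case cost functions and a BPR-type link performance function can be written down directly as below. The cost expressions are the ones stated above; the BPR form and its parameters (free-flow time, capacity, α, power) are assumptions, included only because the results discussion reports flow-to-capacity ratios and refers to BPR.

```python
import numpy as np

def phi_c(c):
    """Base-case capital cost: 0.1*c^2 + 170*c."""
    return 0.1 * c**2 + 170 * c

def phi_g(g):
    """Base-case operational cost: 0.1*g^2 + 130*g."""
    return 0.1 * g**2 + 130 * g

def travel_time(v, t0, cap, alpha=0.15, power=4):
    """BPR-type link performance function t_a(v_a); form and parameters are assumed."""
    return t0 * (1 + alpha * (v / cap) ** power)

v = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])
print(phi_c(100.0), phi_g(80.0))
print(np.round(travel_time(v, t0=6.0, cap=1500.0), 2))
```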
Deterministic Case
First, we investigate the deterministic case where investors make an investment decision based on the base-level future EV travel demand. Road congestion is an important factor in the transportation system that can influence the decision making of investors, as it will influence the facility selection and route choice of travelers (Duranton and Turner, 2011). However, most of the existing studies do not consider traffic congestion in their planning of intermediate service facilities (e.g., Wang et al., 2019). To quantitatively demonstrate the potential bias of investment resulting from neglecting congestion, we compare numerical results between cases with and without congestion. Figure 4 illustrates the impacts of the consideration of network congestion on the model results, including the equilibrium capacity, prices, and the traffic distribution of the transportation network. Figure 4a corresponds to a case where transportation network congestion is not considered in the investors' decision making (by setting the link capacity to be infinity), while Figure 4c corresponds to the base case where network congestion is explicitly modeled. Note that, for the sake of fair comparison, the link capacity used for reporting the flow-to-capacity ratio in both figures is set equal to the actual capacity of the link (i.e., the link capacity in the base case). The link flows in Figure 4a are hypothetical flows.³

³Note that c_a is the "capacity" parameter used in the BPR function rather than the true link capacity if congestion is not modeled in the intermediate facility location problem.
Considering a hypothetical scenario, if investors planned the infrastructure without considering transportation network congestion (i.e., following the facility capacity results presented in Figure 4a), once the users experience significant congestion, their actual location and routing choices would be adapted, which means the computed equilibrium capacity and service prices would no longer be optimal for individual investors. Therefore, the investors would have an incentive to make changes accordingly. If we let this system evolve, the system eventually converges to an equilibrium state identical to the base case results, as shown in Figure 4b. This experiment illustrates the importance of capturing realistic user-infrastructure interactions (e.g., user choices and transportation congestion) to find a stable system equilibrium state. Next, we study how different price sensitivities might affect the equilibrium investment layout and travel time by comparing cases with β 2 values at 0, 0.06, and 0.6. Figure 5 represents sensitivity analysis on β 2 and the specific magnitudes only aim to demonstrate the impact of different values of time on the equilibrium outcomes. More specifically, the interpretation of β 1 and β 2 are the dis-utility per unit of time and costs, respectively. Therefore, β 1 /β 2 represents the value of time, i.e., monetary costs per unit of time. Since β 1 = 1, β 2 = 0 represents the case when the value of time is infinitely, which means users choose the facility that takes the least detour and do not care about service prices. This case could represent facility providing life-critical service in emergency. When β 2 becomes larger, users put more weight on the service costs in addition to travel time when they choose facility. This case could represent daily facility service, such as EV charging facility. With increasing price sensitivity, the system has an increasing total travel time in equilibrium. The reason is that when travelers are more sensitive to price, they are more willing to choose a cheaper, albeit farther or more congested, service facility. In terms of equilibrium investment, higher price sensitivity leads to a more evenly distributed investment pattern because the preference of travelers for cheaper locations will naturally drive closer the equilibrium prices of different locations so that each location has similar attractiveness to investors.
Stochastic Case
Different sources of uncertainty can influence the choices of facility providers and users.
The impact of uncertainty is more prominent in the multi-agent framework because each agent's response to uncertainty also influences the decisions of the other agents. In this section, we focus on the uncertain total travel demand d_rs of each O-D pair; other uncertainty sources can be investigated similarly.
The service demand in each scenario ξ is modeled by scaling the base-case service demand with a random coefficient θ_ξ ∈ [θ_min, θ_max]. In the Sioux Falls test network, we set θ_min and θ_max to 1 and 1.2, respectively, and generated 20 service demand scenarios from a uniform distribution. We compare the results in three cases:
• Case 1: Deterministic problem, where only the expected elastic demand is considered.
• Case 2: Stochastic problem, where possible scenarios of demand and their associated probabilities are modeled.
• Case 3: Wait-and-see problem, where all stakeholders have a perfect forecast of the uncertain parameters when they make investment decisions (i.e., relaxing the capacity variables to be scenario dependent, without non-anticipativity constraints).
These three cases allow us to investigate the decision making of the stakeholders under different information availability (Section 5.2.1), as well as to quantify stochastic programming metrics, namely the value of stochastic solutions (VSS) and the expected value of perfect information (EVPI) (Birge and Louveaux, 2011), for each individual stakeholder (Section 5.2.2).
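A small sketch of the demand-scenario construction described above, assuming equally likely scenarios: each scenario scales the base O-D demand d_rs by a coefficient θ_ξ drawn uniformly from [θ_min, θ_max] = [1, 1.2], with 20 scenarios as in the Sioux Falls experiment. The base demand of 100 units per O-D pair follows the test-network description accompanying Figure 3; the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(seed=0)          # arbitrary seed, for reproducibility only
n_scenarios, theta_min, theta_max = 20, 1.0, 1.2
base_demand = 100.0                          # units per O-D pair in the base case

theta = rng.uniform(theta_min, theta_max, size=n_scenarios)   # one coefficient per scenario xi
prob = np.full(n_scenarios, 1.0 / n_scenarios)                # equal scenario probabilities (assumed)
demand_scenarios = theta * base_demand                        # scaled d_rs in each scenario

print("theta range:", round(float(theta.min()), 3), "-", round(float(theta.max()), 3))
print("expected demand per O-D pair:", float(prob @ demand_scenarios))
```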
The simulation times for the three cases were 0.153, 7.938, and 3.019 minutes, respectively. The longer times for cases 2 and 3, compared to case 1, reflect the additional computational burden of stochastic modeling. The computation time of case 3 is lower than that of case 2 because case 3 essentially solves the deterministic problem (case 1) repeatedly, once per scenario (20 scenarios here).
Results on the Decision Making of Stakeholders
Supplied service quantities, service prices, and total capacity at the facility locations for each of the three cases are presented in Figure 6. From Figure 6a, we can see that the service supply quantities at all facility locations are similar between case 1 and the mean values in case 3 (i.e., the supply under the mean demand is similar to the mean supply across demand scenarios).
However, the ranges of service supply in cases 2 and 3 vary across locations. The main reason is that in case 3 agents can make a different investment decision for each scenario and thus have more flexibility to optimize their supply quantities. For example, services provided at location 6 have a higher supply on average in case 3 than in case 2, meaning that perfect information on O-D demand can encourage facility provision at location 6. In terms of the variance of service supply, perfect information (case 3) does not have a uniform effect across locations: compared with case 2, the variance of service supply in case 3 is lower at location 6 but higher at location 22. A smaller variance indicates that the service supply is less sensitive to the uncertain parameters.
Similar to the supplied services, the locational capacities in case 1 are close to the mean capacities of the wait-and-see problem in case 3 (see Figure 6b), which is expected since in both case 1 and case 3 investors install exactly as much capacity as the supply quantity of each scenario and leave no capacity unused. Enforcing the non-anticipativity constraints in case 2 results in a single locational capacity shared by all scenarios (see Figure 6a). In general, the capacity at each location in case 2 is higher than the mean capacity investment in case 3, because investors in case 2 consider all scenarios simultaneously and may over-invest in order to capture the most profitable demand scenario, which has the highest service prices, so as to maximize their expected profits. In case 3, by contrast, the investors determine capacities for each scenario independently and can invest less in scenarios where lower capacity is needed. Additionally, with perfect information, investors have the flexibility to redistribute their capacity among locations in each scenario; for example, the facilities at locations 3, 12, and 22 may receive higher capacity investment in some extreme scenarios than in case 2.
Figure 6c shows the equilibrium service prices for each case. In general, the service prices in cases 1 and 3 are higher than the prices in case 2. However, in scenarios where the facility capacity is binding (g_ξ^k = c^k), we observe drastically higher service prices in case 2. These price spikes stem from the market-based modeling framework, in which service prices are based on the marginal cost of an additional unit of service: in case 2, the cost of additional service in a capacity-binding scenario must also cover the cost of capacity left unused in the other scenarios. This marginal-cost pricing mechanism also explains why facility investors tend to invest more when facing future uncertainty, since they can earn higher profits during supply shortages.
In summary, the key observations of the case analyses are as follows:
• With a perfect forecast of the uncertain parameters (case 3), the decision-makers act, on average, similarly to the deterministic case (case 1) in terms of supplied capacity and quantity, as presented in Figures 6a and 6b. In other words, the mapping from scenario to facility supply has the property that the supply under the mean demand is similar to the mean supply across demand scenarios.
• Stochastic decision making (case 2) leads to more investment in facility capacity as a response to uncertain future demand, which in turn leads to locational service supplies and demand quantities that differ from the case with perfect information on the uncertainties (case 3). For example, Figure 6b shows significantly higher invested capacity (especially at location 6) in case 2 compared to case 3, which results in different service supply quantities, as presented in Figure 6a.
• The market-based mechanism of service pricing could result in high prices for scenarios where the capacity constraints are binding in the stochastic problem (case 2), which leads to higher investment in case 2 compared with case 3. This is evident from the marginal locational service prices presented in Figure 6c.
Stochastic Programming Metrics
In order to quantify the impacts of stochastic programming and uncertainty information on the benefits of individual stakeholders, we investigate two classic stochastic programming metrics in the context of network equilibrium: (1) value of stochastic solutions (VSS) and (2) expected value of perfect information (EVPI).
VSS evaluates the potential benefit of implementing stochastic programming solutions that account for system uncertainties, compared with the deterministic solutions. EVPI quantifies the value of a perfect forecast of the uncertain parameters to the stakeholders' decision making. To calculate these metrics, we first compute the objective value of each stakeholder from the results of the three cases defined above. The VSS is then the difference in each stakeholder's objective value between case 2 (the stochastic problem) and case 1 (the deterministic problem). Since case 3 models the condition in which all stakeholders have access to the realization of the uncertain scenarios, the EVPI is the difference in objective value between case 3 (the wait-and-see problem) and case 2. Note that, in contrast to single-agent stochastic programming, VSS and EVPI may be negative in a multi-agent setting due to the complex interactions.
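A minimal sketch of the metric calculation just described, assuming each stakeholder's objective has already been evaluated under the three cases; the function name and example values are placeholders, not results from the paper. Following the definitions above, VSS is the case 2 minus case 1 objective difference and EVPI is the case 3 minus case 2 difference, computed per stakeholder, and either may be negative in this multi-agent setting.

```python
def stochastic_metrics(obj_case1, obj_case2, obj_case3):
    """Per-stakeholder VSS and EVPI as defined in the text (objective-value differences)."""
    vss = obj_case2 - obj_case1    # value of the stochastic solution vs. the deterministic one
    evpi = obj_case3 - obj_case2   # value of a perfect forecast vs. the stochastic solution
    return vss, evpi

# Hypothetical objective values for one stakeholder under cases 1-3 (placeholders only).
vss, evpi = stochastic_metrics(obj_case1=1.0e4, obj_case2=1.2e4, obj_case3=1.1e4)
print(f"VSS = {vss:.1f}, EVPI = {evpi:.1f}")   # a negative EVPI is possible here
```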
The objective value of the facility providers can be determined using model (3), and the objective of the service users is calculated as the total expected utility based on equation (11), where U_total is the total utility of the users, U is the drivers' utility defined in equation (4), and q are the resulting facility demands (traffic flows) after solving the problem.
$$U_{\mathrm{total}} = \mathbb{E}_{\xi}\left[\sum_{r\in R}\sum_{s\in S}\sum_{k\in K^{rs}} \frac{1}{\beta_2}\, q^{rsk}_{\xi}\, U^{rsk}_{\xi}\right] \qquad (11)$$
Notice that in equation (11) we have divided the expected utility by β_2 to normalize the utility into monetary units ($). Therefore, we can also analyze the system welfare, or surplus, as the sum of the service providers' objective and the users' total utility.
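The following sketch evaluates equation (11) and the resulting system surplus for a single O-D pair, assuming the equilibrium flows q^{rsk}_ξ, utilities U^{rsk}_ξ, scenario probabilities, and the providers' expected profit are available from the solved model; all numerical values below are placeholders.

```python
import numpy as np

def total_user_utility(prob, q, U, beta2):
    """Equation (11): expected utility over scenarios and facilities, normalized by beta2 into $."""
    # prob: (n_scenarios,);  q and U: (n_scenarios, n_facilities)
    per_scenario = (q * U).sum(axis=1) / beta2
    return float(prob @ per_scenario)

prob = np.array([0.5, 0.5])                     # two placeholder scenarios
q = np.array([[60.0, 40.0], [70.0, 50.0]])      # facility demand q^{rsk}_xi
U = np.array([[-1.2, -1.5], [-1.4, -1.6]])      # drivers' (dis)utility U^{rsk}_xi
provider_profit = 900.0                         # placeholder expected provider objective

users_utility = total_user_utility(prob, q, U, beta2=0.06)
print("users' expected utility ($):", round(users_utility, 1))
print("system surplus ($):", round(provider_profit + users_utility, 1))
```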
The calculated objectives and the system surplus are shown in Figure 7 for the three cases. Facility providers have the highest objective value in case 2 while facility users have the lowest utility, because in case 2 the market imposes much higher service prices on users in the capacity-binding scenarios (Figure 6c), resulting in higher benefits for investors and lower utility for users. The providers' objective value decreases under perfect information (case 3) compared with case 2, mainly because of the price spikes that occur in case 2 when supply is constrained by facility capacity. The users' objective value becomes less negative, which means that users benefit from perfect information. The improvement for users is more pronounced than the loss for providers, resulting in a higher system surplus in case 3 than in case 2.
The stochastic metrics defined above make these comparisons concrete. Based on the definition of VSS, the VSS for the providers is 24652.7 units, which represents the additional profit investors obtain by implementing the stochastic solution rather than the deterministic one.
Comparing the objectives in cases 2 and 3, the providers' objective value decreases by 23394.0 units while the users' objective value improves by 84225.6 units; these changes can be interpreted as the EVPI of each stakeholder. The metrics also show that providers are better off without a perfect forecast of the uncertain parameters, whereas users benefit from perfect information. From a system perspective, perfect information (case 3) and the deterministic solution (case 1) achieve higher system surpluses than the stochastic solution (case 2).
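Since the system surplus is defined above as the sum of the providers' objective and the users' total utility, the two stakeholder-level EVPI values reported here imply, as a simple arithmetic check, a system-level gain from perfect information of
$$-23394.0 + 84225.6 = 60831.6 \ \text{units},$$
consistent with the higher surplus of case 3 relative to case 2 in Figure 7.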
Results for disaggregated investors
The investment capacity results presented in the previous section are based on an aggregated cost function for the investors at each facility location. Here, we explicitly model multiple investors with heterogeneous cost functions to investigate the equilibrium capacity share between investors. The objective function of each investor takes the same quadratic form
$$\Phi(c^k_i) = a_i\,(c^k_i)^2 + b_i\, c^k_i, \qquad (12)$$
with a_i and b_i being the cost coefficients of investor i.
As an example, we consider two disaggregated investors i ∈ {1, 2}: the first with cost coefficients a_1 = 0.1 and b_1 = 170, and the second with a_2 = 1 and b_2 = 17. The first investor has a higher marginal installation cost, but its marginal cost increases more slowly with capacity; the second investor has the opposite cost structure. The first could be a firm that owns a large area of land with no existing infrastructure, and the second a firm that owns limited land with the underlying infrastructure already installed (e.g., a gas station owner). Figure 8 shows the capacity share between these two investors for the stochastic case (case 2). Since the second investor has a higher investment cost at high capacity levels, it holds a smaller share of the invested capacity at each location. The differences in total installed capacity across locations depend on drivers' facility and route choices, as discussed in the deterministic and stochastic analyses. Note that an aggregated cost function can still be used to model the total capacity at each node: in our example, the total capacity at each location in Figure 8 equals the total capacity obtained with the aggregated cost function.
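To illustrate how the quadratic cost structure in equation (12) drives the capacity shares in Figure 8, the sketch below computes each investor's profit-maximizing capacity at a single location for a given equilibrium service price, using the price-equals-marginal-cost logic of the perfectly competitive setting: a price-taking investor i supplies capacity up to the point where 2 a_i c_i + b_i equals the price. This is a simplified single-location, single-scenario sketch rather than the full equilibrium computation, and the price value is a placeholder; the cost coefficients are the ones quoted above.

```python
def investor_capacity(price, a, b):
    """Capacity at which marginal cost 2*a*c + b equals the service price (zero if price below b)."""
    return max(0.0, (price - b) / (2.0 * a))

price = 200.0                         # hypothetical equilibrium price at one location
investors = {"investor 1": (0.1, 170.0), "investor 2": (1.0, 17.0)}

caps = {name: investor_capacity(price, a, b) for name, (a, b) in investors.items()}
total = sum(caps.values())
for name, c in caps.items():
    print(f"{name}: capacity = {c:6.1f}  (share = {c/total:.2f})")
```

With the placeholder price above, the second investor ends up with the smaller share, matching the qualitative pattern reported for Figure 8.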
Discussion
In this paper, we have presented a new modeling framework along with an efficient solution method for long-term infrastructure system planning problems with challenges brought by intermediate facility provision, non-cooperative stakeholders, and complex agent-infrastructure interactions. The existence and uniqueness of equilibrium are proved. Through numerical examples, we demonstrated that (1) prices, investment, and profits might differ significantly across locations due to agent-infrastructure interactions;
(2) ignoring transportation congestion may bias the system assessment; (3) the equilibrium investment patterns may be sensitive to user preferences; (4) while stochastic decision making under uncertainty may lead to higher investment than the average investment under perfect information, it also leads to significantly higher service prices in the scenarios where the capacity constraint is binding; and (5) information on the uncertain parameters may not benefit the facility providers, but benefits the users and increases the system surplus.
This research can be extended in several directions. From a methodological viewpoint, the assumption of perfect competition may not fit applications in which there is noticeable market power on the supply or demand side, so different market structures could be investigated; for example, if suppliers have enough market power to anticipate and influence the responses of users, a bilevel leader-follower model would be more appropriate.
In addition, the modeling framework can be leveraged to investigate control strategies that influence market interactions so that a more resilient facility network can be achieved. We also observed an asymmetry in the value of information between service providers and service users, which is unique to a decentralized decision environment; how to design an effective information acquisition and sharing mechanism in this context is an interesting topic for future investigation.
Appendix A. Proofs.
Proof (Lemma 1). First, the objective function (5a) is convex because it is a linear combination of three basic convex functions: (1) $f_1(x) = \int_0^x g(u)\,du$, with $g(u)$ a positive and nondecreasing function; (2) $f_2(x) = x\ln x$; and (3) $f_3(x) = cx$. In addition, the constraints of problem (5) are all linear. Therefore, optimization problem (5) is convex.
Because of the differentiability of function (5a), the optimality conditions of problem (5) are equivalent to the following complementarity conditions, in addition to constraints (5c)-(5f): for all $a\in A$, $r\in R$, $s\in S$, $k\in K^{rs}$, $p\in P^{rsk}$,
$$0 \le x_p \ \perp\ \sum_{a\in A_p} t_a(\cdot) - \gamma^{T}(\tilde B_p + \hat B_p) \ge 0 \qquad (A.1a)$$
$$0 \le \tilde x^{rsk}_a \ \perp\ \gamma_a - A_a^{T}\tilde\lambda \ge 0 \qquad (A.1b)$$
$$0 \le \hat x^{rsk}_a \ \perp\ \gamma_a - A_a^{T}\hat\lambda \ge 0 \qquad (A.1c)$$
$$0 \le q^{rsk} \ \perp\ \frac{1}{\beta_1}\Big(\ln(q^{rsk}) + \beta_3 \frac{\rho^k e^{rs}}{inc^{rs}} - \beta_2 \sum_{i\in I_k} c^s_i - \beta^k_0\Big) + E^{rk\,T}\tilde\lambda + E^{ks\,T}\hat\lambda + \mu^{rs} \ge 0 \qquad (A.1d)$$
We first show that the traffic flow solution is a Wardrop user equilibrium by proving the following two conditions.
1. All used paths connecting $r,s,k$ have the same travel time. For all $r\in R$, $s\in S$, $k\in K^{rs}$, and for those $\bar p\in P^{rsk}$ with $x_{\bar p}>0$, we have $\sum_{a\in A_{\bar p}} t_a(\cdot) = \gamma^T(\tilde B_{\bar p}+\hat B_{\bar p})$ (because of (A.1a)).
Due to the following two conditions:
• for $\tilde a\in\bar p$, i.e., $\tilde B_{\bar p,\tilde a}=1$, we have $\tilde x^{rsk}_{\tilde a}>0$ and therefore $\gamma_{\tilde a}=A^T_{\tilde a}\tilde\lambda$ (because of (A.1b)), so $\gamma^T_{\tilde a}\tilde B_{\bar p,\tilde a}=\tilde\lambda^T A_{\tilde a}\tilde B_{\bar p,\tilde a}$;
• for $\tilde a\notin\bar p$, i.e., $\tilde B_{\bar p,\tilde a}=0$, we have $\gamma^T_{\tilde a}\tilde B_{\bar p,\tilde a}=\tilde\lambda^T A_{\tilde a}\tilde B_{\bar p,\tilde a}=0$;
we obtain $\gamma^T\tilde B_{\bar p}=\tilde\lambda^T A\tilde B_{\bar p}$. Notice that $A\tilde B_{\bar p}=E^{rk}$, so $\gamma^T\tilde B_{\bar p}=\tilde\lambda^T E^{rk}$.
By the same procedure, $\gamma^T\hat B_{\bar p}=\hat\lambda^T E^{ks}$.
So $\sum_{a\in A_{\bar p}} t_a(\cdot) = \gamma^T(\tilde B_{\bar p}+\hat B_{\bar p}) = \tilde\lambda^T E^{rk}+\hat\lambda^T E^{ks} =: \tau^{rsk}$, which depends only on $r,s,k$.
2. All unused paths connecting $r,s,k$ have travel times no smaller than those of the used paths. For all $r\in R$, $s\in S$, $k\in K^{rs}$, and for those $\bar p\in P^{rsk}$ with $x_{\bar p}=0$, we have $\sum_{a\in A_{\bar p}} t_a(\cdot) \ge \gamma^T(\tilde B_{\bar p}+\hat B_{\bar p})$ (because of (A.1a)). From (A.1b) and (A.1c), $\gamma_a \ge A^T_a\tilde\lambda$ and $\gamma_a \ge A^T_a\hat\lambda$ for all $a$. So $\sum_{a\in A_{\bar p}} t_a(\cdot) \ge \gamma^T(\tilde B_{\bar p}+\hat B_{\bar p}) \ge \tilde\lambda^T A\tilde B_{\bar p}+\hat\lambda^T A\hat B_{\bar p} = \tilde\lambda^T E^{rk}+\hat\lambda^T E^{ks} = \tau^{rsk}$.
Next, we show that the O-D demand solutions correspond to the service location choice with logit facility demand functions. This can be seen from (A.1d): for any $k$ with $q^{rsk}>0$,
$$\frac{1}{\beta_1}\Big(\ln(q^{rsk}) + \beta_3 \frac{\rho^k e^{rs}}{inc^{rs}} - \beta_2 \sum_{i\in I_k} c^s_i - \beta^k_0\Big) + E^{rk\,T}\tilde\lambda + E^{ks\,T}\hat\lambda + \mu^{rs} = 0.$$
After reorganization,
$$q^{rsk} = e^{\beta^k_0 - \beta_1(E^{rk\,T}\tilde\lambda + E^{ks\,T}\hat\lambda) + \beta_2\sum_{i\in I_k} c^s_i - \beta_3\frac{\rho^k e^{rs}}{inc^{rs}} + \beta_1\mu^{rs}} = e^{\beta^k_0 - \beta_1\tau^{rsk} + \beta_2\sum_{i\in I_k} c^s_i - \beta_3\frac{\rho^k e^{rs}}{inc^{rs}} + \beta_1\mu^{rs}} = e^{U^{rsk}+\beta_1\mu^{rs}}.$$
Proof of Lemma 2.
Objective function (10a) is a linear combination of five types of functions: $\phi_c(\cdot)$, $\phi_g(\cdot)$, $\int_0^{v_{a,\xi}} t_a(u)\,du$, $q\ln q$, and $q$. First, the representative investor cost functions $\phi_c(\cdot)$ and $\phi_g(\cdot)$ are convex by assumption. Second, since the link performance function $t_a(\cdot)$ is monotone increasing, $\int_0^{v_{a,\xi}} t_a(u)\,du$ is a convex function because its second-order derivative $t_a'(\cdot)\ge 0$. Third, the remaining functions $q\ln q$ and $q$ are easily shown to be convex by taking second-order derivatives.
Therefore, the reformulated problem (model (10)) corresponds to the minimization of a convex function under linear constraints, and hence model (10) is a convex problem (Rockafellar and Wets, 1998). Under constraint qualifications, the convex reformulated problem (i.e., model (10)) has at least one solution.
Furthermore, if $\phi_c(\cdot)$ and $\phi_g(\cdot)$ are strictly convex functions and $t_a(\cdot)$ is strictly monotone increasing, then, following the same logic as above, model (10) corresponds to the minimization of a strictly convex function under linear constraints. Therefore, the solution of model (10), if it exists, is unique.
Proof of Lemma 3.
Since $\phi_c(\cdot)$ and $\phi_g(\cdot)$ are convex functions and $t_a(\cdot)$ is monotone increasing, model (10) is a convex optimization problem by Lemma 2. It is not difficult to see that the first-order conditions associated with the convex optimization problem (10), with dual multipliers $\{\lambda^k_\xi\}$, are separable by agent, and that they correspond to the first-order conditions of the representative investor problem (3a)-(3b) and of the GCDA problem (5a)-(5f), respectively, with equilibrium price vector $\rho^k_\xi = \lambda^k_\xi/\pi_\xi$ for every $\xi$ and $k$, where $\{\pi_\xi:\xi\in\Xi\}$ is the probability distribution of $\xi$.
Proof of Theorem 1.
Theorem 1 directly follows from Lemma 2 and Lemma 3.
Bao and Xie (2021) developed a bi-level problem for the optimal location of en-route charging stations in congested networks; the authors assumed that charging prices are similar across all stations and do not influence users' facility choice. Chen et al. (2020) proposed a similar bi-level optimization framework in which an investor decides the facility locations and their capacities at the upper level, and drivers' choices are modeled at the lower level.
Figure 1: Illustration of the network-based MOPEC.
In our model, one may consider adding a dummy link connecting the closest transportation node to the facility; the travel time of this dummy link represents the service time needed at that facility, depending on the service capacity. Denote the transportation network by a directed graph G = (N, A), where N is the set of nodes (indexed by n) and A is the set of links (indexed by a). A node can represent a TAZ (source/sink of aggregated travel demand), a transport hub, or an intersection. A link can represent a path or a physical road section that connects two nodes. The GCDA model for a scenario ξ (∈ Ξ) is formulated in problem (5).
Notation: v_a, flow on link a; t_a(·), travel time function of link a, e.g., the Bureau of Public Roads (BPR) function; d_rs, travel demand from r to s (model input); B_p, link-path incidence vector of path p, whose i-th entry equals 1 if path p includes link i and 0 otherwise; E_ij, O-D incidence vector of O-D pair ij, with 1 at origin i and -1 at destination j; γ, λ̃, λ̂, μ, dual variables of the corresponding constraints.
Lemma 1 (Generalized Combined Distribution and Assignment). The optimal solutions (x̃*, x̂*, x*, q*) of problem (5) are the equilibrium solutions for the service location choice with logit facility demand functions and Wardrop user equilibrium.
Figure 2: Special cases of the GCDA model.
Figure 3: Base-case Sioux Falls test network. In Figure 3, the green, red, and blue nodes (5 of each) represent the sets of origins, destinations, and candidate facility locations, respectively. We consider 25 O-D pairs, each expecting 100 units of travel demand. For the link travel cost function t_a(v_a) we adopt a 4th-order Bureau of Public Roads (BPR) function, t_a = t_a^0 [1 + 0.15 (v_a/c_a)^4], where t_a^0 is the free-flow travel time (FFT) and c_a is the link capacity parameter; values of t_a^0 and c_a are documented in Appendix B.
Model (10) is a strictly convex optimization problem under mild conditions (see Lemma 2); we then show that any solution of model (10) also satisfies the system equilibrium definition 1 (see Lemma 3). Lemma 2 (Convexity of model (10) and solution uniqueness). Model (10) is convex if φ_c(·) and φ_g(·) are convex and t_a(·) is monotone increasing; its solution, if it exists, is unique if φ_c(·) and φ_g(·) are strictly convex and t_a(·) is strictly monotone increasing.
Figure 4: Impacts of modeling network congestion.
Figure 5: Impact of user preferences on investment.
Figure 6: Resulting decision variables: (a) services provided at the facility locations, (b) determined capacity of the facility locations, and (c) prices of the provided services, for cases 1, 2, and 3.
Figure 7: Stakeholders' objective values and system surplus.
Figure 8: Investors' capacity shares at facility locations.
Footnote: the CDA objective is normalized by β_1/β_3; the intuition of this step is to convert the unit of the CDA objective to $. A more rigorous justification of this form is given in Lemma 3.
Appendix B. Data inputs
References
Aboolian, R., Berman, O., Krass, D., 2007. Competitive facility location and design problem. European Journal of Operational Research 182 (1), 40-62.
Aboolian, R., Berman, O., Krass, D., 2012. Profit maximizing distributed service system design with congestion and elastic demand. Transportation Science 46 (2), 247-261.
Afifah, F., Guo, Z., 2022. Spatial pricing of ride-sourcing services in a congested transportation network. Transportation Research Part C: Emerging Technologies 1, 1-21.
Baghali, S., Guo, Z., Wei, W., Shahidehpour, M., 2022. Electric vehicles for distribution system load pickup under stressed conditions: A network equilibrium approach. IEEE Transactions on Power Systems 1 (1), 1-13.
Bao, Z., Xie, C., 2021. Optimal station locations for en-route charging of electric vehicles in congested intercity networks: A new problem formulation and exact and approximate partitioning algorithms. Transportation Research Part C: Emerging Technologies 133, 103447.
Berman, O., Bertsimas, D., Larson, R. C., 1995. Locating discretionary service facilities, II: maximizing market size, minimizing inconvenience. Operations Research 43 (4), 623-632.
Berman, O., Drezner, Z., 2006. Location of congested capacitated facilities with distance-sensitive demand. IIE Transactions 38 (3), 213-221.
Berman, O., Kaplan, E., 1987. Facility location and capacity planning with delay-dependent demand. International Journal of Production Research 25 (12), 1773-1780.
Berman, O., Krass, D., 1998. Flow intercepting spatial interaction model: a new approach to optimal location of competitive facilities. Location Science 6 (1-4), 41-65.
Berman, O., Larson, R. C., Fouska, N., 1992. Optimal location of discretionary service facilities. Transportation Science 26 (3), 201-211.
Birge, J. R., Louveaux, F., 2011. Introduction to Stochastic Programming. Springer Science & Business Media.
Boujelben, M. K., Gicquel, C., 2019. Efficient solution approaches for locating electric vehicle fast charging stations under driving range uncertainty. Computers & Operations Research 109, 288-299.
Capar, I., Kuby, M., 2012. An efficient formulation of the flow refueling location model for alternative-fuel stations. IIE Transactions 44 (8), 622-636.
Chen, R., Qian, X., Miao, L., Ukkusuri, S. V., 2020. Optimal charging facility location and capacity for electric vehicles considering route choice and charging time equilibrium. Computers & Operations Research 113, 104776.
Cornuéjols, G., Nemhauser, G., Wolsey, L., 1983. The uncapacitated facility location problem. Tech. rep., Cornell University Operations Research and Industrial Engineering.
Daganzo, C. F., Sheffi, Y., 1977. On stochastic models of traffic assignment. Transportation Science 11 (3), 253-274.
Daskin, M. S., 2011. Network and Discrete Location: Models, Algorithms, and Applications. John Wiley & Sons.
de Vries, H., Duijzer, E., 2017. Incorporating driving range variability in network design for refueling facilities. Omega 69, 102-114.
Deride, J., Jofré, A., Wets, R., 2015. Solving deterministic and stochastic equilibrium problems via augmented Walrasian. Technical report, Department of Mathematics, University of California Davis.
Drezner, T., 2009. Competitive facility location. Springer US, Boston, MA, pp. 396-401. URL http://dx.doi.org/10.1007/978-0-387-74759-0_73
Duranton, G., Turner, M. A., 2011. The fundamental law of road congestion: Evidence from US cities. American Economic Review 101 (6), 2616-2652.
Dvorkin, V., 2020. Stochastic and private energy system optimization.
Eiselt, H. A., Laporte, G., Thisse, J.-F., 1993. Competitive location models: A framework and bibliography. Transportation Science 27 (1), 44-54. URL http://pubsonline.informs.org/doi/abs/10.1287/trsc.27.1.44
Evans, S. P., 1976. Derivation and analysis of some models for combining trip distribution and assignment. Transportation Research 10 (1), 37-57.
Farahani, R. Z., Asgari, N., Heidari, N., Hosseininia, M., Goh, M., 2012. Covering problems in facility location: A review. Computers & Industrial Engineering 62 (1), 368-407.
Ferris, M. C., Wets, R., 2013. MOPEC: multiple optimization problems with equilibrium constraints. URL http://pages.cs.wisc.edu/~ferris/talks/mopta-aug.pdf
Friesz, T. L., 2007. Competitive facility location. Networks and Spatial Economics 7 (1), 1-2. URL http://dx.doi.org/10.1007/s11067-006-9008-1
Ghamami, M., Kavianipour, M., Zockaie, A., Hohnstadt, L. R., Ouyang, Y., 2020. Refueling infrastructure planning in intercity networks considering route choice and travel time delay for mixed fleet of electric and conventional vehicles. Transportation Research Part C: Emerging Technologies 120, 102802.
Ghamami, M., Zockaie, A., Nie, Y. M., 2016. A general corridor model for designing plug-in electric vehicle charging infrastructure to support intercity travel. Transportation Research Part C: Emerging Technologies 68, 389-402.
Guo, F., Yang, J., Lu, J., 2018. The battery charging station location problem: Impact of users' range anxiety and distance convenience. Transportation Research Part E: Logistics and Transportation Review 114, 1-18.
Guo, Z., Afifah, F., Qi, J., Baghali, S., 2021. A stochastic multiagent optimization framework for interdependent transportation and power system analyses. IEEE Transactions on Transportation Electrification 7 (3), 1088-1098.
Guo, Z., Deride, J., Fan, Y., 2016. Infrastructure planning for fast charging stations in a competitive market. Transportation Research Part C: Emerging Technologies 68, 215-227.
Guo, Z., Fan, Y., 2017. A stochastic multi-agent optimization model for energy infrastructure planning under uncertainty in an oligopolistic market. Networks and Spatial Economics 17 (2), 581-609.
Hakimi, S. L., 1983. On locating new facilities in a competitive environment. European Journal of Operational Research 12 (1), 29-35.
Hale, T. S., Moberg, C. R., 2003. Location science research: A review. Annals of Operations Research 123 (1), 21-35. URL https://doi.org/10.1023/A:1026110926707
Hansen, P., Mladenović, N., 1997. Variable neighborhood search for the p-median. Location Science 5 (4), 207-226.
He, F., Wu, D., Yin, Y., Guan, Y., 2013. Optimal deployment of public charging stations for plug-in hybrid electric vehicles. Transportation Research Part B: Methodological 47, 87-101.
He, J., Yang, H., Tang, T.-Q., Huang, H.-J., 2018. An optimal charging station location model with the consideration of electric vehicle's driving range. Transportation Research Part C: Emerging Technologies 86, 641-654.
Hekmatfar, R. M., 2009. Facility Location. Physica-Verlag.
Hodgson, M. J., 1990. A flow-capturing location-allocation model. Geographical Analysis 22 (3), 270-279.
Hotelling, H., 1929. Stability in competition. The Economic Journal 39 (153), 41-57. URL http://www.jstor.org/stable/2224214
Kchaou-Boujelben, M., 2021. Charging station location problem: A comprehensive review on models and solution approaches. Transportation Research Part C: Emerging Technologies 132, 103376.
Kim, J.-G., Kuby, M., 2013. A network transformation heuristic approach for the deviation flow refueling location model. Computers & Operations Research 40 (4), 1122-1131.
Klose, A., Drexl, A., 2005. Facility location models for distribution system design. European Journal of Operational Research 162 (1), 4-29.
Kress, D., Pesch, E., 2012. Sequential competitive location on networks. European Journal of Operational Research 217 (3), 483-499.
Kuby, M., Lim, S., 2005. The flow-refueling location problem for alternative-fuel vehicles. Socio-Economic Planning Sciences 39 (2), 125-145.
Kuby, M., Lines, L., Schultz, R., Xie, Z., Kim, J.-G., Lim, S., 2009. Optimization of hydrogen stations in Florida using the flow-refueling location model. International Journal of Hydrogen Energy 34 (15), 6045-6064.
Lam, W., Huang, H., 1992. A combined trip distribution and assignment model for multiple user classes. Transportation Research Part B: Methodological 26 (4), 275-287.
Li, J., Xie, C., Bao, Z., 2022. Optimal en-route charging station locations for electric vehicles: A new modeling perspective and a comparative evaluation of network-based and metanetwork-based approaches. Transportation Research Part C: Emerging Technologies 142, 103781.
Li, S., Huang, Y., 2014. Heuristic approaches for the flow-based set covering problem with deviation paths. Transportation Research Part E: Logistics and Transportation Review 72, 144-158.
Li, X., Ouyang, Y., 2010. A continuum approximation approach to reliable facility location design under correlated probabilistic disruptions. Transportation Research Part B: Methodological 44 (4), 535-548.
Lim, S., Kuby, M., 2010. Heuristic algorithms for siting alternative-fuel stations using the flow-refueling location model. European Journal of Operational Research 204 (1), 51-61.
Lin, C.-C., Lin, C.-C., 2018. The p-center flow-refueling facility location problem. Transportation Research Part B: Methodological 118, 124-142.
Luo, C., Huang, Y.-F., Gupta, V., 2015. Placement of EV charging stations - balancing benefits among multiple entities. IEEE Transactions on Smart Grid 8 (2), 759-768.
Melo, M. T., Nickel, S., Saldanha-da-Gama, F., 2009a. Facility location and supply chain management - a review. European Journal of Operational Research 196 (2), 401-412.
Melo, M. T., Nickel, S., Saldanha-da-Gama, F., 2009b. Facility location and supply chain management - a review. European Journal of Operational Research 196 (2), 401-412.
Miller, T. C., Friesz, T. L., Tobin, R. L., 1996. Equilibrium Facility Location on Networks. Springer Science & Business Media.
MirHassani, S., Ebrazi, R., 2013. A flexible reformulation of the refueling station location problem. Transportation Science 47 (4), 617-628.
Ouyang, Y., Wang, Z., Yang, H., 2015. Facility location design under continuous traffic equilibrium. Transportation Research Part B: Methodological 81 (Part 1), 18-33.
Owen, S. H., Daskin, M. S., 1998. Strategic facility location: A review. European Journal of Operational Research 111 (3), 423-447.
Plastria, F., 2001. Static competitive facility location: An overview of optimisation approaches. European Journal of Operational Research 129 (3), 461-470.
Rockafellar, R., Wets, R., 1998. Variational Analysis. Vol. 317 of Grundlehren der Mathematischen Wissenschaften. Springer (3rd printing 2009).
Schoenberg, S., Buse, D. S., Dressler, F., 2022. Siting and sizing charging infrastructure for electric vehicles with coordinated recharging. IEEE Transactions on Intelligent Vehicles.
Sheffi, Y., 1985. Urban Transportation Networks: Equilibrium Analysis with Mathematical Programming Methods.
Shukla, A., Pekny, J., Venkatasubramanian, V., 2011. An optimization framework for cost effective design of refueling station infrastructure for alternative fuel vehicles. Computers & Chemical Engineering 35 (8), 1431-1438.
Smith, H. K., Laporte, G., Harper, P. R., 2009. Locational analysis: highlights of growth to maturity. Journal of the Operational Research Society, S140-S148.
Snyder, L. V., 2006. Facility location under uncertainty: a review. IIE Transactions 38 (7), 547-564. URL https://doi.org/10.1080/07408170500216480
Tran, C. Q., Ngoduy, D., Keyvan-Ekbatani, M., Watling, D., 2021. A user equilibrium-based fast-charging location model considering heterogeneous vehicles in urban networks. Transportmetrica A: Transport Science 17 (4), 439-461.
Wang, C., He, F., Lin, X., Shen, Z.-J. M., Li, M., 2019. Designing locations and capacities for charging stations to support intercity travel of electric vehicles: An expanded network approach. Transportation Research Part C: Emerging Technologies 102, 210-232.
Wang, Y., Shi, J., Wang, R., Liu, Z., Wang, L., 2018. Siting and sizing of fast charging stations in highway network with budget constraint. Applied Energy 228, 1255-1271.
Wardrop, J. G., 1952. Road paper. Some theoretical aspects of road traffic research. In: ICE Proceedings: Engineering Divisions. Vol. 1. Thomas Telford, pp. 325-362.
Wen, M., Laporte, G., Madsen, O. B., Nørrelund, A. V., Olsen, A., 2014. Locating replenishment stations for electric vehicles: application to Danish traffic data. Journal of the Operational Research Society 65 (10), 1555-1561.
Wu, T.-H., Lin, J.-N., 2003. Solving the competitive discretionary service facility location problem. European Journal of Operational Research 144 (2), 366-378.
Xu, M., Meng, Q., 2020. Optimal deployment of charging stations considering path deviation and nonlinear elastic demand. Transportation Research Part B: Methodological 135, 120-142.
Xu, M., Yang, H., Wang, S., 2020. Mitigate the range anxiety: Siting battery charging stations for electric vehicle drivers. Transportation Research Part C: Emerging Technologies 114, 164-188.
Yang, H., Wong, S., 2000. A continuous equilibrium model for estimating market areas of competitive facilities with elastic demand and market externality. Transportation Science 34 (2), 216-227.
Zhao, Y., Guo, Y., Guo, Q., Zhang, H., Sun, H., 2020. Deployment of the electric vehicle charging station considering existing competitors. IEEE Transactions on Smart Grid 11 (5), 4236-4248.
Zockaie, A., Aashtiani, H. Z., Ghamami, M., Nie, Y., 2016. Solving detour-based fuel stations location problems. Computer-Aided Civil and Infrastructure Engineering 31 (2), 132-144.
ON THE LIMITING PROBLEMS FOR TWO EIGENVALUE SYSTEMS AND VARIATIONS
H. Bueno and Aldo H. S. Medeiros
1 Apr 2023
Abstract. Let Ω be a bounded, smooth domain. Supposing that α(p) + β(p) = p for all p ∈ (N/s, ∞) and lim_{p→∞} α(p)/p = θ ∈ (0, 1), we consider two systems for the fractional p-Laplacian and a variation on the first system. The first system is the following.
Introduction
In this paper we deal with different systems for the fractional p-Laplacian and study the behavior of their solutions (u p , v p ) as p goes to infinity: we prove that these solutions converge, in the viscosity sense, to solutions (u ∞ , v ∞ ) of related systems.
Let $\Omega\subset\mathbb{R}^N$ be a bounded, smooth domain and, for each $x\in\Omega$, let $\delta_x$ be the Dirac mass concentrated at $x$. Consider also functions $\alpha,\beta\colon(N/s,\infty)\to(1,\infty)$ satisfying
(h$_1$) $\alpha(p)+\beta(p)=p$, for all $p\in(N/s,\infty)$;
(h$_2$) $\lim_{p\to\infty}\alpha(p)/p=\theta\in(0,1)$.
For each $p>N/s$, we consider the system
$$\begin{cases} (-\Delta_p)^s u(x) = \lambda\,\alpha(p)\,|u|^{\alpha(p)-2}u\,|v(x_0)|^{\beta(p)} & \text{in }\Omega,\\ (-\Delta_p)^t v(x) = \lambda\,\beta(p)\left(\int_\Omega |u|^{\alpha(p)}dx\right)|v(x_0)|^{\beta(p)-2}v(x_0)\,\delta_{x_0} & \text{in }\Omega,\\ u=v=0 & \text{in }\mathbb{R}^N\setminus\Omega, \end{cases}\qquad (P^1_p)$$
where x 0 is a point in Ω, λ is a parameter, 0 < s ≤ t < 1 and (−∆ p ) r denotes the r-fractional p-Laplacian operator, which is defined, for any p > 1, by
$$(-\Delta_p)^r\phi(x) = \lim_{\varepsilon\to 0}\int_{\mathbb{R}^N\setminus B_\varepsilon(x)} \frac{|\phi(x)-\phi(y)|^{p-2}(\phi(x)-\phi(y))}{|x-y|^{N+rp}}\,dy \qquad (1)$$
for any $\phi\in C^\infty_0(\Omega)$, which is a dense subspace of $W^{r,p}_0(\Omega)$. We also recall that
$$\langle(-\Delta_p)^r u,\varphi\rangle := \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+rp}}\,dx\,dy$$
is the expression of (−∆ p ) r as an operator from W r,p 0 (Ω) into its dual. (The definition of the space W r,p 0 (Ω) will be given in the sequence.) We first prove that, for each p > N/s, this system has a unique solution. Then we consider the behavior of a sequence of these solutions as p → ∞ and prove that they converge uniformly to (u ∞ , v ∞ ), which are viscosity solutions of a related system. (Precise statements are given in the sequence.)
As a variation on system $(P^1_p)$, we consider the system
$$\begin{cases} (-\Delta_p)^s u(x) = \lambda\,\alpha(p)\,|u|^{\alpha(p)-2}u\,|v(x_v)|^{\beta(p)} & \text{in }\Omega,\\ (-\Delta_p)^t v(x) = \lambda\,\beta(p)\left(\int_\Omega|u|^{\alpha(p)}dx\right)|v(x_v)|^{\beta(p)-2}v(x_v)\,\delta_{x_v} & \text{in }\Omega,\\ u=v=0 & \text{in }\mathbb{R}^N\setminus\Omega, \end{cases}\qquad (P^1_\infty)$$
where $x_v\in\Omega$. To solve the above system we apply the same method used to handle problem $(P^1_p)$; see Remark 8. We also handle the system
$$\begin{cases} (-\Delta_p)^s u(x) = \lambda\,\alpha(p)\,|u(x_1)|^{\alpha(p)-2}u(x_1)\,|v(x_2)|^{\beta(p)}\,\delta_{x_1} & \text{in }\Omega,\\ (-\Delta_p)^t v(x) = \lambda\,\beta(p)\,|u(x_1)|^{\alpha(p)}\,|v(x_2)|^{\beta(p)-2}v(x_2)\,\delta_{x_2} & \text{in }\Omega,\\ u=v=0 & \text{in }\mathbb{R}^N\setminus\Omega, \end{cases}\qquad (P^2_p)$$
where $x_1,x_2\in\Omega$ are arbitrary points, $x_1\neq x_2$. Of course, we could also consider the case where $x_u$ and $x_v$ are points of maxima of $u$ and $v$, respectively, since our reasoning also solves this case.
In Sections 2-5 we handle system $(P^1_p)$, while system $(P^1_\infty)$ is considered in Remark 8. Finally, in Section 6 we deal with problem $(P^2_p)$.
Background, setting and description of results
Due to the appropriate Sobolev embedding, the solutions (u, v) of both problems (P 1 p ) and (P 2 p ) must be continuous. Since both equations in the system have the same homogeneity, (P 1 p ) and (P 2 p ) are actually eigenvalue problems. The eigenvalue problem for the s-fractional p-Laplacian operator was studied by Lindgren and Lindqvist in the pioneering paper [9]. Precisely, they studied the problem
$$(-\Delta_p)^s u = \lambda_1(s,p)\,|u|^{p-2}u \ \text{ in }\Omega, \qquad u = 0 \ \text{ in }\mathbb{R}^N\setminus\Omega. \qquad (2)$$
The authors proved that the minimum of the Rayleigh quotient associated with (2), that is,
$$\lambda_1(s,p) = \inf_{u\in W^{s,p}_0(\Omega)\setminus\{0\}} \frac{[u]^p_{s,p}}{\|u\|^p_p} = \frac{[\phi_p]^p_{s,p}}{\|\phi_p\|^p_p},$$
is attained by a function that does not change sign in $\Omega$.
In the case $p=\infty$ of the same paper, Lindgren and Lindqvist denoted
$$\lambda_1(s,\infty) = \inf\left\{\frac{\big\|\tfrac{u(x)-u(y)}{|x-y|^s}\big\|_\infty}{\|u\|_\infty} : u\in W^{s,\infty}_0(\Omega)\setminus\{0\}\right\}$$
and showed that $\lambda_1(s,\infty) = \dfrac{1}{R^s}$ and $\lim_{p\to\infty}\sqrt[p]{\lambda_1(s,p)} = \lambda_1(s,\infty)$, where $R = \max_{x\in\Omega}\operatorname{dist}(x,\mathbb{R}^N\setminus\Omega) = \|\operatorname{dist}(\cdot,\mathbb{R}^N\setminus\Omega)\|_\infty$.
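As a simple illustration of this characterization (a direct application of the formula above, not a statement taken from [9]): if $\Omega$ is the ball $B_\rho(0)$, the distance to the complement is maximized at the center, so
$$R = \max_{x\in B_\rho(0)}\operatorname{dist}\big(x,\mathbb{R}^N\setminus B_\rho(0)\big) = \rho \qquad\text{and}\qquad \lambda_1(s,\infty) = \frac{1}{R^s} = \rho^{-s}.$$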
The results obtained in relation with Eq. (2) were extended by Del Pezzo and Rossi in [3] to the case of systems of the form
$$\begin{cases} (-\Delta_p)^r u(x) = \lambda\,\alpha(p)\,|u(x)|^{\alpha(p)-2}u(x)\,|v(x)|^{\beta(p)} & \text{in }\Omega,\\ (-\Delta_p)^s v(x) = \lambda\,\beta(p)\,|u(x)|^{\alpha(p)}\,|v(x)|^{\beta(p)-2}v(x) & \text{in }\Omega,\\ u=v=0 & \text{in }\mathbb{R}^N\setminus\Omega, \end{cases}\qquad (3)$$
when assumptions (h$_1$) and (h$_2$) are fulfilled. If for each $p\in(N/s,\infty)$ we denote
$$\lambda_{1,p} = \inf\left\{\frac{\frac{1}{p}[u]^p_{r,p} + \frac{1}{p}[v]^p_{s,p}}{\int_\Omega |u|^{\alpha(p)}|v|^{\beta(p)}\,dx} : (u,v)\in W^{r,p}_0(\Omega)\times W^{s,p}_0(\Omega),\ uv\not\equiv 0\right\},$$
the authors showed that $\lambda_{1,p}$ is a principal eigenvalue (that is, an eigenvalue associated with an eigenfunction that does not change its sign) and
$$\sqrt[p]{\lambda_{1,p}} \to \Lambda_{1,\infty} = \frac{1}{R^{\theta r+(1-\theta)s}} \quad\text{as } p\to\infty. \qquad (4)$$
More recently, Mihǎilescu, Rossi and Stancu-Dumitru [11] studied the system
$$\begin{cases} -\Delta_p u(x) = \lambda\,\alpha(p)\,|u(x_1)|^{\alpha(p)-2}u(x_1)\,|v(x_2)|^{\beta(p)}\,\delta_{x_1} & \text{in }\Omega,\\ -\Delta_p v(x) = \lambda\,\beta(p)\,|u(x_1)|^{\alpha(p)}\,|v(x_2)|^{\beta(p)-2}v(x_2)\,\delta_{x_2} & \text{in }\Omega,\\ u=v=0 & \text{on }\partial\Omega, \end{cases}\qquad (5)$$
where $x_1,x_2\in\Omega$ are arbitrary points, $x_1\neq x_2$. If $x_1$ and $x_2$ are points of maxima of $u$ and $v$, respectively, using arguments like those in [1,5,7], it can be proved that $(P^2_p)$ is the limit, as $r\to\infty$, of the problem
$$\begin{cases} -\Delta_p u = \lambda\,\alpha(p)\,\|u\|_r^{\alpha(p)-r}\,|u|^{r-2}u\,\|v\|_r^{\beta(p)} & \text{in }\Omega,\\ -\Delta_p v = \lambda\,\beta(p)\,\|u\|_r^{\alpha(p)}\,\|v\|_r^{\beta(p)-r}\,|v|^{r-2}v & \text{in }\Omega,\\ u=v=0 & \text{on }\partial\Omega, \end{cases}\qquad (6)$$
which can be solved by classical minimization procedures.
As in [3], they proved that system (5) has a principal eigenvalue, and they studied the asymptotic behavior of the principal eigenvalues and of the corresponding positive eigenfunctions $u_p$ and $v_p$ as $p$ goes to infinity. Mihǎilescu, Rossi and Stancu-Dumitru proved that they converge to $u_\infty$ and $v_\infty$, both viscosity solutions of the equation
$$-\Delta_\infty w = 0 \quad\text{in }\Omega\setminus\{x_1,x_2\}.$$
The main goal of this work is to study system (P 1 p ). Note that this system is related to both systems (3) and (5). In the last section of this article, we make clear that the method used to solve system (P 1 p ) also applies to system (P 2 p ), thus generalizing system (5) from [11] to the fractional p-Laplacian operator.
Due to the presence of the Dirac mass $\delta_x$, it is more natural to compare the present work with [11]. We note that the integral form of the fractional $p$-Laplacian is more difficult to handle than that of the $p$-Laplacian. Also, in the $p$-Laplacian case of [11], the convergence $\|\nabla u\|_{L^p(\Omega)}\to\|\nabla u\|_{L^\infty(\Omega)}$ holds for all $u\in W^{1,p}_0(\Omega)$, which does not happen when we are dealing with the Gagliardo semi-norm. Furthermore, a direct calculation with the distance function $\operatorname{dist}(x,\mathbb{R}^N\setminus\Omega)$ shows that $|\nabla\operatorname{dist}(x,\mathbb{R}^N\setminus\Omega)|=1$, but this is not valid in our case, making it more difficult to estimate the solutions of system $(P^2_p)$.
In turn, we will show that the eigenvalues of $(P^1_p)$ converge, as $p\to\infty$, to the same value $\Lambda_{1,\infty}$ given by (4), a result obtained in [3].
We introduce the notation used while handling problem $(P^1_p)$; in the last section of this article, we consider problem $(P^2_p)$ and make the necessary adjustments. For each $0<r<1$ and $p\in[1,\infty]$, we consider the Sobolev spaces
$$W^{r,p}(\Omega) = \left\{u\in L^p(\Omega) : \int_\Omega\int_\Omega \frac{|u(x)-u(y)|^p}{|x-y|^{N+rp}}\,dx\,dy < \infty\right\}$$
and also the spaces
$$W^{r,p}_0(\Omega) = \left\{u\in L^p(\mathbb{R}^N) : u=0 \text{ in }\mathbb{R}^N\setminus\Omega \text{ and } [u]_{r,p}<\infty\right\}, \quad\text{where}\quad [u]^p_{r,p} = \int_{\mathbb{R}^N}\int_{\mathbb{R}^N}\frac{|u(x)-u(y)|^p}{|x-y|^{N+rp}}\,dx\,dy.$$
We recall that, for $0<s\le t<1$ and $1<p<\infty$, there exists a constant $C>0$ depending only on $s$, $N$ and $p$ such that
$$\|f\|_{W^{s,p}(\Omega)} \le C\,\|f\|_{W^{t,p}(\Omega)}, \quad\text{for all } f\in W^{t,p}(\Omega).$$
In particular, $W^{t,p}_0(\Omega)\hookrightarrow W^{s,p}_0(\Omega)$; for more details see [4]. So, we can consider only the space $W^{s,p}_0(\Omega)$. For each $0<s\le t<1$, $x_0\in\Omega$ fixed and $p\in[1,\infty]$, we denote $X_{s,t,p}(\Omega) = W^{s,p}_0(\Omega)\times W^{t,p}_0(\Omega)$ and
$$X^*_{s,t,p}(\Omega) = \left\{(u,v)\in X_{s,t,p}(\Omega) : \left(\int_\Omega |u|^{\alpha(p)}\,dx\right) v(x_0) \neq 0\right\}.$$
If $C_0(\Omega)$ stands for the space $\{u\in C(\overline\Omega) : u=0 \text{ in }\mathbb{R}^N\setminus\Omega\}$, it is well known that the immersion $W^{s,p}_0(\Omega)\hookrightarrow C_0(\Omega)$ is compact for any $p\in(N/s,\infty)$. The compactness of this immersion is a consequence of the following Morrey-type inequality (see [4]):
$$\sup_{y\neq x}\frac{|u(x)-u(y)|}{|x-y|^{\,s-\frac{N}{p}}} \le C\,[u]_{s,p}, \quad \forall\, u\in W^{s,p}_0(\Omega), \qquad (7)$$
which holds whenever $p>N/s$. If $p$ is sufficiently large, the positive constant $C$ in (7) can be chosen uniformly with respect to $p$ (see [8], Remark 2.2).
Thus, denoting $X_0(\Omega) = C_0(\Omega)\times C_0(\Omega)$, we have the compact immersion $X_{s,t,p}(\Omega)\hookrightarrow X_0(\Omega)$ for any $p\in(N/s,\infty)$. For $p\in(N/s,\infty)$ and $(u,v)\in X^*_{s,t,p}$, we define
$$Q_{s,t,p}(u,v) = \frac{\frac{1}{p}[u]^p_{s,p}+\frac{1}{p}[v]^p_{t,p}}{\left(\int_\Omega|u|^{\alpha(p)}\,dx\right)|v(x_0)|^{\beta(p)}} \quad\text{and}\quad \Lambda_1(p) = \inf_{(u,v)\in X^*_{s,t,p}(\Omega)} Q_{s,t,p}(u,v).$$
Straightforward calculations show that
$$\frac{d}{dt}\bigg|_{t=0}\frac{1}{p}[u+t\varphi]^p_{r,p} = \langle(-\Delta_p)^r u,\varphi\rangle, \quad \forall\,\varphi\in W^{r,p}_0(\Omega). \qquad (8)$$
If $0<m<\infty$, then
$$\frac{d}{dt}\bigg|_{t=0}|(u+t\varphi)(x)|^m = m\,|u(x)|^{m-2}u(x)\,\varphi(x), \quad \forall\,\varphi\in L^m(\Omega). \qquad (9)$$
We also have, for all $1<\alpha<\infty$ and $\varphi\in L^\alpha(\Omega)$,
$$\frac{d}{dt}\bigg|_{t=0}\left(\int_\Omega |(u+t\varphi)(x)|^\alpha\,dx\right)|v(x_0)|^\beta = \alpha\left(\int_\Omega |u(x)|^{\alpha-2}u(x)\varphi(x)\,dx\right)|v(x_0)|^\beta. \qquad (10)$$
Definition 1. A pair $(u,v)\in X_{s,t,p}(\Omega)$ is a weak solution to $(P^1_p)$ if
$$\langle(-\Delta_p)^s u,\varphi\rangle + \langle(-\Delta_p)^t v,\psi\rangle = \lambda\left[\alpha(p)\int_\Omega |u|^{\alpha(p)-2}u(x)\,|v(x_0)|^{\beta(p)}\varphi(x)\,dx + \beta(p)\left(\int_\Omega |u(x)|^{\alpha(p)}\,dx\right)|v(x_0)|^{\beta(p)-2}v(x_0)\,\psi(x_0)\right] \qquad (11)$$
for all $(\varphi,\psi)\in X_{s,t,p}(\Omega)$.
The functional on the left-hand side of (11) is the Gâteaux derivative of the Fréchet differentiable functional $(u,v)\mapsto \frac{1}{p}[u]^p_{s,p}+\frac{1}{p}[v]^p_{t,p}$. However, the functional on the right-hand side of (11) is merely related to the right-hand Gâteaux derivative of the functional $(u,v)\mapsto \lambda\left(\int_\Omega |u(x)|^{\alpha(p)}\,dx\right)|v(x_0)|^{\beta(p)}$, thus motivating the definition of $Q_p$ and $\Lambda_1(p)$. It is noteworthy that minimizing that integral term is enough to minimize the whole system.
By applying minimization methods, our first result shows that the problem $(P^1_p)$ has a principal eigenvalue - and therefore a weak solution - for each $p\in(N/s,\infty)$.
Its proof simply adapts Theorem 1 in [11]. We sketch the proof for the convenience of the reader in Section 3.
Theorem 1. For each p ∈ (N/s, ∞) we have:
(i) Λ_1(p) > 0;
(ii) there exists (u_p, v_p) ∈ X^*_{s,t,p}(Ω) such that Λ_1(p) = Q_{s,t,p}(u_p, v_p), with u_p, v_p > 0 and
$$\left(\int_\Omega |u_p|^{\alpha(p)}\,dx\right)|v_p(x_0)|^{\beta(p)} = 1.$$
The next step is to look for an operator that will motivate the study of the problem (P^1_p) as p → ∞. So, for each 0 < s ≤ t < 1 and p ∈ (N/s, ∞) we denote
$$S_p = \left\{ (u,v) \in X_{s,t,p}(\Omega) : \left(\int_\Omega |u|^{\alpha(p)}\,dx\right)|v(x_0)|^{\beta(p)} = 1 \right\}, \qquad S_\infty = \left\{ (u,v) \in X_{s,t,\infty}(\Omega) : \|u\|_\infty^{\theta}\,|v(x_0)|^{1-\theta} = 1 \right\},$$
where θ was defined in (h_2).
Furthermore, for each 0 < s ≤ t < 1 and p ∈ (N/s, ∞], we define the functions χ_{S_p} : X_0(Ω) → [0, ∞] and F_p : X_0(Ω) → [0, ∞] by
$$\chi_{S_p}(u,v) = \begin{cases} 0, & \text{if } (u,v) \in S_p;\\ \infty, & \text{otherwise} \end{cases} \tag{12}$$
and
$$F_p(u,v) = \begin{cases} G_p(u,v) + \chi_{S_p}(u,v), & \text{if } (u,v) \in X^*_{s,t,p}(\Omega);\\ \infty, & \text{otherwise}, \end{cases} \tag{13}$$
with G_p defined by
$$G_p(u,v) = \begin{cases} Q_{s,t,p}(u,v)^{\frac{1}{p}}, & \text{if } p \in (N/s, \infty),\\[4pt] \dfrac{\max\{|u|_s, |v|_t\}}{\|u\|_\infty^{\theta}\,|v(x_0)|^{1-\theta}}, & \text{if } p = \infty, \end{cases} \tag{14}$$
where, for 0 < σ < 1,
$$|u|_\sigma = \sup_{y\neq x} \frac{|u(x)-u(y)|}{|x-y|^{\sigma}}.$$
The method we apply is known as Γ-convergence, but everything we use are the properties listed in Theorem 2. Once again, the next result follows from a straightforward adaptation of the proof of [11,Theorem 2].
Theorem 2. The function F_∞ satisfies the following properties.
(i) If {(u_p, v_p)} is a sequence such that (u_p, v_p) → (u, v) in X_0(Ω), then
$$F_\infty(u,v) \le \liminf_{p\to\infty} F_p(u_p, v_p).$$
(ii) For each (u, v) ∈ X_0(Ω), there exists a sequence {(U_p, V_p)} ⊂ X_0(Ω) such that (U_p, V_p) → (u, v) in X_0(Ω) and
$$F_\infty(u,v) \ge \limsup_{p\to\infty} F_p(U_p, V_p).$$
Thus, as a consequence of Theorem 2-(i), we have
$$F_\infty(u,v) \le \liminf_{p\to\infty} F_p(u_p, v_p).$$
Applying this inequality to the solutions (u_p, v_p) given by Theorem 1, we obtain the estimate
$$F_\infty(u,v) \le \liminf_{p\to\infty} \Lambda_1(p)^{\frac{1}{p}} = \frac{1}{R^{s\theta+(1-\theta)t}} = \max\{|u_\infty|_s, |v_\infty|_t\}, \tag{15}$$
where the last equality will be shown in the proof of Theorem 3. As a consequence of Theorem 2-(ii) and (15), we can analyze problem (P^1_p) as p → ∞. Therefore, considering Theorems 1 and 2, we study the behavior of the eigenvalues and eigenfunctions of problem (P^1_p) as p → ∞.

Theorem 3. Let {p_n} be a sequence converging to ∞ and (u_{p_n}, v_{p_n}) the solution of (P^1_p) given in Theorem 1. Passing to a subsequence if necessary, {(u_{p_n}, v_{p_n})}_{n∈N} converges uniformly to (u_∞, v_∞) ∈ C^{0,s}_0(Ω) × C^{0,t}_0(Ω). Furthermore,
(i) u_∞ ≥ 0, v_∞ ≥ 0 and ‖u_∞‖_∞^θ |v_∞(x_0)|^{1−θ} = 1;
(ii) lim_{n→∞} \sqrt[p_n]{Λ_1(p_n)} = Λ_{1,∞} = 1/R^{sθ+(1−θ)t};
(iii) max{|u_∞|_s, |v_∞|_t} = 1/R^{sθ+(1−θ)t}.
As we will see in the sequel, the functions u_∞ and v_∞ are solutions, in the viscosity sense, of regular boundary value problems. In order to distinguish between the cases (and also to avoid a double minus sign), we change notation: for each 1 < p < ∞ we denote the σ-fractional p-Laplacian by (−Δ_p)^σ = −L_{σ,p}, where, if 1 < p < ∞ and 0 < σ < 1,
$$(L_{\sigma,p}u)(x) := 2\int_{\mathbb{R}^N} \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+\sigma p}}\,dy.$$
As argued in [9], this expression appears formally as follows:
$$\begin{aligned}
\langle(-\Delta_p)^\sigma u, \varphi\rangle &= \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+\sigma p}}\,dx\,dy\\
&= \int_{\mathbb{R}^N}\varphi(x)\left(\int_{\mathbb{R}^N} \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+\sigma p}}\,dy\right)dx - \int_{\mathbb{R}^N}\varphi(y)\left(\int_{\mathbb{R}^N} \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+\sigma p}}\,dx\right)dy\\
&= \int_{\mathbb{R}^N}\varphi(x)\,(L_{\sigma,p}u)(x)\,dx, \qquad \forall\,\varphi\in W^{\sigma,p}_0(\Omega).
\end{aligned}$$
If p = ∞, we define L_{σ,∞} = L^+_{σ,∞} + L^-_{σ,∞}, where
$$(L^+_{\sigma,\infty}u)(x) = \sup_{y\in\mathbb{R}^N\setminus\{x\}} \frac{u(x)-u(y)}{|x-y|^{\sigma}} \qquad\text{and}\qquad (L^-_{\sigma,\infty}u)(x) = \inf_{y\in\mathbb{R}^N\setminus\{x\}} \frac{u(x)-u(y)}{|x-y|^{\sigma}},$$
see Chambolle, Lindgren and Monneau [2], where the concept was introduced, but also [9]. Observe that, since L σ,∞ is not sufficiently smooth, its solutions must be interpreted in the viscosity sense.
We recall the definition of a solution in the viscosity sense by considering the problem
$$\begin{cases} L_{\sigma,p}u = 0 & \text{in } \Omega,\\ u = 0 & \text{in } \mathbb{R}^N\setminus\Omega, \end{cases} \tag{16}$$
for all p ∈ (1, ∞].
Definition 2. Let u ∈ C(R^N) satisfy u = 0 in R^N \ Ω. The function u is a viscosity supersolution of (16) if
$$(L_{\sigma,p}\varphi)(x_0) \le 0$$
for each pair (x_0, ϕ) ∈ Ω × C^1_0(R^N) such that ϕ(x_0) = u(x_0) and ϕ(x) ≤ u(x) for all x ∈ R^N.
In turn, u is a viscosity subsolution of (16) if
$$(L_{\sigma,p}\varphi)(x_0) \ge 0$$
for each pair (x_0, ϕ) ∈ Ω × C^1_0(R^N) such that ϕ(x_0) = u(x_0) and ϕ(x) ≥ u(x) for all x ∈ R^N.
The function u is a viscosity solution to problem (16) if u is both a viscosity supersolution and a viscosity subsolution of problem (16).
Finally, in Section 5, we prove that the solutions u ∞ and v ∞ given by Theorem 3 are viscosity solutions.
Theorem 4. Let 0 < s ≤ t < 1. Then the functions u_∞ and v_∞, given by Theorem 3, are viscosity solutions of the system
$$\begin{cases} \max\left\{L_{s,\infty}u,\; L^-_{s,\infty}u - \Lambda_{1,\infty}|u(x)|^{\theta}|v_\infty(x_0)|^{1-\theta}\right\} = 0 & \text{in } \Omega,\\ L_{t,\infty}v = 0 & \text{in } \Omega\setminus\{x_0\},\\ u = v = 0 & \text{in } \mathbb{R}^N\setminus\Omega,\\ v(x_0) = v_\infty(x_0). \end{cases} \tag{17}$$
3. Some remarks on the proofs of Theorems 1 and 2
Since the proofs of Theorems 1 and 2 are simple adaptations of those given in [11], we only sketch them for the convenience of the reader. For details, see [11, Theorem 1 and Theorem 2].

Sketch of proof of Theorem 1. Estimating the denominator in the definition of Q_{s,t,p}, the inequalities of Young and Sobolev imply that Λ_1(p) > 0. Let (u_n, v_n) be a minimizing sequence for Λ_1(p). By defining
$$U_n(x) = \frac{u_n(x)}{\left[\left(\int_\Omega |u_n|^{\alpha(p)}\,dx\right)|v_n(x_0)|^{\beta(p)}\right]^{\frac{1}{p}}} \qquad\text{and}\qquad V_n(x) = \frac{v_n(x)}{\left[\left(\int_\Omega |u_n|^{\alpha(p)}\,dx\right)|v_n(x_0)|^{\beta(p)}\right]^{\frac{1}{p}}},$$
we have (U_n, V_n) ∈ X_{s,t,p}(Ω) satisfying (∫_Ω |U_n(x)|^{α(p)} dx)|V_n(x_0)|^{β(p)} = 1. Furthermore,
$$\lim_{n\to\infty} Q_{s,t,p}(U_n, V_n) = \lim_{n\to\infty} Q_{s,t,p}(u_n, v_n) = \Lambda_1(p),$$
guaranteeing the existence of (u_p, v_p) ∈ X_{s,t,p}(Ω) such that
$$\left(\int_\Omega |u_p|^{\alpha(p)}\,dx\right)|v_p(x_0)|^{\beta(p)} = 1$$
and Q_{s,t,p}(u_p, v_p) = Λ_1(p). For any (φ, ψ) ∈ X_{s,t,p}(Ω), considering
$$g(t) = Q_{s,t,p}(u_p + t\varphi,\; v_p + t\psi),$$
there exists t_0 > 0 such that g(t) ≥ g(0) = Λ_1(p) for all t ∈ (−t_0, t_0). Since g ∈ C^1((−t_0, t_0), R), we have g′(0) = 0, from which it follows that (u_p, v_p) is a weak solution to system (P^1_p). An argument similar to [9, Lemma 22] proves that u_p > 0 and v_p > 0 in Ω, showing that Λ_1(p) is a principal eigenvalue of system (P^1_p).
Sketch of proof of Theorem 2. In order to prove (i), suppose that (u p , v p ) → (u, v) ∈ X 0 (Ω). Passing to a subsequence, we assume that lim
p→∞ F p (u p , v p ) = lim inf p→∞ F p (u p , v p ). It is not difficult to discard the case (u, v) / ∈ X * s,t,∞ (Ω) ∩ S ∞ . So, we consider the case (u, v) ∈ X * s,t,∞ (Ω) ∩ S ∞ , which implies u θ ∞ |v(x 0 )| 1−θ = 1. We can assume that F p (u p , v p ) ≤ C < ∞, since otherwise (i) is valid. So, for p large enough, we have (u p , v p ) ∈ S p and, if k > N s , then Ω Ω |u p (x) − u p (y)| k |x − y| ( N p +s)k + |v p (x) − v p (y)| k |x − y| ( N p +t)k dxdy 1 k ≤ 2 1 k |Ω| 2( 1 k − 1 p ) p 1 p 1 p [u p ] p s,p + 1 p [v p ] p t,p 1 p .
Thus,
F p (u p , v p ) = Q s,t,p (u p , v p ) = 1 p [u p ] p s,p + 1 p [v p ] p t,p 1 p ≥ 2 − 1 k |Ω| 2( 1 p − 1 k ) p − 1 p Ω Ω |u p (x) − u p (y)| k |x − y| ( N p +s)k + |v p (x) − v p (y)| k |x − y| ( N p +t)k dxdy 1 k .
As p → ∞, results from the uniform convergence and Fatou's Lemma that
lim inf p→∞ F p (u p , v p ) ≥ 2 − 1 k |Ω| − 2 k Ω Ω |u(x) − u(y)| k |x − y| sk + |v(x) − v(y)| k |x − y| tk dxdy 1 k . Making k → ∞, we obtain lim inf p→∞ F p (u p , v p ) ≥ max {|u| s , |v| t } = F ∞ (u, v),(18)
concluding the proof of (i). Now we deal with the second claim. Take any (u, v) ∈ X 0 (Ω) and initially suppose that (u, v) / ∈ X * s,t,∞ (Ω) ∩ S ∞ . Then F s,∞ (u, v) = ∞. Consider then a sequence of values p → ∞ and, for any p ∈ N s , ∞ in the sequence, define u p := u and v p := v. Of course we have (u p , v p ) → (u, v) as p → ∞ in X 0 (Ω). It is not difficult to discard the cases Ω |u p | α(p) dx |v p (x 0 )| β(p) = 1. If, however, (u, v) ∈ X * s,t,∞ (Ω) ∩ S ∞ , consider then a sequence of values p → ∞ and, for any p ∈ N s , ∞ in the sequence, define
U p (x) = u(x) Ω |u| α(p) dx 1 p |v(x 0 )| 1 p and V p (x) = v(x) Ω |u| α(p) dx 1 p |v(x 0 )| β(p) p . Then (U p , V p ) ∈ S p and lim sup p→∞ F p (U p , V p ) = max |u| s , |v| t = F ∞ (u, v),
completing the proof of (ii).
4. Proof of Theorem 3
Let us denote
$$R = \max_{x\in\Omega} \mathrm{dist}(x, \mathbb{R}^N\setminus\Omega) = \|\mathrm{dist}(\cdot, \mathbb{R}^N\setminus\Omega)\|_{L^\infty(\Omega)}.$$
For a fixed x_1 ∈ Ω we consider the functions φ_R : B_R(x_1) → [0, R] and ψ_R : B_R(x_0) → [0, R] given by
$$\varphi_R(x) = R^{(\theta-1)t-s\theta}\,(R - |x - x_1|)_+^{\,s} \qquad\text{and}\qquad \psi_R(x) = R^{(\theta-1)t-s\theta}\,(R - |x - x_0|)_+^{\,t}.$$
Of course we have φ_R ∈ C^{0,s}_0(B_R(x_1)) and ψ_R ∈ C^{0,t}_0(B_R(x_0)). Furthermore,
$$\|\varphi_R\|_\infty = R^{(\theta-1)(t-s)}, \qquad |\psi_R(x_0)| = R^{\theta(t-s)} \qquad\text{and}\qquad |\varphi_R|_s = |\psi_R|_t = R^{(\theta-1)t-s\theta}.$$
We can extend φ_R and ψ_R to Ω by putting φ_R = 0 in R^N \ B_R(x_1) and ψ_R = 0 in R^N \ B_R(x_0), so that φ_R, ψ_R ∈ C^{0,s}_0(Ω), maintaining their Hölder seminorms. Additionally, we still have φ_R, ψ_R ∈ W^{1,m}_0(Ω) ֒→ W^{s,m}_0(Ω) for all s ∈ (0, 1) and m ≥ 1. For details, see [7, 9].
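To see why the s-Hölder seminorm of φ_R has the stated value, one can use the elementary inequality |a_+^s − b_+^s| ≤ |a − b|^s together with the reverse triangle inequality; the short verification below is ours, included only for completeness.

```latex
% Verification that |phi_R|_s = R^{(theta-1)t - s*theta}.
\begin{aligned}
|\varphi_R(x)-\varphi_R(y)|
  &= R^{(\theta-1)t-s\theta}\,\big|(R-|x-x_1|)_+^{\,s}-(R-|y-x_1|)_+^{\,s}\big| \\
  &\le R^{(\theta-1)t-s\theta}\,\big||x-x_1|-|y-x_1|\big|^{s}
   \;\le\; R^{(\theta-1)t-s\theta}\,|x-y|^{s},
\end{aligned}
```

with equality attained for y = x_1 and |x − x_1| = R, so the seminorm equals R^{(θ−1)t−sθ}. The same argument with t in place of s gives the value of |ψ_R|_t.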
Lemma 5. For any fixed 0 < s ≤ t < 1 we have
$$\Lambda_{1,\infty} = \inf_{(u,v)\in X^*_{s,t,\infty}(\Omega)} \frac{\max\{|u|_s, |v|_t\}}{\|u\|_\infty^{\theta}\,|v(x_0)|^{1-\theta}} = \frac{1}{R^{s\theta+(1-\theta)t}}.$$

Proof. We note that
$$\|\varphi_R\|_\infty^{\theta}\,|\psi_R(x_0)|^{1-\theta} = R^{\theta(\theta-1)(t-s)+\theta(1-\theta)(t-s)} = 1,$$
and therefore
$$\Lambda_{1,\infty} = \inf_{(u,v)\in X^*_{s,t,\infty}(\Omega)} \frac{\max\{|u|_s, |v|_t\}}{\|u\|_\infty^{\theta}\,|v(x_0)|^{1-\theta}} \le \frac{\max\{|\varphi_R|_s, |\psi_R|_t\}}{\|\varphi_R\|_\infty^{\theta}\,|\psi_R(x_0)|^{1-\theta}} = \frac{1}{R^{s\theta+(1-\theta)t}}.$$
Also note that, given (u, v) ∈ X^*_{s,t,p}(Ω), we have u ≢ 0 and v(x_0) ≠ 0 in Ω. Since u is continuous, there exists x_1 ∈ Ω such that ‖u‖_∞ = |u(x_1)|. The compactness of Ω guarantees the existence of y_{x_0}, y_{x_1} ∈ ∂Ω such that |x_0 − y_{x_0}| = dist(x_0, R^N \ Ω) and |x_1 − y_{x_1}| = dist(x_1, R^N \ Ω). Thus, since u(y_{x_1}) = v(y_{x_0}) = 0, it follows that
$$\|u\|_\infty^{\theta} = |u(x_1)-u(y_{x_1})|^{\theta} \le |u|_s^{\theta}\,|x_1-y_{x_1}|^{s\theta} \le |u|_s^{\theta}\,R^{s\theta}.$$
On the other hand,
$$|v(x_0)|^{1-\theta} = |v(x_0)-v(y_{x_0})|^{1-\theta} \le |v|_t^{1-\theta}\,|x_0-y_{x_0}|^{t(1-\theta)} \le |v|_t^{1-\theta}\,R^{t(1-\theta)}.$$
So, for any (u, v) ∈ X^*_{s,t,p}(Ω), we have
$$\frac{1}{R^{s\theta+t(1-\theta)}} = \frac{1}{R^{s\theta}R^{(1-\theta)t}} \le \frac{|u|_s^{\theta}\,|v|_t^{1-\theta}}{\|u\|_\infty^{\theta}\,|v(x_0)|^{1-\theta}} \le \frac{\max\{|u|_s,|v|_t\}^{\theta}\max\{|u|_s,|v|_t\}^{1-\theta}}{\|u\|_\infty^{\theta}\,|v(x_0)|^{1-\theta}} = \frac{\max\{|u|_s,|v|_t\}}{\|u\|_\infty^{\theta}\,|v(x_0)|^{1-\theta}}.$$
Therefore,
$$\Lambda_{1,\infty} = \inf_{(u,v)\in X^*_{s,t,\infty}(\Omega)} \frac{\max\{|u|_s, |v|_t\}}{\|u\|_\infty^{\theta}\,|v(x_0)|^{1-\theta}} \ge \frac{1}{R^{s\theta+(1-\theta)t}},$$
concluding the proof.
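As a quick numerical sanity check of Lemma 5 (not part of the original argument), one can discretize the one-dimensional case Ω = (0, 2) ⊂ R, for which R = 1 and hence Λ_{1,∞} = 1, and evaluate the quotient at the pair (φ_R, ψ_R). The script below is a minimal sketch; the values of s, t, θ are assumed, and x_0 is taken to be the midpoint so that B_R(x_0) ⊂ Ω.

```python
import numpy as np

# One-dimensional sanity check of Lemma 5: Omega = (0, 2), so R = 1 and
# Lambda_{1,infty} = 1 / R^(s*theta + (1-theta)*t) = 1.
s, t, theta = 0.4, 0.7, 0.5      # assumed values with 0 < s <= t < 1, theta in (0, 1)
x0 = x1 = 1.0                    # midpoint of Omega, where dist(., R^N \ Omega) is maximal
R = 1.0

x = np.linspace(0.0, 2.0, 801)
phi = R**((theta - 1) * t - s * theta) * np.maximum(R - np.abs(x - x1), 0.0)**s
psi = R**((theta - 1) * t - s * theta) * np.maximum(R - np.abs(x - x0), 0.0)**t

def holder_seminorm(f, grid, sigma):
    """Crude grid estimate of sup |f(x)-f(y)| / |x-y|^sigma."""
    diff = np.abs(f[:, None] - f[None, :])
    dist = np.abs(grid[:, None] - grid[None, :])
    mask = dist > 0
    return np.max(diff[mask] / dist[mask]**sigma)

num = max(holder_seminorm(phi, x, s), holder_seminorm(psi, x, t))
den = np.max(np.abs(phi))**theta * np.abs(psi[np.argmin(np.abs(x - x0))])**(1 - theta)
print(num / den)   # close to 1 = 1 / R^(s*theta + (1-theta)*t)
```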
The next result is pivotal in our analysis of the asymptotic behavior of solutions in problems driven by the fractional p-Laplacian. Lemma 6. Let u ∈ C 0,σ 0 (Ω) be extended as zero outside Ω. If u ∈ W σ,q (Ω) for some q > 1, then u ∈ W σ,p 0 (Ω) for all p ≥ q and lim p→∞ [u] σ,p = |u| σ .
The proof of Lemma 6 can be found in [6, Lemma 7].

Proof of Theorem 3. Of course we have
$$\Lambda_1(p_n) \le \frac{\frac{1}{p_n}[\varphi_R]^{p_n}_{s,p_n} + \frac{1}{p_n}[\psi_R]^{p_n}_{t,p_n}}{\left(\int_\Omega |\varphi_R|^{\alpha(p_n)}\,dx\right)|\psi_R(x_0)|^{\beta(p_n)}}.$$
Thus,
$$\begin{aligned}
\limsup_{n\to\infty} \sqrt[p_n]{\Lambda_1(p_n)} &\le \limsup_{n\to\infty}\left(\frac{\frac{1}{p_n}\left([\varphi_R]^{p_n}_{s,p_n} + [\psi_R]^{p_n}_{t,p_n}\right)}{\left(\int_\Omega |\varphi_R|^{\alpha(p_n)}\,dx\right)|\psi_R(x_0)|^{\beta(p_n)}}\right)^{\frac{1}{p_n}}
\le \limsup_{n\to\infty}\left(\frac{2}{p_n}\right)^{\frac{1}{p_n}} \frac{\max\left\{[\varphi_R]_{s,p_n}, [\psi_R]_{t,p_n}\right\}}{\left(\left(\int_\Omega |\varphi_R|^{\alpha(p_n)}\,dx\right)|\psi_R(x_0)|^{\beta(p_n)}\right)^{\frac{1}{p_n}}}\\
&= \frac{\max\{|\varphi_R|_s, |\psi_R|_t\}}{\|\varphi_R\|_\infty^{\theta}\,|\psi_R(x_0)|^{1-\theta}} \le \frac{1}{R^{s\theta+(1-\theta)t}},
\end{aligned}$$
proving that the sequence $\{\sqrt[p_n]{\Lambda_1(p_n)}\}_{n\in\mathbb{N}}$ is bounded in R, that is, there exists M_0 > 0 such that
$$\sqrt[p_n]{\Lambda_1(p_n)} \le M_0 \quad\text{for all } n \in \mathbb{N}. \tag{19}$$
Theorem 1 guarantees that we can take (u_{p_n}, v_{p_n}) so that u_{p_n} > 0, v_{p_n} > 0 and (∫_Ω |u_{p_n}|^{α(p_n)} dx)|v_{p_n}(x_0)|^{β(p_n)} = 1, and therefore
$$\Lambda_1(p_n) = \frac{1}{p_n}[u_{p_n}]^{p_n}_{s,p_n} + \frac{1}{p_n}[v_{p_n}]^{p_n}_{t,p_n}.$$
Fix m_0 > N/s. For p_n ≥ m_0, the Morrey-type inequality (7) gives
$$|u_{p_n}|_{s-\frac{N}{m_0}} = \sup_{x\neq y}\frac{|u_{p_n}(x)-u_{p_n}(y)|}{|x-y|^{s-\frac{N}{m_0}}} = \sup_{x\neq y}\frac{|u_{p_n}(x)-u_{p_n}(y)|}{|x-y|^{s-\frac{N}{p_n}}}\,|x-y|^{\frac{N}{m_0}-\frac{N}{p_n}} \le C\,(\mathrm{diam}\,\Omega)^{\frac{N}{m_0}-\frac{N}{p_n}}\,[u_{p_n}]_{s,p_n} \le C\,(\mathrm{diam}\,\Omega)^{\frac{N}{m_0}-\frac{N}{p_n}}\,p_n^{\frac{1}{p_n}}\sqrt[p_n]{\Lambda_1(p_n)},$$
with the constant C not depending on p_n. Together with (19), this shows that the sequence {u_{p_n}} is uniformly bounded in C^{0, s−N/m_0}_0(Ω); the same reasoning is valid for {v_{p_n}}. Passing to a subsequence if necessary, there exists (u_∞, v_∞) such that u_{p_n} → u_∞ and v_{p_n} → v_∞ uniformly in Ω.
We also observe that
u ∞ θ ∞ |v ∞ (x 0 )| 1−θ = lim n→∞ Ω |u pn | α(pn) dx |v pn (x 0 )| β(pn) 1 pn = 1.
Fix k > N s . By applying Fatou's, Hölder's inequality and (20), we obtain
Ω Ω |u ∞ (x) − u ∞ (y)| k |x − y| sk dxdy ≤ lim inf n→∞ Ω Ω |u pn (x) − u pn (y)| k |x − y| ( N pn +s)k dxdy ≤ lim inf n→∞ |Ω| 2( pn −k pn ) Ω Ω |u pn (x) − u pn (y)| pn |x − y| N +spn dxdy k pn ≤ |Ω| 2 lim inf n→∞ [u pn ] k s,pn(21)≤ |Ω| 2 lim inf n→∞ p 1 pn n pn Λ 1 (p n ) k ≤ |Ω| 2 1 R sθ+(1−θ)t k .
Thus,
|u ∞ | s = lim k→∞ Ω Ω |u ∞ (x) − u ∞ (y)| k |x − y| sk dxdy 1 k ≤ lim n→∞ |Ω| 2 k 1 R sθ+(1−θ)t = 1 R sθ+(1−θ)t .
Analagously,
|v ∞ | t = lim k→∞ Ω Ω |v ∞ (x) − v ∞ (y)| k |x − y| tk dxdy 1 k ≤ lim n→∞ |Ω| 2 k 1 R sθ+(1−θ)t = 1 R sθ+(1−θ)t and therefore max |u ∞ | s , |v ∞ | t ≤ 1 R sθ+(1−θ)t . It follows from Lemma 5 that 1 R sθ+(1−θ)t = inf (u,v)∈X * s,t,∞ (Ω) max |u| s , |v| t u θ ∞ |v(x 0 )| 1−θ ≤ max |u ∞ | s , |v ∞ | t ≤ 1 R sθ+(1−θ)t , thus producing max |u ∞ | s , |v ∞ | t = 1 R sθ+(1−θ)t . On its turn, inequality (21) yields max Ω Ω |u ∞ (x) − u ∞ (y)| k |x − y| sk dxdy 1 k , Ω Ω |v ∞ (x) − v ∞ (y)| k |x − y| tk dxdy 1 k ≤ |Ω| 2 k lim inf n→∞ p 1 pn n pn Λ 1 (p n ) .
Thus, as k → ∞ we obtain
1 R sθ+(1−θ)t = max |u ∞ | s , |v ∞ | s ≤ lim inf n→∞ p 1 pn n pn Λ 1 (p n ) ≤ lim sup n→∞ p 1 pn n pn Λ 1 (p n ) ≤ 1 R sθ+(1−θ)t , from what follows lim n→∞ pn Λ 1 (p n ) = lim n→∞ p 1 pn n pn Λ 1 (p n ) = 1 R sθ+(1−θ)t = Λ 1,∞ .
5. Proof of Theorem 4
The next result only shows that solutions in the weak sense are viscosity solutions. Its proof can be achieved by adapting the arguments given by Lindgren and Lindqvist in [9, Proposition 1].
Proposition 7. The functions u_p and v_p given by Theorem 1 are viscosity solutions to the problems
$$\begin{cases} L_{s,p}u = \Lambda_1(p)\,\alpha(p)\,|u|^{\alpha(p)-1}\,v(x_0) & \text{in } \Omega,\\ u = 0 & \text{in } \mathbb{R}^N\setminus\Omega, \end{cases} \qquad\text{and}\qquad \begin{cases} L_{t,p}v = 0 & \text{in } \Omega\setminus\{x_0\},\\ v = 0 & \text{in } \mathbb{R}^N\setminus\Omega,\\ v(x_0) = v_p(x_0), \end{cases}$$
respectively.
Proof of Theorem 4. We start showing that v ∞ is a viscosity solution to the problem
L t,∞ v = 0 in Ω \ {x 0 }, v = 0 in R N \ Ω, v(x 0 ) = v ∞ (x 0 ). (22) According to Theorem 3 we have v ∞ = 0 in R N \ Ω and v ∞ (x 0 ) = v ∞ (x 0 ). So, we need only show that v ∞ is a viscosity solution. Fix (z 0 , ϕ) ∈ (Ω \ {x 0 }) × C 1 0 (R N \ {x 0 }) satisfying ϕ(z 0 ) = v ∞ (z 0 ) and ϕ(x) ≤ v ∞ (x), ∀x ∈ R N \ {x 0 , z 0 }.
Theorem 3 also guarantees the existence of a sequence {(u pn , v pn )} n∈N ∈ C 0,s 0 (Ω)× C 0,t 0 (Ω) such that u pn → u ∞ and v pn → v ∞ uniformly in Ω. Thus, there exists a sequence {x pn } n∈N so that x pn → z 0 and v pn (x pn ) = ϕ(x pn ). Since x 0 = z 0 , we can assume the existence of n 0 ≥ 0 and a ball B ρ (z 0 ) such that
x pn / ∈ B ρ (z 0 ) ⊂ Ω \ {z 0 }, ∀n ≥ n 0 .
Since v pn weakly satisfies
(−∆ pn ) t v pn (x) = Λ 1 (p n )α(p n ) Ω |u pn | α(pn) dx |v pn (x 0 )| β(pn) v pn (x 0 )δ x0
in Ω, then also in Ω \ {x 0 }, Proposition 7 yields that v pn is a viscosity solution to the problem
L t,pn v = 0 in Ω \ {x 0 }, v = 0 in R N \ Ω, v(x 0 ) = v pn (x 0 ).(23)
By standard arguments, we obtain a sequence {z_n}_{n∈N} ⊂ B_ρ(z_0) such that z_n → z_0 and
$$\sigma_n := \min_{\overline{B_\rho(z_0)}}(v_{p_n} - \varphi) = v_{p_n}(z_n) - \varphi(z_n) \le v_{p_n}(x) - \varphi(x), \qquad \forall\, x \in B_\rho(z_0).$$
Now, define Ψ_n := ϕ + σ_n. We have
$$\Psi_n(z_n) = \varphi(z_n) + \sigma_n = v_{p_n}(z_n) \qquad\text{and}\qquad \Psi_n(x) = \varphi(x) + \sigma_n \le v_{p_n}(x), \qquad \forall\, x \in B_\rho(z_0).$$
Since v pn satisfies (23) in Ω \ {x 0 },
(L t,∞ Ψ n )(z n ) ≤ 0, ∀n ≥ n 0 .
Thus, defining
(A pn,t (ϕ(z n ))) pn−1 := 2 R N |ϕ(z n ) − ϕ(y)| pn−2 (ϕ(z n ) − ϕ(y)) + |z n − y| N +tpn dy and (B pn,t (ϕ(z n ))) pn−1 := 2 R N |ϕ(z n ) − ϕ(y)| pn−2 (ϕ(z n ) − ϕ(y)) − |z n − y| N +tpn dy, we have (A pn,t (ϕ(z n ))) pn−1 − (B pn,t (ϕ(z n ))) pn−1 = 2 R N |ϕ(z n ) − ϕ(y)| pn−2 (ϕ(z n ) − ϕ(y)) |z n − y| N +spn dy ≤ 0, ∀n ≥ n 0 .(24)
Applying [7,Lemma 3.9] (see also [8,Lemma 6.1]), we obtain lim n→∞ A pn,t (ϕ(z n )) = L + t,∞ ϕ (z 0 ) and lim n→∞ B pn,t (ϕ(z n )) = −L − t,∞ ϕ (z 0 ).
As n → ∞ in (24) we get
$$(L_{t,\infty}\varphi)(z_0) = (L^+_{t,\infty}\varphi)(z_0) + (L^-_{t,\infty}\varphi)(z_0) \le 0,$$
showing that v ∞ is a viscosity supersolution of (22). Analogously, we obtain that v ∞ is a viscosity subsolution of the same equation, and thus a viscosity solution of (22). Now we show that u ∞ is a viscosity solution to the problem
max L s,∞ u, L − s,∞ u + Λ 1,∞ |u(x)| θ |v ∞ (x 0 )| 1−θ = 0 in Ω, u = 0 in R N \ Ω.(25)
The same reasoning used before imply that, for given (z 0 , ϕ) ∈ Ω × C 1 0 (R N ), we find a sequence {u pn } n∈N in C 0,s 0 (Ω) such that u pn → u ∞ uniformly in Ω and a sequence {x pn } n∈N satisfying x pn → z 0 and u pn (x pn ) = ϕ(x pn ). Thus, there exist n 0 ≥ 0 and a ball B ρ (z 0 ) so that
x pn / ∈ B ρ (z 0 ) ⊂ Ω \ {z 0 }, ∀n ≥ n 0 .
As before, we obtain that u pn is a viscosity solution to the problem
L s,pn u pn = Λ 1 (p n )α(p n )|u pn | α(pn)−1 v pn (x 0 ) in Ω, u = 0 in R N \ Ω.
Considering, as before, a sequence {z n } n∈N ⊂ B ρ (z 0 ) such that z n → z 0 and defining Ψ n as in the previous proof, we obtain (L s,pn Ψ n )(z n ) ≤ Λ 1 (p n )α(p n )|Ψ n (z n )| α(pn)−1 v pn (x 0 ) ∀n ≥ n 0 , which is equivalent to the inequality (A pn,s (ϕ(z n ))) pn−1 − (B pn,s (ϕ(z n ))) pn−1 ≤ (C pn (ϕ(z n ))) pn−1 ∀n ≥ n 0 , where C pn (ϕ(z n )) pn−1 := Λ 1 (p n )α(p n )|ϕ + σ n | α(pn)−1 v pn (x 0 ) and the other terms are analogous to that of the previous case, just changing t for s.
Observe that a direct calculation yields
$$\lim_{n\to\infty} C_{p_n}(\varphi(z_n)) = \lim_{n\to\infty} \left(\sqrt[p_n]{\Lambda_1(p_n)}\right)^{\frac{p_n}{p_n-1}} \left(\alpha(p_n)\right)^{\frac{1}{p_n-1}} |\varphi(z_n)+\sigma_n|^{\frac{\alpha(p_n)}{p_n-1}}\, v_{p_n}(x_0)^{\frac{\beta(p_n)}{p_n-1}} = \Lambda_{1,\infty}\,|\varphi(z_0)|^{\theta}\, v_\infty(x_0)^{1-\theta}.$$
So, as n → ∞ in (24) we obtain
$$(L_{s,\infty}\varphi)(z_0) = (L^+_{s,\infty}\varphi)(z_0) + (L^-_{s,\infty}\varphi)(z_0) \le \Lambda_{1,\infty}\,|\varphi(z_0)|^{\theta}\, v_\infty(x_0)^{1-\theta}$$
and therefore
$$\max\left\{L_{s,\infty}u,\; L^-_{s,\infty}u - \Lambda_{1,\infty}|u(x)|^{\theta}|v_\infty(x_0)|^{1-\theta}\right\} \le 0$$
in Ω, that is, u_∞ is a viscosity supersolution to problem (25). Analogously, u_∞ is a viscosity subsolution to the same problem. We are done.
Remark 8. We observe that the system
$$\begin{cases}
(-\Delta_p)^s u(x) = \lambda\,\alpha(p)\,|u|^{\alpha(p)-2}u\,|v(x_v)|^{\beta(p)} & \text{in } \Omega,\\
(-\Delta_p)^t v(x) = \lambda\,\beta(p)\left(\int_\Omega |u|^{\alpha(p)}\,dx\right)|v(x_v)|^{\beta(p)-2}v(x_v)\,\delta_{x_v} & \text{in } \Omega,\\
u = v = 0 & \text{in } \mathbb{R}^N\setminus\Omega,
\end{cases} \tag{$P^1_\infty$}$$
where x_v is a maximum point of v in Ω, can be treated in the same setting given in Section 2, applying the same procedure used to solve system (P^1_p). Observe that the first equation in (P^1_∞) can be replaced by
$$(-\Delta_p)^s u(x) = \lambda\,\alpha(p)\,|u|^{\alpha(p)-2}u\,\|v\|_\infty^{\beta(p)}.$$
6. On the system (P^2_p)

In this section we consider the functional system (P^2_p):
$$\begin{cases}
(-\Delta_p)^s u(x) = \lambda\,\alpha(p)\,|u(x_1)|^{\alpha(p)-2}u(x_1)\,|v(x_2)|^{\beta(p)}\,\delta_{x_1} & \text{in } \Omega,\\
(-\Delta_p)^t v(x) = \lambda\,\beta(p)\,|u(x_1)|^{\alpha(p)}\,|v(x_2)|^{\beta(p)-2}v(x_2)\,\delta_{x_2} & \text{in } \Omega,\\
u = v = 0 & \text{in } \mathbb{R}^N\setminus\Omega,
\end{cases}$$
where x_1, x_2 ∈ Ω are arbitrary points with x_1 ≠ x_2. Observe that both equations are functional, so their treatment recalls that used to deal with the second equation in system (P^1_p).
Definition 3. A pair (u, v) ∈ X_{s,t,p}(Ω) is a weak solution to (P^2_p) if
$$\langle(-\Delta_p)^s u,\varphi\rangle + \langle(-\Delta_p)^t v,\psi\rangle = \lambda\Big[\alpha(p)\,|u(x_1)|^{\alpha(p)-2}u(x_1)\,|v(x_2)|^{\beta(p)}\varphi(x_1) + \beta(p)\,|u(x_1)|^{\alpha(p)}\,|v(x_2)|^{\beta(p)-2}v(x_2)\psi(x_2)\Big] \tag{26}$$
for all (ϕ, ψ) ∈ X_{s,t,p}(Ω).
The denominator in the definition of Q s,t,p should be changed into |u(x 1 )| α(p) |v(x 2 )| β(p) , maintaining the definition of Λ 1 (p). The first result, which is similar to Theorem 1 is the following.
Theorem 9. For each p ∈ (N/s, ∞) we have:
(i) Λ_1(p) > 0;
(ii) there exists (u_p, v_p) ∈ X^*_{s,t,p}(Ω) such that u_p > 0, v_p > 0, |u_p(x_1)|^{α(p)}|v_p(x_2)|^{β(p)} = 1 and Λ_1(p) = Q_{s,t,p}(u_p, v_p).
Its proof is also similar to that of Theorem 1. For details, see the proof sketched in Section 3 or [11, Theorem 1]. The next step is to prove a result similar to Theorem 2. Changing the definitions of S_p and S_∞ into
$$S_p = \left\{(u,v) \in X_{s,t,p}(\Omega) : |u(x_1)|^{\alpha(p)}|v(x_2)|^{\beta(p)} = 1\right\} \qquad\text{and}\qquad S_\infty = \left\{(u,v) \in X_{s,t,\infty}(\Omega) : |u(x_1)|^{\theta}|v(x_2)|^{1-\theta} = 1\right\},$$
and also the denominator in G_p into |u(x_1)|^θ|v(x_2)|^{1−θ}, we obtain the version of Theorem 2 with the same statement.
Up to this point, the points x_1, x_2 ∈ Ω were taken arbitrarily. Now, we consider the sequences u_n := u_{p_n} and v_n := v_{p_n} given by Theorem 9. Since u_n, v_n > 0, we can take x_1 as a maximum point x_n of u_n and x_2 as a maximum point y_n of v_n. Observe that we do not suppose that the maxima x_n and y_n are unique. However, we will prove that the sequence (x_n, y_n) has a subsequence that converges to (x_∞, y_∞) and the equality |u_∞(x_∞)|^θ |v_∞(y_∞)|^{1−θ} = 1 still holds true.
Theorem 10. Let {p_n} be a sequence converging to ∞ and (u_{p_n}, v_{p_n}) the solution of (P^2_p) given in Theorem 9. Denote by x_n := x_{u_{p_n}} and y_n := x_{v_{p_n}} a sequence of maxima of u_{p_n} and v_{p_n}, respectively. Passing to a subsequence if necessary, {(u_{p_n}, v_{p_n})}_{n∈N} converges uniformly to (u_∞, v_∞) ∈ C^{0,s}_0(Ω) × C^{0,s}_0(Ω), while the sequences {x_n} and {y_n} converge to x_∞ ∈ Ω and y_∞ ∈ Ω, respectively, which are maxima of u_∞ and v_∞. Furthermore,
(i) u_∞ ≥ 0, v_∞ ≥ 0 and |u_∞(x_∞)|^θ |v_∞(y_∞)|^{1−θ} = 1;
(ii) lim_{n→∞} \sqrt[p_n]{Λ_1(p_n)} = 1/R^{sθ+(1−θ)t};
(iii) max{|u_∞|_s, |v_∞|_t} = 1/R^{sθ+(1−θ)t};
(iv) if s = t, then
$$0 \le u_\infty(x) \le \frac{\mathrm{dist}(x, \mathbb{R}^N\setminus\Omega)^s}{R^s} \qquad\text{and}\qquad 0 \le v_\infty(x) \le \frac{\mathrm{dist}(x, \mathbb{R}^N\setminus\Omega)^s}{R^s}.$$
Its proof can be obtained by mimicking the method used to prove Theorem 3. Comparing this result with the one in [11], we first note that our result brings information about the sequence of maxima of u pn and v pn , which are absent in that paper.
Finally, the analogue to Theorem 4 is the following. Once again, its proof is obtained by adapting that of the Theorem 4.
Theorem 11. The functions u_∞ and v_∞, given by Theorem 10, are viscosity solutions of the problems
$$\begin{cases} L_{s,\infty}u = 0 & \text{in } \Omega\setminus\{x_1\},\\ u = 0 & \text{in } \mathbb{R}^N\setminus\Omega,\\ u(x_1) = u_\infty(x_1), \end{cases} \qquad\text{and}\qquad \begin{cases} L_{t,\infty}v = 0 & \text{in } \Omega\setminus\{x_2\},\\ v = 0 & \text{in } \mathbb{R}^N\setminus\Omega,\\ v(x_2) = v_\infty(x_2), \end{cases}$$
respectively.
[1] C. Alves, G. Ercole and G. Pereira: Asymptotic behavior as p → ∞ of ground state solutions of a (p, q(p))-Laplacian problem, Proc. Roy. Soc. Edinburgh Sect. A 149 (2019), no. 6, 1493-1522.
[2] A. Chambolle, E. Lindgren and R. Monneau: A Hölder infinity Laplacian, ESAIM Control Optim. Calc. Var. 18 (2012), no. 3, 799-835.
[3] L. Del Pezzo and J. Rossi: Eigenvalues for systems of fractional p-Laplacians, Rocky Mountain J. Math. 48 (2018), no. 4, 1077-1104.
[4] R. Di Nezza, G. Palatucci and E. Valdinoci: Hitchhikers guide to the fractional Sobolev spaces, Bull. Sci. Math. 136 (2012), no. 5, 521-573.
[5] G. Ercole and G. Pereira: Asymptotics for the best Sobolev constants and their extremal functions, Math. Nachr. 289 (2016), no. 11-12, 1433-1449.
[6] G. Ercole, G. Pereira and R. Sanchis: Asymptotic behavior of extremals for fractional Sobolev inequalities associated with singular problems, Ann. Mat. Pura Appl. (4) 198 (2019), no. 6, 2059-2079.
[7] G. Ercole, A. H. S. Medeiros and G. A. Pereira: On the behavior of least energy solutions of a fractional (p, q(p))-Laplacian problem as p goes to infinity, Asymptot. Anal. 123 (2021), no. 3-4, 237-262.
[8] R. Ferreira and M. Pérez-Llanos: Limit problems for a fractional p-Laplacian as p → ∞, NoDEA Nonlinear Differential Equations Appl. 23 (2016), no. 2, Art. 14, 28 pp.
[9] E. Lindgren and P. Lindqvist: Fractional eigenvalues, Calc. Var. Partial Differential Equations 49 (2014), no. 1-2, 795-826.
[10] P. Juutinen and P. Lindqvist: On the higher eigenvalues for the ∞-eigenvalue problem, Calc. Var. Partial Differential Equations 23 (2005), no. 2, 169-192.
[11] M. Mihăilescu, J. Rossi and D. Stancu-Dumitru: A limiting problem for a family of eigenvalue problems involving p-Laplacians, Rev. Mat. Complut. 32 (2019), no. 3, 631-653.

Departamento de Matemática, Universidade Federal de Minas Gerais, 31270-901 - Belo Horizonte - MG, Brazil. Email address: [email protected]
Departamento de Matemática, Universidade Federal de Viçosa, 36570-900 - Viçosa - MG, Brazil. Email address: [email protected]
| []
|
[
"The upper bound on knots in neural networks",
"The upper bound on knots in neural networks"
]
| [
"Kevin K Chen "
]
| []
| []
| Neural networks with rectified linear unit activations are essentially multivariate linear splines. As such, one of many ways to measure the "complexity" or "expressivity" of a neural network is to count the number of knots in the spline model. We study the number of knots in fully-connected feedforward neural networks with rectified linear unit activation functions. We intentionally keep the neural networks very simple, so as to make theoretical analyses more approachable. An induction on the number of layers l reveals a tight upper bound on the number of knots in R → R p deep neural networks. With n i 1 neurons in layer i = 1, . . . , l, the upper bound is approximately n 1 . . . n l . We then show that the exact upper bound is tight, and we demonstrate the upper bound with an example. The purpose of these analyses is to pave a path for understanding the behavior of general R q → R p neural networks. | null | [
"https://arxiv.org/pdf/1611.09448v2.pdf"
]
| 6,510,924 | 1611.09448 | 90eec7649689c1ae138de35862b0169cce5d0378 |
The upper bound on knots in neural networks
November 2016
Kevin K Chen
The upper bound on knots in neural networks
November 2016
Neural networks with rectified linear unit activations are essentially multivariate linear splines. As such, one of many ways to measure the "complexity" or "expressivity" of a neural network is to count the number of knots in the spline model. We study the number of knots in fully-connected feedforward neural networks with rectified linear unit activation functions. We intentionally keep the neural networks very simple, so as to make theoretical analyses more approachable. An induction on the number of layers l reveals a tight upper bound on the number of knots in R → R p deep neural networks. With n i 1 neurons in layer i = 1, . . . , l, the upper bound is approximately n 1 . . . n l . We then show that the exact upper bound is tight, and we demonstrate the upper bound with an example. The purpose of these analyses is to pave a path for understanding the behavior of general R q → R p neural networks.
Introduction
In recent years, neural networks-and deep neural networks in particular-have succeeded exceedingly well in such a great plethora of data-driven problems, so as to herald an entire paradigm shift in the way data science is approached. Many everyday computerized tasks-such as image and optical character recognition, the personalization of Internet search results and advertisements, and even playing games such as chess, backgammon, and Go-have been deeply impacted and vastly improved by the application of neural networks. The applications of neural networks, however, have advanced significantly more rapidly than the theoretical understanding of their successes. Elements of neural network structures-such as the division of vector spaces into convex polytopes, and the application of nonlinear activation functions-afford neural networks a great flexibility to model many classes of functions with spectacular accuracy. The flexibility is embodied in universal approximation theorems (Cybenko 1989;Hornik et al. 1989;Hornik 1991;Sonoda and Murata 2015), which essentially state that neural networks can model any continuous function arbitrarily well. The complexity of neural networks, however, have also made their analytical understanding somewhat elusive.
The general thrust of this paper, as well as two companion papers (Chen et al. 2016b,a), is to explore some unsolved elements of neural network theory, and to do so in a way that is independent of specific problems. In the broadest sense, we seek to understand what models neural networks are capable of producing. There exist many variations of neural networks, such as convolutional neural networks, recurrent neural networks, and long short-term memory models, each having their own arenas of success. For simplicity, we choose to focus on the simplest case of feedforward, fully-connected neural networks with rectified linear unit activations. This model is defined more precisely in Section 2.
More specifically, as we will see, neural networks with rectified linear unit activations are linear splines; i.e., they are continuous, piecewise linear functions with a finite number of pieces. Therefore, one of many ways to measure of the "complexity" or "expressivity" of a neural network is to count the number of knots, i.e., discontinuities in the first derivative of the output quantities with respect to input quantities. Similarly, one could count the number of piecewise linear regions given by the neural network. Previous works (e.g., Raghu et al. 2016) have observed or shown that number of piecewise linear pieces grows exponentially with the number of layers in the neural network, therefore justifying the use of deep networks over shallow networks.
In this paper, we continue the exploration of how the size of a neural network, given by the width or the number of neurons in a layer, and the depth or the number of layers, is related to the number of knots in the neural network. Whereas previous works have generally focused on asymptotic or otherwise approximate upper bounds, we derive an exact tight upper bound. The chief utility of such a bound is that it allows an a priori determination of whether a neural network size is sufficient for a given task or governing equation. For instance, we could imagine that a neural network designer at least roughly knows the complexity of the input-output behavior of a function to be modeled. In this case, certain neural network widths and depths could be ruled out, on the grounds that no neural networks of those sizes could produce enough knots to model the function of interest.
In this paper, we attempt to circumvent some of the complexities of neural network behavior by making simplifications that may seem strong at times. For instance, the results we report apply specifically to R → R p functions. Although neural networks are almost never used to study singleinput functions, the simplicity does admit certain analyses that would otherwise be very difficult for general R q → R p functions. Indeed, a key objective following this paper is to extend the results to multidimensional inputs. This extension is tantamount to analyzing convex polytopes in R q instead of linear segments in R in the input space.
The main results of the paper are given by the following theorems.
Theorem 1. In an l-layer R → R p neural network with n i rectified linear unit neurons in layer i = 1, . . . , l, the number of knots m l in the neural network model satisfies
$$m_l \le \sum_{i=1}^{l} n_i \prod_{j=i+1}^{l} (n_j + 1). \tag{1}$$
Theorem 2. If n i ≥ 3 for i = 1, . . . , l − 1 and n l ≥ 2, then the upper bound (1) is tight.
This paper is organized as follows. Section 2 briefly reviews the neural network architecture that we employ in this paper. Constructive proofs of Theorems 1 and 2 are presented respectively in Sections 3 and 4. An example of a deep neural network meeting the upper bound on the number of knots is then constructed in Section 5. Finally, we summarize our work and comment on future directions in Section 6.
Brief overview of neural networks
In Section 2.1, we first review the basic definitions and descriptions of neural networks. Next, we describe two ideas which are relevant for the analytical development of the paper. Section 2.2 describes the rectified linear unit neural network as a linear spline with associated knots and roots, so as to allow knot counting. Afterwards, Section 2.3 derives a transformation of the neural network into an equivalent model with only forward-facing rectified linear units. Such a transformation is useful in constructing particular neural networks (e.g., for Theorem 2 and its associated lemmas).
Description of neural networks
Neural networks are most commonly employed in the context of supervised machine learning, where the primary objective is to construct a function that best models a data set. In this paper, however, we will be more concerned with the functional behavior of neural network models than with the training of such models. As such, we will not address common topics such as model risk, loss, and optimization. A review of machine learning techniques and their statistical analyses can be found in Knox (2016).
We begin by defining neural networks of a single or multiple hidden layers. It is noteworthy that many variations on neural networks exist. The definitions below correspond to the dense, fully-connected, feedforward structure we will employ, but may differ from architectures used in other studies or applications.
Definition. For some bias b ∈ R, weight w ∈ R n , nonlinear activation function σ : R → R, and input v ∈ R n , a neuron is the function σ(w · v + b).
Definition. Let q and p respectively denote the input and output dimension. For k = 1, . . . , n, with n the number of neurons, select input biases b_{1k} ∈ R and input weights w_{1k} ∈ R^q. Also, for k = 1, . . . , p, select output biases b_{2k} ∈ R and output weights w_{2k} ∈ R^n. Using the shorthand notation v := [v_1 · · · v_n] ∈ R^n and y := [y_1 · · · y_p] ∈ R^p, a single-hidden-layer neural network is the model f̂ : R^q → R^p, x ↦ y given by
$$v_k := \sigma(w_{1k}\cdot x + b_{1k}), \qquad k = 1,\ldots,n, \tag{2a}$$
$$y_k := w_{2k}\cdot v + b_{2k}, \qquad k = 1,\ldots,p. \tag{2b}$$
This architecture is shown in Figure 1. In summary, each neuron takes an affine transformation of the input and applies the activation function (2a). Then, each output takes an affine transformation of all the neural outputs (2b). The flexibility of this architecture is apparent from the (q + 1)n + (n + 1)p scalars that comprise the biases and weights. In particular, the well-known universal approximation theorem loosely states that if the activation function σ is continuous, non-constant, and bounded, then the single-hidden-layer neural network can approximate any continuous function arbitrarily well with a finite number n of neurons (Cybenko 1989;Hornik et al. 1989;Hornik 1991). A recent result (Sonoda and Murata 2015) extends the universal approximation result to the commonly employed rectified linear unit
$$\sigma(x) := \max(0, x) = \frac{x + |x|}{2}. \tag{3}$$
Although the universal approximation theorem implies that the single-hidden-layer neural network is sufficiently flexible for modeling continuous functions, it is common to employ deep neural networks, where the outputs of neurons are fed into further hidden layers of neurons. Such architectures are behind many of the notable successes in machine learning applications. The deep neural network with l layers proceeds as follows.
Figure 1: The single-hidden-layer neural network, with the hidden layer shown in red.

Definition. Let q and p respectively denote the input and output dimension. Set n_i as the number of neurons for each layer i = 1, . . . , l. For each k = 1, . . . , n_1, select input weights w_{1k} ∈ R^q and input biases b_{1k} ∈ R. Also, for i = 2, . . . , l and for each k = 1, . . . , n_i, also select weight vectors w_{ik} ∈ R^{n_{i−1}} and biases b_{ik} ∈ R. Finally, for k = 1, . . . , p, select output weight vectors w_{l+1,k} ∈ R^{n_l} and output biases b_{l+1,k} ∈ R. Using the shorthand notation v_i := [v_{i1} · · · v_{in_i}] ∈ R^{n_i} and y := [y_1 · · · y_p], a deep neural network is the model f̂ : R^q → R^p, x ↦ y given by
$$v_{1k} := \sigma(w_{1k}\cdot x + b_{1k}), \qquad k = 1,\ldots,n_1, \tag{4a}$$
$$v_{ik} := \sigma(w_{ik}\cdot v_{i-1} + b_{ik}), \qquad i = 2,\ldots,l, \quad k = 1,\ldots,n_i, \tag{4b}$$
$$y_k := w_{l+1,k}\cdot v_l + b_{l+1,k}, \qquad k = 1,\ldots,p. \tag{4c}$$
The deep neural network architecture is shown in Figure 2. Typically, n 1 > · · · > n l ; it has been empirically shown that training risk is better reduced by optimizing layers closer to the input than layers closer to the output (Raghu et al. 2016).
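The following minimal NumPy sketch (ours, not part of the original text) implements the forward pass (4) of the fully-connected feedforward rectified linear unit network; the array shapes and the example widths are assumptions made only for illustration.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, weights, biases, out_weight, out_bias):
    """Evaluate the deep network (4).

    x          : array of shape (q,), the input.
    weights    : list of matrices; weights[0] has shape (n_1, q) and
                 weights[i] has shape (n_{i+1}, n_i) for i >= 1.
    biases     : list of vectors; biases[i] has shape (n_{i+1},).
    out_weight : matrix of shape (p, n_l); out_bias : vector of shape (p,).
    """
    v = x
    for W, b in zip(weights, biases):
        v = relu(W @ v + b)            # hidden layers (4a), (4b)
    return out_weight @ v + out_bias   # affine output layer (4c)

# Example: a 1 -> 1 network with two hidden layers of widths 3 and 2.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 1)), rng.standard_normal((2, 3))]
biases = [rng.standard_normal(3), rng.standard_normal(2)]
W_out, b_out = rng.standard_normal((1, 2)), rng.standard_normal(1)
print(forward(np.array([0.5]), weights, biases, W_out, b_out))
```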
Splines, knots, and roots
In this study, we will use the rectified linear unit activation function (3) in all neurons. The rectified linear unit is a common choice because it creates flexible models and is fast to compute. Other common choices, such as the sigmoid function 1/(1 + e −x ), are more computationally intensive. They also typically have smaller regions in the domain where the first derivative is far from zero, which can pose additional challenges when training neural networks on data.
With the rectified linear unit activation, the neural network is essentially a linear spline. To understand this property, first consider the simplified case of a single scalar input, i.e., where the neural network is somef : R → R p , x → y. The outputs of the first hidden layer (2a, 4a) are v 1k (x) = σ(w 1k x + b 1k ) for k = 1, . . . , n 1 . Since σ(x) is continuous and has a discontinuity in dσ/dx at x = 0, v 1k is clearly also continuous and has a discontinuity in dv 1k /dx at x = −b 1k /w 1k . Thus, the functions v 1k (x) are linear splines. The next layer, whether it is a second hidden layer or the output layer, then computes an affine transformation of the functions v 1k (x). Such an affine transformation is continuous; hence, it is still a linear spline. This reasoning can be carried out through each hidden layer to the output.
In every application of a rectified linear unit beyond the first layer, knots can be retained, destroyed, or created. An example of this process is shown in Figure 3 for some neuron k in
some layer i.

Figure 2: The deep neural network.

Figure 3: Blue: the affine transformation w_{ik} · v_{i−1}(x) + b_{ik} of the previous layer output v_{i−1}(x). Red: the neural output σ(w_{ik} · v_{i−1}(x) + b_{ik}), with knots shown as ×. Knots of the blue spline above zero are retained, knots below zero are discarded, and roots of the blue spline appear as new knots.

If the previous layer output v_{i−1}(x) contains a particular knot at some x_j such that
w ik · v i−1 (x j ) + b ik > 0
, then the application of the rectified linear unit does not alter this knot, and the knot is retained by this neuron. On the other hand, if w ik · v i−1 (x j ) + b ik < 0, then both the knot and the immediate neighborhood of x j are rectified to zero, and the knot at x j is destroyed. Finally, wherever w ik · v i−1 (x) + b ik crosses zero, there exists a region on one side of the root that is rectified to zero. The rectification introduces a new knot at the root, as shown in Figure 3. In all three cases, the neural output σ(w ik · v i−1 (x) + b ik ) again remains a continuous function with discrete discontinuities in its first derivative. Hence, even deep neural networks with rectified linear unit activations are linear splines. The mechanisms for retaining, destroying, and creating knots will be relevant when deriving the upper bound on the number of knots in Section 3. The description of knots becomes more sophisticated in the typical scenario where the input space is R q with q > 1. In this case, each neuron in the first hidden layer divides the input space into two regions split by the hyperplane w 1k · x + b 1k = 0. With the rectified linear unit acting on w 1k · x + b 1k , each neuron outputs zero on one side of the hyperplane, and a half-plane with normal vector [x v 1k ] = [−w 1k 1] on the other side. Just as further hidden layers retain, destroy, and create new knots for q = 1, further hidden layers retain, destroy, and create new hyperplanes or pieces thereof for q > 1. The resulting neural network is a piecewise linear R q → R p model on a finite number of convex polytopes; see Figure 4 for an example. It is still possible to analyze such neural networks in a one-dimensional sense if we were to consider one-dimensional trajectories through the input space R q (Raghu et al. 2016), but the full model is notably more complex in general. Many analytical results on multidimensional input spaces rely on upper bounds and asymptotics based on polytope counting Raghu et al. 2016).
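Because an R → R^p rectified linear unit network is a linear spline, its knots can be located numerically by sampling on a fine grid and detecting changes in the finite-difference slope. The helper below is our own utility, not part of the paper; the grid bounds, resolution, and tolerance are assumptions, and the merging step accounts for a knot falling strictly inside a grid cell.

```python
import numpy as np

def count_knots(f, a=-10.0, b=10.0, num=200001, tol=1e-6):
    """Approximately count the knots of a piecewise-linear f: R -> R on [a, b].

    f is assumed to accept a NumPy array. Knots appear as changes in the
    finite-difference slope; adjacent detections are merged, because a knot
    that falls strictly inside a grid cell produces two neighboring changes.
    """
    x = np.linspace(a, b, num)
    y = f(x)
    slopes = np.diff(y) / np.diff(x)
    big = np.abs(np.diff(slopes)) > tol
    starts = big & ~np.r_[False, big[:-1]]
    return int(np.sum(starts))
```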
Equivalent form with forward-facing rectified linear units
For the simple case of R → R p neural networks with one hidden layer, the neurons in the first hidden layer (2a, 4a) output v 1k = σ(w 1k x + b 1k ), which is essentially a rectified linear unit σ(x) that is horizontally stretched and translated, and possibly reflected across the v 1k -axis. Therefore, the sloped ray in the activated region can extend into quadrants I or II in the x-v 1k plane. For the purpose of constructing or analyzing R → R p neural networks, it is convenient to have all rectified linear units extend in the positive x direction (i.e., into quadrant I), which we call "forward-facing." Such a feature allows us to consider the action of each rectified linear unit by starting at x = −∞ and increasing x. Thus, no rectified linear units are activated at x = −∞, and the units are successively activated with increasing x; no units are deactivated.
The transformation that expresses the scalar-input, single-hidden-layer neural network with forward-facing rectified linear units is as follows.
Lemma 1. Consider the single-hidden-layer R → R p rectified linear unit neural network with input weights w 1j ∈ R, input biases b 1j ∈ R, output weights w 2kj ∈ R, and output biases b 2k ∈ R for j = 1, . . . , n and k = 1, . . . , p. The neural network model
$$y_k(x) = \sum_{j=1}^{n} w_{2kj}\,\sigma(w_{1j}x + b_{1j}) + b_{2k}, \qquad k = 1,\ldots,p \tag{5}$$
(cf. (2) with w_{2k} = [w_{2k1} . . . w_{2kn}]) is equivalently
$$y_k(x) = \sum_{j=1}^{n} s_{kj}\,\sigma(x - x_j) + c_{1k}x + c_{0k}, \qquad k = 1,\ldots,p, \tag{6}$$
where
$$c_{1k} := \sum_{\substack{1\le j\le n\\ w_{1j}<0}} w_{2kj}w_{1j}, \qquad c_{0k} := \sum_{\substack{1\le j\le n\\ w_{1j}<0}} w_{2kj}b_{1j} + b_{2k}, \tag{7a}$$
$$s_{kj} := w_{2kj}|w_{1j}|, \qquad x_j := -\frac{b_{1j}}{w_{1j}}, \qquad j = 1,\ldots,n \tag{7b}$$
for k = 1, . . . , p. All rectified linear units in (6) face forward.
Proof. We first split the sum in (5) according to the sign of w 1j , so that
$$y_k(x) = \sum_{\substack{1\le j\le n\\ w_{1j}<0}} w_{2kj}\,\sigma(w_{1j}x + b_{1j}) + \sum_{\substack{1\le j\le n\\ w_{1j}\ge 0}} w_{2kj}\,\sigma(w_{1j}x + b_{1j}) + b_{2k}. \tag{8}$$
Next, we observe from (3) that
σ(x) = σ(−x) + x;(9)
using this property on the first sum, we obtain
$$y_k(x) = \sum_{\substack{1\le j\le n\\ w_{1j}<0}} w_{2kj}\,\sigma(-w_{1j}x - b_{1j}) + \sum_{\substack{1\le j\le n\\ w_{1j}<0}} w_{2kj}(w_{1j}x + b_{1j}) + \sum_{\substack{1\le j\le n\\ w_{1j}\ge 0}} w_{2kj}\,\sigma(w_{1j}x + b_{1j}) + b_{2k} \tag{10a}$$
$$= \sum_{\substack{1\le j\le n\\ w_{1j}<0}} w_{2kj}\,\sigma(-w_{1j}x - b_{1j}) + \sum_{\substack{1\le j\le n\\ w_{1j}\ge 0}} w_{2kj}\,\sigma(w_{1j}x + b_{1j}) + c_{1k}x + c_{0k}. \tag{10b}$$
To combine the two sums, we further observe that if w ≥ 0, then σ(wx) = wσ(x). Thus, we can pull −w 1j out of the rectified linear unit in the first sum and w 1j out of the same in the second sum, and obtain
$$y_k(x) = \sum_{\substack{1\le j\le n\\ w_{1j}<0}} -w_{2kj}w_{1j}\,\sigma\!\left(x + \frac{b_{1j}}{w_{1j}}\right) + \sum_{\substack{1\le j\le n\\ w_{1j}\ge 0}} w_{2kj}w_{1j}\,\sigma\!\left(x + \frac{b_{1j}}{w_{1j}}\right) + c_{1k}x + c_{0k} \tag{11a}$$
$$= \sum_{j=1}^{n} w_{2kj}|w_{1j}|\,\sigma\!\left(x + \frac{b_{1j}}{w_{1j}}\right) + c_{1k}x + c_{0k}, \tag{11b}$$
which is equal to (6). All rectified linear units face forward because the coefficient on x is simply unity.
Besides that all the rectified linear units in (6) face forward, the utility of that expression is that the entire neural network is expressed in terms of four sets of parameters (7), each with a natural interpretation. The parameter x j is the location of the knot created by neuron j. For convenience, we will assume hereafter that all parameters in j (i.e., w 1j , b 1j , w 2kj , s kj , and x j ) are sorted by ascending x j . Next, in the contribution from the forward-facing rectified linear unit in neuron j to the scalar output k, s kj is the slope of the activated region. Finally, c 1k and c 0k describe the line that is added to the sum of rectified linear units, so as to complete the equivalence between (5) and (6).
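A small numerical check of Lemma 1 (a sketch we add here, with randomly chosen parameters) converts (w_{1j}, b_{1j}, w_{2kj}, b_{2k}) into the parameters (7) and confirms that (5) and (6) agree on a set of test points.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)
rng = np.random.default_rng(1)
n = 5
w1, b1 = rng.standard_normal(n), rng.standard_normal(n)   # input weights / biases
w2, b2 = rng.standard_normal(n), rng.standard_normal()    # output weights / bias (p = 1)

# Original form (5).
y_orig = lambda x: w2 @ relu(w1 * x + b1) + b2

# Equivalent forward-facing form (6) with parameters (7).
neg = w1 < 0
c1 = np.sum(w2[neg] * w1[neg])
c0 = np.sum(w2[neg] * b1[neg]) + b2
s = w2 * np.abs(w1)
knots = -b1 / w1
y_ff = lambda x: s @ relu(x - knots) + c1 * x + c0

xs = np.linspace(-5, 5, 7)
print(np.allclose([y_orig(x) for x in xs], [y_ff(x) for x in xs]))  # True
```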
Upper bound on number of knots
Some recent articles have derived asymptotic or otherwise approximate upper bounds for the number of linear regions in neural networks with multidimensional inputs and outputs. For instance, building on , showed that for an R q → R p neural network with n i ≥ q neurons in layer i = 1, . . . , l, the upper bound on the number of linear regions is at least
$$\prod_{i=1}^{l-1}\left\lfloor \frac{n_i}{q}\right\rfloor^{q}\;\sum_{j=0}^{q}\binom{n_l}{j}. \tag{12}$$
Later, Raghu et al. (2016) gave asymptotic upper bounds for the number of linear regions in neural networks with multidimensional inputs and outputs. The article shows that an R q → R p neural network with n neurons in each of l layers has a number of regions that grows at most like O(n ql ) for rectified linear unit activations, and O((2n) ql ) for step activation functions. Furthermore, the asymptotic upper bound is shown to be tight Raghu et al. 2016).
In this section, we derive an exact as opposed to asymptotic or approximate upper bound, but restrict ourselves to the case of R → R p neural networks. The possibility of extending the result to R q → R p remains open. We first discuss the mechanisms by which the maximal number of knots is retained and created in each hidden layer. Next, we use induction to prove Theorem 1, which states the upper bound. Afterwards, we prove in Section 4 that the upper bound is tight (Theorem 2).
We begin with a basic definition that we will use throughout this section.
Definition.
A knot or its location is unique if the knot's input coordinate is different from that of all other knots in the neural network.
To set the base case for the induction, we first consider the neural network with l = 1 layer and n 1 neurons in that layer. Using the notation of Lemma 1, we make the simple observation that in a one-hidden-layer neural network, each neuron contributes exactly one knot to the model at x j = −b 1j /w 1j . If the input biases b 1j and input weights w 1j are selected such that the knot locations x j are unique, then the neural network has exactly n 1 knots.
To consider the inductive step, recall from Section 2.2 that every application of a rectified linear unit can preserve, destroy, or create new knots. For the purposes of constructing an upper bound, we can make the stronger statement that with the proper choice of weights and biases, every knot can be preserved in every hidden layer. Explicitly, the knots in the affine transformation w ik · v i−1 (x) + b ik of layer i − 1 outputs can be preserved in σ(w ik · v i−1 (x) + b ik ), the output of neuron k in layer i. The most naive way to do so is to set the biases b ik so high that w ik · v i−1 (x j ) + b ik > 0 for all knots x j ; see Figure 5(a). The disadvantage of this method is that the rectified linear unit does not create any new knots. A better but still very simple alternative is to have two neurons in layer i employ identical or similar weights w ik and biases b ik , but with flipped signs. This way, as shown in Figure 5(b), one neuron would preserve some subset of the knots of w ik · v i−1 (x) + b ik , and the other neuron would preserve the complement. With this design, each rectified linear unit is able to create the maximum possible number of knots as follows.
Since each affine transformation w ik · v i−1 (x) + b ik is a linear spline, each line segment between adjacent knots can have at most one root. If w ik · v i−1 (x) + b ik has m i−1 knots, then these connections can cumulatively have at most m i−1 − 1 roots. Additionally, there may exist one root between x = −∞ and the knot x 1 closest to −∞, and another root between the knot x m i−1 closest to ∞ and x = ∞. In total, w ik · v i−1 (x) + b ik can have at most m i−1 + 1 roots. Hence, the output σ(w ik · v i−1 (x) + b ik ) of neuron k in layer i can create at most m i−1 + 1 knots, with the equality
being met with a sawtooth wave. Furthermore, each neuron k can adjust b_{ik} so as to create m_{i−1} + 1 knots uniquely. This construction is demonstrated in Figure 5(c).

Figure 5: (a) Knots x_j in w_{ik} · v_{i−1}(x) + b_{ik} (red) can be preserved in σ(w_{ik} · v_{i−1}(x) + b_{ik}) by setting b_{ik} sufficiently high so that w_{ik} · v_{i−1}(x_j) + b_{ik} is greater than zero (dashed line) for all k. (b) Alternatively, two neurons (red and blue) can assign similar weights and biases with opposite signs to preserve knots on both sides of zero. (c) If w_{ik} · v_{i−1}(x) + b_{ik} is a sawtooth wave with m_{i−1} knots, then each neuron k in layer i can uniquely create m_{i−1} + 1 new knots. An example is shown for k = 1, 2, 3.
Having shown that all knots can be preserved in every layer, and having computed the maximum number of knots that each neuron can create, the upper bound (Theorem 1) can be formally derived. Note that we have not yet shown that all knots can always be preserved at the same time that every neuron in every layer creates the maximum possible number of knots. We first prove the upper bound as follows, and demonstrate the tightness of the bound by construction later in Section 4.
Proof of Theorem 1. For l = 1, the neural network can have up to one knot per neuron, as previously stated. That is, m 1 ≤ n 1 , which is equivalent to (1).
For l > 1, let us once again denote the number of knots in the affine transformation of layer i outputs by m i . In layer i, each neuron j = 1, . . . , n i can preserve at most all m i−1 knots from the previous layer, and can also create at most m i−1 + 1 knots uniquely. Therefore, the upper bound on m i is
$$m_i \le m_{i-1} + n_i(m_{i-1} + 1) \tag{13a}$$
$$= (n_i + 1)m_{i-1} + n_i. \tag{13b}$$
Setting i = l + 1 in (13b), we have that m l+1 ≤ (n l+1 + 1)m l + n l+1 . Supposing that (1) is true, we find that
$$m_{l+1} \le (n_{l+1}+1)\sum_{i=1}^{l} n_i \prod_{j=i+1}^{l}(n_j+1) + n_{l+1} \tag{14a}$$
$$= \sum_{i=1}^{l} n_i \prod_{j=i+1}^{l+1}(n_j+1) + n_{l+1} \tag{14b}$$
$$= \sum_{i=1}^{l+1} n_i \prod_{j=i+1}^{l+1}(n_j+1). \tag{14c}$$
Hence, if (1) holds for l, then it also holds for l + 1, and the induction is complete.
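The recursion (13b) and the closed form (1) can be cross-checked numerically; the snippet below is our own sketch, with an arbitrary example architecture.

```python
from math import prod

def bound_closed_form(widths):
    """Upper bound (1) on the number of knots for hidden-layer widths n_1..n_l."""
    l = len(widths)
    return sum(widths[i] * prod(widths[j] + 1 for j in range(i + 1, l))
               for i in range(l))

def bound_recursion(widths):
    """Same bound via the recursion m_i = (n_i + 1) m_{i-1} + n_i, with m_0 = 0."""
    m = 0
    for n in widths:
        m = (n + 1) * m + n
    return m

widths = [7, 5, 4, 3]
print(bound_closed_form(widths), bound_recursion(widths))  # both print 959
```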
Remark. The dimension p of the output space does not affect the upper bound on the number of knots in the neural network; see Lemma 2 of . The output layer is simply an affine transformation, and does not contain any rectified linear units. Therefore, all knots that are outputted from the final hidden layer v l can be preserved. Additionally, some knots may possibly be destroyed in the degenerate case where v l has discontinuities in its first derivative, but w l+1,k · v l does not for all k = 1, . . . , p. Either way, no new knots can be created in the output layer.
Remark. In most applications of neural networks, n 1 ≥ · · · ≥ n l , where n l is notably larger than unity. In this case, the upper bound (1) is dominated by the i = 1 summand, and the upper bound is approximately
$$\prod_{i=1}^{l} n_i. \tag{15}$$
If we further assume that n := n 1 = · · · = n l (16) (which is sometimes useful for analytical purposes but less commonly employed in practice), then the upper bound further reduces to n l . This approximate upper bound is consistent with the tight asymptotic upper bound O(n ql ) given by Raghu et al. (2016), where we have used the input dimension q = 1.
Remark. The number of scalar parameters in the weights and biases of a deep R q → R p network (4) is
$$(q+1)n_1 + \sum_{i=1}^{l-1}(n_i+1)n_{i+1} + (n_l+1)p. \tag{17}$$
If we assume (16) once again, then for q = 1, the number of parameters is 2n+(n+1)(n(l−1)+p) ≈ (p + 2)n + (l − 1)n 2 . This number is typically far smaller than n l for l ≥ 3. Thus, deep networks can possibly create a large number of knots with a comparatively small number of parameters. This feature plays a key role in the expressive power of deep neural networks. It has been suggested that although shallow networks can create models identical to deep networks via the universal approximation theorem, they may require many more parameters to do so; see Lin and Tegmark (2016) and the references within.
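As a small illustration of this remark (ours, with assumed values of n and l), the parameter count (17) can be compared against the n^l scale of the knot bound.

```python
def num_parameters(q, widths, p):
    """Parameter count (17) for a fully-connected R^q -> R^p ReLU network."""
    total = (q + 1) * widths[0]
    total += sum((widths[i] + 1) * widths[i + 1] for i in range(len(widths) - 1))
    total += (widths[-1] + 1) * p
    return total

# Example from the remark: q = p = 1 and equal widths n in every layer.
n, l = 10, 4
print(num_parameters(1, [n] * l, 1), n**l)   # 361 parameters versus n^l = 10000
```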
Tightness of the upper bound
Next, we show that the upper bound (1) is tight if there is a sufficient number of neurons in each layer, which will almost certainly be satisfied in practical applications. This demonstration proceeds by construction. In Lemma 2, we first review the trivial case where the neural network has l = 1 layer. We then show in Lemma 3 that the affine transformation of the first hidden layer outputs can be made into a sawtooth wave. Then, we show in Lemma 4 that subsequent hidden layers can turn sawtooth wave inputs into sawtooth wave outputs with the maximum number of knots. Finally, we reaffirm that all knots from a previous layer can be preserved in the application of a new layer, while creating the maximum number of knots.
Lemma 2. The upper bound (1) is tight for single-hidden-layer neural networks.
Proof. Equation (1) reduces to m 1 ≤ n 1 for l = 1. As previously stated, the equality is obtained simply by choosing b 1j and w 1j in (5) such that x j = −b 1j /w 1j is unique for each j = 1, . . . , n 1 .
Lemma 3. If the first hidden layer has n_1 ≥ 3 neurons, then there exist weights w_{1j}, w_{2kj} and biases b_{1j}, b_{2k} such that the input
$$\sum_{j=1}^{n_1} w_{2kj}\,\sigma(w_{1j}x + b_{1j}) + b_{2k} \tag{18}$$
to the rectified linear unit in neuron k of layer 2 is a sawtooth wave.
Proof. One way to construct such a sawtooth wave is to select
$$w_{1j} = \begin{cases} -1, & j = 3\\ 1, & j \neq 3 \end{cases} \tag{19a}$$
$$b_{1j} = \begin{cases} j-1, & j = 3\\ -j+1, & j \neq 3 \end{cases} \tag{19b}$$
$$w_{2kj} = \begin{cases} \tfrac{3}{2}, & j = 1\\ -1, & j \text{ even}\\ 1, & j > 1 \text{ and } j \text{ odd}, \end{cases} \tag{19c}$$
with b 2k arbitrary. This is more apparent if we apply Lemma 1 and write (18) as
$$\sum_{j=1}^{n_1} s_{kj}\,\sigma(x - x_j) + c_{1k}x + c_{0k}, \tag{20}$$
where
$$x_j = j-1, \qquad s_{kj} = w_{2kj}, \qquad c_{1k} = -1, \qquad c_{0k} = b_{2k} + 2. \tag{21}$$
That is, the knots are evenly spaced, the initial slope from x = −∞ to the first knot x 1 = 0 is c 1k = −1, and the slopes of the subsequent segments between knots are obtained by cumulatively adding s kj . Thus, the slopes in successive linear pieces of the spline are
$$\left\{ c_{1k} + \sum_{j=1}^{r} s_{kj} \right\}_{r=0}^{n_1} = \left\{-1,\ \tfrac12,\ -\tfrac12,\ \tfrac12,\ -\tfrac12,\ \ldots\right\}, \tag{22}$$
which generates a sawtooth wave. See Figure 6 for an example.
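The construction (19) can be checked numerically. The sketch below (ours) builds the first-layer input (18) with those weights, taking b_{2k} = 0, and counts its knots on a grid; the grid resolution and tolerance are assumptions of the check.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)
n1 = 7                                    # any n1 >= 3

# Weights and biases (19): every unit faces forward except the third one.
w1 = np.array([1.0 if j != 3 else -1.0 for j in range(1, n1 + 1)])
b1 = np.array([-(j - 1.0) if j != 3 else (j - 1.0) for j in range(1, n1 + 1)])
w2 = np.array([1.5 if j == 1 else (-1.0 if j % 2 == 0 else 1.0) for j in range(1, n1 + 1)])

x = np.linspace(-2.0, n1 + 1.0, 200001)
g = (w2[:, None] * relu(w1[:, None] * x[None, :] + b1[:, None])).sum(axis=0)

slopes = np.diff(g) / np.diff(x)
big = np.abs(np.diff(slopes)) > 1e-6
n_knots = int(np.sum(big & ~np.r_[False, big[:-1]]))   # merge adjacent detections
print(n_knots)                            # n1 knots, located at x = 0, 1, ..., n1 - 1
```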
Lemma 4. Suppose layer i ≥ 2 has n i ≥ 3 neurons, and there exist weights α ij ∈ R for j = 1, . . . , n i−1 such that
$$g_i(x) := \sum_{j=1}^{n_{i-1}} \alpha_{ij}\, v_{i-1,j}(x) \tag{23}$$
(which is an input to a layer i rectified linear unit, up to a bias) is a sawtooth wave with m i−1 knots.
Then there exist weights w ikj , α i+1,k ∈ R and biases b ik ∈ R for j = 1, . . . , n i−1 and k = 1, . . . , n i such that given
$$v_{ik}(x) := \sigma\!\left(\sum_{j=1}^{n_{i-1}} w_{ikj}\, v_{i-1,j}(x) + b_{ik}\right), \tag{24}$$
the function
$$g_{i+1}(x) := \sum_{k=1}^{n_i} \alpha_{i+1,k}\, v_{ik}(x) \tag{25}$$
is a sawtooth wave with the maximal number of knots
$$m_i = m_{i-1} + n_i(m_{i-1} + 1) \tag{26}$$
(cf. (13a)).
Proof. Suppose that-excluding the sections of g i (x) between x = −∞ and the first knot x 1 , and between the last knot x m i−1 and x = ∞-the minimum and maximum of the oscillation in g i (x) are respectively g min and g max . For convenience, let us rescale g i (x) such that the minimum and maximum are respectively 0 and 1; we definê
$$\hat{g}_i(x) := \frac{g_i(x) - g_{\min}}{g_{\max} - g_{\min}}. \tag{27}$$
The central idea behind the construction is to select the weights and biases so that every line segment of the oscillation betweenĝ i = 0 and 1 is transformed into a sawtooth wave with n i knots. One method to achieve this is to construct the wave
$$g_{i+1}(x) = \frac{3}{2}\,\sigma\!\left(\hat{g}_i(x) - \frac{1}{2n_i+1}\right) - \sigma\!\left(\hat{g}_i(x) - \frac{3}{2n_i+1}\right) + \sigma\!\left(-\hat{g}_i(x) + \frac{5}{2n_i+1}\right) + \sum_{k=4}^{n_i} (-1)^{k+1}\,\sigma\!\left(\hat{g}_i(x) - \frac{2k-1}{2n_i+1}\right). \tag{28}$$
This construction has a natural equivalence with (19), withĝ i used in place of x. Interpretingĝ i as the independent variable and setting
$$\alpha_{i+1,k} := \begin{cases} \tfrac{3}{2}, & k = 1\\ -1, & k \text{ even}\\ 1, & k > 1 \text{ and } k \text{ odd}, \end{cases} \qquad \gamma_k := \frac{2k-1}{2n_i+1}, \tag{29}$$
we employ (9) to find that (28) is equivalent to
$$g_{i+1} = \sum_{k=1}^{n_i} \alpha_{i+1,k}\,\sigma(\hat{g}_i - \gamma_k) - \hat{g}_i + \frac{5}{2n_i+1}. \tag{30}$$
Thus, as ĝ_i increases from 0 to 1, the slope of g_{i+1} with respect to ĝ_i in consecutive segments is
$$\left\{ -1 + \sum_{k=1}^{r} \alpha_{i+1,k} \right\}_{r=0}^{n_i} = \left\{-1,\ \tfrac12,\ -\tfrac12,\ \tfrac12,\ -\tfrac12,\ \ldots\right\}, \tag{31}$$
so g_{i+1}(x) is a sawtooth wave with n_i knots. Referring back to Section 3, we recall that the maximum number of knots (26) is achieved if every knot in ĝ_i(x) is retained, and each of the n_i neurons uniquely creates m_{i−1} + 1 knots. We verify that these conditions are met. The quantity ĝ_i − γ_k has a total of m_{i−1} − 1 roots between the m_{i−1} knots, plus one each between x = −∞ and the first knot x_1, and between the last knot x_{m_{i−1}} and x = ∞. In total, each neuron creates m_{i−1} + 1 knots. Furthermore, each bias γ_k is unique, ensuring that the knots that are created by each of the n_i rectified linear units are also unique (see Figure 5(c)). Finally, since the operand to σ in the third summand in (28) contains −ĝ_i(x) as opposed to ĝ_i(x) in all other summands, both the lower and the upper knots of the sawtooth wave are preserved by the right-hand side of (28), as shown in Figure 5(b).
Note that for the induction to carry through successive layers, we must also verify that the local minima of (30) are all equal, as are the local maxima. This is easily confirmed, since the spacing in ĝ_i between consecutive knots (including endpoints) is

{ γ_1 − 0, γ_2 − γ_1, . . . , γ_{n_i} − γ_{n_i − 1}, 1 − γ_{n_i} } = { 1/(2n_i + 1), 2/(2n_i + 1), . . . , 2/(2n_i + 1) }.    (32)
Comparing this against the slopes (31), the vertical displacement between consecutive knots is simply
− 1 2n i + 1 , 1 2n i + 1 , − 1 2n i + 1 , 1 2n i + 1 , . . . .(33)
Finally, to complete the construction, we combine (23,24,27,28) to find that one valid set of weights and biases is given by (29) and
w_ikj = α_ij / (g_max − g_min) · (−1 if k = 3, and 1 otherwise),    (34a)
b_ik = −( g_min / (g_max − g_min) + (2k − 1)/(2n_i + 1) ) · (−1 if k = 3, and 1 otherwise).    (34b)
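As a sanity check on this induction step, the following sketch (our own; the function names refine and count_knots are not the paper's notation) applies the construction (28)-(29) to a rescaled first-layer sawtooth and verifies the knot count (26) numerically.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def base_sawtooth(x, m1):
    """A sawtooth with m1 knots at x = 0, ..., m1 - 1 (the Lemma 3 shape),
    rescaled so that its oscillation runs between 0 and 1."""
    g = -x + 1.5 * relu(x)
    for j in range(2, m1 + 1):
        g = g + (-1.0) ** (j + 1) * relu(x - (j - 1))
    return 2.0 * g  # the oscillation of g spans [0, 1/2], so ghat = 2 g

def refine(g_hat, n_i):
    """One induction step, Eq. (28): n_i rectified linear units of g_hat with
    biases gamma_k = (2k - 1)/(2 n_i + 1); the third unit faces backwards and
    the combination signs follow alpha_{i+1,k} of (29)."""
    gamma = (2.0 * np.arange(1, n_i + 1) - 1.0) / (2.0 * n_i + 1.0)
    out = 1.5 * relu(g_hat - gamma[0]) - relu(g_hat - gamma[1]) + relu(-g_hat + gamma[2])
    for k in range(4, n_i + 1):
        out = out + (-1.0) ** (k + 1) * relu(g_hat - gamma[k - 1])
    return out

def count_knots(y, x):
    """Count knots of a sampled piecewise-linear curve by counting sign changes
    of the numerical slope (every knot in these maximal constructions flips the
    sign of the slope)."""
    s = np.sign(np.diff(y) / np.diff(x))
    return int(np.sum(s[:-1] * s[1:] < 0))

m1, n2 = 4, 3
x = np.linspace(-2.0, m1 + 1.0, 200001)  # fine grid covering all new knots
g2 = refine(base_sawtooth(x, m1), n2)
print(count_knots(g2, x), m1 + n2 * (m1 + 1))  # both 19, matching (26)
```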
With these lemmas in place, the tightness of the upper bound (Theorem 2) can now be proven.
Proof of Theorem 2. For i = 1, . . . , l − 1, the inductive and constructive proof is given quite simply by the combination of Lemmas 2-4. In the base case, Lemma 2 shows that (1) is tight for l = 1. Next, Lemma 3 shows that the affine transformation of the first hidden layer outputs-whether it is for the output of a single-hidden-layer neural network, or for a second hidden layer in a deep network-can be made into a sawtooth wave. In light of Lemma 2, this sawtooth wave can be constructed with the maximal m 1 = n 1 knots. Next, the induction step is given by Lemma 4. Namely, suppose that the affine transformation of the layer i − 1 outputs is a sawtooth wave with the maximal number of knots m i−1 . Then, it is possible to construct a sawtooth wave out of an affine transformation of the layer i outputs, such that the wave also has the maximal number of knots m i = m i−1 +n i (m i−1 +1). This induction step can be carried out sequentially from the second hidden layer i = 2 all the way to the penultimate hidden layer i = l − 1.
Finally, we note that the final hidden layer i = l deserves special treatment because the output layer does not contain any rectified linear units. As a direct result, it is not actually necessary for the final hidden layer to output a sawtooth wave. Section 5 will later demonstrate this idea in an example. Instead, it is sufficient to have two neurons in the final hidden layer and still maintain the induction relation (13a). By referring back to Figure 5(b), we remind that two neurons can preserve all m i−1 knots from the penultimate layer, while each uniquely introducing m i−1 + 1 new knots with the application of the rectified linear unit.
In the constructive proofs of Lemmas 3 and 4, it is apparent that special consideration has been given to the third neuron in the respective series. This is also evident in Figure 6(b), which shows that the sawtooth wave in the affine transformation of the first hidden layer outputs can be constructed from all forward-facing rectified linear units, except for the third unit which faces backwards. To construct a sawtooth wave, it is in fact necessary to reverse the orientation of neuron j for some j ≥ 3. Since a maximally high-wavenumber wave must be input into every rectified linear unit to meet the upper bound, an additional result is the following corollary, which is essentially the inverse of Theorem 2. We remark that the conditions of this corollary may not be seen in practice, but we nevertheless state this result for completeness.

Corollary 1. For deep neural networks with l ≥ 2 layers, the upper bound in (1) is not tight if n_i < 3 for any i = 1, . . . , l − 1, or if n_l = 1.

Proof. For the upper bound to be met with l ≥ 2, the affine transformations of the outputs of hidden layers i = 1, . . . , l − 1 must have alternating slopes, i.e., alternating between positive and negative, through all linear pieces. Only then can each rectified linear unit in layer i + 1 create the maximal m_i + 1 unique knots. This condition can be analyzed separately for i = 1 and i > 1.
For i = 1, the individual rectified linear units of the first hidden layer must be linearly combined to construct a sawtooth wave; see Lemma 3. Such an arrangement is not possible in the (rather unorthodox) case of n 1 = 1 or 2. The case where n 1 = 1 is trivial: the function σ(w 11 x + b 11 ) clearly cannot have both a negative and a positive slope for a given choice of w 11 and b 11 . The case where n 1 = 2 is slightly less obvious. Suppose, without loss of generality, that we wish to construct a linear combination
g 2 (x) = 2 j=1 w 2j σ(w 1j x + b 1j )(35)
of two neural outputs in the first hidden layer that slopes down, then up, and finally down again. The left and right extremes of this shape require that one neuron be oriented toward quadrant II and the second neuron be oriented toward quadrant IV. There does not exist a way to sum these two rectified linear units and obtain the positive slope in the middle segment of the linear combination. Therefore, the upper bound (1) cannot be achieved if n_1 < 3.
For i = 2, . . . , l − 1, hidden layer i must be able to transform a sawtooth wave with m i−1 knots into another sawtooth wave with m i−1 + n i (m i−1 + 1) knots. Consider a single line segment in the linear combination of layer i − 1 outputs. Using the notation of Lemma 4, if the output of this segment has a minimum g i = g min and maximum g i = g max , then we require some choice of w ij , w i+1,j and b ij such that the derivative of
g i+1 = n i j=1 w i+1,j σ(w ij g i + b ij )(36)
with respect to g i contains n i sign changes as g i increases from g min to the next instance of g max . Using the same argument as the previous paragraph for i = 1, but using the input g i in place of x, such an arrangement is impossible if n i = 1 or 2. Finally, we make the observation that in the unusual case that n l = 1, it is impossible for that single final-hidden-layer neuron both to preserve all m l−1 knots from the penultimate layer, while also introducing m l−1 + 1 knots. If m l−1 + 1 knots were introduced by drawing a bias through the sawtooth wave from layer l − 1, then half of the m l−1 knots (rounded up or down, if m l−1 is odd) from the previous layer would be discarded. Alternatively, if the single neuron preserved all m l−1 knots from the previous layer, then it would not be able to create new knots, as required by the upper bound.
Example construction of tight upper bound
In this section, we demonstrate a construction of an R → R p neural network with a number of knots exactly equal to the upper bound. For the sake of keeping the neural network size manageable, we intentionally use a small number of neurons. We choose to have l = 3 hidden layers, with n 1 = 6 neurons in the first layer, n 2 = 3 neurons in the second layer, and n 3 = 2 neurons in the third layer. We will employ p = 2 in this example, though as Section 3 shows, the output dimension is actually irrelevant to the number of knots in the neural network.
Using these values in (1), we find that the upper bound on the number of knots is m 1 = 6 in the first layer outputs, m 2 = 27 in the second layer outputs, and m 3 = 83 in the third layer and final outputs. Since n 1 , n 2 , and n 3 satisfy the criteria in Theorem 2, these bounds are tight, and we can use the constructions in Section 4 to define a neural network with these numbers of knots.
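For reference, a one-line recursion (our own sketch) reproduces these counts from the bound (1)/(26) with base case m_1 = n_1.

```python
def max_knots(widths):
    """Upper bound (1) on the knot count of an R -> R^p network with hidden
    widths n_1, ..., n_l, via m_i = m_{i-1} + n_i (m_{i-1} + 1), starting from m_0 = 0."""
    m = 0
    for n in widths:
        m = m + n * (m + 1)
    return m

print([max_knots([6, 3, 2][:i]) for i in (1, 2, 3)])  # [6, 27, 83]
```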
The example neural network is given by the equations
v_1k = σ(w_1k x + b_1k),  k = 1, . . . , n_1,    (37a)
v_2k = σ( Σ_{j=1}^{n_1} w_2kj v_1j + b_2k ),  k = 1, . . . , n_2,    (37b)
v_3k = σ( Σ_{j=1}^{n_2} w_3kj v_2j + b_3k ),  k = 1, . . . , n_3,    (37c)
y_k = Σ_{j=1}^{n_3} w_4kj v_3j + b_4k,  k = 1, . . . , p,    (37d)
where

w_1k = −1 if k = 3, and 1 otherwise,    (38a)
b_1k = k − 1 if k = 3, and −k + 1 otherwise,    (38b)

for k = 1, . . . , n_1, and the remaining weights and biases w_2kj, b_2k (39) for k = 1, . . . , n_2, w_3kj, b_3k (40) for k = 1, . . . , n_3, and w_4kj, b_4k (41) for k = 1, . . . , p are chosen as described below. The hidden layer and model outputs for this example are shown in Figure 8.

The interpretation of the above weights and biases proceeds as follows. In the first hidden layer, w_1k and b_1k (38), as well as the dependence of w_2kj (39a) on j, are copied directly from the construction for a sawtooth wave (19) in Lemma 3. Thus, they create knots at x = 0, . . . , n_1 − 1, and the rectified linear units are oriented as in Figure 6(b). The factor of 2 in (39a) is added for convenience to make the sawtooth span a range of 1 instead of 1/2. The sawtooth wave g_2(x) = Σ_{j=1}^{n_1} w_{21j} v_{1j}(x) that is used in neuron k = 1 of layer i = 2 is shown in Figure 8(a). From this figure, we observe that the range of the sawtooth wave, excluding the end parts with g_2 → ±∞, is [4, 5]. Following (34), we flip the signs of w_2kj and b_2k (39) for k = 3. Furthermore, we set b_2k according to (34b), so that each neuron offsets g_2 by the proper amount to construct m_1 + 1 unique knots, which can then be rearranged into a new sawtooth wave. In addition, we set the dependence of w_3kj (40a) on j to match the construction in (28). As shown in Figure 8(b), this choice of parameters produces the sawtooth wave g_3(x) = Σ_{j=1}^{n_2} w_{31j} v_{2j}(x) that is used in neuron k = 1 of layer i = 3. We may observe from this figure that this second layer output retains all the knots from the first layer output (Figure 8(a)), and it also creates the maximal n_2 knots between all the knots of the first layer output, as well as in (−∞, 0) and (n_1 − 1, ∞).

Moving forward, the construction of the third hidden layer in this example proceeds differently. As stated in Theorem 2, the final hidden layer i = 3 only needs to have n_3 = 2 neurons to meet the tight upper bound, since there are no further rectified linear units and the sawtooth waveform is therefore no longer required. By following the strategy shown in Figure 5(c), we pick w_3kj and b_3k (40) to have opposite signs between k = 1 and 2. Furthermore, we note that the sawtooth in Figure 8(b) has a range of [4, 5], so we pick b_3k to be two different values for k = 1 and 2 within the range (−5, −4). That way, as shown in Figure 5(b), the k = 1 neuron retains the upper knots of Figure 8(b), while the k = 2 neuron retains the lower ones. Furthermore, each of the two neurons produces one new knot in each of the m_2 + 1 regions of R divided by the knots of Figure 8(b). The factor of seven in (40a) is arbitrary.
Finally, the choice of the output weights w 4kj and biases b 4k (41) is also arbitrary, since the output layer does not contain rectified linear units and cannot destroy or create knots. The sawtooth wave
g 4 (x) = n 3 j=1 w 41j v 3j (x)(44)
that makes up the output y 1 is shown in Figure 8(c). The neural network outputs (37d), with the maximal m 3 = 83 knots, are shown in Figure 8(d).
Conclusion
We have shown that deep, fully-connected, R → R^p neural networks with rectified linear unit activations are essentially linear splines. In Theorem 1, we derived an upper bound on the number of knots that such neural networks can have. The upper bound is given exactly by (1); to a close approximation, this bound is n_1 · · · n_l. We then showed in Theorem 2 that the upper bound is tight for the neural network widths that would be encountered in practice. An example of a deep neural network exactly meeting this upper bound was described in Section 5. It is clear from the setup of the upper bound that the imposed conditions are prohibitively restrictive. Most notably, it is common in practical applications to construct R^q → R^p neural networks where q may be on the order of 10^3 or even larger. As aforementioned, previous works have computed approximate or asymptotic bounds on the number of linear pieces in R^q → R^p neural networks (Raghu et al. 2016). Nevertheless, an exact upper bound, let alone a tight one, remains to be derived in this generic case.
In addition, there is little reason to believe that neural networks used in actual applications would contain a number of knots equal to or close to the upper bound presented here. The construction of the upper bound required that a sawtooth wave be constructed at every hidden layer except the final one. It is unlikely that such maximally high-wavenumber networks would be fitted to actual data, and the likelihood is even lower for large input dimensions q commonly used in practice.
Thus, the results of this paper can be interpreted as a theoretical "brick-wall" limit on neural network expressivity, which may be used as a guideline or check in designing actual neural networks. Two companion papers present more realistic scenarios. In the first (Chen et al. 2016a), we explore the number of knots in randomly weighted and biased neural networks. In the second (Chen et al. 2016b), we describe empirical results on the behavior of neural network training. Both of these scenarios are more representative of actual situations seen in practice. Not only is a random neural network more likely to represent an "average case" neural network rather than a "best case," but also-as demonstrated in Chen et al. (2016a)-random neural networks are actually encountered in the early stages of training on data. In Chen et al. (2016a), we also describe open problems related to the expressivity of neural networks in greater detail. These papers are still largely analytical in nature, since the chief objective of our investigation is to close the gap between our understanding of neural network theory and applications.
Figure 2: The deep neural network, with each neuron shown in red.
Figure 3: Blue: an example of the affine transformation w_ik · v_{i-1}(x) + b_ik in neuron k of layer i. The knots (filled dots) originate from the various scalar elements of v_{i-1}(x).
Figure 4: An R^2 → R neural network model.
Figure 5: Schematics for preserving and creating knots in neuron k of layer i. (a) All knots
Figure 6: (a) The affine transformation (18) of first-hidden-layer outputs using the parameters in (19) with n = 8 and b_2k = −9/4. (b) The rectified linear unit summands in (a), with each summand in a different non-gray color (see (18)), and the bias b_2k in gray.
Figure 7: The wave (28) with n_i = 7.
Figure 8: The neural network given by (37)-(41), as an example of a model that meets the upper bound (1) on the number of knots. The sawtooth waves Σ_{j=1}^{n_i} w_{i+1,1,j} v_ij are constructed by linearly combining the outputs of hidden layer (a) i = 1, with six knots; (b) i = 2, with 27 knots; and (c) i = 3, with 83 knots. Knots retained from the previous layer are shown in blue, and knots created in the current layer are shown in red. (d) The outputs y_1 (magenta) and y_2 (green), with 83 knots.
[Network schematic: x → affine trans. → σ → · · · → σ → affine trans., i.e., the model alternates affine transformations with rectified linear units.]
Acknowledgments

Alden Walker is gratefully acknowledged for providing Figure 4 and for helpful conversations. Discussions with Anthony Gamst were also very fruitful, and led to the central ideas of the work presented in Chen et al. (2016a,b).
References

K. K. Chen, A. C. Gamst, and A. K. Walker. Knots in random neural networks. In Neural Information Processing Systems (NIPS), Workshop on Bayesian Deep Learning, Barcelona, Spain, 2016a. To be presented.
K. K. Chen, A. C. Gamst, and A. K. Walker. The empirical size and risk of trained neural networks, 2016b. arXiv:1611.09444.
G. Cybenko. Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst., 2(4):303-314, 1989.
K. Hornik. Approximation capabilities of multilayer feedforward networks. Neural Netw., 4(2):251-257, 1991.
K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Netw., 2:359-366, 1989.
S. W. Knox. Machine learning: Topics and techniques, Edition 2.2, 2016.
H. W. Lin and M. Tegmark. Why does deep and cheap learning work so well?, 2016. arXiv:1608.08225v1.
G. Montúfar, R. Pascanu, K. Cho, and Y. Bengio. On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems, pages 2924-2932, 2014.
R. Pascanu, G. Montúfar, and Y. Bengio. On the number of response regions of deep feedforward networks with piecewise linear activations, 2014. arXiv:1312.6098v5.
M. Raghu, B. Poole, J. Kleinberg, S. Ganguli, and J. Sohl-Dickstein. On the expressive power of deep neural networks, 2016. arXiv:1606.05336v2.
S. Sonoda and N. Murata. Neural network with unbounded activation functions is universal approximator. Appl. Comput. Harmon. Anal., 2015. In press.
| []
|
[
"First-principles electronic structure investigation of HgBa ! Ca \"#$ Cu \" O !\"%!%& with the SCAN density functional",
"First-principles electronic structure investigation of HgBa ! Ca \"#$ Cu \" O !\"%!%& with the SCAN density functional"
]
| [
"Alpin N Tatan \nDepartment of Physics\nGraduate School of Science\nThe University of Tokyo\n113-0033TokyoJapan\n\nInstitute for Solid State Physics\nThe University of Tokyo\n277-8581KashiwaChibaJapan\n",
"Jun Haruyama \nInstitute for Solid State Physics\nThe University of Tokyo\n277-8581KashiwaChibaJapan\n",
"Osamu Sugino \nDepartment of Physics\nGraduate School of Science\nThe University of Tokyo\n113-0033TokyoJapan\n\nInstitute for Solid State Physics\nThe University of Tokyo\n277-8581KashiwaChibaJapan\n"
]
| [
"Department of Physics\nGraduate School of Science\nThe University of Tokyo\n113-0033TokyoJapan",
"Institute for Solid State Physics\nThe University of Tokyo\n277-8581KashiwaChibaJapan",
"Institute for Solid State Physics\nThe University of Tokyo\n277-8581KashiwaChibaJapan",
"Department of Physics\nGraduate School of Science\nThe University of Tokyo\n113-0033TokyoJapan",
"Institute for Solid State Physics\nThe University of Tokyo\n277-8581KashiwaChibaJapan"
]
| []
| We perform first-principles calculations to study the electronic structure of HgBa2Ca(n-1)CunO(2n+2+δ) copper oxides up to n = 6 for the undoped parent compound (δ = 0) and up to n = 3 for the doped compound (δ > 0) by means of the SCAN meta-GGA density functional. Our calculations predict an antiferromagnetic insulator ground state for the parent compounds, with an energy gap that decreases with the number of CuO2 planes. We report structural, electronic, and magnetic order evolution in agreement with experiments. We find an enhanced density of states at the Fermi level at δ ≈ 0.25 for the single-layered compound, manifesting in a peak of the Sommerfeld parameter, which recently has been discussed as a possible signature of quantum criticality generic to all cuprates. | 10.1063/5.0098554 | [
"https://arxiv.org/pdf/2204.05719v3.pdf"
]
| 248,512,561 | 2204.05719 | 946fc8ee907d1e9c066cd51f7b30d73487e041ad |
First-principles electronic structure investigation of HgBa2Ca(n-1)CunO(2n+2+δ) with the SCAN density functional
Alpin N Tatan
Department of Physics
Graduate School of Science
The University of Tokyo
113-0033TokyoJapan
Institute for Solid State Physics
The University of Tokyo
277-8581KashiwaChibaJapan
Jun Haruyama
Institute for Solid State Physics
The University of Tokyo
277-8581KashiwaChibaJapan
Osamu Sugino
Department of Physics
Graduate School of Science
The University of Tokyo
113-0033TokyoJapan
Institute for Solid State Physics
The University of Tokyo
277-8581KashiwaChibaJapan
First-principles electronic structure investigation of HgBa2Ca(n-1)CunO(2n+2+δ) with the SCAN density functional
(Dated: May 4, 2022)
We perform first-principles calculations to study the electronic structure of HgBa2Ca(n-1)CunO(2n+2+δ) copper oxides up to n = 6 for the undoped parent compound (δ = 0) and up to n = 3 for the doped compound (δ > 0) by means of the SCAN meta-GGA density functional. Our calculations predict an antiferromagnetic insulator ground state for the parent compounds, with an energy gap that decreases with the number of CuO2 planes. We report structural, electronic, and magnetic order evolution in agreement with experiments. We find an enhanced density of states at the Fermi level at δ ≈ 0.25 for the single-layered compound, manifesting in a peak of the Sommerfeld parameter, which recently has been discussed as a possible signature of quantum criticality generic to all cuprates.
I. INTRODUCTION
The discovery of high temperature superconductivity at around 40 K in La(2-x)BaxCuO4 copper oxides (cuprates) in 1986 has opened a new research frontier in condensed matter physics [1]. New superconductors operating above liquid nitrogen temperature (77 K) were soon found in YBa2Cu3O7 [2]. Multilayered compounds such as Bi2Sr2CaCu2O(8+δ) [3,4] and HgBa2Ca(n-1)CunO(2n+2+δ) [5,6] then earned significant interest from the scientific community [7,8] as adding CuO2 planes could increase the transition temperature Tc beyond 100 K. The trilayer compound HgBa2Ca2Cu3O(8+δ) (Tc = 135 K at ambient pressure [6] and Tc = 164 K at 31 GPa [9,10]) is still the highest-Tc superconductor outside the electron-phonon mechanism of Bardeen-Cooper-Schrieffer (BCS) theory [11].
Besides its Tc value, the Hg-based family (Fig. 1) is preferred for its structural simplicity [12,13]. Its tetragonal structure is relatively free from structural phase transitions unlike, for example, La2CuO4. Its CuO2 planes have minimal buckling, and there are no Cu-O chains in its cell structure unlike in YBa2Cu3O7 [17,18]. When doped, the dopant O atom is found to reside in the center of the Hg layer, i.e., at the (1/2, 1/2, 0) position [19,20]. From a theoretical perspective, Ref. [21] has also affirmed with atomistic simulations that this dopant location is energetically most favorable. The disorder effects on the CuO2 plane from the dopant are minimized by the thick Ba layer between the Hg and CuO2 planes (see Fig. 1) [12,22]. Despite the difficulty in obtaining large single crystals with well-defined composition [23], these materials serve as ideal benchmarks for assessing theoretical models.
Unlike other prototypical cuprates such as La2CuO4, there is no experimental data for the electronic band gap and other electronic structure features of the parent compound HgBa2Ca(n-1)CunO(2n+2), due to the difficulty of synthesizing high-purity samples. Thus, its description is both a challenge and an opportunity for first-principles calculation techniques. Early studies based on local-density or generalized gradient approximations (LDA/GGA) to density functional theory (DFT) were insufficient because an incorrect nonmagnetic metallic ground state was predicted instead of the antiferromagnetic Mott insulating ground state [24-27]. The more sophisticated hybrid exchange-correlation functionals that include a fraction of the nonlocal Fock exchange could provide a reasonable description of the ground state. For example, the HSE06 functional [28] opens a finite insulating gap and predicts a magnetic exchange coupling (J) value that is comparable to experiment in the antiferromagnetic ground state of La2CuO4 [29]. However, the hybrid methodology fails to describe the insulator-to-metal transition upon doping for lanthanum cuprates [30,31], as the energy gap persists even in the doped state. A suitable exchange-correlation functional that can properly describe both the undoped and doped phases of cuprates remains an important research focus to this day.
The multilayered parent compounds (HgBa2Ca(n-1)CunO(2n+2)) have been studied using hybrid functionals for n = 1, 2, 3 (hereafter, we refer to these as Hg-12(n − 1)n, e.g., Hg-1201, Hg-1212, and Hg-1223 for n = 1, 2, 3, respectively) [27]. The obtained magnetic solutions concurred with other cuprates, e.g., La2CuO4. The magnetic exchange coupling was slightly overestimated compared to configuration interaction calculations with cluster model (CICM) [32,33]. The density of states (DOS) calculations concluded that unlike the monolayer Hg-1201, which is an insulator with a band gap slightly above 1 eV, the multilayered Hg-1212 and Hg-1223 compounds are metallic as the Hg-O conduction states lower in energy and the Cu-O conduction states remain fixed, thereby closing the band gap. As this metallicity is not due to Cu states, this would explain the persisting antiferromagnetism in metallic Hg-1212/Hg-1223.
While the presence of low-lying Hg-O states in the conduction band has been suggested since the early calculations of HgBa2Ca(n-1)CunO(2n+2) [34,35], the extent of the mercury roles in the multilayered compounds remains unclear. There has neither been experimental evidence of the metallicity due to low-lying Hg-O states nor further studies with hybrid functionals reported on these materials. This stagnancy may be attributed to the difficulties in synthesizing pure samples and the prohibitive cost of the hybrid methodology (approximately hundreds of times more expensive [36,37] than using semilocal functionals). Thanks to the wider availability of high-performance computing resources, more extensive calculations have become possible in the past decade. Hence, we would like to investigate this material more comprehensively with the computational resources available at our disposal (see Acknowledgments).
Over the years, there were questions on whether band gaps predicted by density-functional calculations should be compared with experimentally derived energy gaps. These doubts are expected to fade since modern computational packages such as VASP [38,39] have implemented their calculations in the generalized Kohn-Sham (gKS) scheme. Ref. [40] has shown that the gKS band gap is equal to the fundamental band gap in the solid, which is defined as the ground-state energy difference between systems with different numbers of electrons. This provides a solid basis for comparing band gaps of gKS formalism with the experimentally observed band gaps and improvement in the prediction of energies and structures brought by a functional would also indicate improved gKS band gaps [31,[41][42][43].
The band gap prediction of La2CuO4 with the hybrid methodology [29] served as one of the supporting bases for calculating HgBa2Ca(n-1)CunO(2n+2) in Ref. [27]. Citing the 2 eV optical absorption peak in Ref. [44], the computed La2CuO4 band gap of 2.5 eV by the HSE06 hybrid functional was considered to concur with experiment. However, Ref. [45] also reported a smaller band gap of 0.89 eV obtained via Hall transport measurements. It has since been argued [31,42,45,46] that one should estimate the band gap not from the lowest energy absorption peak, but from the leading-edge gap in the optical spectra. This implies that the computed band gaps by density functional theory should be compared to an experimental value around 1 eV from Ref. [44]. From a comparative study [31], it is apparent that the hybrid functionals overestimate the band gap of La2CuO4 and that another functional of the meta-GGA class is better suited for this purpose, which we shall explain in the following paragraph.
The recently devised strongly-constrained-and-appropriately-normed (SCAN) meta-GGA exchange-correlation functional [37] satisfies all 17 known constraints applicable to a meta-GGA. It has successfully described the properties of pristine and doped La2CuO4 [42,43], YBa2Cu3O7 [46], and Bi2Sr2CaCu2O(8+δ) [47]. In La2CuO4, SCAN obtains the magnetic moment in magnitude and orientation, the magnetic exchange coupling strength J, the magnetic form factor, as well as the electronic band gap that correspond well to experiments. Ref. [31] compared SCAN with 12 other functionals spanning the levels of the Perdew-Schmidt hierarchy [48] in lanthanum cuprates and demonstrated SCAN's superiority in matching experimental results. Although the hybrid functionals' value of the magnetic exchange coupling, J = 187 meV [29], is in reasonable agreement with the experimental value of J = 133 ± 3 meV [49], a much closer prediction of J = 138 meV can be obtained with SCAN [42]. In doped YBa2Cu3O7, the charge, spin, and lattice degrees of freedom are treated equally in a self-consistent manner [46] to yield stable stripe phases without invoking free parameters, which leads to the identification of a landscape of 26 competing uniform and stripe phases. These results indicate that SCAN captures many key features of the cuprates and provides a new prospect for describing the correlated electron properties of these materials. The computational cost of meta-GGA functionals, which is only a few times larger than that of their LDA/GGA predecessors, adds to the viability of more effective studies of cuprates in comparison to the cost-prohibitive hybrid methodology or beyond-DFT techniques.
In this work, we present a SCAN density-functional description of the electronic structure of the HgBa2Ca(n-1)CunO(2n+2) series. Using relaxed structures that are in good agreement with experiment, we show that these compounds remain insulating up to n = 6 with a finite but decreasing, indirect band gap. The low-lying Hg-O conduction states are noted, but their dominant proportions against the Cu-O states are not apparent in Hg-1212/Hg-1223, and these Hg-O states are only clearly lower in energy at n = 6. In addition, we also investigate the doped phase via supercell constructions of Hg-1201, Hg-1212, and Hg-1223 for several representative excess oxygen levels δ. We report that SCAN improves an earlier description of the doped phase with semilocal functionals [50] in capturing the lattice contraction and the magnetic moment as a function of δ. We confirm the expected narrow DOS peak due to additional states contributed by the dopant O atom at low doping levels and extract an optimum doping at which this feature is located at the Fermi level E_F. Finally, we compute the normal-state, zero-temperature Sommerfeld parameter γ of the electronic specific heat from the DOS at E_F and observe a peak across doping levels, supporting an experimental prediction of it being a universal property among cuprates which could be a signature of quantum criticality [51][52][53].
II. COMPUTATIONAL DETAILS
In general, we followed the computational parameters of the preceding SCAN studies of other cuprates [42,43,46,47]. Ab initio calculations were carried out by using the pseudopotential projector augmented-wave (PAW) method [54] implemented in the Vienna ab initio simulation package (VASP [38,39]) with an energy cutoff of 550 eV for the plane-wave basis set. Exchange-correlation effects were treated using the SCAN meta-GGA scheme [37]. The crystal structures were relaxed using a quasi-Newton (RMM-DIIS) algorithm to minimize the energy with an atomic force tolerance of 0.008 eV/Å and a correspondingly tight total-energy tolerance. The costlier conjugate gradient algorithm was also utilized in a few cases when the aforementioned algorithm encountered convergence problems. The relaxation procedure utilized an 8 × 8 × 4 gamma-centered k-point mesh to sample the Brillouin zone. Denser meshes (20 × 20 × 4 or more) were used to calculate the DOS with the tetrahedron method with Blöchl corrections. The band structures are drawn along the path Γ − X − M − Γ − Z − R − A − Z in the antiferromagnetic (√2 × √2 × 1) cell of the first Brillouin zone. The DOS and band-structure plots are made with the PyProCar [55] and Sumo [56] packages.

For investigating the doping-dependent electronic structure, we used a series of supercells containing excess oxygen atoms. We used the same calculation parameters for computing both undoped and doped compounds, except for the additional dopant atoms. Oxygen concentrations δ of 0.125, 0.25, and 0.5, with cell sizes up to an eightfold single cell, are considered for the compounds Hg-1201, Hg-1212, and Hg-1223. The corresponding cells used for all compounds are illustrated in the top row of Fig. 2. The bottom row of Fig. 2 includes supercells used to compute the δ = 0.375 case for Hg-1212 and Hg-1223. An alternative dopant placement is also tested for the δ = 0.5 case in Hg-1201 (the bottom right panel of Fig. 2), for which we note only minor differences in results that do not change the conclusions of this study. For the larger supercells of the multilayered compounds containing 8 formula units and more, the DOS plots used a smaller 10 × 10 × 4 k-point mesh due to the large memory requirements. This change does not affect the resulting DOS plots given the smaller size of the first Brillouin zone for larger supercells. The doping effect on the lattice parameters, atomic positions, and magnetic order was investigated by total-energy and atomic-force calculations. Initial antiferromagnetic order was assumed in our structure, which allowed us to study the interplay between doping levels and the strength of magnetic moments. In this regard, our calculation is an extension of Refs. [50,57], in which the calculations were performed on nonmagnetic, non-relaxed supercells of Hg-1201 with the local-density approximation.
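Doped supercells of this kind can be generated with standard structure tools. The following is a minimal sketch using ASE (our own illustration, not part of the authors' workflow); the input and output file names, the 2 × 2 × 1 repetition, and the assumption that the Hg plane sits at z = 0 in the input structure are ours.

```python
import numpy as np
from ase import Atom
from ase.io import read, write

# Hypothetical structure file containing the relaxed Hg-1201 unit cell.
unit_cell = read("Hg1201.cif")

# A 2 x 2 x 1 repetition holds four formula units, so one extra O gives delta = 0.25.
supercell = unit_cell.repeat((2, 2, 1))

# Dopant O at the (1/2, 1/2, 0) position of the original cell, i.e. at fractional
# (1/4, 1/4, 0) of the 2 x 2 x 1 supercell, in the middle of the Hg layer
# (this assumes the Hg plane lies at z = 0 in the input file).
frac = np.array([0.25, 0.25, 0.0])
cart = frac @ np.array(supercell.get_cell())
supercell.append(Atom("O", position=cart))

# Write a VASP-format structure file for the subsequent SCAN calculation.
write("POSCAR_Hg1201_delta0.25", supercell, format="vasp", direct=True)
```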
Using the results of our DOS calculations, we compute a thermodynamic quantity that can be compared with experiments. The Sommerfeld parameter γ of the electronic specific heat is defined as

γ ≡ C_el(T → 0)/T,    (1)
γ = (2π²/3) k_B² N(E_F),    (2)

where the DOS at the Fermi energy, N(E_F), is defined per atom and per one spin direction [17]. This quantity in the normal state extrapolated to zero temperature can be extracted from N(E_F) of a computational cell containing X formula units [17]:

γ_n (mJ/K² · mol) ≈ 2.36 · (2/X) · N(E_F) (states/eV · cell),    (3)
which allows us to estimate γ directly from the DOS.
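As an illustration of Eq. (3) (our own sketch, not code from the paper), the 2.36 prefactor follows directly from (π²/3) k_B² N_A expressed in mJ/(K² mol) per state/(eV · formula unit) counting both spins:

```python
import math

# Physical constants (CODATA values).
KB_J = 1.380649e-23      # Boltzmann constant, J/K
EV_J = 1.602176634e-19   # 1 eV in J
N_A = 6.02214076e23      # Avogadro's number, 1/mol

def sommerfeld_gamma(dos_ef_per_spin, formula_units):
    """Eq. (2)-(3): gamma = (2 pi^2 / 3) k_B^2 N(E_F), with N(E_F) given per one
    spin direction and per computational cell of `formula_units` formula units.
    Returns gamma in mJ / (K^2 mol of formula units)."""
    n_ef_joule = 2.0 * dos_ef_per_spin / formula_units / EV_J   # both spins, per f.u., per J
    gamma_si = (math.pi ** 2 / 3.0) * KB_J ** 2 * n_ef_joule * N_A  # J / (K^2 mol)
    return gamma_si * 1e3                                           # mJ / (K^2 mol)

# With a both-spin DOS of 1 state/(eV f.u.), the result is the prefactor of Eq. (3).
print(round(sommerfeld_gamma(0.5, 1), 2))   # ~2.36 mJ / (K^2 mol)
```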
III. RESULTS AND DISCUSSION
A. Parent compounds
We plot the in-plane lattice parameter a of the HgBa2Ca(n-1)CunO(2n+2) structures relaxed with SCAN in Fig. 3. This parameter decreases with the number of copper-oxide planes, in agreement with LDA results [58] and the trend observed in experiments for doped structures.
SCAN predicts values that are closer to experiments than LDA. Moreover, SCAN provides slightly larger parameters for the parent compounds compared to experimental values for doped samples. The LDA results, however, are on the smaller side, reflecting its well-known overbinding issue. Our SCAN results are hence an improvement, as they agree with the doping-induced lattice contraction observed in experiments. A similar observation can be made for the out-of-plane lattice parameter c (see Fig. S1 and Table S1 of the Supplemental Material for the full set of a and c values).

We tabulate the magnetic moments in the antiferromagnetic ground state of HgBa2Ca(n-1)CunO(2n+2) in Table I. The multilayered compounds have similar magnetic moments of about 0.47 μB, with a slight difference between the outer (OP) and inner (IP) planes for n ≥ 3. These values are comparable with the values predicted for other cuprates by SCAN [42,43,47]. The single-layered Hg-1201 compound's magnetic moment agrees with the 0.4 μB value predicted from variational Monte Carlo [65], and the slightly smaller moments for the IP concur with the hybrid functional picture in Ref. [27].
Moving to the electronic structure results, the band gaps of HgBa2Ca(n-1)CunO(2n+2) are shown in Fig. 4. In contrast to the metallic multilayered structure predicted by hybrid functionals in Ref. [27], SCAN predicts a decreasing but finite band gap from n = 1 to n = 6. In addition, our prediction for the monolayer Hg-1201 band gap, E_g(SCAN) ≈ 0.6 eV, is comparable to the variational Monte Carlo result E_g(VMC) ≈ 0.7 eV [65], while the HSE06 and B3LYP hybrid functionals in Ref. [27] yielded much larger band gaps of 1.1 and 1.5 eV, respectively. The band structure and DOS plots are shown in Fig. 5 and Fig. 6. These suggest an insulating ground state with indirect band gaps between two different high-symmetry points. The valence bands are dominated by Cu and O contributions, as generally expected for cuprates [17]. Starting from n = 3, the equivalence between copper-oxide planes is broken (Fig. 6(a)). There is magnetic inhomogeneity between the OP and IP, with the latter having higher energy states. The low-lying mercury conduction states are not apparently dominant until n = 6 in the SCAN description (Fig. 6(b)). Even at n = 6, the structure remains insulating with a finite band gap of E_g ≈ 0.2 eV. In the SCAN picture, the diminishing band gap is a gradual process, with the Cu, Hg, and O conduction states collectively getting closer to the valence states. This is in contrast with the hybrid picture (Ref. [27]), where the insulator-to-metal transition is immediate and is solely facilitated by the Hg states. The full set of DOS plots for the parent compounds is provided in Fig. S2 of the Supplemental Material.
B. Doped compounds
We show the lattice parameters of Hg-1201, Hg-1212, and Hg-1223 relaxed with SCAN alongside their experimental values in Fig. 7. Due to the limited number of experiments, we collect values over a short range of excess oxygen levels δ for each compound from multiple sources. The LDA results [50] for Hg-1201 are included for comparison. Doping-induced lattice contraction is observed in all cases. At the low doping levels (δ ≤ 0.125), there is good agreement between SCAN (closed circles) and experiments (open circles) for Hg-1201. In comparison to LDA [50], the discrepancy between theory and experiments is improved with SCAN. On the other hand, the experimental lattice contraction is smaller than our results for the multilayered compounds. Ref. [67] noted that their synthesized lattice parameters are very close to the intrinsic size of the CuO2 plane of 3.855 Å in the infinite-layered compound CaCuO2 [68], such that it is difficult to further reduce the lattice size during their synthesis, which may explain the small contraction. Our supercell calculations are not subject to these experimental complexities, and thus by varying solely the oxygen concentration we find that this translates into a bigger lattice contraction. We also note that these quantitative discrepancies in the doped lattice parameters are not unexpected, because we perform zero-temperature, normal-state density-functional calculations while Ref. [67] measured their samples at finite temperatures in the superconducting phase. Nevertheless, we believe that capturing qualitatively the doping-induced lattice contraction observed in experiments is still good progress for theoretical simulations.
On the other hand, there will be no induced peak of ( -) when the dopant state is highly delocalized with a low magnitude (Fig. 9, bottom). We include similar plots for Hg-1212, and Hg-1223 in Fig. S3 of Supplemental Material.
We briefly discuss the nature of the DOS by segregating atomic contributions in Hg-1201 for = 0.125 and = 0.25 in Fig. S4 and Fig. S5 of Supplemental Material, respectively. We note a strong similarity from the contributions of dopant O atom with the atoms in its vicinity (Hg, Ba, and apical O atoms) in both Fig. S4 and Fig. S5. At = 0.25, there are also small peaks atfor contributions from Cu and planar O, which suggest that the dopant O state delocalizes and interacts with further atoms as the doping level increases. In addition, we remark that these apparent interactions between the dopant O state with other atomic states reported here have also been further elaborated in previous literatures [50,[70][71][72]. Our observation in the previous paragraph allows us to define an optimum excess oxygen concentration , with regards to the total DOS at -. For Hg-1201, we can specify , ≈ 0.25 based on the DOS peak observed in Fig. 9. Our few supercell calculations are insufficient to pinpoint the corresponding value for multilayered compounds. However, we can estimate based on the DOS of the sampled concentrations that 0.25 < , < 0.375 for Hg-1212 and 0.375 < , < 0.5 for Hg-1223, forming an increasing trend with the number of copper-oxide layers. This suggests that , could be directly related to the amount of hole doping p effectively introduced to each layer. Indeed, this was also the conclusion arrived in Ref. [50] where their optimum value ;<7 = 0.22 is attributed to the point where the number of induced holes on the CuO ! plane saturates. The subsequent question is pertaining how we should interpret the physical role of , . For example, it is interesting to relate these values with the optimum concentrations that yield the highest superconducting transition temperature, recorded in Refs. [73][74][75] for these three compounds (Table II). The , for Hg-1201 from DFT calculations obtained in our work and Ref. [50] agree with these records within the experimental uncertainties. Meanwhile, we note that the estimation of optimum doping for = in cuprates is not unequivocal among experiments. In contrast to Refs. [73][74][75], Refs. [66,67] predicted a different set of values of that yield maximum = for these three compounds. This discrepancy may be caused by different experimental techniques used (iodometric titration vs thermogravimetric analysis), which we as theorists claim no expertise of, and yet we may note that these conflicting reports exemplify the long-standing question [35,70] on whether the induced hole concentration in the copper-oxide plane deviates from a simple ionic picture whereby two holes are donated for every oxygen dopant . The results from Refs. [66,67] support the simple ionic picture that gives hole concentration ≈ 2 , while Refs. [73][74][75] suggests a smaller dependence ≈ 0.72 . Although there is no experimental consensus yet [50], earlier density-functional calculations in Refs. [50,70] concur with Refs. [73][74][75]. This is also what we infer from our SCAN calculations. In any case, both ionic and non-ionic pictures suggest that the optimum excess oxygen concentrations for the whole compound follow the ascending order of Hg-1201, Hg-1212, and Hg-1223, which agrees with our results. We compute the normal-state, zero-temperature Sommerfeld parameter of the electronic specific heat from the DOS at the Fermi level ( -) for the doped compounds in Fig. 10.
Our values for the small and large doping levels agree well with the experimental results from other cuprates [17,51,52], which lie around 2 − 7 mJ/K 2 · mol. For Hg-1201, we observe a peak feature across the doping concentration at > = 0.25. This feature is of recent interest [51] as it may be a thermodynamic signature of a quantum critical point, indicated by a logarithmic divergence in / ∝ log(1/| − * |) where is some tuning parameter such as the doping concentration. Similar peaks have not been confirmed in our calculations for Hg-1212 and Hg-1223 in Fig. 10 at the concentrations computed in our supercells. Should this feature extend to the multilayered compounds, we expect them to materialize at concentrations 0.25 < > < 0.375 for the bilayer Hg-1212 and 0.375 < > < 0.5 for the trilayer Hg-1223 compounds based on the dopant state energies computed in our supercells. The peaks in have only been experimentally confirmed by direct measurements on lanthanum-cuprate families [51,76], but there are recent observations on the bismuth-and mercury-cuprate samples that suggest that this is a universal property for all cuprates [52,53]. However, the exact nature of this divergence is still under debate. Beside the quantum criticality argument, there is an alternative explanation without invoking broken symmetries from two-dimensional Hubbard model [77] which associates the feature to arise from the finite-temperature critical end point of a first-order transition between a pseudogap phase with dominant singlet correlations and a metal. Our result for Hg-1201 is therefore a positive indicator for density-functional methods to contribute, in the future, a deeper theoretical study to understand the nature of this phenomenon.
Figure 10: The computed zero-temperature, normal-state Sommerfeld parameter γ for the Hg-12(n − 1)n doped compounds. There is a peak confirmed for the single-layered Hg-1201, while no peak was observed for the multilayered compounds at the excess oxygen levels computed in this study.
IV. SUMMARY AND CONCLUSIONS
We have studied both the parent and doped compounds of HgBa2Ca(n-1)CunO(2n+2+δ) with first-principles calculations utilizing the recently devised SCAN meta-GGA density functional. Our results suggest an improvement in the structural characterization of these compounds, exemplified by the success in describing the doping-induced lattice parameter contraction. SCAN's description of the electronic structure of the parent compound is distinct from preceding density-functional studies, as SCAN predicts an antiferromagnetic insulating ground state even in the multilayered compounds. The diminishing band gap is described by SCAN as a gradual and collective process contributed by the Cu, Hg, and O components, as opposed to the immediate process dominantly acted by the Hg states prescribed in previous studies [27]. We find this new physical description refreshing and hope it will encourage further advancement in the experimental techniques to synthesize the elusive parent compounds. Meanwhile, we note that doping these compounds with oxygen results in weaker magnetic moments, signalling an interplay between the hole carrier concentrations in the CuO2 planes and the antiferromagnetic order. SCAN also correctly captures the magnetic inequivalence between copper planes for n ≥ 3 observed in experiments. The DOS of doped compounds at E_F can be significantly enhanced at some optimum excess oxygen concentration, which manifests in the case of Hg-1201 as a peak in the normal-state, zero-temperature Sommerfeld parameter γ of the electronic specific heat. As the nature of this feature is currently of active interest, it is likely that modern first-principles density functional calculations can play an important role in unraveling the mysteries of quantum criticality.
ACKNOWLEDGMENTS
The calculations were performed with the facilities of the Supercomputer Center, the Institute for Solid State Physics, the University of Tokyo, as well as the computational resource of Fujitsu PRIMERGY CX2550M5/CX2560M5(Oakbridge-CX) awarded by "Large-scale HPC Challenge" Project, Information Technology Center, The University of Tokyo.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Supplemental Material:
First-principles electronic structure investigation of HgBa2Ca(n-1)CunO(2n+2+δ) with the SCAN density functional Alpin N. Tatan 1,2, Jun Haruyama 2, and Osamu Sugino 1,2
Figure 1: (a) A schematic of the HgBa2Ca(n-1)CunO(2n+2) structure up to n = 6. The white, green, magenta, blue, and red spheres represent Hg, Ba, Ca, Cu, and O atoms, respectively. (b) The trilayer HgBa2Ca2Cu3O8 compound in its nonmagnetic conventional unit cell and antiferromagnetic √2 × √2 × 1 supercell. The copper-oxide and mercury planes are shown. The different colors of the Cu-O polyhedra represent opposing magnetic moments in the antiferromagnetic case. Figure prepared with VESTA [14] with atomic positions derived from the Materials Project database [15] and Ref. [16].
Figure 2: Basal planes of the doped supercells computed in this study, viewed from the top. The dopant oxygen atoms (black spheres) are placed between the mercury atoms (grey spheres). A unit dotted square represents a unit cell, while solid lines represent supercells for each excess oxygen concentration δ. The top row illustrates the supercells used in all three compounds. The bottom row shows the additional supercells used to compute δ = 0.375 in Hg-1212 and Hg-1223, as well as an alternative supercell for δ = 0.5 tested for Hg-1201.
Figure 3: In-plane lattice parameter a for Hg-12(n − 1)n parent compounds. Triangles and squares denote relaxed parameters obtained with SCAN (this work) and LDA (Ref. [58]). Straight lines are guides to the eye. Black circles are experimental values for doped structures retrieved from Refs. [9,16,59-64].
Figure 4: Band gap of Hg-12(n − 1)n parent compounds predicted by SCAN (blue circles, this work), HSE06 and B3LYP hybrids (green and black circles, Ref. [27]), and variational Monte Carlo (VMC, red circle, Ref. [65]). Straight lines are guides to the eye.
Figure 5: Band structure of Hg-12(n − 1)n parent compounds predicted by SCAN for n = 1 to n = 6, drawn along the path Γ − X − M − Γ − Z − R − A − Z in the first Brillouin zone of the magnetic √2 × √2 × 1 cell. Red circles are guides to the eye for the Cu valence band splitting starting from n = 3 and the low-lying Hg-O conduction bands at n = 6. The two spin orientations (blue and yellow traces) coincide and appear as bicolored dashed lines.
Figure 6: DOS plots near E_F for selected Hg-12(n − 1)n compounds. Total DOS (black lines, no shading) are presented along with the Cu, Hg, and O contributions (shaded areas). (a) The Cu states in the equivalent planes of Hg-1212 are shown in contrast to the outer and inner planes of Hg-1223. The colors represent contributions from different Cu orbitals. (b) Hg, Cu, and O states in Hg-1223 and Hg-1256.
Figure 7: In-plane lattice parameter a for Hg-12(n − 1)n doped compounds. Closed (open) circles, triangles, and squares denote the SCAN-computed (experimental) values for n = 1, 2, 3, respectively. The experimental values (Refs. [9,59,66,67]) and the LDA-computed values for n = 1 from Ref. [50] are included for comparison.
Figure 8: Magnetic moments of Hg-12(n − 1)n doped compounds. The values for the single-layered Hg-1201 and bilayered Hg-1212 compounds are denoted by circles and triangles. The outer and inner planes of the trilayered Hg-1223 compound are shown as closed and open squares, respectively.
Figure 9: Density of states plots for doped HgBa2CuO(4+δ) in arbitrary units. The total DOS and the contribution from the dopant O states are shown as a black trace and a blue shaded area, respectively. The dopant contribution is sharp and narrow at low excess oxygen level δ. Its delocalization in energy may result in a peak at the Fermi level at a suitable δ, which is approximately equal to 0.25 as shown in the middle plot. At high δ, the dopant state fully delocalizes and does not lead to a sharp peak in the total DOS.
Figure S1: Lattice parameter a of the parent compound Hg-12(n − 1)n normalized to the average of experimental values for doped samples from multiple sources [S1-S8]. SCAN's relaxed lattice parameters (blue triangles) for the parent compound are larger than the experimental values (dashed lines), in line with doping-induced lattice contraction. The smaller LDA results from Ref. [S9] (red squares) are included for comparison.
Figure S2: Stacked plots of the total density of states and its projections onto each species of the parent compound Hg-12(n − 1)n.
Figure S3: Total density of states (black traces) and the contributions from the dopant oxygen atom (blue area) for the bilayer Hg-1212 and trilayer Hg-1223 compounds.
Figure S4: Total density of states (black traces) and the contributions from Cu, Hg, Ba (red area) and types of O atoms: dopant, apical, and planar (blue area) for the Hg-1201 compound with δ = 0.125.
Figure S5: Total density of states (black traces) and the contributions from Cu, Hg, Ba (red area) and types of O atoms: dopant, apical, and planar (blue area) for the Hg-1201 compound with δ = 0.25.
Table I: Magnetic moments (in μB) of Hg-12(n − 1)n parent compounds in the antiferromagnetic phase. For n > 2, the magnetic moments for the inner planes are shown in parentheses.

Hg-1201 | Hg-1212 | Hg-1223       | Hg-1234       | Hg-1245       | Hg-1256
0.491   | 0.477   | 0.475 (0.470) | 0.473 (0.469) | 0.472 (0.467) | 0.472 (0.468)
Table II: The optimum compounds predicted by SCAN and experiments based on ionic and non-ionic pictures. For SCAN, we define the "optimum" compound to be the one with a sharp DOS peak at E_F due to the contribution of the dopant O states. The experiments' optimum values refer to the compounds that yield the maximum superconducting transition temperature.

"Optimum" compounds (oxygen content x):
Compound      | LDA (Refs. [50,70]) | SCAN (this work) | Exp. (Ref. [67], ionic picture) | Exp. (Ref. [73], non-ionic picture)
HgBa2CuOx     | x ≈ 4.22            | x ≈ 4.25         | x ≈ 4.09                        | x = 4.18 ± 0.1
HgBa2CaCu2Ox  | no data             | 6.25 < x < 6.375 | x ≈ 6.21                        | x = 6.34 ± 0.12
HgBa2Ca2Cu3Ox | x ≈ 8.5             | 8.375 < x < 8.5  | x ≈ 8.29                        | x = 8.45 ± 0.16
Table S1: Relaxed lattice parameters of HgBa2Ca(n-1)CunO(2n+2+δ) obtained by SCAN density functional calculation.

a (Å):
n \ δ | 0      | 0.125  | 0.25   | 0.375  | 0.5
1     | 3.897  | 3.858  | 3.842  | N/A    | 3.8265
2     | 3.868  | 3.845  | 3.831  | 3.827  | 3.818
3     | 3.859  | 3.842  | 3.831  | 3.824  | 3.822
4     | 3.854  | N/A    | N/A    | N/A    | N/A
5     | 3.851  | N/A    | N/A    | N/A    | N/A
6     | 3.849  | N/A    | N/A    | N/A    | N/A

c (Å):
n \ δ | 0      | 0.125   | 0.25    | 0.375  | 0.5
1     | 9.624  | 9.5954  | 9.5545  | N/A    | 9.6204
2     | 12.839 | 12.793  | 12.739  | 12.762 | 12.782
3     | 16.039 | 15.971  | 15.895  | 15.910 | 15.978
4     | 19.188 | N/A     | N/A     | N/A    | N/A
5     | 22.351 | N/A     | N/A     | N/A    | N/A
6     | 25.511 | N/A     | N/A     | N/A    | N/A
Department of Physics, Graduate School of Science, The University of Tokyo, Tokyo 113-0033, Japan 2 Institute for Solid State Physics, The University of Tokyo, Kashiwa, Chiba 277-8581, Japan (Dated: May 4, 2022)
REFERENCES
[1] J. G. Bednorz and K. A. Müller, Z. Phys. B Condens. Matter 64, 189 (1986).
[2] M. K. Wu, J. R. Ashburn, C. J. Torng, P. H. Hor, R. L. Meng, L. Gao, Z. J. Huang, Y. Q. Wang, and C. W. Chu, Phys. Rev. Lett. 58, 908 (1987).
[3] M. A. Subramanian, C. C. Torardi, J. C. Calabrese, J. Gopalakrishnan, K. J. Morrissey, T. R. Askew, R. B. Flippen, U. Chowdhry, and A. W. Sleight, Science 239, 1015 (1988).
[4] H. Maeda, Y. Tanaka, M. Fukutomi, and T. Asano, Jpn. J. Appl. Phys. 27, L209 (1988).
[5] S. N. Putilin, E. V. Antipov, O. Chmaissem, and M. Marezio, Nature 362, 226 (1993).
[6] A. Schilling, M. Cantoni, J. D. Guo, and H. R. Ott, Nature 363, 56 (1993).
[7] J. M. Tarascon, Y. Lepage, L. H. Greene, B. G. Bagley, P. Barboux, D. M. Hwang, G. W. Hull, W. R. McKinnon, and M. Giroud, Phys. Rev. B 38, 2504 (1988).
[8] A. Iyo, Y. Tanaka, H. Kito, Y. Kodama, P. M. Shirage, D. D. Shivagan, H. Matsuhata, K. Tokiwa, and T. Watanabe, J. Phys. Soc. Jpn. 76, 094711 (2007).
[9] A. R. Armstrong, W. I. David, I. Gameson, P. P. Edwards, J. J. Capponi, P. Bordet, and M. Marezio, Phys. Rev. B 52, 15551 (1995).
[10] L. Gao, Y. Y. Xue, F. Chen, Q. Xiong, R. L. Meng, D. Ramirez, C. W. Chu, J. H. Eggert, and H. K. Mao, Phys. Rev. B 50, 4260 (1994).
[11] J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. 108, 1175 (1957).
[12] N. Barišić, Y. Li, X. Zhao, Y.-C. Cho, G. Chabot-Couture, G. Yu, and M. Greven, Phys. Rev. B 78, 054518 (2008).
[13] I. M. Vishik, N. Barišić, M. K. Chan, Y. Li, D. D. Xia, G. Yu, X. Zhao, W. S. Lee, W. Meevasana, T. P. Devereaux, M. Greven, and Z. X. Shen, Phys. Rev. B 89, 195141 (2014).
[14] K. Momma and F. Izumi, J. Appl. Crystallogr. 44, 1272 (2011).
[15] A. Jain, S. P. Ong, G. Hautier, W. Chen, W. D. Richards, S. Dacek, S. Cholia, D. Gunter, D. Skinner, G. Ceder, and K. A. Persson, APL Materials 1, 011002 (2013).
[16] Q. Huang, O. Chmaissem, J. J. Capponi, C. Chaillout, M. Marezio, J. L. Tholence, and A. Santoro, Physica C 227, 1 (1994).
[17] N. Plakida, High-Temperature Cuprate Superconductors: Experiment, Theory, and Applications, Springer Series in Solid-State Sciences (Springer, Berlin, Heidelberg, 2010).
[18] A. V. Narlikar, Superconductors (Oxford University Press, Oxford, 2014).
[19] J. Wagner, P. Radaelli, D. Hinks, J. Jorgensen, J. Mitchell, B. Dabrowski, G. Knapp, and M. Beno, Physica C 210, 447 (1993).
[20] O. Chmaissem, Q. Huang, S. Putilin, M. Marezio, and A. Santoro, Physica C 212, 259 (1993).
[21] X. Zhang, S. Xu, and C. Ong, Physica C 262, 13 (1996).
[22] H. Eisaki, N. Kaneko, D. L. Feng, A. Damascelli, P. K. Mang, K. M. Shen, Z.-X. Shen, and M. Greven, Phys. Rev. B 69, 064512 (2004).
[23] X. Zhao, G. Yu, Y. C. Cho, G. Chabot-Couture, N. Barišić, P. Bourges, N. Kaneko, Y. Li, L. Lu, E. M. Motoyama, O. P. Vajk, and M. Greven, Adv. Mater. 18, 3243 (2006).
[24] E. Dagotto, Rev. Mod. Phys. 66, 763 (1994).
[25] W. E. Pickett, Rev. Mod. Phys. 61, 433 (1989).
[26] C. Ambrosch-Draxl and K. Schwarz, Solid State Commun. 77, 45 (1991).
[27] I. P. R. Moreira, P. Rivero, and F. Illas, J. Chem. Phys. 134, 074709 (2011).
[28] A. V. Krukau, O. A. Vydrov, A. F. Izmaylov, and G. E. Scuseria, J. Chem. Phys. 125, 224106 (2006).
[29] P. Rivero, I. P. R. Moreira, and F. Illas, Phys. Rev. B 81, 205123 (2010).
[30] J. K. Perry, J. Tahir-Kheli, and W. A. Goddard, Phys. Rev. B 63, 144510 (2001).
[31] K. Pokharel, C. Lane, J. W. Furness, R. Zhang, J. Ning, B. Barbiellini, R. S. Markiewicz, Y. Zhang, A. Bansil, and J. Sun, npj Comput. Mater. 8, 31 (2022).
[32] D. Muñoz, F. Illas, and I. P. R. Moreira, Phys. Rev. Lett. 84, 1579 (2000).
[33] D. Muñoz, I. P. R. Moreira, and F. Illas, Phys. Rev. B 65, 224521 (2002).
[34] D. J. Singh, Phys. Rev. B 48, 3571 (1993).
[35] D. Novikov and A. Freeman, Physica C 212, 233 (1993).
[36] F. Furche and J. P. Perdew, J. Chem. Phys. 124, 44103 (2006).
[37] J. Sun, A. Ruzsinszky, and J. P. Perdew, Phys. Rev. Lett. 115, 36402 (2015).
[38] G. Kresse and J. Hafner, Phys. Rev. B 48, 13115 (1993).
[39] G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996).
[40] J. P. Perdew, W. Yang, K. Burke, Z. Yang, E. K. Gross, M. Scheffler, G. E. Scuseria, T. M. Henderson, I. Y. Zhang, A. Ruzsinszky, H. Peng, J. Sun, E. Trushin, and A. Görling, Proc. Natl. Acad. Sci. U. S. A. 114, 2801 (2017).
[41] Y. Zhang, J. Furness, R. Zhang, Z. Wang, A. Zunger, and J. Sun, Phys. Rev. B 102, 45112 (2020).
[42] C. Lane, J. W. Furness, I. G. Buda, Y. Zhang, R. S. Markiewicz, B. Barbiellini, J. Sun, and A. Bansil, Phys. Rev. B 98, 125140 (2018).
[43] J. W. Furness, Y. Zhang, C. Lane, I. G. Buda, B. Barbiellini, R. S. Markiewicz, A. Bansil, and J. Sun, Commun. Phys. 1, 11 (2018).
[44] S. Uchida, T. Ido, H. Takagi, T. Arima, Y. Tokura, and S. Tajima, Phys. Rev. B 43, 7942 (1991).
[45] S. Ono, S. Komiya, and Y. Ando, Phys. Rev. B 75, 024515 (2007).
[46] Y. Zhang, C. Lane, J. W. Furness, B. Barbiellini, J. P. Perdew, R. S. Markiewicz, A. Bansil, and J. Sun, Proc. Natl. Acad. Sci. U. S. A. 117, 68 (2020).
[47] J. Nokelainen, C. Lane, R. S. Markiewicz, B. Barbiellini, A. Pulkkinen, B. Singh, J. Sun, K. Pussi, and A. Bansil, Phys. Rev. B 101, 214523 (2020).
[48] J. P. Perdew and K. Schmidt, AIP Conf. Proc. 577, 1 (2001).
[49] P. Bourges, H. Casalta, A. S. Ivanov, and D. Petitgrand, Phys. Rev. Lett. 79, 4906 (1997).
[50] C. Ambrosch-Draxl, P. Süle, H. Auer, and E. Y. Sherman, Phys. Rev. B 67, 100505 (2003).
[51] B. Michon, C. Girod, S. Badoux, J. Kačmarčík, Q. Ma, M. Dragomir, H. A. Dabkowska, B. D. Gaulin, J. S. Zhou, S. Pyon, T. Takayama, H. Takagi, S. Verret, N. Doiron-Leyraud, C. Marcenat, L. Taillefer, and T. Klein, Nature 567, 218 (2019).
[52] C. Girod, A. Legros, A. Forget, D. Colson, C. Marcenat, A. Demuer, D. Leboeuf, L. Taillefer, and T. Klein, Phys. Rev. B 102, 14506 (2020).
[53] C. Girod, D. Leboeuf, A. Demuer, G. Seyfarth, S. Imajo, K. Kindo, Y. Kohama, M. Lizaire, A. Legros, A. Gourgout, H. Takagi, T. Kurosawa, M. Oda, N. Momono, J. Chang, S. Ono, G. Q. Zheng, C. Marcenat, L. Taillefer, and T. Klein, Phys. Rev. B 103, 214506 (2021).
[54] G. Kresse and D. Joubert, Phys. Rev. B 59, 1758 (1999).
[55] U. Herath, P. Tavadze, X. He, E. Bousquet, S. Singh, F. Muñoz, and A. H. Romero, Comput. Phys. Commun. 251, 107080 (2020).
[56] A. M. Ganose, A. J. Jackson, and D. O. Scanlon, J. Open Source Softw. 3, 717 (2018).
[57] C. Ambrosch-Draxl and E. Y. Sherman, Phys. Rev. B 74, 024503 (2006).
[58] T. Thonhauser, H. Auer, E. Y. Sherman, and C. Ambrosch-Draxl, Phys. Rev. B 69, 104508 (2004).
[59] M. Paranthaman and B. C. Chakoumakos, J. Solid State Chem. 122, 221 (1996).
[60] P. G. Radaelli, J. L. Wagner, B. A. Hunter, M. A. Beno, G. S. Knapp, J. D. Jorgensen, and D. G. Hinks, Physica C 216, 29 (1993).
[61] B. A. Hunter, J. D. Jorgensen, J. L. Wagner, P. G. Radaelli, D. G. Hinks, H. Shaked, R. L. Hitterman, and R. B. Von Dreele, Physica C 221, 1 (1994).
[62] V. Aksenov, A. Balagurov, V. Sikolenko, V. Simkin, V. Alyoshin, E. Antipov, A. Gippius, D. Mikhailova, and S. Putilin, Phys. Rev. B 55, 3966 (1997).
[63] Q. Huang, J. W. Lynn, Q. Xiong, and C. W. Chu, Phys. Rev. B 52, 462 (1995).
[64] S. M. Loureiro, E. V. Antipov, E. M. Kopnin, M. Brunner, J. J. Capponi, and M. Marezio, Physica C 257, 117 (1996).
[65] M. Hirayama, T. Misawa, T. Ohgoe, Y. Yamaji, and M. Imada, Phys. Rev. B 99, 245155 (2019).
[66] A. Fukuoka, A. Tokiwa-Yamamoto, M. Itoh, R. Usami, S. Adachi, H. Yamauchi, and K. Tanabe, Physica C 265, 13 (1996).
[67] A. Fukuoka, A. Tokiwa-Yamamoto, M. Itoh, R. Usami, S. Adachi, and K. Tanabe, Phys. Rev. B 55, 6612 (1997).
[68] J. Karpinski, H. Schwer, I. Mangelschots, K. Conder, A. Morawski, T. Lada, and A. Paszewin, Physica C 234, 10 (1994).
[69] H. Mukuda, M. Abe, Y. Araki, Y. Kitaoka, K. Tokiwa, T. Watanabe, A. Iyo, H. Kito, and Y. Tanaka, Phys. Rev. Lett. 96, 087001 (2006).
[70] D. J. Singh and W. E. Pickett, Phys. Rev. Lett. 73, 476 (1994).
[71] T. Das, Phys. Rev. B 86, 054518 (2012).
[72] D. Novikov and A. Freeman, Physica C 216, 273 (1993).
[73] E. Pellegrin and J. Fink, Phys. Rev. B 53, 2767 (1996).
[74] Q. Xiong, Y. Y. Xue, Y. Cao, F. Chen, Y. Y. Sun, J. Gibson, C. W. Chu, L. M. Liu, and A. Jacobson, Phys. Rev. B 50, 10346 (1994).
[75] Q. Xiong, Y. Y. Xue, Y. Cao, F. Chen, J. Gibson, L. M. Liu, A. Jacobson, and C. W. Chu, Physica C 251, 216 (1995).
[76] N. Momono, M. Ido, T. Nakano, M. Oda, Y. Okajima, and K. Yamaya, Physica C 233, 395 (1994).
[77] G. Sordi, C. Walsh, P. Sémon, and A. M. Tremblay, Phys. Rev. B 100, 121105 (2019).
Segment Motion in the Reptation Model of Polymer Dynamics. II. Simulations

A. Baumgärtner
Institut für Festkörperforschung and Forum Modellierung, Forschungszentrum Jülich, 52425 Jülich, Germany

U. Ebert
Instituut-Lorentz, Universiteit Leiden, Postbus 9506, 2300 RA Leiden, the Netherlands

L. Schäfer
Fachbereich Physik, Universität Essen, 45117 Essen, Germany

(7 October 1997; submitted to J. Stat. Phys. on September 18, 1997; arXiv:cond-mat/9710066)
Abstract
We present simulation data for the motion of a polymer chain through a regular lattice of impenetrable obstacles (Evans-Edwards model). Chain lengths range from N = 20 to N = 640, and times extend up to 10^7 Monte Carlo steps. For N ≥ 160 we find, for the central segment, clear t^{1/4} behavior as an intermediate asymptote. The t^{1/2} range that is also expected is not yet developed. For the end segment even the t^{1/4} behavior is not reached. All these data compare well to our recent analytical evaluation of the reptation model, which shows that for shorter times (t ≲ 10^4) the discreteness of the elementary motion cannot be neglected, whereas for longer times and short chains (N ≲ 100) tube renewal plays an essential role also for the central segment. Due to the very broad crossover behavior, neither the diffusion coefficient nor the reptation time reaches the asymptotic power laws predicted by reptation theory within the range of our simulations. We present results for the center-of-mass motion, showing the expected intermediate t^{1/2} behavior, but again only for very long chains. In addition we show results for the motion of the central segment relative to the center of mass, where in some intermediate range we see the expected increase of the effective power beyond the t^{1/4} law, before saturation sets in. Analysis and simulations agree on defining a new set of criteria as characteristic for reptation of finite chains.
Key words: reptation, polymer dynamics, Monte Carlo simulations
I. INTRODUCTION
An understanding of the motion of a chain molecule in a surrounding of impenetrable obstacles is of great interest in the physics of polymer melts or dense solutions as well as for polymers diffusing through gels. With special regard to the latter system De Gennes suggested the reptation model [1]. Basic to this model is the observation that the crosslinked structure of the gel for short times restricts the motion of the macromolecule to a tube defined by its initial configuration. The motion proceeds by curvilinear diffusion of little wiggles of 'spared length' along the tube. The destruction of the initial tube ('tube renewal') is due to the motion of the chain ends. These may draw back into the tube, thus shortening the tube and creating new wiggles of spared length, or they may unfold and thus destroy spared length, thus prolonging the tube in some random direction. This is the natural thermal motion of a flexible chain between topological constraints.
Proposed originally for motion through rigid gels, this model has been applied extensively to melts or dense solutions [2]. It is generally accepted as a basic scenario of polymer dynamics. A critical examination [3], however, shows that the experimental or computer-experimental evidence for the quantitative reliability of the model is not particularly strong. Searching for the asymptotic power laws predicted by reptation theory one typically finds a range of exponents differing from the predictions, a finding often interpreted as crossover behavior from Rouse-type motion to reptation. This is little more than an excuse, since, with the exception of Doi's theory of the melt viscosity [4], no effort seems to have been spent to really work out the predictions of the reptation model beyond (intermediate) asymptotics. Thus, since not only the experiments but also the simulations mostly are concerned with melts or with an immobile but disordered configuration of obstacles, it is not at all clear whether the results reflect intrinsic properties of reptation or are dominated by other mechanisms like entropic traps in disordered systems or relaxation of the surrounding in melts.
To proceed we need precise knowledge of the quantitative implications of the reptation model in the (computer-) experimental range of time and chain length. We therefore analytically have worked out detailed quantitative predictions of the model, and we have carried through extensive simulations. All our work is concerned with the original reptation scenario: motion of a discrete chain through an ordered lattice of impenetrable obstacles, which confines the internal motion of the chain to a very narrow tube. Our analytical work concentrates on the motion of individual beads. As long as the bead does not leave the original tube, its motion can be calculated rigorously. Tube renewal and thus the motion of the chain ends can be treated only approximately, and we use an approximation inspired by random walk theory. Details and results of our analytical work may be found in the preceding paper [5].
The present paper is devoted to our simulations. In Sect. II we introduce a Monte Carlo model, which first has been proposed by Evans and Edwards [6]. We also briefly describe our analytical model as far as needed for some of the arguments to follow, and we discuss the relation among the Monte-Carlo and analytical models. In Sect. III we review results of previous simulations of the Evans-Edwards model, compare to corresponding results of our simulations, and discuss the relevant time scales. Sects. IV and V are devoted to a detailed comparison with our theory, where in Sect. IV we treat the motion of the central bead inside the tube, and in Sect. V we are concerned with tube renewal. Quantities involving the center-of-mass, for which at present we have no new analytical results, are discussed in Sect. VI. Sect. VII summarizes our findings. Preliminary results of our analytical work and our simulations have been published in [7].
II. MODELS
A. Monte Carlo model
The Evans-Edwards model [6] considers the chain configuration as a random walk of N (M C) −1 steps ('segments') on a cubic lattice. The lattice constant ℓ 0 henceforth is taken as the unit of length: ℓ 0 = 1. The configuration is fixed by giving the positions {r 1 , . . . , r N (M C) } of the endpoints of all segments ('beads'). The length of segment j equals ℓ 0 , by construction, |r j+1 − r j | = ℓ 0 . The obstacles form a second cubic lattice, of lattice constant m · ℓ 0 , placed such that its lattice points coincide with centers of the cells of the first lattice. The edges of this lattice are considered as impenetrable. Ref. [6] uses m = 1, 2, . . . , 10, but we consider only m = 1, thus taking the tube as narrow as possible. This should show reptation in clearest form. As illustrated in Fig. 1, this model eliminates all kink-type motions of the chain and leaves only hairpins, i.e., subsequent segments of opposite direction: r j+1 − r j = −(r j − r j−1 ), free to move. Of course also the end beads can move freely.
Clearly with regard to the static properties this model is identical to a simple random-walk chain. With our choice of the narrowest tube, m = 1, also the dynamics is most simple. The obstacle lattice comes into play only implicitly in restricting the motion to that of hairpins and chain ends. We start with an initial random walk configuration of the chain. In one elementary step we randomly choose one bead. If it happens to be the tip of a hairpin or a chain end, it is moved with probability 1/6 to one of its 6 possible positions (including its original position, of course). This completes the elementary move. Monte Carlo time t^(MC) is measured in (on the average) one attempted move per bead. The simulations extended to t^(MC) = 10^8, and chains of lengths N^(MC) = 20, 40, 80, 160, 320, 640 were used. We measured in each run correlations over time intervals t_1^(MC) − t_0^(MC) ≤ 10^7, averaging over the time origin t_0^(MC) ('moving average'). In addition the data were averaged over up to 40 independent runs. This is important in particular for the longer chains, where the equilibration time T_2^(MC) of the hairpins comes close to the total time of the run (see Sect. III B). For the longest chains (N^(MC) = 320, 640) and largest times (t^(MC) ≈ 10^7) the standard deviation of our data reaches 6 %. Due to the moving time average it rapidly decreases with decreasing chain length and time, being less than 3 % for t^(MC) ≲ 10^5, for all chain lengths.
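To make the move set concrete, the following is a minimal Python sketch of the elementary update for the narrowest tube (m = 1). The data layout and function names are our own illustration, not the code used for the production runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# the 6 unit steps of the cubic lattice (lattice constant ell_0 = 1)
STEPS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])

def initial_chain(n_beads):
    """Random-walk initial configuration: n_beads positions with unit bonds."""
    steps = STEPS[rng.integers(0, 6, size=n_beads - 1)]
    return np.vstack([np.zeros(3, dtype=int), np.cumsum(steps, axis=0)])

def mc_sweep(r):
    """One Monte Carlo time step: on average one attempted move per bead."""
    n = len(r)
    for _ in range(n):
        j = rng.integers(0, n)
        new = None
        if j == 0:                                    # chain end: must stay adjacent to bead 1
            new = r[1] + STEPS[rng.integers(0, 6)]
        elif j == n - 1:                              # other chain end
            new = r[n - 2] + STEPS[rng.integers(0, 6)]
        elif np.array_equal(r[j - 1], r[j + 1]):      # tip of a hairpin
            new = r[j - 1] + STEPS[rng.integers(0, 6)]
        if new is not None:                           # kinks are immobile for m = 1
            r[j] = new
    return r

chain = initial_chain(20)
for _ in range(1000):
    mc_sweep(chain)
```

Picking uniformly among the 6 neighbors of the adjacent bead automatically includes the original position, as required by the move defined above.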
B. Analytical model
We use a version of De Gennes' reptation model [1,5], discrete in time and space. The tube is taken as a chain of N segments, connecting beads numbered 0, 1, . . . N. Particles, each representing a spared length ℓ s , are sitting on the beads of that chain. The average density of these particles is ρ 0 . The particles randomly and independently hop among neighbouring beads, with hopping probability p. They do not interact, so that a given particle does not feel the presence of the others. If a particle moves over a bead, it drags it along and displaces it by a distance ℓ s along the tube. (For hairpin motion this is illustrated in Fig. 1 b of [7].) Reaching a chain end (bead 0 or N), a particle is absorbed by a virtual reservoir. These reservoirs also randomly emit particles at a rate adjusted to pertain the average density ρ 0 . They serve to ease the analysis of the in principle grand canonical problem.
To establish the connection to the physical motion of the beads we note that the displacement along the tube of some bead j within time interval t is given by ℓ s |n(j, t)|, where n(j, t) gives the number of particles having passed bead j from one direction, subtracted by the number of particles coming from the other direction. Since the tube conformation itself is a random walk in space, bead j in space has moved an average distance
g_1(j, N, t) = ⟨ (r_j(t) − r_j(0))² ⟩ = ℓ_s ⟨ |n(j, t)| ⟩ .   (2.1)
(Cf. Eq. (I 2.8); in the sequel ref. [5] will be referred to as I.) In Eq. (2.1) the pointed brackets denote the joint average over the chain configurations and over the particle diffusion (the latter written as a bar in I). Eq. (2.1) holds as long as bead j stays in the initial tube. Tube renewal is driven by the emission and absorption of particles by the reservoir. Emission of a particle shortens the tube by ℓ_s at the end considered. Thus within time interval t the tube from end zero is destroyed up to the bead j_< = ℓ_s n_max(t), where (−n_max(t)) is the largest negative fluctuation in the occupation number of reservoir 0 within time interval t. Since at time t the tube on the average has been shortened by ℓ_s n_max(t) steps and then rebuilt by another ℓ_s n_max(t) randomly chosen steps, we find for the motion of the endsegment (cf. Eq. (I 2.12))
g_1(0, N, t) = 2 ℓ_s ⟨ n_max(t) ⟩ ,   (2.2)
valid for times t smaller than the tube renewal time T 3 . Combining these considerations we find the somewhat complicated expression (I 2.13) for the motion of an arbitrary bead including tube renewal effects. All these are exact expressions within the frame of our model, holding as long as the original tube is not destroyed completely. It turns out that Eq. (2.1) can be evaluated rigorously, whereas Eq. (2.2) as well as the tube renewal effects on the motion of an arbitrary bead can be handled only approximately. The stochastic process n(0, t), giving the occupation of reservoir 0, is correlated by the fact that a particle emitted may be reabsorbed at some later time, the decay time of the correlation being given by the time T 2 a particle needs to diffuse over the whole chain. This correlation renders an exact evaluation of n max (t) impossible. As explained in detail in I, sect. V we evaluate n max (t) in a 'mean hopping rate' approximation, calculating the contribution to n max (t) of a time step s, 0 < s ≤ t as the contribution of an uncorrelated process with properly adjusted hopping rate. In a similar spirit we have constructed an approximation for the tube renewal effects on arbitrary beads. Our explicit expressions for g 1 (j, N, t) will be recalled later in the context of data analysis.
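The particle picture is easy to mimic numerically. The sketch below is only a toy version: the reservoir emission rate ρ_0·p/2 per step (which keeps the mean density of non-interacting walkers at ρ_0) and the convention that a "passage" of the central bead is a hop across the adjacent bond are assumptions made here for illustration; they do not affect the qualitative t^{1/4} growth of ⟨|n(t)|⟩ that underlies Eq. (2.1).

```python
import numpy as np

rng = np.random.default_rng(1)

def net_passages(N=100, rho0=0.22, p=0.2, steps=5000):
    """Non-interacting particles hop on beads 1..N-1 with probability p/2 to
    each side; the reservoirs at the chain ends absorb them and re-emit at the
    matching rate.  Returns the running net number of particles that crossed
    the bond next to the central bead (the sign bookkeeping is a convention)."""
    pos = []
    for site in range(1, N):                 # stationary start: Poisson(rho0) per bead
        pos += [site] * rng.poisson(rho0)
    pos = np.array(pos, dtype=int)
    mid = N // 2
    n_signed = np.zeros(steps, dtype=int)
    count = 0
    for t in range(steps):
        if len(pos):
            r = rng.random(len(pos))
            move = np.where(r < p / 2, 1, np.where(r < p, -1, 0))
            count += np.sum((pos == mid) & (move == 1))        # crossed to the right
            count -= np.sum((pos == mid + 1) & (move == -1))   # crossed to the left
            pos = pos + move
            pos = pos[(pos >= 1) & (pos <= N - 1)]             # absorbed by reservoirs
        new = [1] * rng.poisson(rho0 * p / 2) + [N - 1] * rng.poisson(rho0 * p / 2)
        if new:
            pos = np.concatenate([pos, new])
        n_signed[t] = count
    return n_signed

# g_1 of the central bead then follows as ell_s * <|n(t)|>, averaged over runs
ell_s = 2.0
runs = np.array([np.abs(net_passages()) for _ in range(20)])
g1_central = ell_s * runs.mean(axis=0)
```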
C. Relation among the models
Obviously the particles of the theoretical model roughly correspond to the hairpins, and for long chains and large time, where hopefully the influence of the microstructure is negligible, we expect both models to yield identical results. In practice, however, it is not clear whether the experiments reach such a universal regime, and a more detailed discussion of the relation among the models is appropriate.
We first consider the chain lengths. The endbeads of the MC-chain correspond to the particle reservoirs, and a hairpin absorbs two beads. A hairpin thus effectively walks along a chain of N (M C) − 4 interior beads. This must be compared to the theoretical model, where a particle hops along a chain of N − 1 interior beads. Thus we should identify
N = N (M C) − 3 . (2.3)
For the shorter chains (N^(MC) ≲ 100), this correction cannot be neglected. Identifying hairpins and particles we should take the spared length ℓ_s = 2. The density ρ_0 is less well defined. For the simple random-walk type MC-chain it is not hard to determine the full statistics of the side-branches, i.e., of tree-like structures in which each lattice bond is occupied by an even number of segments. In the limit of long chains the average total number of segments in such side branches amounts to [8] N^(MC)/3, whereas the average number of simple hairpins (of two segments each) tends to N^(MC)/9. Thus the remaining N^(MC)/9 segments are contained in larger side-branches, which can be seen as a result of a fusion of simple hairpins. In the particle picture this corresponds to an interaction of particles sitting on the same bead, and with this interpretation we should choose ρ_0 ≈ 1/6. However, also other complications must be noted: for a hairpin lying on the chain like in the right part of Fig. 1, the separation of the configuration into hairpin and backbone of the chain is not unique, this configuration in fact showing two mobile points. Thus these considerations suggest an order of magnitude for ℓ_s, ρ_0, rather than giving precise values. Taking ℓ_s, ρ_0 as fit parameters, we will find in Sect. IV that the data rather precisely determine the combination
ℓ_s² ρ_0 = 1.23 ,   (2.4)
while the individual values, fixed only together with the choice of c_0 discussed below, come out as
ρ_0 ≈ 0.22 ,   (2.5)
ℓ_s ≈ 2.4 .   (2.6)
These parameters are of the expected order of magnitude, but they also show that the identification of particles and 'free' hairpins should not be taken too literally. Rather the hairpin motion is renormalized by interaction effects, the particles representing 'quasi-hairpins'. We finally consider the relation among the time scales. The theoretical results, considered as function of
t̃ = pt ,   (2.7)
for p·t ≳ 1 essentially are independent of the hopping rate p. We by convention take p = 1/5, and we henceforth always will use the variable t̃. The relation among t̃ and t^(MC) defines the time scale τ:
t̃ = τ t^(MC) .   (2.8)
A fit to experiment (see Sect. IV) fairly precisely fixes τ at the value
τ = 6.092 · 10^{−2} .   (2.9)
Thus about 17 MC moves correspond to the displacement of a particle by one step. This again is a reasonable result, since following the microscopic motion of a hairpin we may estimate that on the average of the order of 10 moves are needed for a hairpin to jump from one segment to the next.
Having discussed the relation among the parameters of the theoretical and the MC-model we still need to consider the measured quantity
g_1^(MC)(j, N^(MC), t^(MC)) = ⟨ (r_j(t^(MC)) − r_j(0))² ⟩ .
Let j be some interior bead of the MC-chain. With probability ρ_H = 1/9 it sits in the tip of a simple hairpin, a configuration which in the theoretical model effectively is projected down to the base of the hairpin. Taking into account only simple hairpins we thus find for the relation of g_1^(MC) to the g_1 of the analytical model
g_1^(MC)(j, N^(MC), t^(MC)) = (1 − ρ_H)² g_1(j − 1, N^(MC) − 3, t) + 2ρ_H(1 − ρ_H) [ g_1(j − 1, N^(MC) − 3, t) + 1 ] + ρ_H² [ g_1(j − 1, N^(MC) − 3, t) + 2 ] ,
where we took into account the relation among N (M C) and N as well as the different counting of the beads. This shows that g (M C) 1 and g 1 differ by an additive contribution c 0
g (M C) 1 (j, N (M C) , t (M C) ) = g 1 (j − 1, N (M C) − 3, t) + c 0 , (2.10)
where the simple-hairpin contribution to c 0 is found as
c 0 = 2ρ H = 2/9 . (2.11)
More complicated side branches will contribute also, but we observe (see Sect. 4) that reasonable changes of c_0 in fitting to the data can be compensated by readjusting ρ_0. We thus by convention choose the value (2.11), which then fixes ρ_0 to the value (2.5). We have checked that for microscopic times 10 ≲ t^(MC) ≲ 50, g_1^(MC) is well represented as g_1^(MC) = 2/9 + const · (t^(MC))^x, x ≈ 1/2, so that this choice of c_0 is well justified. For the endsegments the correction is more important. These in a single move will jump a mean squared distance
c_1 = 2 ,   (2.12)
and this motion is not taken into account in the theoretical model. We thus have
g_1^(MC)(1, N^(MC), t^(MC)) = g_1(0, N^(MC) − 3, t) + c_1 .   (2.13)
We have checked that the correction c_1 = 2 precisely takes into account the difference in the motion of the end segment and the adjacent interior segment of the Monte-Carlo chain. Though these corrections are microstructure effects, they cannot simply be ignored. In particular it is important to correct for the endsegment motion, since g_1^(MC)(1, N^(MC), t^(MC)) reaches values of the order 100 only for t^(MC) ≈ 10^6.
Besides g_1^(MC) we also have measured the cubic invariant
ĝ_1^(MC)(j, N^(MC), t^(MC)) = [ Σ_{α=1}^{3} ⟨ (r_{j,α}(t^(MC)) − r_{j,α}(0))^4 ⟩ ]^{1/2} .   (2.14)
It is easily checked that this function for an interior bead in our model reduces to the second moment of n(j, t),
ĝ_1(j, N, t) = ℓ_s [ ⟨ n²(j, t) ⟩ ]^{1/2} ,   (2.15)
this relation holding as long as the bead stays in the initial tube. For the endsegment the expression is more complicated and given in appendix B of I. Again the relation among ĝ_1 and ĝ_1^(MC) involves microstructure corrections, which, however, are more difficult to estimate and will not be considered.
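For reference, the moving-time-average estimators for a single bead can be written as below; the function names and array layout are illustrative, and the offsets c_0 = 2/9 (interior bead) or c_1 = 2 (end bead) still have to be subtracted from g_1^(MC) before comparing with the theory.

```python
import numpy as np

def g1_and_ghat1(traj, lags):
    """Estimators for one bead.

    traj : (T, 3) array of positions r_j(t) sampled every MC step
    lags : iterable of time lags t >= 1
    Returns g1(t) = <(r(t0+t)-r(t0))^2> and the cubic invariant of Eq. (2.14),
    ghat1(t) = [ sum_alpha <(Delta r_alpha)^4> ]^(1/2), with <...> taken as a
    moving average over the time origin t0."""
    g1, ghat1 = [], []
    for t in lags:
        d = traj[t:] - traj[:-t]                       # all displacements at lag t
        g1.append(np.mean(np.sum(d ** 2, axis=1)))
        ghat1.append(np.sqrt(np.sum(np.mean(d ** 4, axis=0))))
    return np.array(g1), np.array(ghat1)
```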
III. A FIRST INSPECTION OF THE SIMULATION RESULTS
A. Comparison to previous work
Reptation theory predicts power law behavior
g_1(j, N, t) ∼ t^{1/4} for T_0 ≪ t ≪ T_2 ;  (t/N)^{1/2} for T_2 ≪ t ≪ T_3 ;  t/N² for T_3 ≪ t .   (3.1)
Evans and Edwards [6] introduced the above described Monte Carlo model to test these predictions. They used obstacle lattices of spacing m·ℓ_0, m ≤ 10, and chains of length N^(MC) ≤ 80. The runs seem to extend up to t^(MC) ∼ 10^3. Clearly according to present day facilities this is a fairly small scale simulation. Still within the scatter of the data the results for the smallest spacings m = 1, 2 seem to verify the predictions (3.1). In particular the authors observe D ∼ N^{−2}, T_3 ∼ N^3, as well as t^{1/4}-regimes and t^{1/2}-regimes for the motion of the central segment. To check these results, Fig. 2 shows our data for g_1^(MC)(N^(MC)/2, N^(MC), t^(MC))
in the common doubly-logarithmic representation. As is obvious, a t^{1/4}-regime starts around t^(MC) ≈ 10^3 and barely is observable for N^(MC) = 80. It fully is developed only for larger chain lengths. A t^{1/2}-regime is not observable for N^(MC) ≤ 160. It may be present for larger chains, but its unambiguous identification needs at least a further decade in time. Recall that these data are taken for the obstacle lattice of highest density, m = 1, as all our data. We conclude that the observation of [6] amounts to a misinterpretation of the direct crossover from the initial behavior, which roughly follows a t^{1/3}-law, to free diffusion, as seen here for short chains.
Deutsch and Madden [9] used the Evans-Edwards model with m = 1 to measure the diffusion coefficient D by following the center-of-mass motion of the chain:
⟨ (R_cm(t^(MC)) − R_cm(0))² ⟩ → D t^(MC) for t^(MC) → ∞ .   (3.2)
Measuring chains up to length 100 they found D ∼ (N^(MC))^{−2.5}, i.e., a considerably faster decrease than predicted by reptation theory. Our own data for the center-of-mass motion are shown in Fig. 3. We clearly can extract the diffusion coefficients up to N^(MC) = 80. For N^(MC) ≥ 320 only an upper bound D ≤ 0.2 (N^(MC))^{−2} can be given. Our measured values of (N^(MC))² · D are plotted against (N^(MC))^{−1/2} in Fig. 4. We also included data extracted from Fig. 3 of [9]. Clearly the two sets of data are completely consistent. They nicely are fitted by the ansatz
D = 0.04 (N^(MC))^{−2} [ 1 + 50 (N^(MC))^{−1/2} ] ,   (3.3)
this form being motivated by Doi's work [4,9]. It, however, is clear that the range of chain lengths from 10 to 160 in Fig. 4 is insufficient to fully justify an ansatz leading to such a large first order correction. Still it shows that with chain lengths that presently can be reached, we are far from extracting the large-N limit of the diffusion coefficient.
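A least-squares extraction of D in the convention of Eq. (3.2), together with the ansatz (3.3), might look as follows; the array names and the cutoff t_min are placeholders of this sketch, not part of the analysis described above.

```python
import numpy as np

def fit_diffusion(t, gcm, t_min):
    """Least-squares D from the late-time center-of-mass motion, using the
    convention of Eq. (3.2), g_cm(t) -> D * t (no factor 6)."""
    mask = t >= t_min
    return np.sum(t[mask] * gcm[mask]) / np.sum(t[mask] ** 2)

def doi_ansatz(N_mc):
    """Empirical fit of Eq. (3.3)."""
    return 0.04 * N_mc ** -2.0 * (1.0 + 50.0 * N_mc ** -0.5)

# e.g. compare a fitted value for N_mc = 80 with the ansatz:
# D_fit = fit_diffusion(t, gcm, t_min=2e6);  D_ansatz = doi_ansatz(80)
```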
B. Time scales
The reptation time T 3 gives the time needed to destroy the tube. It may be defined in terms of the decay of the end-to-end vector correlations of the chain [1], but for the present work another definition is more convenient, both theoretically and experimentally. We define T 3 by the relation
ℓ_s ⟨ n_max(T_3) ⟩ = N/2 ,   (3.4)
which via Eq. (2.2) implies
g_1(0, N, T_3) = ⟨ (r_0(T_3) − r_0(0))² ⟩ = N = R_e² ,   (3.5)
where R 2 e is the mean squared-average end-to-end vector. Thus within time interval T 3 the endsegment has moved mean squared distance R 2 e . Our simulation data allow for the determination of T 3 for N (M C) ≤ 160, and our experimental results together with the theoretical curve are shown in Fig. 5. Being based on an approximation the theory lies about 20 % above the datapoints. This suggests that our approximation underestimates n max (t). Both theory and data, however, consistently show that it needs chain lengths much larger than N (M C) = 200 to approach the asymptotic N 3 behavior. Indeed, the theoretical asymptote, calculated from Eq. (I 5.50), is found as
T_3^(MC) = 2.62 (N^(MC))³ ,   (3.6)
suggesting that chain lengths much larger than 10 3 are needed. This is quite consistent with our finding for the diffusion coefficient. Another important time scale of the model is the internal equilibration time T 2 of the chain. It gives the time a hairpin needs to diffuse over the whole chain, so that for t ≫ T 2 the motion of all beads is correlated. Theoretically T 2 can be identified with the Rouse time, i.e , the longest internal relaxation time of a free chain: pT 2 = N 2 /π 2 . A practicable and precise experimental definition is not easy. Heuristically we could think of the time at which the motion of the central segment bends over from t x 1 , x 1 ≈ 1/4 towards t x 2 , x 2 > ∼ 0.5, but this crossover is quite broad and the power law regimes are poorly defined for shorter chains. We thus here are content with the theoretical definition, valid for long chains:
p T_2 = N²/π² .   (3.7)
Transforming to the Monte Carlo time, we find
T_2^(MC) = 1.66 (N^(MC))² .   (3.8)
But then the total MC time 10^7 is not much larger than T_2^(MC) = 0.7 · 10^6 (for the longest chain, N^(MC) = 640). Even with the present data we thus have no chance to verify the t^{1/2}-law. Qualitative, not quantitative, indications can be found however, as is discussed in Sect. VI. These findings are completely consistent with typical results found in the literature for simulations of melts.
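The two time scales translate into simple planning numbers. The helper below merely evaluates Eqs. (3.6) and (3.8) and compares them with a run length; names and structure are our own choice for illustration.

```python
def time_scales(N_mc, total_mc_time=1e7):
    """Equilibration time T2 (Eq. 3.8) and reptation time T3 (Eq. 3.6) in MC units."""
    T2 = 1.66 * N_mc ** 2
    T3 = 2.62 * N_mc ** 3
    return {"T2_mc": T2, "T3_mc": T3,
            "run_over_T2": total_mc_time / T2,
            "run_over_T3": total_mc_time / T3}

# e.g. time_scales(640) gives T2 ~ 6.8e5 and T3 ~ 6.9e8,
# i.e. T3 lies far beyond the 1e7 MC steps of the longest runs.
```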
IV. MOTION INSIDE THE TUBE
A. Analysis of g_1(N/2, N, t)
As long as the bead stays in the initial tube, the theory yields
g_i(j, N, t) = [ (4/π) ℓ_s² ρ_0 A_1(j, t) ]^{1/2} [ 1 − F_1(4 ρ_0 A_1(j, t)) ] ,   (4.1)
A_1(j, t) = t̃/N + (1/2N) Σ_{k=1}^{N−1} [ 1 − exp(−4 t̃ sin²(πk/2N)) ] cos²[(πk/N)(j + 1/2)] / sin²(πk/2N) .   (4.2)
We use the variable t̃ = pt, and we somewhat simplified the expression, the simplification being valid for pt ≳ 1. The correction function F_1 reads (Eq. I 3.26)
F_1(z) = (1/2√π) ∫_0^z dx x^{−3/2} e^{−x} [ (1 − x/2)^{−1/2} − 1 ] − (1/2√π) Γ(−1/2, z) .   (4.3)
It arises from the discreteness of the stochastic variable n(j, t) and is negligible for z > ∼ 25. As has been discussed in I, Sect. 4, for long chains and times so large that F 1 (z) can be ignored, g i (j, N, t) takes the form (cf. Eq. I 4.10):
g_i(j, N, t) = (ℓ_s² ρ_0)^{1/2} t̃^{1/4} ḡ_i(j/N, t̃/N²) ,   (4.4)
g_i(j, N, t) → 2π^{−3/4} (ℓ_s² ρ_0)^{1/2} t̃^{1/4} for t̃/N² ≪ 1 ;  → 2π^{−1/2} (ℓ_s² ρ_0 t̃/N)^{1/2} for t̃/N² ≫ 1 .   (4.5)
It is in this large time region that we determine the nonuniversal parameters. Specifically we get one relation from fitting the t^{1/4}-plateau,
(t^(MC))^{−1/4} g_i(N/2, N, t) = 2π^{−3/4} (ℓ_s² ρ_0)^{1/2} τ^{1/4} ,  T_0^(MC) ≪ t^(MC) ≪ T_2^(MC) .   (4.6)
Since no t 1/2 -regime properly is reached by the data, we determine τ, ℓ 2 s ρ 0 separately by fitting to the crossover at t ∼ T 2 , where the t 1/4 -regime terminates. In the fit we exclusively used data for the longest chain: N (M C) = 640, so that we have a large region affected neither by initial effects nor by tube renewal. We find the values ℓ 2 s ρ 0 = 1.23 and τ = 6.092 · 10 −2 cited in Sect. II C.
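Equations (4.1)–(4.2) are straightforward to evaluate numerically. The sketch below omits the discreteness correction F_1, so it applies once 4ρ_0 A_1 ≳ 25; the parameter values are the fitted ones quoted above, while the function names and the usage example are ours.

```python
import numpy as np

def A1(j, N, t_tilde):
    """Mode sum of Eq. (4.2)."""
    k = np.arange(1, N)
    s2 = np.sin(np.pi * k / (2 * N)) ** 2
    c2 = np.cos(np.pi * k / N * (j + 0.5)) ** 2
    return t_tilde / N + np.sum((1.0 - np.exp(-4.0 * t_tilde * s2)) * c2 / s2) / (2 * N)

def g_interior(j, N, t_mc, ls2rho0=1.23, rho0=0.22, tau=6.092e-2):
    """Eq. (4.1) without F1, i.e. valid for 4*rho0*A1 >~ 25 (t_mc >~ a few 1e4)."""
    a = A1(j, N, tau * t_mc)
    return np.sqrt(4.0 / np.pi * ls2rho0 * a)

# central bead of the longest chain (N = 637) at t_mc = 1e5:
# g_interior(318, 637, 1e5) is roughly 8, i.e. close to 0.47 * t_mc**0.25
```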
With ℓ_s²ρ_0 fixed, a variation of ρ_0 only influences the argument of F_1 in Eq. (4.1). Increasing ρ_0 we decrease the time range where F_1 is important. Since F_1(0) = 1, we thus also increase the initial slope of g_i(j, N, t). As mentioned in Sect. II C, these changes to some extent can be compensated by a change of c_0 that relates g_1^(MC) to g_1. Good fits can be obtained for 0.14 ≲ c_0 ≲ 0.25, with ρ_0 varying from 0.3 to 0.18. For t̃ ≈ 10, g_1(j, N, t) varies by about 15 %. Larger changes of the parameters lead to a mismatch in the curvature of the theoretical and experimental results. We finally fix all parameters by choosing c_0 = 2/9, leading to ρ_0 = 0.22. Fig. 6 shows our results for log_10[g_1(N/2, N, t)/g_ass(t)] as function of log_10(t), where g_ass(t) gives the intermediate asymptotics defined by the first line of Eq. (4.5). This plot focusses on the region of t^{1/4}-behavior. As mentioned above, the parameters have been fitted for N^(MC) = 640, the remaining results involving no further fitting. This plot demonstrates the ability of our reptation model to explain the data. Deviations occurring for shorter chains and large time result from tube renewal, as is obvious from the values of T_3 indicated by the ends of the full lines. Within a rough approximation such effects will be considered in Sect. V B. The discreteness correction F_1(z) is visible up to t̃ ≈ 10^3, and for the shorter chains this initial range immediately crosses over to behavior dominated by tube renewal, with no indication of an intermediate t^{1/4}-plateau. Furthermore, the t^{1/2}-regime is not developed within the range of our simulations. As has been stressed and illustrated earlier [7], the cubic invariant ĝ_1(N/2, N, t) (Eq. 2.14) should show t^{1/4}-behavior even for very small time, with no discreteness correction. For motion inside the tube the theory yields
ĝ_i(j, N, t) = (2 ℓ_s² ρ_0 A_1(j, t))^{1/2} ,   (4.7)
with A_1(j, t) given in Eq. (4.2). Fig. 7 shows our numerical and analytical results, normalized to ĝ_ass(t) = (π/2)^{1/2} g_ass(t). As expected, the t^{1/4}-plateau is seen very clearly. Even for the shortest chain, N^(MC) = 20, there is an initial range of behavior close to t^{1/4}. The plateau value, however, systematically seems to lie 3-5 % below the theoretical value. We believe that this might indicate some small effect of the interaction among the hairpins, renormalizing the amplitudes but not changing the power laws.
V. TUBE RENEWAL EFFECTS
A. Motion of the endsegments
Tube renewal cannot be treated rigorously, and our approximation, discussed in I, Sect. V, yields for the motion of the endsegment j = 0
g_1(0, N, t) = Σ_{s=1}^{t} (1/s) g_i(0, N, s) ,   (5.1)
where g_i(0, N, s) is given in Eq. (4.1). In Fig. 8 we have plotted g_1(0, N, t)/t^{1/4} together with our data. Obviously our approximation reproduces the qualitative features of the data and performs not too badly on the quantitative level. It somewhat underestimates the mobility of the endsegment, consistent with our finding for T_3 (cf. Sect. III B), but the relative error decreases with increasing time. This is plausible since its origin lies in a mistreatment of the time-dependent correlations which decay on time scale T_2. Both theory and experiment agree in exhibiting a very long initial transient, much longer than found for the central segment. Only the longest chains barely reach a t^{1/4}-plateau. As mentioned in I, Sect. V C, this again is due to the discrete nature of the process. With ℓ_s²ρ_0 fixed, the theoretical curves again are fairly insensitive to ρ_0. We should recall that we have corrected the data by subtracting c_1 = 2 from the motion of the endsegment (cf. Eq. (2.13)). Without that correction, the data in the initial range would be enhanced somewhat, starting around 0.54 at t̃ = 1.
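Continuing the earlier evaluation sketch (it reuses g_interior from above), the sum in Eq. (5.1) can be approximated by an integral over d(ln s), since g_i varies slowly on a logarithmic scale. This is again an illustration rather than the code used for the figures, and the neglect of F_1 makes the small-s part of the sum only approximate.

```python
import numpy as np

def g_end(N, t_mc, n_grid=400, **params):
    """Eq. (5.1), g1(0,N,t) = sum_{s=1}^{t} g_i(0,N,s)/s, approximated as the
    integral of g_i(0,N,s) over d(ln s) on a logarithmic grid in s."""
    s = np.logspace(0.0, np.log10(t_mc), n_grid)
    gi = np.array([g_interior(0, N, si, **params) for si in s])
    return np.trapz(gi, np.log(s))
```

In the t^{1/4} regime, where g_i ∝ s^{1/4}, the integral gives roughly 4 g_i(0, N, t), which together with the factor √2 between end and central bead reproduces the 4√2 plateau discussed below.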
The theory predicts that g_1(0, N, t)/g_i(N/2, N, t) for T_0 ≪ t ≪ T_2 reaches a plateau, which in our approximation is found at a value 4√2. For t ≳ T_2 this ratio should decrease again, since all segments start to move coherently. Fig. 9 shows our results for this ratio, where for the central segment we in the theoretical result approximately took the tube renewal into account in the form discussed in the next subsection: g_1(N/2, N, t) = g_i(N/2, N, t) + 2 g_r(N/2, N, t). This figure should be quite characteristic for reptation. For t < T_2 it very clearly shows the enhanced mobility of the endsegment. Note that this ratio for the Rouse model would take a value of 2 in the initial range and drop down to 1 for t ≫ T_2. Here it takes much larger values, coming close to our theoretical estimate. Clearly the asymptotic plateau value cannot be estimated from the experiment. This would need much longer chains and longer times. For t ≳ T_2 we see the expected decrease. For t > T_3 the ratio asymptotically should tend to 1. Also this is seen, but it is clear that it needs times t ≳ 10 T_3 to approximately attain that limit. (In Fig. 9, T_3 for N^(MC) ≲ 160 can be taken from the endpoints of the theoretical curves.)
g 1 (0, N, t) g 1 (0, N, t) = 1.0125 T 0 ≪ t ≪ T 2 1.085 T 2 ≪ t ≪ T 3 , (5.2)
where these results again are only approximate. Our data in all the regime T 0 ≪ t ≪ T 3 scatter within bounds (1.025, 1.035), with no systematic trend observable. This is completely consistent with the estimate (5.2), if we take into account that within the limitations of our simulation the region T 2 ≪ t ≪ T 3 is not properly developed.
B. Motion of an arbitrary segment
The mean squared displacement of an arbitrary segment j not to close to the center of the chain for all t < T 3 can be written as
g 1 (j, N, t) = g i (j, N, t) + g r (j, N, t) ,(5.3)
where g i (j, N, t) is the contribution of motion inside the tube as given in Eq. (4.1), and g r (j, N, t) is the contribution of the tube renewal, which reaches segment j roughly at a time T R (j) defined by ℓ s n max (T R (j)) = j (5.4) (cf. I, Eq. 5.31). In I, Sect. V C and appendix C, we have evaluated g r (j, N, t) in a rough approximation, based on the distribution of n max (t) for a simple random walk, with hopping probability adjusted to our theoretical result for n max (t). Despite ignoring detailed correlation effects this approximation yields quite reasonable results as is illustrated in Fig. 10. We there show the motion of segments j = 20, 40, 80 in a chain of length N (M C) = 640.
In particular for j = 80 we see the onset of the initial t 1/4 -plateau, which fort > ∼ 10 3.5 is destroyed by the hairpins diffusing in from the nearest chain end (j = 0). This in itself would lead to another plateau at relative height √ 2 (cf. I, Sect. IV), but before this can develop two further effects set in. Hairpins from the other chain end become important, too, (which implies t > ∼ T 2 ), and tube renewal effects are seen. Detailed inspection of Fig. 10 shows small but significant deviations among experiment and theory, which must be due to our neglect of correlations in g r .
Eq. (5.3) ceases to be valid close to the center of the chain, since it considers tube renewal only coming from one chain end. However, for t < ∼ T 3 processes where both chain ends within time interval t have made excursions of length ≈ N/2 into the tube are rare. This suggests to apply Eq. (5.3) also to the central segment. The result, which should underestimate the tube renewal effects, is given by the long dashed curves in Fig. 6. The short dashed curves follow by weighting g r (j, N, t) by a factor of 2, in trying to take into account the symmetry of the tube renewal process. The results look quite reasonable and clearly demonstrate that the deviations from g i ( N 2 , N, t), as given by the full curves in Fig. 6, indeed are due to tube renewal.
The motion of segments j = 80 for chains N (M C) = 160 and 640 is compared in Fig. 11. It for N (M C) = 160 shows the additional mobility due to a superposition of the effects from both ends.
VI. QUANTITIES INVOLVING THE CENTER-OF-MASS
Our theory at present has not been evaluated for quantities like the center-of-mass motion, which involve two-bead correlations. We thus here only present our numerical results and compare to the qualitative predictions of reptation theory.
A. Center-of-mass motion
We have measured the correlation function
g cm (t) = (R cm (t) − R cm (0)) 2 , (6.1)
where R cm (t) is the position of the center-of-mass of the chain. Reptation theory predicts the power laws
g cm (t) ∼ t 1/2 /N T 0 ≪ t ≪ T 2 t/N 2 T 2 ≪ t (6.2)
Note that free diffusion sets in for t ≫ T 2 , in contrast to the motion of the internal segment, where diffusional behavior is found only for t ≫ T 3 (cf. Eq. (3.1)). As shown in Fig. 3 our data reach the diffusional regime only for N (M C) ≤ 160, allowing for an extraction of the diffusion coefficients as discussed in sect. III A. Here we consider the short time regime. Fig. 12 shows the combination g cm (t) · N (M C) /t 1/2 . For N (M C) > ∼ 320 we indeed find a plateau. To the best of our knowledge this is the first time that the t 1/2 -behavior for the center-of-mass motion has been observed. The plateau seems to approach an asymptotic value close to 1.5, but the splitting of the curves even in the initial range indicates the existence of sizeable corrections to the N-dependence. For shorter chains the large mobility of the chain ends ruins the t 1/2 -behavior and a glance to Fig. 3 shows that effective power laws g cm ∼ t x , 1 2 < x < ∼ .8, might be extracted, for t < T 2 .
B. Motion of the central segment relative to the center-of-mass
The correlation function g 2 (j, N, t) defined as
g 2 (j, N, t) = [r j (t) − R cm (t)] − [r j (0) − R cm (0)] 2 (6.3)
measures the motion of segment j relative to the center-of-mass. For t ≪ T 3 the center-ofmass moves much slower than any specific segment, and thus
g 2 (j, N, t) ≈ g i (j, N, t), t ≪ T 3 .
For t ≫ T 3 , however, g 2 (j, N, t) saturates at some j-dependent value, that for j = N/2 equals the mean squared radius of gyration R 2 g , up to correction of order 1/N 2 . Reptation theory thus predicts
g 2 ( N 2 , N, t) ∼ t 1/4 , T 0 ≪ t ≪ T 2 (t/N) 1/2 , T 2 ≪ t ≪ T 3 R 2 g = N/6 , T 3 ≪ t . (6.4)
Specifically in some intermediate range g 2 ( N 2 , N, t) increases with a larger effective power of t than in the initial range, and this phenomenon here is not mixed up with crossover towards free diffusion. Its observation thus is a clear signal of reptation. Fig. 13 shows our results for N (M C) = 80, 160, 640, normalized to the t 1/4 -plateau g ass (t). The sequence of the three regimes (6.4) for N (M C) = 80, 160 is clearly seen, the data also saturating at R 2 g , as expected. As also was to be expected, in the intermediate range T 2 < ∼ t < T 3 the power law g 2 ∼ t 1/2 is not fully attained, but still the data in this plot show a pronounced maximum. To our knowledge this is the first time that the intermediate t 1/2regime has clearly been identified. In Fig. 13 we included theoretical curves for g i ( N 2 , N, t) (Eq. 4.1) to check whether g 2 ( N 2 , N, t) for t ≪ T 3 indeed equals g i ( N 2 , N, t). Taking into account that the data are not measured with very good statistics, being averaged over only 10 independent runs each, we find a very satisfactory agreement.
VII. CONCLUSIONS
We have performed extensive simulations of the Evans-Edwards model up to chain lengths N = 640 and 10 7 Monte Carlo time steps. Our simulation data show all features predicted by reptation theory, in particular: 1) We find a strong increase in mobility of the endsegment, as compared to the central segment.
2) The simulations of the motion g 2 of the central segment relative to the center-of-mass exhibit all three time regimes predicted by reptation, including the intermediate 't 1/2 'regime.
3) The crossover time T 2 to 't 1/2 '-behavior of g 2 coincides with the crossover time to free diffusion of the center of mass.
These features clearly distinguish reptation from pure Rouse motion or a Rouse model with randomly spaced entropic traps [10,11].
4)
We also found the celebrated t 1/4 -law for motion of an inner segment, and the corresponding t 1/2 -law for the center of mass. However, we need chain lengths N > 100 and correspondingly long times to reliably identify such asymptotic laws.
5)
Within the range of our simulations, the asymptotic N-dependence predicted for the diffusion coefficient D, and the reptation time T 3 is not yet reached. From our results we estimate that chain lengths larger than N = 10 3 are needed to come close to asymptotics for these quantities.
Our work shows that strong preasymptotic effects are an inherent feature of reptation. Such effects in fact dominate the chain length and time range covered by our simulations. The crossover regions here in particular cover all the region T 2 < t < T 3 , masking in g 1 (j, N, t) the expected t 1/2 behavior for an inner segment. Also crossover from the initial behavior to the t 1/4 -law is very slow, a feature which we trace back to the discrete character of the basic dynamics. Indeed, this crossover is so slow that no t 1/4 -regime is seen for the endsegment. As Figs. 5-11 and 13 show, this is well explained by our theory. We indeed find very good agreement between our simulation data and our analytical evaluation of De Gennes' reptation model, also in the regions where the asymptotic predictions fail. To reach this agreement for the large variety of quantities and the large parameter range considered, we have adjusted the four parameters ρ 0 ℓ 2 s , ρ 0 , c 0 and τ = pt/t (M C) within the physically reasonable range. Our analytical predictions are exact, as long as tube renewal is negligible. Deviations between theory and simulations are mainly due to our only approximate analytical evaluation of tube renewal.
Will our numerical and analytical results be stable under a change of the microscopic structure of the system? This is a question of many different facets. It is expected that certain types of time-independent disorder in the surrounding, entropic traps [10] in particular, may ruin reptation all together. Also the consequences of relaxation of the surrounding, like in a melt, at present are not well known. Restricting ourselves to the original reptation scenario, i.e., to motion through an ordered array of obstacles, we should consider the effect of excluded volume interactions among the beads of the chain. In I, Sect. II C, we have given reasons why we believe this to be basically irrelevant here, changing only the embedding of the tube into real space as well as the time scale. More serious is the fact that both in our simulations and our analytical work we use a very narrow tube. Allowing for more degrees of freedom of the chain per unit spacing of the obstacle lattice, we certainly will increase microstructure corrections related to excursions of the bead considered from the center of the tube, and at the same time the discreteness corrections, playing such an important role in the initial time range within our model, will decrease. These effects to some extent may compensate each other. Indeed, in some preliminary simulation using a spring-andbead chain in continuous space and obstacles of finite diameter in a regular lattice of wider spacing, we found results closely resembling the initial time range of our model presented here. (With this other model we, however, were unable to reach chain lengths and a time range where reptation predictions like the t 1/4 -law properly are found. Rather we stayed with effective t 1/3 -behavior familiar from previous work. With regard to the range of chain lengths and times this is completely consistent with the present findings.) We thus expect that certainly on the qualitative and presumably also on a semiquantitative level our results stay valid for wider tubes even in the initial time range. Of course a naive rescaling of our results in such a problem involving several time-and length-scales may be questionable.
We finally should comment on consequences of this work for previous work on polymer motion through more realistic environments. Let us first consider polymer motion through a fixed disordered background of other chains, roughly modelling a gel. There the general folklore tells us that the reptation scenario is valid. To examine this we appeal to ref. [12], where the motion of a long chain (N = 200) was simulated. Fig. 6 of that work indeed shows more than one decade of t 1/4 -behavior for g 2 ( N 2 , N, t), i.e , the motion of the central segment relative to the center-of-mass. However, the characteristic increase of the effective power at the end of the t 1/4 range, as shown in Fig. 13, is missing, this law fairly abruptly ending in saturation. Also g 1 and g 2 deviate rapidly, g 1 reaching only an effective t 1/3 -law. Both these observations are not compatible with our reptation results, and this conclusion is strengthened by a glance to the motion of the center-of-mass, as shown in Fig. 7, ref. [12]. To our feeling this suggests that the published work shows disorder effects [10,11,13] rather than reptation.
With regard to polymer motion in melts the situation is even less clear. Again considering as example some extensive published work [14], we note that many results shown there resemble our results found for short chains. In particular, the ratio g 1 (0, N, t)/g 1 ( N 2 , N, t) reaches values of order 2.7, i.e , larger than for a Rouse chain. For g 2 no tendency towards t 1/2 -behavior is seen, but this might be due to the effective shortness of the chains, reaching only of the order of 6 entanglement lengths. Thus it is not unlikely that these results reflect 'reptational' behavior of a very short effective chain.
Clearly with regard to such more complicated systems much work still has to be done, and we hope to have contributed to this task by clearly exhibiting the quantitative consequences of the reptation model. An allowed hairpin-move and a forbidden kink-jump, illustrated for a chain embedded in a square lattice with obstacles in the centers of the cells. Our simulation uses the 3-dimensional version of this model. The straight lines correspond to power laws g 1 ∼ t 1/3 , t 1/4 , t 1/2 or g 1 ∼ t. The broken lines approximate tube renewal, as explained in sect. 5.2. All lines, except for N ≥ 320, end at T 3 (N). The horizontal broken line gives the intermediate asymptotics, here normalized to 1 by dividing through g ass = 2π −3/4 (ℓ 2 s ρ 0 ) 1/2t1/4 . log 10 (g 1 (j, N, t)/g ass (t)) as function of log 10 (t) for j = 80 and N (M C) = 160, 640. Curves give the full theory.
Figure captions
as shown in I, sect. IV is the only combination relevant in the universal large time regime. With ℓ 2 s ρ o fixed, a range of values 0.15 < ∼ ρ o < ∼ 0.3 yields equivalent fits, and by convention explained below we choose ρ o = 0.22 ,
0 = O(N 0 ) is the microscopic time, till the segment motion feels the constraining environment, T 2 = O(N 2 ) is the equilibration time of the internal motion, and T 3 = O(N 3 ) is the reptation time, needed for a complete destruction of the original tube. The last line of Eq. (3.1) identifies the diffusion constant of the chain as D ∼ 1/N 2 .
,
ignore the difference among N, N (M C) . Comparing to Figs. 2 or 3 we note that this marks the point, where definite deviations from the initial behavior can be seen. Combining Eqs. mark fairly broad crossover regions, the ratio (3.9) must take values of order 10 3 before we can see the intermediate t 1/2 behavior. (Cf. the second line of Eq. 3.1.) Only the longest chain N = 640 shows a sufficiently large ratio T
In Eqs. (I.4.1), (I.4.2), (I.3.12) we have given the rigorous theoretical result for g 1 (j, N, t) for motion within the tube. To keep track of this condition we here denote this result as g i (j, N, t).
Fig. 1
1Fig. 1 An allowed hairpin-move and a forbidden kink-jump, illustrated for a chain embedded in a square lattice with obstacles in the centers of the cells. Our simulation uses the 3-dimensional version of this model.
Fig
C) ,N (M C) ,t (M C)as function of log 10 (t (M C) ) for N (M C) = 20,40,80,160,320,640.
Fig. 3 log
310 R 2 cm , R 2 cm = R cm (t (CM ) ) − R cm (0)2 as function of log 10 t (M C) . Chain lengths N (M C) = 20, 40, 80, 160, 320, 640. The lines indicate the asymptotic behavior Dt (M C) (3.2). The thus determined D(N (M C) ) is further evaluated in Fig. 4.
Fig. 4 (
4N (M C) ) 2 D as function of (N (M C) ) −1/2 . Dots: present work, ellipsoids: ref.[9]. The straight line represents Eq. (3.3).
Fig
of N (M C) . The curve gives the theoretical prediction. The broken line gives the theoretical asymptote. Points from our simulations.
Fig
N, t /g ass (t) as function of log 10 (t). Points are our data. From left: N (M C) = 20, 40, 80, 160, 320, 640. The full lines give the result (4.1), valid for motion inside the tube.
Fig. 7
7As figure 6, but for the fourth momentĝ 1
N 2 ,
2N, t /ĝ ass (t). Curves are for motion inside the tube.
Fig. 8
8log 10 (g 1 (0, N, t)/t 1/4 ) as function of log 10 (t). Data for N (M C) = 20 − 640 (from left). The curves are calculated within the approximation (5.1) and end at T 3 (N).
Fig
N, t as function of log 10 (t). Data and theory like inFig. 8. The broken line indicates the theoretical plateau value 4 √ 2.
Fig. 10 log
1010 (g 1 (j, N, t)/g ass (t)) as function of log 10 (t) for j = 20, 40, 80, and N (M C) = 640. The broken line gives the contribution g i (j, N, t), the full line is our result(5.3).
Fig. 11
11Fig. 11 log 10 (g 1 (j, N, t)/g ass (t)) as function of log 10 (t) for j = 80 and N (M C) = 160, 640. Curves give the full theory.
Fig. 12 g
12cm (t)N (M C) /t 1/2 as function of log 10 (t) for N (M C) = 20, 40, 80, 160, 320, 640 (from left). Values ofT 2 (N), calculated according to Eq. (3.7), are indicated by the arrows.
Fig
ass (t) as function of log 10 (t). Chain lengths are indicated. The fat curves give g i N 2 , N, t /g ass . The thin lines represent the asymptotic law g 2 = R 2 g . The thin broken line illustrates a power-law g 2 ∼ t 1/2 .
Fig. 1 Fig. 2
12Fig. 1
Fig. 3
Fig. 4
Fig
Fig.10 1 2 3 4 5 6
Fig.13
(j, N, t) = ℓ s n 2 (j, t) 1/2 , (2.15)
AcknowledgementThis work was supported by the Deutsche Forschungsgemeinschaft, SFB 'Unordnung und grosse Fluktuationen'. Furthermore financial support of UE by the Dutch research foundation NWO and by the EU-TMR-network 'Patterns, Noise and Chaos' is gratefully acknowledged.
. P G De Gennes, J. Chem. Phys. 55572P.G. De Gennes, J. Chem. Phys. 55, 572 (1971)
M Doi, S F Edwards, The Theory of Polymer Dynamics. OxfordClarendon PressM. Doi, S.F. Edwards, The Theory of Polymer Dynamics, Clarendon Press, Oxford 1986
. T P Lodge, N A Rotstein, S Prager, Advances Chem. Phys. LXXIX. T.P. Lodge, N.A. Rotstein, S. Prager, Advances Chem. Phys. LXXIX, Prigogine and Rice eds., Wiley 1990
. M Doi, J. Pol. Sci. Pol. Phys. Ed. 21667M. Doi, J. Pol. Sci. Pol. Phys. Ed., 21, 667 (1983)
Segment Motion in the Reptation Model of Polymer Dynamics. I. Analytical Investigation. U Ebert, L Schäfer, A Baumgärtner, U. Ebert, L. Schäfer, A. Baumgärtner, Segment Motion in the Reptation Model of Polymer Dynamics. I. Analytical Investigation
. K E Evans, S F Edwards, J. Chem. Soc., Faraday Trans. 21891K.E. Evans, S.F. Edwards, J. Chem. Soc., Faraday Trans. 2, 1891 (1981)
. U Ebert, A Baumgärtner, L Schäfer, Phys. Rev. Lett. 781592U. Ebert, A. Baumgärtner, L. Schäfer, Phys. Rev. Lett. 78, 1592 (1997)
. S K Nechaev, A N Semenov, M K Koleva, Physica A. 140506S.K. Nechaev, A.N. Semenov, M.K. Koleva, Physica A 140, 506 (1987)
. M Deutsch, T L Madden, J. Chem. Phys. 913252M. Deutsch, T.L. Madden, J. Chem. Phys. 91, 3252 (1989)
. M Muthukumar, A Baumgärtner, 221941M. Muthukumar, A. Baumgärtner, Macromol. 22, 1941 (1989)
. U Ebert, A Baumgärtner, L Schäfer, Phys. Rev. E. 53950U. Ebert, A. Baumgärtner, L. Schäfer, Phys. Rev. E 53, 950 (1996)
. K Kremer, Macromolecules. 161632K. Kremer, Macromolecules 16, 1632 (1983)
. G W Slater, S Y Wu, Phys. Rev. Lett. 75164G.W. Slater, S.Y. Wu, Phys. Rev. Lett. 75, 164 (1995)
. K Kremer, G S Grest, J. Chem. Phys. 925057K. Kremer, G.S. Grest, J. Chem. Phys. 92, 5057 (1990)
| []
|
[
"THE SPACES OF GEODESIC TRIANGULATIONS ON SURFACES",
"THE SPACES OF GEODESIC TRIANGULATIONS ON SURFACES"
]
| [
"Yanwen Luo "
]
| []
| []
| In this paper, we study the topology of the space of geodesic triangulations on a surface. We give a new proof of the contractibility of the space of geodesic triangulations for the case of a convex polygon. We also show that the space of geodesic triangulations on a flat torus is homotopy equivalent to a torus. Finally, we give a constructive method to generate geodesic triangulations for star-shaped polygons by minimizing the weighted length energy. | 10.1007/s00454-021-00359-4 | [
"https://arxiv.org/pdf/1910.03070v2.pdf"
]
| 203,902,442 | 1910.03070 | d8987d4ea467aa220af49f2d37396ba3839935dd |
THE SPACES OF GEODESIC TRIANGULATIONS ON SURFACES
Yanwen Luo
THE SPACES OF GEODESIC TRIANGULATIONS ON SURFACES
In this paper, we study the topology of the space of geodesic triangulations on a surface. We give a new proof of the contractibility of the space of geodesic triangulations for the case of a convex polygon. We also show that the space of geodesic triangulations on a flat torus is homotopy equivalent to a torus. Finally, we give a constructive method to generate geodesic triangulations for star-shaped polygons by minimizing the weighted length energy.
Introduction
A triangulation of a fixed combinatorial type of T on a surface with a Riemannian metric (S, g) is a geodesic triangulation if each edge in T is embedded as a geodesic arc in S. We study the space of geodesic triangulations with a fixed combinatorial type on certain surfaces, including a polygonal region in the Euclidean plane and a flat torus. We focus on the following two problems.
(1) The embeddability problem: Given a surface (S, g) with a triangulation T , can we construct a geodesic triangulation with the combinatorial type of T ? In particular, if S is a 2-disk with a triangulation T and we specify the positions of the boundary vertices of T in the plane so that they form a polygon, can we find positions of the interior vertices in the plane to construct a geodesic triangulation of S with the combinatorial type of T ? (2) The contractibility problem: If the space of geodesic triangulations on (S, g) with a fixed combinatorial type of T is not empty, what is the topology of this space? In particular, is it a contractible space?
In this paper, we first give a new proof of the contractibility of the space of geodesic triangulations of a fixed combinatorial type of T for the case of a convex polygon Ω in R 2 . We construct a homotopy equivalence from this space to an affine subspace in Euclidean space using the idea of Tutte's theorem [28], significantly simplifying the previous argument in [4]. We then give a constructive method to produce geodesic triangulations with a fixed combinatorial type for a star-shaped polygon under a mild assumption on the triangulation. This problem has been studied by Hong and Nagamochi [21]. The construction of geodesic triangulations of general polygon has been solved by Xu et al. [29]. Finally, we show that the idea of Tutte's theorem can be generalized to determine the homotopy type of the space of geodesic triangulations of flat tori. These results can be regarded as discrete versions of classical results by Smale [25] and Earle and Eells [12] about surface diffeomorphisms. The group of diffeomorphisms of the 2-disk fixing the boundary, denoted by D 0 (D 2 ), is contractible. Similarly, the group of diffeomorphisms of a torus isotopic to the identity D 0 (T 2 ) is homeomorphic to T 2 × D 0 (T 2 , x 0 ), where D 0 (T 2 , x 0 ) is a contractible space containing the space of all the diffeomorphisms in D 0 (T 2 ) fixing x 0 .
These two problems have been studied in [3,4,6,20], partly because they are closely related to the problem of determining the existence and uniqueness of differentiable structures on a triangulated manifolds [8]. They are also used to produce effective algorithms to solve graph morphing problems in [9,15,26,27].
In the general setting, we can consider a finite n-dimensional simplicial complex T , whose polyhedron |T | is homeomorphic to the n-dimensional disk D n . A geodesic triangulation of D n with the combinatorial type of T is determined by the positions of vertices of T in R n . The space of all such geodesic triangulations is denoted by GT (D n , T ).
We can also interpret this space in terms of homeomorphisms. First assume there exists an initial geodesic triangulation of D n . Then all the other geodesic triangulations are the images of the initial triangulation under simplexwise linear homeomorphisms fixing the boundary vertices of T , determined by the images of the interior vertices of T in R n . The space of all such simplexwise linear homeomorphisms is denoted by L(D n , T ). Ho showed in [20] that it was homeomorphic to GT (D n , T ).
When we restrict to the 2-dimensional case, Cairns [5,6] initiated an investigation of the topology of the space of geodesic triangulations of a geometric triangle in the Euclidean plane and the round 2-sphere.
Theorem 1.1.
If Ω is a geometric triangle with a triangulation T in the plane, then GT (Ω, T ) is path-connected.
Ho [20] proved that this space was simply-connected.
Theorem 1.2.
If Ω is a geometric triangle with a triangulation T in the plane, then GT (Ω, T ) is simply-connected.
A dividing edge in a triangulation T is an interior edge connecting two boundary vertices. Using an induction argument, Bing and Starbird [3] considered the general case of star-shaped polygons.
Theorem 1.3.
If Ω is a star-shaped polygon with a triangulation T in the plane, and T does not contain any dividing edge, then GT (Ω, T ) is non-empty and pathconnected.
Bing and Starbird [3] showed that GT (Ω, T ) was not necessarily path-connected if we didn't assume star-shaped boundary. Bloch, Connelly, and Henderson [4] proved the contractibility of the space of simplexwise linear homeomorphisms of a convex 2-disk. In a very recent paper, Cerf [7] improved the original argument in [4] to give a new proof of the Bloch-Connelly-Henderson theorem.
Theorem 1.4 (Bloch-Connelly-Henderson).
If Ω is a convex polygon with a triangulation T in the plane, and T does not contain any dividing edge, then GT (Ω, T ) is homeomorphic to R 2k , where k is the number of interior vertices of T .
This paper is organized as follows. In Section 2, we recall Tutte's theorem and its generalizations. In Section 3, we give a new proof of the contractibility of GT (Ω, T ) if Ω is a convex polygon using Tutte's method. In Section 4, we give an explicit construction of a geodesic triangulation in GT (Ω, T ) if Ω is a strictly starshaped polygon, assuming the triangulation does not contain any dividing edge. In Section 5, we give a characterization of a special class of geodesic triangulations corresponding to the minimizers of weighted length energies. In Section 6, we show that GT (T 2 , T ) has the homotopy type of the torus. In Section 7, we discuss some open problems about the space of geodesic triangulations for other surfaces.
2. Tutte's embedding and its generalization 2.1. Tutte's embedding for the disk. Given a triangulation T = (V, E, F ) of the 2-disk with the sets of vertices V , edges E and faces F , the 1-skeleton of T is a planar graph. There is no canonical method to embed this graph in the plane. Tutte [28] provided an efficient method to construct a straight-line embedding of a 3vertex-connected planar graph by specifiying the coordinates of vertices of one face as a convex polygon and solving for the coordinates of other vertices with a linear system of equations. Using a discrete maximal principle, Floater [13] proved the same result for triangulations of the 2-disk. Gortler, Gotsman, and Thurston [17] reproved Tutte's theorem with discrete one forms and generalized this results to the case of multiple-connected polygonal regions with appropriate assumptions on the boundaries. Since we are dealing with triangulations, we use the formulation given by Floater [13]. 2.1. Assume T = (V, E, F ) is a triangulation of a convex polygon Ω, and φ is a simplexwise linear homeomorphism from T to R 2 . If φ maps every interior vertex in T into the convex hull of the images of its neighbors, and maps the cyclically ordered boundary vertices of T to the cyclically ordered boundary vertices of Ω, then φ is one to one.
As Floater pointed out, this theorem gave a discrete version of the Rado-Kneser-Choquet theorem about harmonic maps from the disk to a convex polygon. Moreover, it gives a constructive method to produce geodesic triangulations of a convex polygon with the combinatorial type of T as follows.
First assign a positive weight c ij to a directed edge (i, j) ∈Ē, whereĒ is the set of directed edges of T . We normalize the weights by
w ij = c ij j∈N (vi) c ij where the set N (v i )
consists of all the vertices that are neighbors of v i , so that Σ j∈N (vi) w ij = 1 for all i = 1, 2, ..., N I . Notice that we don't impose symmetry condition w ij = w ji . We are given the coordinates
{(b x i , b y i )} |V |
i=N I +1 for all the boundary vertices such that they form a convex polygon Ω in R 2 . Then we can solve the following linear system where N I = |V I | is the size of the set of interior vertices V I , and N B = |V B | is the size of the set of boundary vertices V B . The solution to this linear system produces the coordinates of all the interior vertices in R 2 . We put the vertices in the positions given by their coordinates, and connect the vertices based on the combinatorics of the triangulation T . Tutte's theorem claims that the result is a geodesic triangulation of Ω with the combinatorial type of T .
j∈N (vi) w ij x j = x i i = 1, 2, ...N I ; j∈N (vi) w ij y j = y i i = 1,
The linear system above implies that the x-coordinate(or y-coordinate) of one interior vertex is a convex combination of the x-coordinates(or y-coordinates) of its neighbors. Notice that the coefficient matrix of this system is not necessarily symmetric but it is diagonally dominant, so the solution exists uniquely.
Tutte's theorem solves the embeddability problem for a triangulation of a convex polygon. We can vary the coefficients w ij to construct families of geodesic triangulations of a convex polygon. We will see that this idea will lead to a simple proof of the contractibility of the space of geodesic triangulations.
2.2.
Tutte's embedding for flat tori. In the case of a flat torus (T 2 , g) with a triangulation T , the situation is similar to the disk case, because we can lift a geodesic triangulation of (T 2 , g) to the universal covering R 2 . Using the method in Gu and Yau [18] and Gortler, Gotsman, and Thurston [17], we can compute the harmonic one form to produce geodesic triangulations on T 2 with a fixed combinatorial type of T . Specifically, we first assign a positive weight c ij to each directed edge in T and normalize the weights as in the case of the 2-disk to produce positive weights w ij satisfying Σ j∈N (vi) w ij = 1 for all i = 1, 2, ..., N I . Instead of computing the coordinates for vertices in T directly, we compute the harmonic one forms ∆z :Ē → R by solving the following system of equations
∆z ij = −∆z ji for all directed edges (i, j) ∈Ē; vj ∈N (vi) w ij ∆z ij = 0 for all vertices v i ∈ V ;
∆z ij + ∆z jk + ∆z ki = 0 for all faces f ijk ∈ F .
(2.1)
Gortler, Gotsman, and Thurston [17] showed that this linear system had exactly two independent solutions, denoted by ∆x and ∆y. Then we can assign a vertex v 0 in V to the origin in R 2 and compute the coordinates for other vertices v by summing the entries of the discrete one forms along a path p consisting a sequence of directed edges in T from v 0 to v (x 0 , y 0 ) = (0, 0) and (x i , y i ) = (
(i,j)∈p ∆x ij , (i,j)∈p ∆y ij ) for other vertices. (2.2)
Since the discrete form is closed, the coordinates for (x i , y i ) are independent of the choice of the paths.
Theorem 2.2 ([17]
). Given a triangulation T of (T 2 , g) whose 1-skeleton is a 3vertex-connected graph, the two linearly independent solutions of the system above produce embeddings of any sub-triangulations T of T with the topology of a disk.
Gortler, Gotsman, and Thurston pointed out that this statement of local injectivity produced a globally injective map from the universal cover of the torus to the Euclidean plane. We can generate families of equivariant geodesic triangulations in R 2 projecting to geodesic triangulations on (T 2 , g) by varying the weights w ij in the linear system. If we choose a different pair of harmonic one forms ∆x and ∆y , then the resulting geodesic triangulation in R 2 is the image of the original geodesic triangulation under an affine transformation. This method was extended by Aigerman and Lipman [1] to Euclidean orbifolds with spherical topology.
Geodesic Triangulations of the 2-Disk with Convex Boundary
In this section, we define the space of geodesic triangulations for the disk, and give a new proof of the contractibility of GT (Ω, T ) if Ω is a convex polygon.
{v i } |V | i=N I +1 of T in R 2 with coordinates {(b x i , b y i )} |V |
i=N I +1 and connect them based on T such that they form a convex polygon Ω in R 2 . The space of geodesic triangulations GT (Ω, T ) is defined as the set of all the geodesic triangulations of Ω with the combinatorial type of T whose boundary vertices
{v i } |V | i=N I +1 have the corresponding coordinates {(b x i , b y i )} |V | i=N I +1 .
Every geodesic triangulation is uniquely determined by the positions of the interior vertices in V I , so its topology is the subspace topology induced by Ω |V I | ⊂ R 2|V I | . Notice that this space could be empty if the boundary is complicated. For instance, if the polygon is not star-shaped, then there doesn't exist any geodesic embedding of a triangulation with only one interior vertex. Nevertheless, Tutte's theorem shows that this space is not empty if the polygonal region Ω is convex.
Let us consider the topology of the space GT (Ω, T ) where Ω is a fixed convex polygon in R 2 . Let E I be the set of interior edges in T and E B be the set of boundary edges in T .
Definition 3.2. Given a triangulation T of Ω with coordinates of the boundary vertices
{(b x i , b y i )} |V | i=N I +1 , define W to be the space of positive weights (w ij ) ∈ R 2|E I | on the set of directed edges of T satisfying the normalization condition j∈N (vi) w ij = 1 for all v i ∈ V I . The
Tutte map Ψ sends the weights in W to the solution to the linear system in Tutte's theorem with coefficients (w ij ) and
{(b x i , b y i )}. The weight space W is a 2|E I | − |V I | dimensional affine manifold in R 2E I . The image GT (Ω, T ) is a 2|V I | dimensional manifold. By Euler characteristic χ(Ω) = |V | − |E| + |F | = 1 and the requirement of simplicial complex 3|F | = 2|E I | + |E B |, we can deduce that |E I | − 3|V I | = |E B | − 3.
Hence the dimension of the space of weights W is not lower than the dimension of GT (Ω, T ). Proof. By Tutte's theorem, for any (w ij ) ∈ W , the solution to the linear system generates a geodesic triangulation of T . The continuity follows from the continuous dependence of the solutions on the coefficients in the linear system. To show surjectivity, given a geodesic triangulation τ , any interior vertex v i in τ is in the convex hull of its neighbors. Then we can construct the weights (w ij ) for a geodesic triangulation τ using the mean value coordinates defined in [14] below. The mean value coordinates on the directed edges of a geodesic triangulation are given by
w ij = c ij j∈N (vi) c ij and c ij = tan(α j i−1 /2) + tan(α j i /2) ||v i − v j ||
where the two angles α j i−1 and α j i at v i sharing the edge (i, j) ∈ E I in the Figure 3. The mean value coordinates provide a smooth map from GT (Ω, T ) to W .
There are various ways to construct the weights from a given geodesic triangulation other than the mean value coordinates. Floater proposed another construction by taking the average of barycentric coordinates [15]. An alternative method to construct weights from a geodesic triangulation τ is to take the center of mass of the space of weights (w ij ) ∈ W such that Ψ((w ij )) = τ . This subspace is a convex subspace of W and the center of mass is well-defined. All three methods agree with the barycentric coordinates of a vertex when the star of this vertex is a triangle.
Definition 3.4. The map σ : GT (Ω, T ) → W sends a geodesic triangulation τ to weights (w ij ) in W determined by the mean value coordinates.
Theorem 3.5.
If Ω is a convex polygon in R 2 with a triangulation T , the space of geodesic triangulations GT (Ω, T ) is contractible.
Proof. The map σ is continuous. By Tutte's theorem, Ψ(σ(τ )) = τ for any τ ∈ GT (D 2 , T ), so the map σ is a global section of Ψ from GT (Ω, T ) to W . We need to show σ•Ψ is homotopic to the identity map on W . From the previous discussion, we know that W is an affine manifold in R 2|E I | , so we can use the isotopy (1−t)σ•Ψ+t1 where 1 is the identity map on W . Since W is a contractible space, GT (Ω, T ) is contractible by this homotopy equivalence.
Although we mainly consider triangulations in this paper, this argument can be generalized to the case of the convex geodesic embedding of a 3-vertex-connected graph G, which is defined to be a geodesic embedding of G in the plane such that all its faces are convex. Then using the same idea of Tutte's theorem, we can show the contractiblity of the space of convex geodesic triangulations of G with the prescribed convex boundary Ω.
We can extend this result to convex polygons in other geometries of constant curvature. More precisely, if we have a convex polygon in the hyperbolic plane or a convex polygon in the round 2-sphere contained in a hemisphere, we can reduce it to the case of convex polygon in the Euclidean plane.
For a hyperbolic convex polygon Ω H , we embed it in the Klein model of the hyperbolic plane so that all the edges of Ω H are straight arcs in the Euclidean metric, inducing a convex polygon Ω in the Euclidean plane. Given a triangulation T of Ω H , there is a bijection between the space of all hyperbolic geodesic triangulations of Ω H represented in the Klein model and GT (Ω, T ), induced by the identity map on Ω |V I | . Hence the space of hyperbolic geodesic triangulations GT (Ω H , T ) is also contractible.
Similarly, if Ω S is a spherical convex polygon contained in a hemisphere with a triangulation T , we can apply the gnomonic transformation from the center of the 2-sphere to the plane P tangent to the center of the hemisphere containing Ω S . Then Ω S is mapped to a convex polygon Ω in the plane P under the gnomonic transformation. This projective transformation keeps the incidence and maps geodesic arcs in hemisphere to the straight arcs in P . Hence it induces a bijection between the space of spherical geodesic triangulations of Ω S with combinatorial type of T and GT (Ω, T ) in P .
Corollary 3.6. Assume Ω is a hyperbolic convex polygon, or a spherical convex polygon contained in a hemisphere, and T is a triangulation of Ω. Then the space of geodesic triangulations GT (Ω, T ) is contractible.
Geodesic Triangulations of the 2-Disk with star-shaped Boundary
In this section, we consider a star-shaped subset Ω of R 2 . An eye of a star-shaped region Ω is a point p in Ω such that for any other point q in Ω the line segment l(t) = tp + (1 − t)q lies inside Ω. The set of eyes of Ω is called the kernel of Ω. A set is called strictly star-shaped if the interior of the kernel is not empty.
In the case of polygons in R 2 , the kernel is the intersection of a family of closed half-spaces, each defined by the line passing one boundary edge of Ω. Every closed half space contains a half disk in Ω centered at one point on its corresponding boundary edge. If the star-shaped polygon is strict, the intersection of the open half-spaces is not empty. This means that we can pick an eye e with a neighborhood U of e such that if q ∈ U , then q is also an eye of Ω.
The first question to address is how to construct a geodesic triangulation of a strictly star-shaped polygon Ω with a combinatorial type of T . As Bing and Starbird [3] pointed out, it was not always possible if there was a dividing edge. Assuming there was no dividing edge in T , they proved that such geodesic triangulations existed by induction.
We give an explicit method to produce a geodesic triangulation for a strictly starshaped polygon. We can regard all the edges e ij in T as ideal springs with Hook constants w ij . Fixing the boundary vertices, the equilibrium state corresponds to the critical point of the weighted length energy defined as
E = 1 2 eij ∈E I w ij L 2 ij
where L ij is the length of the edge connecting v i and v j . This energy can be regarded as a discrete version of the Dirichlet energy [10,19], and it has a unique minimizer corresponding to the equilibrium state. Tutte's theorem guarantees that the equilibrium state is a geodesic embedding of T if the boundary is a convex polygon. Given a triangulation T of a fixed strictly star-shaped polygon Ω, assume that the weighted length energy E satisfies eij ∈E I w ij = 1. Notice that if the polygon is star-shaped but not convex, we can't choose arbitrary weights to generate a geodesic embedding of T . Hence we need to assign weights carefully to avoid singularities such as intersections of edges and degenerate triangles.
The idea is to distribute more and more weights to the interior edges connecting two interior vertices. As the weights for interior edges connecting two interior vertices tend to 1, all the interior vertices will concentrate at a certain point. If we can choose this point to be an eye of the polygon, we will produce an geodesic embedding of T of Ω.
Fix a polygon Ω with a triangulation T and the coordinates
{(b x j , b y j )} |V |
i=N I +1 for its boundary vertices. Given a set of coordinates in R 2 for all the interior vertices
{(x i , y i )} N I i=1 ,
we define a family of weighted length energies with a parameter 0 < < 1 as
E( ) = 1 − 2M I eij ∈E I I L 2 ij + 2M B eij ∈E B I L 2 ij where E B I isL 2 ij = (x i − x j ) 2 + (y i − y j ) 2 .
As → 0, most weights are assigned to interior edges in E I I , forcing all the interior vertices of the minimizer of E( ) to concentrate to one point.
lim →0 v I i = lim →0 (x I i ( ), y I i ( )) = (x 0 , y 0 ) = v 0 where v 0 = N B j=1 λ j v B j and λ j = deg(v B j ) − 2 j deg(v B j − 2) = deg(v B j ) − 2 M B , assuming deg(v) is the degree of the vertex v in T .
Proof. The minimizer of E( ) satisfies the following linear system formed by taking derivatives with respect to x i and y i for all i = 1, 2, ...., N I
1 − M I i∈N (v I k ) (v I k − v I i ) + M B j∈N (v I k ) (v I k − v B j ) = 0 for k = 1, 2, · · · N I .
Notice that we separate the interior vertices v I i ∈ V I and the boundary vertices
W (i, j) = 1 M B if v I i is connected to v B j ; 0 if v I i is not connected to v B j .
The matrix S is defined as
S(i, j)( ) = − i =k S(i, k) + N B k=1 W (i, k) if i = j; − 1− M I if v I i is connected to v I j ; 0 if v I i is not connected to v I j .
Notice that for the first N I rows in M ( ), the sums of their respective entries are zero, and all the off-diagonal terms are non-positive. The matrix W represents the relations of the boundary vertices with the interior vertices, and the sum of all its entries equals one. The matrix S( ) is symmetric, strictly diagonally-dominant, and the sum of all its entries equals .
To show the limiting behavior of the solution to the system as → 0, we need the lemma below.
v = (1/ √ N I , 1/ √ N I , ..., 1/ √ N I ) T .
First, we show that λ = 0 is a simple eigenvalue for S. If S has another eigenvector u = (u 1 , u 2 , ..., u N I ) T corresponding to λ = 0 not parallel to v, then it is orthogonal to v so i u i = 0. Without loss of generality, we assume that u 1 > 0 achieves the maximal absolute value among u i . Then we have
Su = 0 ⇒ N I i=1 S(1, i)u i = 0 ⇒ S(1, 1)u 1 = − N I i=2 S(1, i)u i .
Notice that S is weakly diagonally dominant, S(1, 1) > 0, and S(1, i) ≤ 0, so we can deduce that The derivative of a simple eigenvalue of a symmetric matrix is given in [23] by
S(1, 1)u 1 ≥ − N I i=2 S(1, i)u 1 ⇒ − N I i=2 S(1, i)(u i − u 1 ) ≥ 0.dλ d (0) = d(v T S( )v) d = d( /N I ) = 1 N I .
Finally, we are ready to prove the lemma. Since S( ) is symmetric, we have the diagonalization with an orthonormal matrix P ( )
S −1 ( ) = P ( ) λ −1 1 ( ) λ −1 2 ( ) . . . λ −1 N I ( ) P T ( ).
Without loss of generality, we assume the first eigenvalue lim →0 λ 1 ( ) = 0. Given any 0 < δ < 1, we can choose small > 0 such that the following three inequality holds λ i ( ) > C > 0 for i = 2, 3, ..., N i ;
||P ( ) λ −1 1 ( ) λ −1 2 ( ) . . . λ −1 N I ( ) P T ( )−P ( ) N I 0 . . . 0 P T ( )|| 2 < δ;
and the eigenvector v 1 ( ) of S( ) corresponding to the eigenvector λ 1 ( ) satisfies
||v 1 ( ) − 1 √ N I 1 1 . . . 1 || ∞ < δ.
Notice that the columns of P ( ) = (v 1 , v 2 , ..., v N I ) form a set of the orthonormal basis formed by eigenvectors v i , where the first eigenvector v 1 ( ) approaches v = (1/ √ N I , ..., 1/ √ N I ). Then we have
|| S −1 ( ) − 1|| 2 ≤||P ( ) λ −1 1 ( ) . . . λ −1 N I ( ) P T ( ) − P ( ) N I . . . 0 P T ( )|| 2 +||P ( ) N I . . . 0 P T ( ) − 1|| 2 ≤ δ + ||N I v T 1 ( )v 1 ( ) − 1|| 2 .
Notice that
||N I v T 1 ( )v 1 ( ) − 1|| 2 ≤ 2N 2 I δ. Hence || S −1 ( ) − 1|| 2 ≤ (1 + 2N 2 I )δ.
The inverse of the matrix M ( ) can be represented as
M −1 ( ) = S −1 ( ) S −1 ( )W 0 I .
Then the solution of the linear system M ( )x = b x is x = M −1 ( )b x , whose first N I entries are given by
x I 1 ( ) x I 2 ( ) . . . x I N I ( ) = S −1 ( )W x B 1 x B 2 . . . x B N B .
As → 0, the solution approaches 1W x B . All the x I i approach the same point lim →0
x
I i = (1, ..., 1)W x B 1 x B 2 . . . x B N B = N B i=1 deg(v B i ) − 2 N B x B i .
A similar result holds for y-coordinates of the interior vertices. Hence we conclude the limit of the solutions lim →0 v I i = v 0 . Notice that the matrix W can be replaced with more general matrices. The original energy E( ) distributes percentage of weights evenly to all the edges in E B I . We can define new energies by redistributing the weights
E W ( ) = 1 − 2M I eij ∈E I I L 2 ij + 2 eij ∈E B I w ij L 2 ij
with w ij > 0 and (i,j)∈E B I w ij = 1. The matrix W is defined as
W (i, j) = w ij if v I i is connected to v B j ; 0 if v I i is not connected to v B j . The limit of the solution is v 0 = N B j=1 λ j v B j where λ j = N I i=1 w ij .
To construct a geodesic triangulation, pick an eye e of Ω such that e =
N B i=1 λ i v B i where λ i > 0 and N B i=1 λ i = 1, then define W (i, j) = w ij = λi deg(v B j )−2 if v I i is connected to v B j ; 0
if v I i is not connected to v B j . and the corresponding energy E W ( ). The remaining task is to show that the critical point of E W ( ) is a geodesic embedding of T for small .
If Ω is not convex, there exists a reflex vertice, defined as a boundary vertice of Ω where the turning angle is negative. We use the result by Gortler, Gotsman and Thurston [17] to show that the minimizer of E W ( ) constructed above is an embedding for some > 0. Proof. Theorem 4.3 implies that we only need to check that the reflex vertices v r are in the convex hulls of their respective neighbors.
Choose an small enough such that the vertices of the critical point of E W ( ) defined above are eyes of Ω. Assume v r is a reflexive point on the boundary of Ω.
Let v be an interior vertex of the geodesic triangulation in the star of v r , and let v 1 and v 2 be the two boundary vertices connecting to v r . Since there is no dividing edge in T , v 1 and v 2 are the only boundary vertices connecting to v r . We want to show that v r is in the convex hull of its neighbors.
Assume the opposite, then all the edges connecting to v r lie in a closed half plane, so the inner product of any pair of three vectors
− − → v r v 1 , − − → v r v 2 and − → v r v is non-negative.
But the inner angle at v r is larger than π, then either angle ∠v 1 v r v or ∠vv r v 2 is strictly larger than than π 2 , which means one inner product is negative. This leads to a contradiction.
This result solves the embeddability problem for strictly star-shaped polygons Ω with a triangulation T . We can construct a geodesic triangulation of Ω as follows. Pick an eye e of Ω with the coefficients W defined above. Then choose = 1/2 and solve the linear system corresponding to the critical point of E W (1/2). If the solution is not an embedding, replace by /2 and continue.
We conjecture that the space of geodesic triangulations for strictly star-shaped polygon with a fixed combinatorial type is contractible.
A Characterization of Geodesic Triangulations From Energies
We use the weighted length energy to generate families of geodesic triangulations for both convex polygons and strictly star-shaped polygons in the previous sections. One interesting question is whether we can realize any given geodesic triangulation in GT (Ω, T ) as the critical point of certain weighted length energy by choosing appropriate weights. Unfortunately, this is not the case, given the example in Eades, Healy, and Nikolov [11].
Example 5.1.
We have two equilateral triangles with different sizes determined by the vertices below and the triangulation given in Figure 4. The weighted length energy is given by
v 1 = 0 2 , v 2 = − √ 3 −1 , v 3 = √ 3 −1 , v 4 = − sin cos , v 5 = −E( ) = 3((2 − cos ) 2 + sin 2 + (2 + √ 3 2 sin + 1 2 cos ) 2 + (− √ 3 2 cos + 1 2 sin ) 2 )
= 30 − 6 cos + 6 √ 3 sin . Notice that when is close to zero, E( ) is a monotonic increasing function with respect to . Moreover, the length of every interior edge decreases or at least stays with the same length when → 0 + . Then it can't be a critical point of any energy in the form of E = 1 2 w ij L 2 ij . The triangulation in Figure 4 is not a critical point of any energy, because we can construct a vector field to move the interior vertices of the triangulation so that no edge is lengthened. We can show that this condition leads to a necessary and sufficient condition for a geodesic triangulation to be realized as the minimizer of a weighted length energy. Eades, Healy, and Nikolov [11] gave another characterization for this class of geodesic triangulations.
Lemma 5.2. A geodesic triangulation τ of a polygon Ω can be realized by the critical point of a weighted length energy if and only if any vector field at the set of interior vertices of τ will shorten at least one edge and lengthen at least one edge.
Proof. Let (x i , y i ) be the coordinate for vertex v i of a given geodesic triangulation in R 2 . If there exists a vector field not increasing any edge length, then all the edge lengths will decrease or at least stay with the same length as we move the vertices of the geodesic triangulation along the vector field. Then it can't be a critical point of E for any choice of w ij .
Conversely, assume that we are given a geodesic triangulation τ such that any vector field at interior vertices of τ will increase the length of some edge and decrease the length of another edge. We want to show that we can find some positive weights w ij for every edge in E I such that τ is the critical point of the weighted length energy
E = 1 2 eij ∈E I w ij L 2 ij .
To find these weights, consider the linear system corresponding to the critical point of weighted length energy, denoted by V w = 0,
j∈N (vi) v T ij w ij = 0 i = 1, ..., N I
where we regard w ij as the unknowns for the system and v ij = −v ji = (x i − x j , y i − y j ) are determined by τ . For each interior vertex v i , we have two equations corresponding to the x coordinate and the y coordinate of v i , so V is a 2N I × |E I | matrix. If w ij is the weight of an interior edge connecting two interior vertices, then the column c ij of V corresponding to w ij is
(0, ..., 0, v ij , 0, ..., 0, v ji , 0, ..., 0) T .
If w ij is the weight of an interior edge connecting one interior vertex v i with a boundary vertex v j , then the column c ij of V corresponding to w ij is
(0, ..., 0, v ij , 0, ..., 0) T .
To show the existence of a positive solution, consider an arbitrary vector field X defined on the set of interior vertices of τ . It can be represented as (α 1 , α 2 , ..., α N I ) T where α i is a row vector in R 2 . Then consider the derivative of the length of an interior edge connecting two interior vertices under X
dL 2 ij dt t=0 = d dt t=0 (x i +α x i t−x j −α x j t) 2 +(y i +α y i t−y j −α y j t) 2 = v ij ·(α i −α j ) = 2X·c ij .
Similarly for an interior edge connecting one interior vertex v i with one boundary vertex v j , we have
dL 2 ij dt t=0 = d dt t=0 (x i + α x i t − x j ) 2 + (y i + α y i t − y j ) 2 = v ij · α i = 2X · c ij .
By assumption, we know that X shortens one edge with weight w ij and lengthens another with weight w ij . Hence the corresponding columns c ij and c ij produce different signs, namely X · c ij and X · c ij has different signs. This means that all the entries of X T V can't have the same sign. Since X is arbitrary, by Farkas's alternative [24], V w = 0 has a positive solution (w ij ).
Geodesic Triangulations of Flat tori
In this section, we study the space of geodesic triangulations of a fixed combinatorial type of T on a flat torus (T 2 , g). Here we assume a triangulation is a simplical complex. Notice that by the result of Colin de Verdiere [10], and Hass and Scott [19], the space GT (T 2 , g, T ) is not empty, so the embeddability problem for flat tori is resolved. We define the space of geodesic triangulations using simplexwise linear homeomorphisms and study its topology. Definition 6.1. Given a triangulation T = (V, E, F ) of a flat torus (T 2 , g), let τ 0 be a geodesic triangulation of the combinatorial type of T , then the space of geodesic triangulations is the set of images of τ 0 under all simplexwise linear homeomorphisms isotopic to the identity of T 2 . This space is denoted by GT (T 2 , g, T ).
The image of τ 0 under a simplexwise linear homeomorphism is a geodesic triangulation τ of (T 2 , g), because simplexwise linear homeomorphisms map triangulations to triangulations and geodesic arcs in the 1-skeleton of T to geodesic arcs.
For flat tori, GT (T 2 , g, T ) is not simply a submanifold of (T 2 ) |V | , because the positions of the vertices can't uniquely determine a geodesic triangulation. Specifically, fixing two points p and q on a flat torus, there exist many geodesic arcs connecting them. These arcs are not necessarily homotopic to each other by a homotopy fixing two endpoints. They are lifted to the straight arcs connecting a preimage of p ∈ R 2 with different preimages of q ∈ R 2 . Even if the positions of the vertices of two geodesic triangulations coincide, the corresponding edge can be different geodesic arcs. Nevertheless, we can show that GT (T 2 , g, T ) is a topological manifold.
Lemma 6.2. GT (T 2 , g, T ) is a topological manifold of dimension 2|V |.
Proof. We construct local charts to cover GT (T 2 , g, T ). The idea is that we can perturb the vertices of a geodesic triangulation to construct another.
For any given τ ∈ GT (T 2 , g, T ), choose any vertex v of τ and lift its star to the universal cover. Then the image of the star in the universal cover is a strictly starshaped polygon Ω in R 2 and the preimageṽ of v is an eye of Ω. Since the kernel of the polygon Ω is an open set, there exists an open neighborhood U ofṽ such that p ∈ U is an eye of Ω. So we can connect p with boundary vertices of Ω and project it to the flat torus to form another geodesic triangulation τ . The simplexwise linear homeomorphism corresponding to τ is linearly isotopic to the simplexwise linear homeomorphism corresponding to τ by the linear isotopy between p andṽ in Ω. We can project this linear isotopy to construct an isotopy between these two geodesic triangulations on T 2 .
Similarly, we can perturb other vertices v i of τ in their neighborhoods U i to generate other geodesic triangulations and the corresponding simplexwise linear homeomorphisms. We choose δ > 0 and choose one point
p i from B δ (v i ) ⊂ U i ,
the open disk centered at v i with radius δ in T 2 , and connect them with geodesic arcs based on the combinatorial type of T . We can choose δ small enough such that the if we pick one point from each B δ (v i ) and connect them based on T , we produce a geodesic triangulation. Hence we construct a chart
ψ τ : vi∈V B δ (v i ) → GT (T 2 , g, T )
covering the geodesic triangulation τ . We can construct charts for any element in GT (T 2 , g, T ) to cover the whole space. The product vi∈V B δ (v i ) is homeomorphic to R 2|V | , so the dimension of the space is 2|V |.
Notice that our definition for the space of geodesic triangulations requires a flat metric. For the polygons in the plane, the metric is the Euclidean metric. On the contrary, there are infinite flat metrics on T 2 representing different elements in the Teichmüller space of flat tori with unit area. We show that the topology of GT (T 2 , g, T ) is independent of the choice of flat metrics. Lemma 6.3. Assume T is a triangulation of T 2 and let g 1 and g 2 be two flat metrics representing two distinct elements in the Teichmüller space of flat tori with unit area. Then GT (T 2 , g 1 , T ) is homeomorphic to GT (T 2 , g 2 , T ).
Proof. We can construct a map between GT (T 2 , g 1 , T ) and GT (T 2 , g 2 , T ) explicitly using the universal covering R 2 . Since the Teichmüller space T (T 2 ) is parametrized by the upper half plane {z ∈ C|Imz > 0}, let z 1 and z 2 correspond to the metrics g 1 and g 2 . This means that (T 2 , g 1 ) is isometric to the quotient manifold R 2 / < α, β >, where α and β are two isometries of R 2 given by α(z) = z + 1 and β(z) = z + z 1 , and < α, β > is the group of isometries generated by α and β. Similarly, (T 2 , g 2 ) is isometric to R 2 / < α, γ >, where γ(z) = z + z 2 . Then there is an orientation preserving linear mapf : R 2 → R 2 fixing 0 and 1, sending z 1 to z 2 . This map induces a diffeomorphism f :
(T 2 , g 1 ) → (T 2 , g 2 ) becausef • β = γ.
Notice that the linear map sends straight lines to straight lines and keeps the incidence. Assume τ 1 is a geodesic triangulation on (T 2 , g 1 ) and we lift it to the universal covering toτ 1 . Thenf (τ 1 ) is a geodesic triangulation of R 2 and descends to a geodesic triangulation f (τ 1 ) on (T 2 , g 2 ). Hence we have a bijective map between GT (T 2 , g 1 , T ) and GT (T 2 , g 2 , T ) induced byf and its inverse. These two maps are continuous, hence we have a homeomorphism between the two spaces.
Choose v 0 ∈ V and define GT (T 2 , g, T, v 0 , x) to be the space of geodesic triangulations with the location of v 0 fixed at the point x ∈ T 2 . To find the homotopy type of GT (T 2 , g, T, v 0 , x), we construct a similar Tutte map Ψ from the space of weights W to GT (T 2 , g, T, v 0 , x) using Theorem 2.2. However, the definition of Tutte map is more complicated than the definition for the case of polygons in the plane.
Without loss of generality, assume that (T 2 , g) is isometric to R 2 / < u, v > such that u(z) = z + 1 and v(z) = z + z 1
where Imz 1 > 0. Let p 1 and p 2 be two loops consisting of sequences of directed edges in T based at v 0 homotopic to the meridian and the longitude of T 2 respectively. If we solve the system (2.1) and choose harmonic one forms ∆x and ∆y randomly, the resulting geodesic triangulation produced by the formula (2.2) in the plane might not project to a geodesic triangulation on (T 2 , g).
To find the pair of harmonic one forms ∆x and ∆y which produce a geodesic triangulation in (T 2 , g), assume that (α, β) = ( Notice that these two vectors (α, β) and (γ, δ) can't be the zero vector by Theorem 2.2. Then there exists a unique orientation preserving linear transformation A in R 2 such that A sends (α, β) to (1, 0) and (γ, δ) to (Rez 1 , Imz 1 ), namely,
A = a b c d and a b c d α γ β δ = 1 Rez 1 0 Imz 1 .
Then define ∆x = a∆x + b∆y and ∆y = c∆x + d∆y, and we have
(1, 0) = ( (i,j)∈p1 ∆x ij , (i,j)∈p1
∆y ij ) and (Rez 1 , Imz 1 ) = (
(i,j)∈p2 ∆x ij , (i,j)∈p2 ∆y ij ).
This means that by the formula (2.2), ∆x and ∆y produce an equivariant geodesic triangulation in the plane, projecting to a geodesic triangulation on (T 2 , g).
Hence we can define the harmonic one forms ∆x and ∆y as follows. Combine the system of equations (2.1) with the following two equations
(1, 0) = ( (i,j)∈p1 ∆z ij , (i,j)∈p1 ∆z ij ).
Then we can find a unique solution ∆x to this system by the discussion above. Similarly, combine the system of equations (2.1) with another two equations (Rez 1 , Imz 1 ) = (
(i,j)∈p2 ∆z ij , (i,j)∈p2 ∆z ij ),
and we can find a unique solution ∆y . Notice that ∆x and ∆y are uniquely determined by an element (w ij ) in the weight space W . Then we can have a welldefined Tutte map. Definition 6.4. Assume that T is a triangulation of T 2 , (T 2 , g) is a flat torus isometric to R 2 / < u, v > such that
u(z) = z + 1 and v(z) = z + z 1
where Imz 1 > 0. The Tutte map Ψ for (T 2 , g) sends the weights (w ij ) ∈ W to the geodesic triangulation τ in GT (T 2 , g, T, v 0 , x) constructed by projecting the equivariant geodesic triangulation produced by ∆x and ∆y defined above based on the formula (2.2) to (T 2 , g).
Then we can compute the mean value coordinates for each directed edge and define a similar section σ : GT (T 2 , g, T, v 0 , x) to W as the case of convex polygons. To apply theorem 2.2, we need to show that the 1-skeleton of any triangulation on a torus is 3-vertex-connected. Lemma 6.5. Given a triangulation T of T 2 , the 1-skeleton of T is a 3-vertexconnected graph.
Proof. We need to check that if we remove two vertices and all the edges and faces containing one of the two vertices, the remaining space is connected. Choose any two vertices v 1 and v 2 in T , then remove them and all the edges and faces containing v 1 or v 2 from T . Let S denote the remaining space.
If v 1 is not in the star of v 2 in T , then S is homotopic to a twice punctured torus, because we remove two disjoint open disks from the torus T 2 . This fact follows from a computation using Euler characteristic χ(S). The open star of a vertex v in T has one vertex, E edges and F faces with E = F , so we remove two surfaces with Euler characteristic one, namely two disks. Then χ(S) = 0 − 2 = −2. If S is disconnected, let S 1 and S 2 be two connected components. Then either one of S 1 and S 2 contains two boundary components, or each of S 1 and S 2 contains one boundary component. In either case, we will produce a disconnect surface instead of a torus when gluing the two open 2-disks back to S.
If v 1 is in the star of v 2 in T , then we remove a open disk from T . This is because we remove one vertex, E 1 edges, and F 1 faces from the star of v 1 , and one vertex, E 2 edges, and F 2 faces from the star of v 2 with E 1 = F 1 and E 2 = F 2 . Notice that the intersection of the stars of v 1 and v 2 contains one edge and two faces. Hence we remove two vertices, E 1 + E 2 − 1 edges and F 1 + F 2 − 2 faces, which combines to form a disk. Hence the remaining space is homotopic to a torus with one puncture, which is a connected surface.
In both cases, the remaining space is connected, so the 1-skeleton of T is 3vertex-connected. Theorem 6.6. Let T be a triangulation of a flat torus T 2 , then GT (T 2 , g, T, v 0 , x) is contractible and GT (T 2 , g, T ) = GT (T 2 , g, T, v 0 , x) × T 2 .
Proof. Given a geodesic triangulation in GT (T 2 , T, g), we can move the image of v 0 to x using an isometry of the flat torus isotopic to the identity. The Tutte map Ψ from W to GT (T 2 , g, T, v 0 , x) is well-defined and continuous. We have the continuous section σ from GT (T 2 , g, T, v 0 , x) to W defined by the mean value coordinates. Then Ψ(σ(τ )) = τ for any τ ∈ GT (T 2 , g, T, v 0 , x), and σ • Ψ is homotopic to the identity map on W . Hence Ψ and σ provide the homotopy equivalence so GT (T 2 , g, T, v 0 , x) is contractible. Since the group of isometries of a flat torus isotopic to the identity is homeomorphic to T 2 , we conclude that GT (T 2 , T, g) is homeomorphic to GT (T 2 , g, T, v 0 , x) × T 2 .
Notice that this is an analogous result for the smooth counterparts: D 0 (T 2 ) is homeomorphic to D 0 (T 2 ; x) × T 2 , where D 0 (T 2 ; x) is contractible.
Further Work
Remaining open is the contractibility problem for the spaces of geodesic triangulations on the star-shaped polygons, the round 2-sphere and hyperbolic surfaces. The space of geodesic triangulations on the 2-sphere was studies by Awartani-Henderson [2]. The conjecture is that GT (S 2 , T ) is homotopic to SO(3). For hyperbolic surfaces S, Hass and Scott [19] showed that GT (S, T ) was contractible if T is an 1-vertex triangulation. It is conjectured that GT (S, T ) is contractible.
Another direction to generalize the result was proposed by Luo [22]. Instead of the space of geodesic triangulations of a flat convex polygon with a fixed combinatorial type, we can study the space of geodesic triangulations of a convex polygon Ω with prescribed curvatures at the interior vertices of the triangulation. Then GT (Ω, T ) is the special case when the prescribed curvatures are zero at all the interior vertices. It is conjectured that these spaces with different prescribed curvatures are also contractible.
Acknowledgement
The author would like to thank his advisor, Professor Joel Hass, for suggesting this problem, helpful discussions, and constant encouragement.
Figure 1. Three examples of geodesic triangulations of the 2-disk with fixed boundary vertices.
Figure 2. Tutte's embedding.
Lemma 3.3. The Tutte map Ψ is continuous and surjective from the space of weights W to GT(Ω, T).
Figure 3. The mean value coordinate at v_0.
Given a strictly star-shaped polygon Ω with a triangulation T without dividing edges, if the reflex vertices of Ω are in the convex hull of their respective neighbors, then the solution to the linear system generates a straight-line embedding of T.
Theorem 4.4. Given a strictly star-shaped polygon Ω with a triangulation T without dividing edges, and an eye e in Ω with coefficients W, there exists an ε > 0 such that the critical point of the energy E_W(ε) generates a geodesic embedding of T.
Figure 4. A geodesic triangulation which cannot be realized by the minimizer of any weighted length energy.
Orbifold tutte embeddings. Noam Aigerman, Yaron Lipman, ACM Trans. Graph. 346Noam Aigerman and Yaron Lipman, Orbifold tutte embeddings., ACM Trans. Graph. 34 (2015), no. 6, 190-1.
Spaces of geodesic triangulations of the sphere. Marwan Awartani, W David, Henderson, Transactions of the American Mathematical Society. 3042Marwan Awartani and David W Henderson, Spaces of geodesic triangulations of the sphere, Transactions of the American Mathematical Society 304 (1987), no. 2, 721-732.
RH Bing and Michael Starbird, Linear isotopies in E², Transactions of the American Mathematical Society 237 (1978), 205-222.
The space of simplexwise linear homeomorphisms of a convex 2-disk. D Ethan, Robert Bloch, David W Connelly, Henderson, Topology. 232Ethan D Bloch, Robert Connelly, and David W Henderson, The space of simplexwise linear homeomorphisms of a convex 2-disk, Topology 23 (1984), no. 2, 161-175.
Deformations of plane rectilinear complexes. S Steward, Cairns, The American Mathematical Monthly. 515Steward S Cairns, Deformations of plane rectilinear complexes, The American Mathematical Monthly 51 (1944), no. 5, 247-252.
Isotopic deformations of geodesic complexes on the 2-sphere and on the plane. S Stewart, Cairns, Annals of Mathematics. Stewart S Cairns, Isotopic deformations of geodesic complexes on the 2-sphere and on the plane, Annals of Mathematics (1944), 207-217.
About the bloch-connelly-henderson theorem on the simplexwise linear homeomorphisms of a convex 2-disk. Jean Cerf, arXiv preprint math/1910.00240Jean Cerf, About the bloch-connelly-henderson theorem on the simplexwise linear homeomor- phisms of a convex 2-disk, arXiv preprint math/1910.00240 (2019).
Robert Connelly, David W Henderson, Chung Wu Ho, and Michael Starbird, On the problems related to linear homeomorphisms, embeddings, and isotopies, Continua, decompositions, manifolds, 1983, pp. 229-239.
Tutte's barycenter method applied to isotopies. Éric Colin De, Michel Verdière, Gert Pocchiola, Vegter, Computational Geometry. 261Éric Colin De Verdière, Michel Pocchiola, and Gert Vegter, Tutte's barycenter method applied to isotopies, Computational Geometry 26 (2003), no. 1, 81-97.
Comment rendre géodésique une triangulation d?une surface, L?. Y Colin De Verdiere, Enseignement Mathématique. 37Y Colin de Verdiere, Comment rendre géodésique une triangulation d?une surface, L?Enseignement Mathématique 37 (1991), 201-212.
The weighted barycenter drawing recognition problem. Peter Eades, Patrick Healy, Nikola S Nikolov, International Symposium on Graph Drawing and Network Visualization. SpringerPeter Eades, Patrick Healy, and Nikola S Nikolov, The weighted barycenter drawing recog- nition problem, International Symposium on Graph Drawing and Network Visualization, Springer, 2018, pp. 569-575.
A fibre bundle description of teichmüller theory. J Clifford, James Earle, Eells, Journal of Differential Geometry. 31-2Clifford J Earle, James Eells, et al., A fibre bundle description of teichmüller theory, Journal of Differential Geometry 3 (1969), no. 1-2, 19-43.
One-to-one piecewise linear mappings over triangulations. Michael Floater, Mathematics of Computation. 72242Michael Floater, One-to-one piecewise linear mappings over triangulations, Mathematics of Computation 72 (2003), no. 242, 685-696.
Michael S Floater, Mean value coordinates, Computer aided geometric design 20 (2003), no. 1, 19-27.
How to morph tilings injectively. S Michael, Craig Floater, Gotsman, Journal of Computational and Applied Mathematics. 1011-2Michael S Floater and Craig Gotsman, How to morph tilings injectively, Journal of Compu- tational and Applied Mathematics 101 (1999), no. 1-2, 117-129.
Gene H Golub and Charles F Van Loan, Matrix computations, fourth edition, 2013.
Discrete one-forms on meshes and applications to 3d mesh parameterization. Steven Gortler, Craig Gotsman, Dylan Thurston, Computer Aided Geometric Design. Steven Gortler, Craig Gotsman, and Dylan Thurston, Discrete one-forms on meshes and applications to 3d mesh parameterization, Computer Aided Geometric Design (2006).
Global conformal surface parameterization. Xianfeng Gu, Shing-Tung Yau, Proceedings of the 2003 Eurographics/ACM SIGGRAPH symposium on Geometry processing. the 2003 Eurographics/ACM SIGGRAPH symposium on Geometry processingXianfeng Gu and Shing-Tung Yau, Global conformal surface parameterization, Proceedings of the 2003 Eurographics/ACM SIGGRAPH symposium on Geometry processing, Eurographics Association, 2003, pp. 127-137.
Joel Hass, Peter Scott, arXiv:1206.2574Simplicial energy and simplicial harmonic maps. arXiv preprintJoel Hass and Peter Scott, Simplicial energy and simplicial harmonic maps, arXiv preprint arXiv:1206.2574 (2012).
Chung Wu Ho, On certain homotopy properties of some spaces of linear and piecewise linear homeomorphisms. I, Transactions of the American Mathematical Society 181 (1973), 213-233.
Hee Seok, Hiroshi Hong, Nagamochi, Convex drawings of graphs with non-convex boundary constraints. 156Seok-Hee Hong and Hiroshi Nagamochi, Convex drawings of graphs with non-convex boundary constraints, Discrete Applied Mathematics 156 (2008), no. 12, 2368-2380.
Feng Luo, arXiv preprint math/0612714Rigidity of polyhedral surfaces. Feng Luo, Rigidity of polyhedral surfaces, arXiv preprint math/0612714 (2006).
The matrix cookbook. Kaare Brandt Petersen, Michael Syskind Pedersen, Technical University of Denmark. 715510Kaare Brandt Petersen, Michael Syskind Pedersen, et al., The matrix cookbook, Technical University of Denmark 7 (2008), no. 15, 510.
Advanced linear algebra. Steven Roman, F W Axler, Gehring, Springer3Steven Roman, S Axler, and FW Gehring, Advanced linear algebra, vol. 3, Springer, 2005.
Diffeomorphisms of the 2-sphere. Stephen Smale, Proceedings of the American Mathematical Society. 104Stephen Smale, Diffeomorphisms of the 2-sphere, Proceedings of the American Mathematical Society 10 (1959), no. 4, 621-626.
Controllable morphing of compatible planar triangulations. Vitaly Surazhsky, Craig Gotsman, ACM Transactions on Graphics (TOG). 204Vitaly Surazhsky and Craig Gotsman, Controllable morphing of compatible planar triangu- lations, ACM Transactions on Graphics (TOG) 20 (2001), no. 4, 203-231.
Intrinsic morphing of compatible triangulations. International Journal of Shape Modeling. 902, Intrinsic morphing of compatible triangulations, International Journal of Shape Mod- eling 9 (2003), no. 02, 191-201.
How to draw a graph. William Thomas Tutte, Proceedings of the London Mathematical Society. 31William Thomas Tutte, How to draw a graph, Proceedings of the London Mathematical Society 3 (1963), no. 1, 743-767.
Embedding a triangular graph within a given boundary. Yin Xu, Renjie Chen, Craig Gotsman, Ligang Liu, Computer Aided Geometric Design. 286Yin Xu, Renjie Chen, Craig Gotsman, and Ligang Liu, Embedding a triangular graph within a given boundary, Computer Aided Geometric Design 28 (2011), no. 6, 349-356.
Department of Mathematics, University of California Davis, Davis, California 95616. E-mail address: [email protected]
| []
|
[
"Robust Autonomous Landing of UAV in Non-Cooperative Environments based on Dynamic Time Camera-LiDAR Fusion",
"Robust Autonomous Landing of UAV in Non-Cooperative Environments based on Dynamic Time Camera-LiDAR Fusion"
]
| [
"Lyujie Chen ",
"Xiaming Yuan ",
"Yao Xiao ",
"Yiding Zhang ",
"Jihong Zhu "
]
| []
| []
| Selecting safe landing sites in non-cooperative environments is a key step towards the full autonomy of UAVs. However, the existing methods have the common problems of poor generalization ability and robustness. Their performance in unknown environments is significantly degraded and the error cannot be self-detected and corrected. In this paper, we construct a UAV system equipped with low-cost LiDAR and binocular cameras to realize autonomous landing in noncooperative environments by detecting the flat and safe ground area. Taking advantage of the non-repetitive scanning and high FOV coverage characteristics of LiDAR, we come up with a dynamic time depth completion algorithm. In conjunction with the proposed self-evaluation method of the depth map, our model can dynamically select the LiDAR accumulation time at the inference phase to ensure an accurate prediction result. Based on the depth map, the high-level terrain information such as slope, roughness, and the size of the safe area are derived. We have conducted extensive autonomous landing experiments in a variety of familiar or completely unknown environments, verifying that our model can adaptively balance the accuracy and speed, and the UAV can robustly select a safe landing site. | null | [
"https://arxiv.org/pdf/2011.13761v1.pdf"
]
| 227,209,554 | 2011.13761 | 1c72e125afc05cf006a5372066bb8a0e2a164597 |
Robust Autonomous Landing of UAV in Non-Cooperative Environments based on Dynamic Time Camera-LiDAR Fusion
Lyujie Chen
Xiaming Yuan
Yao Xiao
Yiding Zhang
Jihong Zhu
Robust Autonomous Landing of UAV in Non-Cooperative Environments based on Dynamic Time Camera-LiDAR Fusion
Selecting safe landing sites in non-cooperative environments is a key step towards the full autonomy of UAVs. However, the existing methods have the common problems of poor generalization ability and robustness. Their performance in unknown environments is significantly degraded and the error cannot be self-detected and corrected. In this paper, we construct a UAV system equipped with low-cost LiDAR and binocular cameras to realize autonomous landing in noncooperative environments by detecting the flat and safe ground area. Taking advantage of the non-repetitive scanning and high FOV coverage characteristics of LiDAR, we come up with a dynamic time depth completion algorithm. In conjunction with the proposed self-evaluation method of the depth map, our model can dynamically select the LiDAR accumulation time at the inference phase to ensure an accurate prediction result. Based on the depth map, the high-level terrain information such as slope, roughness, and the size of the safe area are derived. We have conducted extensive autonomous landing experiments in a variety of familiar or completely unknown environments, verifying that our model can adaptively balance the accuracy and speed, and the UAV can robustly select a safe landing site.
where various factors should be considered, including detecting the flatness and roughness of the ground, distinguishing terrain types, calculating landing area size, and avoiding obstacles. Previous works propose several approaches to solve it, such as depth/altitude estimation, surface type semantic segmentation, and high-precision map construction. Specifically, [12] uses ultrasonic sensors to perceive depth for UAV safe landing at a low-altitude (150cm) range. [13], [14] estimate the altitude in indoor scenes with widely used stereo vision systems. [15], [16], [17] apply traditional or machine-learning-based methods to realize monocular depth prediction. [18], [19] perform patch-level classification to evaluate the safety of the ground. [20] uses the combination of a robust homography estimation and a strategy of adaptive thresholding of correlation scores to realize pixel-level segmentation. [21] utilizes geographic information system data to estimate the landing position. Some approaches [22], [14] combine multiple criteria, such as altitude, flatness, roughness, slope, surface type, and energy consumption, to jointly determine the landing sites. Going a step further, [23] directly detects the plane of the ground in an end-to-end way, learning from a virtual image dataset. However, the learning target is too high-level, leading to over-fitting and poor generalization.
Among these methods, depth/altitude estimation is the most feasible, since it is the basis for calculating the terrain characteristics and is also a widely studied area. Benefiting from the advantages of low cost, light weight, and long viewing range, vision-based depth estimation is a common approach, including monocular depth prediction and stereo depth matching. Currently, deep-learning-based depth estimation methods [24], [25], [26], [27] are prevalent due to their high accuracy. However, applying them to aerial scenes is difficult and rarely done. One reason is that, although there are some virtual aerial datasets [28], [29] containing annotated depth maps, the field still lacks reliable large-scale real-scene training data. Another reason is that the current deep-learning-based methods do not have reliable generalization ability, resulting in a great degradation of performance in unknown environments. What's more, the prediction errors cannot be self-detected under their end-to-end model strategies. For this reason, a feasible solution is to integrate a reliable depth sensor, such as ultrasonic wave sensors, Millimeter Wave Radar (MMWR), or Light Detection And Ranging (LiDAR), to formulate a depth completion problem with partially known depth. Among these sensors, LiDAR is widely used in unmanned driving and robotics to integrate with vision systems due to its long detection range and high accuracy. But the current mainstream products are too expensive and heavy and are thus rarely used in UAV systems.
In this paper, we build a UAV platform equipped with a low cost and lightweight binocular-LiDAR system to realize robust and safe autonomous landing. Our LiDAR (Livox Mid-40) uses a single line to scan towards one side in a nonrepetitive scanning pattern in which the areas scanned inside the Field of View (FOV) grow the longer the integration time until the FOV coverage approaches 100%. Using this feature, we propose a dynamic time depth completion algorithm. Specifically, in the training phase, the stereo camera and LiDAR provide the self-supervision and sparse ground truth respectively for joint training. While in the inference phase, we introduce a self-evaluation method to assess the accuracy of the predicted depth map using the character of the binocular camera. It realizes the dynamic selection of the LiDAR accumulation time and ensures the high quality of the predicted depth map, which greatly improves the generalization ability of the model. With the high FOV coverage advantage, even in the extreme unknown environment, accurate depth measurement can still be obtained through the LiDAR, which enhances the overall safety and reliability at the expense of reducing the speed of landing. Based on the estimated depth map, the first and second-order derivatives stand for the slope and roughness of the ground respectively. Then, the flatness and the safe area size can be easily derived.
We collect 10 real flight records in different environments using onboard sensors. By performing spatial and time alignment with high-frequency GPS and IMU data, 30,000 sparsely annotated depth maps are generated. On this dataset, our dynamic time depth completion network is trained in a weakly self-supervised way. We evaluate the depth prediction in both familiar and unknown environments. The results show that the predicted depth map is increasingly accurate and fine-grained as the density of the input LiDAR increases and our model can automatically choose the accumulation time of LiDAR data to adaptively balance the accuracy and speed. We also conduct complete autonomous landing experiments in a variety of real scenarios, verifying the effectiveness and robustness of our overall landing strategy. The videos of the real flight test are available at https: //youtu.be/0uj9LxWyMDA.
II. SYSTEM OVERVIEW
In this section, we introduce each module in the hardware system as well as the wire connection and software communication between them, illustrated in Fig. 1. The images of the UAV and sensors can be viewed in the supplementary video.
A. UAV Platform
The flight platform we used in this paper is DJI Matrice 600 Pro, a commercial hexacopter UAV equipped with a carbon-fiber airframe, the A3 Pro flight controller, Lightbridge 2 HD transmission system, six 4500mAh intelligent batteries, and a built-in battery management system. The vehicle is 1133mm in the diagonal wheelbase and weighs 9.5 kg including batteries. Its maximum payload capacity is about 6 kg. The flight time in hovering is about 16 minutes with full payload and 32 minutes without payload. The A3 Pro Flight Controller consists of a flight controller, three GPS-Compass Pro, two IMU Pro, and a Power Management Unit (PMU), which provides triple modular redundancy, improving the system's anti-risk performance. Self-adaptive systems will automatically adjust flight parameters based on different payloads. Other sensors and devices are mounted on our own designed two-layer carbon-fiber board at the bottom of the center airframe.
B. LiDAR
We adopt Livox Mid-40 as our onboard LiDAR because it is lightweight, cost-effective, and has several unique advantages suited to UAV landing scenes. The Livox Mid-40 sensor weighs 760g, which can be easily carried by our flight platform. It costs only $599, much cheaper than multi-line surround-view LiDARs. Its major advantage is that it scans towards one side, which makes it very suitable for a downward-facing mount on the UAV. Another advantage is its non-repetitive scanning pattern, in which the area scanned inside the FOV grows with the integration time, increasing the likelihood of objects and other details within the FOV being detected. It is equivalent to a 32-line product when the integration time is 0.1s. With an integration time of 0.5s, the coverage performance is equivalent to a 64-line product. As the integration time increases, the FOV coverage will approach 100%. This feature enables the UAV to dynamically choose different LiDAR integration times according to the needs of the algorithm. Even without other sensors, relying on LiDAR alone can ensure reliable depth perception.
C. Stereo Camera
In this paper, although we use the fusion of monocular image and LiDAR points for depth completion, the binocular camera is still necessary. It can provide more self-supervision during the training phase, and more importantly, helps to evaluate the accuracy of the predicted depth map during the inference phase. For this reason, we chose MYNT EYE Standard/Color (S2110-95/Color) binocular camera, which provides a stereo color image with a resolution of 1280*400@60FPS. The baseline length is 8cm and the FOV angle is 95 • . To ensure the consistent quality of stereo images, the camera provides automatic ISP exposure and automatic white balance functions. The global shutter function also effectively reduces the image distortion during rapid shooting scenes. At the same time, the camera provides hardware-level time synchronization and binocular frame synchronization at millisecond-level accuracy.
D. Onboard Computer
To accelerate the inference speed of the depth completion model with a modern GPU, we choose DJI Manifold 2-G as the onboard computer. Its processor is the NVIDIA Jetson TX2, which is built around an NVIDIA GPU with 256 CUDA cores and has 8 GB 128-bit LPDDR4 memory. The added UART ports are used to connect with the A3 Pro flight controller for accessing the flight status and manipulating the UAV. Since Manifold 2-G supports two independent external power supplies and can automatically choose the power source with the higher voltage, we use both the reserved 18V output power supply from the Matrice 600 Pro and a separate 5300mA 5S LiPo battery to power the computer. In this way, Manifold 2-G will prefer the separate battery because it has a higher voltage. This not only saves more of the vehicle's power but also ensures the availability of the onboard computer when the separate battery fails. In the whole system, all sensors except the LiDAR support global time synchronization, so we use the Precision Time Protocol (PTP) to synchronize the timestamps of the onboard computer and the LiDAR.
E. Others
The DJI Power Distribution Unit is connected to the separate battery to power the Manifold 2-G and LiDAR since its electromagnetic compatibility can reduce interference to the GNSS or Wi-Fi signal from the power supply. However, because the battery voltage is higher than the working requirement of LiDAR, we use a DC-DC voltage converter module (LM2596) to provide 12V power for LiDAR.
Through the LightBridge HD transmission system, we can monitor the status of UAV and the algorithm result of the onboard computer in real-time. Also, we develop a mobile app based on DJI UXSDK to switch the flight mode of UAV, i.e., the auto landing mode and the manual control mode.
The software has been developed in C++ and Python as ROS (Robot Operating System) package, using the well known OpenCV libraries to process the images and PyTorch for model inference.
III. ROBUST LANDING SITE SELECTION
In this section, we first introduce the calibration between stereo cameras and LiDAR as well as their time and spatial alignment method for generating synchronized images and depth maps. Then we elaborate on the dynamic time depth completion model and its training and inference strategy. Finally, with the accurate depth estimation result, the overall landing site selection process is derived.
A. Calibration and Alignment
The calibration is the basis for dataset construction and for the depth completion algorithm. The intrinsic and extrinsic parameters of the stereo camera can be directly calculated using the existing calibration method [30] implemented in OpenCV, while the calibration between the LiDAR and the camera needs a special conversion. Compared to traditional sparse multi-line LiDAR, Livox Mid-40 has the advantage of high FOV coverage. Therefore, we can directly visualize it as an image by projecting a few seconds of LiDAR points of a static scene to a virtual image plane and allocating grayscale colors based on their intensity. In this way, a common chessboard calibration board can be used to extract the characteristic corner points from both images, as shown in Fig. 2. The corner points in the LiDAR image correspond to 3D points in the world, while the corner points in the camera image are the corresponding 2D projections. This constructs a Perspective-n-Point (PnP) problem, which can be efficiently solved to obtain the rotation matrix R_{c,l} and the translation t_{c,l}.
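As a rough sketch of this step, assuming the matched corner arrays and the camera intrinsic matrix K and distortion coefficients are already available, OpenCV's solvePnP can recover R_{c,l} and t_{c,l}; the function name and argument layout below are illustrative choices, not the authors' code.

```python
import cv2
import numpy as np

def calibrate_camera_lidar(corners_lidar_3d, corners_image_2d, K, dist):
    """Estimate R_{c,l}, t_{c,l} from matched chessboard corners.

    corners_lidar_3d : (N, 3) corner positions measured in the LiDAR frame
    corners_image_2d : (N, 2) corresponding pixel coordinates (N >= 4)
    K, dist          : camera intrinsic matrix and distortion coefficients
    """
    ok, rvec, tvec = cv2.solvePnP(
        corners_lidar_3d.astype(np.float64),
        corners_image_2d.astype(np.float64),
        K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed")
    R_cl, _ = cv2.Rodrigues(rvec)       # rotation LiDAR -> camera
    return R_cl, tvec.reshape(3)        # so that p_c = R_cl @ p_l + t_cl
```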
To generate the depth map corresponding to an image I_0 captured at time t_0, we first perform time alignment to transform several seconds of LiDAR data into the LiDAR coordinate system l_0 at time t_0. Specifically, given a LiDAR point p_{l_1} measured at time t_1, we obtain p_{l_0} by the affine transformation p_{l_0} = T_{l_0,l_1} p_{l_1}, in which
T_{l_0,l_1} = T_{l_0,w} T_{w,l_1} = T_{l,i} T_{i_0,w} T_{w,i_1} T_{i,l} = T_{l,i} T_{w,i_0}^{-1} T_{w,i_1} T_{l,i}^{-1}
where T_{l,i} is the fixed transformation matrix between the LiDAR and the IMU, and T_{w,i_0}, T_{w,i_1} are the position and attitude of the IMU in the world coordinate system at times t_0 and t_1. However, since the data acquisition frequencies of the various sensors differ, T_{w,i_0} and T_{w,i_1} are obtained by applying a linear interpolation and a spherical linear interpolation to the GPS and IMU data respectively. The second step is to spatially align the LiDAR points to the camera. Given a LiDAR point p_l = (x_l, y_l, z_l)^T, its coordinate in the camera coordinate system is p_c = (x_c, y_c, z_c)^T = R_{c,l} · p_l + t_{c,l}, and the corresponding projection coordinate in the image plane is (x, y) = (f · x_c / z_c + c_x, f · y_c / z_c + c_y), where f is the focal length in pixel units and (c_x, c_y) is the principal point. Since x, y are generally not integers, we assign the point to the nearest pixel (u, v). If one pixel is allocated multiple times, we keep only the point whose timestamp is closest to that of the image. The generated depth map is shown in Fig. 2.
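A compact sketch of the two alignment steps is given below. The function names, the use of SciPy's Slerp for attitude interpolation, separate focal lengths fx/fy, and the handling of pixels hit more than once are assumptions made for illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def pose_at(t, times, positions, quats):
    """T_{w,i}(t): linear interpolation of position and spherical (slerp)
    interpolation of attitude from the GPS/IMU stream at timestamp t."""
    T = np.eye(4)
    T[:3, 3] = [np.interp(t, times, positions[:, k]) for k in range(3)]
    T[:3, :3] = Slerp(times, Rotation.from_quat(quats))(t).as_matrix()
    return T

def lidar_to_sparse_depth(pts_l1, t1, t0, times, positions, quats,
                          T_li, R_cl, t_cl, fx, fy, cx, cy, h, w):
    """Move points measured at t1 into the LiDAR frame at t0 using
    T_{l0,l1} = T_{l,i} T_{w,i0}^{-1} T_{w,i1} T_{l,i}^{-1}, then project them
    through the pinhole model into a sparse depth image of shape (h, w)."""
    T_l0_l1 = (T_li @ np.linalg.inv(pose_at(t0, times, positions, quats))
                    @ pose_at(t1, times, positions, quats)
                    @ np.linalg.inv(T_li))
    p_l0 = (np.c_[pts_l1, np.ones(len(pts_l1))] @ T_l0_l1.T)[:, :3]
    p_c = p_l0 @ R_cl.T + t_cl                    # LiDAR frame -> camera frame
    z = p_c[:, 2]
    keep = z > 0.1                                # discard points behind/near the camera
    u = np.round(fx * p_c[keep, 0] / z[keep] + cx).astype(int)
    v = np.round(fy * p_c[keep, 1] / z[keep] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w), dtype=np.float32)
    depth[v[inside], u[inside]] = z[keep][inside]  # duplicates resolve to one of the hits
    return depth
```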
B. Dynamic Time Depth Completion
Depth completion is a depth estimation problem with partially known ground truth. We therefore adopt an encoder-decoder network structure as in [31], which processes the image and sparse depth inputs separately with late fusion. In the following, we detail our training and inference strategies for achieving robust and accurate depth prediction.
Training Losses Given an input image as well as its corresponding sparse depth d, network prediction pred, and the ground truth depth map gt, we define the training loss C as a combination of four main terms,
C = α d C d + α r C r + α p C p + α s C s
where C d ,C r ,C p ,C s are supervised depth loss, depth ratio loss, photometric loss, smoothness loss respectively, and α d , α r , α p , α s are their loss weights.
Specifically, the supervised depth loss C_d penalizes the differences between the network output and the ground truth depth map on the set of pixels with known sparse depth, which helps the training converge quickly. It is defined as
C_d = || 1_{gt>0} · (pred − gt) ||_2^2.
In addition, we propose a new depth ratio loss specialized for our scenario. During the landing process, we expect the predicted depth map to become increasingly fine-grained as the altitude goes down. Therefore, a depth error of the same absolute value should receive more supervision at a lower altitude, which can be realized by penalizing the proportion of the difference between the predicted and the ground truth value, defined as
C_r = || 1_{gt>0} · (pred − gt) / gt ||_1.
It is worth noting that the sparse depth input d can be used in place of the ground truth gt for C d and C r when forming a self-supervised training.
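A minimal PyTorch sketch of C_d and C_r is given below, assuming both losses are averaged over the valid pixels; the normalisation and the small eps guard are choices made for this sketch, not specified above.

```python
import torch

def depth_losses(pred, gt, eps=1e-6):
    """C_d (masked squared error) and C_r (masked relative error) computed
    only on pixels that carry a valid sparse ground-truth depth."""
    mask = (gt > 0).float()
    n = mask.sum().clamp(min=1.0)                     # number of valid pixels
    c_d = (mask * (pred - gt) ** 2).sum() / n
    c_r = (mask * (pred - gt).abs() / (gt + eps)).sum() / n
    return c_d, c_r
```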
The photometric loss C_p indirectly supervises the predicted depth by comparing the image similarity between the current image and a reconstructed image inversely warped from a nearby frame. Suppose the current image is captured by the left side of the stereo camera at time t_i and denoted I_{l,i}; we choose the two adjacent frames on the same side and the image with the same timestamp captured by the other side of the stereo camera as the nearby frames, denoted I_{l,i−1}, I_{l,i+1}, and I_{r,i} respectively. The relative pose between the stereo cameras is fixed, while the transformation matrices between adjacent frames are computed from the known flight position and attitude in the same way as for the LiDAR points, as elaborated in Section 3.1. With the relative poses denoted T_{i−1,i}, T_{i+1,i}, and T_{r,l}, the reconstructed images Î_{l,i−1}, Î_{l,i+1}, and Î_{r,i} can be obtained as in [26]. So the photometric loss is defined as
C_p = Σ_s [ α (1 − SSIM(I_s, Î_s)) / 2 + (1 − α) ||I_s − Î_s||_1 ],
where s ∈ {{l, i−1}, {l, i+1}, {r, i}} and α = 0.85. We use simplified SSIM [32] with a 3 × 3 block filter. The smoothness loss C_s enforces a neighboring constraint that encourages the depth to be locally smooth, which is the basis for obtaining stable terrain information from depth derivatives. Here we penalize the first and second-order derivatives of the depth predictions as
C_s = ||∇pred||_1 + ||∇²pred||_1.
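The following sketch implements C_s together with a 3×3 block-filter SSIM and the per-pixel photometric combination; the padding scheme and the averaging over the image are assumptions of this sketch rather than the paper's exact code.

```python
import torch
import torch.nn.functional as F

def smoothness_loss(pred):
    """C_s: L1 penalty on first- and second-order spatial differences of the
    predicted depth, pred of shape [B, 1, H, W]."""
    dx = pred[:, :, :, 1:] - pred[:, :, :, :-1]
    dy = pred[:, :, 1:, :] - pred[:, :, :-1, :]
    dxx = dx[:, :, :, 1:] - dx[:, :, :, :-1]
    dyy = dy[:, :, 1:, :] - dy[:, :, :-1, :]
    return (dx.abs().mean() + dy.abs().mean()
            + dxx.abs().mean() + dyy.abs().mean())

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM with a 3x3 box filter."""
    mu_x = F.avg_pool2d(x, 3, 1, 1)
    mu_y = F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return num / den

def photometric_loss(recon, target, alpha=0.85):
    """alpha * (1 - SSIM)/2 + (1 - alpha) * L1, averaged over the image."""
    s = ssim(recon, target).clamp(0, 1)
    return (alpha * (1 - s) / 2 + (1 - alpha) * (recon - target).abs()).mean()
```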
Training Strategy We use a mixed training strategy that dynamically selects the density of the sparse depth for each batch. 20% of the time, the model is trained only with images, where the input of the LiDAR branch is a zero tensor. Otherwise, we randomly select 10% to 50% of the LiDAR points to transform into a sparse depth map. This strategy improves the generalization ability across different input densities and yields two useful characteristics that benefit our landing task. 1) The model requires only a single image or LiDAR data as input, which maintains relatively stable performance even if one of the sensors fails. 2) The prediction result is gradually refined at the inference phase as the density of LiDAR points grows. This feature suits our LiDAR with its non-repetitive scanning pattern well: it ensures that, given sufficient LiDAR accumulation time, the predicted depth map will be accurate enough.
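A possible per-sample implementation of this mixed-density strategy is shown below; the probabilities and the uniform thinning follow the description above, while the function name and per-pixel Bernoulli sampling are illustrative choices.

```python
import torch

def thin_sparse_depth(depth, p_image_only=0.2, lo=0.1, hi=0.5):
    """Mixed-density training input: with probability p_image_only feed a
    zero depth map (image-only training); otherwise keep a random 10-50%
    subset of the valid LiDAR pixels."""
    if torch.rand(1).item() < p_image_only:
        return torch.zeros_like(depth)
    keep = lo + (hi - lo) * torch.rand(1).item()
    return depth * ((torch.rand_like(depth) < keep) & (depth > 0)).float()
```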
Self-evaluation Method Almost all existing depth completion algorithms are unable to self-assess and correct their prediction results at the inference stage. In our application, this may cause the UAV to select a wrong landing site and even crash. To improve the robustness of our model, we propose a self-evaluation method based on the natural characteristics of the binocular camera. Similar to the principle of the photometric loss, given the stereo images I_l, I_r, the depth pred predicted from I_l, the transformed sparse depth d, and the transformation matrix T_{r,l} between the stereo cameras, we can generate the reconstructed right image I_r^pred based on I_l, pred, and T_{r,l}. Ignoring the imaging differences between the two cameras caused by hardware and occlusion, if the predicted depth is completely correct, the image similarity sim_pred = SSIM(I_r^pred, I_r) will approach 1; otherwise, sim_pred is close to 0. Therefore, given sim = SSIM(I_l, I_r) and sim_d = SSIM(I_r^d, I_r) as lower and upper bounds, we can evaluate pred by comparing sim_pred against sim and sim_d. From our statistics, we find that when sim_pred > (sim + sim_d)/2 or sim_pred > sim + 0.2, the predicted depth is accurate enough for selecting the landing place.
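The decision rule can be sketched as follows; `warp_to_right` (inverse warping of the left image with a given depth map and the fixed stereo extrinsics) and `mean_ssim` are assumed helpers supplied by the surrounding system and are not shown here.

```python
def depth_is_reliable(I_l, I_r, pred, sparse, warp_to_right, mean_ssim):
    """Self-evaluation of a predicted depth map via stereo consistency."""
    sim = mean_ssim(I_l, I_r)                           # lower bound
    sim_d = mean_ssim(warp_to_right(I_l, sparse), I_r)  # upper bound from raw LiDAR depth
    sim_pred = mean_ssim(warp_to_right(I_l, pred), I_r)
    return sim_pred > 0.5 * (sim + sim_d) or sim_pred > sim + 0.2
```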
Inference Strategy Using this evaluation method, our dynamic time depth completion algorithm can be realized. In our system, the camera and the LiDAR produce the data at the frequency of 20hz and 10hz respectively. Every time we get an image, a coarse depth is first predicted. As the LiDAR data continues to acquire, the image and gradually denser depth input are combined to refine the output depth map. After each step, the prediction result is evaluated until the depth map is accurate enough. Generally, after about 1 second of accumulation of LiDAR, the coverage in the FOV is close to 100%. If the result still doesn't meet the accuracy requirement, we directly perform the nearest neighbor interpolation on the sparse depth and set the depth outside the FOV area as unknown.
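One way to organise this loop is sketched below; every callable is an assumed interface (projection of accumulated points, network inference, self-evaluation, and the nearest-neighbour fallback) rather than the authors' API.

```python
def dynamic_time_completion(image, lidar_stream, to_sparse, predict,
                            is_reliable, interpolate, max_wait=1.0):
    """Refine the prediction with a growing LiDAR accumulation until the
    self-evaluation accepts it; after roughly max_wait seconds fall back to a
    nearest-neighbour interpolation of the raw sparse depth."""
    points, waited = [], 0.0
    while True:
        sparse = to_sparse(points, image)     # project accumulated points to a sparse map
        pred = predict(image, sparse)
        if is_reliable(image, pred, sparse):
            return pred
        if waited >= max_wait:
            return interpolate(sparse)        # dense fill of the raw LiDAR depth
        new_points, dt = next(lidar_stream)   # next packet of returns and its duration
        points += new_points
        waited += dt
```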
C. Landing Site Selection
After obtaining an accurate depth map, we select the possible landing site through the following four steps. 1) We first convert the original 'perspective' depth pred to the 'plane' depth pred_p, where all points in a plane parallel to the camera have the same depth. 2) We then perform a perspective correction on pred_p by setting the roll and pitch to zero to obtain the depth map pred_c, which simulates the UAV being parallel to the ground. 3) The first and second-order derivatives of the depth map pred_c represent the slope and roughness of the ground. So we require that safe ground satisfies ∇pred_c < t_inc and ∇²pred_c < t_tur, where t_inc and t_tur are the maximum acceptable slope angle and roughness angle of the landing plane. According to this criterion, we calculate a binary mask m that indicates the safety of the ground. After a simple erosion and dilation to remove minor noise, a refined mask m_r is generated. 4) Finally, we choose the largest inscribed circle of the safe area as the candidate landing place. The real radius R of this circle can be calculated from the 'plane' depth p of the circle center and the radius r in the image by R = p/f * r, where f is the focal length. Considering the UAV size, if R > 2m, we regard the candidate area as big enough and the center of the circle is selected as the safe landing site.
The above procedures correspond to state 1 in Fig. 3. However, it is worth noting that when selecting the landing site for each frame, we first determine whether the previous landing site is still available, to keep the result stable and smooth. To avoid misjudgment by a single wrong prediction, only a landing site that is confirmed by five consecutive frames will be chosen. On the contrary, if no landing site is confirmed for about 5 seconds, the UAV shifts to state 7, which performs a random wandering with a non-repetitive global path. State 8 in Fig. 3 performs a confirmation of the landing site. The only difference to state 1 is that the depth map is generated completely from dense LiDAR data, which can help to identify extremely small details of dangerous areas, such as stones or a road curb.
IV. EVALUATIONS
A. Depth Completion Model
We implement our depth completion network following [31] with a ResNet-18 backbone [33]. To match the FOV range between the camera and the LiDAR, input images and sparse depth maps are center-cropped to a size of 352×352. As mentioned in Section 3.2, a mixed training strategy that dynamically selects the density of the input sparse depth is used to improve the generalization ability of the model. We train the model for 30 epochs using the Adam [34] optimizer with an initial learning rate of 2e-4 and a batch size of 16. After 20 epochs the learning rate is dropped to 2e-5 for better convergence. The loss weights are set to α_d = 1, α_r = 1, α_p = 2, α_s = 1.
We collect 10 real flight records in different environments using onboard sensors. By performing the time and spatial alignment described in Section 3.1 on the collected data, we construct an aerial depth dataset consisting of 30,000 sparsely annotated depth maps. We then train our model on this dataset using both weak supervision from the sparse ground truth and self-supervision from the image similarity. The average inference time of our model without any acceleration method running on NVIDIA Jetson TX2 is 33ms, which meets the real-time requirement. We first conduct an experiment to compare the influence of the density of LiDAR input during the inference phase. The metrics commonly used in depth estimation problems [24], such as RMSE, REL, and δ, are used to evaluate the performance. We also introduce the average SSIM as a metric to indirectly evaluate the model through the quality of the reconstructed images. The numerical results under different test input choices are reported in Table I and a visualized timeline of the prediction is illustrated in Fig. 4. In the beginning, we use a raw image without LiDAR data for prediction. The depth map can distinguish obvious altitude differences, such as trees and ground. As the density of the LiDAR increases, each metric improves and the predicted depth map becomes more fine-grained. After about 1 second of accumulation, we directly interpolate the sparse depth map to get the most refined prediction map. This reduces the perception range of the UAV but makes it possible to observe areas with small height differences, such as grassland.
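For reference, the evaluation metrics can be computed as below; reading δ < 0.25 as the fraction of valid pixels whose relative error is under 0.25 is an assumption about the definition behind Table I.

```python
import numpy as np

def depth_metrics(pred, gt, delta_thr=0.25):
    """RMSE, REL and a delta accuracy over pixels with valid ground truth."""
    m = gt > 0
    err = pred[m] - gt[m]
    rel_err = np.abs(err) / gt[m]
    return (np.sqrt(np.mean(err ** 2)),    # RMSE
            np.mean(rel_err),              # REL
            np.mean(rel_err < delta_thr))  # delta < 0.25
```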
Next, we verify the effectiveness of the landing site selection strategy by setting t_inc = 10.0 and t_tur = 10.0. It can be seen from Fig. 5 that after depth conversion and perspective transformation, the safety of the ground can be derived from the accurate depth map pred_c, and the erosion and dilation effectively suppress the interference of local abnormal points.
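A sketch of steps 3) and 4) of the selection procedure from Section III-C, operating on the roll/pitch-corrected depth map pred_c, is given below; the threshold handling, the 5×5 morphology kernel, and the distance-transform search for the largest inscribed circle are illustrative choices rather than the authors' implementation.

```python
import cv2
import numpy as np

def select_landing_site(depth_plane, f, slope_thr, rough_thr, min_radius_m=2.0):
    """Threshold first/second depth derivatives, clean the safety mask with
    erosion/dilation, and take the largest inscribed circle of the safe region."""
    gy, gx = np.gradient(depth_plane)
    slope = np.hypot(gx, gy)
    rough = np.hypot(np.gradient(gx, axis=1), np.gradient(gy, axis=0))
    safe = ((slope < slope_thr) & (rough < rough_thr)).astype(np.uint8)
    kernel = np.ones((5, 5), np.uint8)
    safe = cv2.dilate(cv2.erode(safe, kernel), kernel)   # drop small noisy spots
    dist = cv2.distanceTransform(safe, cv2.DIST_L2, 5)   # pixel distance to unsafe area
    r_px = float(dist.max())
    if r_px == 0:
        return None
    v, u = np.unravel_index(int(np.argmax(dist)), dist.shape)
    R = depth_plane[v, u] / f * r_px                     # metric radius, R = p/f * r
    return (u, v, R) if R > min_radius_m else None
```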
B. Real UAV Landing Experiments
We conduct a series of autonomous landing experiments in a variety of environments where our UAV successfully finds the flat ground and land safely. At the same time, we record the average LiDAR accumulation time to obtain an accurate depth map for the depth completion model. In a familiar environment, the average time is 0.06 seconds, while in an unknown environment, this value reaches 0.7 seconds. This demonstrates that our model can adaptively balance the accuracy and speed to ensure the reliability of the prediction results. The typical safe landing sites selected in various real complex environments are shown in Fig. 6. We also draw a landing trajectory in Fig. 7. It can be seen that the UAV performs a horizontal and gradually vertical movement in order after a few seconds of hovering to select a coarse landing site. The overall trajectory is smooth and almost no time is wasted on hovering except for the landing site selection and confirmation phase.
V. CONCLUSIONS
In this paper, we construct a UAV system to complete autonomous landing tasks in a non-cooperative environment by detecting flat and safe ground areas. Taking advantage of the characteristics of our perception system, consisting of a low-cost LiDAR and stereo cameras, we come up with a dynamic time depth completion algorithm. Through the proposed self-evaluation method, it can dynamically select the LiDAR accumulation time to ensure an accurate depth prediction map. Through real flight experiments, we verify that the model can adaptively balance accuracy and speed, and that the UAV can robustly select safe landing sites even in a completely unknown environment. In future work, we plan to extend the system with the recognition and tracking of dynamic objects.
Fig. 1: UAV hardware system overview.
Fig. 2: (a)(b) Corner points in LiDAR and camera images; (c) generated depth map corresponding to the camera image.
Fig. 3: Flow chart of autonomous landing.
Fig. 5: Process of selecting the safe landing site.
Fig. 6: Safe landing sites selected in real scenes.
Fig. 7: (a) Our UAV system during the autonomous landing. (b) The corresponding 3D trajectory of the flight test in the local NED coordinate frame.
TABLE I: Performance with different densities of LiDAR input. ↓ means lower is better and ↑ means higher is better.

P(LiDAR)   RMSE (mm) ↓   REL (mm) ↓   δ < 0.25 ↑   SSIM ↑
0          1292.8        107.0        93.6%        0.688
0.1        1117.2        94.9         94.1%        0.701
0.3        944.7         78.4         95.5%        0.724
0.5        910.5         76.3         95.7%        0.739
1.0        861.3         48.8         97.9%        0.764

Fig. 4: Example timeline of predictions. As time progresses, the depth estimation becomes increasingly accurate.
A vision system for landing an unmanned aerial vehicle. C S Sharp, O Shakernia, S S Sastry, Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No. 01CH37164). 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No. 01CH37164)Ieee2C. S. Sharp, O. Shakernia, and S. S. Sastry, "A vision system for landing an unmanned aerial vehicle," in Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No. 01CH37164), vol. 2. Ieee, 2001, pp. 1720-1727.
Vision-based autonomous landing of an unmanned aerial vehicle. S Saripalli, J F Montgomery, G S Sukhatme, Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292). 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292)IEEE3S. Saripalli, J. F. Montgomery, and G. S. Sukhatme, "Vision-based autonomous landing of an unmanned aerial vehicle," in Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292), vol. 3. IEEE, 2002, pp. 2799-2804.
Visually guided landing of an unmanned aerial vehicle. S Saripalli, J F Montgomery, G S Sukhatme, IEEE transactions on robotics and automation. 193S. Saripalli, J. F. Montgomery, and G. S. Sukhatme, "Visually guided landing of an unmanned aerial vehicle," IEEE transactions on robotics and automation, vol. 19, no. 3, pp. 371-380, 2003.
Landing a uav on a runway using image registration. A Miller, M Shah, D Harper, 2008 IEEE International Conference on Robotics and Automation. IEEEA. Miller, M. Shah, and D. Harper, "Landing a uav on a runway using image registration," in 2008 IEEE International Conference on Robotics and Automation. IEEE, 2008, pp. 182-187.
A vision based onboard approach for landing and position control of an autonomous multirotor uav in gps-denied environments. S Lange, N Sunderhauf, P Protzel, 2009 International Conference on Advanced Robotics. IEEES. Lange, N. Sunderhauf, and P. Protzel, "A vision based onboard approach for landing and position control of an autonomous multirotor uav in gps-denied environments," in 2009 International Conference on Advanced Robotics. IEEE, 2009, pp. 1-6.
Autonomous landing of a vtol uav on a moving platform using image-based visual servoing. D Lee, T Ryan, H J Kim, 2012 IEEE international conference on robotics and automation. IEEED. Lee, T. Ryan, and H. J. Kim, "Autonomous landing of a vtol uav on a moving platform using image-based visual servoing," in 2012 IEEE international conference on robotics and automation. IEEE, 2012, pp. 971-976.
Autonomous landing of an uav with a ground-based actuated infrared stereo vision system. W Kong, D Zhang, X Wang, Z Xian, J Zhang, 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEEW. Kong, D. Zhang, X. Wang, Z. Xian, and J. Zhang, "Autonomous landing of an uav with a ground-based actuated infrared stereo vision system," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2013, pp. 2963-2970.
A ground-based optical system for autonomous landing of a fixed wing uav. W Kong, D Zhou, Y Zhang, D Zhang, X Wang, B Zhao, C Yan, L Shen, J Zhang, 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEEW. Kong, D. Zhou, Y. Zhang, D. Zhang, X. Wang, B. Zhao, C. Yan, L. Shen, and J. Zhang, "A ground-based optical system for autonomous landing of a fixed wing uav," in 2014 IEEE/RSJ International Confer- ence on Intelligent Robots and Systems. IEEE, 2014, pp. 4797-4804.
Landing of a fixed-wing uav on a mobile ground vehicle. T Muskardin, G Balmer, S Wlach, K Kondak, M Laiacker, A Ollero, 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEET. Muskardin, G. Balmer, S. Wlach, K. Kondak, M. Laiacker, and A. Ollero, "Landing of a fixed-wing uav on a mobile ground vehicle," in 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 1237-1242.
Real-time, gpubased pose estimation of a uav for autonomous takeoff and landing. A Benini, M J Rutherford, K P Valavanis, 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEEA. Benini, M. J. Rutherford, and K. P. Valavanis, "Real-time, gpu- based pose estimation of a uav for autonomous takeoff and landing," in 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 3463-3470.
Hierarchical fiducial marker design for pose estimation in large-scale scenarios. H Wang, Z Shi, G Lu, Y Zhong, Journal of Field Robotics. 356H. Wang, Z. Shi, G. Lu, and Y. Zhong, "Hierarchical fiducial marker design for pose estimation in large-scale scenarios," Journal of Field Robotics, vol. 35, no. 6, pp. 835-849, 2018.
Design of sonar sensor model for safe landing of an uav. U Papa, G Del Core, 2015 IEEE Metrology for Aerospace (MetroAeroSpace). IEEEU. Papa and G. Del Core, "Design of sonar sensor model for safe landing of an uav," in 2015 IEEE Metrology for Aerospace (MetroAeroSpace). IEEE, 2015, pp. 346-350.
Uav altitude estimation by mixed stereoscopic vision. D Eynard, P Vasseur, C Demonceaux, V Frémont, 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEED. Eynard, P. Vasseur, C. Demonceaux, and V. Frémont, "Uav altitude estimation by mixed stereoscopic vision," in 2010 IEEE/RSJ Interna- tional Conference on Intelligent Robots and Systems. IEEE, 2010, pp. 646-651.
Landing site searching and selection algorithm development using vision system and its application to quadrotor. J Park, Y Kim, S Kim, IEEE Transactions on Control Systems Technology. 232J. Park, Y. Kim, and S. Kim, "Landing site searching and selection algorithm development using vision system and its application to quadrotor," IEEE Transactions on Control Systems Technology, vol. 23, no. 2, pp. 488-503, 2014.
Vision guided landing of an autonomous helicopter in hazardous terrain. A Johnson, J Montgomery, L Matthies, Proceedings of the 2005 IEEE International Conference on Robotics and Automation. the 2005 IEEE International Conference on Robotics and AutomationIEEEA. Johnson, J. Montgomery, and L. Matthies, "Vision guided landing of an autonomous helicopter in hazardous terrain," in Proceedings of the 2005 IEEE International Conference on Robotics and Automation. IEEE, 2005, pp. 3966-3971.
Autonomous altitude estimation of a uav using a single onboard camera. A Cherian, J Andersh, V Morellas, N Papanikolopoulos, B Mettler, IEEEA. Cherian, J. Andersh, V. Morellas, N. Papanikolopoulos, and B. Met- tler, "Autonomous altitude estimation of a uav using a single onboard camera," in 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2009, pp. 3900-3905.
Depth from videos in the wild: Unsupervised monocular depth learning from unknown cameras. A Gordon, H Li, R Jonschkowski, A Angelova, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionA. Gordon, H. Li, R. Jonschkowski, and A. Angelova, "Depth from videos in the wild: Unsupervised monocular depth learning from un- known cameras," in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 8977-8986.
Automatic uav forced landing site detection using machine learning. X Guo, S Denman, C Fookes, L Mejias, S Sridharan, 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEEX. Guo, S. Denman, C. Fookes, L. Mejias, and S. Sridharan, "Auto- matic uav forced landing site detection using machine learning," in 2014 International Conference on Digital Image Computing: Tech- niques and Applications (DICTA). IEEE, 2014, pp. 1-7.
A robust uav landing site detection system using mid-level discriminative patches. X Guo, S Denman, C Fookes, S Sridharan, 2016 23rd International Conference on Pattern Recognition (ICPR). X. Guo, S. Denman, C. Fookes, and S. Sridharan, "A robust uav landing site detection system using mid-level discriminative patches," in 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016, pp. 1659-1664.
Autonomous detection of safe landing areas for an uav from monocular images. S Bosch, S Lacroix, F Caballero, 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEES. Bosch, S. Lacroix, and F. Caballero, "Autonomous detection of safe landing areas for an uav from monocular images," in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2006, pp. 5522-5527.
Utilizing geographic information system data for unmanned aerial vehicle position estimation. T Patterson, S Mcclean, P Morrow, G Parr, 2011 Canadian Conference on Computer and Robot Vision. IEEET. Patterson, S. McClean, P. Morrow, and G. Parr, "Utilizing geo- graphic information system data for unmanned aerial vehicle position estimation," in 2011 Canadian Conference on Computer and Robot Vision. IEEE, 2011, pp. 8-15.
Free lsd: prior-free visual landing site detection for autonomous planes. T Hinzmann, T Stastny, C Cadena, R Siegwart, I Gilitschenski, IEEE Robotics and Automation Letters. 33T. Hinzmann, T. Stastny, C. Cadena, R. Siegwart, and I. Gilitschenski, "Free lsd: prior-free visual landing site detection for autonomous planes," IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 2545-2552, 2018.
Safeuav: learning to estimate depth and safe landing areas for uavs from synthetic data. A Marcu, D Costea, V Licaret, M Pîrvu, E Slusanschi, M Leordeanu, Proceedings of the European Conference on Computer Vision (ECCV). the European Conference on Computer Vision (ECCV)A. Marcu, D. Costea, V. Licaret, M. Pîrvu, E. Slusanschi, and M. Leordeanu, "Safeuav: learning to estimate depth and safe landing areas for uavs from synthetic data," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 0-0.
Depth map prediction from a single image using a multi-scale deep network. D Eigen, C Puhrsch, R Fergus, Advances in neural information processing systems. D. Eigen, C. Puhrsch, and R. Fergus, "Depth map prediction from a single image using a multi-scale deep network," in Advances in neural information processing systems, 2014, pp. 2366-2374.
Deeper depth prediction with fully convolutional residual networks. I Laina, C Rupprecht, V Belagiannis, F Tombari, N Navab, 2016 Fourth international conference on 3D vision (3DV). IEEEI. Laina, C. Rupprecht, V. Belagiannis, F. Tombari, and N. Navab, "Deeper depth prediction with fully convolutional residual networks," in 2016 Fourth international conference on 3D vision (3DV). IEEE, 2016, pp. 239-248.
Unsupervised monocular depth estimation with left-right consistency. C Godard, O Mac Aodha, G J Brostow, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionC. Godard, O. Mac Aodha, and G. J. Brostow, "Unsupervised monoc- ular depth estimation with left-right consistency," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 270-279.
Deep ordinal regression network for monocular depth estimation. H Fu, M Gong, C Wang, K Batmanghelich, D Tao, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionH. Fu, M. Gong, C. Wang, K. Batmanghelich, and D. Tao, "Deep ordinal regression network for monocular depth estimation," in Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2002-2011.
Mid-air: A multi-modal dataset for extremely low altitude drone flights. M Fonder, M V Droogenbroeck, Conference on Computer Vision and Pattern Recognition Workshop. CVPRWM. Fonder and M. V. Droogenbroeck, "Mid-air: A multi-modal dataset for extremely low altitude drone flights," in Conference on Computer Vision and Pattern Recognition Workshop (CVPRW), June 2019.
Valid: A comprehensive virtual aerial image dataset. L Chen, F Liu, Y Zhao, W Wang, X Yuan, J Zhu, 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEEL. Chen, F. Liu, Y. Zhao, W. Wang, X. Yuan, and J. Zhu, "Valid: A comprehensive virtual aerial image dataset," in 2020 IEEE Interna- tional Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 2009-2016.
A flexible new technique for camera calibration. Z Zhang, IEEE Transactions. 2211Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on pattern analysis and machine intelligence, vol. 22, no. 11, pp. 1330-1334, 2000.
Self-supervised sparseto-dense: Self-supervised depth completion from lidar and monocular camera. F Ma, G V Cavalheiro, S Karaman, 2019 International Conference on Robotics and Automation (ICRA). IEEEF. Ma, G. V. Cavalheiro, and S. Karaman, "Self-supervised sparse- to-dense: Self-supervised depth completion from lidar and monocular camera," in 2019 International Conference on Robotics and Automa- tion (ICRA). IEEE, 2019, pp. 3288-3295.
Image quality assessment: from error visibility to structural similarity. Z Wang, A C Bovik, H R Sheikh, E P Simoncelli, IEEE transactions on image processing. 134Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE transactions on image processing, vol. 13, no. 4, pp. 600-612, 2004.
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, IEEE Conference on Computer Vision and Pattern Recognition. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Adam: A method for stochastic optimization. D P Kingma, J Ba, arXiv:1412.6980arXiv preprintD. P. Kingma and J. Ba, "Adam: A method for stochastic optimiza- tion," arXiv preprint arXiv:1412.6980, 2014.
| []
|
[
"A new class of viable and exact solutions of EFE's with Karmarkar conditions: An application to cold star modeling",
"A new class of viable and exact solutions of EFE's with Karmarkar conditions: An application to cold star modeling"
]
| [
"Neeraj Pant [email protected] ",
"Megandhren Govender [email protected] \nDepartment of Mathematics\nFaculty of Applied Sciences\nDurban University of Technology\nDurbanSouth Africa\n",
"Satyanarayana Gedela \nDepartment of Mathematics\nNational Defence Academy\nKhadakwasla, Pune-411023India\n\nDepartment of Mathematics\nSSJ Campus\nKumaun University\nAlmora-263601India\n"
]
| [
"Department of Mathematics\nFaculty of Applied Sciences\nDurban University of Technology\nDurbanSouth Africa",
"Department of Mathematics\nNational Defence Academy\nKhadakwasla, Pune-411023India",
"Department of Mathematics\nSSJ Campus\nKumaun University\nAlmora-263601India"
]
| []
| In this work we present a theoretical framework within Einstein's classical general relativity which models stellar compact objects such as PSR J1614-2230 and SAX J1808.4-3658. The Einstein field equations are solved by assuming that the interior of the compact object is described by a class I spacetime. The so-called Karmarkar condition arising from this requirement is integrated to reduce the gravitational behaviour to a single generating function. By appealing to physics we adopt a form for the gravitational potential which is sufficiently robust to accurately describe compact objects. Our model satisfies all the requirements for physically realistic stellar structures. | 10.1088/1674-4527/21/5/109 | [
"https://arxiv.org/pdf/2012.00361v1.pdf"
]
| 227,238,789 | 2012.00361 | b3f816dc311586a7220ba990843d305c1368a6e6 |
A new class of viable and exact solutions of EFE's with Karmarkar conditions: An application to cold star modeling
1 Dec 2020 August 2020
Neeraj Pant [email protected]
Megandhren Govender [email protected]
Department of Mathematics
Faculty of Applied Sciences
Durban University of Technology
DurbanSouth Africa
Satyanarayana Gedela
Department of Mathematics
National Defence Academy
Khadakwasla, Pune-411023India
Department of Mathematics
SSJ Campus
Kumaun University
Almora-263601India
A new class of viable and exact solutions of EFE's with Karmarkar conditions: An application to cold star modeling
1 Dec 2020; August 2020. Submitted to: Res. Astron. Astrophys. Keywords: compact star; anisotropy; embedding class; Einstein field equations; adiabatic index
In this work we present a theoretical framework within Einstein's classical general relativity which models stellar compact objects such as PSR J1614-2230 and SAX J1808.4-3658. The Einstein field equations are solved by assuming that the interior of the compact object is described by a class I spacetime. The so-called Karmarkar condition arising from this requirement is integrated to reduce the gravitational behaviour to a single generating function. By appealing to physics we adopt a form for the gravitational potential which is sufficiently robust to accurately describe compact objects. Our model satisfies all the requirements for physically realistic stellar structures.
Introduction
Since the publication of Einstein's general relativity in 1914, researchers were captivated by the search for exact solutions of the field equations.
Over the past century a myriad of exact solutions were obtained which attempted to explain observations in cosmology and astrophysics. The gravitational field exterior to a static, spherically symmetric star was first obtained by . This vacuum solution was followed by the interior Schwarzschild solution which describes the gravitational field of a uniform density sphere [Schwarzschild (1916a,b)]. Causality is one of the cornerstones of relativity which requires 0 < dp dρ < 1 [Dev & Gleiser (2002), Dev & Gleiser (2003)]. It is clear that causality is violated at each interior point of the Schwarzschild constant density sphere. This prompted researchers to consider more realistic matter configurations which included inhomogeneous density profiles, anisotropic pressures, electric charge, bulk viscosity and scalar fields. Generalization of the perfect fluid interior matter distribution to include anisotropic stresses has yielded interesting physical characteristics of such models. It was shown that physical properties such as surface tension, compactness and surface red-shift of these stars are sensitive to the anisotropy parameter , Bowers & Liang (1974), Maurya & Govender (2017), Pant et al (2016)]. The impact of electric charge in compact objects has been widely studied within the context of stability and physical viability. It was shown that the presence of electric charged alters the Buchdahl limit required for stability of a self-gravitating, bounded matter distribution [Singh et al (2016), Andreasson et al (2012)]. Departure from spherical symmetry has also been pursued in the context of slowly rotating stars and in the description of gravitational waves [Herrera et al (2005a,b)]. Various techniques ranging from ad-hoc assumptions, imposition of pressure isotropy, use of an equation of state, use of the condition of conformal flatness, Lie symmetry analysis, to name just a few, were used to solve the field equations [Manjonjo et al (2018), Ivanov (2018)]. While these methods yield solutions, there is no guarantee that the ensuing models are physically viable. An extensive review of exact solutions of the Einstein field equations describing static objects show that a very small subset of these satisfy all the requirements for realistic stellar models [Stephani et al (2003)].
A natural question which arises in astrophysics is what happens when a star loses hydrostatic equilibrium and undergoes continued gravitational collapse? Oppenheimer and Snyder tackled this problem by considering a spherically symmetric dust cloud under-going gravitational collapse [Oppenheimer & Snyder (1939)].
Their model served as a catalyst in understanding end-states of gravitational collapse. The Cosmic Censorship Conjecture which ruled out the formation of naked singularities for collapsing matter configurations with reasonable initial states was shown to be violated under various assumptions [Guo & Joshi (2015), Ghosh & Maharaj (2015), Sherif et al (2019)]. The study of black holes has moved into the observable realm making it a popular research topic [Akiyama et al (2019)]. Black hole physics has evolved immensely from the simple Oppenheimer-Snyder dust model to include anisotropic pressures, electromagnetic field, cosmological constant as well as higher dimensions.
In the paper Vaidya (1951) presented an exact solution describing the exterior gravitational field of a radiating star.
This solution is a unique solution of the Einstein field equations describing a spherically symmetric atmosphere composed of null radiation. The Vaidya solution made it possible to model dissipative collapse in which the collapsing core radiates energy to the exterior spacetime in the form of a radial heat flux or null radiation. There were several early attempts at modeling a radiating star with a Vaidya exterior. The problem was the matching of the interior and exterior spacetimes across the boundary of the star. The junction conditions required for the smooth matching of a spherically symmetric, shear-free line element to Vaidya's outgoing solution was provided by Santos (1985). It was shown that for a radiating spherical body dissipating energy in the form of a radial heat flux, the pressure on the boundary is proportional to the magnitude of the heat flux. This condition ensures conservation of momentum across the boundary of the collapsing body. Since the publication of the Santos junction conditions, there has been an explosion of models describing dissipative collapse starting with simple solutions and thus rapidly developing into more sophisticated stellar models. The authors Herrera et al (1989), Chan et al (1994), Di Prisco et al (2007), Herrera & Martinez (1998), Di Prisco et al (1997 have been instrumental in investigating the nature of collapse with dissipation within a general framework thus giving researchers rich insights into these problems especially with the inclusion of shear, inhomogeneity and anisotropy. The thermodynamics of radiating stars was developed by Govender and co-workers since the early 1990's. Relaxational effects due to heat dissipation and shear viscosity predict temperature and luminosity profiles which are significantly different from the Eckart theory of thermodynamics [Govender et al (2010), Govender (2013), Govender & Govinder (2001)].
Recently, there has been a resurgence in seeking exact solutions to the Einstein field equations describing static, compact objects by employing the concept of embedding.
The Karmarkar condition which needs to be satisfied if the spacetime has to be of class I embedding has been widely used to generate various stellar models describing anisotropic spheres [Karmarkar (1948)]. These models have been shown to satisfy all the stringent stability and physical tests imposed by the behaviour of the thermodynamic and gravitational variables [Bhar (2019) , Ivano (2020), Sarkar et al (2020)]. Many of these solutions have been reconciled with observational data of compact objects including strange stars, pulsars and neutron stars [Gedela et al (2018[Gedela et al ( , 2019a, Upreti et al (2020), Fuloria (2017), Pant et al (2020)].
By utilising a quadratic equation of state together with the Karmarkar condition a model for the strange star candidate SAX J1808.4-3658 was obtained. It was shown that this model agrees with observational characteristics of this star. Furthermore, a comparison of the quadratic EoS model with modified Bose-Einstein condensation EoS and linear EoS was carried out Gedela et al (2019c). The Karmarkar condition has also been utilised to model dissipative collapse ensuing from an initially static configuration losing hydrostatic equilibrium and starts to radiate energy to the exterior spacetime. The Karmarkar condition together with the junction condition which represents conservation of momentum across the collapsing boundary determine the temporal and gravitational evolution of the model [Naidu et al (2018)]. Many of these models indicate their robustness under the scrutiny of physical viability.
To this end we employ the Karmarkar condition to seek a model which accurately describes two stellar compact objects, namely, PSR J1614-2230 and SAX J1808.4-3658.
This paper is structured as follows: In section I, we present the Einstein field equations describing the interior spacetime of the stellar model. The Karmarkar and embedding class I conditions are introduced in section III. By adopting a parametric form for one of the metric potentials we generate a stellar model in section IV. The matching of the interior and exterior spacetimes is accomplished in section V. The physical features of the model is discussed in section VI. We investigate the stability of our model in section VII. The paper concludes with a discussion and finding of our main results in section VIII.
The Einstein Field Equations
The line element within the spherically symmetric anisotropic fluid matter distribution in Schwarzschild coordinates (x i ) = (t, r, θ, φ) is delineated in the following form: ds 2 = e ν(r) dt 2 − e λ(r) dr 2 − r 2 (dθ 2 + sin 2 θdφ 2 ).
(1)
where the gravitational potentials ν(r) and λ(r) are yet unknown. The energy-momentum tensor for anisotropic matter takes the form
T jk = [(p t + ρ)v j v k − p t g jk + (p r − p t )χ j χ k ],(2)
where ρ, p r and p t are the energy density, radial and transverse pressures respectively and p t is in the perpendicular direction to p r . The normalized 4velocity vector v j = 1 gtt δ j t and the unit spacelike vector χ j = − 1 grr δ j r along r provide g jk v j v k = 1 and g jk χ j χ k = −1 respectively.
The line element (1) and momentum tensor T jk (2) give rise to the following system of equations [Maurya et al (2019c)
] 8πρ = 1 − e −λ(r) r 2 + λ ′ (r)e −λ(r) r ,(3)8πp r = ν ′ (r)e −λ(r) r − 1 − e −λ(r) r 2 ,(4)8πp t = e −λ 4 2ν ′′ + ν ′ 2 − ν ′ λ ′ + 2ν ′ r − 2λ ′ r ,(5)
where ( ′ ) denotes the derivative with respect to the radial coordinate r.
Using the field equations Eqs. (4) and (5), the anisotropic factor (∆) takes the form
∆ = p t − p r = e −λ ν ′′ 2 − λ ′ ν ′ 4 + ν ′2 4 − ν ′ + λ ′ 2r + e λ − 1 r 2 .(6)
Here we choose the gravitational constant G and speed of sound c to be unity.
The Karmarkar condition
The Karmarkar condition required for the spacetime to be of class I embedding is
R 1414 = R 1212 R 3434 + R 1224 R 1334 R 2323 ,(7)
subject to R 2323 = 0 [Pandey & Sharma (1981)]. The non-zero Riemann tensor components for the line element (1) are
R 1414 = − e ν(r) ( ν ′′ (r) 2 + ν ′ 2 (r) 4 − λ ′ (r)ν ′ (r) 4 ),(8)R 2323 = − e −λ(r) r 2 sin 2 θ(e λ(r) − 1),(9)R 1212 = 1 2 rλ ′ (r),(10)R 3434 = − 1 2 r sin 2 θν ′ (r)e ν(r)−λ(r) .(11)
The differential equation derived using the Karmarkar condition (7) assumes the form
2ν ′′ ν ′ + ν ′ = λ ′ e λ(r) e λ(r) − 1 .(12)
Solving eqn. (12), we find the following relation between e λ(r) and e ν(r)
e λ(r) = P + Q r 0 e λ(r) − 1dr 2 ,(13)
where P and Q are integration constants.
In view of (6), the anisotropy of the fluid ∆ [Maurya et al (2016)] is obtained as
∆ = ν ′ (r) 4e λ(r) 2 r − λ ′ (r) e λ(r) − 1 ν ′ (r)e ν(r) 2rB 2 − 1 .(14)
At this juncture we should point out that when ∆ = 0, the only bounded solution simultaneously satisfying the Karmarkar condition and pressure isotropy is the interior Schwarzschild solution. This solution suffers various shortcomings including superluminal speeds within the interior of the fluid. To this end we consider a solution describing an anisotropic fluid distribution which will be taken up in the next section.
A new parametric class solutions
In this paper, we assumed the following metric potential e λ(r) = 1 + ar 2 α n (r),
where α n (r) = csc n br 2 + c , and a, b and c are positive constants and n ≥ 0. We have selected e λ(r) such that at center e λ(r) = 1, which emphasizes that at the center the tangent 3 space is flat and the Einstein field equations (EFEs) can be integrated. Substituting the e λ(r) from (15) in (13), we obtain the remaining metric potential e ν(r) as
e ν(r) = P − Qh 1 (r)h 2 (r) aα n (r) 4b 2 ,(16)
where P and Q are integration constants.
Using the metric potentials given by Eqs. (15) and (16), the expressions of ρ, p r , ∆ and p t can be cast as
ρ = aα n (r) r 2 aα n (r) − 2bn cot br 2 + c + 3 (ar 2 α n (r) + 1) 2 ,(17)p r = h 2 (r) aα n (r) h 3 (r) (ar 2 α n (r) + 1) ,(18)∆ = h 5 (r)r 2 (2bh 6 (r) − h 7 (r)) h 8 (r) (1 + ar 2 α n (r)) 2 ,(19)p t = p r + ∆,(20)
where h 1 (r) = 2 F 1 1 2 , n + 2 4 ; 3 2 ; cos 2 br 2 + c , h 2 (r) = sin 2 br 2 + c sin 2 br 2 + c
n−2 4 , h 3 (r) = 2P b √ aα n − aQh 1 (r) √ α n cos br 2 + c − 4bQ, h 4 (r) = √ aQh 1 (r) cos br 2 + c − 2P b h 5 (r) = aα n (r) + bn cot br 2 + c h 6 (r) = aP α n (r) − Q aα n (r) h 7 (r) = aBh 4 (r) cos br 2 + c csc n 2 br 2 + c h 8 (r) = 2P b − √ aQh 1 (r) cos br 2 + c
The mass function m(r), gravitational red-shift z(r) and compactification factor u(r) at the surface and within the interior of the stellar system are given by
m(r) = ar 3 α n (r) 2 (ar 2 α n (r) + 1) ,(21)z(r) = 1 P − Qh1(r)h2(r) √ aαn(r) 4b − 1,(22)u(r) = m(r) r = ar 2 α n (r) 2 (ar 2 α n (r) + 1) .(23)
Matching of interior and exterior spacetime over the boundary
To determine the constants a, b, c, P, Q appearing in our class of solutions, the interior metric must be matched smoothly across the boundary with the exterior Schwarz-schild solution
ds 2 = 1 − 2M r dt 2 − 1 − 2M r −1 dr 2 − r 2 (dθ 2 + sin 2 θdφ 2 ).(24)
By comparing the interior solution (1) with exterior solution (24) at the boundary r = R (Darmois-Isreali conditions), we obtain
e ν b = 1 − 2M R = P + Q n √ 1 − γ + 2bR 2 + 2c aα n (R) b (n 2 + 4) 2 ,(25)e −λ(r) b = 1 − 2M R = 1 1 + aR 2 α n (R) ,(26)p r (R) = 0.(27)
With the help of the boundary conditions (25-27), we obtain
a = − 2M csc −n bR 2 + c R 2 (2M − R) ,(28)P = 1 − 2M R ah 1 (R) cos bR 2 + c α n (R) + 4b 4b ,(29)Q = 1 2 1 − 2M R a csc n (bR 2 + c),(30)
where γ = bR 2 + c 2 .
The constants b and c are free parameters and are selected such a way that all the physical properties of the assumed stars for a suitable range of n are well-behaved and satisfy the Darmois-Israel conditions. The values of P and Q are expressed in Eqs. (29) and (30) respectively.
6. Discussion of physical features for well-behaved solutions
Geometrical regularity
The metric potentials (geometrical parameters) for the stars PSR J1614-2230 and SAX J1808.4-3658 for the range of n mentioned in Table 1 at the center (r = 0), give the values e ν | r=0 = positive constant and e −λ(r) | r=0 = 1. This shows that the metric potentials are regular and free from geometric singularities inside the stars. Also, both metric potentials e ν (r) and e −λ(r) are monotonically increasing and decreasing respectively, with r ( Fig.1).
6.2. Viable trends of physical parameters 6.2.1.
Density and pressure trends The matter density ρ, radial pressure p r and transverse pressure p t for stars PSR J1614-2230 and SAX J1808.4-3658 are non-negative inside the stars and monotonically decrease from center to surface of these stars for the range of n mentioned in Table 1 (Fig.2,Fig.3) [Zeldovich & Novikov (1971), Ivano (2002)].
6.2.2. Relation between pressure-density ratios (Equation of state) We plot the graphs of the equation of state parameters (p r /ρ, p t /ρ) to establish some connection between density and the pressures. Using the trend of plots, we establish a linear, quadratic or CFL EoS for our model. An example of starting off with the metric functions and then establishing an EoS is the classic paper by Mukherjee et al (1997). In this work they show that the Vaidya-Tikekar geometry leads to a linear EoS. From the plots of figures, we observe the decreasing trend of pressure to density ratios with r Table 1. Based on the trends of the plots, we calculate equation of state (EOS) for neutron star PSR J1614-2230 as
p r = 0.861538ρ 2 + 0.206369ρ − 0.00223306,(31)p r = 69.1848ρ 2 − 1.27803ρ + 0.00560289,(32)
for n = 13.5, n = 28.98 respectively and for the strange star SAX J1808.4-3658 as
p r = 0.276979ρ 2 + 0.155325ρ − 0.00151322,(33)p r = 48.6746ρ 2 − 0.639035ρ + 0.00149093,(34)
for n = 9.56, n = 20.3 respectively, using the method of least of square technique (elaborated in appendix).
The profiles of equation of state for PSR J1614-2230 (n = 13.5 ), SAX J1808.4-3658 (n = 9.56) are exhibited in the Fig.(5). The trends of EOS for other values of n in their corresponding ranges of the stars remain same as in the Fig.(5).
Mass-radius relation, red-shift and compactification factor
The mass function m(r) and gravitational red-shift z(r) function of stars PSR J1614-2230 and SAX J1808.4-3658 for the range of n mentioned in Table 1 are increasing and decreasing respectively with r. The variation of m(r) and z(r) is shown in Figs.(6,7). Also, compactification parameter u(r) for both the stars are increasing functions with r, shown in Fig.(8) and lies within the Buchdahl limit [Buchdahl (1959)].
6.2.4. Anisotropic parameter In Fig.(9), the radial pressures (p r ) coincides with tangential pressure (p t ) at the center of stars PSR J1614-2230 and SAX J1808.4-3658 for the range n mentioned in Table 1, i.e, pressure anisotropies vanish at the center, ∆(0) = 0 and increase outwards [Bowers & Liang (1974), Ivano (2002)].
Physical Stability analysis
Zeldovich's condition
The values of p r , p t and ρ at the center are given by
8πp rc = 8πp tc = a csc n (c) −2P b a csc n (c) + 4bQ + β 1 β 2 Q 2P b a csc n (c) − β 1 β 2 Q > 0,(35)
and 8πρ c = 3a csc n (c) > 0 if a > 0.
Using Zeldovich's condition [Zeldovich & Novikov (1971)], i.e., p rc /ρ c ≤ 1, we get
−2P b a csc n (c) + 4bQ + β 1 β 2 Q 3 2P b a csc n (c) − β 1 β 2 Q ≤ 1,(37)
In view of (36) and (37), we get the following inequality 2Ab a csc n (c)
4b + β 1 β 2 ≤ Q P ≤ 2Ab a csc n (c) b + β 1 β 2 ,(38)
where β 1 = 2 F 1 1 2 , n + 2 4 ; 3 2 ; cos 2 (c) , β 2 = a cos(c) sin n 2 (c) csc n (c).
7.1.1. Hererra cracking stability of an anisotropic fluid sphere The Hererra cracking method [Herrera (1992)] is used to analyze the stability of anisotropic stars under radial perturbations. We also employ the concept of cracking due to Abreu et al (2007) to analyze potentially stable regions within the stellar configuration by subjecting our model to the condition
−1 < v 2 t − v 2 r ≤ 0 dp t dρ = dp r dρ + d∆ dρ = dp r dρ + d∆ dρ dr dρ ,(39)v 2 r − v 2 t = − d∆ dρ dr dρ .(40)
For a physically feasible model of anisotropic fluid sphere the radial and transverse velocities of sound should be less than 1, which are referred to as causality conditions in the literature. The profiles of v 2 r and v 2 t of stars PSR J1614-2230 and SAX J1808.4-3658 for the range n mentioned in Table 1 are given in Fig.(10), which shows that 0 < v 2 r ≤ 1 and 0 < v 2 t ≤ 1 everywhere within the stellar configuration. Therefore, both the speeds satisfy the causality conditions and monotonically decreasing nature.
Here, we use the Herrera cracking method [Herrera (1992)] for analyzing the stability of anisotropic stars under the radial perturbations. Using the concept of cracking, Abreu et al (2007) gave the idea that the region of the anisotropic fluid sphere where −1 < v 2 t − v 2 r ≤ 0 is potentially stable. Fig.(11) clearly depicts that our model is potentially stable inside the both stars PSR J1614-2230 and SAX J1808.4-3658 for the range n mentioned in Table 1.
Bondi stability condition for adiabatic index
For a relativistic anisotropic sphere the stability depends on the adiabatic index Γ r , the ratio of two specific heats, defined by Heintzmann & Hillebrandt (1975), Γ r = ρ+pr pr ∂pr ∂ρ . Bondi (1964) suggested that for a stable Newtonian sphere, Γ value should be greater than 4 3 . For an anisotropic relativistic sphere the stability condition is given by Chan et al (1993),
Γ > 4 3 + 4(pt0−pr0) 3|p ′ r0 |r + ρ0pr0
2|p ′ r0 | r , where p r0 , p t0 and ρ 0 represent the initial radial pressure, tangential pressure and energy density respectively in static equilibrium.
The first and last term inside the square brackets represent the anisotropic and relativistic corrections respectively. Moreover, both the quantities are positive and increase the unstable range of Γ. Chandrasekhar (1964a) established a condition on Γ to study the stability of interior of Schwarzschild metric and it is defined as
Γ > Γ cr = 4 3 + 19 42 (2δ),(41)
where δ is compactification factor and Γ cr is the critical adiabatic index which is determined from neutral configuration. Moustakidis (2017) suggested that in the interior of fluid sphere Γ cr should linearly depend on the pressure and density rations at center and Γ > Γ cr .
For stable Newtonian sphere, Bondi and Chandrasekhar suggested that Γ > 4 3 [Bondi (1964), Chandrasekhar (1964a,b)].
The present class of models satisfy Bondi, Chandrasekhar, Moustakidis conditions for both the compact stars PSR J1614-2230 and SAX J1808.4-3658 for the range of n mentioned in Table 1 and Γ cr linearly depend on the ratio pr (0) ρ(0) .
7.1.3.
Energy conditions For a physically stable static model the interior of the star should satisfy (i) null energy condition ρ+p r ≥ 0 (NEC) (ii) weak energy conditions ρ + p r ≥ 0, ρ ≥ 0 (WEC r ) and ρ + p t ≥ 0, ρ ≥ 0 (WEC t ) and (iii) strong energy condition ρ + p r + 2p t ≥ 0 (SEC) [Maurya et al (2019b)]. The profiles of energy conditions i.e. NEC, WEC, SEC are displayed in Fig.(13) and our models satisfy all the energy conditions for both the stars PSR J1614-2230 and SAX J1808.4-3658 for the range n mentioned in Table 1.
Tolman-Oppenheimer-Volkoff condition for equilibrium under three forces
The Tolman-Oppenheimer-Volkoff (TOV) equation [Ponce de Leon (1987)] for anisotropic fluid matter distribution is given as
− M g (r)(ρ + p r )
r 2 e (λ(r)−ν(r))/2 − dp r dr + 2∆(r) r = 0, (42) where F g , F h , F a are gravitational, hydrostatic and anisotropic forces respectively and M g (r) is the gravitational mass can be obtained from the Tolman-Whittaker formula M g (r) = 1 2 r 2 ν ′ (r)e (ν(r)−λ(r))/2 .
The TOV equation (42) can be expressed in the following balanced force equation
F g + F h + F a = 0, .(44)
In an equilibrium state the three forces F g , F h and F a satisfy TOV equation. The profiles of the three forces of the stars PSR J1614-2230, SAX J1808.4-3658 are exhibited in Fig.(14) and in which F g overshadows the other two forces F h and F a such that the system to be in a static equilibrium.
Harrison-Zeldovich-Novikov Static stability criterion
The Harrison-Zeldovich-Novikov static stability criteria for non-rotating spherically symmetric equilibrium stellar models provides that the mass of compact stars must be an increasing function of its central density under small radial pulsation i.e. ∂M ∂ρ c > 0.
This criteria ensures that the model is static and stable. It was proposed by Harrison et al (1965) and Zeldovich & Novikov (1971) independently for stable stellar models. With the help of (36) and total mass M = m(R) = aR 3 csc n bR 2 + c 2 (aR 2 csc n (bR 2 + c) + 1)
,
The expression of the mass in terms of the central density is given by M (ρ c ) = ρR 3 csc −n (c) csc n bR 2 + c 2 (ρR 2 csc −n (c) csc n (bR 2 + c) + 3) .
Also, ∂M ∂ρ c = R 3 csc −n (c) csc n bR 2 + c 6 1 3 ρR 2 csc −n (c) csc n (bR 2 + c) + 1 2 > 0, satisfies (Fig.15) the static stability criterion (45). The Harrison-Zeldovich-Novikov condition is satisfied for both the stars PSR J1614-2230 and SAX J1808.4-3658 for the range n mentioned in Table 1.
Discussion and Conclusion
Our aim in this paper is to use the Karmarkar condition (which is purely geometric) to establish a physically viable stellar model (albeit a toy model). Toy models are important as they give a sense of the behaviour of the various physical and thermodynamical properties of the star and assist in setting up numerical codes and simulations.
In this paper, we have explored a new parametric class of solutions for anisotropic matter distribution to model the compact star PSR J1614-2230 and strange star SAX J1808.4-3658 by invoking the Karmarkar condition and adopting a form for one of the metric potentials, e λ(r) . We find a range for one of the parameters, n for the both stars such that the solutions are well behaved for particular choices of the free constants b, c. We have analyzed all the geometrical and physical properties of these two stars and verified the physically viability of the solutions for the same range of n.
The graphs of the two stars for different models i.e. (i) n = 13.5, 18.66, 23.82, 28.98 for PSR J1614-2230; (ii) n = 9.56, 13.14, 16.72, 20.3 for SAX J1808.4-3658 for parameters values of b = 0.0001/km 2 , c = 2.5/km 2 are plotted to find the range of n such that the solutions are well behaved. Furthermore, we concluded that the range of well behaved solutions for PSR J1614-2230 is n = 13.5 to 28.98 and for SAX J1808.4-3658 is n = 9.56 to 20.3 corresponding to same parameter values b, c.
For any value in the range of n the geometrical parameters (e −λ(r) and e ν(r) ) are decreasing and increasing respectively throughout interior of the stars and both curves meet at their boundary (Fig.1). The physical parameters such as density, radial and tangential pressures, pressures to density ratios, redshift, both the velocities in that range of n are nonnegative at the center and monotonically decreasing from center to surface of the stars Figs. (2,3,4,7,10). Physical parameters mass, compactification factor, anisotropy and adiabatic index are increasing outward which is required for a physically viable stellar configuration Figs. (6,8,9,12).
Our models satisfy all the stability conditions for the two stars for any value of n in that range, i.e, Herrera cracking condition (−1 < v 2 t − v 2 r < 0, 0 < v 2 r , v 2 t < 1), Bondi condition (Γ > 4/3), Zeldovich's condition (0 < pr ρ , pr ρ < 1) and Harrison-Zeldovich-Novikov criterion ( ∂M ∂ρc > 0) Figs.(11,12,15). For the same range of n of the both stars the present models hold all the energy conditions (ρ > 0, ρ + p r > 0, ρ + p t > 0, ρ + p r + 2p t > 0) which are required for a physically viable configuration (Fig.13). Furthermore, our models represent a static anisotropic stellar fluid in equilibrium configuration as the gravitational force, the hydrostatic force and the anisotropic force are acting in the interior stars through the TOV equation are counter-balancing each other (Fig.14).
The physical quantities i.e., central adiabatic index (Γ c ), central density (ρ c ), central pressure (p rc ), central red-shift (z c (r)), surface red-shift (z s (c)) and compactness factor (u(r) = GM cR 2 ) are given in Table 1. From Table 1 we conclude that the larger the value Table 1. The variation in physical parameters, i.e., central adiabatic index, central density, central red-shift, surface redshift and compactness factor for different models of (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for parameters n = 13. 5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for parameters n = 9. 56, 13.14, 16.72, 20.3 for the values of b = 0.0001/km 2 , c = 2.5, G = 6.67 × 10 −11 m 3 kg −1 s −2 , M ⊙ = 2 × 10 30 kg and C = 3 × 10 8 ms −1 . Other physical parameters i.e. compactification factor and red-shift at the surface remain constant for any value of the range n. This work has provided a family of parametric solutions of the Einstein field equations obeying the Karmarkar condition. We show that these solutions are sufficiently useful to model compact objects and predict their observed stellar characteristics within very good approximation.
Appendix: Generating function
All the spherically symmetric solutions can be generated from the two generating functions given by Herrera et al (2008). The two primitive generating functions η(r) and Π(r) are given as
e ν(r) = e (2η(r)− 2 r )dr , Π(r) = 8π(p r − p t ).(47)
The two generating functions pertaining to the present class of solutions are obtained as η(r) = √ aQh 1 (r) cos br 2 + c − 2b √ aQr 2 csc n 2 br 2 + c + P r ( √ aQh 1 (r) cos (br 2 + c) − 2P b) and Π(r) = 8π(p r − p t ) = −8π∆.
Appendix: Equation of state
The equation of state is defined as the relation between radial pressure (p r ) and density (ρ) within the star.
Since the presence of cumbersome transformation of p r in terms of ρ, here we use curve fitting technique of approximation to get equation of state. Further, from Fig.(10), we observe that the plot of v r = dpr dρ is not a straight line (i.e. dpr dρ is not a constant), therefore, it is necessary that the relation between p r and ρ is parabolic in nature. Consequently, in order to get the equation of state we consider the curve fitting method for quadratic form p r (r) = U + T ρ(r) + Sρ 2 (r), (48) Σp r (r) = 11U + T Σρ(r) + SΣρ 2 (r),
Σρ(r) p r (r) = U Σρ(r) + T Σρ 2 (r) + SΣρ 3 (r) (50) Σρ 2 (r) p r (r) = U Σρ 2 (r) + T Σρ 3 (r) + SΣρ 4 (r), (51) where, r varies from central to boundary of the star. To find the curve via least square method, we consider the points with the differences 0. . Variation of red-shift with r for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5. Figure 8. Variation of the compactification factor u(r) with r for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5. Figure 9. Variation of anistropy ∆(r) with r for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5. Figure 10. Variation of v 2 r , v 2 t with r for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5. Figure 11. Variation of vt 2 − vr 2 with r for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5. Figure 12. Variation of Γ(r) with r for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5. Figure 13. Variation of energy conditions with r for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5. Figure 14. Variation of balancing forces Fa, Fg, Fa, Fa +Fg +F h with r for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5. Figure 15. Variation of mass with central density ρc for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5.
, TellOrtiz et al (2019), Maurya et al (2019a), Jasim et al (2020), Sing et al (2020),
Fig
the range of n mentioned in
969, 0.7951 for PSR J1614-2230, SAX J1808.4-3658 respectively. Solving the Eqns.(49,50,51) for S, T , U and substituting the values in Eq.(48) , we get required equation of state.
Figure 1 .
1Variation of e −λ(r) , e ν(r) with r for (i) PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5.
Figure 2 .
2Variation of ρ with r for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5.
Figure 3 .
3Variation of pr, pt with r for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5.
Figure 4 .
4Variation of pr/ρ and pt/ρ with r for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5.
Figure 5 .
5Variation of equation of state parameters with ρ for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the model n = 13.5; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the model n = 9.56 and the values of b = 0.0001/km 2 , c = 2.5.
Figure 6 .
6Variation of mass (m(r)) with r for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5.
Figure 7
7Figure 7. Variation of red-shift with r for (i)PSR J1614-2230 with mass M = 1.97M ⊙ and radius R = 9.69km for the models n = 13.5, 18.66, 23.82, 28.98; (ii) SAX J1808.4-3658 with mass M = 0.9M ⊙ and radius R = 7.951km for the models n = 9.56, 13.14, 16.72, 20.3 and the values of b = 0.0001/km 2 , c = 2.5.
Acknowledgments The authors are thankful to the learned referee for the valuable comments and suggestions to improve the paper.
. H Abreu, Class. Quantum Grav. 244631Abreu, H., et al 2007, Class. Quantum Grav. , 24, 4631
. H Andreasson, C G Boehmer, A Mussa, Class. Quantum Grav. 2995012Andreasson, H., Boehmer, C. G., & Mussa, A. 2012, Class. Quantum Grav. , 29, 095012
. K Akiyama, Event Horizon Telescope CollaborationarXiv:1906.11238Astrophys.J. 11astro-ph.GAAkiyama, K. et al 2019, Event Horizon Telescope Collaboration Astrophys.J., L1, 875, 1. arXiv:1906.11238 [astro-ph.GA]
. P Bhar, Eur. Phys. J. C. 79138Bhar, P. 2019, Eur. Phys. J. C, 79, 138
. S Biswas, D Shee, Annals of Phys. 409167905Biswas, S., Shee, D., et al 2019, Annals of Phys., 409, 167905
H Bondi, Proc. R. Soc. Lond. A. R. Soc. Lond. A28139Bondi, H. 1964, Proc. R. Soc. Lond. A, 281, 39
. R L Bowers, E P T Liang, Astrophys. J. 188657Bowers, R.L., & Liang, E.P.T. 1974, Astrophys. J., 188, 657
. H A Buchdahl, Phys. Rev. D. 1161027Buchdahl, H. A. 1959, Phys. Rev. D, 116, 1027
. R Chan, Mon. Not. R. Astron. Soc. 265533Chan, R., et al 1993, Mon. Not. R. Astron. Soc., 265, 533
. R Chan, L Herrera, N O Santos, Mon. Not. R. Astron. Soc. 267637Chan, R., Herrera, L., & Santos, N. O. 1994, Mon. Not. R. Astron. Soc., 267, 637
. S Chandrasekhar, Astrophys, J. 140417Chandrasekhar, S. 1964, Astrophys, J., 140, 417
. S Chandrasekhar, Phys. Rev. Lett. 12114Chandrasekhar, S. 1964, Phys. Rev. Lett. ,12, 114
. K Dev, M Gleiser, Gen. Relativ. Gravit. 341793Dev, K., & Gleiser, M. 2002, Gen. Relativ. Gravit., 34, 1793
. K Dev, M Gleiser, Gen. Relativ. Gravit. 351435Dev, K., & Gleiser, M. 2003, Gen. Relativ. Gravit. 35, 1435
. A Di Prisco, L Herrera, G Le Denmat, M A H Maccallum, N O Santos, Phys. Rev. D. 7664017Di Prisco, A., Herrera, L., Le Denmat, G., MacCallum, M. A. H., & Santos, N. O. 2007, Phys. Rev. D, 76, 064017
. A Di Prisco, L Herrera, N Falcon, M Esculpi, N O Santos, Gen. Relativ. Gravit. 291391Di Prisco, A., Herrera, L., Falcon, N., Esculpi, M., & Santos, N. O. 1997, Gen. Relativ. Gravit., 29, 1391
. P Fuloria, Astro. phys. space sci. 362217Fuloria, P. 2017, Astro. phys. space sci., 362, 217
. T Gangopadhyay, Mon. Not. R. Astron. Soc. 4313216Gangopadhyay, T., et al 2013, Mon. Not. R. Astron. Soc., 431, 3216
. S Gedela, R K Bisht, N Pant, Eur. Phys. J. A. 54207Gedela, S., Bisht, R. K., & Pant, N. 2018, Eur. Phys. J. A, 54, 207
. S Gedela, R K Bisht, ; N Pant, Mod. Phys. Lett. A. 34195015754Gedela, S., Bisht, R. K., & Pant 2019, N., Mod. Phys. Lett. A., 34, 195015754
. S Gedela, R P Pant, R K Bisht, N Pant, Eur. Phys. J. A. 5595Gedela, S., Pant, R.P, Bisht, R. K., & Pant, N. 2019, Eur. Phys. J. A, 55, 95
. S Gedela, N Pant, R P Pant, J Upreti, Int. J. Mod. Phys. A. 341950179Gedela, S., Pant, N., Pant, R.P, & Upreti, J. 2019, Int. J. Mod. Phys. A., 34, 1950179
. S Gedela, R K Bisht, N Pant, Mod. Phys. Let. A. 332050097S. Gedela, Bisht, R.K., & Pant, N. 2020, Mod. Phys. Let. A, 33, 2050097
. S G Ghosh, S Maharaj, Eur. Phys. J. C. 757Ghosh, S. G., & Maharaj, S. D 2015, Eur. Phys. J. C, 75, 7
. G Govender, M Govender, K S Govinder, Int. J. Mod. Phys. D. 191773Govender, G., Govender, M., & Govinder, K. S. 2010, Int. J. Mod. Phys. D 19, 1773
. M Govender, Int. J. Mod. Phys. D. 221350049Govender, M. 2013, Int. J. Mod. Phys. D 22, 1350049
. M Govender, K S Govinder, Phys. Lett. A. 28371Govender, M., & Govinder, K. S. 2001, Phys. Lett. A, 283, 71
. J Guo, P S Joshi, Phys. Rev. D. 9264013Guo, J., & Joshi, P. S. 2015, Phys. Rev. D, 92, 064013
. B K Harrison, Gravitational Theory and Gravitational Collapse. University of Chicago PressHarrison, B.K., et al 1965, Gravitational Theory and Gravitational Collapse, University of Chicago Press, Chicago.
. H Heintzmann, W Hillebrandt, Astron. Astrophys. 3851Heintzmann, H., & Hillebrandt, W. 1975, Astron. Astrophys., 38, 51
. L Herrera, J Martinez, Gen. Relativ. Gravit. 30445Herrera, L., & Martinez, J. 1998, Gen. Relativ. Gravit., 30, 445
. L Herrera, Phys. Lett. A. 165206Herrera, L. 1992, Phys. Lett. A, 165, 206
. L Herrera, G Le Denmat, G Marcilhacy, N O Santos, Int.J.Mod.Phys. D. 14657Herrera, L., Le Denmat, G., Marcilhacy, G., & Santos, N. O. 2005, Int.J.Mod.Phys. D, 14, 657
. L Herrera, Gen. Relativ. Gravit. 37873Herrera, L., et al 2005, Gen. Relativ. Gravit., 37, 873
. L Herrera, J Ospino, A Di Prisco, Phys. Rev. D. 7727502Herrera, L., Ospino, J., & Di Prisco, A. 2008, Phys. Rev. D, 77, 027502
. L Herrera, G Le Denmat, N O Santos, Mon. Not. R. Astron. Soc. 237257Herrera, L., Le Denmat, G., & Santos, N. O. 1989 Mon. Not. R. Astron. Soc., 237, 257
. B V Ivanov, Phys. Rev. D. 65104011Ivanov, B.V. 2002, Phys. Rev. D, 65, 104011
. B Ivanov, Eur. Phys. J. C. 78332Ivanov, B. V. 2018, Eur. Phys. J. C, 78, 332
. B V Ivanov, Eur. Phys. J. Plus. 135377Ivanov, B. V. 2020, Eur. Phys. J. Plus, 135, 377
. M K Jasim, S K Maurya, A S M Sawaii, Astrophys Space Sci. 3659Jasim, M.K., Maurya, S.K., & Al Sawaii, A. S. M. 2020, Astrophys Space Sci., 365, 9
K R Karmarkar, Proc. Indian. Acad. Sci. A. Indian. Acad. Sci. A2756Karmarkar, K.R. 1948, Proc. Indian. Acad. Sci. A, 27, 56
. S Karmarkar, S Mukherjee, R Sharma, S D Maharaj, Pramana J. Phys. 68881Karmarkar, S., Mukherjee, S., Sharma, R., & Maharaj, S. D. 2007, Pramana J. Phys., 68, 881
. A M Manjonjo, S D Maharaj, S Moopanar, Class. Quantum Grav. 3545015Manjonjo, A.M., Maharaj, S. D., & Moopanar, S. 2018, Class. Quantum Grav. , 35, 045015
. S K Maurya, M Govender, Eur. Phys. J. C. 77420Maurya, S.K., & Govender, M. 2017, Eur. Phys. J. C, 77, 420
. S K Maurya, Eur. Phys. J. A. 52191Maurya, S.K., et al 2016, Eur. Phys. J. A 52, 191
. S K Maurya, Phys. Rev. D. 10044014Maurya, S.K., et al 2019, Phys. Rev. D, 100, 044014
. S K Maurya, Phys. Rev. D. 9944029Maurya, S. K., et al 2019, Phys. Rev. D, 99, 044029
. S K Maurya, S D Maharaj, D Deb, Eur. Phys. J. C. 79170Maurya, S. K., Maharaj, S. D., & Deb, D. 2019, Eur. Phys. J. C, 79, 170
. Ch C Moustakidis, Gen. Relativ. Gravit. 4968Moustakidis, Ch. C. 2017, Gen. Relativ. Gravit., 49, 68
. S Mukherjee, B C Paul, N Dadhich, Class. Quantum Grav. 143475Mukherjee, S., Paul, B.C., & Dadhich, N. 1997, Class. Quantum Grav., 14, 3475
. N F Naidu, M Govender, S Maharaj, Eur. Phys. J. C. 7848Naidu, N.F., Govender, M., & Maharaj, S.D. 2018, Eur. Phys. J. C, 78, 48
. J P Oppenheimer, H Snyder, Phys. Rev. 56455Oppenheimer, J. P., & Snyder, H. 1939, Phys. Rev., 56, 455
. S N Pandey, S P Sharma, Gen. Relativ. Gravit. 14113Pandey, S.N., & Sharma, S.P. 1981, Gen. Relativ. Gravit., 14, 113
. N Pant, N Pradhan, R K Bansal, Astrophys. Space Sci. 36141Pant, N., Pradhan, N., & Bansal, R. K. 2016, Astrophys. Space Sci., 361, 41
. N Pant, S Gedela, R K Bisht, 10.1016/j.cjph.2020.06.020Chin. J. Phys. Pant, N., Gedela, S., & Bisht, R. K. 2020, Chin. J. Phys. DOI: 10.1016/j.cjph.2020.06.020
. J Ponce De Leon, Gen. Relativ. Gravit. 19797Ponce de Leon, J. 1987, Gen. Relativ. Gravit. 19, 797
. N O Santos, Mon. Not. R. Astron. Soc. 216403Santos, N. O. 1985, Mon. Not. R. Astron. Soc., 216, 403
. N Sarkar, Eur. Phys. J. C. 80255Sarkar, N., et al 2020, Eur. Phys. J. C, 80, 255
. K Schwarzschild, arXiv:physics/9905030Sitz. Deut. Akad.Wiss. Berlin Kl.Math. Phys. 189Schwarzschild, K. 1916, Sitz. Deut. Akad.Wiss. Berlin Kl.Math. Phys., 1916, 189. arXiv:physics/9905030
. K Schwarzschild, arXiv:physics/9912033Sitz. Deut. Akad.Wiss. Berlin Kl.Math. Phys. 24424Schwarzschild, K. 1916, Sitz. Deut. Akad.Wiss. Berlin Kl.Math. Phys., 24, 424. arXiv:physics/9912033.
. R Sharma, S Mukherjee, Mod. Phys. Lett. A. 161049Sharma, R., & Mukherjee, S. 2001, Mod. Phys. Lett. A, 16, 1049
. R Sharma, S Mukherjee, S D Maharaj, Gen. Relativ. Grav. 33999Sharma, R., Mukherjee, S., & Maharaj, S. D. 2001, Gen. Relativ. Grav., 33, 999
. R Sharma, S D Maharaj, J. Astrophys. Astron. 28133Sharma, R., Maharaj, S.D. 2007, J. Astrophys. Astron., 28, 133
. A Sherif, R Goswami, S Maharaj, Class. Quantum Grav. 36215001Sherif, A., Goswami, R., & Maharaj, S. 2019, Class. Quantum Grav. , 36, 215001
. K N Singh, N Pant, N Pradhan, Astrophys. Space Sci. 361173Singh, K. N., Pant, N., & Pradhan, N. 2016, Astrophys. Space Sci., 361, 173
. K N Singh, Chin. Phys. C. 4435101Singh, K. N., et al. 2020, Chin. Phys. C, 44, 035101
Exact Solutions of Einstein's Field Equations. H Stephani, D Kramer, Cambridge University Press2nd EditionStephani, H., Kramer, D., et.al. 2003, Exact Solutions of Einstein's Field Equations, 2nd Edition, Cambridge University Press
. F Tello-Ortiz, S K Maurya, A Errehymy, N K Singh, M Daoud, Eur. Phys. J. C. 79885Tello-Ortiz, F., Maurya, S. K., Errehymy, A., Singh, N. K., & Daoud, M. 2019, Eur. Phys. J. C, 79, 885
. J Upreti, S Gedela, N Pant, R Pant, 2020, New Astronomy. 80101403Upreti, J., Gedela, S., Pant, N., & Pant, R.P 2020, New Astronomy, 80, 101403
P C Vaidya, Proc. Indian Acad. Sc. A. Indian Acad. Sc. A33264Vaidya, P. C. 1951, Proc. Indian Acad. Sc. A, 33, 264
Y B Zeldovich, I D Novikov, Relativistic Astrophysics. ChicagoUniversity of Chicago Press1Zeldovich, Y.B., & Novikov, I.D. 1971, Relativistic Astrophysics Vol. 1: Stars and Relativity, University of Chicago Press, Chicago
| []
|
[
"The laws of thermodynamics and information for emergent cosmology",
"The laws of thermodynamics and information for emergent cosmology"
]
| [
"M Hashemi \nDepartment of Physics\nShahid Beheshti University\n19839Evin, TehranG. CIran\n",
"S Jalalzadeh \nDepartment of Physics\nShahid Beheshti University\n19839Evin, TehranG. CIran\n",
"S Vasheghani Farahani \nDepartment of Physics\nTafresh University\nP.O. Box39518-79611TafreshIran\n"
]
| [
"Department of Physics\nShahid Beheshti University\n19839Evin, TehranG. CIran",
"Department of Physics\nShahid Beheshti University\n19839Evin, TehranG. CIran",
"Department of Physics\nTafresh University\nP.O. Box39518-79611TafreshIran"
]
| []
| The aim here is to provide a set of equations for cosmology in terms of information and thermodynamical parameters. The method we implement in order to describe the universe is a development of Padmanabhan's approach which is based on the fact that emergence of the cosmic space is provided by the evolution of the cosmic time. In this line we obtain the Friedmann equation or its equivalent the conservation law in terms of information by the implementation of Laundauer's principle or in other words the information loss/production rate. Hence, a self consistent description of the universe is provided in terms of thermodynamical parameters. This is due to the fact that in this work the role of information which is the most important actor of all times, has stepped in to cosmology. We provide a picture of the emergent cosmology merely based on the information theory. In addition, we introduce a novel entropy on the horizon, which can also generalize Bekenstein-Hawking entropy for the asymptotic holographic principle. | 10.1007/s10714-015-1971-8 | [
"https://arxiv.org/pdf/1509.07976v1.pdf"
]
| 118,380,178 | 1509.07976 | c051da882ffc61c73cef1fc3318a5ffe5cb19d2a |
The laws of thermodynamics and information for emergent cosmology
26 Sep 2015 September 29, 2015
M Hashemi
Department of Physics
Shahid Beheshti University
19839Evin, TehranG. CIran
S Jalalzadeh
Department of Physics
Shahid Beheshti University
19839Evin, TehranG. CIran
S Vasheghani Farahani
Department of Physics
Tafresh University
P.O. Box39518-79611TafreshIran
The laws of thermodynamics and information for emergent cosmology
26 Sep 2015 September 29, 20150420Cv0450-h0470Dy9880-k0570-a
The aim here is to provide a set of equations for cosmology in terms of information and thermodynamical parameters. The method we implement in order to describe the universe is a development of Padmanabhan's approach which is based on the fact that emergence of the cosmic space is provided by the evolution of the cosmic time. In this line we obtain the Friedmann equation or its equivalent the conservation law in terms of information by the implementation of Laundauer's principle or in other words the information loss/production rate. Hence, a self consistent description of the universe is provided in terms of thermodynamical parameters. This is due to the fact that in this work the role of information which is the most important actor of all times, has stepped in to cosmology. We provide a picture of the emergent cosmology merely based on the information theory. In addition, we introduce a novel entropy on the horizon, which can also generalize Bekenstein-Hawking entropy for the asymptotic holographic principle.
Introduction
Thermodynamics is not anymore a stranger in cosmology. As a matter of fact thermodynamics provides the backbone for all analysis in the context of emergent universe models. The basis for this statement was established by Sakharov in 1967 who first pointed out that instead of spacetime, it is better to talk of spacetime atoms, which is only possible by the language of thermodynamics, see Ref [1]. This statement paved the way for scientists to work out Einstein's field equations based on the thermodynamic equation δQ = T dS known as the Clausius relation [2]. Note that T is the Unruh temperature which is observed by an accelerating observer inside the horizon, and δQ is the energy flux across the horizon [3]. The fact of the matter is that the Einstein's field equations are now understood as the spacetime equations of state.
Recently due to the efforts of Verlinde a new interpretation of the spacetime has been provided. This is in a sense that by relying on the information extracted from the holographic screen, three of the fundamental equations of physics, (Newton's law of gravity, Newton's second law, Einstein's field equations) were seen. Verlinde interpreted the gravitational force as an entropic force which is non-fundamental; this is due to the change of information on the holographic screen, see Ref [4] for details. However, Padmanabhan argued with the definitions provided by Verlinde, by bringing up two issues; the first was on the general covariance, and the second was on the finite systems e.g. the Sun-Earth system [9]. The former issue was raised based on the fact that in general covariance, space and time are treated within the same body, but Verlinde treated them differently. The later issue was raised due to the fact that our every day experience does not prove that finite systems emerge, which contradicts with Verlinde's statement that space emerges everywhere, both around finite or infinite systems. However, by selecting the cosmic time in cosmology, the two issues regarding general covariance and finite systems are out of the question. Padmanabhan pointed out that in cosmology by selecting a cosmic time these two issues do not stand. In other words, Padmanabhan stated that in cosmology, the "Cosmic space 1 is emergent as cosmic time progresses" which means that the expansion of the universe works out as long as the holographic equipartition stands, see Refs [6,7] for details. In this line the first proposal for the emergent cosmology was issued by Padbanabhan as [6] dV H dt
= L 2 P (N sur − N bulk ),(1)
where dt and dV H are the variations of the cosmic time and cosmic volume respectively. Note that in Padmanabehan's proposal the cosmic volume has been taken as the Hubble horizon volume. N sur represents the number of surface degrees of freedom which is equal to A H /L 2 P , where A H equals to 4πH −2 denotes the cosmic sphere's area, H is the Hubble parameter, L P = G/c 3 is the Planck length. The parameters , G and c are the reduced Planck constant, the gravitational constant, and the speed of light in the vacuum respectively. The bulk degrees of freedom which satisfies the equipartition law of energy is obtained as
N bulk = 2 k B T E Komar ,(2)
the temperature corresponding to the hubble horizon is T H = H/2π, and E Komar is the Komar energy contained inside the bulk of a perfect fluid obtained by
E Komar = 2 υ dV cs [T µν − 1 2 T g µν ]u µ u ν = |(ρ + 3p)V cs |,(3)
where T µν is the energy-momentum tensor, T is the trace of T µν , u µ is the 4-vector velocity, and V cs denotes the cosmic space volume. The reason for choosing cs as the indice is due to the fact that we have various cosmic space volumes, where (for Padmanabhan proposal we have V cs = V H = 4πH −3 /3).
Note that E Komar is the source for gravitational acceleration [10]. In general, the dynamical emergent equation is
dV cs dt = f (N sur , N bulk ),(4)
where f (N sur , N bulk ) is an arbitrary function with respect to the asymptotic holographic principle. Equation (1) is the simplest accessible form of Eq. (4). In this stage it is worth stating other proposals which prove adequate for better understanding the context of the present study. We start with the proposal issued by Sheykhi [5] who provided his suggestion based on the apparent horizon as
dV A dt = L 2 P R A H −1 (N sur − N bulk ),(5)
where R A = [H 2 + k a 2 ] − 1 2 is the apparent horizon radius in the FLRW background. Note that the apparent horizon is a boundary surface that allows inward light rays enter it, but prevents outward light rays exiting it. The constant k could take values equal to −1 in case of an open universe, 1 in case of a closed universe or 0 in case of a flat universe [11]. V A = 4πR 3 A /3 shows the apparent cosmic space volume. Similar to the Padmabanbhan proposal N sur equals to A/L 2 P , where A equals to 4πR 2 A . The temperature corresponding to the apparent horizon is the Kodama-Hayward temperature (T A ) obtained by [8]
T A = |κ| 2π = 1 2πR A 1 −Ṙ A 2HR A ,(6)
where κ is dynamical surface gravity and the dot denotes derivative in respect to time. Therefore, the bulk degrees of freedom is N bulk = 2 kBTA E Komar . Although this equation works fine, but is however not consistent with Padmanabhan's proposal. The reason for this is that the RHS of equation (5) is not totally based on thermodynamical quantities. Strictly speaking, a thermodynamical interpretation of the term R A /H −1 is not simply achieved. Yang et al. suggested [14] dV
H dt = f (N sur − N bulk , N sur ),(7)
where for details on the function f (∆N, N sur ) = L 2 P ∆N/α+αK(Nsur/α) [14] and the appendix. The proposition issued by Eune et al. focuses on the corrected form of the horizon volume [15]
1+ 2 1−n 1+2αK(Nsur/α) 2 1−n , see RefdV A dt = L 2 P f k (t)(N sur − N bulk ),(8)
where f k (t) is the deviation volume coefficient from the flat universe, defined as
f k (t) = V A V k Ṙ AH −1 RA − RA H −1 V k VA + 1 RAH −1 RA + RA H −1 V k VA − 1 ,(9)
where we have
V k = 2πa 2 [ √ ka arcsin ( √ kR A /a) − kR 2 A H].
Note that k is as stated earlier in the text. Moreover, a thermodynamic interpretation for f k (t) is not simple.
In a work prior to the present study we have proposed [12] dV
A dt = 2L 2 P T A T H (N sur − N bulk ),(10)
where T H = (2πHR 2 A ) −1 is the cosmological horizon temperature with non-dynamical radius which is measured by a comoving observer [13] which by implying an apparent horizon (which is considered as the most appropriate boundary in application to thermodynamics) alongside Kodama-Hayward temperature (which is a temperature for an evolving horizon implied in cosmology) will enable us to write the RHS of the dynamical emergent equation based only on thermodynamical parameters. Note that T A is the physical and working temperature in Eq. (10). As of the spirit of the emergent cosmology where the universe has a tendency towards the holographic principle state, T A tends to the asymptotic temperature T H . Due to the presence of H and R A in Eqs. (5) and (9) some difficulties may be observed for providing thermodynamical interpretations. This is due to the fact that Eqs. (5) and (9) do not seem to be completely based on information parameters. Equations (1) and (11) although do describe the evolution of an emergent cosmic space, but still lack generality. This generality is observed in Eq. (10) justifying its importance. For more proposals please see [16]).
In the 70s, studying black hole physics became very hot due to the discovery of a mysterious relation between gravity and thermodynamics [17,18]. After facing the information paradox while measuring the transmission information through the horizon, measuring the transmitted information from the horizon became a big issue. In the emergent theory due to the assumption of the existence of space atoms, we are hopeful that not only a better description for the universe could be provided, but also the quantum gravity theory could be established. The variation of the cosmic space volume, the space atoms go in and out the horizon. These movements create or destroy information inside the bulk. Space atoms which go in and out the horizon are not recoverable due to the horizon's notion. Now since these inward and outward space atoms are non-recoverable, the change in information that they provide, would generate entropy [17,19,20,21].
In the present work, we discuss the information interpretation of the second law of thermodynamics based on Landauer principle. In fact, the general idea of this work is based on this principle. This principle states that information is a physical concept. As a matter of fact this principle came to save the second law of thermodynamics from criticizers, where the most important one of all was Maxwell's demon, see e.g. [22,23]. However, it was Landauer who saved the day for the second law of thermodynamics by stating that if information on a system is so how deleted that it is impossible to undelete it, entropy is created. Note that Landauer was highly inspired by Szilard's engine, who had proved that information is not an intrinsic concept, but has relations with the outside world, see [22]. This entropy which is created by the loss of information is obtained by
∆S = −k B ln 2 ∆I,(11)
where ∆I denotes of information loss. This means that for every one bit of destroyed data, the entropy increases as much as k B ln 2, see [24]. For an intensive reading on Landauer principle, see e.g. [25,26,27].
The information interpretation of the conservation law
The standard model for cosmology is based on two main equations 2
1 R 2 A = H 2 + k a 2 = 8π 3 L 2 P ρ,(12)H + H 2 = − 4π 3 L 2 P (ρ + 3p),(13)
where, the first is known as Friedmann's equation and the second is known as Raychaudhuri's equation. However, Friedmann's equation could be derived when Raychaudhuri's equation is combined with the conservation law. Note that the conservation law for a universe with a perfect fluid matter content is expressed as
ρ + 3H(ρ + p) = 0 in terms of H, ρ + 2Ṙ A RA ρ = 0 in terms of R A .(14)
In this stage the intention is to write an expression for Eq. (14) in terms of the first law of thermodynamics
dE + p dV = T dS. (15)
Before proceeding it is worth providing an example; consider a comoving volume defined as V_c = (4π/3) a³, for which the conservation law takes the form
ρ̇ + (V̇_c/V_c)(ρ + p) = 0, (16)
now if this volume has the Misner-Sharp energy [28], we have
dE + pdV c = 0.(17)
Equation (17) has the form of the first law of thermodynamics. It can also be deduced from Eq. (17) that dS is equal to zero, meaning that the entropy is constant. In other words, a universe described by the comoving volume with energy E has no entropy production. It is now time to apply these considerations to a more realistic setting. Since all the laws of thermodynamics work well for the apparent horizon, the desired boundary here is chosen accordingly (the apparent horizon); note that the apparent horizon has been defined below Eq. (5). In this line we start by dividing both sides of the first law of thermodynamics (Eq. 15) by the apparent horizon volume (V_A = (4π/3) R_A³), which gives
ρ̇ + 3(ρ + p) Ṙ_A/R_A = T_A Ṡ/V_A. (18)
2 For simplicity, the natural units k_B = c = ℏ = 1 are used throughout this section.
It is clear that the entropy is not constant, which is due to the selection of the apparent horizon volume. The conclusion is that a volume with fixed entropy, such as the comoving one, is not what we need here: we need a boundary that exhibits entropy production. Indeed, in contrast to the comoving volume, there is entropy production/loss for an observer who travels with the apparent horizon. The question that arises is what form this new entropy rate should take. To answer it we rely on a few facts. For the de Sitter universe the holographic principle holds (N_sur = N_bulk) [29]; hence, by the dynamical equation of emergence, dV_A/dt vanishes. This results in a constant radius, which leads to the conclusion that the entropy is constant (this can readily be seen from Eqs. (14) and (18), which lead to Ṡ ∝ Ṙ_A). It is known that the de Sitter entropy S_dS is equal to 3π/Λ, which is constant, see e.g. [30]. Another fact supporting our proposal for the entropy rate is that for vacuum the information production/loss rate must be zero; note that for vacuum (cosmological constant Λ) the equation of state is ρ + p = 0. The last fact is that for the Minkowski spacetime, due to the flat geometry, information is neither produced nor lost. These facts, in addition to the fact that the entropy is non-constant, justify our proposal for the information production/loss rate, defined as
dI/dt = [3HV_A − f(N_sur, N_bulk)] / (T_A ln 2) (ρ + p), (19)
where f(N_sur, N_bulk) has been defined in Eq. (4). It is worth recalling that when 3HV_A is equal to f(N_sur, N_bulk), the numerator on the RHS of Eq. (19) vanishes, resulting in the Milne universe, which is part of the Minkowski spacetime. The rate of change of the entropy inside the apparent horizon due to the Landauer principle (11) is then
dS/dt = −ln 2 (dI/dt) = (f − 3HV_A)/T_A (ρ + p). (20)
One may reproduce the conservation law (14) based on the first law of thermodynamics (17), together with the evolutionary parameters expressed by Eqs. (4) and (19).
In line with the aims of the present study, namely to write the Friedmann equation for the apparent horizon, we write all emergent equations in terms of information. The two fundamental equations describing the cosmic space in the emergent perspective are
dV_A/dt = f(N_sur, N_bulk), (21)
dI/dt = (3HV_A − f)/(T_A ln 2) (ρ + p). (22)
The dynamical equation of emergent gravity (21) is equivalent to the Raychaudhuri equation of the standard model of cosmology, and the rate of information change of the apparent horizon (22) is equivalent to Eq. (14). Therefore, one can say that Eqs. (21) and (22) lead to the equations of the standard model of cosmology. In order to establish that Eqs. (21) and (22) define emergent cosmology, one more thing has to be done, concerning the parameter H in Eq. (22), whose information aspect has not yet been well defined. To proceed we have to deal with the last degree of freedom of the system, f, located in Eq. (4); therefore the proposal expressed by Eq. (10) is implemented. This provides an information nature for all parameters of Eq. (22). It is worth noting that proposal (10) is well chosen, being more in line with the nature of emergent cosmology than the alternatives; the other proposals are examined in the appendix. In this line we find the rate of change of the entropy by combining the Landauer principle (20) and the holographic Raychaudhuri equation (10), which gives
dS/dt = [ (2L_P²/T_H)(N_sur − N_bulk) − 3HV_A/T_A ] (ρ + p). (23)
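Equation (23) follows from inserting the proposal f = 2L_P² (T_A/T_H)(N_sur − N_bulk) of Eq. (27) into Eq. (20); a short symbolic sketch of the algebra:

    import sympy as sp

    T_A, T_H, L_P, H, V_A, N_sur, N_bulk, rho, p = sp.symbols(
        'T_A T_H L_P H V_A N_sur N_bulk rho p', positive=True)

    f = 2 * L_P**2 * (T_A / T_H) * (N_sur - N_bulk)          # proposal (10), cf. Eq. (27)
    rate_eq20 = (f - 3 * H * V_A) / T_A * (rho + p)           # Landauer rate, Eq. (20)
    rate_eq23 = (2 * L_P**2 / T_H * (N_sur - N_bulk)
                 - 3 * H * V_A / T_A) * (rho + p)             # Eq. (23)

    print(sp.simplify(rate_eq20 - rate_eq23))   # -> 0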
It is instructive to check the generality of Landauer's entropy (Eq. (20)). Equation (14) combined with Eq. (18) gives
Ṡ = (V_A/T_A) [ ρ̇ + 3(ρ + p) Ṙ_A/R_A ] = (ρ + 3p) (V_A/T_A) (Ṙ_A/R_A), (24)
where the dot denotes the derivative with respect to time. Equation (24) can be written in terms of the surface and bulk degrees of freedom as
Ṡ = (1/4) (N_bulk/N_sur) Ṅ_sur, (25)
Note that N bulk and N sur have been defined in Eq. (2) and its preceding paragraph. Equation (25) could also be written as the change of entropy in terms of the surface degrees of freedom
dS = (1/4) (N_bulk/N_sur) dN_sur. (26)
It can readily be noticed that when the holographic condition (N_sur = N_bulk) applies, the well-known Bekenstein-Hawking entropy, S = (1/4) A/L_P², results from Eq. (26). Hence, interestingly, Eq. (26) can be considered a generalization of the Bekenstein-Hawking entropy which holds not only for the holographic principle but also for the asymptotic holographic principle.
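A one-line symbolic check of this limit: with N_bulk = N_sur, Eq. (26) reduces to dS = dN_sur/4, and integrating up to N_sur = A/L_P² returns the Bekenstein-Hawking value. A minimal sketch:

    import sympy as sp

    A, L_P = sp.symbols('A L_P', positive=True)
    N_sur = sp.symbols('N_sur', positive=True)

    # dS = (1/4) dN_sur when N_bulk = N_sur; integrate from 0 up to N_sur = A / L_P^2
    S = sp.integrate(sp.Rational(1, 4), (N_sur, 0, A / L_P**2))
    print(S)   # -> A/(4*L_P**2), the Bekenstein-Hawking entropy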
In this stage it is worth writing the emergent equations in the form proposed in the present study as
dV/dt = 2L_P² (T_A/T_H)(N_sur − N_bulk),    dI = −(1/(4 ln 2)) (N_bulk/N_sur) dN_sur, (27)
which is accompanied by the first law of thermodynamics (15).
In line with the aim of obtaining an expression for the cosmological equations, we start from the holographic Raychaudhuri equation (10). First we should recall two quantities: the Kodama-Hayward temperature [8,31,32,33] and the Komar energy of the universe. As stated earlier, in order to measure the temperature of an evolving horizon it is suitable to use the Kodama-Hayward temperature, T_A = (1/(2πR_A)) (1 − Ṙ_A/(2HR_A)), which is measured by the Kodama observer. It is worth noting that in emergent cosmology, due to the asymptotic de Sitter behaviour, the existence of dark energy is compulsory, see [7]. Thus, for an accelerating universe filled with a perfect fluid, the total energy density ρ never equals three times the total pressure, 3p, and hence the temperature never becomes zero. The Komar mass of the accelerated universe is expressed as [7,10]
E Komar = |(ρ + 3p)V A | = −(ρ + 3p)V A .(28)
Substitute the definitions for the surface and the bulk degrees of freedom (2) in the holographic Raychaudhuri equation (10). The result would be
4πR_A² Ṙ_A = 2L_P² HR_A (1 − Ṙ_A/(2HR_A)) [ 4πR_A²/L_P² + 32π²R_A⁵ H(ρ + 3p) / (3(2HR_A − Ṙ_A)) ], (29)
which, after rearranging, leads to the Raychaudhuri equation
Ḣ + H² = −(4π/3) L_P² (ρ + 3p). (30)
Equation (30) is one of the two cosmological equations of the standard model that we were after; the other one is the Friedmann equation. By substituting Eq. (23) into the first law of thermodynamics, the conservation law (Eq. (14)) is recovered. Clearly, Eqs. (14) and (30) are sufficient to derive the Friedmann equation, expressed by
1/R_A² = (8π/3) L_P² ρ. (31)
The picture of the cosmological universe drawn by Eqs. (30) and (31), via the Landauer principle and the asymptotic holographic principle, sheds light on the universe in terms of information.
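The step from Eqs. (30) and (14) to Eq. (31) can be traced explicitly: the combination X = (H² − (8π/3)L_P²ρ) a² is conserved once the Raychaudhuri and continuity equations hold, so X = −k fixes the Friedmann equation. A minimal sympy sketch of this verification:

    import sympy as sp

    t = sp.symbols('t')
    L_P = sp.symbols('L_P', positive=True)
    a = sp.Function('a')(t)
    rho = sp.Function('rho')(t)
    p = sp.Function('p')(t)

    H = sp.diff(a, t) / a
    # Dynamical inputs: Raychaudhuri equation (30) and continuity equation (14)
    addot = -sp.Rational(4, 3) * sp.pi * L_P**2 * (rho + 3 * p) * a   # since a''/a = Hdot + H^2
    rhodot = -3 * H * (rho + p)

    X = (H**2 - sp.Rational(8, 3) * sp.pi * L_P**2 * rho) * a**2
    dXdt = sp.diff(X, t)
    dXdt = dXdt.subs(sp.Derivative(rho, t), rhodot)
    dXdt = dXdt.subs(sp.Derivative(a, (t, 2)), addot)
    print(sp.simplify(dXdt))   # -> 0, hence H^2 + k/a^2 = (8*pi/3) L_P^2 rho, i.e. Eq. (31)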
Concluding remarks
The main achievement of the present work is writing the equations of cosmology in terms of the information of the system. The starting point of this study was Padmanabhan's proposal on the emergence of cosmic space. He stated that the difference between the degrees of freedom on the surface and in the bulk of the horizon causes the emergence of the cosmic space. He supported his statement by deriving the Raychaudhuri equation for flat FLRW. The emergent dynamical expression (Eq. (4)) is the equivalent of the Raychaudhuri equation in the standard model of cosmology. The emergent equation shows the dependence of the cosmic space volume on the cosmic time in terms of the number of degrees of freedom, which itself is based on the information extracted from the system. The question to be answered is whether it is possible to write the Friedmann equation, or in other words the continuity equation, in terms of the information of the system. The key to this question lies in the hands of Landauer's principle and the loss/production of information; by implementing these, we have obtained the continuity equation. This enabled us to obtain two consistent equations of emergent cosmology based on the information extracted from the surface and bulk of the apparent horizon, see Eq. (27). We also obtain a generalized form of the Bekenstein-Hawking entropy, valid both for the holographic principle and for the asymptotic holographic principle (Eq. (26)), which enables a more realistic understanding of the horizon's entropy.
APPENDIX
It is worth investigating other proposals for f, extracted from Eq. (4), in terms of the Landauer entropy (20). After checking the Landauer entropy for these other proposals, it will be seen that the proposal chosen in the present study, Eq. (10), is indeed a reasonable one. We would like the last degree of freedom in the dynamical emergent equation (the choice of f in Eq. (4)) to eliminate H from Eq. (20); the presence of H in Eq. (20) puts us in a weak position for giving an informational interpretation of the emergent cosmological equations (21) and (22).
Equation (1) was proposed by Padmanabhan. He assumed the Hubble horizon (R_H = H^(-1)) to be the boundary of the universe. Therefore, the area and volume of the universe are expressed as
A_H = 4πH^(-2),    V_H = (4π/3) H^(-3). (A.1)
The temperature corresponding to the horizon is assumed T H = H/2π. By calculating N sur and N bulk as defined in the Eq. (2) and its preceding paragraph, we have
N_sur = 4π/(L_P² H²),    N_bulk = −(16π²/(3H⁴)) (ρ + 3p). (A.2)
Substituting these parameters into the Landauer entropy (20) gives
Ṡ = (32π³/(3H⁵L_P²)) (ρ + 3p)(ρ + p). (A.3)
Unfortunately H remains in the equation, preventing a robust thermodynamical interpretation of the Landauer-Padmanabhan entropy. Sheykhi's proposal is Eq. (5). He generalized the boundary of the universe to the apparent horizon and provided a general expression allowing for the curvature of the universe. The area and volume of the universe in Sheykhi's proposal are
A_A = 4πR_A²,    V_A = (4π/3) R_A³. (A.4)
The Kodama-Hayward temperature of the apparent horizon is assumed to take the form T = 1/(2πR_A). As for the Landauer-Padmanabhan entropy, we calculate N_sur and N_bulk as defined in Eq. (2) and its preceding paragraph,
N_sur = 4πR_A²/L_P²,    N_bulk = −(16π²/3) R_A⁴ (ρ + 3p), (A.5)
resulting in
Ṡ = (f − 3HV_A)/T_A (ρ + p) = (32π³/3) (R_A⁶ H/L_P²) (ρ + 3p)(ρ + p). (A.6)
It is not surprising that H still exists. We know that by substituting R A = H −1 in Eq. (A. 6) one could reach Eq. (A. 3). The informational interpretation of this entropy is unclear.
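This reduction can be checked directly; a minimal symbolic sketch substituting R_A = H^(-1) into Eq. (A.6):

    import sympy as sp

    H, L_P, R_A, rho, p = sp.symbols('H L_P R_A rho p', positive=True)

    S_A6 = sp.Rational(32, 3) * sp.pi**3 * R_A**6 * H / L_P**2 * (rho + 3*p) * (rho + p)  # Eq. (A.6)
    S_A3 = sp.Rational(32, 3) * sp.pi**3 / (H**5 * L_P**2) * (rho + 3*p) * (rho + p)      # Eq. (A.3)

    print(sp.simplify(S_A6.subs(R_A, 1 / H) - S_A3))   # -> 0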
Another extension of Padmanabhan's approach has been provided by Yang et al. [14]. Their proposal (Eq. (7)) gives the Raychaudhuri equation in an arbitrary dimension. For simplicity we focus on the 3+1 dimensional universe. Their assumptions for the horizon radius and its temperature are similar to Padmanabhan's, see equations (A.1) and (A.2). In 3+1 dimensions, the auxiliary parameters α and K [14] are expressed by
α = 1,    K = 3Ω₃/L_P² = 4π/L_P². (A.7)
Another auxiliary parameter is the new degree of freedom in Yang's proposal, which reduces to Padmanabhan's proposal when this parameter is set to zero. Note that for consistency (see Eq. (4))
f = f_k(t) (H^(-1)/R_A) f_Sheykhi, (A.10)
where f_k is defined by Eq. (9). Except for the definition of the volume in their approach (and their proposal), the other assumptions are the same as in Sheykhi's proposal. The entropy would be
S = 2π H V A V k Ṙ AH −1 RA − RA H −1 V k VA + 1 RAH −1 RA + RA H −1 V k VA − 1 f Sheykhi − 6πHR A V k (ρ + p). (A. 11)
Note that in this appendix the compatibility check between the Landauer entropy proposed in this work (Eq. (20)) and the other dynamical emergent equations (Eq. (4)) has been carried out. However, other potential proposals for the Landauer entropy may provide an expression based only on thermodynamic parameters.
f_Padmanabhan is Padmanabhan's proposal (1). Yang's proposal reduces to Padmanabhan's suggestion when the additional parameter vanishes. By substituting equations (A.1), (A.2) and (A.7) into the corresponding dynamical equation, the resulting entropy is not directly based on only thermodynamic parameters. Eune et al. proposed another approach by calculating the volume differently; in brief, their proposal is described as
The cosmic space volume is an evolving volume with respect to the cosmic time. This statement holds only if the CMB is observed to be homogeneous and isotropic. As long as the CMB is homogeneous and isotropic for the geodesic observer, the corresponding time is the cosmic time. Note that the evolution of the cosmic space with respect to the cosmic time may be treated differently by different authors.
Sakharov, A.D.: Sov. Phys. Dokl. 12, 1040 (1968) [Gen. Relativ. Gravit. 32, 365 (2000)].
Jacobson, T.: Phys. Rev. Lett. 75, 1260 (1995). arXiv:gr-qc/9504004
Unruh, W.G.: Phys. Rev. D 14, 870 (1976).
Verlinde, E.: J. High Energy Phys. 04, 029 (2011). arXiv:1001.0785
Sheykhi, A.: Phys. Rev. D 87, 061501 (2013). arXiv:1304.3054
Padmanabhan, T.: arXiv:1206.4916
Padmanabhan, T.: Res. Astro. Astrophys. 12, 891 (2012). arXiv:1207.0505
Cai, R.G., Kim, S.P.: J. High Energy Phys. 02, 050 (2005). arXiv:hep-th/0501055
Padmanabhan, T.: Mod. Phys. Lett. 25, 1129 (2010). arXiv:0912.3165
Padmanabhan, T.: Class. Quantum Gravity 21, 4485 (2004). arXiv:gr-qc/0308070
Bak, D., Rey, S.J.: Class. Quantum Gravity 17, L83 (2000). arXiv:hep-th/9902173
Hashemi, M., Jalalzadeh, S., Vasheghani Farahani, S.: Gen. Relativ. Gravit. 47, 53 (2015). arXiv:1308.2383
Hu, Y.P.: Phys. Lett. B 701, 269 (2011). arXiv:1007.4044v3
Yang, K., Liu, Y.X., Wang, Y.Q.: Phys. Rev. D 86, 104013 (2012). arXiv:1207.3515
Eune, M., Kim, W.: Phys. Rev. D 88, 067303 (2013). arXiv:1305.6688v2
Farag Ali, A.: Phys. Lett. B 732, 335 (2014). arXiv:1310.1790
Chang-Young, E., Lee, D.: J. High Energy Phys. 04, 125 (2014). arXiv:1309.3084
Ai, W.Y., Chen, H., Hu, X.R., Deng, J.B.: Gen. Relativ. Gravit. 46, 1680 (2014). arXiv:1309.1857
Heydarzade, Y., Hadi, H., Darabi, F., Sheykhi, A.: arXiv:1506.02388
Cai, R.G.: J. High Energy Phys. 11, 016 (2012). arXiv:1207.0622
Yuan, F.F., Huang, Y.C.: arXiv:1304.7949
Ai, W.Y., Chen, H., Hu, X.R., Deng, J.B.: Phys. Rev. D 88, 084019 (2013). arXiv:1307.2480
Yang, K., Liu, Y.X., Wang, Y.Q.: Phys. Rev. D 86, 104013 (2012). arXiv:1207.3515
Tu, F.Q., Chen, Y.X.: J. Cosmo. Astro. Phys. 05, 024 (2013).
Ling, Y., Pan, W.J.: Phys. Rev. D 88, 043518 (2013). arXiv:1304.0220
Sheykhi, A., Dehghani, M.H., Hosseini, S.E.: Phys. Lett. B 726, 23 (2013).
Padmanabhan, T.: Phys. Rev. D 81, 124040 (2010). arXiv:1003.5665
Padmanabhan, T.: Phys. Rev. D 83, 044048 (2011). arXiv:1012.0119
Padmanabhan, T.: Rept. Prog. Phys. 73, 046901 (2010). arXiv:0911.5004 [gr-qc]
Dowker, F.: arXiv:1405.3492 [gr-qc]
Padmanabhan, T.: Int. J. Mod. Phys. D 17, 591 (2008).
Szilard, L.: Zeitschrift fur Physik 53, 840 (1929) [Maxwell's Demon 2, H.S. Leff, A. Rex, IOP, 110 (2003)].
Cpek, V., Sheehan, D.P.: Challenges to the Second Law of Thermodynamics: Theory and Experiment, Springer (2005).
Landauer, R.: IBM J. Res. Dev. 5, 183 (1961) [Maxwell's Demon 2, H.S. Leff, A. Rex, IOP, 148 (2003)].
Sagawa, T.: Thermodynamics of Information Processing in Small Systems, Springer Press (2012).
Duncan, T.L., Semura, J.S.: Found. Phys. 37, 1767 (2007). arXiv:0703235
Duncan, T.L., Semura, J.S.: Entropy 06, 21 (2004). arXiv:0501014
Misner, C.W., Sharp, D.H.: Phys. Rev. 136, B571 (1964).
Padmanabhan, T.: Gen. Relativ. Gravit. 46, 1673 (2014). arXiv:1312.3253
Bousso, R.: hep-th/0205177
Hayward, S.A.: Class. Quantum Gravity 15, 3147 (1998). arXiv:gr-qc/9710089
Hayward, S.A., Di Criscienzo, R., Vanzo, L., Nadalini, M., Zerbini, S.: Class. Quantum Gravity 26, 062001 (2009). arXiv:0806.0014
Helou, A.: arXiv:1502.04235
Spectral Energy Distribution of Hyper-Luminous Infrared Galaxies

A. Ruiz (Instituto de Física de Cantabria (IFCA), CSIC-UC, Avda. de los Castros, 39005 Santander, Spain; Istituto Nazionale di Astrofisica (INAF), Osservatorio Astronomico di Brera, via Brera 21, 20121 Milano, Italy), G. Miniutti (LAEX, LAEFF, Centro de Astrobiología (CSIC-INTA), P.O. Box 78, E-28691 Villanueva de la Cañada, Madrid, Spain), F. Panessa (Istituto Nazionale di Astrofisica (INAF), IASF-Roma, Via Fosso del Cavaliere 100, I-00133 Rome, Italy), F. J. Carrera (Instituto de Física de Cantabria (IFCA), CSIC-UC, Avda. de los Castros, 39005 Santander, Spain)

Astronomy & Astrophysics manuscript no. aruiz_SEDhlirgs, © ESO 2010. March 3, 2010. Received March 31, 2009; accepted March 2, 2010. arXiv:1003.0800; doi:10.1051/0004-6361/200912235.

Key words: galaxies: active - galaxies: starburst - galaxies: evolution - X-rays: galaxies - infrared: galaxies

Abstract. Aims. The relationship between star formation and super-massive black hole growth is central to our understanding of galaxy formation and evolution. Hyper-Luminous Infrared Galaxies (HLIRGs) are unique laboratories to investigate the connection between starburst (SB) and Active Galactic Nuclei (AGN), since they exhibit extreme star formation rates, and most of them show evidence of harbouring powerful AGN. Methods. Our previous X-ray study of a sample of HLIRGs shows that the X-ray emission of most of these sources is dominated by AGN activity. To improve our estimate of the relative contribution of the AGN and SB emission to its total bolometric output, we have built multi-wavelength (from radio to X-rays) spectral energy distributions (SEDs) for these HLIRGs, and we have fitted standard empirical AGN and SB templates to these SEDs. Results. In broad terms, most sources are well fitted using this method, and we found AGN and SB contributions similar to those obtained by previous studies of HLIRGs. We have classified the HLIRGs SEDs in two groups, named class A and class B. Class A HLIRGs show a flat SED from the optical to the infrared energy range. Three out of seven class A sources can be modelled with a pure luminosity-dependent QSO template, while the rest of them require a type 1 AGN template and a SB template. The SB component is dominant in three out of four class A objects. Class B HLIRGs show SEDs with a prominent and broad IR bump. These sources can not trivially be modelled with a combination of pure AGN and pure SB, they require templates of composite objects, suggesting that 50% of their emission comes from stellar formation processes. Conclusions. We propose that our sample is actually composed by three different populations: very luminous QSO (class A objects with negligible SB contribution), young galaxies going through their maximal star formation period (class A objects with significant SB emission) and the high luminosity tail of ULIRG population distribution (class B sources).
Introduction
During the last decade, the hypothesis that Active Galactic Nuclei (AGN) and galaxy formation and evolution are closely related has been supported by a growing body of observational evidence. On one hand, most galaxies have been shown to harbour a central super-massive black hole (Kormendy & Gebhardt 2001) whose mass is correlated with that of the host galaxy spheroid (Magorrian et al. 1998; McLure & Dunlop 2002) and, on the other hand, the evolution of cosmic star formation and of luminous AGN activity appear rather similar (Franceschini et al. 1999; Silverman et al. 2005). These hints clearly suggest a connection between the growth of the central black hole through accretion and the growth of the spheroid through star formation.
The observational study of these two phenomena needs penetrating radiation like X-rays, mid-infrared (MIR), far-infrared (FIR) or sub-mm. On one hand, star formation takes place in heavily obscured environments. Primary radiation is then reprocessed by dust and re-emitted in the MIR-FIR band. X-ray emission from starburst (SB) activity is enhanced by energetic phenomena related to the final stages of stellar evolution, e.g. supernova remnants or X-ray binaries (Persic & Rephaeli 2002). On the other hand, X-ray emission is the signature of AGN activity, produced by black hole (BH) growth through accretion. However, synthesis models of the X-ray background require that most AGNs in the Universe are obscured (Ueda et al. 2003; Gilli et al. 2007), i.e. most of the accretion power in the Universe is absorbed and then re-emitted in the infrared (IR) bands (Fabian & Iwasawa 1999).
IR and X-ray observations are therefore essential to understand the phenomena of star formation and AGN, as well as their interplay and connection. Fortunately, nowadays we have powerful tools to observe the Universe in both energy ranges, like Chandra, XMM-Newton, Spitzer, AKARI or Suzaku. Different strategies can be employed to investigate the IR/X-ray synergy and its effect on the AGN-galaxy co-evolution, e.g. by multi-wavelength surveys like GOODS, AEGIS or COSMOS (Dickinson et al. 2003;Davis et al. 2007;Scoville et al. 2007), by targeted MIR observations of peculiar X-ray sources like Xray absorbed broad line QSO (Stevens et al. 2005;Page et al. 2007), and by targeted X-ray observations of MIR/FIRemitting objects like Ultraluminous Infrared Galaxies (ULIRGS, Franceschini et al. 2003;Teng et al. 2005) or Hyperluminous Infrared Galaxies (HLIRGS, Wilman et al. 1998;Ruiz et al. 2007).
ULIRGs are a family of galaxies with IR luminosity L_IR ≥ 10^12 L_⊙, whose bolometric output is dominated by the emission in the IR waveband (see Lonsdale et al. 2006 for a complete review). X-ray and IR data clearly suggest that these sources are powered by SB and, in some cases (∼ 50%), by AGNs (Farrah et al. 2003; Franceschini et al. 2003; Teng et al. 2005; Nardini et al. 2008). The fraction of ULIRGs hosting an AGN increases with increasing IR luminosity (Veilleux et al. 1995, 1999). Most of these objects are in interacting systems, i.e. ULIRGs are most likely triggered by mergers of galaxies (Farrah et al. 2001; Veilleux et al. 2002).
HLIRGs present an IR luminosity L IR ≥ 10 13 L ⊙ . These are among the most luminous objects in the Universe. Assuming that the FIR emission above 50 µm is dominated by SB, their estimated star formation rates (SFR) are 1000 M ⊙ yr −1 (Rowan-Robinson 2000). IR and optical observations support that most harbour an AGN (Verma et al. 2002;Farrah et al. 2002a), although the main power source is still controversial. As HLIRGs could represent the most vigorous stage of galaxy formation, they are unique laboratories to investigate extremely high stellar formation, and its connection to super-massive black hole growth.
Only about a third of HLIRGs are located in interacting systems (Farrah et al. 2002b), so a considerable number of these objects can not be classified just as objects in the brightest end of the ULIRG population. They could be very young galaxies experiencing their major episode of star formation (Rowan-Robinson 2000), or may be a completely new class of objects, e.g. a transient IR-luminous phase in quasar evolution (Stevens et al. 2005).
X-rays are a very convenient tool to disentangle the relative contribution of SB and AGN to the total output of HLIRGs. Only a few of these objects had been studied in X-rays (Wilman et al. 1998, 2003; Iwasawa et al. 2005) before Ruiz et al. (2007) presented the first systematic study of these sources in the X-ray band.
A sample of 14 HLIRGs was observed by XMM-Newton and 10 were detected (Ruiz et al. 2007). All of them show an AGN-dominated X-ray spectrum. We find X-ray thermal emission associated with SB for just one source, while all ULIRGs show a SB component in their X-ray spectra. The much brighter AGN emission probably hides the X-rays originating in the SB (if this component actually exists). The IR luminosity of most HLIRGs of the sample is consistent with an AGN origin, but it is systematically above that expected for a local QSO (Elvis et al. 1994; Risaliti & Elvis 2004) of the same X-ray luminosity. This IR excess could be due to X-ray obscuration, SB emission or may be due to an intrinsic difference between the spectral energy distribution (SED) of AGN in HLIRGs and the SED of local QSO.
To clarify these questions a proper study of the SED of these objects is needed. Several studies of HLIRG SEDs have been published (Rowan-Robinson 2000; Verma et al. 2002; Farrah et al. 2002a), but they were always limited to the IR energy range. These studies apply a two component model (AGN+SB) to reproduce the IR emission, using radiative transfer models (RTM) for the AGN dust torus (Efstathiou & Rowan-Robinson 1995; Rowan-Robinson 1995) and the SB (Efstathiou et al. 2000) components. Rowan-Robinson (2000) studied a sample of 45 HLIRGs, finding a continuous distribution in the relative contribution of the AGN and SB components, from pure starburst to pure AGN, with most objects being composite. On the other hand, Farrah et al. (2002a) selected a complete sample of HLIRGs in a manner independent of obscuration, inclination or AGN content and included sub-mm data (sub-mm data introduce a tight constraint on the SB luminosities), finding that all HLIRGs in the sample were composite objects.
In this paper we present a study of HLIRG SEDs with two major improvements and one limitation compared with the earlier studies discussed above: (a) we have greatly enlarged the wavelength coverage, from radio to X-rays, and (b) we have significantly increased the photometric data coverage. However, since a self-consistent analytical model able to reproduce the whole SED over such a broad frequency range would be very complex to compute (and beyond the scope of this paper), we have compared our constructed SEDs with empirical AGN and SB templates, instead of using analytical RTM as in previous studies.
The paper is organized as follows. Section 2 briefly describes the HLIRG sample. Section 3 explains how we built the SED and the data used to this end, and Sect. 4 the methods we have employed to model the SEDs. Results are presented in Sect. 5, compared with previous studies of HLIRGs in Sect. 6 and discussed in Sect. 7. Section 8 summarizes our conclusions.
The Wilkinson Microwave Anisotropy Probe (WMAP) concordance cosmology has been adopted throughout this paper: H_0 = 70 km s^(-1) Mpc^(-1), Ω_m = 0.27, Ω_Λ = 0.73 (Spergel et al. 2003).
The HLIRG sample
The sample studied here is the one investigated in Ruiz et al. (2007). From the Rowan-Robinson (2000) sample of HLIRGs we selected those sources with public data available in the XMM-Newton Science Archive as of December 2004, and we added our own XMM-Newton AO-5 observations.
We limited this sample to sources with redshift less than ∼ 2 to avoid strong biasing towards high redshift quasars. Nevertheless, selecting the sample by using the availability of X-ray data probably introduces a selection effect in favour of the presence of an AGN. We also rejected one source from the original sample, IRAS 13279+3401. Using recent optical and MIR spectra, we have determined its redshift to be z ∼ 0.02 (see Appendix A), much lower than the one presented in the literature (z = 0.36, Rowan-Robinson 2000). Therefore, our estimate of its IR luminosity is 3 × 10 10 L ⊙ , even below that necessary to classify it as a LIRG. Hence, we have thirteen objects in our final sample (see Table 1).
According to their optical spectra (derived from the literature), two sources are classified as starburst galaxies and twelve sources present AGN features. Among the latter eight are classified as 'type I', and four of them as 'type II'. All type II and one NL-SB galaxy are Compton-Thick (CT) candidates. See Ruiz et al. (2007) for a further discussion on this sample.
Data compilation
Our goal is to construct a well sampled SED for each object in a broad frequency range, from radio to X-rays. To this end, we have carefully searched in the literature and in several astronomical databases. See Appendix B for a complete description on the origin of the photometry data for each HLIRG.
All data included in the SEDs (presented in Tables B.1-B.13, see Appendix B) have been converted to monochromatic flux density units, corrected for the Galactic reddening and blueshifted to rest-frame.
Radio
Most of the HLIRGs in the sample have at least one observation in the radio range. These data come from different observations by VLA, ATCA, IRAM and other radio-telescopes.
Infrared
Our sources are well observed in the IR band. There are photometric data from IRAS (Point Source Catalogue, Joint IRAS Science Working Group 1988; Faint Source Catalogue, Moshir et al. 1990) or ISO for all the objects. Most of them also have been observed with SCUBA in the sub-mm band ).
In addition, there are public Spitzer MIR data for several sources: IRAC photometric data and IRS spectra. We have reduced the IRAC data and made our own photometric measurements. We have re-binned the IRS spectra of our HLIRGs in broad bands, avoiding known emission and absorption features (a further analysis of these MIR spectra will be presented in Ruiz et al., in preparation). Most of these sources also have NIR data from the 2MASS survey 1 .
Optical and UV
Most of the optical data were obtained from the Sloan Digital Sky Survey-Data Release 5 2 (SDSS-DR5, Adelman-McCarthy et al. 2007) and SuperCOSMOS Sky Survey 3 (SSS). A few data in V and B bands were taken from the XMM-Newton Optical Monitor (OM).
We have only a few data in the UV range, mostly from the OM. Other data come from IUE and FUSE observations.
X-ray
The XMM-Newton spectra previously studied in Ruiz et al. (2007) are available. We have corrected each X-ray spectrum for the line of sight Galactic absorption (Dickey & Lockman 1990) and we have re-binned the data in just a few energy bands 4 . In addition, the X-ray and the OM data come from simultaneous observations, allowing us to check any variability effects.
Overall description of the SEDs
Figure 1 shows the SEDs we have built for our sources. We have divided the sample in two classes according to their optical spectral classification. On one hand we grouped objects classified as type I AGN (named class A sources) and on the other hand objects classified as type II AGN and SB (named class B sources).
From a purely phenomenological point of view, class A and B sources seem to show a different SED shape. Class A objects have a SED approximately flat from the FIR to the optical range (the typical shape of quasars' SED), while class B objects show a prominent broad IR bump dominating the emission over the rest of the spectrum.
To check if the above distinction holds quantitatively, we compared the distribution of X-ray-to-IR and optical-to-IR flux ratios for class A and class B sources. We estimated the monochromatic fluxes at three different rest-frame wavelengths, in the IR (30 µm), optical (4400 Å) and X-rays (2 keV) through a linear interpolation of the SED (these points lie in well-sampled regions of the SEDs, so these are reasonable estimates of the continua at those energies). Fig 2 shows the distribution of the X-ray-to-IR (F X /F IR ) and optical-to-IR (F opt /F IR ) flux ratios for the class A (blue histogram) and class B (pink histogram) sources. The distributions seem to be different for both classes of HLIRGs. By using a Kolmogorov-Smirnov test, the probability that class A and class B samples come from different parent populations is 92.6% for the F X /F IR distribution and ∼ 99.7% for the F opt /F IR distribution.
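A sketch of this comparison in Python (an illustration only, not the code actually used in this work; the helper names and example numbers are hypothetical):

    import numpy as np
    from scipy.stats import ks_2samp

    def monochromatic_flux(wavelengths, fluxes, wl_target):
        # Linear interpolation of a tabulated SED at a rest-frame wavelength
        # (e.g. 30 um, 4400 A, or the wavelength equivalent of 2 keV)
        order = np.argsort(wavelengths)
        return np.interp(wl_target, np.asarray(wavelengths)[order],
                         np.asarray(fluxes)[order])

    def different_population_probability(ratios_class_a, ratios_class_b):
        # Probability that the two flux-ratio samples come from different
        # parent populations, quoted in the text as 1 - p_KS
        stat, p_value = ks_2samp(ratios_class_a, ratios_class_b)
        return 1.0 - p_value

    # Hypothetical usage with made-up flux ratios:
    # print(different_population_probability([0.10, 0.30, 0.20], [0.01, 0.05, 0.02]))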
This rough analysis of the SED properties is clearly limited, but the results seem to support our classification of HLIRGs in two classes. We suggest that, since the SED classification is directly related to the optical spectra classification, the distinct SED shape of HLIRGs could be explained by different levels of obscuration in the line of sight and/or the relative contribution of the SB emission to the total output.
SED fitting
Once all the SEDs were built, our aim was to check for the presence of AGN and/or SB emission in these sources and estimate the contribution of these components to the total output. We fitted all SEDs by using the χ 2 minimization technique with a simple model based on the use of convenient templates (see Sect. 4.1 for details). The fitting procedure and the SED templates were implemented using the modelling and fitting tool Sherpa (Freeman et al. 2001), included in the software package CIAO 3.4 5 .
Our model comprises two additive components, one associated to the AGN emission and the other associated to the SB emission. We can express this model as follows:
F_ν = F_BOL [ α u_ν^AGN + (1 − α) u_ν^SB ], (1)
where F_BOL is the total bolometric flux, α is the relative contribution of the AGN to F_BOL, F_ν is the total flux at the frequency ν, while u_ν^AGN and u_ν^SB are the normalized AGN and SB templates (i.e., the value of the integral over the whole range of frequencies is unity for each SED template). This model contains only two free parameters, F_BOL (the normalization) and α. The bolometric luminosity can be estimated as L_BOL = 4πD_L² F_BOL, where D_L is the luminosity distance.
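For illustration, the same two-parameter fit can be expressed with a generic least-squares routine; the sketch below assumes the templates have already been resampled (as numpy arrays) at the observed frequencies, and is not the actual Sherpa implementation used here:

    import numpy as np
    from scipy.optimize import curve_fit

    def fit_two_components(nu, f_obs, f_err, u_agn, u_sb):
        # Fit Eq. (1): F_nu = F_BOL * [alpha * u_agn + (1 - alpha) * u_sb],
        # with u_agn and u_sb normalized templates sampled at the frequencies nu.
        def model(_nu, f_bol, alpha):
            return f_bol * (alpha * u_agn + (1.0 - alpha) * u_sb)

        p0 = [float(np.max(f_obs) / max(u_agn.max(), u_sb.max())), 0.5]
        popt, pcov = curve_fit(model, nu, f_obs, p0=p0, sigma=f_err,
                               absolute_sigma=True,
                               bounds=([0.0, 0.0], [np.inf, 1.0]))
        return popt, pcov   # best-fit (F_BOL, alpha) and their covariance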
The model we are adopting to fit the SED is somehow rough and does not provide a precise description of the SED features, so we expect a poor fit in terms of χ 2 value. However, the entire SED shape, from the radio to soft gamma rays, depends on a large number of physical parameters which produce different SED shapes even among the same class of sources (AGN, SB, etc.). Moreover, the impact of the different individual physical quantities on the overall SED and, perhaps most importantly, the effect of their interplay and interaction on the overall SED shape is far from being robustly settled from a theoretical point of view. The development of an analytical or semi-analytical model would be of great importance, but given that such models are difficult to build and likely not unique, they clearly are beyond the scope of this work. We propose instead the simpler template-fitting approach to discriminate, as a zeroth-order approximation, the relative component contribution (AGN and/or SB) to the overall bolometric luminosity of each source.
We have chosen the fit with the lowest reduced χ² as our "best fit" model. As we said above, the value of χ²/d.o.f. >> 1 even for these best fits. Nevertheless, this quantity varies significantly for most sources between the different combinations of templates we tested during the χ² minimization process. In those objects where different types of templates obtained similar χ² values, we have chosen the template most consistent with previous results in the literature.
Our templates were chosen to minimize the contribution of the host galaxy's non-SB stellar emission (see Sect. 4.1), but there could still be a remnant of this emission in the templates. Therefore, by adding two different templates we could have summed twice this stellar emission. We checked this effect adding a stellar template to the model 6 . The normalization of this component was free and negative, in order to subtract the "second" stellar contribution. The addition of the new component did not change the final results of the SED fitting, so we can reject any important stellar contamination in our templates.
Templates
The templates we have employed to model the SEDs of our sources are empirical SEDs of well observed SB and Seyfert (Sy) galaxies in the local universe (see Table 2).
To reproduce the AGN contribution we used six AGN templates:
1. Two mean SEDs of radio quiet local QSO (Fig. 3(a)): a luminosity-independent SED (Elvis et al. 1994; Richards et al. 2006) and a luminosity-dependent one (Hopkins et al. 2007). The latter template is similar to the standard SED of QSO from Elvis et al. (1994), but the value of α_OX depends on the bolometric luminosity (Steffen et al. 2006), and the X-ray emission above 0.5 keV is modelled by a power law (Γ = 1.8) with a cut-off at 500 keV and a reflection component generated with the PEXRAV model (Magdziarz & Zdziarski 1995). Therefore, this template has two parameters: normalization (the bolometric flux of the AGN) and redshift. For a given flux and redshift, the bolometric luminosity is calculated and, hence, the value of α_OX. The first parameter was left free to vary during the fitting, while the second was fixed according to the redshifts obtained in the literature.
2. Four Sy2 galaxies (Fig. 3(b)): these objects have hydrogen column densities (N_H) varying from 10^22 cm^(-2) (Compton thin objects) to greater than 10^25 cm^(-2) (Compton thick objects). They were selected from a sample of Sy2 galaxies with minimal starburst contribution (Bianchi et al. 2006). The AGN templates show two bumps, in the FIR and in the NIR-optical, except for the AGN3 template, which only presents a broad IR bump. The differences between them are the relative height of these bumps, the position of their peaks and the ratio between the optical and X-ray fluxes.
To represent the SB contribution we have chosen a set of four starburst galaxies well observed in the full spectral range ( Fig. 4(a)). We have tried to cover a broad range of burst ages, dust contents and SFR. These physical properties are reflected in the SEDs showing different levels of obscuration, width and wavelength peaks.
1. NGC 5253 is a low-metallicity star-forming dwarf galaxy. Its nucleus is the site of a highly obscured and extremely young (< 10 Myr) burst of star formation (Beck et al. 1996), with a SFR ∼ 8 M_⊙ yr^(-1).
2. NGC 7714 is a young unobscured SB (Brandl et al. 2004) with SFR ∼ 6 M_⊙ yr^(-1) and a burst age between 3-5 Myr (Gonzalez-Delgado et al. 1995).
6 The SED of the elliptical galaxy M 87 was employed to model the stellar emission.
3. M82 is an evolved pure SB galaxy with SFR ∼ 10 M_⊙ yr^(-1) (Strickland et al. 2004).
4. IRAS 12112+0305 is a bright ULIRG powered by SB and with severe limits to any AGN contribution (Imanishi et al. 2007; Nardini et al. 2008). The estimated SFR for this object is ∼ 600 M_⊙ yr^(-1).
All of them show two bumps, peaking in the FIR and in the NIR-optical. The main difference between the templates is the relative height between these bumps and their widths.
We included four SED templates built from sources which harbour both an AGN and a SB (Fig. 4(b)):
1. NGC 1068 is a Sy2 galaxy with a composite nature, i.e. it harbours a heavily buried AGN (N_H > 10^25 cm^(-2), Matt et al. 1997) and also an intense SB (Telesco et al. 1984). The bolometric luminosity of this object is roughly evenly divided between the two components. The SB emission dominates longward of 30 µm and the AGN dominates shortward of 20-10 µm.
2. Mrk 231 is an ULIRG (L_IR = 3.2 × 10^12) optically classified as a Broad Absorption Line QSO (Berta 2005) with a massive young nuclear SB which is responsible for 25%-40% of the nuclear bolometric luminosity (Davies et al. 2004).
3. IRAS 19254-7245, the "Superantennae", is a double-nucleated ULIRG optically classified as a Sy2 galaxy, with intense star formation. The AGN contribution to the total output is ∼ 40 − 50% (Berta et al. 2003).
4. IRAS 22491-1808 is a Sy2 ULIRG (Berta 2005) where the AGN emission is ∼ 70% of the bolometric luminosity (Farrah et al. 2003).
We fitted these composite templates to those HLIRGs where the initial AGN+SB model was insufficient to reproduce the data (see Sect. 5.2).
We extracted the photometric data for the templates using VOSED 7 and VOSpec 8 software. These utilities use Virtual Observatory (Quinn et al. 2004) tools to extract photometric and spectral data from several astronomical archives and catalogues. The templates were improved with data from NED database in wavelength ranges where VOSED and VOSpec provided no data. These objects are well observed at all the frequency ranges, particularly in the NIR and optical bands. We rejected some redundant data and we tried to extract only the nuclear emission to avoid as much contamination from the host galaxy as possible. To this end we have chosen only those data with a roughly constant aperture within the nucleus of the galaxy.
Results
Figures 5 and 6 show the SED 9 and the best fit model selected for each object, and Table 1 summarizes the results of our analysis. See Sect. 5.4 for comments on some particular sources.
Class A HLIRGs
We have shown that our simple two-component SED model is a fair approximation for most of these HLIRGs (see Fig. 1(a)). We found that all class A HLIRGs but one (IRAS 14026+4341, see Sect. 5.4 below) are well fitted with type I AGN templates, consistent with their optical classification, and an additional SB component is required in three objects. The AGN component dominates the bolometric output in four out of these six sources, while two objects present a powerful SB component, with 60%-70% contribution of the bolometric luminosity.
We have then 3 sources with a SB-dominated SED (IRAS F12509+3122, IRAS 14026+4341 and IRAS F14218+3845), one AGN-dominated source with an important SB contribution (IRAS 18216+6418), and 3 objects which seem to be extremely luminous quasars with no particular differences from the local ones, judging from their SEDs and X-ray spectra (Ruiz et al. 2007).
A noticeable result for the class A HLIRGs is that the AGN1 template over-predicts the X-ray flux of these sources, as found in our previous X-ray analysis. These discrepancies in the X-ray band can not be related to variability effects, since the OM data, simultaneous to the X-ray observations, match well with other optical and UV data obtained in different epochs. When we modelled these objects with the luminosity-dependent AGN1-L SED template, we found a significant improvement in the fit in terms of χ² for most sources (4 out of 6) and the X-ray emission is better predicted. This result is consistent with the known α_OX luminosity relationship (Strateva et al. 2005; Steffen et al. 2006; Kelly et al. 2008).
We must also note that the IR-to-bolometric ratio of these sources is within ∼ 40 − 70%, so an important fraction of their bolometric output is not emitted in the IR range. Hence, strictly speaking, they should not be considered as HLIRGs, particularly those with a completely AGN-dominated SED, where less than 50% of their bolometric luminosity is in the IR. This "contamination" can be expected given the selection criteria of the Rowan-Robinson (2000) parent sample, which simply selected those known sources with L IR 10 13 L ⊙ .
Class B HLIRGs
We found that these sources are fitted with a dominant SB component and, in most cases, a minor AGN contribution (< 10%). However, our model presents some problems for class B HLIRGs that we did not find in class A objects (see Fig. 6):
1. The level of obscuration in the observed X-ray spectra is higher than the one expected from the AGN templates.
2. Most sources show an excess in the MIR-NIR band not modelled by these templates, i.e. the width of the IR bump seems to be broader than the bumps in the starburst templates.
3. The peak of the template does not match the IR peak of the SEDs in several sources.
In order to improve the fit quality for the class B sources, we repeated the SED fitting using a set of templates from composite sources (see Sect. 4.1), where both AGN and SB emission are significant. By using these composite templates, we found that the statistical quality of our fits was significantly improved for all but one case (IRAS F15307+3252, see Sect. 5.4 below). For most objects, the χ 2 obtained with any of the composite templates is significantly lower than the χ 2 obtained with any combination of pure AGN and pure SB templates. CP1 is the best fit template for 4 out of 6 sources, consistent with their spectral classification (type 2 AGN) and X-ray obscuration level (Compton-Thick). Two sources are best fitted with the CP2 template.
IRAS F00235+1024 is the only source that still shows a significant IR excess, which suggest that the SB contribution may be larger in this source that in the CP1 template (∼ 50%).
Fitting without X-ray data
In order to check how much X-ray data influence the SED fitting results, we exclude X-ray data from the SED fitting procedure. Class A sources are still well represented by the same models (see Table 1, columns labeled as "no X-rays"), while class B galaxies are preferentially fitted with an AGN3 template (Compton thin model) and a SB component. Moreover, the AGN contribution grows significantly in most sources, particularly in the class B sources. When X-rays are included, a severe limit is imposed and the AGN contribution decreases dramatically. This shows that X-rays are important to obtain an accurate model with our technique and, hence, a better estimation of the contribution of each component to the total output.
Notes on particular sources
IRAS 14026+4341
This source is optically classified as a type I AGN (Rowan-Robinson 2000), in agreement with the SDSS classification, and recent MIR Spitzer data also suggest the presence of an AGN in this object (Ruiz et al., in preparation), but our best fit model is obtained by using the SB2 template. The X-ray data impose a severe constraint, rejecting the AGN templates that predict a higher emission in the X-ray band. If we fit again this SED using no X-ray data (see Sect. 5.3) we find that the best fit is obtained by AGN1+SB2.
The X-ray emission of this source seems to be affected by absorption (see Fig. 5(d)): it is not detected in the soft X-ray band (0.5-2 keV) and its 2XMMi hardness ratio (HR3 ∼ −0.2) 10 is consistent with an X-ray absorbed AGN (Della Ceca et al. 2004). This points to IRAS 14026+4341 being an X-ray absorbed QSO. These objects are often embedded in ultraluminous starburst galaxies (Page et al. 2007), and they have been pointed out as a transitional phase in an evolutionary sequence relating the growth of massive black holes to the formation of galaxies (Stevens et al. 2005; Page et al. 2007).
Under these circumstances, we have selected as best fit the model resulting from fitting the SED without X-ray data. We must note, however, that both models (pure SB or AGN+SB) poorly fit the data between ∼ 1 − 100 µm. The observed IR excess, which may be related to the X-ray emission absorbed and reprocessed in the IR, can not be reproduced by the AGN1 template (an unabsorbed template).
IRAS F15307+3252
This object has been optically classified as a QSO 2 (Rowan-Robinson 2000) and there is strong evidence in X-rays favouring the presence of a heavily obscured AGN (Iwasawa et al. 2005). However, we have found that its SED best fit, in terms of χ², is obtained with a SB template with a minor AGN1 contribution. The CP1 template is also a fair fit, but with a slightly worse χ².
Previous analyses of the IR emission of this HLIRG (Deane & Trentham 2001;Verma et al. 2002) suggest that the SB contribution is considerably lower than what we found using a pure SB template. Hence, we have selected the CP1 as "best fit", which is also consistent with its optical classification, to estimate the AGN and SB contribution to the bolometric luminosity.
Comparison with previous results
X-ray emission
We can estimate the expected X-ray luminosity of the AGN and SB components for each source in our sample using the parameters obtained in our SED analysis, and compare with the X-ray luminosities calculated through XMM-Newton observations.
We have seen that the AGN SED of these sources is better modelled with a luminosity-dependent template. Hence, we have employed the relation obtained by Sani et al. (private communication) 11 to estimate the intrinsic 2-10 keV luminosity for a given AGN bolometric luminosity:
L_{2-10 keV} / L_BOL = 0.043 (L_BOL / 10^45)^(−0.357). (2)
Figure 7 shows those sources detected in X-rays and with an AGN component in their SED model. We plotted the bolometric luminosity of the AGN component versus the intrinsic (absorption corrected) 2-10 keV luminosity (see Table 3), as calculated in Ruiz et al. (2007).
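Equation (2) translates into a simple estimator; a minimal sketch, assuming luminosities in erg s^(-1) as the 10^45 normalization suggests:

    def expected_xray_luminosity(l_bol):
        # Intrinsic 2-10 keV luminosity predicted by Eq. (2)
        # for a given AGN bolometric luminosity (erg/s)
        return 0.043 * (l_bol / 1e45) ** (-0.357) * l_bol

    # Example: L_BOL = 1e46 erg/s gives L_2-10 ~ 1.9e44 erg/s
    print(expected_xray_luminosity(1e46))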
Most sources are scattered roughly following the Eq. 2 estimate. This scatter is probably related to the intrinsic dispersion in X-ray luminosities of AGN, i.e. for a given bolometric luminosity, there is a broad range of possible X-ray luminosities (Steffen et al. 2006).
There are, however, three sources (PG 1206+459, IRAS F12509+3122 and IRAS 14026+4341) with X-ray luminosities much lower than the estimated by Eq. 2. The X-ray luminosity of IRAS 14026+4341 was calculated using the 2XMMi X-ray fluxes so it is not corrected by absorption. Hence, this large discrepancy between the prediction and the observed luminosity is likely another sign of X-ray absorption (see Sect. 5.4).
For the other two sources Ruiz et al. (2007) did not find any sign of X-ray absorption. This effect could be, in principle, due to an overestimate of the AGN contribution to the bolometric luminosity. If we assume that the difference between the bolometric luminosity calculated using the SED fitting and that estimated using Eq. 2 is completely originated by star formation, we find that the SB contribution to the total output should be larger than 90% in these two sources. Such a powerful SB must be clearly reflected in the SED shape, but we did not find this kind of deviation in the SED analysis of these sources. The X-ray weakness of these HLIRGs can not therefore be related to an underestimate of the SB contribution to the bolometric output, or to X-ray absorption. They seem to be intrinsically weak X-ray sources (Leighly et al. 2001, 2007).
IR SED: comparison with previous work
The IR (1-1000 µm) SED of our sources has been previously studied: Rowan-Robinson (2000), Farrah et al. (2002a) and Verma et al. (2002) modelled it using RTM. We estimated the IR luminosities of our models, integrating between 1-1000 µm, and compared their results with ours (see Table 3).
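A minimal sketch of such an integration (illustrative only, with a hypothetical helper name, assuming the model SED is tabulated as L_ν against wavelength in µm):

    import numpy as np
    from scipy.integrate import trapezoid

    C_UM = 2.99792458e14   # speed of light in micron / s

    def ir_luminosity(wl_um, l_nu, wl_min=1.0, wl_max=1000.0, n_grid=2000):
        # Integrate L_nu [erg/s/Hz] over frequency between wl_min and wl_max [micron]
        grid = np.logspace(np.log10(wl_min), np.log10(wl_max), n_grid)
        order = np.argsort(wl_um)
        l_nu_grid = np.interp(grid, np.asarray(wl_um)[order], np.asarray(l_nu)[order])
        nu = C_UM / grid                              # Hz, decreasing with wavelength
        return trapezoid(l_nu_grid[::-1], nu[::-1])   # flip so frequency increases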
The IR luminosities estimated through our SED fitting and those estimated using RTM match fairly well (see Fig. 8(a)) for most sources. For three objects, our luminosity estimation is almost an order of magnitude greater than the RTM estimation, probably because our best-fit models overestimate the FIR-submm emission (see Figs. 5(b), 5(f) and 6(f)). This spectral emission is probably better recovered by using RTM. Nevertheless, in spite of this large disagreement in luminosities, our AGN contribution estimates are consistent with those obtained through RTM, as Fig. 8(b) shows.
The latter plot shows that our AGN contribution estimates for most sources are roughly consistent with those obtained through RTM. We can conclude that our simple model based on templates is a fair method to obtain a first estimate of the AGN and SB relative contribution to the IR output.
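As an illustration of how the two quantities compared in Fig. 8 can be obtained from a fitted model, the minimal Python sketch below integrates a model SED between 1 and 1000 µm and forms the AGN-to-total ratio; the template arrays, normalizations and functional forms are placeholders, not the templates used in this work.

    import numpy as np

    def l_ir(wavelength_um, l_lambda, lo=1.0, hi=1000.0):
        """Trapezoidal integral of a model SED L_lambda (erg/s/um) over lo-hi microns."""
        w = np.asarray(wavelength_um, dtype=float)
        l = np.asarray(l_lambda, dtype=float)
        m = (w >= lo) & (w <= hi)
        w, l = w[m], l[m]
        return 0.5 * np.sum((l[1:] + l[:-1]) * (w[1:] - w[:-1]))

    # Placeholder SED components on a common wavelength grid (not the real templates):
    wave = np.logspace(0, 3, 300)                                  # 1-1000 um
    agn_sed = 1e44 * (wave / 10.0) ** -1.5                         # hypothetical AGN component
    sb_sed = 5e43 * np.exp(-((np.log10(wave) - 2.0) ** 2) / 0.2)   # hypothetical SB bump

    l_agn = l_ir(wave, agn_sed)
    l_tot = l_ir(wave, agn_sed + sb_sed)
    print(f"R_AGN = L_IR(AGN)/L_IR(total) = {l_agn / l_tot:.2f}")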
Discussion
The broad band SEDs of the HLIRGs presented in this work can be roughly well fitted using templates, and their best fits are consistent with the optical classification of most sources (9 out of 13). Among class A sources we found three objects fitted with pure type 1 AGN templates. They seem to be very luminous quasars and, since most of their bolometric output is not emitted in the IR band, should not be considered as proper HLIRGs. Four out of seven class A HLIRGs require, in addition to a type 1 AGN template, a SB component which is, in three cases, dominant with respect to the AGN. The AGN emission in four sources is consistent with a luminosity-dependent SED.
On the other hand, we have found that class B sources cannot be fitted with a simple combination of pure AGN and pure SB templates: a composite template is needed, in which the AGN and SB phenomena are both significant. This suggests that there should be some feedback between the accretion process and star formation that changes the shape of the SED in a way that cannot be reproduced just by combining a pure SB and a pure AGN component. The main observational imprint of this feedback seems to be an excess in the SED around ∼ 10 µm with respect to the emission predicted by the combined pure AGN and pure SB model.
Our division between class A and class B sources is based on the optical spectral classification, and since all objects show significant AGN emission, it seems that the SED shape differences between the two groups could be an inclination effect, as in the unified model of AGN (Antonucci & Miller 1985): those HLIRGs where we have a direct view of the nucleus are luminous QSO and show a class A SED, while those HLIRGs seen through the molecular torus and/or other obscuring material show a class B SED. The comparable mean SB contribution of class A (excluding pure AGN sources) and class B sources is consistent with this hypothesis. Within this scenario, all types of HLIRGs belong to the same class of sources, seen at different inclination angles. Farrah et al. (2002a) proposed, however, that the HLIRG population comprises (1) mergers between gas-rich galaxies, as found in the ULIRG population, and (2) young active galaxies going through their maximal star formation periods whilst harbouring an AGN.
The N_H distribution we found in the X-ray study seems to favour the two-population hypothesis. In a pure inclination scenario we would expect a broad range of X-ray absorption, from unabsorbed to heavily absorbed sources. However, we found only objects with no significant intrinsic absorption (all but one of the class A sources) or Compton-thick (CT) absorbed objects (all class B sources). Since AGN observed in ULIRGs usually show heavy absorption in X-rays (Ptak et al. 2003; Teng et al. 2005), in principle class B sources could represent the high-luminosity tail of the ULIRG population, while the strong SB found in class A HLIRGs could represent young active galaxies experiencing their maximal star formation, without being in interacting systems (i.e. with little connection with a recent major merger).
The study of the host galaxy morphology and environment of HLIRGs also supports the two-population hypothesis. Farrah et al. (2002b) found, in a sample of nine HLIRGs observed by HST, both strongly interacting systems and objects with no clear signs of ongoing interactions. Five sources of this sample are also included in ours: IRAS F00235+1024 and IRAS F15307+3252 (class B objects) show signs of strong interactions, while IRAS F12509+3122, IRAS F14218+3845 and IRAS 16347+7037 (class A objects) are isolated systems. This result favours our suggestion that class B HLIRGs could be objects in the extreme bright end of the ULIRG population distribution.
Hence, while class B HLIRGs share common properties with ULIRGs (high levels of X-ray obscuration, strong star formation, signs of mergers and interactions), class A HLIRGs seem to be a different class of objects. Excluding the 3 pure AGN sources, class A objects could be among the young active galaxies proposed by Farrah et al. (2002a). The powerful SB we found in these sources, and the large amounts of gas available to fuel the star formation (as calculated by Farrah et al. 2002a), along with the non detection of mergers or interactions in these systems, support this idea. Moreover, the SB emission of the bona fide class A HLIRGs is modelled with young SB templates (SB1 and SB2) in all but one object (IRAS 18216+6418), which is modelled with an old SB (SB3). This source could be a more evolved object.
Therefore, sources in our sample likely belong to three different populations:
1. Very luminous QSOs with minor star formation activity.
2. Young, isolated active galaxies undergoing their first episode of major star formation, with little connection with a recent major merger.
3. Galaxies which have recently experienced a merger/disturbance that brought large amounts of gas and dust into the inner regions. This event triggers both the star formation and the AGN activity in a heavily obscured environment. These objects fit well as the high-luminosity tail of the ULIRG population.
Nevertheless, our sample of HLIRGs is not complete in any sense and we cannot derive further conclusions about the global properties of the HLIRG population. Further studies established on larger and complete samples of HLIRGs are needed to conclude if the division between class A and class B objects is just due to an inclination effect, or is based on intrinsic differences of their physical properties.
Conclusions
In this paper we have built and analysed the multi-wavelength SED (from radio to X-rays) of a sample of 13 HLIRGs, previously studied in detail in X-rays (Ruiz et al. 2007). We assembled the SEDs using public data in several astronomical databases and in the literature, and we modelled them using templates. Most sources are roughly well fitted with this simple model and we find AGN relative contributions consistent with those inferred by previous analyses of the IR SEDs of HLIRGs using radiative transfer models.
We divided the HLIRGs into two groups, according to their optical spectral classification: class A (type 1 AGN) and class B (type 2 AGN and SB) sources. A first look at their SED shape indicates some differences between the two classes: class A sources show a roughly flat SED between the IR and the optical, while class B sources have a prominent IR bump dominating the rest of the emission. A significant fraction (3 out of 7) of class A HLIRGs seem to be very luminous quasars with no particular deviations from local quasars. Strictly speaking these objects should not be considered HLIRGs, since most of their bolometric output is emitted outside the IR band. The SEDs of these QSO are consistent with a luminosity-dependent quasar template. The remaining class A sources show significant additional SB components, which are dominant in all but one object. Given their strong SB activity and the lack of any sign of mergers in these systems, these HLIRGs could be very young galaxies experiencing their first episode of maximal star formation.
Class B HLIRGs show an IR excess that cannot be modelled with any combination of our selected pure AGN and pure SB templates. This feature can be properly fitted using composite templates (SEDs from objects where AGN and SB emission are both important). This suggests that a significant fraction of the emission of this class of objects originates in a SB. It also shows that the feedback between accretion and star formation processes modifies the SED of class B HLIRGs in a way that cannot be replicated by simply adding independent pure AGN and pure SB templates. Class B HLIRGs share many properties with ULIRGs (high X-ray absorption, strong star formation, signs of mergers and interactions), so they could be just the high-luminosity tail of this population. Therefore, we have found some evidence supporting the idea that bona fide HLIRGs are composed of two populations: young active galaxies with no signs of recent mergers, most likely going through their first episode of strong star formation, and the high-luminosity end of the ULIRG population, where both the SB and the AGN are likely triggered by a recent merger/interaction. Further observational studies based on larger and, most importantly, complete samples of HLIRGs are needed to obtain stronger evidence for this hypothesis. Moreover, our simple template-fitting approach should be complemented with RTMs (or other theoretical models of AGN and SB emission), since the two approaches are complementary in many ways and their combination may shed further light onto the relative SB-AGN contribution and on the feedback processes that take place in the most interesting HLIRGs, namely those that are well represented by composite templates within our approach.

Appendix A: IRAS 13279+3401

The object IRAS 13279+3401 has been previously classified as a QSO, and the IR luminosity estimated through the redshift presented in the literature (z = 0.36, Rowan-Robinson 2000; Farrah et al. 2002a) exceeds the HLIRG limit. However, we now have strong evidence showing that this source is a much closer galaxy. Figure A.1(a) shows its optical spectrum obtained by the 2.5m Isaac Newton Telescope, where we do not find any type I feature. A QSO at z = 0.36 should present a broad Hβ emission line at ∼ 6600 Å. We estimate z = 0.023 for this spectrum, from stellar absorption features.
We also have the MIR spectrum of this source, observed by Spitzer (see Fig. A.1(b)). We estimated the redshift of the source using a SB template from Nardini et al. (2008): redshifting the template to match the most important spectral features we find z ∼ 0.02, which is consistent with our estimate from the optical spectrum. The IR luminosity derived from this redshift is ∼ 3 × 10^10 L_⊙, well below the HLIRG limit and even below the LIRG limit.

Table B.11. Photometric data for IRAS F15307+3252.
ν (Hz) | F ν (Jy) | Error (Jy) | Ref.
1.40 × 10^9 | 5.71 × 10^−3 | 1.09 × 10^−4 | VLA (NED)
8.42 × 10^9 | 9.20 × 10^−4 | 4.00 × 10^−5 | VLA (NED)
1.02 × 10^11 | 4.50 × 10^−2 | ... | OVMA (NED)
2.39 × 10^11 | 5.10 × 10^−3 | ... | OVMA (NED)
Fig. 1. Rest-frame spectral energy distributions of the sample. Fluxes are shifted for clarity.
Fig. 2. Distribution of (a) X-ray-to-IR (F_X/F_IR) and (b) optical-to-IR (F_opt/F_IR) flux ratios for class A (dark grey, blue in the colour version) and class B (light grey, pink in the colour version) HLIRGs.
Fig. 3. AGN templates. (a) The top line (blue in the colour version) is the standard SED for a radio-quiet quasar (AGN1, Richards et al. 2006). The group below is the luminosity-dependent SED for quasars (AGN1-L, Hopkins et al. 2007), plotted for several bolometric luminosities (the top line, red in the colour version, is for 10^10 L_⊙ and the bottom black line is for 10^16 L_⊙). (b) Listed downwards: NGC 5506 (AGN3, N_H = 3 × 10^22 cm^−2), NGC 4507 (AGN4, N_H = 4 × 10^23 cm^−2), Mrk 3 (AGN5, N_H = 1.4 × 10^24 cm^−2), NGC 3393 (AGN6, N_H > 1 × 10^25 cm^−2). The SED fluxes are shifted for clarity. See Sect. 4.1 for details.
Fig. 4. (a) Pure starburst templates. Listed downwards: NGC 5253, NGC 7714, M82, IRAS 12112+0305. (b) Composite templates (AGN + SB). Listed downwards: NGC 1068, Mrk 231, IRAS 19254-7245, IRAS 22491-1808. The SED fluxes are shifted for clarity. See Sect. 4.1 for details.
Fig. 5. Rest-frame spectral energy distributions of class A HLIRGs and their best fit models. Dotted lines (red in the colour version) are the AGN components and dashed lines (green in the colour version) are the SB components. Black solid lines are the sum of the AGN and SB components. (Continued.)
Fig. 6. Rest-frame spectral energy distributions of class B HLIRGs and their best fit models. Symbols as in Fig. 5. The long-dashed lines (blue in the colour version) are the best fits obtained using composite templates (see Sects. 4.1 and 5.2).
Fig. 7. Bolometric versus observed, absorption-corrected 2-10 keV AGN luminosities. Squares (blue in the colour version) are class A HLIRGs, triangles (red in the colour version) are class B HLIRGs. The dotted line reflects the ratio between these luminosities obtained by Sani et al.
Fig. 8. (a) Total IR luminosity estimated using our templates (L_IR) and using radiative transfer models (L_IR^RTM). (b) AGN to total IR luminosity ratios estimated through our model (R_AGN) and using radiative transfer models (R_AGN^RTM). Symbols as in Fig. 7. The dotted lines mean equal values.
Fig. A.1. (a) Optical and (b) MIR spectra of IRAS 13279+3401 in the observer frame. The slashed line (green in the colour version) in the right-hand side plot is the SB template from Nardini et al. (2008).
Table 1. Best fit models for the HLIRGs' SEDs.

Source | z | Type | CT^a | Best fit model^b: all data^e (Model, α) | no X-rays^f (Model, α) | composite temp.^g (Model) | log L_BOL^c (erg s^−1) | AGN / SB^d

Class A HLIRGs
PG 1206+459 | 1.158 | QSO | N | AGN1-L, α = 1 | AGN1, α = 1 | AGN1-L | 48.4 | 1 / 0
PG 1247+267 | 2.038 | QSO | N | AGN1-L, α = 1 | AGN1, α = 1 | AGN1-L | 49.2 | 1 / 0
IRAS F12509+3122 | 0.780 | QSO | N | AGN1-L + SB1, α = 0.3 | AGN1 + SB4, α = 0.5 | AGN1-L+SB1 | 47.7 | 0.3 / 0.7
IRAS 14026+4341 | 0.323 | QSO | N | SB2 (no AGN), α = 0 | AGN1 + SB2, α = 0.3 | SB2 | 46.7 | 0.3 / 0.7
IRAS F14218+3845 | 1.21 | QSO | N | AGN1 + SB1, α = 0.4 | AGN1 + SB1, α = 0.3 | AGN1+SB1 | 47.2 | 0.4 / 0.6
IRAS 16347+7037 | 1.334 | QSO | N | AGN1-L, α = 1 | AGN1, α = 1 | AGN1-L | 48.9 | 1 / 0
IRAS 18216+6418 | 0.297 | QSO | N | AGN1 + SB3, α = 0.8 | AGN1 + SB3, α = 0.8 | AGN1+SB3 | 47.4 | 0.8 / 0.2

Class B HLIRGs
IRAS F00235+1024 | 0.575 | NL-SB | Y | SB3 (no AGN), α = 0 | SB3 (no AGN), α = 0 | CP1 | 46.7 | ∼0.5 / ∼0.5
IRAS 07380-2342 | 0.292 | SB | N | AGN4 + SB1, α = 0.06 | AGN3 + SB1, α = 0.3 | CP1 | 47.0 | ∼0.5 / ∼0.5
IRAS 00182-7112 | 0.327 | QSO 2 | Y | AGN3 + SB4, α = 0.06 | AGN3 + SB3, α = 0.3 | CP1 | 46.6 | ∼0.5 / ∼0.5
IRAS 09104+4109 | 0.442 | QSO 2 | Y | AGN4 + SB4, α = 0.09 | AGN3 + SB1, α = 0.8 | CP2 | 47.3 | ∼0.7 / ∼0.3
IRAS 12514+1027 | 0.32 | Sy2 | Y | AGN5 + SB4, α = 0.06 | AGN3 + SB2, α = 0.9 | CP2 | 46.7 | ∼0.7 / ∼0.3
IRAS F15307+3252 | 0.926 | QSO 2 | Y | AGN1 + SB3, α = 0.03 | AGN3 + SB1, α = 0.8 | CP1 | 47.9 | ∼0.5 / ∼0.5

Notes. (a) Compton-thick candidates. (b) The best fit adopted to estimate the bolometric luminosity and the AGN and SB fractions is marked in bold fonts. (c) Bolometric luminosity in CGS units. (d) Fraction of the bolometric luminosity originating in the AGN and in the SB, calculated through the parameter α of the best fit model. (e) Best fit using our original set of templates. (f) Best fit not using X-ray data. (g) Best fit including the templates of composite sources.
Table 2. SED templates used as models.

Label | Source | Description
AGN1 | ... | local quasars' mean SED (1)
AGN1-L | ... | luminosity-dependent QSO SED (2)
AGN3 | NGC 5506 | Sy2, N_H = 3 × 10^22 cm^−2
AGN4 | NGC 4507 | Sy2, N_H = 4 × 10^23 cm^−2
AGN5 | Mrk 3 | Sy2, N_H = 1.4 × 10^24 cm^−2
AGN6 | NGC 3393 | Sy2, N_H > 1 × 10^25 cm^−2
SB1 | NGC 5253 | Young and dusty SB
SB2 | NGC 7714 | Young and unobscured SB
SB3 | M82 | Old SB
SB4 | IRAS 12112+0305 | ULIRG
CP1 | NGC 1068 | Composite template; AGN ∼ 50%
CP2 | Mrk 231 | Composite template; AGN ∼ 70%
CP3 | IRAS 19254-7245 | Composite template; AGN ∼ 45%
CP4 | IRAS 22491-1808 | Composite template; AGN ∼ 70%

References. (1) Richards et al. 2006; (2) Hopkins et al. 2007.
Table 3. IR and X-ray luminosities.

Source | log L_IR^tot a | log L_IR^AGN a | log L_IR^SB a | log L_IR,RTM^tot b | log L_IR,RTM^AGN b | log L_IR,RTM^SB b | log L_X^AGN c | N_H d
(all luminosities in erg s^−1; N_H in cm^−2)

Class A HLIRGs
PG 1206+459 | 48.0 | 48.0 | 0 | 47.8 | 47.8 | < 46.7 | 45.11 (+0.02/−0.04) | ...
PG 1247+267 | 48.8 | 48.8 | 0 | 47.9 | 47.9 | < 46.8 | 45.93 (+0.02/−0.03) | ...
IRAS F12509+3122 | 47.6 | 46.8 | 47.5 | 47.0 | 46.8 | 46.6 | 42.26 (+0.05/−0.05) | ...
IRAS 14026+4341 e | 46.5 | 45.8 | 46.5 | 46.5 | 46.3 | 46.1 | 42.7 (+0.2/−0.5) | ...
IRAS F14218+3845 | 47.1 | 46.5 | 46.9 | 46.9 | 46.1 | 46.8 | 44.60 (+0.03/−0.03) | ...
IRAS 16347+7037 | 48.5 | 48.5 | 0 | 47.7 | 47.7 | < 46.8 | 46.00 (+0.07/−0.09) | ...
IRAS 18216+6418 | 47.1 | 46.9 | 46.6 | 46.8 | 46.6 | 46.4 | 45.6 (+0.04/−0.05) | ...

Class B HLIRGs
IRAS F00235+1024 | 46.7 | 46.4 | 46.4 | 46.7 | 46.4 | 46.4 | < 42.2 | > 10^25
IRAS 07380-2342 | 47.0 | 46.7 | 46.7 | 47.0 | 46.8 | 46.5 | < 41.7 | ...
IRAS 00182-7112 | 46.6 | 46.3 | 46.3 | 46.7 | < 46.5 | 46.7 | 44.82 (+0.16/−0.14) | > 10^25
IRAS 09104+4109 | 47.3 | 47.1 | 46.8 | 46.8 | 46.8 | < 46.2 | 45.30 (+0.36/−0.09) | > 10^25
IRAS 12514+1027 | 46.7 | 46.5 | 46.2 | 46.5 | 46.2 | 46.2 | 43.3 (+1.4/−0.7) | (4 +20/−3) × 10^23
IRAS F15307+3252 | 47.9 | 47.6 | 47.6 | 46.9 | 46.6 | 46.7 | 45.49 (+0.09/−0.11) | > 10^25

Notes. (a) IR luminosities (1-1000 µm) estimated using our SED fitting. (b) IR luminosities (1-1000 µm) estimated by the analysis of the IR SED using RTM (Rowan-Robinson 2000; Farrah et al. 2002a). (c) Absorption-corrected 2-10 keV luminosities from Ruiz et al. (2007). (d) Intrinsic absorption estimated using X-ray spectra (Ruiz et al. 2007). (e) The X-ray luminosity of this source has been calculated from 2XMMi fluxes (Watson et al. 2009), and it is not corrected for absorption.
Appendix B: Tables of data

In this appendix a table is presented for each object with the fluxes employed to build the SEDs and the origin of each data point. The re-binned spectra from XMM-Newton and Spitzer are not included in these tables. Fluxes shown with no errors are upper limits.
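For reference, the sketch below shows one way to convert a tabulated flux density F_ν (Jy) at frequency ν (Hz) into νL_ν (erg s^−1); the luminosity distance in the example is a placeholder, not a value adopted in this paper.

    import numpy as np

    JY_TO_CGS = 1.0e-23   # 1 Jy = 1e-23 erg s^-1 cm^-2 Hz^-1
    MPC_TO_CM = 3.0857e24

    def nu_l_nu(nu_hz, f_nu_jy, d_l_mpc):
        """nu*L_nu in erg/s for a flux density f_nu (Jy) at frequency nu (Hz),
        given a luminosity distance d_l (Mpc)."""
        d_cm = d_l_mpc * MPC_TO_CM
        return 4.0 * np.pi * d_cm**2 * f_nu_jy * JY_TO_CGS * nu_hz

    # Example with placeholder numbers (not a row from these tables):
    print(f"{nu_l_nu(3.0e12, 1.0, 1000.0):.2e} erg/s")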
Table B.1. Photometric data for IRAS 00182-7112.
ν (Hz) | F ν (Jy) | Error (Jy) | Ref.
8.43 × 10^8 | 4.23 × 10^−1 | 1.28 × 10^−2 | SUMSS (NED)
1.40 × 10^9 | 3.17 × 10^−1 | 3.00 × 10^−3 | ATCA (Drake et al. 2004)
2.50 × 10^9 | 1.97 × 10^−1 | 3.00 × 10^−3 | ATCA (Drake et al. 2004)
4.80 × 10^9 | 9.80 × 10^−2 | 3.00 × 10^−3 | ATCA (Drake et al. 2004)
8.60 × 10^9 | 5.70 × 10^−2 | 3.00 × 10^−3 | ATCA (Drake et al. 2004)
3.00 × 10^12 | 1.19 × 10^0 | 1.19 × 10^−1 | IRAS (NED)
5.00 × 10^12 | 1.20 × 10^0 | 8.37 × 10^−2 | IRAS (NED)
1.20 × 10^13 | 1.33 × 10^−1 | 1.02 × 10^−2 | IRAS (NED)
2.50 × 10^13 | 6.02 × 10^−2 | ... | IRAS (NED)
5.23 × 10^13 | 2.23 × 10^−2 | 4.78 × 10^−3 | Spitzer - IRAC
8.44 × 10^13 | 2.53 × 10^−3 | 7.47 × 10^−4 | Spitzer - IRAC
1.39 × 10^14 | 7.22 × 10^−4 | 7.83 × 10^−5 | 2MASS
1.80 × 10^14 | 4.37 × 10^−4 | 7.34 × 10^−5 | 2MASS
2.43 × 10^14 | 2.68 × 10^−4 | 4.34 × 10^−5 | 2MASS
3.33 × 10^14 | 2.02 × 10^−4 | 5.57 × 10^−5 | SSS
4.28 × 10^14 | 2.28 × 10^−4 | 6.30 × 10^−5 | SSS
6.81 × 10^14 | 2.97 × 10^−5 | 5.40 × 10^−6 | XMM-Newton - OM
1.48 × 10^15 | 1.90 × 10^−5 | 1.30 × 10^−5 | XMM-Newton - OM
Table B.2. Photometric data for IRAS F00235+1024.
ν (Hz) | F ν (Jy) | Error (Jy) | Ref.

Table B.3. Photometric data for IRAS 07380-2342.
ν (Hz) | F ν (Jy) | Error (Jy) | Ref.

Table B.4. Photometric data for IRAS 09104+4109.
ν (Hz) | F ν (Jy) | Error (Jy) | Ref.

Table B.5. Photometric data for PG 1206+459.
ν (Hz) | F ν (Jy) | Error (Jy) | Ref.
4.90 × 10^9 | 1.20 × 10^−4 | ... | VLA (NED)

Table B.6. Photometric data for PG 1247+267.
ν (Hz) | F ν (Jy) | Error (Jy) | Ref.

Table B.7. Photometric data for IRAS F12509+3122.
ν (Hz) | F ν (Jy) | Error (Jy) | Ref.

Table B.8. Photometric data for IRAS 12514+1027.
ν (Hz) | F ν (Jy) | Error (Jy) | Ref.

Table B.9. Photometric data for IRAS 14026+4341.
ν (Hz) | F ν (Jy) | Error (Jy) | Ref.
1.40 × 10^9 | 1.59 × 10^−3 | 1.39 × 10^−4 | VLA (NED)

Table B.10. Photometric data for IRAS F14218+3845.
ν (Hz) | F ν (Jy) | Error (Jy) | Ref.
http://www.ipac.caltech.edu/2mass/
2 http://www.sdss.org/dr5
3 http://www-wfau.roe.ac.uk/sss/
4 Through our X-ray data reduction we did not detect the source IRAS 14026+4341. Even so, this source has a counterpart in the 2XMMi catalogue (Watson et al. 2009). We have considered the five energy band fluxes as given in the 2XMMi catalogue.
http://cxc.harvard.edu/ciao3.4/
http://sdc.laeff.inta.es/vosed
8 http://esavo.esa.int/vospec
9 Several photometric points are upper limits. The most conservative approach was chosen for the fit: we set the point to zero and the upper error bar to the upper limit value.
10 HR3 = [CR(2.0−4.5 keV) − CR(1.0−2.0 keV)] / [CR(2.0−4.5 keV) + CR(1.0−2.0 keV)], where CR is the count rate in the given energy band.
11 This ratio is obtained from the Steffen et al. (2006) relation between X-ray and 2500 Å luminosities and then linking the 2500 Å luminosity with the bolometric one through the Elvis et al. (1994) SED.
Acknowledgements. We are grateful to the referee M. Rowan-Robinson for the constructive comments and suggestions that improved this paper. A.R. acknowledges support from a Universidad de Cantabria fellowship. Financial support for A.R. and F.J.C. was provided by the Spanish Ministry of Education and Science, under projects ESP2003-00812 and ESP2006-13608-C02-01. FP acknowledges financial support under the project ASI INAF I/08/07/0. GM thanks the Ministerio de Ciencia e Innovación and CSIC for support through a Ramón y Cajal contract. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This paper is based also on data from the VOSED tool at LAEFF. The 2.5m Isaac Newton Telescope and its service programme are operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias.
. J K Adelman-Mccarthy, M A Agüeros, S S Allam, ApJS. 172634Adelman-McCarthy, J. K., Agüeros, M. A., Allam, S. S., et al. 2007, ApJS, 172, 634
. R R J Antonucci, J S Miller, ApJ. 297621Antonucci, R. R. J. & Miller, J. S. 1985, ApJ, 297, 621
. S C Beck, J L Turner, P T P Ho, J H Lacy, D M Kelly, ApJ. 457610Beck, S. C., Turner, J. L., Ho, P. T. P., Lacy, J. H., & Kelly, D. M. 1996, ApJ, 457, 610
AA(Dipartimento di Astronomia, Univ. di Padova, Vicolo dell'Osservatorio 2, I-35122. S Berta, Padova, ItalyPhD thesisBerta, S. 2005, PhD thesis, AA(Dipartimento di Astronomia, Univ. di Padova, Vicolo dell'Osservatorio 2, I-35122, Padova, Italy)
. S Berta, J Fritz, A Franceschini, A Bressan, C Pernechele, A&A. 403119Berta, S., Fritz, J., Franceschini, A., Bressan, A., & Pernechele, C. 2003, A&A, 403, 119
. S Bianchi, M Guainazzi, M Chiaberge, A&A. 448499Bianchi, S., Guainazzi, M., & Chiaberge, M. 2006, A&A, 448, 499
. B R Brandl, D Devost, S J U Higdon, ApJS. 154188Brandl, B. R., Devost, D., Higdon, S. J. U., et al. 2004, ApJS, 154, 188
Cutri, R. M., Skrutskie, M. F., van Dyk, S., et al. 2003, 2MASS All Sky Catalog of Point Sources (The IRSA 2MASS All-Sky Point Source Catalog, NASA/IPAC Infrared Science Archive), http://irsa.ipac.caltech.edu/applications/Gator/
. R I Davies, L J Tacconi, R Genzel, ApJ. 613781Davies, R. I., Tacconi, L. J., & Genzel, R. 2004, ApJ, 613, 781
. M Davis, P Guhathakurta, N P Konidaris, ApJ. 6601Davis, M., Guhathakurta, P., Konidaris, N. P., et al. 2007, ApJ, 660, L1
. J R Deane, N Trentham, MNRAS. 3261467Deane, J. R. & Trentham, N. 2001, MNRAS, 326, 1467
. R Della Ceca, T Maccacaro, A Caccianiga, A&A. 428383Della Ceca, R., Maccacaro, T., Caccianiga, A., et al. 2004, A&A, 428, 383
. J M Dickey, F J Lockman, ARA&A. 28215Dickey, J. M. & Lockman, F. J. 1990, ARA&A, 28, 215
& The Goods Team. M Dickinson, M Giavalisco, The Mass of Galaxies at Low and High Redshift. R. Bender & A. Renzini324Dickinson, M., Giavalisco, M., & The Goods Team. 2003, in The Mass of Galaxies at Low and High Redshift, ed. R. Bender & A. Renzini, 324
. C L Drake, G V Bicknell, P J Mcgregor, M A Dopita, AJ. 128969Drake, C. L., Bicknell, G. V., McGregor, P. J., & Dopita, M. A. 2004, AJ, 128, 969
. A Efstathiou, M Rowan-Robinson, MNRAS. 273649Efstathiou, A. & Rowan-Robinson, M. 1995, MNRAS, 273, 649
. A Efstathiou, M Rowan-Robinson, R Siebenmorgen, MNRAS. 313734Efstathiou, A., Rowan-Robinson, M., & Siebenmorgen, R. 2000, MNRAS, 313, 734
. M Elvis, B J Wilkes, J C Mcdowell, ApJS. 951Elvis, M., Wilkes, B. J., McDowell, J. C., et al. 1994, ApJS, 95, 1
. A C Fabian, K Iwasawa, MNRAS. 30334Fabian, A. C. & Iwasawa, K. 1999, MNRAS, 303, L34
. D Farrah, J Afonso, A Efstathiou, MNRAS. 343585Farrah, D., Afonso, J., Efstathiou, A., et al. 2003, MNRAS, 343, 585
. D Farrah, M Rowan-Robinson, S Oliver, MNRAS. 3261333Farrah, D., Rowan-Robinson, M., Oliver, S., et al. 2001, MNRAS, 326, 1333
. D Farrah, S Serjeant, A Efstathiou, M Rowan-Robinson, A Verma, MNRAS. 3351163Farrah, D., Serjeant, S., Efstathiou, A., Rowan-Robinson, M., & Verma, A. 2002a, MNRAS, 335, 1163
. D Farrah, A Verma, S Oliver, M Rowan-Robinson, R Mcmahon, MNRAS. 329605Farrah, D., Verma, A., Oliver, S., Rowan-Robinson, M., & McMahon, R. 2002b, MNRAS, 329, 605
. A Franceschini, V Braito, M Persic, MNRAS. 3431181Franceschini, A., Braito, V., Persic, M., et al. 2003, MNRAS, 343, 1181
. A Franceschini, G Hasinger, T Miyaji, D Malquori, MNRAS. 3105Franceschini, A., Hasinger, G., Miyaji, T., & Malquori, D. 1999, MNRAS, 310, L5
P Freeman, S Doe, A Siemiginowska, Presented at the Society of Photo-Optical Instrumentation Engineers (SPIE) Conference. J.-L. Starck & F. D. Murtagh4477Society of Photo-Optical Instrumentation Engineers (SPIE) Conference SeriesFreeman, P., Doe, S., & Siemiginowska, A. 2001, in Presented at the Society of Photo-Optical Instrumentation Engineers (SPIE) Conference, Vol. 4477, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, ed. J.-L. Starck & F. D. Murtagh, 76-87
. R Gilli, A Comastri, G Hasinger, A&A. 46379Gilli, R., Comastri, A., & Hasinger, G. 2007, A&A, 463, 79
. R M Gonzalez-Delgado, E Perez, A I Diaz, ApJ. 439604Gonzalez-Delgado, R. M., Perez, E., Diaz, A. I., et al. 1995, ApJ, 439, 604
. P F Hopkins, G T Richards, L Hernquist, ApJ. 654731Hopkins, P. F., Richards, G. T., & Hernquist, L. 2007, ApJ, 654, 731
. M Imanishi, C C Dudley, R Maiolino, ApJS. 17172Imanishi, M., Dudley, C. C., Maiolino, R., et al. 2007, ApJS, 171, 72
. K Iwasawa, C S Crawford, A C Fabian, R J Wilman, MNRAS. 36220Iwasawa, K., Crawford, C. S., Fabian, A. C., & Wilman, R. J. 2005, MNRAS, 362, L20
IRAS Point Source Catalog. Joint IRAS Science Working Group. 1988, in IRAS Point Source Catalog (1988)
. B C Kelly, J Bechtold, J R Trump, M Vestergaard, A Siemiginowska, ApJS. 176355Kelly, B. C., Bechtold, J., Trump, J. R., Vestergaard, M., & Siemiginowska, A. 2008, ApJS, 176, 355
. S G Kleinmann, D Hamilton, W C Keel, ApJ. 328161Kleinmann, S. G., Hamilton, D., Keel, W. C., et al. 1988, ApJ, 328, 161
J Kormendy, K Gebhardt, 20th Texas Symposium on relativistic astrophysics. J. C. Wheeler & H. Martel586363American Institute of Physics Conference SeriesKormendy, J. & Gebhardt, K. 2001, in American Institute of Physics Conference Series, Vol. 586, 20th Texas Symposium on relativistic astrophysics, ed. J. C. Wheeler & H. Martel, 363
. K M Leighly, J P Halpern, D J Helfand, R H Becker, C D Impey, AJ. 1212889Leighly, K. M., Halpern, J. P., Helfand, D. J., Becker, R. H., & Impey, C. D. 2001, AJ, 121, 2889
. K M Leighly, J P Halpern, E B Jenkins, ApJ. 663103Leighly, K. M., Halpern, J. P., Jenkins, E. B., et al. 2007, ApJ, 663, 103
C J Lonsdale, D Farrah, H E Smith, Ultraluminous Infrared Galaxies. J. W. MasonSpringer Verlag285Lonsdale, C. J., Farrah, D., & Smith, H. E. 2006, Ultraluminous Infrared Galaxies, ed. J. W. Mason (Springer Verlag), 285
. P Magdziarz, A A Zdziarski, MNRAS. 273837Magdziarz, P. & Zdziarski, A. A. 1995, MNRAS, 273, 837
. J Magorrian, S Tremaine, D Richstone, AJ. 1152285Magorrian, J., Tremaine, S., Richstone, D., et al. 1998, AJ, 115, 2285
. G Matt, M Guainazzi, F Frontera, A&A. 32513Matt, G., Guainazzi, M., Frontera, F., et al. 1997, A&A, 325, L13
. R J Mclure, J S Dunlop, MNRAS. 331795McLure, R. J. & Dunlop, J. S. 2002, MNRAS, 331, 795
M Moshir, G Kopan, T Conrow, IRAS Faint Source Catalogue. 0Moshir, M., Kopan, G., Conrow, T., et al. 1990, in IRAS Faint Source Catalogue, version 2.0 (1990)
. E Nardini, G Risaliti, M Salvati, MNRAS. 385130Nardini, E., Risaliti, G., Salvati, M., et al. 2008, MNRAS, 385, L130
. G Neugebauer, R F Green, K Matthews, ApJS. 63615Neugebauer, G., Green, R. F., Matthews, K., et al. 1987, ApJS, 63, 615
Page, M. J., Carrera, F. J., Ebrero, J., Stevens, J. A., & Ivison, R. J. 2007, in Studying Galaxy Evolution with Spitzer and Herschel, ed. V. Charmandaris, D. Rigopoulou, & N. Kylafis
Persic, M. & Rephaeli, Y. 2002, A&A, 382, 843
. A Ptak, T Heckman, N A Levenson, K Weaver, D Strickland, ApJ. 592782Ptak, A., Heckman, T., Levenson, N. A., Weaver, K., & Strickland, D. 2003, ApJ, 592, 782
P J Quinn, D G Barnes, I Csabai, Presented at the Society of Photo-Optical Instrumentation Engineers (SPIE) Conference. P. J. Quinn & A. Bridger5493Society of Photo-Optical Instrumentation Engineers (SPIE) Conference SeriesQuinn, P. J., Barnes, D. G., Csabai, I., et al. 2004, in Presented at the Society of Photo-Optical Instrumentation Engineers (SPIE) Conference, Vol. 5493, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, ed. P. J. Quinn & A. Bridger, 137-145
. G T Richards, M Lacy, L J Storrie-Lombardi, ApJS. 166470Richards, G. T., Lacy, M., Storrie-Lombardi, L. J., et al. 2006, ApJS, 166, 470
G Risaliti, M Elvis, Supermassive Black Holes in the Distant Universe. 308187A Panchromatic View of AGN (ASSLRisaliti, G. & Elvis, M. 2004, A Panchromatic View of AGN (ASSL Vol. 308: Supermassive Black Holes in the Distant Universe), 187
. M Rowan-Robinson, MNRAS. 272737Rowan-Robinson, M. 1995, MNRAS, 272, 737
. M Rowan-Robinson, MNRAS. 316885Rowan-Robinson, M. 2000, MNRAS, 316, 885
. A Ruiz, F J Carrera, F Panessa, A&A. 471775Ruiz, A., Carrera, F. J., & Panessa, F. 2007, A&A, 471, 775
. N Scoville, H Aussel, M Brusa, ApJS. 1721Scoville, N., Aussel, H., Brusa, M., et al. 2007, ApJS, 172, 1
. J D Silverman, P J Green, W A Barkhouse, ApJ. 624630Silverman, J. D., Green, P. J., Barkhouse, W. A., et al. 2005, ApJ, 624, 630
. D N Spergel, L Verde, H V Peiris, ApJS. 148175Spergel, D. N., Verde, L., Peiris, H. V., et al. 2003, ApJS, 148, 175
. A T Steffen, I Strateva, W N Brandt, AJ. 1312826Steffen, A. T., Strateva, I., Brandt, W. N., et al. 2006, AJ, 131, 2826
. J A Stevens, M J Page, R J Ivison, MNRAS. 360610Stevens, J. A., Page, M. J., Ivison, R. J., et al. 2005, MNRAS, 360, 610
. I V Strateva, W N Brandt, D P Schneider, D G Vanden Berk, C Vignali, AJ. 130387Strateva, I. V., Brandt, W. N., Schneider, D. P., Vanden Berk, D. G., & Vignali, C. 2005, AJ, 130, 387
. D K Strickland, T M Heckman, E J M Colbert, C G Hoopes, K A Weaver, ApJS. 151193Strickland, D. K., Heckman, T. M., Colbert, E. J. M., Hoopes, C. G., & Weaver, K. A. 2004, ApJS, 151, 193
. C M Telesco, E E Becklin, C G Wynn-Williams, D A Harper, ApJ. 282427Telesco, C. M., Becklin, E. E., Wynn-Williams, C. G., & Harper, D. A. 1984, ApJ, 282, 427
. S H Teng, A S Wilson, S Veilleux, ApJ. 633664Teng, S. H., Wilson, A. S., Veilleux, S., et al. 2005, ApJ, 633, 664
. Y Ueda, M Akiyama, K Ohta, T Miyaji, ApJ. 598886Ueda, Y., Akiyama, M., Ohta, K., & Miyaji, T. 2003, ApJ, 598, 886
. S Veilleux, D.-C Kim, D B Sanders, ApJ. 522113Veilleux, S., Kim, D.-C., & Sanders, D. B. 1999, ApJ, 522, 113
. S Veilleux, D.-C Kim, D B Sanders, ApJS. 143315Veilleux, S., Kim, D.-C., & Sanders, D. B. 2002, ApJS, 143, 315
. S Veilleux, D.-C Kim, D B Sanders, J M Mazzarella, B T Soifer, ApJS. 98171Veilleux, S., Kim, D.-C., Sanders, D. B., Mazzarella, J. M., & Soifer, B. T. 1995, ApJS, 98, 171
. A Verma, M Rowan-Robinson, R Mcmahon, A E Efstathiou, MNRAS. 335574Verma, A., Rowan-Robinson, M., McMahon, R., & Andreas Efstathiou, A. E. 2002, MNRAS, 335, 574
. M G Watson, A C Schröder, D Fyfe, A&A. 493339Watson, M. G., Schröder, A. C., Fyfe, D., et al. 2009, A&A, 493, 339
. R J Wilman, A C Fabian, C S Crawford, R M Cutri, MNRAS. 33819Wilman, R. J., Fabian, A. C., Crawford, C. S., & Cutri, R. M. 2003, MNRAS, 338, L19
. R J Wilman, A C Fabian, R M Cutri, C S Crawford, W N Brandt, MNRAS. 3007Wilman, R. J., Fabian, A. C., Cutri, R. M., Crawford, C. S., & Brandt, W. N. 1998, MNRAS, 300, L7
Table B.12. Photometric data for IRAS 16347+7037.
ν (Hz) | F ν (Jy) | Error (Jy) | Ref.
8.10 × 10^13 | 1.07 × 10^−2 | 9.48 × 10^−4 | Hale-5m (Neugebauer et al. 1987)

Table B.13. Photometric data for IRAS 18216+6418.
ν (Hz) | F ν (Jy) | Error (Jy) | Ref.
Title: SEMILINEAR WAVE EQUATIONS ON ACCELERATED EXPANDING FLRW SPACETIMES

Authors: João L. Costa, Anne T. Franzen, Jesús Oliver

Affiliations:
- Departamento de Matemática, Center for Mathematical Analysis, Geometry and Dynamical Systems, Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, 1649-026 Lisboa, Portugal
- Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisboa, Portugal
- California State University East Bay, 25800 Carlos Bee Boulevard, 94542 Hayward, California, USA (J. Oliver)

Abstract: We identify a large class of systems of semilinear wave equations, on fixed accelerated expanding FLRW spacetimes, with nearly flat spatial slices, for which we prove small data future global well-posedness. The family of systems we consider is large in the sense that, among other examples, it includes general wave maps, as well as natural generalizations of some of Fritz John's "blow up" equations (whose future blow up disappears, in our setting, as a consequence of the spacetime expansion). We also establish decay upper bounds, which are sharp within the family of systems under analysis.

DOI: 10.1007/s00023-023-01319-9 — arXiv:2201.05210 [gr-qc] — https://arxiv.org/pdf/2201.05210v1.pdf
SEMILINEAR WAVE EQUATIONS ON ACCELERATED EXPANDING FLRW SPACETIMES
João L Costa
Anne T Franzen
Jesús Oliver
California State University East Bay
25800 Carlos Bee Boulevard, 94542 Hayward, California, USA
Departamento de Matemática
Center for Mathematical Analysis, Geometry and Dynamical Systems
Instituto Universitário de Lisboa (ISCTE-IUL)
Av. das Forças Armadas, 1649-026 Lisboa, Portugal
Instituto Superior Técnico
Universidade de Lisboa
Av. Rovisco Pais, 1049-001 Lisboa, Portugal
SEMILINEAR WAVE EQUATIONS ON ACCELERATED EXPANDING FLRW SPACETIMES
We identify a large class of systems of semilinear wave equations, on fixed accelerated expanding FLRW spacetimes, with nearly flat spatial slices, for which we prove small data future global well-posedness. The family of systems we consider is large in the sense that, among other examples, it includes general wave maps, as well as natural generalizations of some of Fritz John's "blow up" equations (whose future blow up disappears, in our setting, as a consequence of the spacetime expansion). We also establish decay upper bounds, which are sharp within the family of systems under analysis.

Footnotes:
4. The reader might find the lack of reference to the target manifold's topology strange, but please note that this is simply a manifestation of the fact that we are only considering small data problems. See Remark 2.2 below for more information.
5. By which we mean the structure obtained by replacing, in the original null structure, the flat metric η by the FLRW metric g.
6. This is in stark contrast with what happens, for instance, in Minkowski spacetime, and was used in groundbreaking work by Ringström [22] to establish future non-linear stability of de Sitter spacetime (a particularly relevant example of our FLRW family) as a solution of appropriate Einstein-non-linear scalar field systems.
Introduction
It is well known that an accelerated expansion provides a mechanism that helps explain the high homogeneity and isotropy of the observed Universe [23]. At the level of wave equations, on fixed accelerated expanding cosmologies, such process of attenuation of perturbations provides a favorable environment to establish future global existence results closely related to "fast" decay estimates of some relevant quantities.
In this paper we realize these expectations by identifying a "large" class of Cauchy problems for systems of semilinear wave equations
(1.1)   \Box_g \phi^A = a^{-2+\delta_{0\alpha}+\delta_{0\beta}}\, N^{A,\alpha\beta}_{BC}(\phi)\, \partial_\alpha\phi^B\, \partial_\beta\phi^C , \qquad \phi^A(t_0,x) = \phi^A_0(x) , \quad \partial_t\phi^A(t_0,x) = \phi^A_1(x) ,
on fixed accelerated expanding FLRW spacetimes with metric of the form
(1.2)   g := -dt^2 + a^2(t)\, \sigma_{ij}(x)\, dx^i dx^j ,
for which we prove small data future global well-posedness. We also establish decay upper bounds, which are sharp within the family of systems under consideration. We use the adjective "large" since (1.1) includes: i) general wave maps, which for small data and under the assumption of uniformly bounded geometry of the target manifold (see Remark 2.2) satisfy
(1.3)   \Box_g \phi^A = -g^{\alpha\beta}\, \Gamma^A_{BC}(\phi)\, \partial_\alpha\phi^B \partial_\beta\phi^C ,
where \Gamma^A_{BC} are the Christoffel symbols of the target manifold's Riemannian metric; ii) but also includes other examples of equations that do not exhibit any particular form of null structure. A noteworthy example of the latter corresponds to Fritz John's equation [10]
(1.4)   \Box_g \phi = (\partial_t\phi)^2 .
Recall that in 1+3 dimensional Minkowski spacetime, i.e., if g = η, where η is the flat metric, the semilinear term in John's equation is responsible for finite time blow-up of solutions arising from arbitrarily small, but non-trivial, smooth and compactly supported initial data. As we will see, as a consequence of our results, this is no longer the case if g corresponds to the metric of an accelerated expanding FLRW cosmology with nearly flat spatial slices. Returning to wave maps, it is of interest to note that, on par with Einstein's equations, they arguably correspond to the class of geometric wave equations that have triggered the biggest developments on the geometric analysis of evolution equations. Most of the work on the field [26, Chapter 6] has been carried out in the context of Minkowski and perturbations thereof (as base manifolds 4), and a typical motivation for the study of such maps comes from cosmology [21,11,4].
The original motivation for our work was to identify classes of nonlinear wave equations, with relevant content in cosmological modeling, exhibiting future small data global existence. Wave maps were therefore a natural starting point. However, it rapidly became clear that our techniques applied to a wider class of wave equations and, therefore, focusing only on wave maps would be an artificial restriction that would obscure the mechanisms for decay and global existence in accelerated expanding cosmologies. The end result of our research was the identification of a nonlinear structure (2.12) that takes advantage of the knowledge gained concerning the decay rate of derivatives of solutions to the linear homogeneous wave equations in FLRW [5] to create a favorable setup in the semilinear setting. Recall that time derivatives of the linear solutions decay with a rate dictated by the expansion factor a(t), while spatial derivatives are at best bounded and, in general, do not decay at all (see the next section for more information). The structure (2.12) is then designed to make sure that any badly decaying derivative is multiplied by a "good" derivative and/or by inverse powers of the expanding factor a(t). This is akin to the role played by the celebrated null structure in Minkowski, discovered by Klainerman [15]. Nonetheless, although similar in spirit, the direct generalization of the null structure to the FLRW setting 5 and the nonlinear structure (2.12) identified in this paper are quite different both in form and content; some relevant similarities and distinctions have already been presented in the examples discussed above.
1.1. Some basic lessons from previous works. In this paper we will be concerned with accelerated expanding FLRW cosmologies (see Section 2 for more details) with spacetime topology M = {(t, x) | t ∈ R + , x ∈ R n } and we will assume that expansion occurs in the direction of increasing t, to which we will refer as the future direction. Two causal/geometric consequences of the accelerated expansion that are particularly relevant to our work are the following:
• Global in time information from local in space data: given a fixed x_0 ∈ R^n, let γ(t) = (t, x_0) ∈ R_+ × R^n be an observer that "reaches infinity" and let D be its domain of dependence. Then D ∩ {t ≥ T} is compact, for all T > 0; see Figure 1.
• Cosmic silence: given two such curves γ_i(t) = (t, x_i) ∈ R_+ × R^n, i = 1, 2, if we denote by D_i the corresponding domains of dependence, then, for all sufficiently big T > 0, D_1 ∩ D_2 ∩ {t ≥ T} = ∅; see Figure 2.

As a consequence of the first property, we see that for any hyperbolic equation that, in particular, satisfies the domain of dependence property, we can obtain global in time information about its solutions from localized initial data prescribed on a compact set of the form D ∩ {t = T} (see footnote 6). In particular, we can assume that our initial data is contained in a large enough torus T^n. Then, if the torus is flat and we consider the homogeneous wave equation, we can, in fact, derive explicit solutions using Fourier series, as done in Appendix A of [5]; these solutions can then be used to clarify the sharp asymptotic behavior of solutions. For instance, in the case of a power law expanding factor a(t) = t^p, p > 1, we will show that small data solutions in our class of semilinear wave equations satisfy the following estimates (valid in the future region):
|∂_t φ| ≲ t^{-2p+1}   and   |∂_x φ| ≲ 1 .
Since within the referred Fourier mode solutions [5, (155)-(157)] there are solutions with this exact profile, with "≲" replaced by "∼", and since the homogeneous wave equation is a particular case of our setup, we can then conclude that these estimates are sharp within our class of equations. Now let us discuss an important consequence of cosmic silence. Let us start with the wave equation \Box_g φ = 0 and let us choose T ≫ 1 such that D_1 ∩ D_2 ∩ {t ≥ T} = ∅, as described before. Now consider the Cauchy problem with data posed on t = T such that φ|_{D_i∩{t=T}} = C_i and ∂_t φ|_{D_i∩{t=T}} = 0, where the C_i are distinct constants. Then, by the domain of dependence property and the fact that the homogeneous wave equation admits constants as solutions, we conclude that φ|_{D_i}(t, x) = C_i, for all t. In particular, φ does not converge to a constant at infinity; instead we have lim_{t→+∞} |φ(t, x) − φ_∞(x)| = 0, for some (non-constant) function x ↦ φ_∞(x). We can now easily see that the exact same conclusions apply to any (non-linear) wave equation that admits constants as solutions; this is exactly what happens with our class of semilinear wave equations (see (2.20) and (2.21)). Moreover, this should be contrasted with what happens if we consider the Klein-Gordon case \Box_g φ = m^2 φ with non-zero mass m: then the only constant solution is the trivial one and the remaining solutions, arising from appropriate initial data, decay to zero at future infinity (see for instance [16]).
1.2. Other related works. The mathematical analysis of wave equations on expanding cosmological spacetimes has a long and rich history that can be traced back to the work of Klainerman and Sarnak [14]. Here we will not try to give a complete overview of the subject and will instead simply focus on previous works that are concerned with the analysis of such PDEs in fixed accelerated expanding FLRW cosmologies.
Sharp and almost sharp decay estimates for linear wave equations in accelerated expanding FLRW spacetimes, with special emphasis on de Sitter, can be found in [20,29,2,5,16,17]. For a detailed presentation of systems of linear wave equations on various cosmological backgrounds we refer to the monograph [24] of Ringström. In [3] Choquet-Bruhat investigated wave maps with FLRW base space and established global existence under appropriate smallness conditions on the data and the spatial geometry. There is some overlap between these results and the existence results of our paper in what pertains to wave maps. Nonetheless we should mention that Choquet-Bruhat's strategy is more geometric in nature and is restricted to wave maps with 1 + n dimensional base spaces and n ≤ 3; moreover her work does not provide an asymptotic analysis of the maps.
A thorough study of linear and semilinear wave equations, using representation formulas via integral transforms, has been performed by Galstian and Yagdjian (see [8,28,27] and references therein). These works consider non-linearities depending only on φ and are restricted to the Klein-Gordon case with non-vanishing mass term; this last fact reveals itself in the fact that in their case φ → 0, as t → +∞ (see the discussion at the end of Section 1.1). Results along the same line have also been obtained by Ebert and Reissig [7].
1.3. Overview. In Section 2 we present our geometrical setup, the structure of our systems of semilinear equations and our main results. In Section 3 we establish our basic energy estimate using the vector field method. The issue of local existence is settled in Section 4 where we use the conformal method to transform our equations into equations in Minkowski, where we can invoke classical local existence results. The proof of small data global existence is carried out in Section 5. To this end we use a bootstrap argument that extends the methods of proof developed in [18] and [19]: in a nutshell, the nonlinear structure (2.12) takes advantage of the integrability of 1/a(t) in order to achieve balanced commutator estimates for derivatives in L 2 and L ∞ ; this allows us to close the bootstrap. Finally, in Section 6 we establish the sharp future decay estimates. The simple proof presented here is a variation on an idea by Pedro Girão [9] to deal with the de Sitter case, which was previously implemented in [16]. Here we present a streamlined and extended version of this strategy; streamlined by avoiding the need to use conformal time and extended to semilinear equations and the entire FLRW family under consideration.
Setup and Main Results
Let (M, g) be a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime with topology R_+ × R^n and metric
(2.1)   g := -dt^2 + a^2(t)\, \sigma_{ij}\, dx^i dx^j ,
where σ_{ij} := σ_{ij}(x) are the components of a Riemannian metric in R^n. We will consider cosmologies undergoing an accelerated expansion in the direction of positive time t: expansion corresponds to
(2.2)   \dot a := \partial_t a > 0 ,
and the accelerated character of this expansion can be codified by imposing the integrability condition
(2.3)   \int_{t_0}^{\infty} \frac{1}{a(s)}\, ds < \infty ,
where, from now on, t_0 > 0 is fixed. We also assume that a(t) > 0 for all t ≥ t_0. Consider the covariant wave operator, defined by
(2.4)   \Box_g \phi = \frac{1}{\sqrt{|g|}}\, \partial_\alpha\big( g^{\alpha\beta} \sqrt{|g|}\, \partial_\beta \phi \big) ,
where |g| = -\det(g_{\alpha\beta}), g^{\alpha\beta} are the components of the inverse of g_{\alpha\beta}, and where, as usual, greek indices run from 0 to n. For the FLRW metric (2.1), we have
(2.5)   \Box_g \phi = -\partial_t^2 \phi - n\, \frac{\partial_t a}{a}\, \partial_t \phi + \frac{1}{a^2}\, \Delta_\sigma \phi ,
where Δ_σ is the Laplace operator of the metric σ, defined by
(2.6)   \Delta_\sigma \phi = \frac{1}{\sqrt{|\sigma|}}\, \partial_i\big( \sigma^{ij} \sqrt{|\sigma|}\, \partial_j \phi \big) ,
for |σ| = det(σ_{ij}), and where σ^{ij} are the components of the inverse of σ_{ij}, with latin indices taking values in the range 1 to n. Let 1 ≤ A, B, C ≤ d and φ^A : R^{1+n} → R. In this work we study solutions to the Cauchy problem for systems of semilinear wave equations of the form
(2.7)   \Box_g \phi^A = \tilde N^{A,\alpha\beta}_{BC}(\phi)\, \partial_\alpha\phi^B\, \partial_\beta\phi^C , \qquad \phi^A(t_0,x) = \phi^A_0(x) , \quad \partial_t\phi^A(t_0,x) = \phi^A_1(x) ,
for (φ^A_0, φ^A_1) ∈ H^{K+1}(R^n) × H^K(R^n), K ≥ n + 1, where the nonlinearities take the form
(2.8)   \tilde N^{A,00}_{BC} = N^{A,00}_{BC} ,
(2.9)   \tilde N^{A,0j}_{BC} = a^{-1}(t)\, N^{A,0j}_{BC} ,
(2.10)  \tilde N^{A,i0}_{BC} = a^{-1}(t)\, N^{A,i0}_{BC} ,
(2.11)  \tilde N^{A,ij}_{BC} = a^{-2}(t)\, N^{A,ij}_{BC} ,
with N^{A,\alpha\beta}_{BC} \in C^\infty_b(R^d), i.e. the functions N have uniformly bounded derivatives of all orders. Note that by using the Kronecker symbol we can compress the form of the nonlinearities into a single expression by writing
(2.12)   \tilde N^{A,\alpha\beta}_{BC} = a^{-2+\delta_{0\alpha}+\delta_{0\beta}}\, N^{A,\alpha\beta}_{BC} .
The main results of our paper are compiled in the following theorem.

Theorem 2.1. Let K ≥ n + 1, where n ≥ 2 is the spatial dimension of a FLRW spacetime, with topology R_+ × R^n and smooth metric of the form (2.1), whose spatial geometry satisfies
(2.13)   \sum_{i,j=1}^{n} \Big( \| \sigma_{ij} - \delta_{ij} \|_{L^\infty(R^n)} + \sum_{k=1}^{K+1} \| \partial_x^k \sigma_{ij} \|_{L^\infty(R^n)} \Big) =: C_\sigma < \infty .
Consider initial data φ^A_0, φ^A_1 : R^n → R, 1 ≤ A ≤ d, such that, for a fixed K ≥ n + 1,
(2.14)   \sum_{1 \le A \le d} \Big( \| \phi^A_0 \|_{H^{K+1}(R^n)} + \| \phi^A_1 \|_{H^K(R^n)} \Big) =: C_0 < \infty .
Then, given t_0 > 0, there exists δ_0 > 0, such that, if C_σ + C_0 ≤ δ_0, the initial value problem
(2.15)   \Box_g \phi^A = a^{-2+\delta_{0\alpha}+\delta_{0\beta}}\, N^{A,\alpha\beta}_{BC}(\phi)\, \partial_\alpha\phi^B \partial_\beta\phi^C , \qquad \phi^A(t_0,x) = \phi^A_0(x) , \quad \partial_t\phi^A(t_0,x) = \phi^A_1(x) ,
with N^{A,\alpha\beta}_{BC} \in C^\infty_b(R^d), admits a unique solution (\phi^A, \partial_t\phi^A) \in L^\infty([t_0,T), H^{K+1}(R^n)) \times L^\infty([t_0,T), H^K(R^n)). Concerning the asymptotic behavior of the solutions, given a fixed 0 ≤ k < K − n/2, we highlight that:
(1) For a general expanding factor we have
(2.16)   \| \partial_t \partial_x^k \phi^A(t,\cdot) \|_{L^\infty} \lesssim C_0 \Big( \int_{t_0}^{t} a^{n-2}(s)\, ds \Big)\, a^{-n}(t) .
  (a) in the case of a power law expansion a(t) = t^p, p > 1, (2.16) becomes
(2.17)   \| \partial_t \partial_x^k \phi^A(t,\cdot) \|_{L^\infty} \lesssim C_0\, t^{-2p+1} ,
  (b) and in the de Sitter case a(t) = e^{Ht}, H > 0, (2.16) reads
(2.18)   \| \partial_t \partial_x^k \phi^A(t,\cdot) \|_{L^\infty} \lesssim C_0\, e^{-2Ht} .
(2) Moreover, for a general expanding factor there exists a function φ_∞ = (φ^A_∞) : R^n → R^d, such that we have
(2.19)   \| \partial_x^k \big( \phi^A(t,\cdot) - \phi^A_\infty \big) \|_{L^\infty} \to 0 , \quad \text{as } t \to \infty ;
  (a) in the case of a power law expansion a(t) = t^p, p > 1, we have
(2.20)   \| \partial_x^k \big( \phi^A(t,\cdot) - \phi^A_\infty \big) \|_{L^\infty} \lesssim C_0\, t^{-2p+2} ,
  (b) and in the de Sitter case a(t) = e^{Ht}, H > 0, we get
(2.21)   \| \partial_x^k \big( \phi^A(t,\cdot) - \phi^A_\infty \big) \|_{L^\infty} \lesssim C_0\, e^{-2Ht} .
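Before turning to particular cases, it may help to record two elementary computations (a sketch, not part of the statement above). First, the model expansion factors indeed satisfy the integrability condition (2.3):
\int_{t_0}^{\infty} \frac{ds}{s^p} = \frac{t_0^{\,1-p}}{p-1} < \infty \quad (p > 1) , \qquad \int_{t_0}^{\infty} e^{-Hs}\, ds = \frac{e^{-Ht_0}}{H} < \infty \quad (H > 0) .
Second, (2.17) follows from (2.16) in the power-law case a(t) = t^p, since p(n-2)+1 > 0 for n ≥ 2:
\Big( \int_{t_0}^{t} s^{p(n-2)}\, ds \Big)\, t^{-np} \lesssim t^{\,p(n-2)+1}\, t^{-np} = t^{-2p+1} .
The de Sitter rate (2.18) follows from the same computation with a(t) = e^{Ht} and n ≥ 3, where \int_{t_0}^{t} e^{(n-2)Hs}\, ds \lesssim e^{(n-2)Ht}.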
Remark 2.2. The previous result applies to the following particular cases:
• Let φ be a wave map with base manifold one of our FLRW spacetimes (M, g) and target manifold a given Riemannian manifold (N , h) with uniformly bounded geometry [26,Chapter 6], by which we mean that we can cover N with coordinate charts of radius bounded from below, where the Christoffel symbols of h, that we denote by Γ A BC , are bounded and have bounded derivatives of all orders. Then, in the small data setting, the wave map equations take the form
(2.22) g φ A = −g αβ Γ A BC (φ)∂ α φ B ∂ β φ C ,
which clearly fits into our framework. To this effect recall that g αβ are the components of the inverse metric which is given by
(2.23) g −1 := −∂ t ⊗ ∂ t + a −2 (t)σ ij ∂ x i ⊗ ∂ x j ,
where σ ij are the components of σ −1 . • Arguably the most famous example of Fritz John's "blow up" equations [10] is
(2.24) g φ = (∂ t φ) 2 . Recall that in 1+3 dimensional Minkowski spacetime, i.e., if g = η,
where η is the flat metric, all solutions arising from arbitrarily small, but non-trivial, smooth and compactly supported initial data, blow up in finite time. However, if g is one of our FLRW metrics then the equation fits into our framework and as a consequence our results show that, in such case, the equation satisfies small data global existence to the future; so we see that the (small data) finite time blow up to the future disappears as a consequence of the accelerated expansion.
• Linblad-Rodnianski's basic example of a system that does not satisfy the null condition but satisfies the weak-null condition (see [13] for more information) formally generalizes to our setting to yield
(2.25) \Box_g\phi_1 = 0, \qquad \Box_g\phi_2 = (\partial_t\phi_1)^2.
Contrary to what happens in Minkowski, in the FLRW case small data global existence for this system is not as surprising in view of the fact that we also have global existence for Fritz John's equation (2.24).
Energy Formalism
We define the Energy-Momentum Tensor to be
(3.1) T_{\alpha\beta} = \partial_\alpha\phi\,\partial_\beta\phi - \frac{1}{2}\,g_{\alpha\beta}\,\partial^\mu\phi\,\partial_\mu\phi.
Let D be the Levi-Civita connection of the metric g. The divergence of the energy momentum is then
D^\alpha T_{\alpha\beta} = \partial_\beta\phi\;\Box_g\phi.
Given a (smooth) vector field X, we define the 1-form {}^{(X)}P_\alpha = T_{\alpha\beta}X^\beta.
Taking its divergence yields
(3.2) D^\alpha\,{}^{(X)}P_\alpha = \frac{1}{2}\,{}^{(X)}\pi_{\alpha\beta}T^{\alpha\beta} + X\phi\cdot\Box_g\phi, \qquad \text{where} \quad {}^{(X)}\pi_{\alpha\beta} := \mathcal{L}_X g_{\alpha\beta} = D_\alpha X_\beta + D_\beta X_\alpha,
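For the reader's convenience, here is the short computation behind (3.2) (it is not spelled out in the text); it uses only the symmetry of T and the divergence identity above:
D^\alpha\big( {}^{(X)}P_\alpha \big) = D^\alpha\big( T_{\alpha\beta}X^\beta \big) = \big( D^\alpha T_{\alpha\beta} \big)X^\beta + T^{\alpha\beta}D_\alpha X_\beta = X\phi\cdot\Box_g\phi + \frac{1}{2}\,T^{\alpha\beta}\big( D_\alpha X_\beta + D_\beta X_\alpha \big) = X\phi\cdot\Box_g\phi + \frac{1}{2}\,{}^{(X)}\pi_{\alpha\beta}T^{\alpha\beta}.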
is a symmetric 2-tensor known as the Deformation Tensor of g with respect to X. Integrating the divergence identity (3.2) over the time slab
{(t, x) | t 0 ≤ t ≤ t 1 }
and using Stokes' theorem we get the following Multiplier Identity
(3.3) \int_{\{t=t_0\}} {}^{(X)}P_\alpha N^\alpha\,|g|^{1/2}\,dx - \int_{\{t=t_1\}} {}^{(X)}P_\alpha N^\alpha\,|g|^{1/2}\,dx = \int_{t_0}^{t_1}\!\int_{\mathbb{R}^n} \Big( \frac{1}{2}\,{}^{(X)}\pi_{\alpha\beta}T^{\alpha\beta} + X\phi\cdot\Box_g\phi \Big)\,|g|^{1/2}\,dx\,dt,
where N = \partial_t is the future pointing unit normal to the time slices t = const, |g| = -\det(g_{\alpha\beta}) = a^{2n}\det(\sigma) = a^{2n}|\sigma|, with |\sigma| := \det(\sigma_{ij}), and dx = dx^1\cdots dx^n. The integrand {}^{(X)}P_\alpha N^\alpha in (3.3) is the Energy Density associated to X.
We can also control the sign of the contraction of the deformation tensor with the energy-momentum tensor, on the right hand side, by choosing an appropriate multiplier vector field X. In fact we have
Lemma 3.1. For X = a^l\,\partial_t, where a is the expanding factor and l \in \mathbb{R}, we have
(3.4) {}^{(X)}\pi_{\alpha\beta}T^{\alpha\beta} = (n-l)\,a^{l-1}\dot a\,(\partial_t\phi)^2 + (2-n-l)\,a^{l-3}\dot a\,\sigma^{ij}\partial_i\phi\,\partial_j\phi,
where \sigma^{ij} is the inverse of \sigma_{ij}.
Proof. To compute the deformation tensor we start by noting that \mathcal{L}_X\sigma = 0,
\mathcal{L}_X dt = d\,\iota_X dt = d(a^l) = l\,a^{l-1}\dot a\,dt,
and that \mathcal{L}_X a = a^l\dot a, in order to compute
{}^{(X)}\pi = \mathcal{L}_X g = \mathcal{L}_X\big( -dt^2 + a^2\sigma_{ij}dx^i dx^j \big) = -2\,dt\,\mathcal{L}_X dt + 2a^{l+1}\dot a\,\sigma_{ij}dx^i dx^j = -2l\,a^{l-1}\dot a\,dt^2 + 2a^{l+1}\dot a\,\sigma_{ij}dx^i dx^j.
It then follows that
{}^{(X)}\pi_{\alpha\beta}T^{\alpha\beta} = -2l\,a^{l-1}\dot a\Big( (\partial_t\phi)^2 + \frac{1}{2}\,\partial^\alpha\phi\,\partial_\alpha\phi \Big) + 2a^{l+1}\dot a\,\sigma_{ij}\Big( \partial^i\phi\,\partial^j\phi - \frac{1}{2}\,g^{ij}\,\partial^\alpha\phi\,\partial_\alpha\phi \Big),
and the desired result is then a consequence of the identities
(3.5) \partial^\alpha\phi\,\partial_\alpha\phi = -(\partial_t\phi)^2 + a^{-2}\sigma^{ij}\partial_i\phi\,\partial_j\phi, \qquad \text{and} \qquad (3.6) \sigma_{ij}\,\partial^i\phi\,\partial^j\phi = a^{-4}\,\sigma^{ij}\partial_i\phi\,\partial_j\phi.
We thus choose l = 2 - n, for which we have
(3.7) {}^{(X)}\pi_{\alpha\beta}T^{\alpha\beta} \ge 0,
and define
(3.8) E[\phi](t) := \int_{\{t\}\times\mathbb{R}^n} {}^{(X)}P_\alpha N^\alpha\,|g|^{1/2}\,dx = \frac{1}{2}\int_{\mathbb{R}^n} \Big( a^2(\partial_t\phi)^2 + \sigma^{ij}\partial_i\phi\,\partial_j\phi \Big)(t,x)\,|\sigma|^{1/2}\,dx.
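To make the sign in (3.7) explicit (this short check is ours and follows directly from (3.4)): substituting l = 2 - n gives
{}^{(X)}\pi_{\alpha\beta}T^{\alpha\beta} = \big( n-(2-n) \big)a^{1-n}\dot a\,(\partial_t\phi)^2 + \big( 2-n-(2-n) \big)a^{-1-n}\dot a\,\sigma^{ij}\partial_i\phi\partial_j\phi = 2(n-1)\,a^{1-n}\dot a\,(\partial_t\phi)^2 \ge 0,
since the spacetime is expanding (\dot a > 0) and n \ge 2.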
Consequently, the identities (3.3) and (3.4) give rise to
(3.9) E(t_1) \le E(t_0) + \int_{t_0}^{t_1}\!\int_{\mathbb{R}^n} a^2\,\big| \partial_t\phi\;\Box_g\phi \big|\,|\sigma|^{1/2}\,dx\,dt.
Introducing the L^2 norm defined by
\|f\|_{L^2_\sigma} = \Big( \int_{\mathbb{R}^n} |f(x)|^2\,|\sigma|^{1/2}\,dx \Big)^{1/2},
we can apply the Cauchy-Schwarz inequality followed by Young's inequality to (3.9) to obtain
E(t) \le E(t_0) + \int_{t_0}^{t_1} a\,\|\partial_t\phi\|_{L^2_\sigma}\; a\,\|\Box_g\phi\|_{L^2_\sigma}\,dt \le E(t_0) + \sqrt{2}\,\sup_{t_0\le t\le t_1}E^{1/2}(t)\int_{t_0}^{t_1} a\,\|\Box_g\phi\|_{L^2_\sigma}\,dt \le E(t_0) + 2\epsilon\,\sup_{t_0\le t\le t_1}E(t) + \frac{1}{\epsilon}\Big( \int_{t_0}^{t_1} a\,\|\Box_g\phi\|_{L^2_\sigma}\,dt \Big)^2,
where \epsilon can be any positive constant, which when chosen sufficiently small allows us to conclude that
\sup_{t_0\le t\le t_1} E(t) \le \frac{1}{1-2\epsilon}\Big( E(t_0) + \frac{1}{\epsilon}\Big( \int_{t_0}^{t_1} a\,\|\Box_g\phi\|_{L^2_\sigma}\,dt \Big)^2 \Big).
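As an illustration (this particular choice is ours; the text leaves \epsilon unspecified), taking \epsilon = 1/4 in the last display gives
\sup_{t_0\le t\le t_1} E(t) \le 2E(t_0) + 8\Big( \int_{t_0}^{t_1} a\,\|\Box_g\phi\|_{L^2_\sigma}\,dt \Big)^2,
and hence, taking square roots,
\sup_{t_0\le t\le t_1} E^{1/2}(t) \le 2\sqrt{2}\,\Big( E^{1/2}(t_0) + \int_{t_0}^{t_1} a\,\|\Box_g\phi\|_{L^2_\sigma}\,dt \Big),
which is the estimate (3.10) below with C_1 = 2\sqrt{2}.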
From the previous we immediately obtain our main energy estimate:
Theorem 3.2. Let g be the FLRW metric (2.1). Then, there exists a constant C_1 > 0 such that given \phi : [t_0, T) \times \mathbb{R}^n \to \mathbb{R}, the following energy estimate holds
(3.10) \sup_{t_0\le t<T} E^{1/2}(t) \le C_1\Big( E^{1/2}(t_0) + \int_{t_0}^{T} a(t)\,\|\Box_g\phi\|_{L^2_\sigma}\,dt \Big),
for the energy E = E[\phi] defined in (3.8).
Local existence
Consider two conformally related metrics in \mathbb{R}^{1+n},
g = \Omega^2\,\tilde g.
Then a direct computation shows that
(4.1) \Box_g\phi = \Omega^{-2}\,\Box_{\tilde g}\phi + (n-1)\,\Omega^{-3}\,\tilde g^{\alpha\beta}\,\partial_\alpha\Omega\,\partial_\beta\phi.
If we let g be the FLRW metric (2.1) and consider the standard change of time, we can write g = a^2(\tau)\big( -d\tau^2 + \sigma_{ij}dx^i dx^j \big) =: a^2\eta_\sigma. Using (4.1) with \Omega = a and \tilde g = \eta_\sigma we conclude that the semilinear equation \Box_g\phi = F(\phi,\partial\phi) is equivalent to
\Box_{\eta_\sigma}\phi = (n-1)\,\frac{\partial_\tau a}{a}\,\partial_\tau\phi + a^2 F(\phi,\partial\phi).
We can then apply classical local well-posedness results for nonlinear wave equations on perturbations of Minkowski (see for instance [25, ]) to obtain the following.
Theorem 4.1. Suppose that
|\sigma_{ij} - \delta_{ij}| < \frac{1}{10}.
Then, the initial value problem
(4.5) \Box_g\phi = F(\phi,\partial\phi), \qquad \phi(t_0,x) = \phi_0(x), \qquad \partial_t\phi(t_0,x) = \phi_1(x),
with F \in C^\infty, F(0,0) = 0, and (\phi_0,\phi_1) \in H^{K+1}(\mathbb{R}^n) \times H^K(\mathbb{R}^n), K \ge n+1, admits a unique solution (\phi, \partial_t\phi) \in L^\infty([t_0,T), H^{K+1}(\mathbb{R}^n)) \times L^\infty([t_0,T), H^K(\mathbb{R}^n)), where T = T\big( \|(\phi_0,\phi_1)\|_{H^{K+1}\times H^K} \big) > 0.
Global Existence for small data
In this section we establish small data global existence for the system (2.15) under the conditions of Theorem 4.1. To do that recall that C 0 denotes the size of the initial data (2.14). We start by defining
E_K(t) := \sum_{1\le A\le d}\sum_{k=0}^{K} E^{1/2}[\partial^k_x\phi^A](t) = \sum_{1\le A\le d}\frac{1}{\sqrt{2}}\sum_{k=0}^{K}\Big( \int_{\{t\}\times\mathbb{R}^n} \big[ a^2\big( \partial_t\partial^k_x\phi^A \big)^2 + \sigma^{ij}\,\partial_i\partial^k_x\phi^A\;\partial_j\partial^k_x\phi^A \big] \Big)^{1/2}.
Some comments concerning notation are in order: first we are using ∂ k x to denote any differentiation of order k with respect to the spatial variables x i , i.e., any differentiation of the form
\frac{\partial^{k_1}}{\partial x^{i_1}}\cdots\frac{\partial^{k_l}}{\partial x^{i_l}},
with \sum_j k_j = k; secondly, we are omitting the volume form |\sigma|^{1/2}dx in order not to overburden the notation; this is clearly not an issue since by the smallness condition on the metric we have |\sigma| \sim 1, uniformly in x, which allows us to drop this coefficient from all spatial L^2 based norms. On this note it might be helpful to make it clear that by choosing C_\sigma sufficiently small, there exists C > 0 such that
(5.1) C^{-1}\,\delta^{ij}\xi_i\xi_j \le \sigma^{ij}\xi_i\xi_j \le C\,\delta^{ij}\xi_i\xi_j, \qquad \text{for all } \xi = (\xi_i) \in \mathbb{R}^n.
We proceed by a continuity argument. Let K ≥ n + 1 and assume that M is a large enough constant so that
(5.2) E_K(t_0) \le \frac{M C_0}{C_2},
where C_2 is a constant, independent of C_0 and C_\sigma, that will be specified in the course of the proof. Next we assume as bootstrap condition that T is the supremum over all times of existence t \ge t_0 for which
(5.3) \sup_{t_0\le t<T} E_K(t) \le 4MC_0.
That a T > t 0 in such conditions exists is a direct consequence of Theorem 4.1.
Using Sobolev embedding, the relation (5.1), and the bootstrap assumption we see that for 0 \le l < K - \frac{n}{2}, and 0 \le t < T, we have
(5.4) \|\partial^l_x\partial_x\phi^A(t,\cdot)\|_{L^\infty(\mathbb{R}^n)} \le C\,\|\partial_x\phi^A(t,\cdot)\|_{H^K(\mathbb{R}^n)} \le C\,E_K(t) \le 4MCC_0,
for all A \in \{1, 2, \dots, d\}. Moreover, for 0 \le l < K - \frac{n}{2}, and 0 \le t < T, we get
(5.5) \|\partial^l_x\partial_t\phi^A(t,\cdot)\|_{L^\infty(\mathbb{R}^n)} \le C\,\|\partial_t\phi^A(t,\cdot)\|_{H^K(\mathbb{R}^n)} \le C\,a^{-1}(t)\,E_K(t) \le 4MCC_0\,a^{-1}(t),
again for all A \in \{1, 2, \dots, d\}. Let 0 \le k \le K. The spatial derivatives satisfy the equation
\Box_g\partial^k_x\phi^A = \partial^k_x\Box_g\phi^A + [\Box_g,\partial^k_x]\phi^A = \partial^k_x\big( \tilde N^{A,\alpha\beta}_{BC}(\phi)\,\partial_\alpha\phi^B\,\partial_\beta\phi^C \big) + [\Box_g,\partial^k_x]\phi^A,
so that applying the energy estimate (3.10) for A ∈ {1, 2, ..., d}, summing and taking the supremum gives
(5.6) \sup_{t_0\le t<T} E_K(t) \le C\,E_K(t_0) + C\sum_{A=1}^{d}\sum_{k=0}^{K}\Big( \int_{t_0}^{T} a(t)\,\big\|\partial^k_x\big( \tilde N^{A,\alpha\beta}_{BC}(\phi)\partial_\alpha\phi^B\partial_\beta\phi^C \big)\big\|_{L^2_\sigma}\,dt + \int_{t_0}^{T} a(t)\,\big\|[\Box_g,\partial^k_x]\phi^A\big\|_{L^2_\sigma}\,dt \Big).
Let us first concentrate on the second term on the right hand side of the previous energy estimate. Using the form of the non-linearities (2.12) we get
(5.7) \big\|\partial^k_x\big( \tilde N^{A,\alpha\beta}_{BC}(\phi)\partial_\alpha\phi^B\partial_\beta\phi^C \big)\big\|_{L^2_\sigma} \le C\,(a(t))^{\delta_{0\alpha}+\delta_{0\beta}-2}\,\big\|\partial^k_x\big( N^{A,\alpha\beta}_{BC}(\phi)\partial_\alpha\phi^B\partial_\beta\phi^C \big)\big\|_{L^2(\mathbb{R}^n)}.
To simplify notation we will use N to collectively denote all the functions N A,αβ BC . That being said, we note that modulo some multiplicative positive constants arising from the application of Leibniz rule, the terms
\big\|\partial^k_x\big( N(\phi)\,\partial_\alpha\phi^B\,\partial_\beta\phi^C \big)\big\|_{L^2(\mathbb{R}^n)}
can be bounded by sums of terms of the form
\big\|\partial^{k_1}_x N(\phi)\;\partial^{k_2}_x\big( \partial_\alpha\phi^B\,\partial_\beta\phi^C \big)\big\|_{L^2(\mathbb{R}^n)},
with k_1, k_2 \ge 0 and k_1 + k_2 = k. In particular, either k_1 \le k/2 or k_2 \le k/2.
Let us start with the case k 1 ≤ k/2. Recall that, by assumption,
(5.8) \|\partial^l_\phi N\|_{L^\infty(\mathbb{R}^d)} \le C, \qquad \text{for all } 0 \le l \le K.
This gives the necessary control \|N(\phi(t,\cdot))\|_{L^\infty(\mathbb{R}^n)} \le C needed in the case k_1 = 0. If k_1 \ge 1, the chain rule implies that \|\partial^{k_1}_x(N(\phi))\|_{L^\infty(\mathbb{R}^n)} is bounded by sums of terms of the form
O(1)\,\prod_i \big\|\partial^{s_i}_x\phi^A\big\|_{L^\infty(\mathbb{R}^n)},
with \sum_i s_i = k_1 and s_i \ge 1. In view of (5.4) we see that these terms are bounded, since we have 0 \le l = s_i - 1 \le k_1 - 1 \le k/2 - 1 \le K/2 - 1 < K - n/2, where the last inequality follows from the fact that K \ge n + 1 > n - 2. Consequently
(5.9) \big\|\partial^{k_1}_x N(\phi)\;\partial^{k_2}_x\big( \partial_\alpha\phi^B\partial_\beta\phi^C \big)\big\|_{L^2(\mathbb{R}^n)} \le C\,\big\|\partial^{k_2}_x\big( \partial_\alpha\phi^B\partial_\beta\phi^C \big)\big\|_{L^2(\mathbb{R}^n)}.
But now the right hand side is bounded by sums of terms of the form
\big\|\partial^{\tilde k_1}_x\partial_\alpha\phi^B\;\partial^{\tilde k_2}_x\partial_\beta\phi^C\big\|_{L^2(\mathbb{R}^n)},
with \tilde k_1 + \tilde k_2 = k_2 \le k. We may then assume without loss of generality that \tilde k_1 \le k/2. Since K \ge n + 1 we have 0 \le \tilde k_1 \le k/2 \le K/2 < K - n/2 and therefore we are allowed to use either (5.4), if \alpha \neq 0, or (5.5), if \alpha = 0, to conclude that
(5.10) \big\|\partial^{\tilde k_1}_x\partial_\alpha\phi^B\big\|_{L^\infty(\mathbb{R}^n)} \le (a(t))^{-\delta_{0\alpha}}\,4MCC_0.
But then we get
(5.11) \big\|\partial^{\tilde k_1}_x\partial_\alpha\phi^B\;\partial^{\tilde k_2}_x\partial_\beta\phi^C\big\|_{L^2(\mathbb{R}^n)} \le \big\|\partial^{\tilde k_1}_x\partial_\alpha\phi^B\big\|_{L^\infty(\mathbb{R}^n)}\,\big\|\partial^{\tilde k_2}_x\partial_\beta\phi^C\big\|_{L^2(\mathbb{R}^n)} \le (a(t))^{-\delta_{0\alpha}}\,4MCC_0\,(a(t))^{-\delta_{0\beta}}\,E_K(t).
Using (5.9) and (5.11) we finally establish
(5.12) \big\|\partial^{k_1}_x N(\phi)\;\partial^{k_2}_x\big( \partial_\alpha\phi^B\partial_\beta\phi^C \big)\big\|_{L^2(\mathbb{R}^n)} \le (a(t))^{-\delta_{0\alpha}-\delta_{0\beta}}\,4MCC_0\,E_K(t),
provided k 1 ≤ k/2. Let us now consider the case k 1 > k/2. In such case k 2 < k/2 and therefore we see that
\big\|\partial^{k_2}_x\big( \partial_\alpha\phi^B\partial_\beta\phi^C \big)\big\|_{L^\infty(\mathbb{R}^n)}
is bounded by terms of the form
(5.13) \big\|\partial^{\tilde k_1}_x\partial_\alpha\phi^B\big\|_{L^\infty(\mathbb{R}^n)}\,\big\|\partial^{\tilde k_2}_x\partial_\beta\phi^C\big\|_{L^\infty(\mathbb{R}^n)} \le (a(t))^{-\delta_{0\alpha}-\delta_{0\beta}}\,4MCC_0\,E_K(t),
where the last estimate is a consequence of (5.4) and (5.5) and the fact that \tilde k_1 + \tilde k_2 = k_2 < k/2 \le K/2 and K \ge n + 1.
Next we need to control
(5.14) \big\|\partial^{k_1}_x N(\phi)\big\|_{L^2(\mathbb{R}^n)}, \qquad k_1 > k/2.
Applying the chain rule and using (5.8), we see that |\partial^{k_1}_x N(\phi)| is controlled by sums of terms of the form
\big| O(1)\,\prod_i \partial^{s_i}_x\phi^A \big|, \qquad \text{with } \sum_i s_i = k_1.
By an appropriate relabeling, set s_1 = \max\{s_i\}, so that s_i \le k_1/2 \le k/2 for all i \neq 1, which, in view of (5.4), implies \big\|\partial^{s_i}_x\phi^A\big\|_{L^\infty(\mathbb{R}^n)} \le 4MCC_0, for all i \neq 1. Consequently, using the bootstrap assumption,
\Big\|\prod_i \partial^{s_i}_x\phi^A\Big\|_{L^2(\mathbb{R}^n)} \le 4MCC_0\,\big\|\partial^{s_1}_x\phi^A\big\|_{L^2(\mathbb{R}^n)} \le (4MCC_0)^2,
from which, by decreasing C_0 if necessary, we can establish
(5.15) \big\|\partial^{k_1}_x N(\phi)\big\|_{L^2(\mathbb{R}^n)} \le 1.
The last estimate together with (5.13) then allows us to conclude that (5.12) also holds in the case k_1 > k/2. So, for any 0 \le k \le K, using (5.7) leads to
(5.16) \big\|\partial^k_x\big( \tilde N^{A,\alpha\beta}_{BC}(\phi)\partial_\alpha\phi^B\partial_\beta\phi^C \big)\big\|_{L^2_\sigma} \le a^{\delta_{0\alpha}+\delta_{0\beta}-2}\,a^{-\delta_{0\alpha}-\delta_{0\beta}}\,4MCC_0\,E_K(t) = a^{-2}\,4MCC_0\,E_K(t),
and
(5.17) \int_{t_0}^{T} a(t)\,\big\|\partial^k_x\big( \tilde N^{A,\alpha\beta}_{BC}(\phi)\partial_\alpha\phi^B\partial_\beta\phi^C \big)\big\|_{L^2_\sigma}\,dt \le 4MCC_0\int_{t_0}^{T}\frac{E_K(t)}{a(t)}\,dt.
We now consider the last term in (5.6). In the case k = K, the commutator [\Box_g, \partial^k_x] is the difference of two differential operators of order K + 2 and this is worrisome since, a priori, our bootstrap assumption only gives control of derivatives up to order K + 1! But it is well known that the top derivatives cancel out (see for instance [1, Section 6.2] or [6]). For the sake of completeness, we show here that [\Box_g, \partial^k_x] is in fact of order k + 1, and that moreover enough factors involving the spatial metric \sigma appear and provide, via the Kth order near flatness condition (2.13), a small parameter C_\sigma that will allow us to close our bootstrap argument.
Using (2.5) we see that
(5.18) [\Box_g, \partial^k_x] = a^{-2}\,[\Delta_\sigma, \partial^k_x].
Then, if we note that Leibniz rule can be written as
(5.19) \partial^k_x(fg) = \sum_{k_1+k_2=k} c_{k_1,k_2}\,\partial^{k_1}_x f\;\partial^{k_2}_x g,
with the c_{k_1,k_2} positive constants such that c_{0,k} = 1, we can use (2.6) to compute
[\Delta_\sigma, \partial^k_x]\phi = \Delta_\sigma\partial^k_x\phi - \partial^k_x\Big( |\sigma|^{-1/2}\partial_i\big( \sigma^{ij}|\sigma|^{1/2}\partial_j\phi \big) \Big)
= \Delta_\sigma\partial^k_x\phi - \sum_{k_1+k_2=k} c_{k_1,k_2}\,\partial^{k_1}_x\big( |\sigma|^{-1/2} \big)\,\partial^{k_2}_x\partial_i\big( \sigma^{ij}|\sigma|^{1/2}\partial_j\phi \big)
= \Delta_\sigma\partial^k_x\phi - c_{0,k}\,|\sigma|^{-1/2}\partial_i\Big( \sum_{k_1+k_2=k} c_{k_1,k_2}\,\partial^{k_1}_x\big( \sigma^{ij}|\sigma|^{1/2} \big)\,\partial^{k_2}_x\partial_j\phi \Big) - \sum_{k_1+k_2=k,\,k_1\neq 0} c_{k_1,k_2}\,\partial^{k_1}_x\big( |\sigma|^{-1/2} \big)\,\partial^{k_2}_x\partial_i\big( \sigma^{ij}|\sigma|^{1/2}\partial_j\phi \big)
= \Delta_\sigma\partial^k_x\phi - |\sigma|^{-1/2}\partial_i\big( c_{0,k}\,\sigma^{ij}|\sigma|^{1/2}\,\partial_j\partial^k_x\phi \big) - |\sigma|^{-1/2}\partial_i\Big( \sum_{k_1+k_2=k,\,k_1\neq 0} c_{k_1,k_2}\,\partial^{k_1}_x\big( \sigma^{ij}|\sigma|^{1/2} \big)\,\partial^{k_2}_x\partial_j\phi \Big) - \sum_{k_1+k_2=k,\,k_1\neq 0} c_{k_1,k_2}\,\partial^{k_1}_x\big( |\sigma|^{-1/2} \big)\,\partial_i\Big( \sum_{\tilde k_1+\tilde k_2=k_2} \tilde c_{\tilde k_1,\tilde k_2}\,\partial^{\tilde k_1}_x\big( \sigma^{ij}|\sigma|^{1/2} \big)\,\partial^{\tilde k_2}_x\partial_j\phi \Big);
since the two terms in the first line of the last equality cancel out, we finally arrive at
(5.20) -a^2\,[\Box_g,\partial^k_x]\phi^A = |\sigma|^{-1/2}\partial_i\Big( \sum_{k_1+k_2=k,\,k_1\neq 0} c_{k_1,k_2}\,\partial^{k_1}_x\big( \sigma^{ij}|\sigma|^{1/2} \big)\,\partial^{k_2}_x\partial_j\phi^A \Big) + \sum_{k_1+k_2=k,\,k_1\neq 0} c_{k_1,k_2}\,\partial^{k_1}_x\big( |\sigma|^{-1/2} \big)\,\partial_i\Big( \sum_{\tilde k_1+\tilde k_2=k_2} \tilde c_{\tilde k_1,\tilde k_2}\,\partial^{\tilde k_1}_x\big( \sigma^{ij}|\sigma|^{1/2} \big)\,\partial^{\tilde k_2}_x\partial_j\phi^A \Big).
We can now use Jacobi's formula to write, given l \in \mathbb{R},
(5.21) \partial_x|\sigma|^l = l\,|\sigma|^l\,\sigma^{ij}\,\partial_x\sigma_{ij}.
Recall the well known fact that
(5.22) \partial_x\sigma^{ij} = -\sigma^{kj}\sigma^{is}\,\partial_x\sigma_{sk}.
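As a supplementary one-line derivation (ours, not spelled out in the text), (5.22) follows by differentiating the identity \sigma^{is}\sigma_{sk} = \delta^i_k:
0 = \partial_x\big( \sigma^{is}\sigma_{sk} \big) = \big( \partial_x\sigma^{is} \big)\sigma_{sk} + \sigma^{is}\,\partial_x\sigma_{sk} \;\Longrightarrow\; \partial_x\sigma^{ij} = \big( \partial_x\sigma^{is} \big)\sigma_{sk}\,\sigma^{kj} = -\sigma^{kj}\sigma^{is}\,\partial_x\sigma_{sk}.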
By direct inspection of (5.20) we see that: i) all terms on the right hand side contain at least one factor involving derivatives of |σ| l or of σ ij which, according to the previous identities and (2.13) can be bounded, in L ∞ (R n ), by CC σ ; ii) all terms contain derivatives ∂ k x ∂ j φ A , with 0 ≤ k ≤ K, all of which can be bounded, in L 2 (R n ), by the energy E K ; iii) Finally, there are also terms involving factors of |σ| l or σ ij which are clearly bounded, in L ∞ (R n ). We thus conclude that
(5.23) \big\|[\Box_g,\partial^k_x]\phi^A(t,\cdot)\big\|_{L^2(\mathbb{R}^n)} \le C\,a^{-2}(t)\,C_\sigma\,E_K(t), \quad \text{and consequently} \quad (5.24) \int_{t_0}^{T} a(t)\,\big\|[\Box_g,\partial^k_x]\phi^A\big\|_{L^2_\sigma}\,dt \le C\,C_\sigma\int_{t_0}^{T}\frac{E_K(t)}{a(t)}\,dt.
We are now ready to close our bootstrap argument. From estimates (5.6), (5.17) and (5.24), it follows that
E_K(t) \le \sup_{t_0\le t<T} E_K(t) \le C\,E_K(t_0) + d(K+1)\,4MCC_0\int_{t_0}^{T}\frac{E_K(t)}{a(t)}\,dt + d(K+1)\,CC_\sigma\int_{t_0}^{T}\frac{E_K(t)}{a(t)}\,dt \le C_2\Big( E_K(t_0) + C_0\int_{t_0}^{T}\frac{E_K(t)}{a(t)}\,dt + C_\sigma\int_{t_0}^{T}\frac{E_K(t)}{a(t)}\,dt \Big).
Using Grönwall's inequality leads to
(5.25) E_K(t) \le C_2\,E_K(t_0)\,\exp\Big( C_2(C_0+C_\sigma)\int_{t_0}^{T}\frac{1}{a(t)}\,dt \Big).
By the integrability condition (2.3) and choosing C_0 + C_\sigma sufficiently small, we can ensure
\exp\Big( C_2(C_0+C_\sigma)\int_{t_0}^{T}\frac{1}{a(t)}\,dt \Big) < 2.
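To illustrate the smallness requirement with a concrete expanding factor (this computation is ours), consider the de Sitter case a(t) = e^{Ht}: then
\int_{t_0}^{\infty}\frac{dt}{a(t)} = \frac{e^{-Ht_0}}{H},
so the displayed condition holds as soon as C_2(C_0+C_\sigma)\,e^{-Ht_0}/H < \ln 2.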
Applying this, (5.2), and taking supremum in (5.25) then yields
(5.26) \sup_{t_0\le t<T} E_K(t) \le 2MC_0,
which corresponds to a strict improvement of the bootstrap assumption (5.3), from which we can conclude that T = +∞.
Sharp decay estimates
We will establish sharp decay upper bounds for the global solutions constructed in the previous section. Dropping the capital Latin indices to simplify notation, we see that our wave equation can be written in the form
(6.1) \partial_t\big( a^n\,\partial_t\phi \big) = a^{n-2}\big( \Delta_\sigma\phi - a^{2}\tilde N^{\alpha\beta}\partial_\alpha\phi\,\partial_\beta\phi \big).
From (2.13) and (5.4) we see that
(6.2) \|\Delta_\sigma\phi\|_{L^\infty} \le \big\| |\sigma|^{-1/2}\partial_i\big( \sigma^{ij}|\sigma|^{1/2} \big)\,\partial_j\phi \big\|_{L^\infty} + \big\| \sigma^{ij}\partial_i\partial_j\phi \big\|_{L^\infty} \lesssim C_\sigma C_0,
while (5.16), (5.26) and Sobolev embedding imply
(6.3) \big\| a^2\tilde N^{\alpha\beta}\partial_\alpha\phi\,\partial_\beta\phi \big\|_{L^\infty} \lesssim C_0^2.
Then, integrating (6.1), we conclude
(6.4) a^n(t)\,\partial_t\phi(t,x) = a^n(t_0)\,\partial_t\phi(t_0,x) + \int_{t_0}^{t} a^{n-2}(s)\big( \Delta_\sigma\phi - a^{2}\tilde N^{\alpha\beta}\partial_\alpha\phi\,\partial_\beta\phi \big)(s,x)\,ds,
from which, in view of the previous estimates, it follows that
(6.5) |a^n(t)\,\partial_t\phi(t,x)| \lesssim C_0 + C_0\int_{t_0}^{t} a^{n-2}(s)\,ds,
therefore
(6.6) |\partial_t\phi(t,x)| \lesssim C_0\Big( 1 + \int_{t_0}^{t} a^{n-2}(s)\,ds \Big)a^{-n}(t) \lesssim C_0\Big( \int_{t_0}^{t} a^{n-2}(s)\,ds \Big)a^{-n}(t).
Note that if we consider the de Sitter case a(t) = e^{Ht} we immediately obtain (for n \ge 2)
(6.7) |\partial_t\phi(t,x)| \lesssim C_0\,e^{-2Ht},
while in the case a(t) = t^p, p > 1, we have
(6.8) |\partial_t\phi(t,x)| \lesssim C_0\,t^{-2p+1}.
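The power-law computation behind (6.8) is omitted in the text; spelled out (our computation), for a(t) = t^p with p > 1 and n \ge 2 one has
\int_{t_0}^{t} a^{n-2}(s)\,ds = \int_{t_0}^{t} s^{p(n-2)}\,ds \le \frac{t^{p(n-2)+1}}{p(n-2)+1},
so that
\Big( \int_{t_0}^{t} a^{n-2}(s)\,ds \Big)a^{-n}(t) \lesssim t^{p(n-2)+1-pn} = t^{-2p+1},
which combined with (6.6) gives (6.8).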
If we commute the wave equation with spatial derivatives we obtain
(6.9) \partial_t\big( a^n\,\partial_t\partial^k_x\phi \big) = a^{n-2}\Big( \partial^k_x\Delta_\sigma\phi - a^{2}\partial^k_x\big( \tilde N^{\alpha\beta}\partial_\alpha\phi\,\partial_\beta\phi \big) \Big).
Relying once again on (5.4) and (2.13), a simple adaptation of (6.2) shows that, for k < K - \frac{n}{2},
\big\| \partial^k_x\Delta_\sigma\phi \big\|_{L^\infty} \lesssim C_\sigma C_0,
while (5.16) and Sobolev embedding give us, also for k < K - \frac{n}{2},
\big\| a^2\,\partial^k_x\big( \tilde N^{\alpha\beta}\partial_\alpha\phi\,\partial_\beta\phi \big) \big\|_{L^\infty} \lesssim C_0.
So, by integrating (6.9) we conclude that
(6.10) |\partial_t\partial^k_x\phi(t,x)| \lesssim C_0\Big( \int_{t_0}^{t} a^{n-2}(s)\,ds \Big)a^{-n}(t),
provided that k < K - \frac{n}{2}.
Next we focus on constructing the limiting function \phi_\infty in the case of a general expanding factor. To do that we start by noticing that since a(t) is positive and increasing we have
(6.11) \Big( \int_{t_0}^{t} a^{n-2}(s)\,ds \Big)a^{-n}(t) \le \Big( \int_{t_0}^{t} a^{-1}(s)\,ds \Big)a^{-1}(t).
We may therefore define
(6.12) \phi_\infty(x) := \phi(t_0,x) + \lim_{t\to\infty}\int_{t_0}^{t}\partial_t\phi(s,x)\,ds,
which is well defined in view of (6.6), (6.11) and the fact that a^{-1} is integrable. Then
(6.13) |\phi(t,x) - \phi_\infty(x)| \le \int_{t}^{\infty}|\partial_t\phi(s,x)|\,ds \le C\int_{t}^{\infty} a^{-1}(s)\,ds,
and
(6.14) \|\phi(t,\cdot) - \phi_\infty\|_{L^\infty} \lesssim C_0\int_{t}^{\infty} a^{-1}(s)\,ds \to 0,
as t \to \infty. In particular, this gives \|\phi(t_0,\cdot) - \phi_\infty\|_{L^\infty} \lesssim C_0. Repeating this procedure with \phi replaced by \partial_x\phi, we may define
(6.16) \phi_{x,\infty}(x) := \partial_x\phi(t_0,x) + \lim_{t\to\infty}\int_{t_0}^{t}\partial_t\partial_x\phi(s,x)\,ds,
and obtain
(6.17) |\partial_x\phi(t,x) - \phi_{x,\infty}(x)| \le \int_{t}^{\infty}|\partial_t\partial_x\phi(s,x)|\,ds \lesssim C_0\int_{t}^{\infty} a^{-1}(s)\,ds \to 0,
as t \to \infty. From this uniform convergence and the already established convergence of \phi(\cdot,x), as t \to \infty, we conclude that \phi_{x,\infty} = \partial_x\phi_\infty. It is now easy to conclude by induction that, for all k < K - \frac{n}{2}, we have
(6.18) \big\|\partial^k_x\big( \phi(t,\cdot) - \phi_\infty \big)\big\|_{L^\infty} \lesssim C_0\int_{t}^{\infty} a^{-1}(s)\,ds \to 0,
as t \to \infty.
The quantitative decay estimates (2.20) and (2.21), which are specific to the power law case a(t) = t^p, p > 1, and the de Sitter case a(t) = e^{Ht}, H > 0, respectively, now follow easily, by using the corresponding expansion factors in the previous procedure.
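For completeness (our computation; the text only indicates the procedure), inserting the sharpened bounds (6.8) and (6.7) into the time integral in (6.13) gives the quantitative rates:
\int_{t}^{\infty} s^{-2p+1}\,ds = \frac{t^{-2p+2}}{2p-2} \quad (p>1), \qquad \int_{t}^{\infty} e^{-2Hs}\,ds = \frac{e^{-2Ht}}{2H},
which are exactly the decay rates stated in (2.20) and (2.21).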
Figure 1. a) Section of Penrose diagram with D ∩ {t ≥ T} depicted as the hatched region. b) 2-dimensional representation in \mathbb{R}_+ \times \mathbb{R}^n of D.
Figure 2. a) Section of Penrose diagram with D_1 ∩ D_2 ∩ {t ≥ T} depicted as the hatched regions. b) 2-dimensional representation in \mathbb{R}_+ \times \mathbb{R}^n of D_1 and D_2.
References
[1] S. Alinhac (2010). Geometric analysis of hyperbolic differential equations: An Introduction. London Mathematical Society Lecture Note Series 374, Cambridge University Press.
[2] D. Baskin (2010). A parametrix for the fundamental solution for the Klein-Gordon equation on asymptotically de Sitter spaces. J. Funct. Anal. 259(7), p. 1673-1719, arXiv:0905.0447 [math.AP].
[3] Y. Choquet-Bruhat (2000). Global wave maps on Robertson-Walker spacetimes. Nonlinear Dynamics 22, p. 39-47.
[4] S. Cotsakis, J. Miritzis and K. Tzanni (2019). Cosmological wave maps. International Journal of Modern Physics A, Vol. 34, 1950092, arXiv:1905.11049 [gr-qc].
[5] J. L. Costa, P. Oliveira and J. Natário (2019). Decay of solutions of the wave equation in expanding spacetimes. Journal of Hyperbolic Differential Equations, Vol. 16, No. 01, p. 35-58, arXiv:1801.08944 [gr-qc].
[6] M. Dafermos and I. Rodnianski (2013). Lectures on black holes and linear waves. Clay Mathematics Proceedings, Amer. Math. Soc. 17, p. 97-205, arXiv:0811.0354 [gr-qc].
[7] M. Ebert and M. Reissig (2018). Regularity theory and global existence of small data solutions to semi-linear de Sitter models with power non-linearity. Nonlinear Analysis: Real World Applications 40, p. 14-54, arXiv:1703.09838 [math.AP].
[8] A. Galstian and K. Yagdjian (2016). Global in time existence of self-interacting scalar field in de Sitter spacetimes. Nonlinear Analysis: Real World Applications 34, p. 110-139, arXiv:1602.03897.
[9] P. Girão. Private communication.
[10] F. John (1981). Blow up for quasilinear wave equations in three space dimensions. CPAM 34, p. 29-51.
[11] M. Narita (2007). Wave maps in gravitational theory. Advanced Studies in Pure Mathematics 47-1, Asymptotic Analysis and Singularities, p. 253-272.
[12] J. Luk. Introduction to nonlinear wave equations.
[13] H. Lindblad and I. Rodnianski (2010). Global stability of Minkowski space-time in harmonic gauge. Annals of Mathematics, Vol. 171, 3, p. 1401-1477, arXiv:math/0411109 [math.AP].
[14] S. Klainerman and P. Sarnak (1981). Explicit solutions of □u = 0 on the Friedmann-Robertson-Walker space-times. Annales de l'I. H. P., section A, Vol. 35, no. 4, p. 253-257.
[15] S. Klainerman (1982). Long time behavior of solutions to nonlinear wave equations. Proceedings of the ICM, Warsaw, p. 1209-1215.
[16] J. Natário and A. Sasane (2019). Decay of solutions to the Klein-Gordon equation on some expanding cosmological spacetimes. arXiv:1909.01292 [gr-qc].
[17] J. Natário and F. Rossetti (2021). Explicit formulas and decay rates for the solution of the wave equation in cosmological spacetimes. arXiv:2112.00771 [gr-qc].
[18] J. Oliver (2016). A vector field method for non-trapping, radiating spacetimes. Journal of Hyperbolic Differential Equations, Vol. 13, No. 04, p. 735-790, arXiv:1410.5154 [math.AP].
[19] J. Oliver and J. Sterbenz (2020). A vector field method for radiating black hole spacetimes. Analysis & PDE 13, No. 1, p. 29-92, arXiv:1705.10714 [math.AP].
[20] A. Rendall (2005). Asymptotics of solutions of the Einstein equations with positive cosmological constant. Ann. Henri Poincaré 5, p. 1041-1064, arXiv:gr-qc/0312020.
[21] H. Ringström (2004). On a wave map equation arising in general relativity. Comm. Pure Appl. Math. 57, p. 657-703, arXiv:gr-qc/0303062.
[22] H. Ringström (2008). Future stability of the Einstein-non-linear scalar field system. Invent. Math. 173, p. 123-208.
[23] H. Ringström (2013). On the topology and future stability of the universe. Oxford University Press.
[24] H. Ringström (2020). Linear systems of wave equations on cosmological backgrounds with convergent asymptotics. Astérisque No. 420, p. 1-526, arXiv:1707.02803 [gr-qc].
[25] C. D. Sogge (1995). Lectures on nonlinear wave equations. International Press.
[26] T. Tao (2006). Nonlinear Dispersive Equations: Local and Global Analysis. American Mathematical Society.
[27] K. Yagdjian (2014). Semilinear hyperbolic equations in curved spacetime. In: Ruzhansky M., Turunen V. (eds) Fourier Analysis. Trends in Mathematics. Birkhäuser, Cham, arXiv:1305.4404 [math.AP].
[28] K. Yagdjian (2019). Global existence of the self-interacting scalar field in the de Sitter universe. Journal of Mathematical Physics 60, 051503, arXiv:1706.07703 [math.AP].
[29] A. Vasy (2010). The wave equation on asymptotically de Sitter-like spaces. Advances in Mathematics, Vol. 223, no. 1, p. 49-97, arXiv:0706.3669 [math.AP].
ON THE CLASSIFICATION OF (g, K)-MODULES GENERATED BY NEARLY HOLOMORPHIC HILBERT-SIEGEL MODULAR FORMS AND PROJECTION OPERATORS
Shuji Horinaga
18 Jan 2022. arXiv:2201.06766v1 [math.NT]
We classify the (g, K)-modules generated by nearly holomorphic Hilbert-Siegel modular forms by the global method. As an application, we study the image of projection operators on the space of nearly holomorphic Hilbert-Siegel modular forms with respect to infinitesimal characters in terms of (g, K)-modules.
1. Introduction
1.1. Algebraicity of special L values. The arithmeticity of special L values is a central problem in modern number theory. In the motivic setting, Deligne [Del79] conjectured the algebraicity of critical L values up to the period. For the critical values attached to scalar-valued Hilbert-Siegel modular forms and Hermitian modular forms, Shimura proved their arithmeticity up to suitable periods in [Shi00] by using nearly holomorphic modular forms. The period can be expressed as a Petersson inner product times some power of π. Recently, in [HPSS21], Pitale, Saha, Schmidt and the author proved the arithmeticity of the critical values attached to vector-valued Siegel modular forms under a parity condition on the weights. The purpose of this paper is to prepare to remove the parity condition by investigating the (g, K)-modules generated by nearly holomorphic Hilbert-Siegel modular forms.
1.2. (g, K)-modules generated by nearly holomorphic Siegel modular forms. Let F be a totally real field with degree d and a the set of embeddings of F into R. Put G n = Res F/Q Sp 2n . Here Res is the Weil restriction and Sp 2n is the symplectic group of rank n. Let H n be the Siegel upper half space of degree n. Put g n = Lie(G n (R)) ⊗ R C. We denote by K n,∞ and Z n the stabilizer of i = ( √ −1 1 n , . . . , √ −1 1 n ) ∈ H d n and the center of the universal enveloping algebra U(g n ), respectively. Let K n,C be the complexification of K n,∞ . Set k n = Lie(K n,∞ ) ⊗ R C. We then have the well-known decomposition: g n = k n ⊕ p n,+ ⊕ p n,− . Here p n,+ (resp. p n,− ) is the Lie subalgebra of g n corresponding to the holomorphic tangent space (resp. anti-holomorphic tangent space) of H d n at i. We take a Cartan subalgebra of k n . Then it is a Cartan subalgebra of g n . The root system Φ of sp 2n (C) is Φ = { ±(e i + e j ), ±(e k − e ℓ ), 1 ≤ i ≤ j ≤ n, 1 ≤ k < ℓ ≤ n }.
We consider the set Φ + = { −(e i + e j ), e k − e ℓ , 1 ≤ i ≤ j ≤ n, 1 ≤ k < ℓ ≤ n } to be a positive root system. Let ρ be half the sum of positive roots. Note that g n = v∈a sp 2n (C). We say that a weight λ = (λ 1,v , . . . , λ n,v ) v∈a which lies in v∈a C n is k n -dominant if λ i,v − λ i+1,v ∈ Z ≥0 for any 1 ≤ i ≤ n − 1 and v ∈ a. We also say that a k n -dominant integral weight λ = (λ 1,v , . . . , λ n,v ) v∈a is anti-dominant if λ n ≥ n. For any k n -dominant integral weight λ, there exist the (parabolic) Verma module N (λ) with respect to a parabolic subalgebra p n,− ⊕ k n and a unique irreducible highest weight (g n , K n,∞ )-module L(λ) of highest weight λ. Then, L(λ) is the unique irreducible quotient of N (λ). For a (g n , K n,∞ )-module π, the symbol π ∨ denotes the contragredient of π in the sense of [Hum08].
For an automorphic form ϕ on G n (A Q ), we say that ϕ is nearly holomorphic if ϕ is p n,− -finite, i.e., U(p n,− ) · ϕ is finite-dimensional. The goal of this paper is to classify the indecomposable (g n , K n,∞ )modules generated by nearly holomorphic automorphic forms.
Theorem 1.2.1 (Theorem 6.5.1). Let π be an indecomposable (g n , K n,∞ )-module generated by a nearly holomorphic automorphic form on G n (A Q ). If F = Q, π is irreducible. If F = Q, the length of π is at 1 most two. More precisely, if π is reducible, there exists an odd integer i and (λ 1 , . . . , λ n−i ) ∈ Z n−i with λ 1 ≥ · · · ≥ λ n−i ≥ n − (i − 3)/2 such that π ∼ = N (λ 1 , . . . , λ n−i , n − (i − 3)/2, . . . , n − (i − 3)/2) ∨ . This result is a generalization of [PSS21]. The key idea of proof is the harmonic analysis of the space of nearly holomorphic automorphic forms on G n (A Q ), which is investigated in [Hor20b].
1.3. Projection operators. Fix a weight ρ and a congruence subgroup Γ. Let N ρ (Γ) be the space of nearly holomorphic Hilbert-Siegel modular forms of weight ρ with respect to Γ. For an infinitesimal character χ of Z n , we can define the projection operator p χ ∈ End(N ρ (Γ)) associated to χ. Then, the projection operator p χ commutes with the Aut(C) action as follows: Theorem 1.3.1 (Theorem 7.2.2). For any f ∈ N ρ (Γ) and σ ∈ Aut(C), we have
p χ ( σ f ) = σ p χ (f ).
For a k n -dominant integral weight λ and v ∈ a, put j v (λ) = #{j | λ 1,v ≡ λ j,v (mod 2)}. Set
∧ jv (λ) = ∧ jv (λ) std GL n (C) , ρ v = det λ1,v −1 ⊗ ∧ jv std GL n (C) , and ρ = v∈a ρ v ,
where std GLn(C) is the standard representation of GL n (C) and ∧ jv (λ) std GLn(C) is the j v (λ)-th exterior product of std GL n (C) . Theorem 1.3.2 (Theorem 7.2.3). Let λ = (λ 1,v , . . . , λ n,v ) v be a regular anti-dominant integral weight. Put ρ = v∈a (det λ1,v −1 ⊗∧ jv (λ) ) and N ρ (Γ, χ λ ) = p χ λ (N ρ (Γ)). If F = Q and λ n,v = n + 1, any modular form in N ρ (Γ, χ λ ) generates L(λ) or N (λ 1 , . . . , λ n−1 , n − 1) ∨ . If not, any modular form in N ρ (Γ, χ λ ) generates L(λ).
The following is the analogue of holomorphic projection. Corollary 1.3.3. Let λ = (λ 1,v , . . . , λ n,v ) v be an anti-dominant k n -dominant integral weight and ρ the irreducible highest weight representation of K n,C with highest weight λ. Suppose λ 1,v − λ n,v ≤ 1 and λ n,v ≥ n + 1 for any v ∈ a. If F = Q or λ n,v = n + 1 for some v ∈ a, the projection p χ defines a projection onto M ρ (Γ), the subspace of holomorphic modular forms.
We then characterize the nearly holomorphic Hilbert-Siegel modular forms which generate a holomorphic discrete series representation in terms of projections p χ under a mild assumption. This gives a generalization of Shimura's holomorphic projection.
Notation. We denote by Mat m,n the set of m × n-matrices. Put Mat n = Mat n,n with the unit 1 n . Let GL n and Sp 2n be the algebraic groups defined by GL n (R) = {g ∈ Mat n | det g ∈ R × } and Sp 2n (R) = g ∈ GL 2n (R) t g 0 n −1 n 1 n 0 n g = 0 n −1 n 1 n 0 n for a ring R, respectively. Set Sym n = {g ∈ Mat n | t g = g}. Let B n be the subgroup of Sp 2n defined by B n = a * 0 t a −1 a is a upper triangular matrix. .
The group B n is a Borel subgroup of Sp 2n with the Levi decomposition B n = T n N n . Here T n ⊂ B n is the maximal diagonal torus of Sp 2n . A parabolic subgroup P of Sp 2n is called standard if P contains B n . Let A P be the split component of P and A ∞ P the identity component of A P (R). We denote by P i,n and Q i,n the standard parabolic groups of Sp 2n with the Levi subgroups GL i × Sp 2(n−i) and (GL 1 ) i × Sp 2(n−i) , respectively. Set P n = P n,n . For a parabolic subgroup P , let δ P be the modulus character of P .
For n ∈ Z ≥1 , set H n = {z ∈ Sym n (C) | Im(z) is positive definite}. The space H n is called the Siegel upper half space of degree n. The Lie group Sp 2n (R) acts on H n by the rule a b c d (z) = (az + b)(cz + d) −1 , a b c d ∈ Sp 2n (R), z ∈ H n .
Put
K n,∞ = g = a b c d ∈ Sp 2n (R) a = d, c = −b .
Then K n,∞ is the group of stabilizers of i = √ −1 1 n ∈ H n . For simplicity the notation, the symbol i also denotes the element ( √ −1 1 n , . . . , √ −1 1 n ) ∈ H d n . Since the action of Sp 2n (R) on H n is transitive, we have H n ∼ = Sp 2n (R)/K n,∞ .
Let F be a totally real field with degree d. Let a = {∞ 1 , . . . , ∞ d } be the set of embeddings of F into R. We denote by A F and A F,fin the adele ring of F and the finite part of A F , respectively. For a place v, let F v be the v-completion of F . Put F ∞ = v∈a F v . For a non-archimedean place v, let O Fv be the ring of integers of F v .
Set G n = Res F/Q Sp 2n where Res is the Weil restriction. We define the standard parabolic subgroups P i,n , Q i,n and B n of G n by the Weil restriction of parabolic subgroups P i,n , Q i,n and B n of Sp 2n , respectively. Let W n be the Weyl group of Sp 2n . For an archimedean place v, set K n,v = K n,∞ . For the sake of simplicity, the symbol K n,∞ denotes the maximal compact subgroup v∈a K n,v of G n (R). Let K n,C be the complexification of v∈a K n,v . Put g n = Lie(G n (R)) ⊗ R C and k n = Lie( v∈a K n,v ) ⊗ R C. Set K v = Sp 2n (O Fv ) for a non-archimedean place v. We denote by Z n the center of the universal enveloping algebra U(g n ). We then obtain the well-known decomposition g n = k n ⊕ p n,+ ⊕ p n,− where p n,+ (resp. p n,− ) is the Lie subalgebra of g n corresponding to the holomorphic tangent space (resp. anti-holomorphic tangent space) of H d n at i. It is well-known that the Lie algebras g n and k n have the same Cartan subalgebra. We fix such a Cartan subalgebra. Then the root system of sp 2n (C) is
Φ = { ±(e i + e j ), ±(e k − e ℓ ), 1 ≤ i ≤ j ≤ n, 1 ≤ k < ℓ ≤ n }.
We consider the set
Φ + = { −(e i + e j ), e k − e ℓ , 1 ≤ i ≤ j ≤ n, 1 ≤ k < ℓ ≤ n }
to be a positive root system. Let ρ be half the sum of positive roots. Put ρ i,n = n − (i − 1)/2 and ρ n = ρ n,n . This corresponds to half the sum of roots in the unipotent subgroup of P i,n . For
λ = (λ 1,v , . . . , λ n,v ) v ∈ v∈a C n , we say that λ is a weight if λ i,v − λ i+1,v ∈ Z for any v and 1 ≤ i ≤ n − 1. The weight λ is k n -dominant if λ i,v − λ i+1
,v ≥ 0 for any v and 1 ≤ i ≤ n − 1. For a k n -dominant weight λ, let ρ λ be an irreducible finite-dimensional representation of k n . When λ is integral, i.e., any entry of λ is an integer, we identify ρ λ as the derivative of an irreducible finite-dimensional representation of K n,C with highest weight λ. We then write the representation of K n,C by the same ρ λ .
We fix a non-trivial additive character ψ = v ψ v of F \A F as follows: If F = Q, let
ψ p (x) = exp(−2π √ −1 y), x ∈ Q p , ψ ∞ (x) = exp(2π √ −1 x), x ∈ R,
where y ∈ ∪ ∞ m=1 p −m Z such that x − y ∈ Z p . In general, for an archimedean place v of F , put ψ v = ψ ∞ and for a non-archimedean place v with the rational prime p divisible by v, put ψ v (x) = ψ p (Tr Fv /Qp (x)).
For a function f on a group G, let r be the right translation, i.e., r(g)f (h) = f (hg) for any g, h ∈ G. For a subset H of G, we denote by f | H the restriction of f to H. Let G be a Lie group with the Lie algebra g. For a smooth function f on G and X ∈ g, put
X · f (g) = d dt t=0 f (g exp(tX)), g ∈ G.
For the action of G n (A Q ), we mean the G n (A Q,fin ) × (g n , K n,∞ )-action.
Nearly holomorphic Hilbert-Siegel modular forms and automorphic forms
In this section, we review the definition and arithmeticity of nearly holomorphic Hilbert-Siegel modular forms. We also recall some properties of nearly holomorphic automorphic forms on G n (A Q ) and basic terminologies of automorphic forms.
2.1. Differential operators on the Siegel upper half space. We recall the differential operators on H n . For details, see [Shi00,§12]. Fix a basis on Sym n (C) by
{(1 + δ i,j ) −1 (e i,j + e j,i ) | 1 ≤ i ≤ j ≤ n}.
We denote the basis by {ε ν }. For u ∈ Sym n (C), write u = ν u ν ε ν with u ν ∈ C and for z ∈ H n , write z = ν z ν ε ν with z ν ∈ C. For a non-negative integer e and a finite-dimensional vector space V , let S e (Sym n (C), V ) be the space of V -valued homogeneous polynomial maps of degree e on Sym n (C) and Ml e (Sym n (C), V ) the space of e-multilinear maps on Sym n (C) e to V . Note that S e (Sym n (C), V ) can be viewed as the space of symmetric elements of Ml e (Sym n (C), V ). For a representation ρ of GL n (C) on V , we define representations ρ ⊗ τ e and ρ ⊗ σ e on Ml e (Sym n (C), V ) by ((ρ ⊗ τ e )(a)h)(u 1 , . . . , u e ) = ρ(a)h( t au 1 a, . . . , t au e a) and ((ρ ⊗ σ e )(a)h)(u 1 , . . . , u e ) = ρ(a)h(a −1 u 1 t a −1 , . . . , a −1 u e t a −1 ), respectively. Here, h ∈ Ml e (Sym n (C), V ), a ∈ GL n (C) and (u 1 , . . . , u e ) ∈ Sym n (C) e . The symbols ρ ⊗ τ e and ρ ⊗ σ e also denote the restrictions to the representations space S e (Sym n (C), V ).
For f ∈ C ∞ (H n , V ), we define functions Df, Df, Cf, E, f on C ∞ (H n , S 1 (Sym n (C), V )) by
((Df )(z))(u) = ν u ν ∂f ∂z ν (z), ((Df )(z))(u) = ν u ν ∂f ∂z ν (z), ((Cf )(z))(u) = 4((Df )(z))(yuy), ((Ef )(z))(u) = 4((Df )(z))(yuy).
Here, u = ν u ν ε ν ∈ Sym n (C), z = ν z ν ε ν ∈ H n and y = Im(z). For f ∈ C ∞ (H n , V ), we say that f is nearly holomorphic if there exists e such that E e f = 0.
2.2. Definition. Let F be the fixed totally real field. For an integral ideal n of F , set
Γ(n) = {γ ∈ Sp 2n (O F ) | γ − 1 2n ∈ Mat 2n (n)} .
The group Γ(n) is called the principal congruence subgroup of G n (Q) of level n. We say that a subgroup Γ of G n (Q) is a congruence subgroup if there exists an integral ideal n such that Γ contains Γ(n) and [Γ : Γ(n)] < ∞. In this subsection, we regard G n (Q) as a subgroup of
G n (R) = v∈a Sp 2n (F v ) by γ −→ (∞ 1 (γ), . . . , ∞ d (γ)
). Similarly, we regard a congruence subgroup Γ of G n (Q) as a subgroup of G n (R). We define the factor of automorphy j :
G n (R) × H d n −→ GL n (C) d by j(g, z) = (c v z v + d v ) v ∈ v∈a GL n (C) = GL n (C) d , g = a v b v c v d v v ∈ G n (R), z = (z v ) v ∈ H d n .
For a representation ρ of K n,C on V , set j ρ = ρ • j. For g ∈ G n (R), we define the slash operator | ρ g on
C ∞ (H d n , V ) by (f | ρ g)(z 1 , . . . , z d ) = j ρ (g, z) −1 f (γ(z 1 , . . . , z d )),
for f ∈ C ∞ (H d n , V ) and (z 1 , . . . , z d ) ∈ H d n . Let Γ be a congruence subgroup of G n (Q). Suppose that a function f ∈ C ∞ (H d n , V ) satisfies the automorphy f | ρ γ = f for any γ ∈ Γ. Then, f has the Fourier expansion
(f | ρ γ)(z) = h∈Sym n (F ) c f (h, y, γ)e(tr(hz)), z ∈ H d n , y = Im(z)
where e(tr(hz)) = exp(2π √ −1 h j=1 tr(∞ j (h)z j )) for (z 1 , . . . , z d ) ∈ H d n and h ∈ Sym n (F ). We consider the following condition: If c f (h, y, γ) = 0, the matrix h is positive semi-definite. We call this condition the cusp condition. We say that a V -valued C ∞ -function f on H d n is a nearly holomorphic Hilbert-Siegel modular form of weight ρ with respect to Γ if f satisfies the following conditions:
• f is a nearly holomorphic function.
• f | ρ γ = f for all γ ∈ Γ.
• f satisfies the cusp condition. We denote by N ρ (Γ) the space of nearly holomorphic Hilbert-Siegel modular forms of weight ρ with respect to Γ. In the following, for modular forms, we mean a (nearly holomorphic) Hilbert-Siegel modular forms. By Köecher principle, we can remove the cusp condition if n > 1 or F = Q. For the proof, see [Hor20a,Proposition 4.1] for n > 1. We can give the same proof for the case of F = Q. For simplicity, if ρ = det k , we say that a modular form of weight det k is a modular form of weight k.
2.3. Aut(C) action for nearly holomorphic Hilbert-Siegel modular forms and the holomorphic projection. Let f be a nearly holomorphic modular form of weight ρ with respect to Γ. Take a model V of ρ and fix a rational structure of V . Then, Shimura introduced the Aut(C)-action on f . For details, see [Shi00,§14.11] and [HPSS21, §3.3]. For σ ∈ Aut(C), we denote by σ f the action of σ on f . For a weight ρ = v∈a ρ v , put σ ρ = v∈a ρ σ•v . The following theorem is proved in [Shi00, Theorem 14.12].
Theorem 2.3.1. For f ∈ N ρ (Γ) and σ ∈ Aut(C), one has σ f ∈ Nσ ρ (Γ).
Let M ρ (Γ) be the space of holomorphic functions in N ρ (Γ). Set N p ρ (Γ) = N (pv )v ρ (Γ) = {f ∈ N ρ (Γ) | E pv +1 v f = 0 for any v ∈ a}. The, N 0 ρ (Γ) = M ρ (Γ). Let ρ = v ρ v be a character of K n,C with the weight (k v ) v∈a . Take non-negative integers p v satisfies k v > n + p v or k v < n + (3 − p v )/2 for any v ∈ a. Put p = (p v ) v . Then, in [Shi00, §15.3], Shimura introduced a projection A : N p ρ (Γ) −→ M ρ (Γ).
The projection A is called the holomorphic projection. By Shimura [Shi00,Proposition 15.3], it commutes with the Aut(C) actions as follows:
Theorem 2.3.2. With the above notation, for any σ ∈ Aut(C) and f ∈ N ρ (Γ), one has A( σ f ) = σ A(f ).
In [HPSS21,§3.4], we define other projection operators p χ associated to infinitesimal characters χ of Z n . This can be viewed as a generalization of the holomorphic projection A. In this paper, we study the image of p χ in terms of (g n , K n,∞ )-modules.
2.4. Automorphic forms on G n (A Q ). Let P = M N be a standard parabolic subgroup of G n . For a smooth function φ : N (A Q )M (Q)\G n (A Q ) −→ C, we say that φ is automorphic if it satisfies the following conditions:
• φ is right K n -finite. • φ is Z n -finite.
• φ is slowly increasing. We denote by A(P \G n ) the space of automorphic forms on N (A Q )M (Q)\G n (A Q ). For simplicity, we write A(G n ) when P = G n . The space A(P \G n ) is stable under the action of G n (A Q ).
For parabolic subgroups P and Q of G n , we say that P and Q are associate if the split components A P and A Q are G n (Q)-conjugate. We denote by {P } the associated class of the parabolic subgroup P . For a locally integrable function ϕ on N P (Q)\G n (A Q ), set
ϕ P (g) = NP (Q)\NP (A Q ) ϕ(ng) dn
where P = M P N P is the Levi decomposition of P and the Haar measure dn is normalized by
NP (Q)\NP (A Q ) dn = 1.
The function ϕ P is called the constant term of ϕ along P . If ϕ lies in A(P \G n ), ϕ Q is an automorphic form on N Q (A Q )M Q (Q)\G n (A Q ) for a parabolic subgroup Q ⊂ P . We call ϕ cuspidal if ϕ Q is zero for any standard parabolic subgroup Q of G with Q P . We denote by A cusp (P \G n ) the space of cusp forms in A(P \G n ). For a character ξ of the split component A ∞ P , put A(P \G) ξ = {ϕ ∈ A(P \G n ) | ϕ(ag) = a ξ+ρP ϕ(g) for any g ∈ G n (A Q ) and a ∈ A ∞ P }. Here, ρ P is the character of A ∞ P corresponding to half the sum of roots of N P relative to A P . We define A cusp (P \G) ξ similarly. Set
A(P \G n ) Z = ξ A(P \G n ) ξ , A cusp (P \G n ) Z = ξ A cusp (P \G n ) ξ .
Here, ξ runs over all the characters of A ∞ P . Let a P be the real vector space generated by coroots associated to the root system of G n relative to A P . Then, by [MW95, Lemma I.3.2], there exist canonical isomorphisms For a function f on G n (A Q ) and g ∈ G n (A Q ), let f g be the function on M P (A Q ) 1 defined by m −→ m −ρP f (mg). Put
C[a P ] ⊗ A(P \G n ) Z ∼ = A(P \G), C[a P ] ⊗ A cusp (P \G n ) Z ∼ = A cusp (P \G n ).A(G n ) {P } = ϕ ∈ A(G)
ϕ Q,ak is orthogonal to all cusp forms on M Q (A Q ) 1 for any a ∈ A Q , k ∈ K n , and Q ∈ {P } .
By [MW95, Lemma I.3.4], A(G n ) {G} is equal to A cusp (G n ).
More precisely, Langlands [Lan06] had proven the following result:
Theorem 2.4.1. With the above notation, we have
A(G n ) = {P } A(G n ) {P } ,
where {P } runs through all associated classes of parabolic subgroups.
Let M be a standard Levi subgroup of G n and τ an irreducible cuspidal automorphic representation of M (A Q ). We say that a cuspidal datum is a pair (M, τ ) such that M is a Levi subgroup of G n and that τ is an irreducible cuspidal automorphic representation of M (A Q ). Take w ∈ W n . Put M w = wM w −1 and let P w = M w N w be the standard parabolic subgroup with Levi subgroup M w . The irreducible cuspidal automorphic representation Theorem 2.4.2. The space A(G n ) is decomposed as
τ w of M w (A Q ) is defined by τ w (m ′ ) = τ (w −1 m ′ w) for m ′ ∈ M w (A Q ).A(G n ) = (M,τ ) A(G n ) (M,τ ) .
Here, (M, τ ) runs through all equivalence classes of cuspidal data.
Let P be a standard parabolic subgroup of G n with standard Levi subgroup M and π an irreducible cuspidal automorphic representation of M (A Q ). Put A cusp (P \G n ) π = {ϕ ∈ A(P \G n ) | ϕ k ∈ A cusp (M ) π for any k ∈ K n }. by Theorem 2.4.2. Let ϕ cusp P be the cuspidal part of ϕ P . Then, there exists a finite number of irreducible cuspidal automorphic representations π 1 , . . . , π ℓ of M P (A Q ) such that
ϕ cusp P ∈ ℓ j=1 C[a P ] ⊗ A cusp (P \G n ) πj .
We say that a set ∪ M {χ π1 , . . . , χ π ℓ } is the set of cuspidal exponents of ϕ. Here, χ πj is the central character of π j . For a character χ of the center of M P (A Q ), we call the restriction of χ to A ∞ P the real part of χ.
Let us now introduce the notion for some induced representations on G n (A Q ) and Sp 2n (F v ). For a character µ of GL n (A F ), we mean an automorphic character, i.e., GL n (F ) is contained in the kernel of µ. Let µ be a character of GL i (A F ) and an irreducible cuspidal automorphic representation π of G n−i (A Q ). We define the space Ind
Gn(A Q )
Pi,n(A Q ) (µ|·| s ⊠π) by the space of smooth functions ϕ on N Pi,n (A Q )P i,n (Q)\G n (A Q ) such that
• ϕ is an automorphic form.
• For any k ∈ K n , the function ϕ k lies in the µ| · | s ⊠ π-isotypic component of L 2 disc (M Pi,n (A Q )). We write I i,n (s, µ, π) = Ind
Gn(A Q )
Pi,n(A Q ) (µ| · | s ⊠ π) and I n (s, µ) = Ind
Gn(A Q )
Pn(A Q ) µ| · | s . For a place v of F , we similarly write
I i,n,v (s, µ v , π v ) = Ind Gn(Fv )
Pi,n(Fv) (µ v | · | s ⊠ π v ) and I n,v (s, µ v ) = Ind
Gn(Fv) Pn(Fv ) µ v | · | s . Here, µ v is a character of GL i (F v ) and π v is an irreducible representation of Sp 2(n−i) (F v ).
2.5. Nearly holomorphic automorphic forms. For an automorphic form ϕ on G n (A Q ), we say that ϕ is nearly holomorphic if ϕ is p n,− -finite. The symbol N (G n ) denotes the space of nearly holomorphic automorphic forms on G n (A Q ). Put N (G n ) (M,τ ) = N (G n ) ∩ A(G n ) (M,τ ) . We say that an irreducible cuspidal automorphic representation π = v π v of G n (A Q ) is holomorphic if π v is an irreducible unitary highest weight representation of Sp 2n (F v ) for any v ∈ a. In [Hor20b, Theorem 1.2], we determine the cuspidal components of nearly holomorphic automorphic forms as follows:
Proposition 2.5.1. Let P be a standard parabolic subgroup of G n with the standard Levi subgroup M .
(1) With the above notation, the space
N (G n ) (M,π) is non-zero only if P is associated to Q i,n for some i. (2) Let Π = µ 1 ⊠ · · · ⊠ µ i ⊠ π be an irreducible cuspidal automorphic representation of M Qi,n (A Q ) = (Res F/Q GL 1 )(A F ) i × G n−i (A Q ). If the space N (G n ) (Qi,n,Π) is non-zero, we have • µ 1 = · · · = µ i . • π is a holomorphic cuspidal automorphic representation of G n−i (A Q ).
Let µ be a character of GL 1 (A F ). For simplicity the notation, we denote by µ the character µ ⊠ · · · ⊠ µ of GL 1 (A F ) i . In [Hor20b], we determine the structure of the space N (G n ) (M,τ ) explicitly under several assumptions.
2.6. Modular forms and automorphic forms. We recall the correspondence of modular forms on the Siegel upper half space and automorphic forms on G n (A Q ). Fix a weight ρ and a congruence subgroup
Γ. We embed Γ into G n (A Q,fin ) diagonally. Let K Γ be the closure of Γ in G n (A Q,fin ). Then, K Γ is an open compact subgroup of G n (A Q,fin ).
By the strong approximation, one has G n (
A Q ) = G n (Q)G n (R)K Γ . For f ∈ N ρ (Γ) and v * ∈ ρ * , the dual of ρ, put ϕ f,v * (γg ∞ k) = (f | ρ g ∞ )(i), v * , γg ∞ k ∈ G n (Q)G n (R)K Γ = G n (A Q ). This is well-defined. The map f ⊗ v * −→ ϕ f,v * induces the inclusion N ρ (Γ) ⊗ ρ * −→ N (G n ). (2.6.1) Put N (G n ) KΓ ρ = ϕ ∈ N (G n )
ϕ generates ρ under the action of K n,∞ and ϕ(gk) = ϕ(g) for any g ∈ G n (A Q ) and k ∈ K Γ .
By the choice of embedding U(n) ֒−→ GL n (C), the map (2.6.1) induces the isomorphism
N ρ (Γ) ⊗ ρ * ∼ − − → N (G n ) KΓ ρ . (2.6.2)
For details, see [HPSS21, §3.2]. For a representation generated by f ∈ N ρ (Γ), we mean the representation generated by ϕ f,v * with 0 = v * ∈ ρ * . Note that the representation is independent of the choice of v * = 0.
Computations of unitary highest weight modules with a regular integral infinitesimal character
In this section, we introduce the parabolic BGG category O p and unitarizable modules in this category. For later use, we compute extensions of certain modules and multiplicities of K n,∞ -types.
3.1. parabolic BGG category. For simplicity the notation, throughout this section, we assume F = Q. Let n be a nilpotent subalgebra of g n . For a g n -module M , we say that M is locally n-finite if U(n) · v is finite-dimensional for any v ∈ M . We consider the parabolic subalgebra p = k n ⊕ p n,− . We define the full subcategory O p of the category of g n -modules whose objects M satisfy the following three conditions:
• M is finitely generated.
• M decomposes as a direct sum of irreducible finite-dimensional representations of k n .
• M is locally p n,− -finite. The category O p is called the parabolic BGG category O p with respect to p. For further properties of the BGG category O and a parabolic BGG category O p , see [Hum08].
Let us introduce the Verma modules. For a k n -dominant weight λ, let V λ be a model of ρ λ . We regard V λ as a p-module by letting p n,− act trivially. Put
N (λ) = U(g n ) ⊗ U (p) V λ .
Then, N (λ) has the canonical left g n -module structure. The module N (λ) is called the (parabolic) Verma module of weight λ. Since N (λ) is generated by a highest weight vector, N (λ) has the unique irreducible quotient L(λ). Note that N (λ) and L(λ) are objects in O p .
For a g n -module M , we say that M is a highest weight module if there exists a highest weight vector v ∈ M such that v generates M . By definition, Verma modules are highest weight modules. Moreover, N (λ) has the following universality: For a highest weight module M with the highest weight λ, there exists a surjective homomorphism N (λ) ։ M .
For a weight λ, let χ λ be the infinitesimal character with the Harish-Chandra parameter λ + ρ. Then, the Verma module N (λ) has the infinitesimal character χ λ . Note that for χ 0 , we mean the infinitesimal character of the trivial representation. The infinitesimal characters χ λ and χ µ are the same if and only if there exists w ∈ W n such that λ = w · µ. Here · is the dot action defined by
w · µ = w(µ + ρ) − ρ. For a weight λ, put O λ = {w · λ | w ∈ W n }. We say that λ is (dot-)regular if #O λ = #W n . If λ is not of (dot-)regular, we say that λ is (dot-)singular.
For a nearly holomorphic automorphic form ϕ, we consider the g n -module M generated by ϕ under the right translation. Then, M is a (g n , K n,∞ )-module. By the definition of the parabolic BGG category O p , the g n -module M is an object in O p .
First reduction point and unitarizability.
We recall the definition of the first reduction point in the sense of [EHW83]. Let λ = (λ 1 , . . . , λ n ) be a k n -dominant weight with λ n = n. We say that a real number r 0 = r 0 (λ) is the first reduction point if the module N (λ + r 0 (−1, . . . , −1)) is reducible and N (λ + r(−1, . . . , −1)) is irreducible for r < r 0 . Set p(λ) = #{i | λ i = λ n } and q(λ) = #{i | λ i = λ n + 1}. One can compute the first reduction point explicitly by the result of Enright-Howe-Wallach [EHW83, Theorem 2.10].
Theorem 3.2.1. Let λ = (λ 1 , . . . , λ n ) be a k n -dominant weight with λ n = n. Then, the first reduction point r 0 equals to (p(λ) + q(λ) + 1)/2. Let r 0 be the first reduction point. Then for r < r 0 , the irreducible representation L(λ+r(−1, . . . , −1)) is unitarizable. More precisely, we have the following by [EHW83, Theorem 2.8]:
Theorem 3.2.2. With the same notation as in Theorem 3.2.1, L(λ + r(−1, . . . , −1)) is unitarizable if and only if either of the following conditions holds:
• r ≤ (p(λ) + q(λ) + 1)/2.
• λ ∈ (1/2)Z n and r ≤ p(λ) + q(λ)/2.
3.3.
Dot-orbits of regular integral weights and unitary highest weight modules. Let λ = (λ 1 , . . . , λ n ) ∈ Z n be a k n -dominant integral weight. Let |λ| be a multiset {|λ 1 − 1|, |λ 2 − 2|, . . . , |λ n − n|}. Then, the multiset is invariant under the dot-action, i.e., |λ| = |w · λ| for any w ∈ W n . We then say that λ is anti-dominant if λ n ≥ n. We compute the dot-orbits of regular anti-dominant integral weights. Note that for any regular integral weight λ, there exists σ ∈ W such that σ · λ is anti-dominant. Moreover, such an anti-dominant weight is unique in the dot-orbit O λ .
Lemma 3.3.1. Let λ = (λ 1 , . . . , λ n ) be a regular anti-dominant integral weight and σ an element of the Weyl group W n . Suppose that the weight σ · λ is k n -dominant and L(σ · λ) is unitarizable. If σ · λ = λ, one has λ n = n + 1.
Proof. Put ω = σ · λ = (ω 1 , . . . , ω n ). Suppose that ω = λ and L(ω) is unitarizable. By ω = λ and the uniqueness of anti-dominant weights in O λ , one has ω n < n. Set p = p(ω) and q = q(ω). Since L(ω) is unitarizable, one has
n − p − q/2 ≤ ω n < n. (3.3.1)
If ω n > n − p, there exists n − p + 1 ≤ j ≤ n such that ω j − j = 0. Then, ω is singular. This is contradiction. Similarly, if ω n < n − p, one has q > 0 by (3.3.1). By (3.3.1) and the unitarizability of L(ω), either of the following statements holds:
• There exists j such that ω j = j.
• There exists i < j such that
ω i − i = j − ω j .
Thus, ω is singular. This is contradiction. Hence, one has ω n = n − p and in particular 1 ∈ |ω| = |λ|.
Indeed, |ω| ∋ |ω n−p+1 − (n − p + 1)| = |n − p − (n − p + 1)| = 1.
Since λ is anti-dominant, we obtain λ n = n + 1. This completes the proof.
For a k n -dominant integral weight λ, we put
O unit λ = {µ ∈ O λ | µ is k n -dominant and L(µ) is unitarizable}.
By the proof of the above lemma, we obtain the following corollary:
Corollary 3.3.2. Let λ be a regular anti-dominant integral weight. (1) If λ n > n + 1, one has O unit λ = {λ}. (2) If λ n = n + 1, one has O unit λ = {λ (0) , . . . , λ (p(λ)) }, where λ (j) = (λ 1 , . . . , λ n−j , n − j, . . . , n − j).
Proof. If #O unit λ > 1, one has λ n = n + 1 by Lemma 3.3.1. We may assume λ n = n + 1. In this case, the representation L(λ (ℓ) ) is unitary for any 1 ≤ ℓ ≤ p(λ). Thus, {λ (0) , . . . , λ p(λ) } ⊂ O unit λ . We prove the converse. Take µ = (µ 1 , . . . , µ n ) ∈ O unit λ . By the proof of Lemma 3.3.1, we obtain µ n = n − p(µ). Since λ is regular, the multiset |λ| is a set. Note that {|λ 1 − 1|, . . . , |λ n − n|} = {|µ 1 − 1|, . . . , |µ n − n|}. By the k n -dominance of λ and µ, we obtain
λ i − i = µ i − i for 1 ≤ i ≤ n − p(µ).
Thus, one has λ (p(µ)) = µ. This completes the proof.
3.4. Multiplicities of certain K-types. In this subsection, we distinguish L(λ) in terms of K n,∞ -types in the orbit O unit λ . For this, we first recall the embeddings of highest weight modules into principal series representations.
Bn(R) (µ 1 | · | s1 ⊠ · · · ⊠ µ n | · | sn ) with unitary characters µ i of R × ,s i = λ n−i+1 − n + i − 1, µ i = sgn λn−i+1 for any 1 ≤ i ≤ n.
For 0 ≤ j ≤ n, let ∧ j be the j-th exterior product of the standard representation of k n . This is an irreducible representation of k n with highest weight ( j 1, . . . , 1, 0, . . . , 0). Put
j(λ) = #{ℓ | λ 1 ≡ λ ℓ mod 2}.
The following statement follows from the Littlewood-Richardson rule.
Lemma 3.4.2. For a k n -dominant integral weight λ, one has
Hom kn (∧ j(λ) ⊗ det λ1−1 , N (λ)| kn ) = 0.
Proof. For an integral weight ω = (ω 1 , . . . , ω n ), we consider the following two step operation:
Step
1. Put ω ′ 1 = ω 1 . For 2 ≤ i ≤ n, set ω ′ i = ω i−1 if ω i−1 − ω i is even ω i−1 − 1 if ω i−1 − ω i is odd.
Step 2. Consider the set
X = X(ω) = {i | 2 ≤ i ≤ n, ω i−1 = ω ′ i }. Let a be the maximal element in X. We define a new set X ′ = X ′ (ω) by X ′ = X if #X is even and by X ′ = X \ {a} if #X is odd. Put ω ′′ i = ω ′ i if i ∈ X ′ ω ′′ i = ω ′ i + 1 if i ∈ X ′ .
We define a map g : Z n −→ Z n by g((ω 1 , . . . , ω n )) = (ω ′′ 1 , . . . , ω ′′ n ). Note that the image of k n -dominant weight is k n -dominant. We denote by g ℓ the ℓ-th composite of g. Set g ℓ (λ) = (λ 1,ℓ , . . . , λ n,ℓ ) and
a ℓ = n i=1 (λ i,ℓ − λ i,ℓ+1 )
. Then, by definition, a i ∈ 2Z. By the well-known correspondence of young diagrams and irreducible finite-dimensional representations of k n , one can show that (a 1 , . . . , a n ) is k ndominant. By the definition of g and the Littlewood-Richardson rule, the irreducible representation of k n with highest weight g n−1 (λ) occurs in the tensor product representation ρ λ ⊗ ρ (a1,...,an) of k n . We next compute the weight g n−1 (λ) = (λ 1,n−1 , . . . , λ n,n−1 ). By the construction, g n−1 (λ) is of the form (λ 1 , . . . , λ 1 , λ 1 − 1, . . . , λ 1 − 1). Indeed, by induction on ℓ, one has λ 1 − λ 1+ℓ,ℓ ≤ 1 for any 1 ≤ ℓ.
Thus, λ 1 − λ n,n−1 ≤ 1. We claim j(ω) = j(g(ω)) for any k n -dominant weight ω = (ω 1 , . . . , ω n ). Set g(ω) = (ω ′′ 1 , . . . , ω ′′ n ). We write X ′ (ω) = {x 1 , . . . , x 2m } with x 1 < x 2 < · · · < x 2m . Then, for any x i ≤ ℓ ≤ x i+1 with 0 ≤ i ≤ 2m − 1, one has ω 1 ≡ ω ℓ + (1 + (−1) i+1 )/2 mod 2.
Here, x 0 = 1. In particular, for any 1 ≤ ℓ ≤ m, we have ω x 2ℓ−1 ≡ ω x 2ℓ mod 2. By ω x ≡ ω ′ x + 1 mod 2 for any x ∈ X ′ (ω), we have j(ω) = j(g(ω)). Hence we obtain
g n−1 (λ) = ( j(λ) λ 1 , . . . , λ 1 , λ 1 − 1, . . . , λ 1 − 1).
By the claim, we see that ρ g n−1 (λ) = ∧ j(λ) ⊗ det λ1−1 occurs in ρ λ ⊗ ρ (a1,...,an) . The space U(p n,+ ) decomposes as
U(p n,+ ) = (b1,...,bn)∈(2Z) n ,b1≥···≥bn ρ (b1,...,bn)
as a representation of k n . Thus, ρ (a1,...,an) occurs in U(p n,+ ). Note that the restriction of N (λ) to k n is semisimple and N (λ)| kn = U(p n,+ )| kn ⊗ C ρ λ . We then have
Hom kn (∧ j(λ) ⊗ det λ1−1 , N (λ)| kn ) = 0.
This completes the proof.
Since a weight of ∧ ℓ is a permutation of ( j 1, . . . , 1, 0, . . . , 0), one has the following:
Proposition 3.4.3. For a regular anti-dominant integral weight λ and 1 ≤ j ≤ p(λ), one has dim C Hom kn (∧ j(λ) ⊗ det λ1−1 , L(λ)| kn ) = 1 and Hom kn (∧ j(λ) ⊗ det λ1−1 , L(λ (j) )| kn ) = 0.
Proof. Let I n (µ 1 , . . . , µ n ) be the principal series representation Ind Gn(R) Bn(R) (µ 1 ⊠ · · · ⊠ µ n ). Here, µ i are real valued characters of R × . Take ε j ∈ {0, 1} such that µ j (−1) = (−1) εj . Through the weight structure of ∧ j , one can find that the Hom space
Hom kn (∧ j , I n (µ 1 , . . . , µ n )| kn )
is non-zero if and only if n ℓ=1 ε ℓ = j by the Frobenius reciprocity. By the Frobenius reciprocity, one has the multiplicity free, i.e., dim C Hom kn (∧ j , I n (µ 1 , . . . , µ n )| kn ) ≤ 1.
For any ω ∈ O unit λ , the highest weight module L(ω) occurs in constituents of the induced representation I n (sgn λn | · | λn−n , . . . , sgn λ1 | · | λ1−1 ) by Theorem 3.4.1. Then, the statement follows from Lemma 3.4.2 and the above multiplicity free. This completes the proof.
For a k n -type σ, put O unit λ (σ) = {π ∈ O unit λ | Hom kn (σ, π| kn ) = 0}.O unit λ (det λ1−1 ⊗ ∧ j(λ) ) = {L(λ)}.
Proof. The statement follows immediately from Corollary 3.3.2 and Proposition 3.4.3.
3.5. Extensions of certain modules. Fix an odd integer i. Let λ = (λ 1 , . . . , λ n ) be a k n -dominant integral weight such that λ n−i+1 = · · · = λ n = n − (i − 3)/2. Put λ ′ = (λ ′ 1 , . . . , λ ′ n ) = (λ 1 , . . . , λ n−i , n − (i + 1)/2, . . . , n − (i + 1)/2). By |λ| = |λ ′ | as the multisets, the weights λ and λ ′ have the same dot-orbit.
Lemma 3.5.1. One has dim C Ext O p (L(λ ′ ), L(λ)) = 1.
Moreover an indecomposable module M with a non-trivial exact sequence
0 −→ L(λ) −→ M −→ L(λ ′ ) −→ 0 is isomorphic to N (λ ′ ).
Proof. Set λ ′′ = λ ′ + ((i + 1)/2, . . . , (i + 1)/2). Then, λ ′′ satisfies the condition as in Theorem 3.2.1, i.e., the n-th entry of λ ′′ is n. By p(λ ′′ ) = i and q(λ ′′ ) = 0, the weight λ ′ corresponds to the first reduction point r 0 (λ ′′ ). Thus, the Verma module N (λ ′ ) is reducible and N (λ) is irreducible. Let ω = (ω 1 , . . . , ω n ) be a k n -dominant integral weight such that L(ω) is a constituent of N (λ ′ ). Then, there exists w ∈ W n such that ω = w · λ ′ and ω ≤ λ ′ . We then have λ ′ j ≤ ω j for any j. The multiplicity of |λ j − j| in the multiset |λ| is one if and only if 1 ≤ j ≤ n − i + 2 or j = n − (i − 3)/2. Thus, ω satisfies the following conditions:
• ω j = λ j for 1 ≤ j ≤ n − i.
• For j ≥ n − i + 3, there exist k and ℓ such that ω k − k = ℓ − ω ℓ = |λ j − j|.
• For j = n − i + 1, n − i + 2, there exists k such that |ω k − k| = |λ j − j|. Indeed, it suffices to check the first condition. Suppose that there exist j ≤ n − i and k such that
ω k − k = −(λ j − j). Then, ω k = k + j − λ j ≤ k + n − i − (n − (i − 3)/2) = k − (i + 3)/2 < n − (i + 1)/2 ≤ λ ′
n . This contradicts to λ ′ k ≤ ω k . We note that λ is a minimal element in O unit λ . The candidates of ω are λ, λ ′ , (λ 1 , . . . , λ n−i , n − (i − 1)/2, n − (i + 1)/2, . . . , n − (i + 1)/2) and (λ 1 , . . . , λ n−i , n − (i + 1)/2, . . . , n − (i + 1)/2, n − (i − 1)/2).
Since L(ω) occurs in a constituent of N (λ ′ ), the representation ρ ω occur in the restriction of N (λ ′ ) to k n . However, the last two candidates of ω do not occur in the restriction N (λ)| kn . Thus, any constituent of N (λ) is of the form L(λ) and L(λ ′ ). The irreducible representations ρ λ and ρ λ ′ of k n have the multiplicity one in N (λ ′ ). Hence the multiplicities of L(λ) and L(λ ′ ) in the constituent of N (λ ′ ) are at most one. Since N (λ ′ ) is reducible and L(λ ′ ) is the irreducible quotient of N (λ ′ ), we obtain a non-split exact sequence
0 −→ L(λ) −→ N (λ ′ ) −→ L(λ ′ ) −→ 0.
By applying Hom O p ( · , L(λ)) to this short exact sequence, we obtain the following long exact sequence
0 −→ Hom O p (L(λ ′ ), L(λ)) −→ Hom O p (N (λ ′ ), L(λ)) −→ Hom O p (L(λ), L(λ)) −→ Ext 1 O p (L(λ ′ ), L(λ)) −→ Ext 1 O p (N (λ ′ ), L(λ)) −→ Ext 1 O p (L(λ)
, L(λ)) −→ · · · . By definition, we have Hom O p (N (λ ′ ), L(λ)) = 0. Consider an extension
0 −→ L(λ) −→ M −→ N (λ ′ ) −→ 0.
Let v be a weight vector in M of weight λ ′ such that the image of v in N (λ ′ ) is a non-zero highest weight vector. Since the weight of v is highest in M , there exists a splitting N (λ ′ ) −→ M by the universality of N (λ ′ ). Thus, the short exact sequence splits, i.e., Ext 1 O p (N (λ ′ ), L(λ)) = 0. We then obtain Ext 1 O p (L(λ ′ ), L(λ)) ∼ = Hom O p (L(λ), L(λ)). This is of dimension one. This completes the proof.
Siegel Eisenstein series
In this section, we compute the cuspidal components and exponents of Siegel Eisenstein series via the Siegel-Weil formula. We also show the near holomorphy of certain Siegel Eisenstein series.
V ) = ω ψ (V ) = v ω ψ,v (V v ) the Weil representation of G n (A Q ) × O(V )(A F ) on S(V (A F ) n ). Here V v is the v-completion of V for a place v of F . For ϕ ∈ S(V (A F ) n ), set θ(g, h; ϕ) = v∈V (F ) n ω(g)ϕ(v · h), g ∈ G n (A Q ), h ∈ O(V )(A F ).
The function θ is called the theta function. Put
I(g, ϕ) = O(V )(F )\O(V )(AF ) θ(g, h; ϕ) dh, g ∈ G n (A Q ).
The following condition (W) is called the Weil's convergence condition:
V is anisotropic m − r > n + 1,(W)
where r is the Witt index of V . If V satisfies the condition (W), the theta integral I( · , ϕ) converges absolutely.
For ϕ ∈ S(V (A F ) n ), set f ϕ (g) = ω ψ (g)ϕ(0). Let χ V be the quadratic character associated to V . Then f ϕ is an element of I n (s 0 , χ V ). We denote by f s,ϕ the standard section of I n (s, χ V ) such that
f s0,ϕ = f ϕ .
For a standard section f s of I n (s, µ), put
E(g, s, f ) = γ∈Pn(Q)\Gn(Q) f s (γg), E(g, s, ϕ) = E(g, s, f ϕ ).
The Siegel-Weil formula states the relationship between E(g, s 0 , ϕ) and I(g, ϕ). In this paper, we use the following Siegel-Weil formula due to Kudla-Rallis [KR88]. and c = 1 If m > n + 1 2 If m ≤ n + 1.
4.2. The representation R n . Let V v be a m-dimensional quadratic space over F v with m ∈ 2Z ≥0 . The character χ V denotes the quadratic character associated to V . The map ϕ −→ f ϕ induces a Sp 2n (F v )- intertwining map ω ψ,v (V v ) −→ I n (s 0 , χ). We denote by R n (V v ) the image of the intertwining map. Then, R n (V v ) can be viewed as the O(V v )(F v )- coinvariants of ω ψ,v (V v ).
Proposition 4.2.1. With the above notation, we obtain the following:
(1) For a non-archimedean place v, if χ 2 = 1 and s 0 ≥ 0, one has
I n,v (s 0 , χ) = R n (V 1 ) + R n (V 2 ).
Here V 1 and V 2 are the m-dimensional inequivalent quadratic spaces over F v with χ = χ V1 = χ V2 . For a quadratic space V over F and a place v of F , let V v be the v-completion of F . Put
R n (V ) = v R n (V v )
where v runs over all places of F . Proof. Suppose µ 2 = 1. This case is clear by the holomorphy of intertwining operators as in [Ike92].
Next, we suppose µ 2 = 1 and s 0 > 1. By Proposition 4.2.1 (3), we have I n (s 0 , µ) pn,−-fin = v<∞ I n,v (s 0 , µ v ) ⊗ L(m/2, . . . , m/2). (4.3.1)
We claim that the representation I n (s 0 , µ) pn,−-fin is contained in V R n (V ) where V runs through the m-dimensional quadratic spaces over F such that V satisfies the Weil's convergence condition (W) and χ V = µ. Take a function f = v f v in I n (s 0 , µ) pn,−-fin . We may assume that each local functions f v lie in R n (V v ) for some quadratic space V v over F v by Proposition 4. 2.1 (1). Let ε v be the Hasse invariant of V v . We denote by V (a, b) the non-degenerate real quadratic space with the signature (a, b). The Hasse invariants of V (m, 0) and V (m − 2, 2) are 1 and −1, respectively. Fix an archimedean place w. Then, there exists the quadratic space
W = v W v over F such that W v ∼ = V v for any non-archimedean place v, W v ∼ = V (m, 0) for any archimedean place v = w and W w = V (m, 0) if v<∞ ε v = 1 V (m − 2, 2) if v<∞ ε v = −1.
By Proposition 4.2.1 (2) and m > n+3, the quadratic space W satisfies the condition (W) and f ∈ R n (W ). Hence the claim holds. By the claim, the theta integral converges absolutely. This states that the theta integral is an intertwining map under the action of G n (A Q ). Hence we obtain the following diagram:
S(W (A F ) n ) − −−− → N (G n ) R n (W ) − −−− → I n (s 0 , χ W ) pn,−-fin .
Here the upper horizontal line is given by ϕ −→ I( · , ϕ), the left vertical line is the canonical surjective morphism and the right vertical line is given by f −→ E( · , s 0 , f ). By the definition of the theta integral, it factors through the O(W )(A F )-coinvariants R n (W ) of ω ψ . By the Siegel-Weil formula Theorem 4.1.1, the diagram is commutative. Hence the right vertical map is intertwining under the action of G n (A Q ). For the injectivity, we consider the constant term of I( · , ϕ) along P n . By the straightforward computation, one has I( · , ϕ) Pn (g) = ω(g)ϕ(0). Thus the right vertical line is injective. For the case F = Q, it suffices to show that the space of induced representations (4.3.1) are contained in V R n (V ) where V runs over all quadratic spaces over F with dimension m such that V satisfies the condition (W). The proof is similar. Take f = f v in the induced representation (4.3.1). We may assume f v ∈ R n (V v ) for any place v. Let ε v be the Hasse invariant of V v . Fix an archimedean place w. If v<∞ ε v = 1, we can find a positive definite quadratic space W over F such that f v ∈ R n (W v ). If v<∞ ε v = −1, we can find a quadratic space W over F such that W v ∼ = V v for any non-archimedean place v, W v is positive definite for any archimedean place v = w, W w is of signature (m − 2, 2) and f ∈ R n (W ). Then, W is anisotropic. We obtain the claim. This completes the proof.
In the following of this section, we assume F = Q.
Near holomorphy of Siegel Eisenstein series for s = 1.
Proposition 4.4.1. Let f s = v f v,s be a standard section of I n (s, µ) such that f 1 ∈ I n (1, µ) pn,−-fin . Then the Eisenstein series E( · , 1, f ) is nearly holomorphic, i.e., there exists ℓ such that p ℓ n,− · E( · , 1, f ) = 0.
Proof. We may assume that there exists a (n + 3)/2-dimensional quadratic space W v over F v such that f v,1 ∈ R(W v ) for any place v of F . Here W v is positive definite for the archimedean place v. Let ε v be the Hasse invariant of W v . If v ε v = 1, the corresponding Eisenstein series E( · , 1, f ) generates the representation v R(W v ), by the same method as in the proof of Lemma 4.3.1. Then, the archimedean component is a highest weight representation. In particular, E( · , 1, f ) is nearly holomorphic.
Suppose v ε v = −1. Let V be the quadratic space over F = Q with dimension (n + 3)/2 such that V v ∼ = W v for non-archimedean place v. Then, for the archimedean place v, we may assume that the quadratic space V v has the signature (n + 1, 2). We consider the constant term of E( · , 1, f ) along P 1,n .
For (t, g) ∈ GL 1 (A F ) × G n−1 (A Q ) and k ∈ K n , one has E((t, g)k, s, f ) P1,n = µ(t)|t| s+(n+1)/2 E g, s + 1 2 , ι * r(k)f (4.4.1)
+ µ(t) −1 |t| −s+(n+1)/2 E g, s − 1 2 , ι * U (s, µ)r(k)f
where ι is the embedding G n−1 ֒−→ P 1,n ֒−→ G n and U (s, µ) is the intertwining integral defined by
U (s, µ)f s = U1(AF ) f s (w 1 ug) du for w 1 = 0 0 −1 0 0 1 n−1 0 0 1 0 0 0 0 0 0 1 n−1 , U 1 = u = 1 x y 0 0 1 n−1 0 0 0 0 1 0 0 0 − t x 1 n−1 x ∈ A n−1 F , y ∈ A F . We denote by U v (s, µ)f v,s the local intertwining integral so that v U v (s, µ)f v,s = U (s, µ)f s .(s, µ) such that F s ′ 0 = ι * U (s, µ)r(k)f s | s=s ′ 0 for some s ′ 0 with Re(s ′ 0 ) ≫ 0.
We claim that there exists a non-zero constant c such that Res s=1/2 E(g, s, F ) = cE(g, 1/2, ι * U (s, µ)r(k)f ).
U * v (s, µ)f s = ι * U v (s, µ)f v,s if v is non-archimedean. Γ((s + n + 4)/2)Γ((s − 1)/2) Γ(s) ι * U v (s, µ)f v,s if v is archimedean.
For an unramified place v, by [KR94, (1.23)], one has Remark 4.4.2. In the above proof, we use the formula of d n,v (s, ℓ) as in [KR88,Lemma 4.6]. We should note that there is a typo in this formula. The correct one is d n,v (s, ℓ) = ( √ −1) nk 2 −ns (2π) n π n(n−1)/2 Γ n (s) Γ n ((s + ρ n + ℓ)/2)Γ n ((s + ρ n − ℓ)/2) .
ι * U (s, µ)f • v,s (1) = L v (s + (n − 1)/2, µ)L v (2s, µ 2 ) L v (s + (n + 1)/2, µ)L v (2s + n − 1, µ 2 ) f • v,−s (1) (4.4.3) where f • v,
Indeed, by the straightforward computation, d n,v (s, ℓ) equals to a non-zero constant multiple of a confluent hypergeometric function ξ(1, 0; (s + ρ n + ℓ)/2, (s + ρ n − ℓ)/2). For the explicit formulas of ξ, see [Shi82] and [Shi00, pp. 140].
4.5. Cuspidal components of Siegel Eisenstein series at s = 0. We recall the properties of Siegel Eisenstein series at s = 0. If the rank n is odd and µ is quadratic, one has
I n (0, µ) = V R n (V ) ⊕ C R n (C),
where V runs over all the quadratic spaces of dimension n + 1 over F such that µ = χ V and C = {W v } v runs through all incoherent families such that µ v = χ Wv for any place v of F . For the definition of incoherent family, see [KR94,pp. 7]. By [KR94,Theorem 4.10], one can identify a certain subspace of automorphic forms as the space of Eisenstein series at s = 0 as follows:
Theorem 4.5.1. The following statements hold.
(1) For a quadratic space V of dimension n + 1 over F , one has dim Hom Gn(A Q ) (R n (V ), A(G n )) = 1.
Moreover, the normalized Eisenstein series at s = 0 gives the non-trivial intertwining map R n (V ) −→ A(G n (A Q )).
(2) For an incoherent family C, one has dim Hom Gn(A Q ) (R n (C), A(G n )) = 0.
Moreover, for a standard section f s with f 0 ∈ R n (C), one has E(g, 0, f ) = 0.
The following statement follows from the theorem immediately.
Pullback formula
In this section, we compute the pullback formulas of Siegel Eisenstein series. As an application, we show the holomorphy and non-vanishing of Klingen Eisenstein series. 5.1. The formal identity and meromorphic sections. For m ≤ n, we define the embeddings ι ↑ m,n and ι ↓ m,n of G m into G n by
ι ↑ m,n a b c d = a b 1 n−m c d 1 n−m , ι ↓ m,n a b c d = 1 n−m a b 1 n−m c d .
Put G ↑ m = ι ↑ m,n (G m ) and G ↓ m = ι ↓ m,n (G m ). Take n, r ∈ Z >0 . For g ∈ G n+r (A Q ) and h ∈ G n (A Q ), put
g × h = a g b g c g d g × a h b h c h d h = ι ↑ n+r,2n+r (g) · ι ↓ n,2n+r (h) = a g b g a h b h c g d g c h d h ∈ G 2n+r (A Q ). Set H = G ↑ n+r × G ↓ n ⊂ G 2n+r , g = 0 1 n 1 n 0 g 0 1 n 1 n 0 , g ∈ G n .
Let f s be a standard section of I 2n+r (s, µ). For a cusp form ϕ on G n (A Q ) and g ∈ G n+r (A Q ), we consider the zeta integral
E(g, s; f, ϕ) = Gn(Q)\Gn(A Q ) E((g × h), s, f )ϕ(h) dh. Put f j = 0 0 0 1 j ∈ Mat n+r,n , τ j = 1 n+r 1 n f j 1 n+r t f j 1 n
for 0 ≤ j ≤ n. Note that for any g ∈ G j (A Q ) and h ∈ G 2n+r (A Q ), one has
f s τ j ((1 2(n+r−j) × g) × (1 2(n−j) × g))h = f s (τ j h).
The following double coset decomposition is well-known. For example, see [Shi00, Lemma 24.1].
Lemma 5.1.1. One has the decomposition
G 2n+r (Q) = 0≤j≤n P 2n+r (Q)τ j H(Q). Moreover, P 2n+r (Q)τ j H(Q) = ξ,β,γ P 2n+r (Q)τ j (((1 2(n+r−j) × ξ) × 1 2n ) · (β × γ)),
where ξ runs over G j (Q), β over P n+r−j,n+r (Q)\G n+r (Q), and γ over P n−j,n (Q)\G n (Q).
By the lemma, we compute the integral E(g, s; f, ϕ) as follows:
Gn(Q)\Gn(A Q ) E((g × h), s, f )ϕ(h) dh = Gn(Q)\Gn(A Q ) γ∈P2n+r(Q)\G2n+r(Q) f s (γ(g × h))ϕ(h) dh = Gn(Q)\Gn(A Q ) 0≤j≤n γ∈P2n+r(F )\P2n+r(Q)τj H(Q) f s (γ(g × h))ϕ(h) dh = 0≤j≤n Gn(Q)\Gn(A Q ) ξ∈Gj (A Q ) β∈Pn+r−j,n+r(Q)\Gn+r(Q) γ∈Pn−j,n(Q)\Gn(Q) f s (τ j ((1 2(n+r−j) × ξ) × 1 2n ) · (βg × γ h))ϕ(h) dh = 0≤j≤n ξ∈Gj (A Q ) β∈Pn+r−j,n+r(Q)\Gn+r(Q) Gn(Q)\Gn(A Q ) γ∈Pn−j,n(Q)\Gn(Q) f s (τ j ((1 2(n+r−j) × ξ) × 1 2n ) · (βg × γ h))ϕ(h) dh.
If j < n, we claim that the integral
Gn(Q)\Gn(A Q ) γ∈Pn−j,n(Q)\Gn(Q) f s (τ j ((1 2(n+r−j) × ξ) × 1 2n ) · (βg × γ h))ϕ(h) dh
vanishes. Put P ′ n−j,n = { p | p ∈ P n−j,n }. We write 1 2(n+r−j) × ξ by ξ for simplicity. Then, it equals to
P ′ n−j,n (Q)\Gn(A Q ) f s (τ j (ξβg × h))ϕ(h) dh = P ′ n−j,n (Q)N P ′ n−j,n (A Q )\Gn(A Q ) N P ′ n−j,n (Q)\N P ′ n−j,n (A Q ) f s (τ j (ξβg × nh))ϕ(nh) dndh = P ′ n−j,f s (τ n (1 2(n+r) × ξ) · (βg × h))ϕ(h) dh = β∈Pr,n+r(Q)\Gn+r(Q) Gn(Q)\Gn(A Q ) ξ∈Gn(A Q ) f s (τ n (βg × ξh))ϕ(h) dh = β∈Pr,n+r(Q)\Gn+r(Q) Gn(A Q ) f s (τ n (βg × h))ϕ(h) dh. Put Z(g, s; f, ϕ) = Gn(A Q ) f s (τ n (g × h))ϕ(h) dh, g ∈ G n+r (A Q ).
We then have E(g, s; f, ϕ) = γ∈Pr,n+r(Q)\Gn+r(Q) Z(γg, s; f, ϕ).
Lemma 5.1.2. The integral Z(g, s; f, ϕ) converges absolutely for s ∈ C with Re(s) ≫ 0 and can be meromorphically continued to whole s-plane.
Proof. Since E(g, s, f ) converges absolutely for s with Re(s) ≫ 0, the integral also converges absolutely for such s. When r = 0, the meromorphic continuation follows from the meromorphic continuation of E(g, s, f ). In general, we write g = n(t, m)k for n ∈ N Pr,n+r (A Q ), (t, m) ∈ GL r (A F ) × G n (A Q ) and k ∈ K n+r . Then, one has Z(g, s; f, ϕ) = µ(t)| det t| s+(2n+r+1)/2 Z(m, s; r(k)f, ϕ)
= µ(t)| det t| s+(2n+r+1)/2 Z(m, s + r/2; ι ↓, * 2n,2n+r r(k)f, ϕ). Thus, the meromorphic continuation follows from the case r = 0.
The section Z( · , s; f, ϕ) is then a meromorphic section of I r,n+r (s, µ, A cusp (G n )) .
Indeed, let P be a parabolic subgroup of G n with the unipotent radical N . It suffices to prove that the constant term of Z( · , s; f, ϕ) along P is zero. It equals to
Z(g, s; f, ϕ) P = N (Q)\N (A Q ) Z(ng, s; f, ϕ) dn = N (Q)\N (A Q ) Gn(A Q ) f s (τ n (ng × h))ϕ(h) dhdn = Gn(A Q ) N (Q)\N (A Q ) f s (τ n (g × n −1 h))ϕ(h) dndh = Gn(A Q ) f s (τ n (g × h)) N (Q)\N (A Q ) ϕ(nh) dn dh = 0,
by the cuspidality of ϕ. Take a cusp form φ on G n (A Q ). For any k ∈ K n+r , one has
Z(k, s; f, ϕ), φ = Gn(Q)\Gn(A Q ) Z((1 r × x)k, s; f, ϕ)φ(x) dx = Gn(Q)\Gn(A Q ) Gn(A Q ) f s (τ n (k × x −1 h))ϕ(h) dh φ(x) dx = Gn(A Q ) f s (τ n (k × h)) Gn(Q)\Gn(A Q ) ϕ(xh)φ(x) dx dh = Gn(A Q ) f s (τ n (k × h)) r(h)ϕ, φ dh.
The pairing Z(g, s; f, ϕ), φ is zero unless φ lies in the π ϕ -isotypic component of A cusp (G n ). Here the representation π ϕ is the representation of G n (A Q ) generated by ϕ. For any k ∈ K n+r , the function m −→ Z(mk, s; f, ϕ) on G n (A Q ) lies in the π ϕ -isotypic component. Hence, the section Z( · , s; f, ϕ) is a section of I r,n+r (s, µ, π ϕ ).
Let π = v π v be an irreducible cuspidal automorphic representation of G n (A Q ). By the above computations, we define a meromorphic section Z( · , s; f, ϕ) of I r,n+r (s, µ, π) for ϕ ∈ π = v π v . For
f s = v f v,s and ϕ = v ϕ v ∈ v π v , set Z v (g, s; f v , ϕ v ) = Sp 2n (Fv) f v,s (τ n (g × h))π v (h)ϕ v dh. Then, Z(g, s; f, ϕ) = v Z v (g, s; f v , ϕ v ).
In the following, we first consider the relationship between the constant terms of E( · , s; f, ϕ) and the global section Z( · , s; f, ϕ). After that, we compute the local sections Z( · , s; f v , ϕ v ).
Near holomorphy of Klingen Eisenstein series.
We prove the near holomorphy of Eisenstein series E( · , s 0 ; f, ϕ) on G n+r (A Q ) as follows:
Proposition 5.2.1. Fix r, n with 1 ≤ r ≤ n and s 0 ≥ 0 with s 0 ∈ Z + (2n + r + 1)/2. For a character µ of GL 2n+r (A F ), let f s be a standard section of I 2n+r (s, µ). We assume • f s0 is p 2n+r,− -finite.
• If F = Q and s 0 = 1/2, there exists a quadratic space V over F with dimension (n + 2)/2 such that W satisfies the condition (W) and f s0 ∈ R n (V ). Then, for a cusp form ϕ on G n (A Q ), the Eisenstein series E( · , s 0 ; f, ϕ) on G n+r (A Q ) is nearly holomorphic.
Proof. Under the assumptions, Siegel Eisenstein series E( · , s, f ) is nearly holomorphic at s = s 0 by the proof of Lemma 4.3.1 and Proposition 4.4.1. Take an integer ℓ ≫ 0 so that p ℓ 2n+r,− · E( · , s 0 , f ) = 0. Since the integral
E(g, s 0 ; f, ϕ) = Gn+r(Q)\Gn+r(A Q ) E((g × h), s 0 , f )ϕ(h) dh, g ∈ G n (A Q )
converges absolutely, one has p ℓ n+r,− · E(g, s 0 ; f, ϕ) = 0. This completes the proof. We next compute the constant term of E( · , s 0 ; f, ϕ) along P r,n+r . Let U be the subgroup of G 2n+r in which elements of the form
1 r * 1 n 1 n 1 r 1 n 1 n .
We may regard the group U as a subgroup of G ↑ n+r . Then, it is a subgroup of the unipotent radical of P r,n+r . Set
E(g, s 0 , f ) U = U(Q)\U(A Q ) E(ug, s 0 , f ) dn.
We compute E(g, s 0 , f ) U as follows:
Lemma 5.2.2. Let f s be a standard section of I 2n+r (s, µ). Suppose that f s satisfies the conditions as in Proposition 5.2.1 and moreover if F = Q, assume s 0 > 1. We then have
E((t, m), s 0 , f ) U = µ(t)| det t| s0+(2n+r+1)/2 E(m, s 0 + r/2, ι ↓, * 2n,2n+r f ) for (t, m) ∈ GL r (A F ) × G 2n (A Q ) = M Pr,2n+r (A Q ).
Proof. By the near holomorphy of E(g, s 0 , f ) and [Hor20b,Lemma 5.10], we have E(g, s 0 , f ) U = E(g, s 0 , f ) Qr,2n+r .
Thus, for (t, m) = (t 1 , . . . , t r , m) ∈ GL 1 (A F ) × · · · × GL 1 (A F ) × G ↓ 2n (A Q ) = M Qr,2n+r , by taking the constant terms successively, we obtain
E((t, m), s 0 , f ) U = · · · (E((t, m), s 0 , f ) Q1,2n+r | G ↓ 2n+r−1 (A Q ) Q1,2n+r−1 | G ↓ n+r−2 (A Q ) · · · Q1,2n+1 | G ↓ 2n (A Q ) .
We tacitly assume r = 1. By (4.4.1), one has E((t, m), s 0 , f ) U = µ(t)|t| s0+n+(r+1)/2 E(m, s 0 +1/2, ι ↓, * f )+µ(t) −1 |t| −s0+n+1 E(m, s 0 −1/2, ι ↓, * U (s, µ)f ).
Then, for s = s 0 and v ∈ a, the archimedean component U v (s, µ)f v has at least simple zero. Hence, by assumptions, the Eisenstein series E(m, s − 1/2, ι * U (s, µ)f ) is zero at s = s 0 . For general r, we thus have
E((t, m), s 0 , f ) U = r j=1 µ(t j )|t j | s0+(2n+r+1)/2 E(m, s 0 + r/2, ι ↓, * f ).
Let SL r be the derived subgroup of GL r ⊂ M Pr,2n+r . It suffices to show that E( · , s 0 , f ) is left SL r (A F ) invariant. It follows from [Hor20b, Lemma 5.7] by the near holomorphy of Eisenstein series. This completes the proof.
Proposition 5.2.3. With the notation as in Proposition 5.2.1, suppose s 0 > 1 if F = Q. Then, the constant term of E(g, s 0 ; f, ϕ) along P r,n+r equals to the zeta integral Z(g, s 0 ; f, ϕ) for any g ∈ G n+r (A Q ).
Proof. Since K n+r,∞ normalizes p n+r,− , the right translation r(k)f s satisfies the same condition as in Proposition 5.2.1. Thus, for any (t, m) ∈ GL r (A Q ) × G n (A Q ) = M Pr,n+r (A Q ) and k ∈ K n+r , we have
E((t, m)k, s 0 ; f, ϕ) Pr,n+r = U(Q)\U(A Q ) Gn(Q)\Gn(A Q ) E((u(t, m) × h), s 0 , r(k)f )ϕ(h) dhdu = µ(t)| det t| s0+ρ2n+r Gn(Q)\Gn(A Q ) E((m × h), s 0 + r/2, ι ↓, * 2n,2n+r r(k)f )ϕ(h) dh = µ(t)| det t| s0+ρ2n+r Z(m, s 0 + r/2; ι ↓, * 2n,2n+r r(k)f, ϕ) = Z((t, m), s 0 ; r(k)f, ϕ) = Z((t, m)k, s 0 ; f, ϕ).
For the first and second equality, we use Lemma 5.2.2. Hence we see E(g, s 0 ; f, ϕ) Pr,n+r = Z(g, s 0 ; f, ϕ). This completes the proof.
Corollary 5.2.4. With the notation as in Proposition 5.2.1, suppose s 0 > 1 if F = Q. Then, the zeta integral Z(g, s; f, ϕ) is holomorphic at s = s 0 .
Proof. The statement follows immediately from the definition of zeta integral and the holomorphy of E( · , s, f ) at s = s 0 . We next consider the local zeta integrals Z v ( · , s; f v , ϕ v ). 5.3. Unramified computations. We first compute Z v (g, s; f, ϕ) at unramified places.
Lemma 5.3.1. Let µ v be an unramified character of GL 2n+r (F v ), f v,s be an unramified standard section of I 2n+r,v (s, µ v ) with f v,s (1) = 1 and π v be an irreducible unramified representation of Sp 2n (F v ) with an invariant inner product , . Take an unramified vector ϕ v ∈ π v so that ϕ v , ϕ v = 1. We then have
Z(1, s; f v , ϕ v ) = L v (s + (r + 1)/2, π v , µ v ) L v (s + n + (r + 1)/2, µ v ) n j=1 L v (2s + 2n + r + 1 − 2j, µ 2 v ) × ϕ v .
Proof. The restriction ι ↓, * f s+r/2 of f v,s to G ↓ 2n is a standard unramified section of I 2n (s + r/2, µ). Since Z(1, s; f v , ϕ v ) is an unramified vector, it is a constant multiple of ϕ v . By definition of local zeta integral, we have L v (s + (r + 1)/2, π v , µ v ) L v (s + n + (r + 1)/2,
Z v (1, s; f v , ϕ v ), ϕ v = Sp 2n (Fv) f v,s (τ n (1 × h)) π v (h)ϕ v , ϕ v dh = Sp 2n (Fv) ι ↓, * 2n,2n+r f v,s+r/2 (τ n (1 × h)) π v (h)ϕ v , ϕ v dh.µ v ) n j=1 L v (2s + 2n + r + 1 − 2j, µ 2 v ) × ϕ v .
This completes the proof.
5.4.
Computations of ramified places. Fix a non-archimedean place v. In this subsection, we compute the zeta integrals at the non-archimedean ramified place v. We then show the following lemma.
Lemma 5.4.1. Let α s be a standard section of I r,n+r,v (s, µ v , π v ). There exists a finite number of standard sections f v,s,1 , . . . , f v,s,ℓ of I 2n+r,v (s, µ) and vectors ϕ v,1 , . . . , ϕ v,ℓ ∈ π v such that
ℓ j=1 Z v (g, s; f v,j , ϕ v,j ) = α s (g), g ∈ Sp 2n (F v ). Proof. Put K n,v (p a v ) = {k ∈ K n,v | k ≡ 1 n mod p a v }.
Let ℓ be a positive integer such that α s is fixed by K n+r,v (p ℓ v ). We write K = K n+r,v (p ℓ v ). Let {γ 1 , . . . , γ ℓ } ⊂ K n+r,v be a set of complete representatives of P r,n+r (F v )\Sp 2(n+r) (F v )/K. We may assume γ 1 = 1. Put ϕ j = α s (γ j ) and K ϕj = Stab Kn,v (ϕ j ). We claim that for any j, one has pr n (K n+r,v ∩ P r,n+r (F v )) ⊂ K ϕj . Here, pr n is the projection map pr n : P r,n+r (F v ) −→ GL r × Sp 2n −→ Sp 2n . Indeed, take k ∈ pr n (K n+r,v ∩ P r,n+r (F v )). Fix k ′ ∈ K such that pr n (k ′ ) = k. By the choice of K, one has π v (k)ϕ j = α s (k ′ γ j ). Since K is a normal subgroup of K n+r,v , one obtains α s (k ′ γ j ) = α s (γ j γ −1 j k ′ γ j ) = α s (γ j ). Thus, π v (k)ϕ j = ϕ j and k ∈ K ϕj . Let f v,s,j be a standard section of I 2n+r (s, µ) such that
• supp(f v,s,j ) ⊂ P 2n+r (F )τ n (K × K ′ ϕj ). • f v,s,j (pτ n (k 1 × k 2 )) = 1 vol(Kϕ j ) µ(p)|p| s+(2n+r+1)/2 for p ∈ P 2n+r (F v ) and k 1 × k 2 ∈ K × K ϕj . Here, K ′ ϕj = { k | k ∈ K ϕj }. Let k ∈ K. By the claim, if τ (k × h) ∈ supp(f v,s,j ), one has h ∈ K ϕj . Thus, we have Z v (k, s; f v,j , ϕ j ) = Sp 2n (Fv ) f v,s,j (τ n (k × h))π v (h)(ϕ j ) dh = Kϕ j f v,s,j (τ n (k × h))π v (h)(ϕ j ) dh = ϕ j .
Next, we compute the support of the section. For g ∈ Sp 2(n+r) (F v ), we assume Z v (g, s; f, ϕ j ) = 0. Suppose that g lies in P r,n+r (F v )γ q K q for some q = 1. Then, by the definition of f v,s,j , one has f v,s,j (τ n (g × h)) = f v,s,j (τ n (h −1 g × 1)) for any h. By h −1 g ∈ P r,n+r (F v )γ q K with q = 1, we get f v,s,j (τ n (g × h)) = 0. Hence we obtain Z v (g, s; f v,j , ϕ j ) = 0 and supp(r(γ −1 j )Z v ( · , s; f v,j , ϕ j )) = P r,n+r (F v )Kγ j = P r,n+r γ j K. We then have
α s (g) = ℓ j=1 r(γ −1 j )Z v (g, s; f, ϕ j ).
This completes the proof.
5.5.
Computations of archimedean places. In this subsection, we assume F = Q for simplicity. Let v be the archimedean place of F = Q. Let π be a holomorphic discrete series representation of G n (R) with highest weight λ = (λ 1,v , . . . , λ n,v ) v . For a standard section f s of I 2n+r (s, µ), put
Z v (g, s; f, ϕ, ϕ ′ ) = Z v (g, s; f, ϕ), ϕ ′ for g ∈ G n+r (F v ) and ϕ ′ ∈ π. Here, · , · is an invariant inner product on π.
Lemma 5.5.1. With the above notation, suppose that a real number s 0 satisfies s 0 ∈ Z + ρ 2n+r and −r/2 < s 0 ≤ λ n,v − ρ 2n+r for any v ∈ a. Let f s be a standard section of I 2n+r (s, µ) such that f s0 is p 2n+r,− -finite. Then, the integral Z v (g, s; f, ϕ, ϕ ′ ) converges absolutely at s = s 0 for any g ∈ G n+r (F v ) and v, v ′ ∈ π. Moreover, we may choose g, f s and ϕ, ϕ ′ ∈ π so that Z v (g, f ; s, ϕ, ϕ ′ ) is non-zero at s = s 0 .
Proof. For m ∈ G n (F v ), one has
Z v ((1 r × m)k, s; f, ϕ, ϕ ′ ) = Z v (1, s; r(k)f, ϕ, π(m −1 )ϕ ′ ).
Since the standard section r(k)f s satisfies the assumption as in the statement, we may assume g = 1.
Then, the integral equals to
Z v (1, s; f, v, v ′ ) = Gn(Fv ) f s (τ n (1 n+r × h)) π(h)v, v ′ dh = Gn(Fv) f s (τ n ((1 r × h) × 1 n )) π(h)v, v ′ dh.
Consider the restriction of f s to the subgroup G ↓ 2n (F v ). The restriction ι * ,↓ 2n,2n+r f s to G ↓ 2n (F v ) is a standard section of I 2n (s + r/2, µ). The restriction map ι ↓, * 2n,2n+r induces a non-zero intertwining map I 2n+r (s 0 , µ) p2n+r,−-fin −→ I 2n (s 0 + r/2, µ) p2n,−-fin = L(m/2, . . . , m/2).
Thus, Z v induces L(m/2, . . . , m/2) ⊗ π ⊗ π −→ C. Here, m = 2s 0 + 2n + r + 1 ≥ 2n + 2. This map is the same as in [Liu20,(4.3.4)]. Hence, the lemma follows from [Liu20, Proposition 4.3.1]. This completes the proof.
Corollary 5.5.2. With the above notation, the zeta integral at s = s 0 induces a non-zero intertwining map I 2n+r (µ, s 0 ) p2n+r,−-fin ⊗ π −→ I r,n+r (s 0 , µ, π) pn+r,−-fin .
Proof. By Lemma 5.5.1, the zeta integral defines a non-zero intertwining map I 2n+r (µ, s 0 ) p2n+r,−-fin ⊗ π −→ I r,n+r (s 0 , µ, π).
Since the integral is intertwining, the image is contained in the p n+r,− -finite vectors. This completes the proof.
6. Structure theorem of the space of nearly holomorphic automorphic forms
In this section, we compare the space of nearly holomorphic automorphic forms with the space of Eisenstein series.
6.1. Parametrization of infinitesimal characters. For an infinitesimal character χ of Z n , put N (G n , χ) = {ϕ ∈ N (G n ) | (z − χ(z))ϕ = 0 for any z ∈ Z n }.
By [Hor20b,Proposition 5.15], we have
N (G n ) = χ N (G n , χ), (6.1.1)
where χ runs over all integral infinitesimal characters of Z n . We define N (G n , χ) {P } and N (G n , χ) (M,π) similarly. By [Hor20b, Proposition 5.9], the constant term along Q i,n induces an embedding of the space N (G n , χ) (MQ i,n ,µ⊠π) into the direct sum s0 I i,n (s 0 , µ, π).
Here s 0 runs over all real numbers such that the induced representation I i,n (s 0 , µ, π) has the integral infinitesimal character χ. Take a real number t. We define the projection pr t by pr t : s0 I i,n (s 0 , µ, π) −→ I i,n (t, µ, π).
The infinitesimal character of the induced representation I i,n (s 0 , µ, π) has the Harish-Chandra parameter (λ 1,v , . . . , λ n−i,v , s 0 + n − (i − 1)/2, . . . , s 0 + n − (i − 1)/2) + ρ.
Here, (λ 1,v , . . . , λ n−i,v ) is the highest weight of π v for any v ∈ a. For χ s0 , we mean the infinitesimal character of the induced representation. Note that χ s0 depends on λ and i. Lemma 6.1.1. With the above notation, fix s 0 . Let {t 1 , . . . , t ℓ } be the set of real numbers such that pr tj (ϕ Qi,n ) = 0 for some ϕ ∈ N (G n , χ s0 ) (µ⊠π,MQ i,n ) . Then, for any j, the highest weight submodule of I i,n (t j , µ, π) is unitarizable.
Proof. We may assume t 1 < · · · < t ℓ ≤ ρ i,n +s 0 . Note that the highest weight of I i,n (t j , µ, π) is of the form a j = (λ 1,v , . . . , λ n−i,v , ρ i,n + t j , . . . , ρ i,n + t j ). Then, a 1 is maximal in {a 1 , . . . , a ℓ }. By assumption, there exists ϕ ∈ N (G n , χ s0 ) (µ⊠π,MQ i,n ) such that ϕ is of weight a 1 . Note that by maximality of a 1 , for j = 1, the K n,∞ -type ρ a1 does not occur in I i,n (t j , µ, π) pn,−-fin . Then, ϕ Qi,n lies in I i,n (t 1 , µ, π). By [Hor20b,Corollary 7.3], the module generated by ϕ Qi,n is isomorphic to L(a 1 ). By [MW95,I.4.11], if t 1 < 0, the automorphic form ϕ is of square-integrable. Thus, the highest weight module L(a 1 ) is unitarizable. If t 1 ≥ 0, the highest weight module L(a 1 ) is unitarizable by Theorem 3.2.2. Since the highest weight submodule of I i,n (t j , µ, π) is irreducible with integral weight a j , the highest weight submodules L(a j ) are unitarizable by Theorem 3.2.2 for all j. This completes the proof.
If the induced representation I i,n (s 0 , µ, π) contains a unitary highest weight representation, one has s 0 ∈ Z + n − (i − 1)/2 with n − i ≤ s 0 + n − (i − 1)/2 ≤ λ n,v . The following statement follows from the straightforward computation. For details, see [Hor20b,Proposition 6.4].
Lemma 6.1.2. With the above notation, suppose for simplicity F = Q. Let a, b be real numbers so that a, b ∈ Z + n − (i − 1)/2, n − i ≤ a + n − (i − 1)/2 ≤ λ n,v and n − i ≤ b + n − (i − 1)/2 ≤ λ n,v . Then, one has χ a = χ b if and only if |a| = |b|.
Put N 2 (G n , χ) (M,π) = {ϕ ∈ N (G n , χ) (M,π) | ϕ is square-integrable}.
In the following of this section, we study N (G n , χ) (M,π) in terms of N 2 (G n , χ) (M,π) and induced representations.
6.2. Constant terms of nearly holomorphic automorphic forms. Toward the classification of (g n , K n,∞ )-modules generated by nearly holomorphic automorphic forms on G n (A Q ), we investigate the embedding of N (G n ) (M,π) into a direct sum of induced representations. Fix a positive integer i ≤ n. Let µ be a character of GL 1 (A F ) and π an irreducible holomorphic cuspidal automorphic representation on
G n−i (A Q ) with π v = L(λ v ) = L(λ 1,v , . . . , λ n−i,v ) for v ∈ a.
Put Π = µ ⊠ π. For the notation, see §2.5. We consider the space N (G n , χ s0 ) (MQ i,n ,Π) . By Lemma 6.1.2, the constant term along P i,n induces the embedding Note that N (G n , χ s0 ) (MQ i,n ,Π) = N 2 (G n , χ s0 ) (MQ i,n ,Π) if and only if the image of (6.2.1) is contained in I i,n (−s 0 , µ, π). In this case, the space N (G n , χ s0 ) (MQ i,n ,Π) is semisimple as (g n , K n,∞ )-modules. The highest weights of the right hand side of (6.2.1) are of the form (λ 1,v , . . . , λ n−i,v , ρ i,n + s 0 , . . . , ρ i,n + s 0 ) v and (λ 1,v , . . . , λ n−i,v , ρ i,n − s 0 , . . . , ρ i,n − s 0 ) v if exist. If λ n−i,v < ρ i,n for some v, one has N (G n , χ s0 ) (MQ i,n ,Π) = N 2 (G n , χ s0 ) (MQ i,n ,Π) . Thus, for the classification, it suffices to consider the case where λ satisfies λ n−i,v ≥ ρ i,n for any v ∈ a. In this case, we may assume 0 ≤ s 0 ≤ min v∈a {λ n−i,v − ρ i,n }.
N (G n , χ s0 ) (MQ i,
Lemma 6.2.1. Under the above assumption, if s 0 = 0, the space N (G n , χ s0 ) (MQ i,n ,Π) is isotypic for ⊠ v∈a L(λ 1,v , . . . λ n−i,v , ρ i,n , . . . , ρ i,n ) as (g n , K n,∞ )-modules.
Proof. By (6.2.1), one has N (G n , χ s0 ) (MQ i,n ,Π) ֒− − → I i,n (0, µ, π).
Consider the induced representation I i,n,v (0, µ v , L(λ v )) for v ∈ a and a unitary character µ v . Since this induced representation lies in the unitary axis, it is unitary by the unitarizability of L(λ). Thus, it is semisimple as (g n , K n,∞ )-modules. Highest weights in it are of the form (λ 1,v , . . . λ n−i,v , ρ i,n , . . . , ρ i,n ). We then have I i,n,v (0, µ v , L(λ v )) pn,−-fin ⊂ L(λ 1,v , . . . λ n−i,v , ρ i,n , . . . , ρ i,n ).
This completes the proof.
In the following of this section, we assume λ n−i,v > ρ i,n for any v ∈ a, s 0 ∈ Z + ρ i,n and 0 < s 0 ≤ min v∈a {λ n−i,v − ρ i,n }. We then have N 2 (G n , χ s0 ) (Qi,n,µ⊠π) \N (G n , χ s0 ) (Qi,n,µ⊠π) ֒−−−→ I i,n (s 0 , µ, π) pn,−-fin . 6.3. Structure theorem for i = n. Proposition 6.3.1. We assume that either of the following conditions holds:
• F = Q and s 0 > 0.
• µ 2 = 1 and s 0 > 0.
We then have N (G n , χ s0 ) (B,µ) ∼ = N 2 (G n , χ s0 ) (B,µ) ⊕ I n (s 0 , µ) pn,−-fin .
Proof. By (6.2.1), the constant term along P n induces the injective map
N 2 (G n , χ s0 ) (B,µ) \N (G n , χ s0 ) (B,µ) ֒−−−→ I n (s 0 , µ) pn,−-fin .
By Lemma 4.3.1, the Eisenstein series at s = s 0 gives the splitting
I n (s 0 , µ) pn,−-fin ֒−−−→ N (G n , χ s0 ) (B,µ) .
Hence the statement follows.
Next we treat the case F = Q.
Proposition 6.3.2. The following statements hold.
(1) For s 0 > 1, one has
N (G n , χ s0 ) (B,µ) ∼ = N 2 (G n , χ s0 ) (B,µ) ⊕ I n (s 0 , µ) pn,−-fin .
(2) For s 0 = 1, one has Proof. The proof of (1) is the same as the proof of Proposition 6.3.1. Next we show (2). If µ v = sgn (n+3)/2 for some v ∈ a, one has N (G n , χ s0 ) (B,µ) = 0 and I n (1, µ) pn,−-fin = 0. We may assume µ v = sgn (n+3)/2 for any v ∈ a.
Take f = v f v ∈ I n (1, µ) such that f v lies in R n (W v ) for some W v and f ∈ V R n (V ).
Here V runs over all positive definite quadratic forms over F of dimension n + 3. Let f s be the standard section of I n (s, µ) such that f 1 = f . We assume that there exists a nearly holomorphic automorphic form ϕ ∈ N (G n ) {B} such that ϕ Pn = f . By Proposition 4.4.1, the difference ϕ − E( · , 1, f ) is non-zero and square integrable. However, for v ∈ a, the K n,v -type ((n+3)/2, . . . , (n+3)/2) in I n,v (−1, µ v ) generates a reducible indecomposable representation of Sp 2n (F v ). This contradicts to the square integrability. Hence there are no automorphic form ϕ such that ϕ Pn = f . Recall that the constant term along P n induces the inclusion
N 2 (G n , χ s0 ) (B,µ) \N (G n , χ s0 ) (B,µ) ֒− − → I n (1, µ) pn,−-fin .
The image of E( · , 1, f ) is the same as f . Hence the above inclusion is surjective and there are no splitting. This completes the proof of (2).
For (3), we assume µ v = sgn (n+2)/2 for any v ∈ a. Note that if I n,v (1/2, µ v ) has a highest weight vector, one has µ v = sgn (n+2)/2 . Thus, the constant term (6.2.1) induces the embedding N (G n , χ s0 ) (B,µ) ֒− − → I n (−1/2, µ) pn,−-fin .
We then have N (G n , χ s0 ) (B,µ) = N 2 (G n , χ s0 ) (B,µ) . The last statement follows immediately from (6.2.1). This completes the proof. 6.4. Structure theorem for P = B. Fix i. We consider the case P = P i,n Let µ be a character of GL i (A F ) and π an irreducible holomorphic cuspidal representation of G n−i (A Q ) and s 0 ∈ Z + ρ i,n . Suppose s 0 > 0. Let S be a finite set of places such that a ⊂ S and for v ∈ S, the representations µ v and π v are unramified. Set L S (s, π, µ) = v ∈S L(s, π v , µ v ).
Lemma 6.4.1. Let α = v α v ∈ I i,n (s 0 , µ, π) pn,−-fin and S the finite set of places such that for v ∈ S, the function α v is unramified. Then, there exists finite number of standard sections f 1 , . . . , f ℓ of I 2n+r (s, µ) and ϕ 1 , . . . ϕ ℓ ∈ π such that lim s→s0 1 L S (s + (r + 1)/2, π, µ) ℓ j=1 Z(g, s; f j , ϕ j ) = α(g), g ∈ G n (A Q ).
Proof. The statement follows from Lemma 5.3.1, Lemma 5.4.1 and Corollary 5.5.2.
Proposition 6.4.2. Suppose s 0 > 1 if F = Q. We then have
N (G n , χ s0 ) (MQ i,n ,µ⊠π) ∼ = N 2 (G n , χ s0 ) (MQ i,n ,µ⊠π) ⊕ I i,n (s 0 , µ, π) pn,−-fin .
Proof. It suffices to show that for any p n,− -finite function α ∈ I i,n (s 0 , µ, π), there exists a nearly holomorphic automorphic form ϕ ∈ N (G n , χ s0 ) (MQ i,n ,µ⊠π) such that ϕ Pi,n = α. This follows immediately from Proposition 5.2.3 and Lemma 6.4.1. This completes the proof.
In the following, we give partial results.
Proposition 6.4.3. Assume F = Q. Let Π = µ ⊠ π be an irreducible holomorphic cuspidal automorphic representation of M Pi,n (A Q ). Suppose that highest weights of the archimedean component v∈a π v = v∈a L(λ 1,v , . . . , λ n−i,v ) satisfies λ n−i,v ≥ ρ i,n + s 0 . We then obtain the following result:
(1) For s 0 = 1/2, the space N (G n , χ s0 ) (M,Π) is v∈a L(λ 1,v , . . . , λ n−i,v , ρ i,n +ε, . . . , ρ i,n +ε)-isotypic.
Here, ε ∈ {±1/2} is defined so that sgn ρi,n+ε = µ v for any v.
(2) For s 0 = 1, the space N (G n , χ s0 ) (MQ i,n ,Π) is contained in I i,n (−1, µ, π) ⊕ I i,n (1, µ, π).
Proof. The statements follow from (6.2.1) immediately. 6.5. Classification of (g n , K n,∞ )-module generated by nearly holomorphic automorphic forms. We finally show the following classification:
Theorem 6.5.1. Let M be an indecomposable reducible (g n , K n,∞ )-module generated by a nearly holomorphic modular form. Then, the length of M is at most two. Moreover, if F = Q, M is irreducible. If F = Q and M is reducible, let L(a 1 , . . . , a n ) be the socle of M and L(b 1 , . . . , b n ) the irreducible quotient of M . Then, there exists i such that
• a j = b j for j = 1, . . . , n − i.
• a n−i+1 = · · · = a n = ρ i,n − 1 and b n−i+1 = · · · = b n = ρ i,n + 1. N (a 1 , . . . , a n ) ∨ . Moreover, if a reducible module M has a regular infinitesimal character, one has i = 1.
• M ∼ =
Proof. We may assume M is reducible. There exists s 0 ∈ (1/2)Z ≥0 , a positive integer i, a character µ of GL i (A F ) and an irreducible cuspidal automorphic representation π of G n−i (A Q ) such that the indecomposable reducible module M can be embedded into N (G n , χ s0 ) (MQ i,n ,µ⊠π) . By Lemma 6.2.1, Proposition 6.3.1, Proposition 6.3.2, Proposition 6.4.2 and Proposition 6.4.3, since M is reducible, one has F = Q and s 0 = 1. In the following, we assume F = Q and s 0 = 1.
Put
M 1 = M ∩ N 2 (G n , χ s0 ) (MQ i,n ,Π) .
Then, the submodule M 1 is semisimple. Since the submodule M 1 occurs in I i,n (−1, µ, π) pn,−-fin , the module M 1 is isomorphic to L(λ 1 , . . . , λ n−i , ρ i,n − 1, . . . , ρ i,n − 1) with some multiplicities. Put M 2 = M/M 1 . Then, one obtains that M 2 is isomorphic to L(λ 1 , . . . , λ n−i , ρ i,n + 1, . . . , ρ n,i + 1) with some multiplicities by Proposition 6.3.2 (2) and Proposition 6.4.3. By Lemma 3.5.1, the module M is isomorphic to N (λ 1 , . . . , λ n−i , ρ i,n − 1, . . . , ρ n,i − 1) ∨ . If M has a regular infinitesimal character, the socle L(a 1 , . . . , a n ) has a regular infinitesimal character. Then, one has i = 1. This completes the proof.
Remark 6.5.2. A typical example of nearly holomorphic modular form that generates an indecomposable reducible module is E 2 . Here, E 2 is defined by
E 2 (z) = 3 πy − 1 + 24 ∞ n=1 0<d|n d exp(2π √ −1nz), z ∈ H 1 .
Then, E 2 generates N (0) ∨ . For details, see [Hor21].
Corollary 6.5.3. Let λ be a regular anti-dominant integral weight and χ = χ λ . Let N Rep n (χ) be the set of isomorphism classes of indecomposable (g n , K n,∞ )-modules with the regular integral infinitesimal character χ generated by nearly holomorphic Siegel modular forms of degree n. For a K n,∞ -type σ, put N Rep n (χ, σ) = {π ∈ N Rep n (χ) | π has the K n,∞ -type σ}. We then have N Rep n (χ) ⊂ {L(λ (0) ), . . . , L(λ (p) ), N (λ (1) ) ∨ } if λ n = n + 1 {L(λ)} if λ n = n + 1 and N Rep n (χ, det λ1−1 ⊗ ∧ j(λ) ) ⊂ {L(λ (0) ), N (λ (1) ) ∨ } if λ n = n + 1 {L(λ)} if λ n = n + 1.
Proof. The statement follows immediately from Proposition 3.4.4 and Theorem 6.5.1.
Projection operators
In this section, we investigate projection operators associated to infinitesimal characters.
7.1. Generators of Z n . In this subsection, we assume F = Q for simplicity. It is well-known that Z n is generated by n generators. We give generators explicitly. We first define matrices B i,j and E ±,i,j = E ±,j,i as follows: 2 (e i,j + e j,i ) Then, {B i,j | 1 ≤ i, j ≤ n} and {E ±,i,j | 1 ≤ i ≤ j ≤ n} are basis of k n and p n,± , respectively. We define B ∈ Mat n (Mat 2n (C)) and E ± ∈ Sym n (Mat 2n (C)) by B = (B k,ℓ ) k,ℓ , E ± = (E ±,k,ℓ ) k,ℓ ∈ Sym n (Mat 2n (C)).
B i,
Put B * = (B j,i ) i,j , the transpose of B. Let w = X 1 · · · X m be a word with letters B, B * and E ± . We assume the word w satisfies the following five conditions: • E + is followed by E − or B * .
• E − is followed by E + or B.
• B is followed by E + or B.
• B * is followed by E − or B * .
• E + and E − occur with the same multiplicity.
For a word, let tr(w) ∈ Mat 2n (C) be the trace as the Mat 2n (C)-valued matrix. We may identify tr(w) as an element of U(g n ). Let L(w) be the sum of number of times E − B and BE + occur isolatedly in w counted cyclicly. where w runs over all words of length 2r with the above five conditions.
Theorem 7. 1.1 ([Mau12]). The algebra Z n is generated by elements D 2 , . . . , D 2n as an algebra over C.
Projection operators.
Fix an infinitesimal character χ, a weight ρ and a congruence subgroup Γ. Let K Γ be the closure of Γ in G n (A Q,fin ). We now define a projection on N (G n ) KΓ ρ . Let λ be the highest weight of ρ. We define a set X(ρ) of k n -dominant weights by the set of k n -dominant weights µ such that µ satisfies the following three conditions:
• L(µ) is unitarizable.
• L(µ) has the K n,∞ -type ρ.
• λ ≤ µ. Then, X(ρ) is finite. Put χ(ρ) = {χ µ | µ ∈ X(ρ)}. For infinitesimal characters χ and ω, we define D χ,ω as follows: Let v ∈ a. If the local components χ v and ω v are the same, put D χ,ω,v = 1. If χ v = ω v , there exists i such that χ v (D 2i ) = ω v (D 2i ). Then, put D χ,ω,v = D 2i − ω v (D 2i ). Set D χ,ω = v∈a D χ,ω,v . By definition, for an ω-eigenvector v with ω ∈ χ(ρ), we have
1 χ(D χ,ω ) D χ,ω · v = v if χ = ω 0 if χ = ω.
We now can define the projection p χ ∈ End C (N (G n ) KΓ ρ ) by
p χ (f ) = 1 ω∈χ(ρ) χ(D χ,ω ) ω∈X(ρ) D χ,ω · f.
By (6.1.1), p χ defines a projection onto the χ-eigen subspace of N (G n ) KΓ ρ associated to χ. By Lemma 2.6.2, one has N ρ (Γ) ⊗ ρ * ∼ = N (G n ) KΓ ρ . The projection defines an endomorphism on N ρ (Γ) ⊗ ρ * .
Lemma 7.2.1. The projection p χ defines a projection on N ρ (Γ).
Proof. We have the map N ρ (Γ) −→ Hom Kn,∞ (ρ * , N ρ (Γ) ⊗ ρ * ) by f −→ (v −→ f ⊗ v). Since it is injective, it is isomorphism by comparing the dimensions. We identify N ρ (Γ) ⊗ ρ * as N (G n ) KΓ ρ . Let (N ρ (Γ) ⊗ ρ * ) χ be the χ-eigen subspace of N ρ (Γ) ⊗ ρ * associated to an infinitesimal character χ. Since the χ-isotypic component of N (G n ) KΓ ρ is K n,∞ -stable, the corresponding space (N ρ (Γ) ⊗ ρ * ) χ is K n,∞stable. Thus we can define the subspace Hom Kn,∞ (ρ * , (N ρ (Γ) ⊗ ρ * ) χ ) of Hom Kn,∞ (ρ * , N ρ (Γ)⊗ρ * ) and of N ρ (Γ). We denote the subspace of N ρ (Γ) by N ρ (Γ, χ). Since N (G n ) KΓ ρ decomposes as the direct sum of χ-eigen spaces, one has N ρ (Γ) = χ N ρ (Γ, χ). By the map F −→ p χ •F , one obtains a map Hom Kn,∞ (ρ * , N ρ (Γ) ⊗ ρ * ) −→ Hom Kn,∞ (ρ * , (N ρ (Γ) ⊗ ρ * ) χ ) and thus it induces the map N ρ (Γ) −→ N ρ (Γ, χ). It suffices to show that this map is a projection. For f ∈ N ρ (Γ, χ), one can regard f as an element F ∈ Hom Kn,∞ (ρ * , (N ρ (Γ) ⊗ ρ * ) χ ). Since p χ is projection, one has p χ • F = F . It shows that f is invariant under the map N ρ (Γ) −→ N ρ (Γ, χ). Thus, this map is an idempotent and hence a projection. This completes the proof.
We denote by the same letter p χ the projection on N ρ (Γ) as in the above lemma. Thus we have p χ (f ⊗ v * ) = p χ (f ) ⊗ v * for f ∈ N ρ (Γ) and v * ∈ ρ * . Set N ρ (Γ, χ) = p χ (N ρ (Γ)).
Theorem 7.2.2. The projection p χ on N ρ (Γ) commutes with the Aut(C)-action.
Proof. The case where F = Q is proved in [HPSS21, Proposition 3.16]. The general case is similar. We omit the details.
For an integral weight λ = (λ 1,v , . . . , λ n,v ) v , put j v (λ) = #{i | λ 1,v ≡ λ i,v mod 2}.
Theorem 7.2.3. Let λ = (λ 1,v , . . . , λ n,v ) v be a regular anti-dominant k n -dominant integral weight. Put ρ = v∈a (det λ1,v −1 ⊗∧ jv (λ) ). If F = Q and λ n,v = n + 1, any modular form in N ρ (Γ, χ λ ) generates L(λ) or N (λ (1) ) ∨ . If not, any modular form in N ρ (Γ, χ λ ) generates L(λ).
Proof. Take f ∈ N ρ (Γ, χ λ ). Then the (g n , K n,∞ )-module generated by f is a direct sum of modules in N Rep n (χ λ , ρ). Thus, the statement follows from Corollary 6.5.3.
We finally give an analogue of holomorphic projections.
Corollary 7.2.4. Let λ = (λ 1,v , . . . , λ n,v ) v be a regular anti-dominant integral weight with λ 1,v −λ n,v ≤ 1 for any v ∈ a and ρ the irreducible highest weight representation of K n,C with highest weight λ. If F = Q or λ n,v = n + 1 for some v ∈ a, the projection p χ defines a projection onto M ρ (Γ).
Proof. By λ 1,v − λ n,v ≤ 1 and Theorem 7.2.3, any modular form f in N ρ (Γ, χ λ ) generates L(λ). Since f is of weight λ, f corresponds to a highest weight vector. Thus, f is holomorphic and N ρ (Γ, χ λ ) = M ρ (Γ). This completes the proof.
standard Levi subgroup M , set M (A Q ) 1 = χ∈Homconti(M(A Q ),C × ) Ker(|χ|).
Two cuspidal data (M, τ ) and (M ′ , τ ′ ) are called equivalent if there exists w ∈ W (M ) such that M ′ = M w and that τ ′ = τ w . Here we put W (M ) = w ∈ W wM w −1 is a standard Levi subgroup of G n and w has a minimal length in wW M where W M is the Weyl group of M . Let A(G n ) (M,τ ) is the subspace of automorphic forms in A(G n ) with the cuspidal support (M, τ ). For the definition, see [MW95, §III.2.6]. Then the following result is well-known. For example, see [MW95, Theorem III.2.6].
Here, A cusp (M ) π is the π-isotypic component ofA cusp (M ). For an automorphic form ϕ, there exists a finite correction of cuspidal data (M, τ ) such that ϕ ∈ (M,τ ) A(G n ) (M,τ )
the induced representation contains a highest weight representation of weight (λ 1 , . . . λ n ) if and only if we have
Corollary 3.4. 4 .
4For a regular anti-dominant integral weight λ, one has
4. 1 .
1Siegel-Weil formula. Let m be a positive even integer. Set s 0 = (m − n − 1)/2. Let V be a m-dimensional quadratic space over F and S(V (A F ) n ) the space of Schwartz functions on V (A F ) n . We denote by ω(
Theorem 4.1. 1 .
1Suppose that V satisfies the condition (W). One has E(s 0 , ϕ) = cI(g, ϕ)
( 2 )
2For an archimedean place v of F , let V be a m-dimensional quadratic space over F v = R with the signature (m, 0) or (m − 2, 2). If s 0 > 0, the representation R n (V v ) contains L(m, . . . , m).(3) For v ∈ a and s 0 > 0, the space of p n,− -finite vectors in I n,v (s 0 , χ) forms 0 if χ = sgn m+1 L(m, . . . , m) if χ = sgn m . Proof. The statement (1) and (2) are proved in [KR94, Proposition 5.3] and [KR90, Proposition 2.1], respectively. For the last statement, see [Hor20b, Corollary 6.5].
4. 3 .
3Holomorphy of Siegel Eisenstein series for s 0 > 1 or F = Q. Let f s = v f v,s be a standard section of I n (s, µ). For a representation M of g n , we denote by M pn,−-fin the space of p n,− -finite vectors in M . Put s 0 = (m − n − 1)/2 with non-negative even integer m.Lemma 4.3.1. For s 0 > 1, the map I n (s 0 , µ) pn,−-fin −→ A(G n ) defined by f s −→ E( · , s 0 , f ) is injective and intertwining under the action of G n (A Q ). If F = Q or µ 2 = 1, the same statement holds for s 0 > 0.
For a non-zero standard section h s of I ∞ (s, µ) of weight k, the integral U ∞ (s, µ)h s (1) is a nonKR94, (1.22)] and [KR88, Lemma 4.6]. Substitute k = (n + 3)/2. Then, U ∞ (s, µ)h s has a simple zero at s = 1/2. Hence the integral U ∞ (s, µ)f s,∞ has a simple zero at s = 1. Indeed, at s = s 0 , f s can be written as a sum of right translations of a non-zero function of weight (n + 3)/2. Put
s is the unramified section of I n,v (s, µ). Thus, the meromorphic section U * (s, µ)f s is holomorphic for s = 1. By Lemma 4.3.1 and [KR94, Theorem 1.1], E( · , s − 1/2, ι * U (s, µ)r(k)f ) has at most simple pole at s = 1. We then have lim s→1 E(g, s − 1/2, ι * U (s, µ)f ) non-zero constant c. Hence the claim holds. Let V 0 be the complementary space of V in the sense of[KR94, pp. 34]. By [KR94, Corollary 6.3], the constant term of Res s=1 E(g, s−1/2, F ) along P n−1 lies in R n−1 (V 0 ) ⊂ I n−1(−1/2, µ). Thus, the constant term of E(g, s, f ) along the Borel subgroup is an element of weight k in a direct sum of principal series representations. Comparing the scalar K n,∞ -types of principal series representations and degenerate principal series representations, the constant term lies in I n (1, χ V ) ⊕ I n (−1, χ V ) of weight (n + 3)/2. Note that the K n,∞ -type with highest weight ((n + 3)/2, . . . , (n + 3)/2) occur in I n,∞ (−1, µ) pn,−-fin by [Hor20b, Lemma 3.5]. We also note that E( · , s, f ) concentrates on the Borel subgroup. Hence the Eisenstein series E( · , 1, f ) is nearly holomorphic. This completes the proof.
.
Let f s be a standard section of I n (s, µ). The candidates of real parts of non-zero cuspidal exponents of E( · , 0, f ) are only ((n − 1)/2, (n − 3)/2, . . . , (1 − n)/2).Proof. By Theorem 4.5.1, the constant term of Eisenstein series along B n lies in the direct sum of induced representations of the form I n (0, µ). The lemma then follows from E( · , s, f ) ∈ A(G n ) {B} and the definition of cuspidal exponents.
n (Q)N P ′ n−j,n (A Q )\Gn(A Q ) N P ′ n−j,n (Q)\N P ′ n−j,n (A Q ) f s (τ j (ξβg × h))ϕ(nh) dndh = P ′ n−j,n (Q)N P ′ n−j,n (A Q )\Gn(A Q ) f s (τ j (ξβg × γ h))
By [KR94, (7.2.8)], one has Z(1, s; f v , ϕ v ) =
n ,Π) ֒− − → (I i,n (−s 0 , µ, π) ⊕ I i,n (s 0 , µ, π)) pn,−-fin if s 0 = 0 I i,n (s 0 , µ, π) pn,−-fin if s 0 = 0. (6.2.1)
N 2 (
2G n , χ s0 ) (B,µ) \N (G n , χ s0 ) (B,µ) ∼ = I n (1, µ) pn,−-fin . Moreover, there are no splitting I n (1, µ) pn,−-fin −→ N (G n , χ s0 ) (B,µ) if I n (1, µ) pn,−-fin = 0. (3) For s 0 = 1/2, one has N (G n , χ s0 ) (B,µ) = N 2 (G n , χ s0 ) (B,µ) ,if µ v = sgn (n+2)/2 for any v ∈ a and N (G n , χ s0 ) (B,µ) ⊂ I n (1/2, µ) pn,−-fin , if µ v = sgn (n+2)/2 for any v ∈ a.
For example, L(E − BE + ) = 0, L(E − BE + B * ) = 1, L(E + E − BB) = L(E − BBE + ) = 2. Put D 2r = w (−1) L(w) tr(w)
Let ∞ be the archimedean place of Q. Let F s be the standard section of I n−1Note that
the local intertwining integral U v (s, µ) converges absolutely for Re(s) ≫ 0. Moreover, it is holomorphic
and non-zero for Re(s) > 0. See [PSR87, pp. 91].
(g, K)-modules generated by nearly holomorphic modular forms
P. Deligne. Valeurs de fonctions L et périodes d'intégrales. In Automorphic forms, representations and L-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 2, Proc. Sympos. Pure Math., XXXIII, pages 313-346. Amer. Math. Soc., Providence, R.I., 1979. With an appendix by N. Koblitz and A. Ogus.
Thomas Enright, Roger Howe, and Nolan Wallach. A classification of unitary highest weight modules. In Representation theory of reductive groups, pages 97-143. Springer, 1983.
Shuji Horinaga. Constructions of nearly holomorphic Siegel modular forms of degree two. International Journal of Mathematics, 31(01):2050002, 2020.
Shuji Horinaga. Nearly holomorphic automorphic forms on Sp_{2n} with sufficiently regular infinitesimal characters and applications. To appear in Pacific J. of Math., 2020.
Shuji Horinaga. Nearly holomorphic automorphic forms on SL_2. Journal of Number Theory, 219:247-282, 2021.
Shuji Horinaga, Ameya Pitale, Abhishek Saha, and Ralf Schmidt. The special values of the standard L-functions for GSp_{2n} × GL_1. To appear in Trans. of Amer. Math. Soc., 2021.
James E. Humphreys. Representations of Semisimple Lie Algebras in the BGG Category O, volume 94. American Mathematical Soc., 2008.
Tamotsu Ikeda. On the location of poles of the triple L-functions. Compositio Mathematica, 83(2):187-237, 1992.
Stephen Kudla and Stephen Rallis. On the Weil-Siegel formula. J. reine angew. Math., 387:1-68, 1988.
Stephen S. Kudla and Stephen Rallis. Degenerate principal series and invariant distributions. Israel Journal of Mathematics, 69(1):25-45, 1990.
Stephen S. Kudla and Stephen Rallis. A regularized Siegel-Weil formula: the first term identity. Annals of Mathematics, 140(1):1-80, 1994.
Robert P. Langlands. On the functional equations satisfied by Eisenstein series, volume 544. Springer, 2006.
Zheng Liu. p-adic L-functions for ordinary families on symplectic groups. Journal of the Institute of Mathematics of Jussieu, 19(4):1287-1347, 2020.
Kathrin Maurischat. Casimir operators for symplectic groups. International Journal of Number Theory, 8(04):923-932, 2012.
Colette Moeglin and Jean-Loup Waldspurger. Spectral decomposition and Eisenstein series: a paraphrase of the scriptures. Number 113. Cambridge University Press, 1995.
Ilya Piatetski-Shapiro and Stephen Rallis. Rankin triple L functions. Compositio Mathematica, 64(1):31-115, 1987.
Ameya Pitale, Abhishek Saha, and Ralf Schmidt. Lowest weight modules of Sp_4(R) and nearly holomorphic Siegel modular forms. Kyoto Journal of Mathematics, 61(4):745-814, 2021.
Goro Shimura. Confluent hypergeometric functions on tube domains. Mathematische Annalen, 260(3):269-302, 1982.
Goro Shimura. Arithmeticity in the theory of automorphic forms. Number 82. American Mathematical Soc., 2000.
Hiroshi Yamashita. Highest weight vectors for the principal series of semisimple Lie groups and embeddings of highest weight modules. Journal of Mathematics of Kyoto University, 29(1):165-173, 1989.
| []
|
[
"Study on the K ondo e ect in the tunneling phenom ena through a quantum dot",
"Study on the K ondo e ect in the tunneling phenom ena through a quantum dot"
]
| [
"O Sam U Sakai \nD epartm ent ofPhysics\n192-0397Tokyo M etropol itan U niversity, TokyoJapan\n",
"Izum I Da \nD epartm ent ofPhysics\n192-0397Tokyo M etropol itan U niversity, TokyoJapan\n"
]
| [
"D epartm ent ofPhysics\n192-0397Tokyo M etropol itan U niversity, TokyoJapan",
"D epartm ent ofPhysics\n192-0397Tokyo M etropol itan U niversity, TokyoJapan"
]
| []
| Abstract We review our recent studies on the Kondo effect in the tunneling phenomena through quantum dot systems. Numerical methods to calculate reliable tunneling conductance are developed. In the first place, a case in which electrons of odd number occupy the dot is studied, and experimental results are analyzed based on the calculated result. Tunneling anomaly in the even-number-electron occupation case, which is recently observed in experiment and is ascribed to the Kondo effect in the spin singlet-triplet crossover transition region, is also examined theoretically. Key words: tunneling, quantum dot, Kondo effect, spin crossover transition, numerical renormalization group method 1 Corresponding author. Department of Physics, Tokyo Metropolitan University, | 10.1016/s0921-4526(02)01826-4 | [
"https://export.arxiv.org/pdf/cond-mat/0208505v1.pdf"
]
| 11,685,850 | cond-mat/0208505 | 1cb320909ab2bf72cc3becf52cb1463ebb9da9f3 |
Study on the Kondo effect in the tunneling phenomena through a quantum dot
27 Aug 2002
Osamu Sakai
Department of Physics
192-0397Tokyo Metropolitan University, TokyoJapan
Izumida
Department of Physics
192-0397Tokyo Metropolitan University, TokyoJapan
Study on the Kondo effect in the tunneling phenomena through a quantum dot
27 Aug 2002Project, ERATO, JST, NTT Atsugi R&D Center, Atsugi 243-0198, Japan
Abstract We review our recent studies on the Kondo effect in the tunneling phenomena through quantum dot systems. Numerical methods to calculate reliable tunneling conductance are developed. In the first place, a case in which electrons of odd number occupy the dot is studied, and experimental results are analyzed based on the calculated result. Tunneling anomaly in the even-number-electron occupation case, which is recently observed in experiment and is ascribed to the Kondo effect in the spin singlet-triplet crossover transition region, is also examined theoretically. Key words: tunneling, quantum dot, Kondo effect, spin crossover transition, numerical renormalization group method 1 Corresponding author. Department of Physics, Tokyo Metropolitan University,
Introduction
Quantum dot systems are now designed as the artificial magnetic impurity and are growing as a field of detailed experimental studies of the Kondo problems [1]. In this report we review our recent theoretical works on the Kondo effect in the tunneling phenomena through quantum dot [2-4].
After the papers pointed out the possibility to occur the Kondo effect in tunneling through a quantum dot [5], many theoretical studies have been done on this problem [6]. The calculation of the tunneling conductance needs the dynamical excitation spectra. However we have not exact analytic calculation of the dynamical excitation spectra for the Kondo systems [7]. Two numerical methods have been recently developed to calculate the tunneling conductance of the quantum dot systems. One is based on the numerical renormalization group technique (NRG) [8], and another is based on the Quantum Monte Carlo (QMC) technique [3]. Both techniques are known as reliable methods to calculate the dynamical excitation of the Kondo systems [7,9,10].
When the occupation number of electrons on the dot is odd, localized spin freedom appears on the dot, and it couples with the conduction electrons on the leads. At very low temperatures, we can expect the increase of the tunneling conductance due to the resonance transmission via the Kondo peak in the density of states on the dot orbitals. This is the most typical example of the Kondo effect of the dot systems, and has been observed in many experiments [1]. In the first part of this report, we present the theoretical calculation for this case [2] and compare it with experimental data [11]. Recently, anomaly in a region of an even electron number occupation case has been reported in experiment [12]. This phenomena is expected to relate to the Kondo effect in the spin crossover region of even occupation number case [4,13,14]. This problem is discussed in the second part of this report.
2 Single Orbital Case
We consider the following Hamiltonian,
H = H_ℓ + H_d + H_ℓd, (1)
H_ℓ = Σ_{αkσ} ε_{αk} c†_{αkσ} c_{αkσ}, (2)
H_d = Σ_{pσ} ε_{dp} n_{dpσ} + U Σ_{<pσ;p'σ'>} n_{dpσ} n_{dp'σ'}, (3)
H_ℓd = (1/√N) Σ_{pαkσ} { v_{pαk} d†_{pσ} c_{αkσ} + h.c. }. (4)
The terms H_ℓ and H_d represent the electron in the leads and the dot, respectively. The term H_ℓd gives the electron tunneling between the leads and the dot. The suffix α = L(R) means the left (right) lead and dp means the dot orbital denoted by p. The quantity ε_dp corresponds to the energy of the orbital, and it can be changed by applying gate voltage. The quantity U is the Coulomb interaction constant.
At first we consider the most simplified model that the dot has a single orbital. We abbreviate the suffix p. There will be many orbitals in dot in actual situations. Two orbitals case will be discussed in §3. We calculate the conductance, G, in the linear theory of the bias voltage. It is obtained from the correlation function of the current operators [8]. But in the single orbital case, the calculation is reduced to the following expression [3],
G = (2e²/h) ∫ [4Γ_L Γ_R/(Γ_L + Γ_R)] (−Im G_dd(ω)) (−∂f/∂ω) dω, (5)
where G_dd(ω) is Green's function of the dot orbital and f(ω) is the Fermi distribution function. We use this formula in this section because the numerical calculation of G_dd(ω) is easier than the calculation of the current correlation function. Detailed comparison of both methods are made in Ref. [3]. Hereafter we assume that the leads have constant density of states from −D to D with D = 1. The hybridization strength is parameterized as Γ_α = π|v_α|²ρ_c, where ρ_c = 1/2D is the density of states for the lead.
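The integral in Eq. (5) is easy to evaluate numerically once a model of −Im G_dd(ω) is given. The sketch below is not from the paper: it uses a simple Lorentzian resonance pinned at the Fermi level as a stand-in for the true NRG spectral function, and the width parameters, temperatures and function names are illustrative assumptions only.

```python
# Minimal sketch of the linear-response conductance formula (5),
#   G = (2e^2/h) Int 4*Gamma_L*Gamma_R/(Gamma_L+Gamma_R) * (-Im G_dd(w)) * (-df/dw) dw,
# with a model Lorentzian in place of the NRG spectral function.
import numpy as np

def conductance(spectral, T, gamma_L, gamma_R, xmax=30.0, n=4001):
    """Evaluate Eq. (5) in units of 2e^2/h.  The substitution x = w/(2T) turns
    the thermal factor -df/dw into 1/(2 cosh^2 x), which is easy to integrate."""
    x = np.linspace(-xmax, xmax, n)
    w = 2.0 * T * x
    gamma_eff = 4.0 * gamma_L * gamma_R / (gamma_L + gamma_R)
    integrand = gamma_eff * spectral(w) / (2.0 * np.cosh(x) ** 2)
    return np.trapz(integrand, x)

# Stand-in spectral function: -Im G_dd(w) modeled as a resonance of half-width
# Gamma = Gamma_L + Gamma_R at the Fermi level, normalized to 1/Gamma at w = 0.
gamma_L = gamma_R = 0.05
gamma = gamma_L + gamma_R
spectral = lambda w: (1.0 / gamma) * gamma ** 2 / (w ** 2 + gamma ** 2)

for T in [1e-4, 1e-3, 1e-2, 1e-1]:
    print(f"T = {T:.0e}:  G = {conductance(spectral, T, gamma_L, gamma_R):.3f} (2e^2/h)")
```

At temperatures much below the resonance width the quadrature returns G close to the unitary value 2e²/h, and the conductance is thermally suppressed as T grows, which is the qualitative behavior discussed below.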
In Fig. 1, we show the conductance as a function of the gate voltage (ε_d) for various temperature cases [2]. This calculation is carried out by the NRG method. At higher temperatures we have paired Coulomb oscillation peaks at ε_d = −U and 0. They grow without increase of their width, thus become very sharp as the temperature decreases. At the same time the peak positions shift slightly to the ε_d = −U/2 side. When the temperature decreases further, the intensity of the valley region between the two peaks gradually increases, and the peaks merge into a broad single peak at extremely low temperature. The characteristic temperature T_K varies drastically as ε_d changes. (We define T_K following the usual definition, T_K = 4 …(T = 0), for each ε_d case.) It is the lowest at the mid point of the two peaks, ε_d = −U/2, and is denoted as T_KM, which is T_KM/U = 5.34 × 10⁻⁷ for the parameter case used in Fig. 1. The conductance at the mid point begins to increase at about 10 T_KM, and the valley disappears at about 0.2 T_KM.
Goldhaber-Gordon et al. have shown the detailed temperature dependence of the conductance [11]. We directly compare the NRG data with experimental data in Fig. 2 [2]. The Coulomb repulsion has been estimated to be 1.9 meV. We have estimated Γ = 0.12 meV (= 1.3 × 10³ mK), and thus Γ/(πU) = 1.9 × 10⁻² in the experimental situation. On the horizontal axis the points ε_d = 0, −U of the calculated conductance are set at V_g = 119, 92 meV, respectively, and the factor 0.31 is multiplied to the NRG data to fit the experimental data at the lowest temperature. This factor 0.31 would be caused by the asymmetry Γ_L ≠ Γ_R. For the left hand side peak, the conductance data agrees very well with the experimental one in 100 mK ≤ T ≤ 1500 mK. This agreement suggests that the behaviors in experiment are caused by the Kondo effect in the mid-temperature region shown in Fig. 1. The Kondo temperature at the valley region seems to be less than 10 mK.
There are several discrepant points. At the high temperature region T ≳ 2000 mK, the conductance of the experimental data is larger than that of the NRG data. This might be caused by the multi-orbital effect. The conductance shows disagreement in the valley and the right hand peak positions. The change of the gate voltage on the dot will affect not only the potential of the dot, but also the hybridization strength between the dot and the lead states.
The conductance at low temperature is strongly suppressed by application of the Zeeman field. The theoretical study has been carried out based on the QMC method [3], and the result agrees well with the experimental one [15].
3 Spin Crossover Transition
Recently, Sasaki et al. observed low temperature tunneling anomaly in the even-number-electron-occupation case [12]. They controlled the energy splitting of orbitals by tuning the magnetic field [16]. When the level splitting is gradually increased in the even electron number case, the energy of the spin triplet state increases compared with that of the singlet state. The electrons occupy different orbitals to gain Hund's coupling energy in the former state, while the electrons with opposite spin occupy the lower energy orbital in the latter state. It was suggested that the low temperature anomaly is related to the Kondo effect due to this spin crossover transition [13,14]. However detailed calculation of the conductance has not been done. It is observed that a bump grows between the two Coulomb peaks, i.e., a third peak appears between the Coulomb peaks and it grows when the temperature decreases. This behavior is quite different from that of the odd number case discussed in the previous section.
We calculate the conductance around the singlet-triplet crossover region for a system with two orbitals [4]. The orbitals are denoted as even (p = e) and odd (p = o). The energy of the e (o) orbital is defined as ε_d − Δ/2 (ε_d + Δ/2), where Δ is the level splitting. The exchange term H_ex = J_H Σ_{σ1σ2σ3σ4} (σ)_{σ1σ2}·(σ)_{σ3σ4} d†_{eσ1} d_{eσ2} d†_{oσ3} d_{oσ4} is added to the Hamiltonian Eq. (1) (J_H < 0).
In Fig. 3(a), we show the occupation numbers of the even and odd orbitals, ⟨n_e⟩ and ⟨n_o⟩, on the dot. The parameters are J_H/U = −0.3, Γ_e/U = Γ_o/U = 0.02. We assume the case in which both leads have two conduction channels, and each dot orbital connects to each conduction channel (two channel case). The conductance is given as the sum of ones from two channels. At very low temperatures it is given as G_F2 = (2e²/h) Σ_p sin²(π⟨n_p⟩/2), and is shown in Fig. 3(b).
Here we define a quantity x = (−ε_d)/U. In the 0 ≤ x ≤ 1 region, an electron occupies the e-orbital, (⟨n_e⟩, ⟨n_o⟩) ≃ (1, 0). Next we see the 1 ≤ x ≤ 1.5 region. At Δ/U = 0.24, the occupations on both orbitals are almost the same, (⟨n_e⟩, ⟨n_o⟩) ≃ (1, 1), due to the strong Hund's coupling. When Δ increases, the occupations in the 1 ≤ x ≤ 1.5 region gradually split to (2, 0) where the local spin state is a singlet. Therefore the conductance decreases from 4e²/h to 0. We stress that the splitting is suppressed around x = 1.5 as seen in Fig. 3(a). For example at Δ/U = 0.28, the second electron tends to occupy the e-orbital when x sweeps to x = 1.2 from smaller x, then they redistribute to the e- and o-orbitals when x further sweeps to x = 1.5; (1, 0) → (1.6, 0.3) → (1.2, 0.8). This redistribution reflects the stronger Hund's coupling near x = 1.5, at which the total occupation is close to two. The redistribution causes a bump in the conductance around x = 1.5 at low temperatures, as seen in 0.276 ≤ Δ/U ≤ 0.284 in Fig. 3(b).
In Fig. 4 we show the conductance at various temperatures around the local spin singlet-triplet degeneracy. The parameters are: (a) Δ/U = 0.24, (b) Δ/U = 0.282, (c) Δ/U = 0.3. There are the four Coulomb peaks on x = (−ε_d)/U ≃ 0, 1, 2, 3 at high temperatures. When the temperature decreases, the conductance in the region 0 ≤ x ≤ 1 (2 ≤ x ≤ 3), where N_d ≃ 1 (N_d ≃ 3), increases to ≃ 2e²/h, caused by the usual spin-1/2 Kondo effect for the e-channel (o-channel). On the other hand, the region 1 ≤ x ≤ 2, where N_d ≃ 2, is remarkable. At (a) Δ/U = 0.24, the conductance is large, about 4e²/h. However, the large conductance appears at extremely low temperature (e.g., T/U = 10⁻⁸ corresponds to 10⁻³ mK for U ≃ 10 meV). When Δ increases to (b) Δ/U = 0.282, … a small value. The behaviors (a)-(c) are classified into, (a) the local spin triplet Kondo effect, (b) the singlet-triplet Kondo effect, and (c) the usual even-odd oscillations for the dot with large energy separation [1,8].
Next we consider a case in which leads have only one conduction channel, and the both dot orbitals connect to this single channel (single channel case). In such case one orbital will hybridize mainly with the even combination of L and R lead states, and another mainly with the odd combination of L and R states. The tunneling via different orbitals interferes in this case. At T = 0, the conductance is given as G_F1 = (2e²/h) sin²(π(⟨n_e⟩ − ⟨n_o⟩)/2), and is small when ⟨n_e⟩ ≃ ⟨n_o⟩. Therefore the conductance will tend to zero as T decreases to extreme low temperature when Hund's rule coupling energy dominates. This is contrasted to the case (a) in Fig. 4. In the singlet-triplet Kondo effect case where the Kondo temperature is not low, the calculated temperature dependence of the conductance is not so different from that of the two channel case (Fig. 4(b)) as seen from Fig. 5. This is because ⟨n_e⟩ and ⟨n_o⟩ have different and non-integer values as seen from Fig. 3(a).
As a summary, we have shown that the electron occupation on orbitals redistributes to gain Hund's coupling energy in the singlet-triplet crossover region when the potential deepens. This redistribution causes a bump, as seen in the experiment, in the conductance at low temperatures.
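The zero-temperature formulas quoted above, G_F2 = (2e²/h) Σ_p sin²(π⟨n_p⟩/2) for the two channel case and G_F1 = (2e²/h) sin²(π(⟨n_e⟩ − ⟨n_o⟩)/2) for the single channel case, are straightforward to evaluate once the occupations are known. The small sketch below uses made-up occupation pairs purely for illustration; in the paper the occupations come from the NRG data of Fig. 3(a).

```python
# Zero-temperature conductances from the orbital occupations (Friedel-sum-rule
# type expressions quoted in the text), in units of 2e^2/h.
import math

def g_two_channel(n_e, n_o):
    # G_F2 / (2e^2/h) = sum_p sin^2(pi <n_p> / 2)
    return sum(math.sin(math.pi * n / 2.0) ** 2 for n in (n_e, n_o))

def g_single_channel(n_e, n_o):
    # G_F1 / (2e^2/h) = sin^2( pi (<n_e> - <n_o>) / 2 )
    return math.sin(math.pi * (n_e - n_o) / 2.0) ** 2

# Illustrative occupation pairs (<n_e>, <n_o>); these are placeholders, not NRG data.
for n_e, n_o in [(1.0, 0.0), (1.0, 1.0), (1.6, 0.3), (2.0, 0.0)]:
    print(f"<n_e>={n_e:.1f} <n_o>={n_o:.1f}:  G_F2={g_two_channel(n_e, n_o):.2f}, "
          f"G_F1={g_single_channel(n_e, n_o):.2f}  (2e^2/h)")
```

The triplet-like pair (1, 1) gives G_F2 = 4e²/h but G_F1 = 0, while the singlet-like pair (2, 0) gives zero in both cases, which matches the contrast between the two channel and single channel cases described above.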
Acknowledgements
The authors would like to thank Professor Tarucha for collaboration. The numerical computation was partly performed in the Computer Center of Tohoku University and the Supercomputer Center of Institute for Solid State Physics (University of Tokyo).
" d changes. (W e de ne T K M fol l ow i ng the usualde ni ti on,T K = 4 (T = 0),for each " d case).It i s the l owest at the m i d poi nt ofthe two peaks," d = U =2,and i s denoted as T K M ,w hi ch i s T K M =U = 5: 34 10 7 for the param eter case used i n Fi g. 1. T he conductance atthe m i d poi ntbegi n to i ncrease atabout10T K M ,and the val l ey di sappears at about 0: 2T K M .
Fi g. 1 .
1C onductance as a functi on of " d for the param eter =( U ) = 1: 0 10 2 (U = 1: 0 10 2 ; = = 1: 0 10 4 ).Fi g. 2. C om pari son between the experi m ental and the cal cul ated data. T he experi m entaldata i s reproduced from Fi g. 2 of R ef.[ 11] .T he param eter of the N R G cal cul ati on i s chosen to be =( U )= 1: 9 10 2Fi g.3. (a) O ccupati on num bers ofthe e and o orbi tal s at T = 0,hn e i,hn o i,and the totaloccupati on num beron the dot,N d .(b) C onductance at T = 0 for the two channelcase,G F 2 .T hesym bol sfor are com m on i n (a)and (b).T hedashed arrow s show the i ncreasi ng of .Param eters are J H =U = 0: 3, e =U = o =U = 0: 02. data.T hi sm i ghtbecaused by them ul ti -orbi tale ect.T he conductance show s di sagreem ent i n the val l ey and the ri ght hand peak posi ti ons.T he change of the gate vol tage on the dot w i l la ect not onl y the potenti alofthe dot,but al so the hybri di zati on strength between the dot and the l ead states.
W e cal cul ate the conductance around the si ngl et-tri pl et crossover regi on for a system w i th two orbi tal s[ 4] . T he orbi tal s are denoted as even(p= e) and odd(p= o).T he energy ofe(o) orbi tali s de ned as " d =2 (" d + =2).T he exchange term H ex = J HP 1 2 3 3 ( ) 1 2 ( ) 3 4 d + e 1 d e 2 d + o 3 d o 4 i s added to the H am i l toni an Eq. (1).(J H < 0) Fi g.4. C onductance at vari ous tem peratures as a functi on ofgate vol tage i n the two channelcase.(a) =U = 0: 24,(b) =U = 0: 282, (c) =U = 0: 3.T he sym bol s of the tem perature i n (b) and (c) are com m on. In Fi g. 3(a),we show the occupati on num bers of the even and odd orbi tal s, hn e iand hn o i,on the dot.T he param etersare J H =U = 0: 3, e =U = o =U = 0: 02 . W e assum e the case i n w hi ch both l eads have two conducti on channel s,and each dotorbi talconnectsto each conducti on channel(two channelcase).T he conductance i s gi ven as the sum ofones from two channel s.A t very l ow temperatures i t i s gi ven as, G F2 = (2e 2 =h) P p si n 2 ( hn p i=2), and i s show n i n Fi g. 3(b).
Fi g. 5 .
5C onductance as a functi on ofgate vol tage i n the si ngl e channelcase.Param eters are the sam e to those ofFi g. 4(b).H ere we de ne a quanti ty x = ( " d )=U .In the 0 x 1 regi on,an el ectron occupi es the e-orbi tal , (hn e i;hn o i) ' (1;0). N ext we see the 1x 1: 5 regi on.A t =U = 0: 24,the occupati ons on both orbi tal sare al m ostthe sam e, (hn e i;hn o i)' (1;1),due to the strong H und' scoupl i ng.W hen i ncreases,the occupati ons i n the 1 x 1: 5 regi on gradual l y spl i t to (2;0)w here the l ocal spi n state i s a si ngl et.T herefore the conductance decreases from 4e 2 =h to 0. W e stress that the spl i tti ng i s suppressed around x = 1: 5 as seen i n Fi g. 3(a).Forexam pl e at =U = 0: 28,the second el ectron tends to occupy the e-orbi tal w hen x sweeps to x = 1: 2 from sm al l er x,then they redistribute to the e-and o-orbi tal s w hen x further sweeps to x = 1: 5;(1;0) ! (1: 6;0: 3) ! (1: 2;0: 8). T hi s redi stri buti on re ects the stronger H und' s coupl i ng near x = 1: 5, at w hi ch the totaloccupati on i s cl ose to two.T he redi stri buti on causes a bum p i n the conductance around x = 1: 5 at l ow tem peratures,as seen i n 0: 276 =U 0: 284 i n Fi g. 3(b).InFi g. 4 we show the conductance at vari ous tem peratures around the l ocalspi n si ngl et-tri pl et degeneracy.T he param eters are:(a) =U = 0: 24,(b), =U = 0: 282, (c) =U = 0: 3. T here are the four C oul om b peaks on x = ( " d )=U 0, 1, 2, 3 at hi gh tem peratures. W hen the tem perature decreases,the conductance i n the regi on 0 x 1 (2 x 3),w here N d ' 1 (N d ' 3),i ncreases to ' 2e 2 =h,caused by the usualspi n-1/2 K ondo e ectfor the e-channel(o-channel ).O n the other hand,the regi on 1 x 2,w here N d ' 2,i s rem arkabl e.A t (a) =U = 0: 24,the conductance i s l arge,about 4e 2 =h.H owever,the l arge conductance appearsatextrem el y l ow tem perature (e. g. ,T=U = 10 8 correspondsto 10 3 m K forU ' 10m eV ).W hen i ncreases to (b) =U = 0a sm al lval ue.T he behavi ors (a)-(c) are cl assi ed i nto,(a)the l ocalspi n tri pl etK ondo e ect,(b)the si ngl et-tri pl et K ondo e ect,and (c) the usualeven-odd osci l l ati ons for the dot w i th l arge energy separati on [ 1, 8] .
[1] D. Goldhaber-Gordon et al., Nature 391 (1998) 156; S. M. Cronenwett et al., Science 281 (1998) 540; W. G. van der Wiel et al., Science 289 (2000) 2105. [2] W. Izumida et al., J. Phys. Soc. Jpn. 70 (2001) 1045.
O. Sakai et al., J. Phys. Soc. Jpn. 68 (1999) 1640.
W. Izumida et al., Phys. Rev. Lett. 87 (2001) 216803.
L. I. Glazman et al., JETP Lett. 47 (1988) 452; T. K. Ng et al., Phys. Rev. Lett. 61 (1988) 1768; A. Kawabata, J. Phys. Soc. Jpn. 60 (1991) 3222.
A. C. Hewson, The Kondo problem to Heavy fermions (Cambridge University Press, Cambridge, 1993).
W. Izumida et al., J. Phys. Soc. Jpn. 66 (1997) 717; ibid. 67 (1998) 2444.
O. Sakai et al., J. Phys. Soc. Jpn. 58 (1989) 1690.
J. E. Gubernatis et al., Phys. Rev. B 44 (1991) 6011.
D. Goldhaber-Gordon et al., Phys. Rev. Lett. 81 (1998) 5225.
S. Sasaki et al., Nature (London) 405 (2000) 764.
M. Eto et al., Phys. Rev. Lett. 85 (2000) 1306.
M. Pustilnik et al., Phys. Rev. Lett. 85 (2000) 2993.
L. P. Rokhinson et al., Phys. Rev. B 60 (1999) R16319.
We can neglect the Zeeman splitting in this case, see for example [4].
| []
|
[
"Asymptotics of solution curves of Kirchhoff type elliptic equations with logarithmic Kirchhoff function",
"Asymptotics of solution curves of Kirchhoff type elliptic equations with logarithmic Kirchhoff function"
]
| [
"Tetsutaro Shibata \nGraduate School of Advanced Science and Engineering\nLaboratory of Mathematics\nHiroshima University\nHigashi-Hiroshima739-8527Japan\n"
]
| [
"Graduate School of Advanced Science and Engineering\nLaboratory of Mathematics\nHiroshima University\nHigashi-Hiroshima739-8527Japan"
]
| []
| We study the one-dimensional nonlocal elliptic equation where a ≥ 0, b > 0, p > 1 are given constants and λ > 0 is a bifurcation parameter. We establish the precise asymptotic formulas for u λ (x) as λ → ∞. | 10.1007/s12346-023-00762-7 | [
"https://export.arxiv.org/pdf/2210.14381v1.pdf"
]
| 253,116,674 | 2210.14381 | d225c2ce49d28bfcd0bafa6ab2d7f3ed7a73131c |
Asymptotics of solution curves of Kirchhoff type elliptic equations with logarithmic Kirchhoff function
25 Oct 2022
Tetsutaro Shibata
Graduate School of Advanced Science and Engineering
Laboratory of Mathematics
Hiroshima University
Higashi-Hiroshima739-8527Japan
Asymptotics of solution curves of Kirchhoff type elliptic equations with logarithmic Kirchhoff function
25 Oct 20222020 Mathematics Subject Classification: 34C2334F10 Keywords: nonlocal elliptic equationsbifurcation curvesasymptotic formulas
We study the one-dimensional nonlocal elliptic equationwhere a ≥ 0, b > 0, p > 1 are given constants and λ > 0 is a bifurcation parameter.We establish the precise asymptotic formulas for u λ (x) as λ → ∞.
Introduction
We consider the following one-dimensional nonlocal elliptic equation
− log(a u ′ 2 2 + b u 2 2 + 1)u ′′ (x) = λu(x) p , x ∈ I := (0, 1), u(x) > 0, x ∈ I, u(0) = u(1) = 0,
(1.1)
where a ≥ 0, b > 0, p > 1 are given constants, λ > 0 is a bifurcation parameter and ‖ · ‖_2 denotes the usual L^2-norm. Equation (1.1) is the nonlocal problem of Kirchhoff type, which is motivated by the following problem (1.2) in [7]:
−A(∫_I |u′(x)|^q dx) u″(x) = λ f(x, u(x)), x ∈ I, u(x) > 0, x ∈ I, u(0) = u(1) = 0, (1.2)
where A = A(w) ≥ 0, which is called Kirchhoff function, is a continuous function of w ≥ 0.
Nonlocal problems have been studied by many investigators, since many problems come from the phenomena of, for instance, biological problems such as population dynamics. Moreover, nonlocal problems have been also derived from numerous physical models and the other area of science, and have been studied intensively. We refer to [1-4, 6-11, 13-19], and the references therein. It seems that the main interests in this area are existence, nonexistence and the number of positive solutions. On the other hand, as far as the author knows, there are a few works which treat (1.1) as the bifurcation problems. We refer to [14,19] and the references therein. In [19], the case where A(‖u′‖_q) = a‖u′‖_2^2 + b and f(x, u) = u^p in (1.2) has been considered and the existence of a branch of positive solutions bifurcating from infinity at λ = 0 was studied. In this paper, we concentrate on the generalized nonlocal Emden-Fowler equation with logarithmic Kirchhoff function, and establish the precise global structure of solution curves u_λ(x) as λ → 0 and λ → ∞. It should be mentioned that, in many cases, the Kirchhoff function A contains only one of ‖u′‖_2 or ‖u‖_2. However, the Kirchhoff function in (1.1) contains both of them simultaneously. It seems that there are few papers which treat such problems as (1.1). Therefore, little is known about the properties of the solutions of (1.1).
To state our results, we prepare the following notation. For p > 1, let
−W ′′ (x) = W (x) p , x ∈ I, W (x) > 0, x ∈ I, W (0) = W (1) = 0. (1.3)
We know from [6] that there exists a unique solution W p (x) of (1.3). For m ≥ 1 and q ≥ 0, we put
L_{p,q} := ∫_0^1 s^q / √(1 − s^{p+1}) ds, M_{p,m} := ∫_0^1 (1 − s^{p+1})^{(m−1)/2} ds. (1.4)
Then we have
‖W′_p‖_m^m = 2^{mp/(p−1)} (p + 1)^{m/(p−1)} L_{p,0}^{(mp+m−p+1)/(p−1)} M_{p,m}, (1.5)
‖W_p‖_∞ = (2(p + 1))^{1/(p−1)} L_{p,0}^{2/(p−1)}. (1.6)
(1.5) and (1.6) have been given in [16]. For completeness, the proof of (1.5) and (1.6) will be given in Appendix.
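The constants L_{p,q}, M_{p,m} and the closed form (1.6) for ‖W_p‖_∞ are easy to evaluate numerically. The short sketch below is not part of the paper; the chosen values of p, q, m are illustrative, and adaptive quadrature is just one convenient way to handle the integrable endpoint singularity of L_{p,q}.

```python
# Numerical evaluation of the constants in (1.4) and of ||W_p||_infty from (1.6).
from scipy.integrate import quad

def L(p, q):
    # L_{p,q} = int_0^1 s^q / sqrt(1 - s^{p+1}) ds  (integrable singularity at s = 1)
    val, _ = quad(lambda s: s**q / (1.0 - s**(p + 1))**0.5, 0.0, 1.0, limit=200)
    return val

def M(p, m):
    # M_{p,m} = int_0^1 (1 - s^{p+1})^{(m-1)/2} ds
    val, _ = quad(lambda s: (1.0 - s**(p + 1))**((m - 1) / 2.0), 0.0, 1.0)
    return val

def sup_norm_W(p):
    # ||W_p||_infty = (2(p+1))^{1/(p-1)} * L_{p,0}^{2/(p-1)}  by (1.6)
    return (2.0 * (p + 1.0))**(1.0 / (p - 1.0)) * L(p, 0)**(2.0 / (p - 1.0))

p = 5.0  # illustrative exponent
print("L_{p,0} =", L(p, 0), " L_{p,2} =", L(p, 2), " M_{p,2} =", M(p, 2))
print("||W_p||_inf =", sup_norm_W(p))
print("||W_3||_inf =", sup_norm_W(3.0))   # ~ sqrt(8) * 1.311 ~ 3.71 for p = 3
```

Such a routine is also a quick sanity check for the asymptotic formulas stated below, since they are all expressed through L_{p,q}, M_{p,m} and the norms of W_p.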
In the following Theorems 1.1-1.3, we consider the case a = 0 and b, which is rewritten by d in (1.1), since it is convenient to distinguish between the constant in front of u 2 in the case a = 0 and a = 0. Namely, we first consider the equation
− log(d u 2 2 + 1)u ′′ (x) = λu(x) p , x ∈ I := (0, 1), u(x) > 0, x ∈ I. u(0) = u(1) = 0, (1.7)
where d > 0 is a given constant. Now we state our main results. Theorem 1.1. Consider (1.7). Assume that p > 3. Then for any λ > 0, there exists a unique solution u λ of (1.7). Further, as λ → ∞,
u λ (x) = λ W p 1−p 2 d 1/(3−p) 1 + 1 2(3 − p) d (p−1)/(p−3) (λ W p 1−p 2 ) 2/(3−p) (1 + o(1)) × W p −1 2 W p (x). (1.8) Moreover, as λ → 0, u λ (x) = 2 p − 1 1/(p−1) λ −1/(p−1) log 1 λ 1/(p−1) (1.9) × 1 + 1 p − 1 log log 1 λ log 1 λ (1 + o(1)) W p (x). Theorem 1.2. Consider (1.7). Assume that 1 < p < 3. Furthermore, put ν := 2 p − 1 dt (3−p)/2 2 dt 2 + 1 W p p−1 2 , (1.10)
where t_2 > 0 is a constant determined later. Then the following three cases occur: (i) If 0 < λ < ν, then there exist exactly two solutions u λ,1 and u λ,3 of (1.7) satisfying u λ,1 < u λ,3 in I. Furthermore, as λ → 0,
u λ,1 (x) = λ W p 1−p 2 d 1/(3−p) 1 + 1 2(3 − p) λ W p 1−p 2 2/(3−p) d (1−p)/(3−p) (1 + o(1)) × W p −1 2 W p (x), (1.11) u λ,3 (x) = 2 p − 1 1/(p−1) λ −1/(p−1) log 1 λ 1/(p−1) (1.12) × 1 + 1 p − 1 log(log 1 λ ) log 1 λ (1 + o(1)) W p (x).
(ii) If λ = ν, then there exists exactly one solution u λ (x) of (1.7). (iii) If λ > ν, then there exists no solution u λ (x) of (1.7).
Theorem 1.3. Consider (1.7). Let p = 3.
(i) Let λ ≥ d‖W_3‖_2^2. Then there exists no solution of (1.7).
(ii) Let 0 < λ < d‖W_3‖_2^2. Then there exists exactly one solution u_λ of (1.7).
(iii) Let 0 < λ < d‖W_3‖_2^2. Then as λ → d‖W_3‖_2^2,
u λ (x) = √ 2 d d − λ W 3 −2 2 1 + 2 3 (d − λ W 3 −2 2 )(1 + o(1)) W 3 −1 2 W 3 (x). (1.13)
Furthermore, as λ → 0,
u λ (x) = λ −1/2 log 1 λ 1/2 1 + log(log 1 λ ) 2 log 1 λ (1 + o(1)) W p (x).
(1.14)
Now we consider the equation (1.1) with a > 0 and b > 0. For this case, put
d_0 := 4aL_{p,0}^2 M_{p,2} / L_{p,2} + b. (1.15)
Namely, the solution u λ of (1.1) with a > 0 and b > 0 satisfies (1.7) with d = d 0 . Therefore, the solution u λ of (1.1) satisfies all the results in Theorems
1.1-1.3 with d 0 .
The rest of this paper is organized as follows. In Sec. 2, we prove Theorems 1.1-1.3 by using the argument in [1] and time map method (cf. [12]). In Sect. 3, we prove Theorem 1.4. The final section is the Appendix, in which the proofs of (1.5) and (1.6) will be given for the reader's convenience.
Proofs of Theorems 1.1-1.3
In this section, we consider (1.7). By [5], we know that if u λ is a solution of (1.7), then u λ satisfies
u λ (x) = u λ (1 − x), 0 ≤ x ≤ 1 2 , (2.1) α := u λ ∞ = u λ 1 2 , (2.2) u ′ λ (x) > 0, 0 ≤ x < 1 2 . (2.3)
For a given λ > 0, let w λ (x) be a unique solution of
−w ′′ (x) = λw(x) p , x ∈ I, w(x) > 0, x ∈ I, w(0) = w(1) = 0. (2.4) It is clear that w λ = λ −1/(p−1) W p .
We explain the existence of the solutions u λ of (1.7) by using the idea in [1]. We put M(t) := log(dt + 1) and consider the equation for t > 0:
M(t) = log(dt + 1) = w λ 1−p 2 t (p−1)/2 . (2.5)
Assume that t_λ > 0 satisfies (2.5). We put γ := t_λ^{1/2} ‖w_λ‖_2^{−1} and u_λ := γ w_λ = t_λ^{1/2} ‖w_λ‖_2^{−1} w_λ = t_λ^{1/2} ‖W_p‖_2^{−1} W_p. (2.6)
Then by (2.5), we have M( γw λ
2 2 ) = M(t λ ) = γ p−1 . Then we have −M( u λ 2 2 )u ′′ λ (x) = −M( γw λ 2 2 )γw ′′ λ (x) (2.7) = γ p λw p λ = λ(γw λ (x)) p = λu λ (x) p . Let f (t) := log(dt + 1) t (p−1)/2 . (2.8)
By (2.5) and (2.7), to find the solutions of (1.7), we look for solutions t λ > 0 of the following equation of t > 0:
f (t) = λ W p 1−p 2 . (2.9)
Assume that u λ is a solution of (1.7). Then u λ is a solution of (2.4) with λ/M( u λ 2 2 ). Therefore, by the uniqueness of W p in (1.3), there exists a unique constant Λ > 0 such that
u λ = Λw λ . Then we see that Λ = u λ 2 w λ −1 2 . Then we put u λ = u λ 2 w λ −1 2 w λ and t λ := u λ 2 2 . Since u λ satisfies (1.7), we have −M( u λ 2 2 ) u λ 2 w λ −1 2 w ′′ λ (x) = λ u λ p 2 w λ −p 2 w λ (x) p . (2.10)
This implies that
M(t λ ) = M( u λ 2 2 ) = u λ p−1 2 w λ 1−p 2 = λ W p 1−p 2 u λ p−1 2 = λ W p 1−p 2 t (p−1)/2 λ .
This implies (2.5). Therefore, the solution of (1.7) coincides with the solution t of (2.5).
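This scalar reduction suggests a simple numerical recipe: compute ‖W_p‖_2, then solve f(t) = λ‖W_p‖_2^{1−p} for t_λ. The sketch below assumes p > 3 (so that f is strictly decreasing, as shown in the next lemma); the value of ‖W_p‖_2 is obtained by a time-map quadrature derived from (4.6) in the Appendix rather than quoted verbatim from the text, and the parameter values are illustrative only.

```python
# Solve the scalar equation f(t) = log(d t + 1)/t^{(p-1)/2} = lambda * ||W_p||_2^{1-p}
# for t_lambda; by (2.6) the solution of (1.7) is then u_lambda = t_lambda^{1/2} W_p / ||W_p||_2.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def L(p, q):
    return quad(lambda s: s**q / np.sqrt(1.0 - s**(p + 1)), 0.0, 1.0, limit=200)[0]

def W_norms(p):
    """||W_p||_infty from (1.6) and ||W_p||_2 via the time map
    (assumption of this sketch: ||W_p||_2^2 = 2 sqrt((p+1)/2) xi^{(5-p)/2} L_{p,2})."""
    xi = (2.0 * (p + 1.0))**(1.0 / (p - 1.0)) * L(p, 0)**(2.0 / (p - 1.0))
    l2_sq = 2.0 * np.sqrt((p + 1.0) / 2.0) * xi**((5.0 - p) / 2.0) * L(p, 2)
    return xi, np.sqrt(l2_sq)

def f(t, d, p):
    return np.log(d * t + 1.0) / t**((p - 1.0) / 2.0)

def t_lambda(lam, d, p, W2):
    """Unique root of f(t) = lam * W2**(1 - p); f is strictly decreasing for p > 3."""
    rhs = lam * W2**(1.0 - p)
    return brentq(lambda t: f(t, d, p) - rhs, 1e-12, 1e12)

p, d = 5.0, 1.0                     # illustrative values with p > 3
_, W2 = W_norms(p)
for lam in [0.1, 1.0, 10.0, 100.0]:
    t = t_lambda(lam, d, p, W2)
    print(f"lambda = {lam:7.1f}:  t_lambda = {t:.4e},  ||u_lambda||_2 = {np.sqrt(t):.4e},"
          f"  amplitude t^(1/2)/||W_p||_2 = {np.sqrt(t)/W2:.4e}")
```

Comparing these roots with the expansions (2.11) and (2.12) below gives a direct numerical check of the asymptotic formulas.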
Lemma 2.1. Assume that p > 3. Then for any given λ > 0, there exists a unique t λ > 0 such that f (t λ ) = λ W p 1−p 2 . Furthermore, as λ → ∞, u λ (x) = λ W p 1−p 2 d 1/(3−p) 1 + 1 2(3 − p) d (p−1)/(p−3) (λ W p 1−p 2 ) 2/(3−p) (1 + o(1)) × W p −1 2 W p (x). (2.11)
Moreover, as λ → 0,
u λ (x) = 2 p − 1 1/(p−1) λ −1/(p−1) log 1 λ 1/(p−1) (2.12) × 1 + 1 p − 1 log log 1 λ log 1 λ (1 + o(1)) W p (x). Proof. By (2.8), we have f ′ (t) = − p − 1 2 t −(1+p)/2 log(dt + 1) + t (1−p)/2 d dt + 1 (2.13) = t −(1+p)/2 g(t),
where
g(t) := − p − 1 2 log(dt + 1) + dt dt + 1 . (2.14) g ′ (t) = d (dt + 1) 2 − p − 1 2 (dt + 1) + 1 . (2.15)
By this, we see that g′(t) < 0 for t > 0. We know that g(0) = 0. Therefore, g(t) < 0 for t > 0. By this and (2.13), we see that f′(t) < 0 for t > 0. Namely, f(t) is strictly decreasing for t > 0. Further, lim_{t→0} f(t) = ∞ and lim_{t→∞} f(t) = 0. Therefore, there exists a unique t_λ > 0 such that the equation (2.9) holds. We first assume that λ → ∞. We see that t_λ → 0 as λ → ∞. By this and Taylor expansion, we have
log(dt_λ + 1) = dt_λ − (1/2) d^2 t_λ^2 (1 + o(1)). (2.16)
By this and (2.9), we have
f (t λ ) = log(dt λ + 1) t (p−1)/2 λ = dt (3−p)/2 λ − 1 2 d 2 t (5−p)/2 λ (1 + o(1)) = λ W p 1−p 2 . (2.17)
We put
t λ = λ W p 1−p 2 d 2/(3−p) (1 + R λ ), (2.18)
where R λ is the remainder term, which satisfies R λ → 0 as λ → ∞. By this and (2.17), we have
λ W p 1−p 2 (1 + R λ ) (3−p)/2 − 1 2 d (1−p)/(3−p) (λ W p 1−p 2 ) (5−p)/(3−p) (1 + R λ ) (5−p)/2 (2.19) = λ W p 1−p 2 .
Then by (2.19) and Taylor expansion, we have
((3 − p)/2) λ‖W_p‖_2^{1−p} R_λ = (1/2) d^{(1−p)/(3−p)} (λ‖W_p‖_2^{1−p})^{(5−p)/(3−p)} (1 + o(1)). (2.20)
This implies
R_λ = (1/(3 − p)) d^{(p−1)/(p−3)} (λ‖W_p‖_2^{1−p})^{2/(3−p)} (1 + o(1)). (2.21)
By (2.6), (2.18), (2.21) and Taylor expansion, as λ → ∞, we obtain
u λ (x) = t 1/2 λ W p −1 2 W p (x) (2.22) = λ W p 1−p 2 d 1/(3−p) 1 + 1 2(3 − p) d (p−1)/(p−3) (λ W p 1−p 2 ) 2/(3−p) (1 + o(1)) × W p −1 2 W p (x).
This implies (2.11). Next, we assume that λ → 0 and show (2.12). By Fig. 3, it is clear that t λ → ∞ as λ → 0. We look for t λ of the form t λ = Cλ −2/(p−1) log 1 λ q (1 + R λ ), where C and q are constants and R λ is the remainder term satisfying R λ → 0 as λ → 0. By Taylor expansion, we have
log(dt λ + 1) = log t λ + O(1) (2.23) = 2 p − 1 log 1 λ + q log log 1 λ + R λ + O(1), t (p−1)/2 λ λ W p 1−p 2 = λ Cλ −2/(p−1) log 1 λ q (1 + R λ ) (p−1)/2 W p 1−p 2 (2.24) = W p 1−p 2 C (p−1)/2 log 1 λ q(p−1)/2 1 + p − 1 2 R λ + o(R λ ) .
This implies C = (2/(p − 1)) 2/(p−1) W p 2 2 and q = 2/(p − 1). By this, (2.23) and (2.24), we have
2 p − 1 log log 1 λ + R λ + O(1) = R λ (1 + o(1)) log 1 λ . (2.25)
This implies that
R_λ = (2/(p − 1)) (log log(1/λ) / log(1/λ)) (1 + o(1)). (2.26)
By this and (2.6), as λ → 0,
u λ (x) = t 1/2 λ W p −1 2 W p (x) (2.27) = C 1/2 λ −1/(p−1) log 1 λ q/2 (1 + R λ + o(R λ )) 1/2 W p −1 2 W p (x) = 2 p − 1 1/(p−1) λ −1/(p−1) log 1 λ 1/(p−1) 1 + 1 p − 1 log log 1 λ log 1 λ (1 + o(1)) W p (x).
This implies (2.12). Thus the proof is complete.
We obtain Theorem 1.1 by Lemma 2.1. We next prove Theorem 1.2.
Lemma 2.2. Assume that 1 < p < 3. Then there exists a constant ν and the following three cases occur. (i) if 0 < λ < ν, then there exist two solutions u λ,1 and u λ,3 of (1.7). (ii) If λ = ν, then there exists exactly one solution u λ (x) of (1.7). (iii) If λ > ν, then there exists no solution u λ (x) of (1.7). Proof. By (2.15), we see that if t 0 = (3 − p)/(d(p − 1)), then g ′ (t 0 ) = 0. This implies that g(t) is increasing in 0 < t < t 0 and attains the maximum at t = t 0 and decreasing in t > t 0 . Since g(0) = 0, g(t 0 ) > 0 and g(t) → −∞ as t → ∞, we see that there exists t = t 2 such that g(t 2 ) = 0. Namely, f ′ (t 2 ) = 0. We know that lim x→0 f (x) = 0 and lim x→∞ f (x) = 0. Furthermore, f (t) attains the maximum that t = t 2 (cf. Fig. 4 below). By (2.13) and (2.14), we see that t 2 satisfies log(dt 2 + 1) = 2 p − 1 dt 2 dt 2 + 1 .
(2.28)
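The turning point t_2 defined by (2.28), and hence the fold value ν of (1.10)/(2.29), can be computed numerically. The short sketch below does this for illustrative values of p and d; it reports f(t_2), which by (2.29) equals ν‖W_p‖_2^{1−p}, so the factor ‖W_p‖_2^{p−1} is left symbolic here.

```python
# Turning point t_2 of f for 1 < p < 3: the positive root of (2.28),
#   log(d t + 1) = (2/(p-1)) * d t / (d t + 1).
import numpy as np
from scipy.optimize import brentq

def t2(d, p):
    h = lambda t: np.log(d * t + 1.0) - 2.0 * d * t / ((p - 1.0) * (d * t + 1.0))
    # h(0) = 0, h decreases on (0, t_0) with t_0 = (3-p)/(d(p-1)) and then
    # increases to +infinity, so the positive root lies to the right of t_0.
    return brentq(h, (3.0 - p) / (d * (p - 1.0)), 1e8)

p, d = 2.0, 1.0                       # illustrative values with 1 < p < 3
t_2 = t2(d, p)
f_t2 = (2.0 / (p - 1.0)) * d * t_2**((3.0 - p) / 2.0) / (d * t_2 + 1.0)
print(f"t_2 = {t_2:.6f},  f(t_2) = {f_t2:.6f}   (= nu * ||W_p||_2^(1-p) by (2.29))")
```

For λ below the resulting ν the equation f(t) = λ‖W_p‖_2^{1−p} has the two roots t_1 < t_2 < t_3 used in the proof, and for λ above it there is none, which is exactly the fold structure described in Lemma 2.2.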
We note that there exists exactly one t 2 satisfying (2.28). The reason is simple. We consider the graph of h(x)
:= log(x + 1) − 2x (p−1)(x+1) . Then h ′ (x) = 1 (x+1) 2 (x − 3−p p−1 )
. Therefore, h(x) is strictly decreasing in 0 < x < (3 − p)/(p − 1) and strictly increasing in x > (3 − p)/(p − 1). Further, h(0) = 0 and lim x→∞ h(x) = ∞. Therefore, there exists a unique t 2 such that (3 − p)/(p − 1) < t 2 < C satisfying (2.28), since (2.28) does not hold for t 2 ≫ 1. By this, (2.8) and (2.28), we have
f (t 2 ) = 2 p − 1 dt (3−p)/2 2 dt 2 + 1 = ν W p 1−p 2 , (2.29) where ν := 2 p − 1 dt (3−p)/2 2 dt 2 + 1 W p p−1 2 . If 0 < λ < ν, then there exist exactly two t 1 < t 3 such that f (t j ) = λ W p p−1 2 (j = 1, 3).
This implies that there exist exactly two solutions of (1.7), u λ,1 and u λ,3 , corresponding to t 1 and t 3 . Similarly, if λ = ν, then there exists one solution of (1.7) and if λ > ν, then there exist no solutions of (1.7). Thus the proof is complete. Now we consider the asymptotic behavior of u λ,1 and u λ,3 as λ → 0. Lemma 2.3. Assume that 1 < p < 3. Then as λ → 0,
u λ,1 (x) = λ W p 1−p 2 d 1/(3−p) 1 + 1 2(3 − p) λ W p 1−p 2 2/(3−p) d (1−p)/(3−p) (1 + o(1)) × W p −1 2 W p (x), (2.30) u λ,3 (x) = 2 p − 1 1/(p−1) λ −1/(p−1) log 1 λ 1/(p−1) (2.31) × 1 + 1 p − 1 log(log 1 λ ) log 1 λ (1 + o(1)) W p (x).
Proof. Since 1 < p < 3, and t 1 < t 2 < t 3 , we see from Fig. 4 that t 1 → 0 and t 3 → ∞ as λ → 0. We first prove (2.30). By (2.4) and Taylor expansion, we have
dt 1 − (dt 1 ) 2 /2 + o(t 2 1 ) t (p−1)/2 1 = dt (3−p)/2 1 − 1 2 d 2 t (5−p)/2 1 (1 + o(1)) = λ W p 1−p 2 .
(2.32)
By this, we have
t 1 = λ W p 1−p 2 d 2/(3−p) (1 + η),(2.33)
where η → 0 as λ → 0. By (2.32), (2.33) and Taylor expansion, we have
λ W p 1−p 2 (1 + η) (3−p)/2 − 1 2 d 2 λ W p 1−p 2 d (5−p)/(3−p) (1 + o(1)) (2.34) = λ W p 1−p 2 1 + 3 − p 2 η + o(η) − 1 2 d 2 λ W p 1−p 2 d (5−p)/(3−p) (1 + o(1)) = λ W p 1−p 2 .
This implies that
η = (1/(3 − p)) (λ‖W_p‖_2^{1−p})^{2/(3−p)} d^{(1−p)/(3−p)} (1 + o(1)). (2.35)
By (2.6), (2.33), (2.35) and Taylor expansion, we have
u λ,1 (x) = t 1/2 1 W p −1 2 W p (x) (2.36) = λ W p 1−p 2 d 1/(3−p) 1 + 1 2 η(1 + o(1)) W p −1 2 W p (x) = λ W p 1−p 2 d 1/(3−p) 1 + 1 2(3 − p) λ W p 1−p 2 2/(3−p) d (1−p)/(3−p) (1 + o(1)) × W p −1 2 W p (x).
This implies (2.30). We next prove (2.31). Since t 3 → ∞ as λ → 0, by (2.8) and (2.9) and Taylor expansion, we have log(bt 3 + 1) = log t 3 + O(1) = t
(p−1)/2 3 λ W p 1−p 2 . (2.37)
Then the situation is the same as that of (2.23), (2.24) and (2.26). Therefore, by the same argument as that to obtain (2.12), we obtain (2.31). So we omit the proof. Thus the proof is complete.
Lemma 2.4. Assume that p = 3. (i) Let λ ≥ d‖W_3‖_2^2. Then there exists no solution of (1.7). (ii) Let 0 < λ < d‖W_3‖_2^2. Then there exists exactly one solution u_λ of (1.7). (iii) Let 0 < λ < d‖W_3‖_2^2. Then as λ → d‖W_3‖_2^2,
u_λ(x) = (√2/d) √(d − λ‖W_3‖_2^{−2}) [1 + (2/3)(d − λ‖W_3‖_2^{−2})(1 + o(1))] ‖W_3‖_2^{−1} W_3(x). (2.38)
Furthermore, as λ → 0,
u λ (x) = λ −1/(2) log 1 λ 1/2 1 + log(log 1 λ ) 2 log 1 λ (1 + o(1)) W p (x).
(2.39)
Proof. We first prove (i) and (ii). By (2.8), we have
f (t) = log(dt + 1) t . (2.40)
Then by (2.14) and (2.15), for t > 0, we have
g(t) = − log(dt + 1) + dt dt + 1 , (2.41) g ′ (t) = − d 2 t (dt + 1) 2 < 0.
(2.42) By this, g(t) is strictly decreasing for t ≥ 0 and g(0) = 0. This implies that g(t) < 0 for t > 0. By this and (2.13), f ′ (t) < 0 for t > 0 and f (0) = lim t→0 f (t) = d > 0. Further, f (t) → 0 as t → ∞. By this and (2.9), we see that if λ W 3 −2 2 ≥ d, then there exists no solution t λ > 0 of (2.9). Further, if 0 < λ W p −2 2 < d, then there exists exactly one solution t λ > 0 of (2.9). Therefore, we obtain (i) and (ii).
We next prove (iii). In this case, it is clear that t λ → 0 as λ → d W 3 −2 2 . By this, (2.8), (2.9) and Taylor expansion, we have
f (t λ ) = log(dt λ + 1) t λ = dt λ − (1/2)d 2 t 2 λ + (1/3)d 3 t 3 λ + O(t 4 λ ) t λ (2.43) = d − 1 2 d 2 t λ + 1 3 d 3 t 2 λ + O(t 3 λ ) = λ W 3 −2 2 .
This implies that
t_λ = (2/d^2)(d − λ‖W_3‖_2^{−2}) + R, (2.44)
where R denotes a higher order remainder term. By this and (2.6), we have
u_λ(x) = t_λ^{1/2} ‖W_3‖_2^{−1} W_3(x) (2.46)
= (√2/d) √(d − λ‖W_3‖_2^{−2}) [1 + (2/3)(d − λ‖W_3‖_2^{−2})(1 + o(1))] ‖W_3‖_2^{−1} W_3(x).
This implies (2.39). Finally, we see that the argument as that to obtain (1.12) is available in the case p = 3. Therefore, (1.14) follows from (1.12) by putting p = 3. Thus the proof is complete. Proof. We put H := log(a u ′ λ 2 + b u λ 2 2 + 1). By (1.1), we have
Hu ′′ λ (x) + λu λ (x) p = 0. (3.3)
This implies
{Hu ′′ λ (x) + λu λ (x) p }u ′ λ (x) = 0. (3.4)
We recall that α = u λ ∞ , which is defined in (2.2). By this, (2.2) and putting x = 1/2, for x ∈Ī, we have
1 2 Hu ′ λ (x) 2 + 1 p + 1 λu λ (x) p = constant = 1 p + 1 λα p+1 . (3.5)
By this and (2.3), for 0 ≤ x ≤ 1/2, we have
u ′ λ (x) = k(α p+1 − u λ (x) p+1 ),(3.6)
where k := 2λ/(H(p + 1)). By this, (1.4), (2.1), (2.3) and putting u λ = θ = αs, we have We obtain Lemma 3.2 by the same argument as that to prove Lemma 3.1. Therefore, we omit the proof. Therefore, the solution u λ of (1.7) is also the solution of (1.1), since (3.14) holds. Therefore, we are able to apply the argument in Section 2 and obtain Theorem 1.4. Thus the proof is complete. This implies (1.5). Thus the proof is complete.
u ′ λ 2 2 = 2 1/2 0 k(α p+1 − u λ (x) p+1 )u ′ λ (x)dx (3.7) = 2 α 0 k(α p+1 − θ p+1 )dθ = 2 √ kα (p+3)/2 1 0 √ 1 − s p+1 ds = 2 √ kM p,2 α (p+3)/2 , u λ 2 2 = 2 1/2 0 u λ (x) 2 u ′ λ (x) k(α p+1 − u λ (x) p+1 ) dx (3.8) = 2 α 0 θ 2 k(α p+1 − θ p+1 ) dθ = 2 √ k α (5−p)/2 1 0 s 2 √ 1 − s p+1 ds = 2 √ k L p,2 α (5−p)/2 . 1 2 = 1/2 0 u ′ λ (x) k(α p+1 − u λ (x) p+1 ) dx (3.9) = 1 √ k α 0 1 √ α p+1 − θ p+1 dθ = 1 √ k α (1−p)/2 1 0 1 √ 1 − s p+1 ds = 1 √ k L p,0 α (1−p)/2 .
E-mail: [email protected]. This work was supported by JSPS KAKENHI Grant Number JP21K03310.
3 Proof of Theorem 1.4
In this section, we consider (1.1). Let a > 0 and b > 0 in (1.1). We show that (1.1) is equivalent to (1.7) if we put
d = d_0 := a · 4L_{p,0}^2 M_{p,2} / L_{p,2} + b. (3.1)
To do this, we need two lemmas.
Lemma 3.1. Assume that u_λ satisfies (1.1). Then
‖u′_λ‖_2^2 = (4L_{p,0}^2 M_{p,2} / L_{p,2}) ‖u_λ‖_2^2. (3.2)
By this and (4.2), we have
(1/2) W′_p(x)^2 + (1/(p + 1)) W_p(x)^{p+1} = constant = (1/(p + 1)) W_p(1/2)^{p+1} = (1/(p + 1)) ξ^{p+1}. (4.5)
By this and (4.3), for 0 ≤ x ≤ 1/2, we have
W′_p(x) = √( (2/(p + 1)) (ξ^{p+1} − W_p(x)^{p+1}) ). (4.6)
By (4.1) and (4.6), we have, using θ = ξs,
1/2 = ∫_0^{1/2} 1 dx = ∫_0^{1/2} W′_p(x) …
C. O. Alves, F. J. S. A. Corréa and T. F. Ma, Positive solutions for a quasilinear elliptic equation of Kirchhoff type. Comput. Math. Appl. 49 (2005), 85-93.
B. Cheng, New existence and multiplicity of nontrivial solutions for nonlocal elliptic Kirchhoff type problems, J. Math. Anal. Appl. 394 (2012), no. 2, 488-495.
F. J. S. A. Corrêa, On positive solutions of nonlocal and nonvariational elliptic problems, Nonlinear Anal. 59 (2004), 1147-1155.
F. J. S. A. Corrêa and D. C. de Morais Filho, On a class of nonlocal elliptic problems via Galerkin method. J. Math. Anal. Appl. 310 (2005), no. 1, 177-187.
B. Gidas, W. M. Ni and L. Nirenberg, Symmetry and related properties via the maximum principle. Comm. Math. Phys. 68 (1979), 209-243.
C. S. Goodrich, A one-dimensional Kirchhoff equation with generalized convolution coefficients. J. Fixed Point Theory Appl. 23 (2021), no. 4, Paper No. 73, 23 pp.
C. S. Goodrich, A topological approach to nonlocal elliptic partial differential equations on an annulus. Math. Nachr. 294 (2021), 286-309.
C. S. Goodrich, Differential equations with multiple sign changing convolution coefficients. Internat. J. Math. 32 (2021), no. 8, Paper No. 2150057, 28 pp.
C. S. Goodrich, An analysis of nonlocal difference equations with finite convolution coefficients. J. Fixed Point Theory Appl. 24 (2022), no. 1, Paper No. 1, 19 pp.
Z. Liang, F. Li and J. Shi, Positive solutions of Kirchhoff-type non-local elliptic equation: a bifurcation approach, Proc. Roy. Soc. Edinburgh Sect. A 147 (2017), no. 4, 875-894.
F. Liu, H. Luo and G. Dai, Global bifurcation and nodal solutions for homogeneous Kirchhoff type equations, Electron. J. Differential Equations 2020 (29) (2020), pp. 1-13.
T. Laetsch, The number of solutions of a nonlinear two point boundary value problem, Indiana Univ. Math. J. 20 (1970/1971), 1-13.
Z. Liang, F. Li and J. Shi, Positive solutions to Kirchhoff type equations with nonlinearity having prescribed asymptotic behavior. Ann. Inst. H. Poincaré C Anal. Non Linéaire 31 (2014), no. 1, 155-167.
O. Méndez, On the eigenvalue problem for a class of Kirchhoff-type equations, J. Math. Anal. Appl. 494 (2021), no. 2, Paper No. 124671, 15 pp.
T. Shibata, Bifurcation diagrams of one-dimensional Kirchhoff type equations, Adv. Nonlinear Anal. 12 (2023), 356-368.
T. Shibata, Global and asymptotic behaviors of bifurcation curves of one-dimensional nonlocal elliptic equations, J. Math. Anal. Appl. 516 (2022), no. 2, 126525.
T. Shibata, Asymptotic behavior of solution curves of nonlocal one-dimensional elliptic equations, Bound. Value Probl. (2022), Paper No. 63.
R. Stańczy, Nonlocal elliptic equations, Nonlinear Anal. 47 (2001), 3579-3584.
W. Wang and W. Tang, Bifurcation of positive solutions for a nonlocal problem, Mediterr. J. Math. 13 (2016), 3955-3964.
| []
|
[
"On the notion of persistence of excitation for linear switched systems",
"On the notion of persistence of excitation for linear switched systems",
"On the notion of persistence of excitation for linear switched systems",
"On the notion of persistence of excitation for linear switched systems"
]
| [
"Mihály Petreczky \nMaastricht University\nP.O. Box 6166200 MDMaastrichtThe Netherlands\n\nUniv Lille Nord de France\nFrance, and EMDouai, IAF-59000, F-59500Lille, DouaiFrance\n",
"Laurent Bako [email protected]. ",
"M Petreczky@maastrichtuniversity ",
"Nl ",
"Mihály Petreczky \nMaastricht University\nP.O. Box 6166200 MDMaastrichtThe Netherlands\n\nUniv Lille Nord de France\nFrance, and EMDouai, IAF-59000, F-59500Lille, DouaiFrance\n",
"Laurent Bako [email protected]. ",
"M Petreczky@maastrichtuniversity ",
"Nl "
]
| [
"Maastricht University\nP.O. Box 6166200 MDMaastrichtThe Netherlands",
"Univ Lille Nord de France\nFrance, and EMDouai, IAF-59000, F-59500Lille, DouaiFrance",
"Maastricht University\nP.O. Box 6166200 MDMaastrichtThe Netherlands",
"Univ Lille Nord de France\nFrance, and EMDouai, IAF-59000, F-59500Lille, DouaiFrance"
]
| []
| The paper formulates the concept of persistence of excitation for discrete-time linear switched systems, and provides sufficient conditions for an input signal to be persistently exciting. Persistence of excitation is formulated as a property of the input signal, and it is not tied to any specific identification algorithm. The results of the paper rely on realization theory and on the notion of Markov-parameters for linear switched systems. | 10.1016/j.nahs.2022.101308 | [
"https://arxiv.org/pdf/1103.1349v1.pdf"
]
| 1,645,399 | 1103.1349 | 75057ff6f31332e8eba8bd213f058a25282a43cf |
On the notion of persistence of excitation for linear switched systems
7 Mar 2011
Mihály Petreczky
Maastricht University
P.O. Box 6166200 MDMaastrichtThe Netherlands
Univ Lille Nord de France
France, and EMDouai, IAF-59000, F-59500Lille, DouaiFrance
Laurent Bako [email protected].
M Petreczky@maastrichtuniversity
Nl
On the notion of persistence of excitation for linear switched systems
7 Mar 2011
The paper formulates the concept of persistence of excitation for discrete-time linear switched systems, and provides sufficient conditions for an input signal to be persistently exciting. Persistence of excitation is formulated as a property of the input signal, and it is not tied to any specific identification algorithm. The results of the paper rely on realization theory and on the notion of Markov-parameters for linear switched systems.
Introduction
The paper formulates the concept of persistence of excitation for discrete-time linear switched systems (abbreviated by DTLSSs). DTLSSs are one of the simplest and best studied classes of hybrid systems, [23]. A DTLSS is a discrete-time switched system, such that the continuous sub-system associated with each discrete state is linear. The switching signal is viewed as an external input, and all linear systems live on the same input-outputand state-space.
We define persistence of excitation for input signals. More precisely, we will call an input signal persistently exciting for an input-output map, if the response of the inputoutput map to that particular input determines the input-output map uniquely. In other words, the knowledge of the output response to a persistently exciting input should be sufficient to predict the response to any input.
Persistence of excitation is essential for system identification and adaptive control. Normally, in system identification the system of interest is tested only for one input sequence. One of the reason for this is that our notion of the system entails a fixed initial state. However, any experiment changes that particular initial state and it is in general not clear how to reset the system to a particular initial state. The objective is to find a system model based on the response to the chosen input. However, the knowledge of a model of the system immediately implies that the response of the system to any input is known. Hence, intuitively it is clear that persistence of excitation of the input signal is a prerequisite for a successful identification of a model.
Note that persistence of excitation is a joint property of the input and of the inputoutput map. That is, a particular input might be persistently exciting for a particular system and might fail to be persistently exciting for another system. In fact, it is not a priori clear if any system admits a persistently exciting input. This calls for investigating classes of inputs which are persistently exciting for some broad classes of systems.
In the existing literature, persistence of excitation is often defined as a specific property of the measurements which is sufficient for the correctness of some identification algorithm. In contrast, in this paper we propose a definition of persistence of excitation which is necessary for the correctness of any identification algorithm. 1 Obviously, the two approaches are complementary. In fact, we hope that the results of this paper can serve as a starting point to derive persistence of excitation conditions for specific identification algorithms.
Contribution of the paper We define persistence of excitation for finite input sequences and persistence of excitation for infinite input sequences.
We show that for every input-output map which is realizable by a reversible DTLSS, there exists a finite input sequence which is persistently exciting for that particular inputoutput map. A reversible DTLSS is a DTLSS continuous dynamics of which is invertible.
Such systems arise naturally by sampling continuous-time systems. In addition, we define the class of reversible input-output maps and show that there is a finite input sequence which is persistently exciting for all the input-output maps of that class. Moreover, we present a procedure for constructing such an input sequence.
We show that there exists a class of infinite input sequences which are persistently exciting for all the input-output maps which are realizable by a stable DTLSS. The conditions which the input sequence must satisfy is that each subsequence occurs there infinitely often (i.e. the switching signal is rich enough) and that the continuous input is a colored noise. Hence, this result is consistent with the classical result for linear systems.
It might be appealing to interpret the conditions above as ones which ensure that one stays in every discrete mode long enough and the continuous input is persistently exciting in the classical sense. One could then try to identify the linear subsystems separately and merge the results. Unfortunately, such an interpretation is in general incorrect. The reason for this is that there exists a broad class of input-output maps which can be realized by a linear switched system but not by a switched system whose linear subsystems are minimal, [20]. The above scheme obviously would not work for such systems. In fact, for such systems one has to test the system's response not only for each discrete mode, but for each combination of discrete modes.
The main idea behind the definition of persistence of excitation and the subsequent results is as follows. From realization theory [20] we know that the knowledge of (finitely many) Markov-parameters of the input-output map is sufficient for computing a DTLSS realization of that map. Hence, if the response of the input-output map to a particular input allows us to compute the necessary Markov-parameters, then we can compute a DTLSS representation of that map. This can serve as a definition of persistence of excitation. We call a input sequence persistently exciting, if the Markov-parameters of the input-output map can be computed from the response of the map to that input. We call an infinite sequence input persistently exciting, if from a large enough finite initial part of the response one can compute an arbitrarily precise approximation of the Markov-parameters.
Since the realization algorithm for DTLSS is continuous in the Markov-parameters, it means that a persistently exciting infinite input sequence allows the computation of an arbitrarily precise approximation of a DTLSS realizing the input-output map.
Motivation of the system class The class of DTLSSs is the simplest and perhaps the best studied class of hybrid systems. In addition to its practical relevance, it also serves as a convenient starting point for theoretical investigations. In particular, any piecewiseaffine hybrid system can be viewed as a feedback interconnection of a DTLSS with an event generating device. Hence, identification of a piecewise-affine system is related to the problem of closed-loop identification of a DTLSS. For the latter, it is indispensable to have a good notion of persistence of excitation. For this reason, we believe that the results of the paper will be relevant not only for identification of DTLSSs, but also for identification of piecewise-affine hybrid systems with autonomous switching.
Related work Identification of hybrid systems is an active research area, with several significant contributions [26,16,9,15,17,25,12,5,13,11,9,3,2,22,6,24,18,1].
While enormous progress was made in terms of efficient identification algorithms, the fundamental theoretical limitations and properties of these algorithms are still only partially understood. Persistence of excitation of hybrid systems were already addressed in [26,25,27,24,10]. However, the conditions of those papers are more method-specific and their approach is quite different from the one we propose. For linear systems, persistence of excitation has thouroughly been investigated, see for example [14,28] and the references therein.
Outline of the paper §2 presents the formal definition of DTLSSs and it formulates the major system-theoretic concepts for this system class. §3 presents a brief overview of realization theory for DTLSSs. §4 presents the main contribution of the paper.
Notation Denote by N the set of natural numbers including 0. The notation described below is standard in automata theory, see [8,4]. Consider a set X which will be called the alphabet. Denote by X * the set of finite sequences of elements of X. Finite sequences of elements of X are referred to as strings or words over X. Each non-empty word w is of the form w = a_1 a_2 · · · a_k for some a_1, a_2, . . . , a_k ∈ X. The element a_i is called the ith letter of w, for i = 1, . . . , k and k is called the length of w. We denote by ǫ the empty sequence (word). The length of word w is denoted by |w|; note that |ǫ| = 0. We denote by X + the set of non-empty words, i.e. X + = X * \ {ǫ}. We denote by wv the concatenation of word w ∈ X * with v ∈ X * . For each j = 1, . . . , m, e_j is the jth unit vector of R^m, i.e. e_j = (δ_{1,j}, . . . , δ_{m,j}), where δ_{i,j} is the Kronecker symbol.
Linear switched systems
In this section we present the formal definition of DTLSSs along with a number of relevant system-theoretic concepts for DTLSSs . Definition 1. Recall from [19] that a discrete-time linear switched system (abbreviated by DTLSS), is a discrete-time control system of the form
Σ x t+1 = A qt x t + B qt u t and x 0 = 0 y t = C qt x t .(1)
Here Q = {1, . . . , D} is the finite set of discrete modes, D is a positive integer, q t ∈ Q is the switching signal, u t ∈ R is the continuous input, y t ∈ R p is the output and A q ∈ R n×n , B q ∈ R n×m , C q ∈ R p×n are the matrices of the linear system in mode q ∈ Q.
Throughout the section, Σ denotes a DTLSS of the form (1). The inputs of Σ are the continuous inputs {u t } ∞ t=0 and the switching signal {q t } ∞ t=0 . The state of the system at time t is x t . Note that any switching signal is admissible and that the initial state is assumed to be zero. We use the following notation for the inputs of Σ.
Notation 1 (Hybrid inputs). Denote U = Q × R m .
We denote by U * (resp. U + ) the set of all finite (resp. non-empty and finite) sequences of elements of U . A sequence
w = (q_0, u_0) · · · (q_t, u_t) ∈ U^+,  t ≥ 0    (2)
describes the scenario, when the discrete mode q i and the continuous input u i are fed to Σ at time i, for i = 0, . . . , t.
Definition 2 (State and output). Consider a state x init ∈ R n . For any w ∈ U + of the form (2), denote by x Σ (x init , w) the state of Σ at time t + 1, and denote by y Σ (x init , w)
the output of Σ at time t, if Σ is started from x init and the inputs {u i } t i=0 and the discrete modes {q i } t i=0 are fed to the system.
That is, x Σ (x init , w) is defined recursively as follows; x Σ (x init , ǫ) = x init , and if w = v(q, u) for some (q, u) ∈ U , v ∈ U * , then
x Σ (x init , w) = A q x Σ (x init , v) + B q u.
If w ∈ U + and w = v(q, u), (q, u) ∈ U , v ∈ U * , then
y Σ (x init , w) = C q x Σ (x init , v).
Definition 3 (Input-output map). The map y_Σ : U^+ → R^p, defined by y_Σ(w) = y_Σ(x_0, w) for all w ∈ U^+, is called the input-output map of Σ.
That is, the input-output map of Σ maps each sequence w ∈ U + to the output generated by Σ under the hybrid input w, if started from the zero initial state. The definition above implies that the input-output behavior of a DTLSS can be formalized as a map
f : U^+ → R^p.    (3)
The value f(w) for w of the form (2) represents the output of the underlying black-box system at time t, if the continuous inputs {u_i}_{i=0}^t and the switching sequence {q_i}_{i=0}^t are fed to the system. Next, we define when a general map f of the form (3) is adequately described by the DTLSS Σ, i.e. when Σ is a realization of f.

Definition 4 (Realization). The DTLSS Σ is a realization of an input-output map f of the form (3), if f equals the input-output map of Σ, i.e. f = y_Σ.

For the notions of observability and span-reachability of DTLSSs we refer the reader to [20,23].
Definition 5 (Dimension). The dimension of Σ, denoted by dim Σ, is the dimension n of its state-space.
Definition 6 (Minimality). Let f be an input-output map. Then Σ is a minimal realization of f, if Σ is a realization of f and, for any DTLSS Σ̂ which is a realization of f, dim Σ ≤ dim Σ̂.
Overview of realization theory
Below we present an overview of the results on realization theory of DTLSSs along with the concept of Markov-parameters. For more details on the topic see [20]. In the sequel, Σ denotes a DTLSS of the form (1), and f denotes an input-output map f : U + → R p .
For our purposes the most important result is the one which states that a DTLSS realization of f can be computed from the Markov-parameters of f. In order to present this result, we need to define the Markov-parameters of f formally. Denote Q^{k,*} = {w ∈ Q^* | |w| ≥ k}. Define the maps S^f_j : Q^{2,*} → R^p, j = 1, . . . , m, as follows; for any v = σ_1 . . . σ_{|v|} ∈ Q^* with σ_k ∈ Q, and for any q, q_0 ∈ Q,
S^f_j(q_0 v q) = f((q_0, e_j)(q, 0))  if v = ǫ,  and  S^f_j(q_0 v q) = f((q_0, e_j)(σ_1, 0) . . . (σ_{|v|}, 0)(q, 0))  if |v| ≥ 1,    (4)
where e_j ∈ R^m is the vector with 1 as its jth entry and zero everywhere else. The collection of maps {S^f_j}_{j=1}^m is called the Markov-parameters of f. The functions S^f_j, j = 1, . . . , m, can be viewed as input responses. The interpretation of S^f_j will become more clear after we define the concept of a generalized convolution representation. Note that the values of the Markov-parameters can be obtained from the values of f.
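As an illustration of how the Markov-parameters can be read off from f via (4), the following sketch (our own code; `f` is assumed to be a black-box Python callable mapping a list of pairs (q_t, u_t) to the output vector in R^p) queries f with the impulse-type inputs appearing in (4).

```python
import numpy as np

def markov_parameter(f, q0, v, q, m):
    """Return S^f(q0 v q) as a p x m matrix, following equation (4).

    f  : callable mapping a list of hybrid inputs [(q_t, u_t), ...] to y in R^p
    q0 : first discrete mode, v : tuple of intermediate modes, q : last mode
    """
    cols = []
    for j in range(m):
        e_j = np.zeros(m); e_j[j] = 1.0
        # S^f_j(q0 v q) = f((q0, e_j)(sigma_1, 0) ... (sigma_|v|, 0)(q, 0))
        word = [(q0, e_j)] + [(sigma, np.zeros(m)) for sigma in v] + [(q, np.zeros(m))]
        cols.append(f(word))
    return np.stack(cols, axis=1)    # column j holds S^f_j(q0 v q)
```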
Notation 2 (Sub-word). Consider the sequence v = q 0 · · · q t ∈ Q + , q 0 , . . . , q t ∈ Q, t ≥ 0. For each j, k ∈ {0, . . . , t}, define the word v j|k ∈ Q * as follows; if j > k, then v j|k = ǫ, if j = k, then v j|j = q j and if j < k, then v j|k = q j q j+1 · · · q k . That is, v j|k is the sub-word
of v formed by the letters from the jth to the kth letter.
Definition 7 (Convolution representation). The input-output map f has a generalized convolution representation (abbreviated as GCR), if for all w ∈ U + of the form (2), f (w) can be expressed via the Markov-parameters of f as follows.
f(w) = Σ_{k=0}^{t-1} S^f(q_k · v_{k+1|t-1} · q_t) u_k,  where S^f(w) = [S^f_1(w) . . . S^f_m(w)] ∈ R^{p×m} for all w ∈ Q^*, and v = q_0 q_1 · · · q_t denotes the word formed by the discrete modes of w.
Remark 1. If f has a GCR, then the Markov-parameters of f determine f uniquely.
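The following sketch (again our own illustration) evaluates a GCR: given a callable `S` returning the p × m matrix S^f(w) for a word of modes w, it assembles f(w) exactly as in Definition 7.

```python
import numpy as np

def evaluate_gcr(S, w):
    """Evaluate f(w) via the generalized convolution representation.

    S : callable mapping a tuple of modes (q_k, ..., q_t) to the p x m matrix S^f of that word
    w : list of hybrid inputs [(q_0, u_0), ..., (q_t, u_t)]
    """
    modes = [q for q, _ in w]
    t = len(w) - 1
    p = S((modes[0], modes[-1])).shape[0]
    y = np.zeros(p)
    for k in range(t):                 # u_t itself does not contribute (no feedthrough, x_0 = 0)
        word = tuple([modes[k]] + modes[k + 1:t] + [modes[t]])   # q_k * v_{k+1|t-1} * q_t
        y = y + S(word) @ np.atleast_1d(w[k][1])
    return y
```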
The motivation for introducing GCRs is that existence of a GCR is a necessary condition for realizability by DTLSSs. Moreover, if f is realizable by a DTLSS, then the Markov-parameters of f can be expressed as products of the matrices of its DTLSS realization. In order to formulate this result more precisely, we need the following notation.
Notation 3. Consider the collection of n × n matrices A_σ, σ ∈ Q. For any w ∈ Q^*, the n × n matrix A_w is defined as follows. If w = ǫ, then A_ǫ is the identity matrix. If w = σ_1 σ_2 · · · σ_k ∈ Q^*, σ_1, . . . , σ_k ∈ Q, k > 0, then

A_w = A_{σ_k} A_{σ_{k-1}} · · · A_{σ_1}.    (5)

Lemma 1. The map f is realized by the DTLSS Σ if and only if f has a GCR and for all v ∈ Q^*, q, q_0 ∈ Q,

S^f_j(q_0 v q) = C_q A_v B_{q_0} e_j,  j = 1, . . . , m.    (6)

Next, we define the concept of a Hankel-matrix. Similarly to the linear case, the entries of the Hankel-matrix are formed by the Markov-parameters. For the definition of the Hankel-matrix of f, we will use a lexicographic ordering on the set of sequences Q^*.

Remark 2 (Lexicographic ordering). Recall that Q = {1, . . . , D}. We define a lexicographic ordering ≺ on Q^* as follows. For any v, s ∈ Q^*, v ≺ s if either |v| < |s|, or 0 < |v| = |s|, v ≠ s and for some l ∈ {1, . . . , |s|}, v_l < s_l with the usual ordering of integers and v_i = s_i for i = 1, . . . , l − 1. Here v_i and s_i denote the ith letter of v and s, respectively. Note that ≺ is a complete ordering and Q^* = {v_1, v_2, . . .} with v_1 ≺ v_2 ≺ · · ·. Note that v_1 = ǫ and for all i ∈ N, q ∈ Q, v_i ≺ v_i q.
In order to simplify the definition of a Hankel-matrix, we introduce the notion of a combined Markov-parameter.
Definition 8 (Combined Markov-parameters). A combined Markov-parameter M^f(v) of f indexed by the word v ∈ Q^* is the following pD × Dm matrix:

M^f(v) = [ S^f(1v1)  · · ·  S^f(Dv1)
           S^f(1v2)  · · ·  S^f(Dv2)
              ⋮       · · ·     ⋮
           S^f(1vD)  · · ·  S^f(DvD) ].    (7)
Definition 9 (Hankel-matrix). Consider the lexicographic ordering ≺ of Q^* from Remark 2. Define the Hankel-matrix H_f of f as the following infinite matrix:

H_f = [ M^f(v_1 v_1)  M^f(v_2 v_1)  · · ·  M^f(v_k v_1)  · · ·
        M^f(v_1 v_2)  M^f(v_2 v_2)  · · ·  M^f(v_k v_2)  · · ·
        M^f(v_1 v_3)  M^f(v_2 v_3)  · · ·  M^f(v_k v_3)  · · ·
            ⋮             ⋮          · · ·      ⋮        · · · ],
i.e. the pD × mD block of H_f in block row i and block column j equals the combined Markov-parameter M^f(v_j v_i) of f. The rank of H_f, denoted by rank H_f, is the dimension of the linear span of its columns.

The main result on realization theory of DTLSSs can be stated as follows.

Theorem 1 ([20]). 1. The map f has a realization by a DTLSS if and only if f has a GCR and rank H_f < +∞. 2. A minimal DTLSS realization of f can be constructed from H_f, and any minimal DTLSS realization of f has dimension rank H_f. 3. A DTLSS Σ is a minimal realization of f if and only if Σ is span-reachable, observable and it is a realization of f. Any two DTLSSs which are minimal realizations of f are isomorphic (see [20] for the definition of isomorphism between DTLSSs).

Note that Theorem 1 shows that the knowledge of the Markov-parameters is necessary and sufficient for finding a state-space representation of f. In fact, similarly to the continuous-time case [21], we can even show that the knowledge of finitely many Markov-parameters is sufficient. This will be done by formulating a realization algorithm for DTLSSs, which computes a DTLSS realization of f based on finitely many Markov-parameters of f.
In order to present the realization algorithm, we need the following notation.

Notation 4. Consider the lexicographic ordering ≺ of Q^* and recall that Q^* = {v_1, v_2, . . .}, where v_1 ≺ v_2 ≺ · · ·. Denote by N(L) the number of sequences from Q^* of length at most L. It then follows that |v_i| ≤ L if and only if i ≤ N(L).
Definition 10 (H_{f,L,K} sub-matrices of H_f). For L, K ∈ N define the integers I_L = N(L)pD and J_K = N(K)mD. Denote by H_{f,L,K} the following upper-left I_L × J_K sub-matrix of H_f:

H_{f,L,K} = [ M^f(v_1 v_1)       M^f(v_2 v_1)       · · ·  M^f(v_{N(K)} v_1)
              M^f(v_1 v_2)       M^f(v_2 v_2)       · · ·  M^f(v_{N(K)} v_2)
                   ⋮                  ⋮             · · ·        ⋮
              M^f(v_1 v_{N(L)})  M^f(v_2 v_{N(L)})  · · ·  M^f(v_{N(K)} v_{N(L)}) ].

Notice that the entries of H_{f,L,K} are Markov-parameters indexed by words of length at most L + K, i.e. H_{f,L,K} is uniquely determined by {M^f(v_i)}_{i=1}^{N(L+K)}.
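The finite Hankel sub-matrices can be assembled mechanically from the combined Markov-parameters. The sketch below (our own helper code) enumerates Q^* in the lexicographic ordering of Remark 2 and stacks the blocks M^f(v_j v_i) as in Definition 10; `M` is assumed to be a callable returning the pD × mD combined Markov-parameter of a word of modes.

```python
import itertools
import numpy as np

def words_upto(D, L):
    """All words over Q = {1, ..., D} of length <= L, in the ordering of Remark 2."""
    words = [()]                          # v_1 is the empty word
    for length in range(1, L + 1):
        words += list(itertools.product(range(1, D + 1), repeat=length))
    return words                          # len(words) == N(L)

def hankel_submatrix(M, D, L, K):
    """Assemble H_{f,L,K}; block (i, j) equals M^f(v_j v_i) (Definition 10)."""
    rows, cols = words_upto(D, L), words_upto(D, K)
    return np.block([[M(vj + vi) for vj in cols] for vi in rows])
```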
The promised realization algorithm is Algorithm 1, which takes as input the matrix H_{f,N,N+1} and produces a DTLSS. Note that the knowledge of H_{f,N,N+1} is equivalent to the knowledge of the finite sequence {M^f(v_i)}_{i=1}^{N(2N+1)} of Markov-parameters.

Algorithm 1. Inputs: Hankel-matrix H_{f,N,N+1}. Output: DTLSS Σ_N.
1: Let n = rank H_{f,N,N+1}. Choose a tuple of integers (i_1, . . . , i_n) such that the columns of H_{f,N,N+1} indexed by i_1, . . . , i_n form a basis of Im H_{f,N,N+1}. Let O be the I_N × n matrix formed by these linearly independent columns, i.e. the rth column of O equals the i_r th column of H_{f,N,N+1}. Let R ∈ R^{n×J_{N+1}} be the matrix whose rth column is formed by the coordinates of the rth column of H_{f,N,N+1} with respect to the basis consisting of the columns i_1, . . . , i_n of H_{f,N,N+1}, for every r = 1, . . . , J_{N+1}. It then follows that H_{f,N,N+1} = OR and rank R = rank O = n.
2: Define R̄ ∈ R^{n×J_N} as the matrix formed by the first J_N columns of R.
3: For each q ∈ Q, let R_q ∈ R^{n×J_N} be such that for each i = 1, . . . , J_N, the ith column of R_q equals the r(i)th column of R. Here r(i) ∈ {1, . . . , J_{N+1}} is defined as follows. Consider the decomposition i = (r − 1)mD + z for some z = 1, . . . , mD and r = 1, . . . , N(N). Consider the word v_r q and notice that |v_r q| ≤ N + 1. Hence, v_r q = v_d for some d = 1, . . . , N(N + 1). Then define r(i) as r(i) = (d − 1)mD + z.
4: Construct Σ_N of the form (1) such that

B_1, . . . , B_D = the first mD columns of R,  C_1, . . . , C_D = the first pD rows of O,    (9)
∀q ∈ Q : A_q = R_q R̄^+,

where R̄^+ is the Moore–Penrose pseudoinverse of R̄.
5: Return Σ_N.

The correctness of Algorithm 1 is stated below.

Theorem 2. If rank H_{f,N,N} = rank H_f, then Algorithm 1 returns a minimal realization Σ_N of f. The condition rank H_{f,N,N} = rank H_f holds for a given N if there exists a DTLSS realization Σ of f such that dim Σ ≤ N + 1.
The proof of Theorem 2 is completely analogous to its continuous-time counterpart [21].
Theorem 2 implies that if f is realizable by a DTLSS, then a minimal DTLSS realization of f is computable from finitely many Markov-parameters, using Algorithm 1. In fact, if f is realizable by a DTLSS of dimension n, then the first N(2n − 1) Markov-parameters {M^f(v_i)}_{i=1}^{N(2n−1)} uniquely determine f.
The intuition behind Algorithm 1 is the following. The state-space of the DTLSS Σ_N returned by Algorithm 1 is an isomorphic copy of the space spanned by the columns of H_{f,N,N}. The isomorphism is determined by the matrix R. The columns of B_q, q ∈ Q, are formed by the columns (q − 1)m + 1, . . . , qm of the block-matrix [M^f(v_1 v_1)^T, . . . , M^f(v_1 v_{N(L)})^T]^T. The rows of C_q, q ∈ Q, are formed by the rows (q − 1)p + 1, . . . , pq of H_{f,N,N+1}. Finally, the matrix A_q, q ∈ Q, is the matrix of a shift-like operator, which maps a block-column (M^f(v_j v_i))_{i=1}^{N(L)} of H_{f,N,N} to the block-column (M^f(v_j q v_i))_{i=1}^{N(L)} of H_{f,N,N+1}.
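For illustration, here is a numerical sketch of Algorithm 1 (our own implementation, not the authors' code): the selection of linearly independent columns via column-pivoted QR and the use of pseudoinverses are implementation choices on our part, not prescribed by the algorithm itself.

```python
import numpy as np
from scipy.linalg import qr, pinv

def realization_algorithm(H, D, m, p, words_N, words_N1, tol=1e-9):
    """Sketch of Algorithm 1: compute (A_q, B_q, C_q) from the finite Hankel-matrix H_{f,N,N+1}.

    H        : the matrix H_{f,N,N+1}
    words_N  : words of length <= N   (block-column index set of H_{f,N,N}), as tuples
    words_N1 : words of length <= N+1 (block-column index set of H), as tuples
    Returns dicts A, B, C indexed by the modes 1..D.
    """
    n = np.linalg.matrix_rank(H, tol=tol)
    # Step 1: pick n independent columns (column-pivoted QR); O holds them,
    # R holds the coordinates of every column of H in that basis, so H = O @ R.
    _, _, piv = qr(H, pivoting=True)
    O = H[:, np.sort(piv[:n])]
    R = pinv(O) @ H
    R_bar = R[:, :len(words_N) * m * D]                 # Step 2: first J_N columns of R
    # Step 4 (part): B_1..B_D from the first mD columns of R, C_1..C_D from the first pD rows of O.
    B = {q: R[:, (q - 1) * m:q * m] for q in range(1, D + 1)}
    C = {q: O[(q - 1) * p:q * p, :] for q in range(1, D + 1)}
    # Steps 3-4: A_q = R_q @ pinv(R_bar), where the block of R_q for the word v_r is the
    # block of R indexed by the word v_r q.
    index_N1 = {w: d for d, w in enumerate(words_N1)}
    A = {}
    for q in range(1, D + 1):
        cols = []
        for w in words_N:
            d = index_N1[w + (q,)]
            cols.append(R[:, d * m * D:(d + 1) * m * D])
        A[q] = np.hstack(cols) @ pinv(R_bar)
    return A, B, C
```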
Main results of the paper
The main idea behind our definition of persistence of excitation is as follows. The measured time series is persistently exciting, if from this time series we can reconstruct the Markov-parameters of the underlying system. Note that by Theorem 2, it is enough to reconstruct finitely many Markov-parameters. This also means that our definition of persistence of excitation is also applicable to finite time series.
In order to present our main results, we will need some terminology.

Definition 11 (Output time-series). For any input-output map f and for any finite input sequence w ∈ U^+ we denote by O(f, w) the output time-series induced by f and w, i.e. if w is of the form (2), then O(f, w) = {y_t}_{t=0}^T, such that y_t = f((q_0, u_0) · · · (q_t, u_t)) for all t ≤ T.

Definition 12 (Persistence of excitation). The finite sequence w ∈ U^+ is persistently exciting for the input-output map f, if it is possible to determine the Markov-parameters of f from the data (w, O(f, w)).

Remark 3 (Interpretation). Theorem 2 and Algorithm 1 allow the following interpretation of persistence of excitation defined above. If w is persistently exciting, then the Markov-parameters of f can be computed from the response of f to the prefixes of w. In particular, if f admits a DTLSS realization of dimension at most n, then the Markov-parameters {M^f(v_i)}_{i=1}^{N(2n−1)} can be computed from the data (w, O(f, w)). The knowledge of {M^f(v_i)}_{i=1}^{N(2n−1)} is sufficient for computing a DTLSS realization of f. Hence, persistence of excitation of w for f means that Algorithm 1 can serve as an identification algorithm for computing a DTLSS realization of f from the time-series (w, O(f, w)). Note, however, that our definition does not depend on Algorithm 1. Indeed, if there is any algorithm which can correctly find a DTLSS realization of f from (w, O(f, w)), then according to our definition, w is persistently exciting. (In fact, we also propose a specific algorithm for the correctness of which persistence of excitation is sufficient, but we do not claim this is true for all identification algorithms.) Note that our definition of persistence of excitation involves only the inputs, but not the output response.
So far we have defined the persistence of excitation for finite sequences of inputs.
Next, we define the same notion for infinite sequences of inputs. To this end, we need the following notation.
Notation 5. We denote by U ω the set of infinite sequences of hybrid inputs. That is, any element w ∈ U ω can be interpreted as a time-series w = {(q t , u t )} ∞ t=0 . For each N ∈ N, denote by w N the sequence formed by the first N elements of w, i.e. w N = (q 0 , u 0 ) · · · (q N , u N ).
Definition 13 (Asymptotic persistence of excitation). An infinite sequence of inputs w ∈ U ω is called asymptotically persistently exciting for the input-output map f , if the following holds. For every sufficiently large N , we can compute from (w N , O(f, w N )) asymptotic estimates of the Markov-parameters of f . More precisely, for N ∈ N, we can compute
from (w N , O(f, w N )) some matrices {M f N (v)} v∈Q * such that lim N →∞ M f N (v) = M f (v) for all v ∈ Q * .
When clear from the context, we will use the term persistently exciting instead of asymptotically persistently exciting.
Remark 4 (Interpretation). The interpretation of asymptotic persistence of excitation is that asymptotically persistently exciting inputs allow us to estimate a DTLSS realization of f with arbitrary accuracy. Indeed, assume that w ∈ U^ω is asymptotically persistently exciting. Then for each N we can compute from the time-series (w_N, O(f, w_N)) an approximation {M^f_N(v)}_{v∈Q^*} of the Markov-parameters of f. Suppose that f is realizable by a DTLSS of dimension n and we know the indices (i_1, . . . , i_n) of those columns of H_{f,n−1,n} which form a basis of the column space of H_{f,n−1,n}. Let H^N_{f,n−1,n} be the matrix which is constructed in the same way as H_{f,n−1,n}, but with M^f_N(v) instead of the Markov-parameters M^f(v). Since M^f_N(v) converges to M^f(v) for all v ∈ Q^*, we get that each entry of H^N_{f,n−1,n} converges to the corresponding entry of H_{f,n−1,n}. Modify Algorithm 1 by fixing the choice of columns to (i_1, . . . , i_n) in the first step. It is easy to see that the modified algorithm represents a continuous map from the input data (finite Hankel-matrix) to the output data (matrices of a DTLSS). For sufficiently large N, the columns of H^N_{f,n−1,n} indexed by (i_1, . . . , i_n) also represent a basis of the column space of H^N_{f,n−1,n}. If we apply the modified Algorithm 1 to the sequence of matrices H^N_{f,n−1,n}, we obtain a sequence of DTLSSs Σ_{n,N}, and the parameters of Σ_{n,N} converge to the parameters of the DTLSS Σ which we would obtain from Algorithm 1 if we applied it to H_{f,n−1,n}. In particular, by choosing a sufficiently large N, the parameters of Σ_{n,N} are sufficiently close to those of Σ.
We will show that for every reversible DTLSS there exists some input which is persistently exciting. In addition, we present a class of inputs which are persistently exciting for any input-output map f realizable by a stable DTLSS.
Persistently exciting input for specific systems
In this section we present results which state that for any input-output map f which is realizable by a DTLSS, there exists a persistently exciting finite input.
Note that from (4) it follows that the Markov-parameters of f can be obtained from finitely many input-output data. However, the application of (4) implies evaluating the response of the system for different inputs, while started from a fixed initial state. In order to simulate this by evaluating the response of the system to one single input (which is then necessarily persistently exciting), one has to provide means to reset the system to its initial state. In order to be able to do so, we restrict attention to reversible DTLSSs.
Definition 14. A DTLSS Σ of the form (1) is reversible, if for every discrete mode q ∈ Q, the matrix A_q is invertible. Reversible DTLSSs arise naturally when sampling continuous-time systems.

Theorem 3. Consider an input-output map f. Assume that f has a realization by a reversible DTLSS. Then there exists an input w ∈ U^+ such that w is persistently exciting for f.

Sketch of the proof. The main idea behind the proof of Theorem 3 is as follows. If f admits a DTLSS realization of dimension n, then the finite sequence {M^f(v_i)}_{i=1}^{N(2n−1)} of Markov-parameters determines all the Markov-parameters of f uniquely. Hence, in order for a finite input w to be persistently exciting for f, it is sufficient that {M^f(v_i)}_{i=1}^{N(2n−1)} can be computed from the response (w, O(f, w)).
Note that (4) implies that {M^f(v_i)}_{i=1}^{N(2n−1)} can be computed from the responses of f to finitely many inputs. More precisely, {M^f(v_i)}_{i=1}^{N(2n−1)} can be computed from {f(s) | s ∈ S}, where

S = {(q_0, e_j)(σ_1, 0) . . . (σ_{|v_i|}, 0)(q, 0) ∈ U^+ | q_0, q ∈ Q, v_i = σ_1 . . . σ_{|v_i|}, j = 1, . . . , m, i = 1, . . . , N(2n − 1)}.
Hence, if for each s ∈ S there exists a prefix p of w such that f (s) = f (p), then this w will be persistently exciting.
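Enumerating the experiment set S is straightforward; a sketch (ours) is given below, where `words` is the list of words v_1, . . . , v_{N(2n−1)}, e.g. as produced by the `words_upto` helper from the Hankel-matrix sketch above.

```python
import numpy as np

def experiment_set(words, D, m):
    """Enumerate the inputs in S for the given list of mode words v_1, ..., v_{N(2n-1)}."""
    S = []
    for v in words:
        for q0 in range(1, D + 1):
            for q in range(1, D + 1):
                for j in range(m):
                    e_j = np.zeros(m); e_j[j] = 1.0
                    # (q0, e_j)(sigma_1, 0) ... (sigma_|v|, 0)(q, 0)
                    s = [(q0, e_j)] + [(sigma, np.zeros(m)) for sigma in v] + [(q, np.zeros(m))]
                    S.append(s)
    return S
```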
One way to construct such a w is to construct for each s ∈ S an input s −1 ∈ U + such that ∀v ∈ U + :
f (ss −1 v) = f (v).
That is, the input s^{-1} neutralizes the effect of the input s. We defer the construction of the input s^{-1} to the end of the proof. Assume for the moment that such inputs s^{-1} exist. Let S = {s_1, . . . , s_d} be an enumeration of S. Then it is easy to see that f(s_1 s_1^{-1} s_2) = f(s_2), f(s_1 s_1^{-1} s_2 s_2^{-1} s_3) = f(s_3), etc. Hence, if we define

w = s_1 s_1^{-1} · · · s_{d−1} s_{d−1}^{-1} s_d,

then each f(s), s ∈ S, can be obtained as a response of f to a suitable prefix of w. Hence, w is persistently exciting.
It is left to show that s −1 exists. Consider a reversible realization Σ of f . Then the controllable set and reachable set of Σ coincide by [7]. Hence, from any reachable state x of Σ, there exists an input w(x) such that w(x) drives Σ from x to zero, i.e. x Σ (x, w(x)) = 0.
For each s ∈ S, let x(s) = x Σ (0, s) and define s −1 = w(x(s)) as the input which drives
x(s) back to the initial zero state.
It is easy to see that Theorem 3 can be extended to any input-output map which admits a controllable DTLSS realization. However, it is not clear if every input-output map which is realizable by a DTLSS is also realizable by a controllable DTLSS. Note that the construction of the persistently exciting w from Theorem 3 requires the knowledge of a DTLSS realization of f . Below we present a subclass of input-output maps, for which the knowledge of a state-space representation is not required to construct a persistently exciting input.
Definition 15. Fix a map (·)^{-1} : U ∋ α ↦ α^{-1} ∈ U. An input-output map f is said to be reversible with respect to the map (·)^{-1}, if for all α ∈ U, s, w ∈ U^*, |sw| > 0, f(s α α^{-1} w) = f(sw).

Intuitively, f is reversible with respect to (·)^{-1} if the effect of any input α = (q, u) can be neutralized by the input α^{-1}. Such a property is not that uncommon; think for example of turning a valve on and off. For example, if f has a realization by a DTLSS Σ of the form (1), and Q = {1, . . . , 2K} such that for each q ∈ {1, . . . , K}, A_q = A_{q+K}^{-1} and B_q = −A_q B_{q+K}, then f is reversible and (q, u)^{-1} = (q + K, −u). From the proof of Theorem 3, we obtain the following corollary.
Theorem 4. If f is reversible with respect to (·)^{-1}, then a persistently exciting input sequence w can be constructed for f. The construction does not require the knowledge of a DTLSS state-space realization of f. If the inputs α^{-1} from Definition 15 are computable from α, then the construction of w is effective.

Proof of Theorem 4. The proof differs from that of Theorem 3 only in the definition of s^{-1} for each s ∈ S. More precisely, if f is reversible, then for each s = (q_0, u_0) · · · (q_t, u_t) ∈ S define s^{-1} = (q_t, u_t)^{-1} (q_{t−1}, u_{t−1})^{-1} · · · (q_0, u_0)^{-1}.
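For reversible input-output maps, the persistently exciting input of Theorem 4 can be written down explicitly. The sketch below (our own code) assumes a user-supplied map `inv` implementing (·)^{-1} on U, and concatenates the experiments in S separated by their neutralizing inputs, as in the proofs of Theorems 3 and 4.

```python
def persistently_exciting_input(S, inv):
    """Concatenate s_1 s_1^{-1} s_2 s_2^{-1} ... s_d as in the proofs of Theorems 3 and 4.

    S   : list of hybrid input sequences (each a list of (q, u) pairs)
    inv : callable mapping a hybrid input (q, u) to its neutralizing input (q, u)^{-1}
    """
    w = []
    for k, s in enumerate(S):
        w += list(s)
        if k < len(S) - 1:                                   # the last experiment needs no reset
            w += [inv(alpha) for alpha in reversed(s)]        # s^{-1} = (q_t,u_t)^{-1} ... (q_0,u_0)^{-1}
    return w
```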
Universal persistently exciting inputs
Next, we discuss classes of inputs which are persistently exciting for all input-output maps realizable by DTLSSs.
Definition 16 (Persistence of excitation condition). An infinite input w = {(q_t, u_t)}_{t=0}^∞ ∈ U^ω satisfies the PE condition, if there exists a strictly positive definite m × m matrix R and a collection of strictly positive numbers {π_v}_{v∈Q^+} such that for any positive integer j ∈ N and any word v ∈ Q^+,

lim_{N→∞} (1/N) Σ_{t=0}^{N} u_{t+j} u_t^T χ(q_t q_{t+1} · · · q_{t+|v|−1} = v) = 0,
lim_{N→∞} (1/N) Σ_{t=j}^{N} u_{t−j} u_t^T χ(q_{t−j} q_{t−j+1} · · · q_{t−j+|v|−1} = v) = 0,
lim_{N→∞} (1/N) Σ_{t=0}^{N} u_t u_t^T χ(q_t · · · q_{t+|v|−1} = v) = π_v R,

where χ is the indicator function, i.e. χ(A) = 1 if A holds and χ(A) = 0 otherwise.
Remark 5 (PE condition implies rich switching).
Note that if w ∈ U ω satisfies the conditions of Definition 16, then
lim_{N→∞} (1/N) Σ_{t=0}^{N} χ(q_t · · · q_{t+|v|−1} = v) = π_v > 0
for each v ∈ Q + . This implies that any sequence of discrete modes occurs in the switching signal. Hence, our condition for persistence of excitation implies that the switching signal should be rich enough. This is consistent with many of the existing definitions of persistence of excitation for hybrid systems. The requirement that π v > 0 for all v ∈ Q * is quite a strong one. At the end of this section we will discuss possible relaxations of this requirement.
Remark 6 (Relationship with stochastic processes). Fix a probability space (Ω, F, P ) and consider ergodic discrete-time stochastic processes u t : Ω → R m and q t : Ω → Q with values in R m and Q respectively. In addition, assume the following.
• The processes u_t and q_t are independent (i.e. the σ-algebras generated by {u_t}_{t=0}^∞ and by {q_t}_{t=0}^∞ are independent).
• The stochastic process u t is a colored noise, i.e. it is zero-mean, u t and u s are uncorrelated and E[u t u T t ] = R > 0, with E[·] denoting the expectation operator.
• For each v ∈ Q + , π v = P (q t · · · q t+|v|−1 = v) > 0.
It then follows that almost all sample paths of u t , q t satisfy the PE condition of Definition 16. That is, there exists a set A ∈ F, such that P (A) = 0 and for all ω ∈ Ω \ A, the sequence w = {(q t , u t ) = (q t (ω), u t (ω)} ∞ t=0 satisfies the PE condition.
Remark 7.
If u_t is a white-noise Gaussian process and if the variables q_t are uniformly distributed over Q (i.e. P(q_t = q) = 1/|Q|) and are independent from each other and from {u_s}_{s=0}^∞, then u_t and q_t satisfy the conditions of Remark 6 and hence almost any sample path of u_t and q_t satisfies the PE condition of Definition 16.
This special case also provides a simple practical way to generate inputs which satisfy the PE conditions.
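Following Remark 7, inputs satisfying the PE condition can be generated by sampling; a minimal sketch (ours) is given below.

```python
import numpy as np

def generate_pe_input(T, D, m, seed=0):
    """Sample a hybrid input sequence as in Remark 7: i.i.d. zero-mean Gaussian continuous
    inputs and an i.i.d. switching signal uniform over Q = {1, ..., D}."""
    rng = np.random.default_rng(seed)
    modes = rng.integers(1, D + 1, size=T)        # P(q_t = q) = 1/|Q|
    inputs = rng.standard_normal((T, m))          # white Gaussian noise, E[u_t u_t^T] = I
    return [(int(q), u) for q, u in zip(modes, inputs)]
```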
We will show that input sequences which satisfy the conditions of Definition 16 are asymptotically persistently exciting for a large class of input-output maps. The main idea behind the theorem is as follows. Consider a DTLSS Σ which is a realization of f, and suppose we feed a stochastic input {q_t, u_t} into Σ. Then the state x_t and the output response y_t of Σ will also be stochastic processes. Suppose that {q_t, u_t} are stochastic processes which satisfy the conditions of Remark 6. It is easy to see that

y_t = Σ_{k=0}^{t−1} C_{q_t} A_{q_{t−1}} · · · A_{q_{k+1}} B_{q_k} u_k,
and hence for all r, q ∈ Q, v ∈ Q * , |rvq| = t + 1,
E[y_t u_0^T χ(q_0 · · · q_t = rvq)] = Σ_{k=0}^{t−1} C_q A_v B_r E[u_k u_0^T χ(q_0 · · · q_t = rvq)] = C_q A_v B_r R π_{rvq} = S^f(rvq) R π_{rvq}.    (11)
Hence, if we know the expectations E[y_t u_0^T χ(q_0 · · · q_t = rvq)] for all r, q ∈ Q, v ∈ Q^*, |rvq| = t + 1, t > 0, then we can find all the Markov-parameters of f by the following formula:

S^f(rvq) = E[y_t u_0^T χ(q_0 · · · q_t = rvq)] R^{-1} (1/π_{rvq}).
Hence, the problem of estimating the Markov-parameters reduces to estimating the expectations

E[y_t u_0^T χ(q_0 · · · q_t = rvq)].    (12)
For practical purposes, the expectations in (12) have to be estimated from a sample-path of y_t, u_t and q_t. The most natural way to accomplish this is to use the formula

lim_{N→∞} (1/N) Σ_{i=0}^{N} y_{i+t} u_i^T χ(q_i · · · q_{i+t} = rvq),    (13)
where y t , u t , q t denote the value at time t of a sample-path of y t , u t and q t respectively.
Note that y t is in fact the output of Σ at time t, if the input {u i } t i=0 and the switching signal {q i } t i=0 are fed to the system.
The problem with estimating (12) by (13) is that the limit (13) may fail to exist or to converge to (12).
A particular case when (13) converges to (12) is when the process (y t , u t , q t ) is ergodic. In that case, we can choose a sample-path (y t , u t , q t ) of (y t , u t , q t ) for which the limit in (13) equals the expectation (12) ; in fact 'almost all' sample paths will have this
property. This means that we can choose a suitable deterministic input sequence {u t } ∞ t=0 and a switching signal {q t } ∞ t=0 , such that for the resulting output {y t } ∞ t=0 , the limit (13) equals the expectation (12). That is, in that case the input w = (q 0 , u 0 ) · · · (q t , u t ) · · · is asymptotically persistently exciting. However, proving ergodicity of y t is not easy. In addition, even if y t is ergodic, the particular choice of the deterministic input w for which (13) equals (12) might depend on the DTLSS itself.
For this reason, instead of using the concepts of ergodicity directly, we just show that for the input sequences w which satisfy the conditions of Definition 16, the corresponding output {y t } ∞ t=0 has the property that the limit (13) exists and it equals S f (rvq)Rπ rvq , for any input-output map f which is realizable by a l 1 -stable DTLSS. This strategy allows us to use elementary techniques, while not compromising the practical relevance of the result.
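A sketch of the resulting estimator (our own code, mirroring the quantity S_N(rvq) used in the proof of Theorem 5): it averages y_{t+|v|+1} u_t^T over the time instants at which the mode word rvq occurs, and normalizes by an estimate of R and by the empirical frequency of the word.

```python
import numpy as np

def estimate_markov_parameter(ys, us, qs, word, R):
    """Empirical estimate of S^f(word) from a single input-output trajectory.

    ys, us, qs : lists/arrays of outputs (R^p), inputs (R^m) and modes, of equal length N
    word       : tuple of modes (r, ..., q) with at least two letters
    R          : (estimated) input covariance E[u_t u_t^T], assumed invertible
    """
    N, L = len(us), len(word)
    acc = np.zeros((ys[0].shape[0], us[0].shape[0]))
    hits = 0
    for t in range(N - L):
        if tuple(qs[t:t + L]) == tuple(word):        # chi(t, word)
            acc += np.outer(ys[t + L - 1], us[t])    # y_{t+|v|+1} u_t^T, with |v| = L - 2
            hits += 1
    if hits == 0:
        raise ValueError("the mode word never occurs in the trajectory")
    pi_hat = hits / (N - L)                          # empirical frequency of the word
    return (acc / (N - L)) @ np.linalg.inv(R) / pi_hat
```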
In order to present the main result of this section, we have to define the notion of l_1-stability of DTLSSs.

Definition 17 (Stability of DTLSSs). A DTLSS Σ of the form (1) is called l_1-stable, if for every x ∈ R^n, the series Σ_{v∈Q^*} ||A_v x||_2 is convergent.
Remark 8 (Sufficient condition for stability). If for all q ∈ Q, ||A q || 2 < 1 |Q| , where ||A q || 2 is the matrix norm of A q induced by the standard Euclidean norm, then Σ is l 1 -stable.
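The sufficient condition of Remark 8 is easy to check numerically; a small sketch (ours):

```python
import numpy as np

def is_l1_stable_sufficient(A):
    """Check the sufficient condition of Remark 8: ||A_q||_2 < 1/|Q| for every mode q.

    A is a dict mapping each mode q to the matrix A_q.  Note that this condition is
    only sufficient, not necessary, for l_1-stability.
    """
    D = len(A)
    return all(np.linalg.norm(Aq, 2) < 1.0 / D for Aq in A.values())
```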
Remark 9 (Asymptotic stability). If Σ is l 1 -stable, then it is asymptotically stable, in the sense that if s i ∈ Q * , i > 0 is a sequence of words such that lim i→∞ |s i | = +∞, then
lim i→∞ A s i x = 0 for all x ∈ R n .
Intuitively it is clear why we have to restrict attention to stable systems. Recall that (4) allows us to compute the Markov-parameters of f from the responses of f to finitely many inputs. In order to obtain the response of f to several inputs from the response of f to one input, one has to find means to suppress the contribution of the current state of the system to future inputs. In §4.1 this was done by feeding inputs which drive the system back to the initial state. Unfortunately, the choice of such inputs depended on the system itself. By assuming stability, we can make sure that the effect of the past state will asymptotically diminish in time. Hence, by waiting long enough, we can approximately recover the response of f to any input.
Another intuitive explanation for assuming stability is that it is necessary for the stationarity, and hence ergodicity, of the output and state processes y t , x t .
Equipped with the definitions above, we can finally state the main result of the section.
Theorem 5 (Main result). If w satisfies the PE conditions of Definition 16, then w is asymptotically persistently exciting for any input-output map f which admits a l 1 -stable DTLSS realization.
The theorem above together with Remark 7 imply that white noise input and a binary noise switching signal are asymptotically persistently exciting. The proof of Theorem 5 relies on the following technical result.
Theorem 6. Assume that Σ is a l 1 -stable DTLSS of the form (1), and assume that w satisfies the PE conditions. Let {y t } ∞ t=0 and {x t } ∞ t=0 be the output and state response of Σ to w, i.e. y t = y Σ (w t ) and x t = x Σ (0, w t ). Then for all v, β ∈ Q * , r, q ∈ Q
π_{rvqβ} A_v B_r R = lim_{N→∞} (1/N) Σ_{t=0}^{N} x_{t+|v|+1} u_t^T χ(t, rvqβ),    (14)
π_{rvqβ} C_q A_v B_r R = lim_{N→∞} (1/N) Σ_{t=0}^{N} y_{t+|v|+1} u_t^T χ(t, rvqβ).    (15)
Here we used the following notation: for all s ∈ Q + ,
χ(t, s) = 1 if s = q_t q_{t+1} · · · q_{t+|s|−1}, and χ(t, s) = 0 otherwise.
Informally, Theorem 6 implies that if f is realizable by a l 1 -stable DTLSS, then the limit (13) equals (12). The proof of Theorem 6 can be found in Appendix A.
Proof of Theorem 5. For each t, denote by y t the response of f to the first t elements of w,
i.e. y_t = f((q_0, u_0) · · · (q_t, u_t)). For each integer N ∈ N and for each word v ∈ Q^*, define the matrix S_N(rvq) as

S_N(rvq) = ((1/N) Σ_{t=0}^{N} y_{t+|v|+1} u_t^T χ(t, rvq)) R^{-1} (1/π_{rvq}),

and let M_N(v) be the matrix obtained from the S_N(rvq), r, q ∈ Q, as in (7). By Theorem 6, lim_{N→∞} S_N(rvq) = S^f(rvq) and hence lim_{N→∞} M_N(v) = M^f(v). Hence, w is indeed asymptotically persistently exciting.

Remark 10 (Relaxation of the PE condition). Assume that we restrict attention to input-output maps which are realizable by an l_1-stable DTLSS of dimension at most n, and let f be such an input-output map. In this case, one can replace the condition of Definition 16 that π_v > 0 for all v ∈ Q^+ by the condition that π_s > 0 for all |s| ≤ 2n + 1, and still obtain asymptotically persistently exciting inputs for f. Indeed, consider now any w ∈ U^ω which satisfies Definition 16 with the exception that π_v > 0 is required only for |v| ≤ 2n + 1. Then Theorem 6 remains valid for this case (the proof remains literally the same) and from the proof of Theorem 5 we get that for all i = 1, . . . , N(2n − 1),

S^f(r v_i q) = lim_{N→∞} ((1/N) Σ_{t=0}^{N} y_{t+|v_i|+1} u_t^T χ(t, r v_i q)) R^{-1} (1/π_{r v_i q}).

Hence, {M^f(v_i)}_{i=1}^{N(2n−1)} can asymptotically be estimated from (w_N, O(f, w_N)). Since the modified Algorithm 1 from Remark 4 determines a continuous map from {M^f(v_i)}_{i=1}^{N(2n−1)} to the other Markov-parameters of f, w is asymptotically persistently exciting for f.
Conclusions
We defined persistence of excitation for input signals of linear switched systems. We showed existence of persistently exciting input sequences and we identified several classes of input signals which are persistently exciting.
Future work includes finding less restrictive conditions for persistence of excitation and extending the obtained results to other classes of hybrid systems.
A Technical proofs
The proof of Theorem 6 relies on the following result.
Lemma 2. With the notation and assumptions of Theorem 6, for all v ∈ Q + ,
lim_{N→∞} (1/N) Σ_{t=0}^{N} x_t u_t^T χ(t, v) = 0.
The intuition behind Lemma 2 is as follows. Each x t is a linear combination of inputs u 0 , . . . , u t−1 . Hence, 1 N N t=0 x t u T t can be expressed as linear combination of terms 1 N N t=k u t−k u T t χ(t, s) for some s ∈ Q * , k = 1, . . . , N . Since each such term converges to 0 as N → ∞, intuitively their linear combination should converge to 0 as well. Unfortunately, the number of summands of the above increases with N . In order to deal with this difficulty a technique similar to the M -test for double series has to be used. The assumption that Σ is l 1 -stable is required for this technique to work.
Proof of Theorem 6. We start with the proof of (14). The proof goes by induction on the length of v.
If v = ǫ, then

(1/N) Σ_{t=0}^{N} x_{t+1} u_t^T χ(t, rβ) = (1/N) Σ_{t=0}^{N} (A_{q_t} x_t + B_{q_t} u_t) u_t^T χ(t, rβ)
  = (1/N) Σ_{t=0}^{N} A_{q_t} x_t u_t^T χ(t, rβ) + (1/N) Σ_{t=0}^{N} B_{q_t} u_t u_t^T χ(t, rβ).    (16)

Notice that A_{q_t} x_t u_t^T χ(t, rβ) = A_r x_t u_t^T χ(t, rβ) and B_{q_t} u_t u_t^T χ(t, rβ) = B_r u_t u_t^T χ(t, rβ). Hence,

(1/N) Σ_{t=0}^{N} A_{q_t} x_t u_t^T χ(t, rβ) + (1/N) Σ_{t=0}^{N} B_{q_t} u_t u_t^T χ(t, rβ)
  = A_r ((1/N) Σ_{t=0}^{N} x_t u_t^T χ(t, rβ)) + B_r ((1/N) Σ_{t=0}^{N} u_t u_t^T χ(t, rβ)).    (17)
From the assumptions on w it follows that
lim_{N→∞} (1/N) Σ_{t=0}^{N} u_t u_t^T χ(t, rβ) = R π_{rβ}.
Hence, from the PE conditions and Lemma 2 we get that
lim_{N→∞} (1/N) Σ_{t=0}^{N} x_{t+1} u_t^T χ(t, rβ) = A_r (lim_{N→∞} (1/N) Σ_{t=0}^{N} x_t u_t^T χ(t, rβ)) + B_r (lim_{N→∞} (1/N) Σ_{t=0}^{N} u_t u_t^T χ(t, rβ)) = A_r · 0 + B_r R π_{rβ} = π_{rβ} B_r R,
i.e. (14) holds.
Assume that (14) holds for all words of length at most L, and assume that v = wq, |w| = L for some w ∈ Q^* and q ∈ Q. Then by the induction hypothesis and the assumptions on w,

lim_{N→∞} (1/N) Σ_{t=0}^{N} x_{t+L+2} u_t^T χ(t, rwqβ) = lim_{N→∞} (1/N) Σ_{t=0}^{N} A_q x_{t+L+1} u_t^T χ(t, rwqβ) + lim_{N→∞} (1/N) Σ_{t=0}^{N} B_q u_{t+L+1} u_t^T χ(t, rwqβ) = A_q A_w B_r R π_{rwqβ} + B_q · 0 = A_{wq} B_r R π_{rwqβ}.    (18)
Finally, we prove (15). Notice that

y_{t+|v|+2} u_t^T χ(t, rvqβ) = C_q x_{t+|v|+2} u_t^T χ(t, rvqβ),

and hence by applying (14),

lim_{N→∞} (1/N) Σ_{t=0}^{N} y_{t+|v|+2} u_t^T χ(t, rvqβ) = C_q lim_{N→∞} (1/N) Σ_{t=0}^{N} x_{t+|v|+2} u_t^T χ(t, rvqβ) = C_q A_v B_r R π_{rvqβ}.
Proof of Lemma 2. Notice that
Σ_{t=1}^{N} x_t u_t^T χ(t, v) = Σ_{t=1}^{N} Σ_{j=1}^{t−1} A_{q_{t−1}} · · · A_{q_j} B_{q_{j−1}} u_{j−1} u_t^T χ(t, v)
  = Σ_{k=1}^{N−1} ( Σ_{t=k}^{N} A_{q_{t−1}} · · · A_{q_{t−k+1}} B_{q_{t−k}} u_{t−k} u_t^T χ(t, v) )
  = Σ_{r∈Q} Σ_{k=0}^{N−1} Σ_{|s|=k} A_s B_r Σ_{t=k+1}^{N} u_{t−k−1} u_t^T χ(t − k − 1, rsv)
  = Σ_{i=1}^{N(N)} Σ_{r∈Q} A_{v_i} B_r Σ_{t=|v_i|+1}^{N} u_{t−|v_i|−1} u_t^T χ(t − |v_i| − 1, r v_i v).

In the last step we used the lexicographic ordering of Q^* from Remark 2. It then follows that

(1/N) Σ_{t=1}^{N} x_t u_t^T χ(t, v) = Σ_{r∈Q} Σ_{i=1}^{N(N)} A_{v_i} B_r (1/N) Σ_{t=|v_i|+1}^{N} u_{t−|v_i|−1} u_t^T χ(t − |v_i| − 1, r v_i v).

Define

b^r_{i,N} = (1/N) Σ_{t=|v_i|+1}^{N} u_{t−|v_i|−1} u_t^T χ(t − |v_i| − 1, r v_i v),  a^r_{i,N} = A_{v_i} B_r b^r_{i,N}.
Then the statement of the lemma can be shown by showing that for all r ∈ Q,

lim_{N→∞} Σ_{i=1}^{N(N)} a^r_{i,N} = 0.

Note that by the PE conditions, for each r ∈ Q and i = 1, 2, . . ., lim_{N→∞} b^r_{i,N} = 0 and hence lim_{N→∞} a^r_{i,N} = 0. Moreover, for fixed N and i, we can get the following estimate: ||a^r_{i,N}||_2 ≤ ||A_{v_i} B_r||_2 ||b^r_{i,N}||_2.
If we can show that ||b r v i ,N || 2 is bounded by a number K, then we get that
||a r i,N || 2 ≤ ||A v i B r || 2 K.
The latter inequality is already sufficient to finish the proof. Indeed, let D^r_i = ||A_{v_i} B_r||_2 K and notice from the l_1-stability assumption on the realization Σ that

Σ_{i=1}^{∞} D^r_i = K Σ_{v∈Q^*} ||A_v B_r||_2

is convergent. Hence, we get that for every ǫ > 0 there exists an I_ǫ such that Σ_{i=I_ǫ+1}^{∞} D^r_i < ǫ/2. For every N > I_ǫ,

|| Σ_{i=1}^{N(N)} a^r_{i,N} ||_2 = || Σ_{i=1}^{I_ǫ} a^r_{i,N} + Σ_{i=I_ǫ+1}^{N(N)} a^r_{i,N} ||_2 ≤ Σ_{i=1}^{I_ǫ} ||a^r_{i,N}||_2 + Σ_{i=I_ǫ+1}^{N(N)} D^r_i < Σ_{i=1}^{I_ǫ} ||a^r_{i,N}||_2 + ǫ/2.
Since lim_{N→∞} a^r_{i,N} = 0, there exists N_ǫ ∈ N such that for all N > N_ǫ, ||a^r_{i,N}||_2 < ǫ/(2I_ǫ). Define N̄_ǫ to be an integer such that N̄_ǫ > N_ǫ and N(N̄_ǫ) > I_ǫ. Then for every N > N̄_ǫ, N(N) ≥ N(N̄_ǫ) > I_ǫ and

|| Σ_{i=1}^{N(N)} a^r_{i,N} ||_2 ≤ Σ_{i=1}^{I_ǫ} ||a^r_{i,N}||_2 + ǫ/2 < I_ǫ · ǫ/(2I_ǫ) + ǫ/2 = ǫ/2 + ǫ/2 = ǫ.

In other words, lim_{N→∞} Σ_{i=1}^{N(N)} a^r_{i,N} = 0. It is left to show that ||b^r_{i,N}||_2 ≤ K for some K > 0 and for all i = 1, 2, . . ., r ∈ Q. Indeed,

||b^r_{i,N}||_2 ≤ (1/N) || Σ_{t=|v_i|+1}^{N} u_{t−|v_i|−1} u_t^T χ(t − |v_i| − 1, r v_i v) ||_2
  ≤ (1/N) || Σ_{t=|v_i|+1}^{N} u_{t−|v_i|−1} u_t^T χ(t − |v_i| − 1, r v_i v) ||_F
  = ( Σ_{i,j=1}^{m} (1/N^2) ( Σ_{t=|v_i|+1}^{N} (u_{t−|v_i|−1})_i χ(t − |v_i| − 1, r v_i v) (u_t)_j )^2 )^{1/2},    (19)
where ||.|| F denotes the matrix Frobenius-norm, and ||.|| 2 denotes the matrix norm induced by the Euclidean norm. The application of the Cauchy-Schwartz inequality to
( Σ_{t=|v_i|+1}^{N} (u_{t−|v_i|−1})_i χ(t − |v_i| − 1, r v_i v) (u_t)_j )^2 leads to

( Σ_{t=|v_i|+1}^{N} (u_{t−|v_i|−1})_i χ(t − |v_i| − 1, r v_i v) (u_t)_j )^2 ≤ ( Σ_{t=|v_i|+1}^{N} (u_{t−|v_i|−1})^2_i χ(t − |v_i| − 1, r v_i v) ) ( Σ_{t=|v_i|+1}^{N} (u_t)^2_j ).    (20)
Notice that (u t−|v i |−1 ) 2 i χ(t − |v i | − 1, rv i v) ≤ (u t−|v i |−1 ) 2 i , since χ(t − |v i | − 1, rv i v) ∈ [0, 1].
Hence,

Σ_{t=|v_i|+1}^{N} (u_{t−|v_i|−1})^2_i χ(t − |v_i| − 1, r v_i v) ≤ Σ_{t=|v_i|+1}^{N} (u_{t−|v_i|−1})^2_i ≤ Σ_{t=0}^{N} (u_t)^2_i.
Similarly,
Σ_{t=|v_i|+1}^{N} (u_t)^2_j ≤ Σ_{t=0}^{N} (u_t)^2_j.
Combining these remarks with (20), we obtain
(1/N^2) ( Σ_{t=|v_i|+1}^{N} (u_{t−|v_i|−1})_i χ(t − |v_i| − 1, r v_i v) (u_t)_j )^2 ≤ ( (1/N) Σ_{t=0}^{N} (u_t)^2_i ) ( (1/N) Σ_{t=0}^{N} (u_t)^2_j ).    (21)
Notice that lim_{N→∞} (1/N) Σ_{t=0}^{N} (u_t)^2_i = R_{ii} and hence (1/N) Σ_{t=0}^{N} (u_t)^2_i is bounded from above by some positive number K_i. Using this fact and by substituting (21) into (19), we obtain ||b^r_{i,N}||_2 ≤ ( Σ_{i,j=1}^{m} K_i K_j )^{1/2}. Hence, if we set K = ( Σ_{i,j=1}^{m} K_i K_j )^{1/2}, then ||b^r_{i,N}||_2 ≤ K, which is what had to be shown.
References

[1] L. Bako. Identification of switched linear systems via sparse optimization. Automatica, doi:10.1016/j.automatica.2011.01.036, 2011.
[2] L. Bako, G. Mercère, and S. Lecoeuche. Online structured subspace identification with application to switched linear systems. International Journal of Control, 82:1496-1515, 2009.
[3] L. Bako, G. Mercère, R. Vidal, and S. Lecoeuche. Identification of switched linear state space models without minimum dwell time. In IFAC Symposium on System Identification, Saint Malo, France, 2009.
[4] S. Eilenberg. Automata, Languages and Machines. Academic Press, New York, London, 1974.
[5] G. Ferrari-Trecate, M. Muselli, D. Liberati, and M. Morari. A clustering technique for the identification of piecewise-affine systems. Automatica, 39:205-217, 2003.
[6] E. B. Fox. Bayesian Nonparametric Learning of Complex Dynamical Phenomena. PhD thesis, MIT, Cambridge, MA, 2009.
[7] S. S. Ge, Z. Sun, and T. H. Lee. Reachability and controllability of switched linear discrete-time systems. IEEE Transactions on Automatic Control, 46(9):1437-1441, 2001.
[8] F. Gécseg and I. Peák. Algebraic Theory of Automata. Akadémiai Kiadó, Budapest, 1972.
[9] Y. Hashambhoy and R. Vidal. Recursive identification of switched ARX models with unknown number of models and unknown orders. In IEEE Conference on Decision and Control, 2005.
[10] A. Hiskens. Identifiability of hybrid system models. In Proceedings of the IEEE International Conference on Control Applications, Anchorage, AK, 2000.
[11] A. Juloski. Observer Design and Identification Methods for Hybrid Systems: Theory and Experiments. PhD thesis, Eindhoven University of Technology, 2004.
[12] A. Juloski, W. P. M. H. Heemels, G. Ferrari-Trecate, R. Vidal, S. Paoletti, and J. Niessen. Comparison of four procedures for the identification of hybrid systems. In Hybrid Systems: Computation and Control, volume 3414 of LNCS, pages 354-369. Springer-Verlag, Berlin, 2005.
[13] A. Juloski, S. Weiland, and M. Heemels. A Bayesian approach to identification of hybrid systems. In Proceedings of the 43rd IEEE Conference on Decision and Control, 2004.
[14] L. Ljung. System Identification: Theory for the User (2nd ed.). PTR Prentice Hall, Upper Saddle River, USA, 1999.
[15] Y. Ma and R. Vidal. A closed form solution to the identification of hybrid ARX models via the identification of algebraic varieties. In Hybrid Systems: Computation and Control, pages 449-465, 2005.
[16] Y. Ma and R. Vidal. Identification of deterministic switched ARX systems via identification of algebraic varieties. In Hybrid Systems: Computation and Control, volume 3414 of Lecture Notes in Computer Science, pages 449-465, 2005.
[17] S. Paoletti, A. Juloski, G. Ferrari-Trecate, and R. Vidal. Identification of hybrid systems: A tutorial. European Journal of Control, 13(2-3):242-260, 2007.
[18] S. Paoletti, J. Roll, A. Garulli, and A. Vicino. Input/output realization of piecewise affine state space models. In 46th IEEE Conference on Decision and Control, 2007.
[19] M. Petreczky, L. Bako, and J. H. van Schuppen. Identifiability of discrete-time linear switched systems. In Hybrid Systems: Computation and Control. ACM, 2010.
[20] M. Petreczky, L. Bako, and J. H. van Schuppen. Realization theory for discrete-time linear switched systems. Technical report, ArXiv, 2011.
[21] M. Petreczky and J. H. van Schuppen. Partial-realization of linear switched systems: A formal power series approach. Technical Report arXiv:1010.5160v1, ArXiv, 2010. Available at http://arxiv.org/abs/1010.5160v1.
[22] J. Roll, A. Bemporad, and L. Ljung. Identification of piecewise affine systems via mixed-integer programming. Automatica, 40(1):37-50, 2004.
[23] Z. Sun and S. S. Ge. Switched Linear Systems: Control and Design. Springer, London, 2005.
[24] V. Verdult and M. Verhaegen. Subspace identification of piecewise linear systems. In Proceedings of the Conference on Decision and Control, 2004.
[25] R. Vidal. Recursive identification of switched ARX systems. Automatica, 44(9):2274-2287, 2008.
[26] R. Vidal, A. Chiuso, and S. Sastry. Observability and identifiability of jump linear systems. In Proceedings of the IEEE Conference on Decision and Control, pages 3614-3619, 2002.
[27] R. Vidal, S. Sastry, and A. Chiuso. Observability of linear hybrid systems. In Hybrid Systems: Computation and Control, 2003.
[28] J. C. Willems, P. Rapisarda, I. Markovsky, and B. L. M. De Moor. A note on persistency of excitation. Systems & Control Letters, 54(4):325-329, 2005.
Published as a conference paper at ICLR 2023 CLIFFORD NEURAL LAYERS FOR PDE MODELING
Johannes Brandstetter [email protected]
Systems and Robotics Research
Microsoft Research AI4Science
Microsoft Research AI4Science
Microsoft Research AI4Science
Microsoft Autonomous
Rianne Van Den Berg [email protected]
Systems and Robotics Research
Microsoft Research AI4Science
Microsoft Research AI4Science
Microsoft Research AI4Science
Microsoft Autonomous
Max Welling [email protected]
Systems and Robotics Research
Microsoft Research AI4Science
Microsoft Research AI4Science
Microsoft Research AI4Science
Microsoft Autonomous
Jayesh K Gupta [email protected]
Systems and Robotics Research
Microsoft Research AI4Science
Microsoft Research AI4Science
Microsoft Research AI4Science
Microsoft Autonomous
Published as a conference paper at ICLR 2023 CLIFFORD NEURAL LAYERS FOR PDE MODELING
Partial differential equations (PDEs) see widespread use in sciences and engineering to describe simulation of physical processes as scalar and vector fields interacting and coevolving over time. Due to the computationally expensive nature of their standard solution methods, neural PDE surrogates have become an active research topic to accelerate these simulations. However, current methods do not explicitly take into account the relationship between different fields and their internal components, which are often correlated. Viewing the time evolution of such correlated fields through the lens of multivector fields allows us to overcome these limitations. Multivector fields consist of scalar, vector, as well as higher-order components, such as bivectors and trivectors. Their algebraic properties, such as multiplication, addition and other arithmetic operations can be described by Clifford algebras. To our knowledge, this paper presents the first usage of such multivector representations together with Clifford convolutions and Clifford Fourier transforms in the context of deep learning. The resulting Clifford neural layers are universally applicable and will find direct use in the areas of fluid dynamics, weather forecasting, and the modeling of physical systems in general. We empirically evaluate the benefit of Clifford neural layers by replacing convolution and Fourier operations in common neural PDE surrogates by their Clifford counterparts on 2D Navier-Stokes and weather modeling tasks, as well as 3D Maxwell equations. For similar parameter count, Clifford neural layers consistently improve generalization capabilities of the tested neural PDE surrogates. Source code for our PyTorch implementation is available at Marco AS Trindade, Vinicius NL Rocha, and S Floquet. Clifford algebras, quantum neural networks and generalized quantum fourier transform. arXiv preprint arXiv:2206.01808, 2022.Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis.
INTRODUCTION
Most scientific phenomena are described by the evolution and interaction of physical quantities over space and time. The concept of fields is one widely used construct to continuously parameterize these quantities over chosen coordinates (McMullin, 2002). Prominent examples include (i) fluid mechanics, which has applications in domains ranging from mechanical and civil engineering, to geophysics and meteorology, and (ii) electromagnetism, which provides mathematical models for electric, optical, or radio technologies. The underlying equations of these examples are famously described in various forms of the Navier-Stokes equations and Maxwell's equations. For the majority of these equations, solutions are analytically intractable, and obtaining accurate predictions necessitates falling back on numerical approximation schemes often with prohibitive computation costs. Deep learning's success in various fields has led to a surge of interest in scientific applications, especially at augmenting and replacing numerical solving schemes in fluid dynamics with neural networks (Li et al., 2020;Lu et al., 2021;Rasp & Thuerey, 2021;Keisler, 2022;Weyn et al., 2020;Sønderby et al., 2020;Pathak et al., 2022). Taking weather simulations as our motivating example to ground our discussion, two different kinds of fields emerge: scalar fields such as temperature or humidity, and vector fields such as wind velocity or pressure gradients. Current deep learning based approaches treat different vector field (a) Scalar pressure field (b) Vector wind velocity field Figure 1: Fields of the Earth's shallow water model. Vector components of the wind velocities (right) are strongly related, i.e. they form a vector field. Additionally, the wind vector field and the scalar pressure field (left) are related since the gradient of the pressure field causes air movement and subsequently influences the wind components. We therefore aim to describe scalar and vector field as one multivector field, which models the dependencies correctly.
components the same as scalar fields, and stack all scalar fields along the channel dimension, thereby omitting the geometric relations between different components, both within vector fields as well as between individual vector and scalar fields. This practice leaves out important inductive bias information present in the input data. For example, wind velocities in the xand ydirections are strongly related, i.e. they form a vector field. Additionally, the wind vector field and the scalar pressure field are related since the gradient of the pressure field causes air movement and subsequently influences the wind components. In this work, we therefore build neural PDE surrogates which model the relation between different fields (e.g. wind and pressure field) and field components (e.g. xand ycomponent of the wind velocities). Figure 1 shows an example of a wind vector field as per the Earth's shallow water model in two dimensions, and the related scalar pressure field.
Clifford algebras (Suter, 2003;2012;Dorst et al., 2010;Renaud, 2020) are at the core intersection of geometry and algebra, introduced to simplify spatial and geometrical relations between many mathematical concepts. For example, Clifford algebras naturally unify real numbers, vectors, complex numbers, quaternions, exterior algebras, and many more. Most notably, in contrast to standard vector analysis where primitives are scalars and vectors, Clifford algebras have additional spatial primitives for representing plane and volume segments. An expository example is the crossproduct of two vectors in 3 dimensions, which naturally translates to a plane segment spanned by these two vectors. The cross product is often represented as a vector due to its 3 independent components, but the cross product has a sign flip under reflection that a true vector does not. In Clifford algebras, different spatial primitives can be summarized into objects called multivectors, as illustrated in Figure 2. In this work, we replace operations over feature fields in deep learning architectures by their Clifford algebra counterparts, which operate on multivector feature fields. Operations on, and mappings between multivectors are defined by Clifford algebras. For example, we will endow a convolutional kernel with multivector components, such that it can convolve over multivector feature maps.
Figure 2: Spatial primitives of a multivector in three dimensions: the scalar $1$, the vectors $e_1, e_2, e_3$, the bivectors $e_1e_2, e_3e_1, e_2e_3$, and the trivector $e_1e_2e_3$.
BACKGROUND: CLIFFORD ALGEBRAS
We introduce important mathematical concepts and discuss three Clifford algebras, Cl 2,0 (R), Cl 0,2 (R), Cl 3,0 (R), which we later use for the layers introduced in Section 3. A more detailed introduction as well as connections to complex numbers and quaternions is given in Appendix A.
Clifford algebras. Consider the vector space $\mathbb{R}^n$ with standard Euclidean product $\langle \cdot, \cdot \rangle$, where $n = p + q$, and $p$ and $q$ are non-negative integers. A real Clifford algebra $Cl_{p,q}(\mathbb{R})$ is an associative algebra generated by $p+q$ orthonormal basis elements $e_1, \ldots, e_{p+q}$ of the generating vector space $\mathbb{R}^n$, such that the following quadratic relations hold:
$$e_i^2 = +1 \ \text{for} \ 1 \leq i \leq p ; \qquad e_j^2 = -1 \ \text{for} \ p < j \leq p+q ; \qquad e_i e_j = -e_j e_i \ \text{for} \ i \neq j . \quad (1)$$
The pair (p, q) is called the signature and defines a Clifford algebra Cl p,q (R), together with the basis elements that span the vector space G p+q of Cl p,q (R). Vector spaces of Clifford algebras have scalar elements and vector elements, but can also have elements consisting of multiple basis elements of the generating vector space R n , which can be interpreted as plane and volume segments. Exemplary low-dimensional Clifford algebras are: (i) Cl 0,0 (R) which is a one-dimensional algebra that is spanned by the basis element {1} and is therefore isomorphic to R, the field of real numbers;
(ii) Cl 0,1 (R) which is a two-dimensional algebra with vector space G 1 spanned by {1, e 1 } where the basis vector e 1 squares to −1, and is therefore isomorphic to C, the field of complex numbers; (iii) Cl 0,2 (R) which is a 4-dimensional algebra with vector space G 2 spanned by {1, e 1 , e 2 , e 1 e 2 }, where e 1 , e 2 , e 1 e 2 all square to −1 and anti-commute. Thus, Cl 0,2 (R) is isomorphic to the quaternions H.
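To make the dimensionality counting concrete, the following is a minimal sketch (our own, not taken from the paper's codebase) that enumerates the $2^{p+q}$ basis elements of the vector space of $Cl_{p,q}(\mathbb{R})$ as index tuples, so that, e.g., for $p+q=2$ one recovers $\{1, e_1, e_2, e_1e_2\}$:

```python
# Enumerate the basis elements of the Clifford algebra Cl_{p,q}(R) as index
# tuples over the generating basis vectors e_1, ..., e_{p+q}.
from itertools import combinations

def clifford_basis(p: int, q: int):
    n = p + q
    basis = []
    for grade in range(n + 1):            # grade-k elements use k generating vectors
        basis.extend(combinations(range(1, n + 1), grade))
    return basis

print(clifford_basis(2, 0))        # [(), (1,), (2,), (1, 2)]  ->  {1, e1, e2, e1e2}
print(len(clifford_basis(3, 0)))   # 8 basis elements: 1, e1, e2, e3, e1e2, e1e3, e2e3, e1e2e3
```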
Grade, dual, geometric product. The grade of a Clifford algebra basis element is the dimension of the subspace it represents. For example, the basis elements {1, e 1 , e 2 , e 1 e 2 } of the vector space G 2 of the Clifford algebra Cl 2,0 (R) have the grades {0, 1, 1, 2}. Using the concept of grades, we can divide Clifford algebras into linear subspaces made up of elements of each grade. The grade subspace of smallest dimension is M 0 , the subspace of all scalars (elements with 0 basis vectors of the generating vector space). Elements of M 1 are called vectors, elements of M 2 are bivectors, and so on. In general, a vector space G p+q of a Clifford algebra Cl p,q (R) can be written as the direct sum of all of these subspaces: G p+q = M 0 ⊕ M 1 ⊕ . . . ⊕ M p+q . The elements of a Clifford algebra are called multivectors, containing elements of subspaces, i.e. scalars, vectors, bivectors, . . . , k-vectors. The basis element with the highest grade is called the pseudoscalar, which in R 2 corresponds to the bivector e 1 e 2 , and in R 3 to the trivector e 1 e 2 e 3 . The dual a * of a multivector a is defined as a * = a i p+q , where i p+q represents the respective pseudoscalar of the Clifford algebra. This definition allows us to relate different multivectors to each other, which is a useful property when defining Clifford Fourier transforms. For example, for Clifford algebras in R 2 the dual of the scalar is the bivector, and in R 3 the dual of the scalar is the trivector. Finally, the geometric product is a bilinear operation on multivectors. For arbitrary multivectors a, b, c ∈ G p+q , and scalar λ, the geometric product has the following properties: (i) closure, i.e. ab ∈ G p+q ; (ii) associativity, i.e. (ab)c = a(bc); (iii) commutative scalar multiplication, i.e. λa = aλ; (iv) distributivity over addition, i.e. a(b + c) = ab + ac. The geometric product is in general non-commutative, i.e. ab ≠ ba. Note that Equation 1 describes the geometric product specifically between basis elements of the generating vector space.
Clifford algebras Cl 2,0 (R) and Cl 0,2 (R). The 4-dimensional vector spaces of these Clifford algebras have the basis vectors {1, e 1 , e 2 , e 1 e 2 }, where e 1 , e 2 square to +1 for Cl 2,0 (R) and to −1 for Cl 0,2 (R). For Cl 2,0 (R), the geometric product of two multivectors $a = a_0 + a_1 e_1 + a_2 e_2 + a_{12} e_1 e_2$ and $b = b_0 + b_1 e_1 + b_2 e_2 + b_{12} e_1 e_2$ is given by:
$$ab = (a_0 b_0 + a_1 b_1 + a_2 b_2 - a_{12} b_{12})\,1 + (a_0 b_1 + a_1 b_0 - a_2 b_{12} + a_{12} b_2)\,e_1 + (a_0 b_2 + a_1 b_{12} + a_2 b_0 - a_{12} b_1)\,e_2 + (a_0 b_{12} + a_1 b_2 - a_2 b_1 + a_{12} b_0)\,e_1 e_2 , \quad (2)$$
which can be derived by collecting terms that multiply the same basis elements, see Appendix A. A vector $x = (x_1, x_2) \in \mathbb{R}^2$ with standard Euclidean product $\langle \cdot, \cdot \rangle$ can be related to $x_1 e_1 + x_2 e_2 \in \mathbb{R}^2 \subset G^2$. Clifford multiplication of two vectors $x, y \in \mathbb{R}^2 \subset G^2$ yields the geometric product $xy$:
$$xy = (x_1 e_1 + x_2 e_2)(y_1 e_1 + y_2 e_2) = x_1 y_1 e_1^2 + x_2 y_2 e_2^2 + x_1 y_2 e_1 e_2 + x_2 y_1 e_2 e_1 = \langle x, y \rangle + x \wedge y , \quad (3)$$
where ∧ is the exterior or wedge product. The asymmetric quantity x ∧ y = −y ∧ x is associated with the bivector, which can be interpreted as an oriented plane segment as shown in Figure 3. A unit bivector i 2 , spanned by the (orthonormal) basis vectors e 1 and e 2 is determined by the product:
$$i_2 = e_1 e_2 = \underbrace{\langle e_1, e_2 \rangle}_{=0} + \, e_1 \wedge e_2 = - e_2 \wedge e_1 = - e_2 e_1 , \quad (4)$$
which if squared yields $i_2^2 = -1$. Thus, $i_2$ represents a geometric $\sqrt{-1}$. From Equation 4, it follows that $e_2 = e_1 i_2 = -i_2 e_1$ and $e_1 = i_2 e_2 = -e_2 i_2$. Using the pseudoscalar $i_2$, the dual of a scalar is a bivector and the dual of a vector is again a vector. The dual pairs of the base vectors are $1 \leftrightarrow e_1 e_2$ and $e_1 \leftrightarrow e_2$. For Cl 2,0 (R), these dual pairs allow us to write an arbitrary multivector $a$ as
$$a = a_0 + a_1 e_1 + a_2 e_2 + a_{12} e_1 e_2 = \underbrace{1\,(a_0 + a_{12} i_2)}_{\text{spinor part}} + \underbrace{e_1 (a_1 + a_2 i_2)}_{\text{vector part}} , \quad (5)$$
which can be regarded as two complex-valued parts: the spinor 3 part, which commutes with the base element 1, i.e. 1i 2 = i 2 1, and the vector part, which anti-commutes with the respective base element e 1 , i.e. e 1 i 2 = e 1 e 1 e 2 = −e 1 e 2 e 1 = −i 2 e 1 . For Cl(0, 2)(R), the vector part changes to e 1 a 1 − a 2 i 2 . This decomposition will be the basis for Clifford Fourier transforms.
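As a concrete illustration of Equation 2, the following is a minimal NumPy sketch (our own helper names, not the paper's reference implementation) of the Cl 2,0 (R) geometric product on multivectors stored as coefficient arrays [a0, a1, a2, a12]:

```python
import numpy as np

def geometric_product_2d(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Geometric product of two Cl_{2,0}(R) multivectors [a0, a1, a2, a12]."""
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return np.array([
        a0 * b0 + a1 * b1 + a2 * b2 - a12 * b12,   # scalar part
        a0 * b1 + a1 * b0 - a2 * b12 + a12 * b2,   # e1 part
        a0 * b2 + a1 * b12 + a2 * b0 - a12 * b1,   # e2 part
        a0 * b12 + a1 * b2 - a2 * b1 + a12 * b0,   # e1e2 (bivector) part
    ])

# Two orthonormal vectors x = e1 and y = e2: their product is the unit bivector
# e1e2, i.e. <x, y> = 0 and x ^ y = e1e2, matching Equations 3 and 4.
x = np.array([0.0, 1.0, 0.0, 0.0])
y = np.array([0.0, 0.0, 1.0, 0.0])
print(geometric_product_2d(x, y))  # [0. 0. 0. 1.]
```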
The Clifford algebra Cl 0,2 (R) is isomorphic to the quaternions H, which are an extension of complex numbers and are commonly written in the literature as $a + b\hat{i} + c\hat{j} + d\hat{k}$. Quaternions also form a 4-dimensional algebra spanned by $\{1, \hat{i}, \hat{j}, \hat{k}\}$, where $\hat{i}, \hat{j}, \hat{k}$ all square to −1. The algebra isomorphism to Cl 0,2 (R) is easy to verify since $e_1$, $e_2$, $e_1 e_2$ all square to −1 and anti-commute. The basis element 1 is called the scalar part and the basis elements $\hat{i}, \hat{j}, \hat{k}$ are called the vector part of a quaternion.
Quaternions have practical uses in applied mathematics, particularly for expressing rotations, which we will use to define the rotational Clifford convolution layer in Section 3.
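The rotation property that Section 3 relies on is that a unit-quaternion sandwich product $q v q^{-1}$ on the vector part of $v$ equals a 3×3 matrix product $R(q)\,v$. The following is a small NumPy sketch (our own, with standard Hamilton-product conventions; function names are illustrative) verifying this numerically:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions [w, x, y, z]."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([
        p0*q0 - p1*q1 - p2*q2 - p3*q3,
        p0*q1 + p1*q0 + p2*q3 - p3*q2,
        p0*q2 - p1*q3 + p2*q0 + p3*q1,
        p0*q3 + p1*q2 - p2*q1 + p3*q0,
    ])

def rotation_matrix(q):
    """Standard 3x3 rotation matrix built from a unit quaternion [w, x, y, z]."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

q = np.array([1.0, 2.0, 3.0, 4.0]); q /= np.linalg.norm(q)   # unit quaternion
v = np.array([0.0, 0.5, -1.0, 2.0])                          # pure quaternion (vector part only)
q_conj = np.array([q[0], -q[1], -q[2], -q[3]])               # inverse of a unit quaternion
sandwich = quat_mul(quat_mul(q, v), q_conj)
print(np.allclose(sandwich[1:], rotation_matrix(q) @ v[1:]))  # True
```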
Clifford algebra Cl 3,0 (R). The 8-dimensional vector space G 3 of the Clifford algebra Cl 3,0 (R) has the basis vectors {1, e 1 , e 2 , e 3 , e 1 e 2 , e 3 e 1 , e 2 e 3 , e 1 e 2 e 3 }, i.e. it consists of one scalar, three vectors {e 1 , e 2 , e 3 }, three bivectors {e 1 e 2 , e 3 e 1 , e 2 e 3 } 4 , and one trivector e 1 e 2 e 3 . The trivector is the pseudoscalar i 3 of the algebra. The geometric product of two multivectors is defined analogously to the geometric product of Cl 2,0 (R), see Appendix A. The dual pairs of Cl 3,0 (R) are: 1 ↔ e 1 e 2 e 3 = i 3 , e 1 ↔ e 2 e 3 , e 2 ↔ e 3 e 1 , and e 3 ↔ e 1 e 2 . An intriguing example of the duality of the multivectors of Cl 3,0 (R) emerges when writing the expression of the electromagnetic field F in terms of an electric vector field E and a magnetic vector field B, such that F = E + Bi 3 , where E = E x e 1 +E y e 2 +E z e 3 and B = B x e 1 +B y e 2 +B z e 3 . In this way the electromagnetic field F decomposes into electric vector and magnetic bivector parts via the pseudoscalar i 3 . For example, for the base component B x e 1 of B it holds that B x e 1 i 3 = B x e 1 e 1 e 2 e 3 = B x e 2 e 3 which is a bivector and the dual to the base component E x e 1 of E. Consequently, the multivector representing F consists of three vectors (the electric field components) and three bivectors (the magnetic field components multiplied by i 3 ). This viewpoint gives Clifford neural layers a natural advantage over their default counterparts as we will see in Section 4.
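To illustrate this layout, the following is a small sketch (our own coefficient ordering, matching the basis listing above) of packing E and B into one Cl 3,0 (R) multivector F = E + B i3; it relies on the dual pairs e1 i3 = e2e3, e2 i3 = e3e1, e3 i3 = e1e2:

```python
import numpy as np

def electromagnetic_multivector(E: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Coefficients of F = E + B*i3 in the basis [1, e1, e2, e3, e1e2, e3e1, e2e3, e1e2e3]."""
    F = np.zeros(8)
    F[1:4] = E          # E_x e1 + E_y e2 + E_z e3 (vector part)
    F[4] = B[2]         # B_z e3 i3 = B_z e1e2
    F[5] = B[1]         # B_y e2 i3 = B_y e3e1
    F[6] = B[0]         # B_x e1 i3 = B_x e2e3
    return F            # scalar and trivector slots stay zero

print(electromagnetic_multivector(np.array([1., 0., 0.]), np.array([0., 0., 1.])))
```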
CLIFFORD NEURAL LAYERS
Here, we introduce 2D Clifford convolution and 2D Clifford Fourier transform layers. Appendix B contains extensions to 3 dimensions. In Appendices B, D, related literature is discussed, most notably complex (Bassey et al., 2021) and quaternion neural networks (Parcollet et al., 2020).
Clifford CNN layers. Regular convolutional neural network (CNN) layers take as input feature maps f : Z 2 → R cin and convolve 5 them with a set of c out filters {w i } cout i=1 with w i : Z 2 → R cin :
$$[f \star w^i](x) = \sum_{y \in \mathbb{Z}^2} \langle f(y), w^i(y - x) \rangle = \sum_{y \in \mathbb{Z}^2} \sum_{j=1}^{c_{in}} f^j(y)\, w^{i,j}(y - x) , \quad (6)$$
which can be interpreted as an inner product of input feature maps with the corresponding filters at every point y ∈ Z 2 . By applying c out filters, the output feature maps can be interpreted as c out -dimensional feature vectors at every point y ∈ Z 2 . We now extend CNN layers such that the element-wise product of scalars f j (y)w i,j (y − x) is replaced by the geometric product of multivector inputs and multivector filters f j (y)w i,j (y − x), where the chosen signature of Cl is reflected in the geometric product. We replace the feature maps f : Z 2 → R cin by multivector feature maps f : Z 2 → (G 2 ) cin and convolve them with a set of c out multivector filters {w i } cout i=1 : Z 2 → (G 2 ) cin :
$$[f \star w^i](x) = \sum_{y \in \mathbb{Z}^2} \sum_{j=1}^{c_{in}} \underbrace{f^j(y)\, w^{i,j}(y - x)}_{f^j w^{i,j} :\; G^2 \times G^2 \to G^2} , \quad (7)$$
3 Spinors are elements of a complex vector space that can be associated with Euclidean space. Unlike vectors, spinors transform to their negative when rotated 360°.
4 The bivector e1e3 has negative orientation.
5 In deep learning, a convolution operation in the forward pass is implemented as cross-correlation.
Figure 4: Sketch of Clifford convolution. Multivector input fields are convolved with multivector kernels via the geometric product, yielding multivector output fields.
Note that each geometric product, indexed by i ∈ {1, ..., c out } and j ∈ {1, ..., c in }, now results in a new multivector rather than a scalar. Hence, the output of a layer is a grid of c out multivectors. We can e.g. implement a Cl(2, 0)(R) Clifford CNN layer using Equation 2, where $\{b_0, b_1, b_2, b_{12}\} \to \{w^{i,j}_0, w^{i,j}_1, w^{i,j}_2, w^{i,j}_{12}\}$ correspond to 4 different kernels representing one 2D multivector kernel, i.e. 4 different convolution layers, and $\{a_0, a_1, a_2, a_{12}\} \to \{f^j_0, f^j_1, f^j_2, f^j_{12}\}$ correspond to the scalar, vector, and bivector parts of the input multivector field. The channels of the different layers represent different stacks of scalars, vectors, and bivectors. Analogously, we can implement a Cl(3, 0)(R) CNN layer using Equation 42 in Appendix B. A schematic sketch of a Clifford convolution layer is shown in Figure 4.
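To make this construction concrete, the following is a hedged PyTorch sketch of such a Cl(2,0)(R) Clifford convolution: four real-valued kernel stacks play the roles of $w_0, w_1, w_2, w_{12}$ and are recombined according to the geometric product of Equation 2. The module name, tensor layout (blade dimension before the channel dimension), and initialization are our own illustrative choices, not the interface of the open-sourced implementation referenced in the reproducibility statement.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CliffordConv2d(nn.Module):
    """Minimal Cl_{2,0}(R) Clifford convolution sketch (no bias, fixed stride)."""
    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        # one real-valued kernel stack per blade: scalar, e1, e2, e1e2
        self.w = nn.ParameterList([
            nn.Parameter(0.02 * torch.randn(out_channels, in_channels, kernel_size, kernel_size))
            for _ in range(4)
        ])
        self.padding = padding

    def forward(self, f):
        # f: (batch, 4, in_channels, H, W) -> multivector components (f0, f1, f2, f12)
        f0, f1, f2, f12 = f.unbind(dim=1)
        w0, w1, w2, w12 = self.w
        conv = lambda x, w: F.conv2d(x, w, padding=self.padding)
        # recombination according to the Cl_{2,0}(R) geometric product (Equation 2)
        out0 = conv(f0, w0) + conv(f1, w1) + conv(f2, w2) - conv(f12, w12)
        out1 = conv(f0, w1) + conv(f1, w0) - conv(f2, w12) + conv(f12, w2)
        out2 = conv(f0, w2) + conv(f1, w12) + conv(f2, w0) - conv(f12, w1)
        out3 = conv(f0, w12) + conv(f1, w2) - conv(f2, w1) + conv(f12, w0)
        return torch.stack([out0, out1, out2, out3], dim=1)

x = torch.randn(2, 4, 8, 32, 32)            # batch of multivector feature maps
print(CliffordConv2d(8, 16)(x).shape)       # torch.Size([2, 4, 16, 32, 32])
```

In practice one would add biases, proper initialization, and stride/dilation handling; the sketch only illustrates how the four component convolutions are mixed.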
Rotational Clifford CNN layers. Here we introduce an alternative parameterization to the Clifford CNN layer introduced in Equation 7 by using the isomorphism of the Clifford algebra Cl 0,2 (R) to quaternions. We take advantage of the fact that a quaternion rotation can be realized by a matrix multiplication (Jia, 2008;Kuipers, 1999;Schwichtenberg, 2015). Using the isomorphism, we can represent the feature maps f j and filters w i,j as quaternions:
$f^j = f^j_0 + f^j_1 \hat{i} + f^j_2 \hat{j} + f^j_3 \hat{k}$ and $w^{i,j} = w^{i,j}_0 + w^{i,j}_1 \hat{i} + w^{i,j}_2 \hat{j} + w^{i,j}_3 \hat{k}$. We can now devise an alternative parameterization of the product between the feature map f j and w i,j . To be more precise, we introduce a composite operation that results in a scalar quantity and a quaternion rotation, where the latter acts on the vector part of the quaternion f j and only produces nonzero expansion coefficients for the vector part of the quaternion output. A quaternion rotation w i,j f j (w i,j ) −1 acts on the vector part $(\hat{i}, \hat{j}, \hat{k})$ of f j , and can be algebraically manipulated into a vector-matrix operation R i,j f j , where R i,j : H → H is built up from the elements of w i,j (Kuipers, 1999). In other words, one can transform the vector part $(\hat{i}, \hat{j}, \hat{k})$ of f j ∈ H via a rotation matrix R i,j that is built from the scalar and vector part $(1, \hat{i}, \hat{j}, \hat{k})$ of w i,j ∈ H. Altogether, a rotational multivector filter $\{w^i_{rot}\}_{i=1}^{c_{out}} : \mathbb{Z}^2 \to (G^2)^{c_{in}}$ acts on the feature map f j through a rotational transformation $R^{i,j}(w^{i,j}_{rot,0}, w^{i,j}_{rot,1}, w^{i,j}_{rot,2}, w^{i,j}_{rot,12})$ acting on vector and bivector parts of the multivector feature map $f : \mathbb{Z}^2 \to (G^2)^{c_{in}}$, and an additional scalar response of the multivector filters:
$$\left[f \star w^i_{rot}\right](x) = \sum_{y \in \mathbb{Z}^2} \sum_{j=1}^{c_{in}} \underbrace{\left(f^j(y)\, w^{i,j}_{rot}(y-x)\right)_0}_{\text{scalar output}} + R^{i,j}(y-x) \cdot \begin{pmatrix} f^j_1(y) \\ f^j_2(y) \\ f^j_{12}(y) \end{pmatrix} , \quad (8)$$
where $\left(f^j(y)\, w^{i,j}_{rot}(y-x)\right)_0 = f^j_0 w^{i,j}_{rot,0} - f^j_1 w^{i,j}_{rot,1} - f^j_2 w^{i,j}_{rot,2} - f^j_{12} w^{i,j}_{rot,12}$ is the scalar part of the geometric product (cf. Equation 34 in Appendix A).

Clifford Fourier layers. The discrete Fourier transform of an n-dimensional complex signal $f(x) = f(x_1, \ldots, x_n) : \mathbb{R}^n \to \mathbb{C}$ at $M_1 \times \ldots \times M_n$ grid points is defined as:
$$\mathcal{F}\{f\}(\xi_1, \ldots, \xi_n) = \sum_{m_1=0}^{M_1} \ldots \sum_{m_n=0}^{M_n} f(m_1, \ldots, m_n) \cdot e^{-2\pi i \left( \frac{m_1 \xi_1}{M_1} + \ldots + \frac{m_n \xi_n}{M_n} \right)} , \quad (9)$$
where (ξ 1 , . . . , ξ n ) ∈ Z M1 . . . × . . . Z Mn . In Fourier Neural Operators (FNO) (Li et al., 2020), discrete Fourier transforms on real-valued input fields and respective back-transforms -implemented as Fast Fourier Transforms on real-valued inputs (RFFTs) 7 -are interleaved with a weight multiplication by a complex weight matrix of shape c in ×c out for each mode, which results in a complex-valued weight tensor of the form W ∈ C cin×cout×(ξ max 1 ×...×ξ max n ) , where Fourier modes above cut-off frequencies (ξ max 1 , . . . , ξ max n ) are set to zero. Additionally, a residual connection is usually implemented as convolution layer with kernel size 1. In Figure 5a, a sketch of an FNO layer is shown. For Cl(2, 0)(R), the Clifford Fourier transform (Ebling & Scheuermann, 2005;Ebling, 2006;Hitzer, 2012) for multivector valued functions f (x) : R 2 → G 2 and vectors x, ξ ∈ R 2 is defined as:
$$\hat{f}(\xi) = \mathcal{F}\{f\}(\xi) = \frac{1}{2\pi} \int_{\mathbb{R}^2} f(x)\, e^{-2\pi i_2 \langle x, \xi \rangle}\, dx , \quad \forall \xi \in \mathbb{R}^2 , \quad (10)$$
provided that the integral exists. In contrast to standard Fourier transforms, $f(x)$ and $\hat{f}(\xi)$ represent multivector fields in the spatial and the frequency domain, respectively. Furthermore, the pseudoscalar $i_2 = e_1 e_2$ is used in the exponent. Inserting the definition of multivector fields, we can rewrite Equation 10 as:
$$\mathcal{F}\{f\}(\xi) = \frac{1}{2\pi} \int_{\mathbb{R}^2} \left[ 1 \underbrace{\left( f_0(x) + f_{12}(x)\, i_2 \right)}_{\text{spinor part}} + e_1 \underbrace{\left( f_1(x) + f_2(x)\, i_2 \right)}_{\text{vector part}} \right] e^{-2\pi i_2 \langle x, \xi \rangle}\, dx = 1\, \mathcal{F}\!\left( f_0(x) + f_{12}(x)\, i_2 \right)(\xi) + e_1\, \mathcal{F}\!\left( f_1(x) + f_2(x)\, i_2 \right)(\xi) . \quad (11)$$
We obtain a Clifford Fourier transform by applying two standard Fourier transforms to the dual pairs $f_0 = f_0(x) + f_{12}(x) i_2$ and $f_1 = f_1(x) + f_2(x) i_2$, which can both be treated as complex-valued signals $f_0, f_1 : \mathbb{R}^2 \to \mathbb{C}$. Consequently, $f(x)$ can be understood as an element of $\mathbb{C}^2$. The 2D Clifford Fourier transform is the linear combination of two classical Fourier transforms. Discrete versions of Equation 11 are obtained analogously to Equation 9, see Appendix B. Similar to FNO, multivector weight tensors $W \in (G^2)^{c_{in} \times c_{out} \times (\xi_1^{max} \times \xi_2^{max})}$ are applied, where again Fourier modes above cut-off frequencies $(\xi_1^{max}, \xi_2^{max})$ are set to zero. In doing so, we point-wise modify the Clifford Fourier modes $\hat{f}(\xi) = \mathcal{F}\{f\}(\xi) = \hat{f}_0(\xi) + \hat{f}_1(\xi) e_1 + \hat{f}_2(\xi) e_2 + \hat{f}_{12}(\xi) e_1 e_2$ via the geometric product. The Clifford Fourier modes follow naturally when combining spinor and vector parts of Equation 11. Finally, the residual connection is replaced by a Clifford convolution with multivector kernel k. A schematic sketch is shown in Figure 5b. For Cl(3, 0)(R), Clifford Fourier transforms follow a similar elegant construction, where we apply four separate Fourier transforms to
$$\begin{aligned} \mathbf{f}_0(x) &= f_0(x) + f_{123}(x)\, i_3 , & \mathbf{f}_1(x) &= f_1(x) + f_{23}(x)\, i_3 , \\ \mathbf{f}_2(x) &= f_2(x) + f_{31}(x)\, i_3 , & \mathbf{f}_3(x) &= f_3(x) + f_{12}(x)\, i_3 , \end{aligned} \quad (12)$$
i.e. scalar/trivector and vector/bivector components are combined into complex fields and then subjected to a Fourier transform.
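As a concrete illustration of the 2D case, the following is a hedged PyTorch sketch of the dual-pair packing behind Equation 11: the pairs $(f_0 + f_{12} i_2)$ and $(f_1 + f_2 i_2)$ are packed as complex tensors and pushed through two standard complex FFTs. Function names and the blade layout are our own; a full CFNO layer would additionally truncate modes and multiply by multivector weights via the geometric product before transforming back.

```python
import torch

def clifford_fft2(f):
    """f: (..., 4, H, W) real multivector field with components (f0, f1, f2, f12)."""
    f0, f1, f2, f12 = f.unbind(dim=-3)
    spinor = torch.complex(f0, f12)   # f0 + f12 * i2
    vector = torch.complex(f1, f2)    # f1 + f2  * i2
    return torch.fft.fft2(spinor), torch.fft.fft2(vector)

def clifford_ifft2(spinor_hat, vector_hat):
    spinor = torch.fft.ifft2(spinor_hat)
    vector = torch.fft.ifft2(vector_hat)
    # unpack the complex dual pairs back into the four multivector components
    return torch.stack([spinor.real, vector.real, vector.imag, spinor.imag], dim=-3)

f = torch.randn(2, 4, 32, 32)
print(torch.allclose(clifford_ifft2(*clifford_fft2(f)), f, atol=1e-5))  # True (round trip)
```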
EXPERIMENTS
We assess Clifford neural layers for different architectures in three experimental settings: the incompressible Navier-Stokes equations, shallow water equations for weather modeling, and 3-dimensional Maxwell's equations. We replace carefully designed baseline architectures by their Clifford counterparts. Baseline ResNet architectures comprise 8 residual blocks, each consisting of two convolution layers with 3 × 3 kernels, shortcut connections, group normalization (Wu & He, 2018), and GeLU activation functions (Hendrycks & Gimpel, 2016). Baseline 2-dimensional Fourier Neural Operators (FNOs) consist of 8 (4) FNO blocks, GeLU activations and no normalization scheme, using 16 (8) Fourier modes for the 2- and 3-dimensional equations, respectively. For Clifford networks, we change convolutions and Fourier transforms to their respective Clifford operation, and substitute normalization techniques and activation functions with Clifford counterparts, keeping the number of parameters similar. We evaluate different training set sizes, and report losses for scalar and vector fields. All datasets share the common trait of containing multiple input and output fields. More precisely, one scalar and one 2-dimensional vector field in case of the Navier-Stokes and the shallow water equations, and a 3-dimensional (electric) vector field and its dual (magnetic) bivector field in case of the Maxwell's equations.

Figure 5: (a) FNO layer and (b) Clifford FNO layer. The real-valued Fast Fourier transform (RFFT) over real-valued scalar input fields f(x) is replaced by the complex Fast Fourier transform (FFT) over the complex-valued dual parts v(x) and s(x) of multivector fields f(x). Pointwise multiplication in Fourier space via the complex weight tensor W is replaced by the geometric product in the Clifford Fourier space via the multivector weight tensor W. Additionally, the convolution path is replaced by Clifford convolutions with multivector kernels w.
Figure 6: Example input and target fields (scalar field and the x- and y-components of the vector field) for the Navier-Stokes experiments. Input fields comprise a t = 2 timestep history.
Example inputs and targets of the neural PDE surrogates are shown in Figure 6. The number of input timesteps t varies for different experiments. The one-step loss is the mean-squared error at the next timestep, summed over fields. The rollout loss is the mean-squared error after applying the neural PDE surrogate 5 times, summed over fields and the time dimension. More information on the implementation details of the tested architectures, loss functions, and more detailed results can be found in Appendix C.
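The following is a hedged sketch of the two metrics as we read them from the description above (the actual evaluation code may differ, e.g. in how the prediction history is updated); it assumes a model that maps one input tensor to the fields of the next timestep:

```python
import torch

def one_step_loss(model, x, target):
    # x: (batch, fields, H, W) input, target: (batch, fields, H, W) next timestep
    return ((model(x) - target) ** 2).mean(dim=(0, 2, 3)).sum()   # MSE per field, summed over fields

def rollout_loss(model, x, targets, steps=5):
    # targets: (batch, steps, fields, H, W); predictions are fed back autoregressively
    loss, inp = 0.0, x
    for t in range(steps):
        pred = model(inp)
        loss = loss + ((pred - targets[:, t]) ** 2).mean(dim=(0, 2, 3)).sum()
        inp = pred  # simplification: assumes a one-step history; with longer histories the oldest frame is replaced
    return loss
```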
Navier-Stokes in 2D. The incompressible Navier-Stokes equations (Temam, 2001) conserve the velocity flow fields v : X → R 2 where X ∈ R 2 via:
$$\frac{\partial v}{\partial t} = -v \cdot \nabla v + \mu \nabla^2 v - \nabla p + f , \qquad \nabla \cdot v = 0 , \quad (13)$$
where v · ∇v is the convection, i.e. the rate of change of v along v, µ∇ 2 v the viscosity, i.e. the diffusion or net movement of v, ∇p the internal pressure and f an external force, which in our case is a buoyancy force. An additional incompressibility constraint ∇ · v = 0 yields mass conservation of the Navier-Stokes equations. In addition to the velocity field, we introduce a scalar field representing a scalar quantity, i.e. smoke, that is being transported via the velocity field. The scalar field is advected by the vector field, i.e. as the vector field changes, the scalar field is transported along with it, whereas the scalar field influences the vector field only via an external force term. We call this weak coupling between vector and scalar fields. We implement the 2D Navier-Stokes equation using ΦFlow 8 (Holl et al., 2020), obtaining data on a grid with spatial resolution of 128 × 128 (∆x = 0.25, ∆y = 0.25), and temporal resolution of ∆t = 1.5 s. Results for one-step loss and rollout loss on the test set are shown in Figure 7a. For ResNet-like architectures, we observe that both CResNet and CResNet rot improve upon the ResNet baseline. Additionally, we observe that rollout losses are also lower for the two Clifford based architectures, which we attribute to better and more stable models that do not overfit to one-step predictions so easily. Lastly, while in principle CResNet and CResNet rot based architectures are equally flexible, CResNet rot ones in general perform better than CResNet ones. For FNO and respective Clifford Fourier based (CFNO) architectures, the loss is in general much lower than for ResNet based architectures. CFNO architectures improve upon FNO architectures for all dataset sizes, and for one-step as well as rollout losses.
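The following is a hedged sketch of how we read the multivector encoding for this setting: the smoke scalar occupies the scalar blade, the velocity components the e1/e2 blades, and the bivector blade is unused (zero) at the input. The function name and tensor layout are our own illustrative choices.

```python
import torch

def pack_navier_stokes(smoke, vx, vy):
    # smoke, vx, vy: (batch, time_history, H, W) simulation fields
    bivector = torch.zeros_like(smoke)                       # no bivector input channel
    return torch.stack([smoke, vx, vy, bivector], dim=1)     # (batch, 4, time_history, H, W)

smoke, vx, vy = (torch.randn(8, 2, 128, 128) for _ in range(3))
print(pack_navier_stokes(smoke, vx, vy).shape)               # torch.Size([8, 4, 2, 128, 128])
```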
Shallow water equations. This set of coupled equations (Vreugdenhil, 1994) can be derived from integrating the incompressible Navier-Stokes equations in cases where the horizontal length scale is much larger than the vertical length scale. As such, the equations model a thin layer of fluid of constant density in hydrostatic balance, bounded from below by the bottom topography and from above by a free surface, via 3 coupled PDEs describing the velocity in the x-direction, the velocity in the y-direction, and the scalar pressure field. The shallow water equations can therefore be used as a simplified weather model, as done in this work and exemplified in Figure 1. The relation between vector and scalar components is relatively strong (strong coupling due to the 3 coupled PDEs). We obtain data for the 2D shallow water equations on a grid with spatial resolution of 192 × 96 (∆x = 1.875°, ∆y = 3.75°), and temporal resolution of ∆t = 6 h. We observe similar results as for the Navier-Stokes experiments. For low numbers of trajectories, ResNet architectures seem to lack expressiveness, where arguably some data smoothing is learned first. Thus, ResNets need significantly more trajectories than (C)FNO architectures to obtain reasonable loss values, which seems to go hand in hand with Clifford layers gaining an advantage. In general, performance differences between baseline and Clifford architectures are even more pronounced, which we attribute to the stronger coupling of the scalar and the vector fields.
Maxwell's equations in matter in 3D. In isotropic media, Maxwell's equations (Griffiths, 2005) propagate solutions of the displacement field D, which is related to the electric field via D = ε₀ε_r E, where ε₀ is the permittivity of free space and ε_r is the relative permittivity of the medium, and the magnetization field H, which is related to the magnetic field B via H = B/(µ₀µ_r), where µ₀ is the permeability of free space and µ_r is the relative permeability of the medium. The electromagnetic field F has the intriguing property that the electric field E and the magnetic field B are dual pairs, thus F = E + B i₃, i.e. there is strong coupling between the electric field and its dual (bivector) magnetic field. This duality also holds for D and H. Concretely, the fields of interest are the vector-valued D-field (D_x, D_y, D_z) and the vector-valued H-field (H_x, H_y, H_z). We obtain data for the 3D Maxwell's equations on a grid with spatial resolution of 32 × 32 × 32 (∆x = ∆y = ∆z = 5 · 10⁻⁷ m), and temporal resolution of ∆t = 50 s. We randomly place 18 different light sources outside a cube which emit light with different amplitudes and different phase shifts, causing the resulting D and H fields to interfere. The wavelength of the emitted light is 10⁻⁵ m. We test FNO based architectures and their respective Clifford counterparts (CFNO). Due to the vector-bivector character of the electric and magnetic field components, Maxwell's equations are an ideal playground to stress-test the inductive bias advantages of Clifford based architectures. Results for one-step loss and rollout loss on the test set are shown in Figure 8. CFNO architectures improve upon FNO architectures, especially for low numbers of trajectories. The results demonstrate the much stronger inductive bias of Clifford based 3-dimensional Fourier layers, and their general applicability to 3-dimensional problems, which are structurally even more interesting than 2-dimensional ones.
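Analogously to the 2D case, the following is a hedged sketch of how we read the Cl 3,0 (R) encoding for the Maxwell data: D occupies the vector blades and H the dual bivector blades, with scalar and trivector blades left at zero. The blade ordering follows the basis listing of Section 2; the function name is our own.

```python
import torch

def pack_maxwell(D, H):
    # D, H: (batch, 3, X, Y, Z) vector fields on the 32^3 grid
    zeros = torch.zeros_like(D[:, :1])
    return torch.cat([
        zeros,                              # scalar blade
        D,                                  # e1, e2, e3 (vector blades)
        H[:, 2:3], H[:, 1:2], H[:, 0:1],    # e1e2 <- H_z, e3e1 <- H_y, e2e3 <- H_x (duality)
        zeros,                              # trivector blade
    ], dim=1)                               # (batch, 8, X, Y, Z)

D, H = torch.randn(2, 3, 32, 32, 32), torch.randn(2, 3, 32, 32, 32)
print(pack_maxwell(D, H).shape)             # torch.Size([2, 8, 32, 32, 32])
```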
CONCLUSION
We introduced Clifford neural layers that handle the various scalar (e.g. charge density), vector (e.g. electric field), bivector (magnetic field) and higher order fields as proper geometric objects organized as multivectors. This geometric algebra perspective allowed us to naturally generalize convolution and Fourier transformations to their Clifford counterparts, providing an elegant rule to design new neural network layers. The multivector viewpoint denotes an inductive bias advantage, leading to a better representation of the relationship between fields and their individual components, which is prominently demonstrated by the fact that our Clifford layers significantly outperformed equivalent standard neural PDE surrogates.
Limitations. One limitation is the current speed of Fast Fourier Transform (FFT) operations on machine learning accelerators like GPUs. While an active area of research, current available versions of cuFFT 9 kernels wrapped in PyTorch (Paszke et al., 2019) are not yet as heavily optimized 10 , especially for the gradient pass. In contrast to FNO layers, which operate on real-valued signals, Clifford Fourier layers use complex-valued FFT operations where the backward pass is approximately twice as slow. For similar parameter counts, inference times of FNO and CFNO networks are similar. Similar to Grassucci et al. (2021) who investigated the speed of geometric convolution layers, we found that Clifford convolutions are more parameter efficient since they share parameters among filters, with the downside that the net number of operations is larger, resulting in increased training times by a factor of about 2. Finally, from a PDE point of view, the presented approaches to obtain PDE surrogates are limited since the neural networks have to be retrained for different equation parameters or e.g. different ∆t.
Future work. Besides modeling of PDEs, weather, and fluid dynamics, we see potential applications of Clifford layers for e.g. MRI or radar data, and for neural implicit representations (Xie et al., 2022;Rella et al., 2022). Extensions towards graph networks and attention based models will be useful to explore. Furthermore, custom multivector GPU kernels can overcome many of the speed issues as the compute density of Clifford operations is much higher which is better for hardware accelerators (Hoffmann et al., 2020). The use of a just-in-time compiled language with better array abstractions like Julia (Bezanson et al., 2017) could significantly simplify the interface. Finally, combining the ideas of multivector modeling together with various physics-informed neural network approaches (Raissi et al., 2019;Lutter et al., 2018;Gupta et al., 2019;Cranmer et al., 2020;Zubov et al., 2021) is an attractive next step.
REPRODUCIBILITY AND ETHICAL STATEMENT
Reproducibility statement. We have included error bars, and ablation studies wherever we found it necessary and appropriate. We have described our architectures in Section 4 and provided further implementation details in Appendix Section C. We have further include pseudocode for the newly proposed layers in Appendix Section B.6. We open-sourced our PyTorch implementation at https://microsoft.github.io/cliffordlayers/ for others to use. We aim to develop this codebase further in the future.
Ethical statement. Neural PDE surrogates will play an important role in modeling many natural phenomena, and thus developing them further might enable us to achieve shortcuts or alternatives for computationally expensive simulations. For example, if used as such, PDE surrogates will potentially help to advance different fields of research, especially in the natural sciences. Examples related to this paper are fluid dynamics or weather modeling. Therefore, PDE surrogates might potentially be directly or indirectly related to reducing the carbon footprint. On the downside, relying on simulations always requires rigorous cross-checks and monitoring, especially when we "learn to simulate".

A MATHEMATICAL BACKGROUND

This appendix supports Section 2 of the main paper. We give a more detailed explanation of real Clifford algebras and have a closer look at Cl 2,0 (R), Cl 0,2 (R), and Cl 3,0 (R). For a detailed introduction into Clifford algebras we recommend Suter (2003; 2012); Dorst et al. (2010); Renaud (2020).

A.1 CLIFFORD ALGEBRAS

Vector spaces and algebras over a field. A vector space over a field F is a set V together with two binary operations that satisfy the axioms for vector addition and scalar multiplication. The axioms of addition ensure that if two elements of V get added together, we end up with another element of V . The elements of F are called scalars. Examples of a field F are the real numbers R and the complex numbers C. Although it is common practice to refer to the elements of a general vector space V as vectors, to avoid confusion we will reserve the usage of this term for the more specific case of elements of R n . As we will see below, general vector spaces can consist of more complicated, higher-order objects than scalars, vectors, or matrices.
An algebra over a field consists of a vector space V over a field F together with an additional bilinear law of composition of elements of the vector space, V × V → V , that is, if a and b are any two elements of V , then ab : V × V → V is an element of V , satisfying a pair of distribution laws: a(λ 1 b + λ 2 c) = λ 1 ab + λ 2 ac and (λ 1 a + λ 2 b)c = λ 1 ac + λ 2 bc for λ 1 , λ 2 ∈ F and a, b, c ∈ V . Note that general vector spaces don't have bilinear operations defined on their elements.
Clifford algebras over R. In this manuscript we will focus on Clifford algebras over R. For a more general exposition on Clifford algebras over different fields the reader is referred to Lounesto (1986).
A real Clifford algebra is generated by the n-dimensional vector space R n through a set of relations that hold for the basis elements of the vector space R n . Let us denote the basis elements of R n with e 1 , ..., e n , and without loss of generality choose these basis elements to be mutually orthonormal.
Taking two nonnegative integers p and q, such that p + q = n, then a real Clifford algebra Cl p,q (R) with the "signature" (p, q), is generated through the following relations that define how the bilinear product of the algebra operates on the basis elements of R n :
$$e_i^2 = +1 \ \text{for} \ 1 \leq i \leq p , \quad (14) \qquad e_j^2 = -1 \ \text{for} \ p < j \leq p + q , \quad (15) \qquad e_i e_j = -e_j e_i \ \text{for} \ i \neq j . \quad (16)$$
Through these relations we can generate a basis for the vector space of the Clifford algebra, which we will denote with G. Equations 14 and 15 show that the product between two vectors yields a scalar. According to the aforementioned definition of an algebra over a field, a Clifford algebra with a vector space G is equipped with a bilinear product G × G → G, that combines two elements from the vector space G and yields another element of the same space G. Therefore, both scalars and vectors must be elements of the vector space G. Equation 16 shows that besides scalar and vector elements, higher order elements consisting of a combination of two basis elements, such as e i e j and e j e i , are also part of the vector space G. Finally, by combining Equations 14, 15, 16 we can create even higher order elements such as e i e j e k for i = j = k, or e 1 e 2 ...e p+q , which all must be part of the vector space G.
In order to determine what the basis elements are that span the vector space G of Cl p,q (R), we note that elements e σ(1) e σ(2) ...e σ(k) and e 1 e 2 ...e k are related through a simple scalar multiplicative factor of plus or minus one, depending on the sign of the permutation σ. Therefore, it suffices to consider the unordered combinations of basis elements of R n : the basis of the vector space G is given by {1, e 1 , e 2 , ..., e p+q , e 1 e 2 , ..., e p+q−1 e p+q , ..., e 1 e 2 ...e p+q }.
In summary, we have introduced two different vector spaces. First, the vector space R n which generates the Clifford algebra, and second the vector space G, which is the vector space spanned by the basis elements of the Clifford algebra Cl p,q (R). Convention is to denote the vector space of a real Clifford algebra with a superscript n of the dimension of the generating vector space, yielding G n for a generating vector space R n . Note that the dimension of the vector space G n is 2 n = 2 p+q .
Exemplary low-dimensional Clifford algebras are: (i) Cl 0,0 (R) which is a one-dimensional algebra that is spanned by the vector {1} and is therefore isomorphic to R, the field of real numbers; (ii) Cl 0,1 (R) which is a two-dimensional algebra with vector space G 1 spanned by {1, e 1 } where the basis vector e 1 squares to −1, and is therefore isomorphic to C, the field of complex numbers; (iii) Cl 0,2 (R) which is a 4-dimensional algebra with vector space G 2 spanned by {1, e 1 , e 2 , e 1 e 2 }, where e 1 , e 2 square to −1 and anti-commute. Thus, Cl 0,2 (R) is isomorphic to the quaternions H.
Definition 1: Grade of Clifford algebra element
The grade of a Clifford algebra basis element is the dimension of the subspace it represents.
For example, the basis elements {1, e 1 , e 2 , e 1 e 2 } of the Clifford algebras Cl 0,2 (R) and Cl 2,0 (R) have the grades {0, 1, 1, 2}. Using the concept of grades, we can divide the vector spaces of Clifford algebras into linear subspaces made up of elements of each grade. The grade subspace of smallest dimension is M 0 , the subspace of all scalars (elements with 0 basis vectors). Elements of M 1 are called vectors, elements of M 2 are bivectors, and so on. In general, the vector space G p+q of a Clifford algebra Cl p,q can be written as the direct sum of all of these subspaces:
G p+q = M 0 ⊕ M 1 ⊕ . . . ⊕ M p+q .(17)
The elements of a Clifford algebra are called multivectors, containing elements of subspaces, i.e. scalars, vectors, bivectors, trivectors etc. The basis element with the highest grade is called the pseudoscalar 11 , which in R 2 corresponds to the bivector e 1 e 2 , and in R 3 to the trivector e 1 e 2 e 3 . The pseudoscalar is often denoted with the symbol i p+q . From hereon, only multivectors will be denoted with boldface symbols.
Geometric product. Using Equations 14, 15, 16, we have seen how basis elements of the vector space G p+q of the Clifford algebra are formed using basis elements of the generating vector space V . We now, look at how elements of G p+q are combined, i.e. how multivectors are bilinearly operated on. The geometric product is the bilinear operation on multivectors in Clifford algebras. For arbitrary multivectors a, b, c ∈ G p+q , and scalar λ the geometric product has the following properties:
$$ab \in G^{p+q} \qquad \text{closure} , \quad (18)$$
$$(ab)c = a(bc) \qquad \text{associativity} , \quad (19)$$
$$\lambda a = a \lambda \qquad \text{commutative scalar multiplication} , \quad (20)$$
$$a(b + c) = ab + ac \qquad \text{distributive over addition} . \quad (21)$$
The geometric product is in general non-commutative, i.e. ab ≠ ba. As we describe later, the geometric product is made up of two parts: an inner product (that captures similarity) and an exterior (wedge) product (that captures difference).
Definition 2: Dual of a multivector
The dual a * of a multivector a is defined as:
a * = ai p+q ,(22)
where i p+q represents the respective pseudoscalar of the Clifford algebra.
This definition allows us to relate different multivectors to each other, which is a useful property when defining Clifford Fourier transforms. For example, for Clifford algebras in R 2 the dual of a scalar is a bivector, and for the Clifford algebra R 3 the dual of a scalar is a trivector.
A.2 EXAMPLES OF LOW-DIMENSIONAL CLIFFORD ALGEBRAS
A.2.1 CLIFFORD ALGEBRA Cl 0,1 (R)
The Clifford algebra Cl 0,1 (R) is a two-dimensional algebra with vector space G 1 spanned by {1, e 1 }, and where the basis vector e 1 squares to −1. Cl 0,1 (R) is thus algebra-isomorphic to C, the field of complex numbers. This becomes more obvious if we identify the basis element with the highest grade, i.e. e 1 , as the pseudoscalar i 1 which is the imaginary part of the complex numbers. The geometric product between two multivectors a = a 0 + a 1 e 1 and b = b 0 + b 1 e 1 is therefore also isomorphic to the product of two complex numbers:
$$ab = a_0 b_0 + a_0 b_1 e_1 + a_1 b_0 e_1 + a_1 b_1 e_1 e_1 = (a_0 b_0 - a_1 b_1) + (a_0 b_1 + a_1 b_0)\, e_1 . \quad (23)$$
A.2.2 CLIFFORD ALGEBRA Cl 2,0 (R)
The Clifford algebra Cl 2,0 (R) is a 4-dimensional algebra with vector space G 2 spanned by the basis vectors {1, e 1 , e 2 , e 1 e 2 } where e 1 , e 2 square to +1. The geometric product of two multivectors a = a 0 + a 1 e 1 + a 2 e 2 + a 12 e 1 e 2 and b = b 0 + b 1 e 1 + b 2 e 2 + b 12 e 1 e 2 is defined via: ab = a 0 b 0 + a 0 b 1 e 1 + a 0 b 2 e 2 + a 0 b 12 e 1 e 2 + a 1 b 0 e 1 + a 1 b 1 e 1 e 1 + a 1 b 2 e 1 e 2 + a 1 b 12 e 1 e 1 e 2 + a 2 b 0 e 2 + a 2 b 1 e 2 e 1 + a 2 b 2 e 2 e 2 + a 2 b 12 e 2 e 1 e 2 + a 12 b 0 e 1 e 2 + a 12 b 1 e 1 e 2 e 1 + a 12 b 2 e 1 e 2 e 2 + a 12 b 12 e 1 e 2 e 1 e 2 .
Using the relations e 1 e 1 = 1, e 2 e 2 = 1, and e i e j = −e j e i for i ≠ j, i, j ∈ {1, 2}, from which it follows that e 1 e 2 e 1 e 2 = −1, we obtain:
ab = a 0 b 0 + a 1 b 1 + a 2 b 2 − a 12 b 12 + (a 0 b 1 + a 1 b 0 − a 2 b 12 + a 12 b 2 ) e 1 + (a 0 b 2 + a 1 b 12 + a 2 b 0 − a 12 b 1 ) e 2 + (a 0 b 12 + a 1 b 2 − a 2 b 1 + a 12 b 0 ) e 1 e 2 .(25)
A vector x ∈ R 2 ⊂ G 2 is identified with x 1 e 1 + x 2 e 2 ∈ R 2 ⊂ G 2 . Clifford multiplication of two vectors x, y ∈ R 2 ⊂ G 2 yields the geometric product xy:
$$xy = (x_1 e_1 + x_2 e_2)(y_1 e_1 + y_2 e_2) = x_1 y_1 e_1^2 + x_2 y_2 e_2^2 + x_1 y_2 e_1 e_2 + x_2 y_1 e_2 e_1 = \underbrace{\langle x, y \rangle}_{\text{inner product}} + \underbrace{x \wedge y}_{\text{outer/wedge product}} , \quad (26)$$
The asymmetric quantity x ∧ y = −y ∧ x is associated with the now often mentioned bivector, which can be interpreted as an oriented plane segment.
Equation 26 can be rewritten to express the (symmetric) inner product and the (anti-symmetric) outer product in terms of the geometric product:
$$x \wedge y = \tfrac{1}{2}(xy - yx) , \quad (27)$$
$$\langle x, y \rangle = \tfrac{1}{2}(xy + yx) . \quad (28)$$
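As a quick numeric sanity check (our own, using the Cl 2,0 (R) product of Equation 25), the identities of Equations 27 and 28 can be verified for two concrete vectors:

```python
import numpy as np

def gp(a, b):
    """Cl_{2,0}(R) geometric product on multivectors [a0, a1, a2, a12] (Equation 25)."""
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return np.array([a0*b0 + a1*b1 + a2*b2 - a12*b12,
                     a0*b1 + a1*b0 - a2*b12 + a12*b2,
                     a0*b2 + a1*b12 + a2*b0 - a12*b1,
                     a0*b12 + a1*b2 - a2*b1 + a12*b0])

x, y = np.array([0., 2., 1., 0.]), np.array([0., -1., 3., 0.])   # x = 2e1 + e2, y = -e1 + 3e2
print((gp(x, y) - gp(y, x)) / 2)   # wedge product: only the e1e2 slot is nonzero (= 7)
print((gp(x, y) + gp(y, x)) / 2)   # inner product: only the scalar slot is nonzero (= 1)
```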
From the basis vectors of the vector space G 2 of the Clifford algebra Cl 2,0 (R), i.e. {1, e 1 , e 2 , e 1 e 2 }, probably the most interesting is e 1 e 2 . We therefore have a closer look the unit bivector i 2 = e 1 e 2 which is the plane spanned by e 1 and e 2 and determined by the geometric product:
$$i_2 = e_1 e_2 = \underbrace{\langle e_1, e_2 \rangle}_{=0} + \, e_1 \wedge e_2 = - e_2 \wedge e_1 = - e_2 e_1 , \quad (29)$$
where the inner product e 1 , e 2 is zero due to the orthogonality of the base vectors. The bivector i 2 if squared yields i 2 2 = −1, and thus i 2 represents a true geometric √ −1. From Equation 29, it follows that
$$e_2 = e_1 i_2 = -i_2 e_1 , \qquad e_1 = i_2 e_2 = -e_2 i_2 . \quad (30)$$
Using Definition 2, the dual of a multivector a ∈ G 2 is defined via the bivector as i 2 a. Thus, the dual of a scalar is a bivector and the dual of a vector is again a vector. The dual pairs of the base vectors are 1 ↔ e 1 e 2 and e 1 ↔ e 2 . These dual pairs allow us to write an arbitrary multivector a = a 0 + a 1 e 1 + a 2 e 2 + a 12 e 1 e 2 as
$$a = \underbrace{1\,(a_0 + a_{12} i_2)}_{\text{spinor part}} + \underbrace{e_1 (a_1 + a_2 i_2)}_{\text{vector part}} , \quad (31)$$
which can be regarded as two complex-valued parts: the spinor part, which commutes with i 2 and the vector part, which anti-commutes with i 2 .
A.2.3 CLIFFORD ALGEBRA Cl 0,2 (R)
The Clifford algebra Cl 0,2 (R) is a 4-dimensional algebra with vector space G 2 spanned by the basis vectors {1, e 1 , e 2 , e 1 e 2 } where e 1 , e 2 square to −1. The Clifford algebra Cl 0,2 (R) is algebraisomorphic to the quaternions H, which are commonly written in literature (Schwichtenberg, 2015) as a + bî + c + dk, where the (imaginary) base elementsî,, andk fulfill the relations:
$$\hat{i}^2 = \hat{j}^2 = -1 , \qquad \hat{i}\hat{j} = \hat{k} , \qquad \hat{j}\hat{i} = -\hat{k} , \qquad \hat{k}^2 = \hat{i}\hat{j}\hat{i}\hat{j} = -\hat{i}\hat{i}\hat{j}\hat{j} = -1 . \quad (32)$$
Quaternions also form a 4-dimensional algebra spanned by $\{1, \hat{i}, \hat{j}, \hat{k}\}$, where $\hat{i}, \hat{j}, \hat{k}$ all square to −1. The basis element 1 is often called the scalar part, and the basis elements $\hat{i}, \hat{j}, \hat{k}$ are called the vector part of a quaternion.
The geometric product of two multivectors a = a 0 + a 1 e 1 + a 2 e 2 + a 12 e 1 e 2 and b = b 0 + b 1 e 1 + b 2 e 2 + b 12 e 1 e 2 is defined as: ab = a 0 b 0 + a 0 b 1 e 1 + a 0 b 2 e 2 + a 0 b 12 e 1 e 2 + a 1 b 0 e 1 + a 1 b 1 e 1 e 1 + a 1 b 2 e 1 e 2 + a 1 b 12 e 1 e 1 e 2 + a 2 b 0 e 2 + a 2 b 1 e 2 e 1 + a 2 b 2 e 2 e 2 + a 2 b 12 e 2 e 1 e 2 + a 12 b 0 e 1 e 2 + a 12 b 1 e 1 e 2 e 1 + a 12 b 2 e 1 e 2 e 2 + a 12 b 12 e 1 e 2 e 1 e 2 .
Using the relations e 1 e 1 = −1, e 2 e 2 = −1, and e i e j = −e j e i for i ≠ j, i, j ∈ {1, 2}, from which it follows that e 1 e 2 e 1 e 2 = −1, we obtain:
ab = a 0 b 0 − a 1 b 1 − a 2 b 2 − a 12 b 12 + (a 0 b 1 + a 1 b 0 + a 2 b 12 − a 12 b 2 ) e 1 + (a 0 b 2 − a 1 b 12 + a 2 b 0 + a 12 b 1 ) e 2 + (a 0 b 12 + a 1 b 2 − a 2 b 1 + a 12 b 0 ) e 1 e 2 .(34)
A.2.4 CLIFFORD ALGEBRA Cl 3,0 (R)
The Clifford algebra Cl 3,0 (R) is an 8-dimensional algebra with vector space G 3 spanned by the basis vectors {1, e 1 , e 2 , e 3 , e 1 e 2 , e 1 e 3 , e 2 e 3 , e 1 e 2 e 3 }, i.e. one scalar, three vectors {e 1 , e 2 , e 3 }, three bivectors {e 1 e 2 , e 1 e 3 , e 2 e 3 }, and one trivector e 1 e 2 e 3 . The trivector is the pseudoscalar i 3 of the algebra. The geometric product of two multivectors is defined analogously to the geometric product of Cl 2,0 (R); for the associative and bilinear multiplication of multivectors, the basis elements satisfy:
$$e_i^2 = 1 \ \text{for} \ i = 1, 2, 3 , \quad (35) \qquad e_i e_j = -e_j e_i \ \text{for} \ i, j = 1, 2, 3, \ i \neq j . \quad (36)$$
Using Definition 2, the dual pairs of Cl 3,0 are:
1 ↔ e 1 e 2 e 3 = i 3 (37) e 1 ↔ e 2 e 3 (38) e 2 ↔ e 3 e 1 (39) e 3 ↔ e 1 e 2 .
The geometric product for Cl 3,0 (R) is defined analogously to the geometric product of Cl 2,0 (R) via:
ab = a 0 b 0 + a 0 b 1 e 1 + a 0 b 2 e 2 + a 0 b 3 e 3 + a 0 b 12 e 1 e 2 + a 0 b 13 e 1 e 3 + a 0 b 23 e 2 e 3 + a 0 b 123 e 1 e 2 e 3 + a 1 b 0 e 1 + a 1 b 1 e 1 e 1 + a 1 b 2 e 1 e 2 + a 1 b 3 e 1 e 3 + a 1 b 12 e 1 e 1 e 2 + a 1 b 13 e 1 e 1 e 3 + a 1 b 23 e 1 e 2 e 3 + a 1 b 123 e 1 e 1 e 2 e 3 + a 2 b 0 e 2 + a 2 b 1 e 2 e 1 + a 2 b 2 e 2 e 2 + a 2 b 3 e 2 e 3 + a 2 b 12 e 2 e 1 e 2 + a 2 b 13 e 1 e 3 e 2 + a 2 b 23 e 2 e 2 e 3 − a 2 b 123 e 2 e 2 e 1 e 3 + a 3 b 0 e 3 + a 3 b 1 e 3 e 1 + a 3 b 2 e 3 e 2 + a 3 b 3 e 3 e 3 + a 3 b 12 e 1 e 3 e 2 − a 3 b 13 e 1 e 3 e 3 − a 3 b 23 e 2 e 3 e 3 + a 3 b 123 e 1 e 2 e 3 e 3 + a 12 b 0 e 1 e 2 − a 12 b 1 e 2 e 1 e 1 + a 12 b 2 e 1 e 2 e 2 + a 12 b 3 e 1 e 2 e 3 + a 12 b 12 e 1 e 2 e 1 e 2 − a 12 b 13 e 1 e 1 e 2 e 3 + a 12 b 23 e 2 e 2 e 1 e 3 + a 12 b 123 e 1 e 2 e 1 e 2 e 3 + a 13 b 0 e 1 e 3 − a 13 b 1 e 3 e 1 e 1 + a 13 b 2 e 1 e 3 e 2 + a 13 b 3 e 1 e 3 e 3 a 13 b 12 e 1 e 1 e 3 e 2 + a 13 b 13 e 1 e 3 e 1 e 3 − a 13 b 23 e 1 e 2 e 3 e 3 + a 13 b 123 e 1 e 3 e 1 e 2 e 3 + a 23 b 0 e 2 e 3 + a 23 b 1 e 1 e 3 e 2 + a 23 b 2 e 2 e 3 e 2 + a 23 b 3 e 2 e 3 e 3 + a 23 b 12 e 2 e 3 e 1 e 2 − a 23 b 13 e 2 e 1 e 3 e 3 + a 23 b 23 e 2 e 3 e 2 e 3 + a 23 b 123 e 2 e 3 e 1 e 2 e 3 + a 123 b 0 e 1 e 2 e 3 + a 123 b 1 e 1 e 2 e 3 e 1 − a 123 b 2 e 1 e 3 e 1 e 2 + a 123 b 3 e 1 e 2 e 3 e 3 + a 123 b 12 e 1 e 2 e 2 e 1 e 2 + a 123 b 13 e 1 e 2 e 3 e 1 e 3 + a 123 b 23 e 1 e 2 e 3 e 2 e 3 + a 123 b 123 e 1 e 2 e 3 e 1 e 2 e 3 ,
where minus signs appear to do reordering of basis elements. Equation 41 simplifies to ab = a 0 b 0 + a 1 b 1 + a 2 b 2 + a 3 b 3 − a 12 b 12 − a 13 b 13 − a 23 b 23 − a 123 b 123 '
+ (a 0 b 1 + a 1 b 0 − a 2 b 12 − a 3 b 13 + a 12 b 2 + a 13 b 3 − a 23 b 123 − a 123 b 23 ) e 1 + (a 0 b 2 + a 1 b 12 + a 2 b 0 − a 3 b 23 − a 12 b 1 + a 13 b 123 + a 23 b 3 + a 123 b 13 ) e 2 + (a 0 b 3 + a 1 b 13 + a 2 b 23 + a 3 b 0 − a 12 b 123 − a 13 b 1 − a 23 b 2 − a 123 b 12 ) e 3 + (a 0 b 12 + a 1 b 2 − a 2 b 1 + a 3 b 123 + a 12 b 0 − a 13 b 23 + a 23 b 13 + a 123 b 3 ) e 1 e 2 + (a 0 b 13 + a 1 b 3 − a 2 b 123 − a 3 b 1 + a 12 b 23 + a 13 b 0 − a 23 b 12 − a 123 b 2 ) e 1 e 3 + (a 0 b 23 + a 1 b 123 + a 2 b 3 − a 3 b 2 − a 12 b 13 + a 13 b 12 + a 23 b 0 + a 123 b 1 ) e 2 e 3 + (a 0 b 123 + a 1 b 23 − a 2 b 13 + a 3 b 12 + a 12 b 3 − a 13 b 2 + a 23 b 1 + a 123 b 0 ) e 1 e 2 e 3 .(42)
A.3 THE ELECTROMAGNETIC FIELD IN 3 DIMENSIONS
Through the lense of Cl(3, 0)(R), an intriguing example of the duality of multivectors is found when writing the expression of the electromagnetic field F in terms of an electric vector field E and a magnetic vector field B (Hestenes & Sobczyk, 2012;, such that
F = E + Bi 3 .(43)
Both the electric field E and the magnetic field B are described by Maxwell's equations (Griffiths, 2005). The two fields are strongly coupled, e.g. temporal changes of electric fields induce magnetic fields and vice versa. Probably the most illustrative co-occurence of electric and magnetic fields is when describing the propagation of light. In standard vector algebra, E is a vector while B is a pseudovector, i.e. the two kinds of fields are distinguished by a difference in sign under space inversion. Equation 43 naturally decomposes the electromagnetic field into vector and bivector parts via the pseudoscalar i 3 . For example, for the base component B x e 1 of B it holds that B x e 1 i 3 = B x e 1 e 1 e 2 e 3 = B x e 2 e 3 , which is a bivector and the dual to the base component e 1 of E. Geometric algebra reveals that a pseudovector is nothing else than a bivector represented by its dual, so the magnetic field B in Equation 43 is fully represented by the complete bivector Bi 3 , rather than B alone. Consequently, the multivector representing F consists of three vectors (the electric field components) and three bivectors e 1 i 3 = e 2 e 3 , e 2 i 3 = e 3 e 1 , e 3 i 3 = e 1 e 2 (the magnetic field components multiplied by i 3 ).
B CLIFFORD NEURAL LAYERS
This appendix supports Section 3 of the main paper. Probably the most related work are (i) by Zang et al. (2022) who build geometric algebra convolution networks to process spatial and temporal data, and (ii) Spellings (2021) who build rotation-and permutationequivariant graph network architectures based on geometric algebra products of node features. Higher order information is built from available node inputs.
B.1 CLIFFORD CONVOLUTION LAYERS
We derive the implementation of translation equivariant Clifford convolution layers for multivectors in G 2 , i.e. multivectors of Clifford algebras generated by the 2-dimensional vector space R 2 . Finally, we make the extension to Clifford algebras generated by the 3-dimensional vector space R 3 .
Regular CNN layers. Regular convolutional neural network (CNN) layers take as input feature maps f : Z 2 → R cin and convolve 12 them with a set of c out filters
{w i } cout i=1 : Z 2 → R cin : f w i (x) = y∈Z 2 f (y), w i (y − x) (44) = y∈Z 2 cin j=1 f j (y)w i,j (y − x) .(45)
Equation 44 can be interpreted as inner product of the input feature maps with corresponding filters at every point y ∈ Z 2 . By applying c out filters, the output feature maps can be interpreted as c out − dimensional features vectors at every point y ∈ Z 2 . We now want to extend convolution layers such that the elementwise product of scalars f j (y)w i,j (y − x) are replaced by the geometric product of multivector inputs and multivector filters f j (y)w i,j (y − x).
Clifford CNN layers. We replace the feature maps f : Z 2 → R cin by multivector feature maps f : Z 2 → (G 2 ) cin and convolve them with a set of c out multivector filters
{w i } cout i=1 : Z 2 → (G 2 ) cin : f w i (x) = y∈Z 2 cin j=1 f j (y)w i,j (y − x) f j w i,j : G 2 ×G 2 →G 2 .(46)
B.1.1 TRANSLATION EQUIVARIANCE OF CLIFFORD CONVOLUTIONS
Theorem 1: Translation equivariance of Clifford convolutions
Let f : Z 2 → (G 2 ) cin be a multivector feature map and let w : Z 2 → (G 2 ) cin be a multivector kernel, then for Cl(2, 0)
(R) [[L t f ] w] (x) = [L t [f w]] (x). Proof. [[Ltf ] w] (x) = y∈Z 2 c in j=1 f (y − t)w(y − x) = y∈Z 2 c in j=1 f0(y − t)w0(y − x) + f1(y − t)w1(y − x) + f2(y − t)w2(y − x) − f12(y − t)w12(y − x) + Å f0(y − t)w1(y − x) + f1(y − t)w0(y − x) − f2(y − t)w12(y − x) + f12(y − t)w2(y − x) ã e1 + Å f0(y − t)w2(y − x) + f1(y − t)w12(y − x) + f2(y − t)w0(y − x) − f12(y − t)w1(y − x) ã e2 + Å f0(y − t)w12(y − x) + f1(y − t)w2(y − x) − f2(f − t)w1(y − x) + f12(y − t)w0(y − x) ã e1e2 (using y → y − t) = y∈Z 2 c in j=1 f0(y)w0(y − (x − t)) + f1(y)w1(y − (x − t)) + f2(y)w2(y − (x − t)) − f12(y)w12(y − (x − t)) + Å f0(y)w1(y − (x − t)) + f1(y)w0(y − (x − t)) − f2(y)w12(y − (x − t)) + f12(y)w2(y − (x − t)) ã e1 + Å f0(y)w2(y − (x − t)) + f1(y)w12(y − (x − t)) + f2(y)w0(y − (x − t)) − f12(y)w1(y − (x − t)) ã e2 + Å f0(y)w12(y − (x − t)) + f1(y)w2(y − (x − t)) − f2(y)w1(y − (x − t)) + f12(y)w0(y − (x − t)) ã e1e2 = [Lt [f w]] (x) .(47)
Implementation of Cl 2,0 (R) and Cl 0,2 (R) layers. We can implement a Cl(2, 0)(R) Clifford CNN layer using Equation 25 where {b 0 , b 1 , b 2 , b 12 } → {w i,j 0 , w i,j 1 , w i,j 2 , w i,j 12 } correspond to 4 different kernels representing one 2D multivector kernel, i.e. 4 different convolution layers, and {a 0 , a 1 , a 2 , a 12 } → {f j 0 , f j 1 , f j 2 , f j 12 } correspond to the scalar, vector and bivector parts of the input multivector field. The channels of the different layers represent different stacks of scalars, vectors, and bivectors. All kernels have the same number of input and output channels (number of input and output multivectors), and thus the channels mixing occurs for the different terms of Equations 25, 42 individually. Lastly, usually not all parts of the multivectors are present in the input vector fields. This can easily be accounted for by just omitting the respective parts of Equations 25, 42. A similar reasoning applies to the output vector fields. For Cl(0, 2)(R), the signs within the geometric product change slightly.
B.1.2 ROTATIONAL CLIFFORD CNN LAYERS
Here we introduce an alternative parameterization to the Clifford CNN layer introduced in Equation 7 by using the isomorphism of the Clifford algebra Cl 0,2 (R) to quaternions 13 . We take advantage of the fact that a quaternion rotation can be realized by a matrix multiplication (Jia, 2008;Kuipers, 1999;Schwichtenberg, 2015). Using the isomorphism, we can represent the feature maps f j and filters w i,j as quaternions:
f j = f j 0 +f j 1î +f j 2 +f j 3k and w i,j = w i,j 0 +w i,j 1î +w i,j 2 +w i,j 3k
14 . Leveraging this quaternion representation, we can devise an alternative parameterization of the product between the feature map f j and w i,j . To be more precise, we introduce a composite operation that results in a scalar quantity and a quaternion rotation, where the latter acts on the vector part of the quaternion f j and only produces nonzero expansion coefficients for the vector part of the quaternion output. A quaternion rotation w i,j f j (w i,j ) −1 acts on the vector part (î,,k) of f j , and can be algebraically manipulated into a vector-matrix operation R i,j f j , where R i,j : H → H is built up from the elements of w i,j (Kuipers, 1999). In other words, one can transform the vector part (î,,k) of f j ∈ H via a rotation matrix R i,j that is built from the scalar and vector part (1,î,,k) of w i,j ∈ H. Altogether, a rotational multivector filter {w i rot } cout i=1 : Z 2 → (G 2 ) cin acts on the feature map f j through a rotational transformation R i,j (w i,j rot,0 , w i,j rot,1 , w i,j rot,2 , w i,j rot,12 ) acting on vector and bivector parts of the multivector feature map f : Z 2 → (G 2 ) cin , and an additional scalar response of the multivector filters: 12 , which is the scalar output of Equation 34. The rotational matrix R i,j (y − x) in written out form reads:
f w i rot (x) = y∈Z 2 cin j=1 f j (y)w i,j rot (y − x) = y∈Z 2 cin j=1 f j (y)w i,j rot (y − x)) 0 scalar output +R i,j (y − x) · Ñ f j 1 (y) f j 2 (y) f j 12 (y) é ,(48)where f j (y)w i,j rot (y − x)) 0 = f j 0 w i,j rot,0 − f j 1 w i,j rot,1 − f j 2 w i,j rot,2 − f j 12 w i,j rot,R i,j = Ñ 1 − 2 (ŵ i,j rot,2 ) 2 + (ŵ i,j rot,12 ) 2 2 ŵ i,j rot,1ŵ i,j rot,2 −ŵ i,j rot,0ŵ i,j rot,12 1 − 2 (ŵ i,j rot,1 ) 2 + (ŵ i,j rot,2 ) 2 é ,(49)whereŵ i,j rot (y − x) =ŵ i,j rot,0 (y − x) +ŵ i,j rot,1 (y − x)e 1 +ŵ i,j rot,2 (y − x)e 2 +ŵ i,j rot,12 (y − x)e 12
is the normalized filter with ŵ i,j rot = 1. The dependency (y − x) is omitted inside the rotation matrix R i,j for clarity.
B.1.3 3D CLIFFORD CONVOLUTION LAYERS
Implementation of Cl 3,0 (R) layers. Analogously to the 2-dimensional case, we can implement a 3D Clifford CNN layer using Equation 42, where {b 0 , b 1 , b 2 , b 12 , b 13 , b 23 , b 123 } correspond to 8 different kernels representing one 3D multivector kernel, i.e. 8 different convolution layers, and {a 0 , a 1 , a 2 , a 12 , a 13 , a 23 , a 123 } correspond to the scalar, vector, bivector, and trivector parts of the input multivector field. Convolution layers for different 3-dimensional Clifford algebras change the signs in the geometric product.
B.2 CLIFFORD NORMALIZATION
Different normalization schemes have been proposed to stabilize and accelerate training deep neural networks (Ioffe & Szegedy, 2015;Ba et al., 2016;Wu & He, 2018;Ulyanov et al., 2017). Their standard formulation applies only to real values. Simply translating and scaling multivectors such that their mean is 0 and their variance is 1 is insufficient because it does not ensure equal variance across all components. Trabelsi et al. (2017) extended the batch normalization formulation to apply to complex values. We build on the same principles to first propose an appropriate batch normalization scheme for multivectors, similar to the work of for quaternions. For 2D multivectors of the form a = a 0 + a 1 e 1 + a 2 e 2 + a 12 e 1 e 2 , we can formulate the problem of batch normalization as that of whitening 4D vectors:
Batch normalization
a = (V) − 1 2 (a − E[a])(50)
where the covariance matrix V is
V = Ö V a0a0 V a0a1 V a0a2 V a0a12 V a1a0 V a1a1 V a1a2 V a1a12 V a2a0 V a2a1 V a2a2 V a2a12 V a12a0 V a12a1 V a12a2 V a12a12 è .(51)
The shift parameter β is a multivector with 4 learnable components and the scaling parameter γ is 4 × 4 positive matrix. The multivector batch normalization is defined as:
BN (a) = γa + β(w i,j (T x) = Tw i,j (x) ,
for 0 ≤ j < c in . We first define an orthogonal transformation on a multivector by,
Tf = ±uf u † , u † u = 1(53)
where u and f are multivectors which are multiplied using the geometric product. The minus sign is picked by reflections but not by rotations, i.e. it depends on the parity of the transformation. This construction is called a "versor" product. The construction can be found in e.g. Suter (2003) for vectors and its extension to arbitrary multivectors. The above construction makes it immediately clear that T(f g) = (Tf )(Tg). When we write T x, we mean an orthogonal transformation of an Euclidean vector (which can in principle also be defined using versors). To show equivariance, we wish to prove for multivectors f : Z 2 → (G) cin and a set of c out multivector filters
{w i } cout i=1 : Z 2 → (G) cin that: f (T x) = Tf (x) ,(54)
and
w i (T x) = Tw i (x) ,(55)
Equations 54, 55 yield:
⇒ f w i (T x) = T f w i (x) .(56)
That is: if the input multivector field transforms as a multivector, and the kernel satisfies the stated equivariance constraint, then the output multivector field also transforms properly as a multivector. Note that T might act differently on the various components (scalars, vectors, pseudoscalars, pseudovectors) under rotations and/or reflections. Now,
f w i (T x) = y∈Z 2 cin j=1 f j (y)w i,j (y − T x)) = y∈Z 2 cin j=1 f j (y)w i,j (T (T −1 y − x))) = T y ∈Z 2 cin j=1 f j (T y )w i,j (T (y − x))), y = T −1 y = y ∈Z 2 cin j=1 f j (T y )w i,j (T (y − x))) = y ∈Z 2 cin j=1 Tf j (y )Tw i,j (y − x)) = y ∈Z 2 cin j=1 T(f j (y )w i,j (y − x))) = T y ∈Z 2 cin j=1 (f j (y )w i,j (y − x))) = T [f w i ] (x)(57)
where in the fourth line we transform variables y → y , in the fifth line we use the invariance of the summation "measure" under T , in the sixth line we use the transformation property of f and equivariance for w i , in the seventh line we use the property of multivectors, and in the eighth line we use linearity of T.
B.5 CLIFFORD FOURIER LAYERS
We derive the implementation of Clifford Fourier layers for multivectors in G 2 and G 3 , i.e. multivectors of Clifford algebras generated by the 2-dimensional vector space R 2 and the 3-dimensional vector space R 3 .
Classical Fourier transform. In arbitrary dimension n, the Fourier transform f̂(ξ) = F{f}(ξ) for a continuous n-dimensional complex-valued signal f(x) = f(x1, ..., xn) : R^n → C is defined as:
f̂(ξ) = F{f}(ξ) = 1/(2π)^{n/2} ∫_{R^n} f(x) e^{−2πi ⟨x,ξ⟩} dx ,   ∀ξ ∈ R^n ,    (58)
provided that the integral exists, where x and ξ are n-dimensional vectors and ⟨x, ξ⟩ is the contraction of x and ξ. Usually, ⟨x, ξ⟩ is the inner product, and ξ is an element of the dual vector space R^n. The inversion theorem states the back-transform from the frequency domain into the spatial domain:
f(x) = F^{−1}{F{f}}(x) = 1/(2π)^{n/2} ∫_{R^n} f̂(ξ) e^{2πi ⟨x,ξ⟩} dξ ,   ∀x ∈ R^n .    (59)
We can rewrite the Fourier transform of Equation 58 in coordinates:
f̂(ξ1, ..., ξn) = F{f}(ξ1, ..., ξn) = 1/(2π)^{n/2} ∫_{R^n} f(x1, ..., xn) e^{−2πi(x1 ξ1 + ... + xn ξn)} dx1 ... dxn .    (60)
Discrete/Fast Fourier transform. The discrete counterpart of Equation 58 transforms an n-dimensional complex signal f(x) = f(x1, ..., xn) : R^n → C at M1 × ... × Mn grid points into its complex Fourier modes via:
f̂(ξ1, ..., ξn) = F{f}(ξ1, ..., ξn) = Σ_{m1=0}^{M1} ... Σ_{mn=0}^{Mn} f(m1, ..., mn) · e^{−2πi (m1 ξ1 / M1 + ... + mn ξn / Mn)} ,    (61)
where (ξ1, ..., ξn) ∈ Z_{M1} × ... × Z_{Mn}. Fast Fourier transforms (FFTs) (Cooley & Tukey, 1965; Van Loan, 1992) immensely accelerate the computation of the transformations of Equation 61 by factorizing the discrete Fourier transform matrix into a product of sparse (mostly zero) factors.
B.5.1 2D CLIFFORD FOURIER TRANSFORM
Analogous to Equation 58, for Cl(2, 0)(R) the Clifford Fourier transform (Ebling & Scheuermann, 2005;Hitzer, 2012) and the respective inverse transform for multivector valued functions f (x) : R 2 → G 2 and vectors x, ξ ∈ R 2 are defined as:
f̂(ξ) = F{f}(ξ) = 1/(2π) ∫_{R²} f(x) e^{−2π i2 ⟨x,ξ⟩} dx ,   ∀ξ ∈ R² ,    (62)
f(x) = F^{−1}{F{f}}(x) = 1/(2π) ∫_{R²} f̂(ξ) e^{2π i2 ⟨x,ξ⟩} dξ ,   ∀x ∈ R² ,    (63)
provided that the integrals exist. The differences to Equations 58 and 59 are that f (x) andf (ξ) represent multivector fields in the spatial and the frequency domain, respectively, and that the pseudoscalar i 2 = e 1 e 2 is used in the exponent. Inserting the definition of multivector fields, we can rewrite Equation 62 as:
F{f}(ξ) = 1/(2π) ∫_{R²} f(x) e^{−2π i2 ⟨x,ξ⟩} dx
= 1/(2π) ∫_{R²} [ 1 (f0(x) + f12(x) i2)   (spinor part)   +   e1 (f1(x) + f2(x) i2)   (vector part) ] e^{−2π i2 ⟨x,ξ⟩} dx
= 1/(2π) ∫_{R²} 1 (f0(x) + f12(x) i2) e^{−2π i2 ⟨x,ξ⟩} dx + 1/(2π) ∫_{R²} e1 (f1(x) + f2(x) i2) e^{−2π i2 ⟨x,ξ⟩} dx
= 1 F{f0(x) + f12(x) i2}(ξ) + e1 F{f1(x) + f2(x) i2}(ξ) .    (64)
We obtain a Clifford Fourier transform by applying two standard Fourier transforms for the dual pairs f0 = f0(x) + f12(x) i2 and f1 = f1(x) + f2(x) i2, which both can be treated as a complex-valued signal f0, f1 : R² → C. Consequently, f(x) can be understood as an element of C². The 2D Clifford Fourier transform is the linear combination of two classical Fourier transforms. The discretized versions of the spinor/vector parts (f_{s/v}) read analogously to Equation 61:
f̂_{s/v}(ξ1, ξ2) = F{f_{s/v}}(ξ1, ξ2) = Σ_{m1=0}^{M1} Σ_{m2=0}^{M2} f_{s/v}(m1, m2) · e^{−2π i2 (m1 ξ1 / M1 + m2 ξ2 / M2)} ,    (65)
where again (ξ1, ξ2) ∈ Z_{M1} × Z_{M2}. Similar to Fourier Neural Operators (FNOs) where weight tensors are applied point-wise in the Fourier space, we apply multivector weight tensors W ∈ (G2)^{c_in × c_out × (ξ1^max × ξ2^max)} point-wise. Fourier modes above cut-off frequencies (ξ1^max, ξ2^max) are set to zero. In doing so, we modify the Clifford Fourier modes
f̂(ξ) = F{f}(ξ) = f̂0(ξ) + f̂1(ξ) e1 + f̂2(ξ) e2 + f̂12(ξ) e12
via the geometric product. The Clifford Fourier modes follow naturally when combining the spinor and vector parts of Equation 64. Analogously to FNOs, higher order modes are cut off. Finally, the residual connection used in FNO layers is replaced by a multivector weight matrix realized as a Clifford convolution, ideally a Cl_{2,0}(R) convolution layer. A schematic sketch of a Clifford Fourier layer is shown in Figure 5b in the main paper. For Cl(0, 2)(R), the vector part changes to
e1 ( f1(x) − f2(x) i2 ) .
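The two complex FFTs of Equation 64 can be written down directly. The following is a minimal PyTorch sketch, not the paper's released code; it assumes multivector fields are stored with a trailing dimension of size 4 ordered as (f0, f1, f2, f12), and the function names are placeholders.

import torch

def clifford_fft2(f):
    # f: (..., H, W, 4) real tensor with components (f0, f1, f2, f12)
    spinor = torch.complex(f[..., 0], f[..., 3])   # dual pair f0 + f12 i2
    vector = torch.complex(f[..., 1], f[..., 2])   # dual pair f1 + f2 i2
    return (torch.fft.fft2(spinor, dim=(-2, -1)),
            torch.fft.fft2(vector, dim=(-2, -1)))

def clifford_ifft2(spinor_hat, vector_hat):
    spinor = torch.fft.ifft2(spinor_hat, dim=(-2, -1))
    vector = torch.fft.ifft2(vector_hat, dim=(-2, -1))
    # reassemble the multivector components (f0, f1, f2, f12)
    return torch.stack([spinor.real, vector.real, vector.imag, spinor.imag], dim=-1)

# round trip check: x = torch.randn(8, 64, 64, 4)
# torch.allclose(clifford_ifft2(*clifford_fft2(x)), x, atol=1e-5)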
B.5.2 2D CLIFFORD CONVOLUTION THEOREM
In contrast to Ebling & Scheuermann (2005), we prove the 2D Clifford convolution theorem for multivector valued filters applied from the right, such that filter operations are consistent with Clifford convolution layers. We first need to show that the Clifford kernel commutes with the spinor and anti-commutes with the vector part of multivectors. We can write the product a e^{i2 s} for every scalar s ∈ R and multivector a ∈ G2 as
a e^{i2 s} = a ( cos(s) + i2 sin(s) ) .
For the basis of the spinor part, we obtain 1 i2 = i2 1, and for the basis of the vector part e1 i2 = e1 e1 e2 = −e1 e2 e1 = −i2 e1. Thus, the Fourier kernel e^{−2π i2 ⟨x,ξ⟩} commutes with the spinor part, and anti-commutes with the vector part of a, both for Cl(2, 0)(R) and Cl(0, 2)(R). We therefore prove the convolution theorem for the commuting spinor and the anti-commuting vector part of a.
Theorem 2: 2D Clifford convolution theorem.
Let the field f : R 2 → G 2 be multivector valued, the filter k s : R 2 → G 2 be spinor valued, and the filter k v : R 2 → G 2 be vector valued, and let F{f }, F{k s }, F{k v } exist, then
F{f ⋆ k_s}(ξ) = F{f}(ξ) · F†{k_s}(ξ) ,
F{f ⋆ k_v}(ξ) = F{f}(ξ) · F{k_v}(ξ) ,
where F†{k_s}(ξ) = F{k_s}(−ξ) and F†{k_v}(ξ) = F{k_v}(−ξ).
Proof.
F{f ⋆ k_s}(ξ) = 1/(2π)² ∫_{R²} ( ∫_{R²} f(y) k_s(y − x) dy ) e^{−2π i2 ⟨x,ξ⟩} dx
= 1/(2π)² ∫_{R²} f(y) ( ∫_{R²} k_s(y − x) e^{−2π i2 ⟨x,ξ⟩} dx ) dy
= 1/(2π)² ∫_{R²} f(y) ( ∫_{R²} k_s(x) e^{−2π i2 ⟨y−x,ξ⟩} dx ) dy   (inner integral: F†{k_s}(ξ) e^{−2π i2 ⟨y,ξ⟩} = e^{−2π i2 ⟨y,ξ⟩} F†{k_s}(ξ))
= ( 1/(2π) ∫_{R²} f(y) e^{−2π i2 ⟨y,ξ⟩} dy ) F†{k_s}(ξ)
= F{f}(ξ) · F†{k_s}(ξ) .    (68)

F{f ⋆ k_v}(ξ) = 1/(2π)² ∫_{R²} ( ∫_{R²} f(y) k_v(y − x) dy ) e^{−2π i2 ⟨x,ξ⟩} dx
= 1/(2π)² ∫_{R²} f(y) ( ∫_{R²} k_v(y − x) e^{−2π i2 ⟨x,ξ⟩} dx ) dy
= 1/(2π)² ∫_{R²} f(y) ( ∫_{R²} k_v(x) e^{−2π i2 ⟨y−x,ξ⟩} dx ) dy   (inner integral: F†{k_v}(ξ) e^{2π i2 ⟨y,ξ⟩} = e^{−2π i2 ⟨y,ξ⟩} F{k_v}(ξ), where −ξ → ξ)
= ( 1/(2π) ∫_{R²} f(y) e^{−2π i2 ⟨y,ξ⟩} dy ) F{k_v}(ξ)
= F{f}(ξ) · F{k_v}(ξ) .    (69)
B.5.3 3D CLIFFORD FOURIER TRANSFORM
For Cl(3, 0)(R), analogous to Equation 58, the Clifford Fourier transform (Ebling & Scheuermann, 2005) and the respective inverse transform for multivector valued functions f : R 3 → G 3 and vectors x, ξ ∈ R 3 are defined as:
f̂(ξ) = F{f}(ξ) = 1/(2π)^{3/2} ∫_{R³} f(x) e^{−2π i3 ⟨x,ξ⟩} dx ,   ∀ξ ∈ R³ ,    (70)
f(x) = F^{−1}{F{f}}(x) = 1/(2π)^{3/2} ∫_{R³} f̂(ξ) e^{2π i3 ⟨x,ξ⟩} dξ ,   ∀x ∈ R³ ,    (71)
provided that the integrals exist. A multivector valued function f : R³ → G3,
f = f0 + f1 e1 + f2 e2 + f3 e3 + f12 e12 + f13 e13 + f23 e23 + f123 e123 ,    (72)
can be expressed via the pseudoscalar i3 = e1 e2 e3 as:
f = (f 0 + f 123 i 3 )1 + (f 1 + f 23 i 3 )e 1 + (f 2 + f 31 i 3 )e 2 + (f 3 + f 12 i 3 )e 3 ,(73)
We obtain a 3-dimensional Clifford Fourier transform by applying four standard Fourier transforms for the four dual pairs f0 = f0(x) + f123(x) i3, f1 = f1(x) + f23(x) i3, f2 = f2(x) + f31(x) i3, and f3 = f3(x) + f12(x) i3, which all can be treated as a complex-valued signal f0, f1, f2, f3 : R³ → C.
Consequently, f (x) can be understood as an element of C 4 . The 3D Clifford Fourier transform is the linear combination of four classical Fourier transforms:
F{f}(ξ) = 1/(2π)^{3/2} ∫_{R³} f(x) e^{−2π i3 ⟨x,ξ⟩} dx
= 1/(2π)^{3/2} ∫_{R³} [ 1 (f0(x) + f123(x) i3) + e1 (f1(x) + f23(x) i3) + e2 (f2(x) + f31(x) i3) + e3 (f3(x) + f12(x) i3) ] e^{−2π i3 ⟨x,ξ⟩} dx
= 1/(2π)^{3/2} ∫_{R³} 1 (f0(x) + f123(x) i3) e^{−2π i3 ⟨x,ξ⟩} dx
+ 1/(2π)^{3/2} ∫_{R³} e1 (f1(x) + f23(x) i3) e^{−2π i3 ⟨x,ξ⟩} dx
+ 1/(2π)^{3/2} ∫_{R³} e2 (f2(x) + f31(x) i3) e^{−2π i3 ⟨x,ξ⟩} dx
+ 1/(2π)^{3/2} ∫_{R³} e3 (f3(x) + f12(x) i3) e^{−2π i3 ⟨x,ξ⟩} dx
= 1 F{f0(x) + f123(x) i3}(ξ) + e1 F{f1(x) + f23(x) i3}(ξ) + e2 F{f2(x) + f31(x) i3}(ξ) + e3 F{f3(x) + f12(x) i3}(ξ) .    (74)
Analogous to the 2-dimensional Clifford Fourier transform, we apply multivector weight tensors W ∈ (G3)^{c_in × c_out × (ξ1^max × ξ2^max × ξ3^max)} point-wise. Fourier modes above cut-off frequencies (ξ1^max, ξ2^max, ξ3^max) are set to zero. In doing so, we modify the Clifford Fourier modes
f̂(ξ) = F{f}(ξ) = f̂0(ξ) + f̂1(ξ) e1 + f̂2(ξ) e2 + f̂3(ξ) e3 + f̂12(ξ) e12 + f̂31(ξ) e31 + f̂23(ξ) e23 + f̂123(ξ) e123    (75)
via the geometric product. The Clifford Fourier modes follow naturally when combining the four dual pairs of Equation 74. Finally, the residual connection used in FNO layers is replaced by a multivector weight matrix realized as a Clifford convolution, ideally a Cl_{3,0}(R) convolution layer. For other 3-dimensional Clifford algebras, the signs of the dual pairs in Equation 73 change accordingly.
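Analogously to the 2D case, the 3D transform of Equation 74 reduces to four complex FFTs over the dual pairs of Equation 73. The sketch below is an illustration only and not the authors' implementation; the component ordering (1, e1, e2, e3, e12, e13, e23, e123) of the trailing dimension is an assumption, and the sign of the e13 component accounts for f31 = −f13.

import torch

def clifford_fft3(f):
    # f: (..., X, Y, Z, 8) real tensor
    pairs = [
        torch.complex(f[..., 0], f[..., 7]),    # f0 + f123 i3
        torch.complex(f[..., 1], f[..., 6]),    # f1 + f23 i3
        torch.complex(f[..., 2], -f[..., 5]),   # f2 + f31 i3 = f2 - f13 i3
        torch.complex(f[..., 3], f[..., 4]),    # f3 + f12 i3
    ]
    return [torch.fft.fftn(p, dim=(-3, -2, -1)) for p in pairs]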
B.5.4 3D CLIFFORD CONVOLUTION THEOREM
This theorem is adapted from Ebling & Scheuermann (2005). First, we again check whether the Clifford kernel commutes with the different parts of multivectors. We can write the product a e^{i3 s} for every scalar s ∈ R and multivector a ∈ G3 as
a e^{i3 s} = a ( cos(s) + i3 sin(s) ) .
First, we check again if the different basis vectors of the Fourier transforms of Equation 74 commute with the pseudoscalar i 3 :
1 i3 = i3 1 ,
e1 i3 = e1 e1 e2 e3 = −e1 e2 e1 e3 = e1 e2 e3 e1 = i3 e1 ,
e2 i3 = e2 e1 e2 e3 = −e1 e2 e2 e3 = e1 e2 e3 e2 = i3 e2 ,
e3 i3 = e3 e1 e2 e3 = −e1 e3 e2 e3 = e1 e2 e3 e3 = i3 e3 .
In contrast to the 2-dimensional Clifford Fourier transform, now all four parts of the multivector of Equation 73 commute with i 3 . This holds for all 3-dimensional Clifford algebras.
Theorem 3: 3D Clifford convolution theorem.
Let the field f : R 3 → G 3 be multivector valued, the filter k a : R 3 → G 3 be multivector valued, and let F{f }, F{k a } exist, then
F{f ⋆ k_a}(ξ) = F{f}(ξ) · F†{k_a}(ξ) ,   where F†{k_a}(ξ) = F{k_a}(−ξ).
Proof.
F{f ⋆ k_a}(ξ) = 1/(2π)³ ∫_{R³} ( ∫_{R³} f(y) k_a(y − x) dy ) e^{−2π i3 ⟨x,ξ⟩} dx
= 1/(2π)³ ∫_{R³} f(y) ( ∫_{R³} k_a(y − x) e^{−2π i3 ⟨x,ξ⟩} dx ) dy
= 1/(2π)³ ∫_{R³} f(y) ( ∫_{R³} k_a(x) e^{−2π i3 ⟨y−x,ξ⟩} dx ) dy   (inner integral: F†{k_a}(ξ) e^{−2π i3 ⟨y,ξ⟩} = e^{−2π i3 ⟨y,ξ⟩} F†{k_a}(ξ))
= ( 1/(2π)^{3/2} ∫_{R³} f(y) e^{−2π i3 ⟨y,ξ⟩} dy ) F†{k_a}(ξ)
= F{f}(ξ) · F†{k_a}(ξ) .    (78)
B.5.5 IMPLEMENTATION OF CLIFFORD FOURIER LAYERS
We implement a 2D Clifford Fourier layer by applying two standard Fourier transforms on the dual pairs of Equation 11. These dual pairs can be treated as complex valued inputs. Similarly, we implement a 3D Clifford Fourier layer by applying four standard Fourier transforms on the dual pairs of e.g. Cl_{3,0} (Equation 37 - Equation 40). Since the Clifford convolution theorems hold for the vector and the spinor parts and for the four dual pairs for Cl_{2,0} and Cl_{3,0}, respectively, we multiply the modes in the Fourier space using the geometric product. Finally, we apply an inverse Fourier transformation and reassemble the multivectors in the spatial domain.
B.6 PSEUDOCODE
Algorithm 1 sketches the implementation of a Clifford convolution, Algorithm 2 of a rotational Clifford convolution, and Algorithm 3 of a Clifford Fourier layer.
1: function CLIFFORDKERNEL2D(W)
2:     kernel ← [ W[0]   W[1]   W[2]  −W[3]
                  W[1]   W[0]  −W[3]   W[2]
                  W[2]   W[3]   W[0]  −W[1]
                  W[3]   W[2]  −W[1]   W[0] ]
3:     return kernel
4: function CLIFFORDCONV2D(W, x)
5:     kernel ← CLIFFORDKERNEL2D(W)
6:     input ← VIEW AS REALVECTOR(x)
7:     output ← CONV2D(kernel, input)
8:     return VIEW AS MULTIVECTOR(output)
Algorithm 1: Pseudocode for 2D Clifford convolution using Cl_{2,0}.

Algorithm 2: Pseudocode for 2D rotational Clifford convolution using Cl_{0,2}.

1: function CLIFFORDSPECTRALCONV2D(W, x, m1, m2)
2:     x_v, x_s ← VIEW AS DUAL PARTS(x)
3:     f(x_v) ← FFT2(x_v)    ▷ complex 2D FFT of vector part
4:     f(x_s) ← FFT2(x_s)    ▷ complex 2D FFT of scalar (spinor) part
5:     f*(x_v) ← [ f(x_v)[..., :m1, :m2]   f(x_v)[..., :m1, −m2:]   f(x_v)[..., −m1:, :m2]   f(x_v)[..., −m1:, −m2:] ]    ▷ keep modes below cut-off
6:     f*(x_s) ← [ f(x_s)[..., :m1, :m2]   f(x_s)[..., :m1, −m2:]   f(x_s)[..., −m1:, :m2]   f(x_s)[..., −m1:, −m2:] ]    ▷ keep modes below cut-off
7:     f*(x) ← f*(x_s).r + f*(x_v).r + f*(x_v).i + f*(x_s).i    ▷ multivector Fourier modes
8:     f̂*(x) ← f*(x) W    ▷ geometric product in the Fourier space
9:     x̂_v ← IFFT2(f̂*(x)[1] + f̂*(x)[2] i)    ▷ inverse 2D FFT of vector part
10:    x̂_s ← IFFT2(f̂*(x)[0] + f̂*(x)[3] i)    ▷ inverse 2D FFT of scalar part
11:    x̂ ← VIEW AS MULTIVECTOR(x̂_v, x̂_s)
12:    return x̂
13: function CLIFFORDFOURIERLAYER2D(W_f, W_c, x)
14:    y_1 ← CLIFFORDSPECTRALCONV(W_f, x, m1, m2)
15:    x_2 ← VIEW AS REALVECTOR(x)
16:    y_2 ← CLIFFORDCONV(W_c, x_2)
17:    return y_1 + y_2
Algorithm 3: Pseudocode for 2D Clifford Fourier layer using Cl_{2,0}.
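Algorithm 1 can be realized with standard convolution primitives. The following PyTorch sketch is an illustration under assumptions and not the authors' released implementation: inputs use a (batch, channel, blade, height, width) layout, W stacks the four weight blocks along its first axis, and the helper names are placeholders.

import torch
import torch.nn.functional as F

def clifford_kernel2d(W):
    # W: (4, c_out, c_in, k, k) -> (4*c_out, 4*c_in, k, k), the 4x4 block kernel
    W0, W1, W2, W3 = W[0], W[1], W[2], W[3]
    rows = [
        torch.cat([W0,  W1,  W2, -W3], dim=1),
        torch.cat([W1,  W0, -W3,  W2], dim=1),
        torch.cat([W2,  W3,  W0, -W1], dim=1),
        torch.cat([W3,  W2, -W1,  W0], dim=1),
    ]
    return torch.cat(rows, dim=0)

def clifford_conv2d(W, x, **conv_kwargs):
    # x: (B, c_in, 4, H, W) multivector input
    B, c_in, _, H, Wd = x.shape
    kernel = clifford_kernel2d(W)
    inp = x.permute(0, 2, 1, 3, 4).reshape(B, 4 * c_in, H, Wd)   # view as real vector
    out = F.conv2d(inp, kernel, **conv_kwargs)                   # one real convolution
    c_out = W.shape[1]
    return out.reshape(B, 4, c_out, out.shape[-2], out.shape[-1]).permute(0, 2, 1, 3, 4)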
C EXPERIMENTS
This appendix supports Section 4 of the main paper.
C.1 LOSS FUNCTION AND METRICS
We report the summed MSE (SMSE) loss defined as:
L_SMSE = 1/N_y Σ_{y ∈ Z² (or Z³)} Σ_{j=1}^{N_t} Σ_{i=1}^{N_fields} || u_i(y, t_j) − û_i(y, t_j) ||²_2 ,    (79)
where u is the target,û the model output, N fields comprises scalar fields as well as individual vector field components, and N y is the total number of spatial points. Equation 79 is used for training with N t = 1, and further allows us to define four metrics:
• One-step loss where N t = 1 and N fields comprises all scalar and vector components.
• Vector loss where N t = 1 and N fields comprises only vector components.
• Scalar loss where N t = 1 and N fields comprises only the scalar field.
• Rollout loss where N t = 5 and N fields comprises all scalar and vector components.
For Maxwell's equation, electric and magnetic loss are defined analogously to the vector and the scalar loss for Navier-Stokes and shallow water experiments.
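A minimal PyTorch sketch of the summed MSE of Equation 79 is given below; it assumes predictions and targets are stored as tensors of shape (batch, time, fields, H, W), where the fields axis stacks the scalar field and the individual vector components, and it is not the exact training code.

import torch

def summed_mse(pred, target):
    # sum squared errors over time, fields and space, normalize by spatial points,
    # then average over the batch
    n_space = pred.shape[-2] * pred.shape[-1]
    per_sample = (pred - target).pow(2).sum(dim=(1, 2, 3, 4)) / n_space
    return per_sample.mean()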
C.2 MODELS
We experiment with two architecture families: ResNet models (He et al., 2016) and Fourier Neural Operators (FNOs) (Li et al., 2020). All baseline models are fine-tuned for all individual experiments with respect to number of blocks, number of channels, number of modes (FNO), learning rates, normalization and initialization procedures, and activation functions. The best models are reported, and for reported Clifford results each convolution layer is substituted with a Clifford convolution, each Fourier layer with a Clifford Fourier layer, each normalization with a Clifford normalization and each non-linearity with a Clifford non-linearity. A Clifford non-linearity in this context is the application of the corresponding default non-linearity to the different multivector components.
ResNet architectures. For Navier-Stokes and shallow water experiments, we use ResNet architectures with 8 residual blocks, each consisting of two convolution layers with 3×3 kernels, shortcut connections, group normalization (Wu & He, 2018), and GeLU activation functions (Hendrycks & Gimpel, 2016). We further use two embedding and two output layers, i.e. the overall architectures could be classified as Res-20 networks. In contrast to standard residual networks for image classification, we don't use any down-projection techniques, e.g. convolution layers with strides larger than 1 or via pooling layers. In contrast, the spatial resolution stays constant throughout the network. We therefore also use the same number of hidden channels throughout the network, that is 128 channels per layer. Overall this results in roughly 2.4 million parameters. Increasing the number of residual blocks or the number of channels did not increase the performance significantly.
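As an illustration, one such residual block can be sketched in PyTorch as follows. The block structure (two 3 × 3 convolutions, group normalization, GeLU, identity shortcut) follows the description above, and the channel count of 128 matches it; the group count of 8 and the exact placement of normalization and activation are assumptions, not the authors' exact code.

import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=128, groups=8):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm1 = nn.GroupNorm(groups, channels)
        self.norm2 = nn.GroupNorm(groups, channels)
        self.act = nn.GELU()

    def forward(self, x):
        h = self.act(self.norm1(self.conv1(x)))
        h = self.norm2(self.conv2(h))
        return self.act(h + x)   # shortcut connection; spatial resolution stays constant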
Clifford ResNet architectures. For every ResNet-based experiment, we replaced the fine-tuned ResNet architectures with two Clifford counterparts: each CNN layer is replaced with a (i) Clifford CNN layer, and (ii) with a rotational Clifford CNN layer. To keep the number of weights similar, instead of 128 channels the resulting architectures have 64 multivector channels, resulting again in roughly 1.6 million floating point parameters. Additionally for both architectures, GeLU activation functions are replaced with Clifford GeLU activation functions, group normalization is replaced with Clifford group normalization. Using Clifford initialization techniques did not improve results.
Fourier Neural Operator architectures. For Navier-Stokes and shallow water experiments, we used 2-dimensional Fourier Neural Operators (FNOs) consisting of 8 FNO blocks, two embedding and two output layers. Each FNO block comprised a convolution path with a 1 × 1 kernel and an FFT path. We used 16 Fourier modes (for x and y components) for point-wise weight multiplication, and overall used 128 hidden channels. We used GeLU activation functions (Hendrycks & Gimpel, 2016). Additional shortcut connections or normalization techniques, such as batchnorm or groupnorm, did not improve performance, neither did larger numbers of hidden channels, nor more FNO blocks. Overall this resulted in roughly 140 million parameters for FNO based architectures.
For 3-dimensional Maxwell experiments, we used 3-dimensional Fourier Neural Operators (FNOs) consisting of 4 FNO blocks, two embedding and two output layers. Each FNO block comprised a 3D convolution path with a 1 × 1 kernel and an FFT path. We used 6 Fourier modes (for x, y, and z components) for point-wise weight multiplication, and overall used 96 hidden channels. Interestingly, using more layers or more Fourier modes degraded performances. Similar to the 2D experiments, we applied GeLU activation functions, and neither apply shortcut connections nor normalization techniques, such as batchnorm or groupnorms. Overall this resulted in roughly 65 million floating point parameters for FNO based architectures.
Clifford Fourier Neural Operator architectures. For every FNO-based experiment, we replaced the fine-tuned FNO architectures with respective Clifford counterparts: each FNO layer is replaced by its Clifford counterpart. To keep the number of weights similar, instead of 128 channels the resulting architectures have 48 multivector channels, resulting in roughly the same number of parameters. Additionally, GeLU activation functions are replaced with Clifford GeLU activation functions. Using Clifford initialization techniques did not improve results.
For 3-dimensional Maxwell experiments, we replaced each 3D Fourier transform layer with a 3D Clifford Fourier layer and each 3D convolution with a respective Clifford convolution. We also use 6 Fourier modes (for x, y, and z components) for point-wise weight multiplication, and overall used 32 hidden multivector channels, which results in roughly the same number of parameters (55 millions). In contrast to 2-dimensional implementations, Clifford initialization techniques proved important for 3-dimensional architectures. Most notably, too large initial values of the weights of Clifford convolution layers hindered gradient flows through the Clifford Fourier operations.
C.3 TRAINING AND MODEL SELECTION.
We optimized models using the Adam optimizer (Kingma & Ba, 2014) with learning rates [10^{−4}, 2 · 10^{−4}, 5 · 10^{−4}] for 50 epochs and minimized the summed mean squared error (SMSE) which is outlined in Equation 79. We used cosine annealing as learning rate scheduler (Loshchilov & Hutter, 2016) with a linear warmup. For baseline ResNet models, we optimized number of layers, number of channels, and normalization procedures. We further tested different activation functions. For baseline FNO models, we optimized number of layers, number of channels, and number of Fourier modes. Larger numbers of layers or channels did not improve performance for either ResNet or FNO models. For the respective Clifford counterparts, we exchanged convolution and Fourier layers for Clifford convolution and Clifford Fourier layers. We further used Clifford normalization schemes. We decreased the number of layers to obtain similar numbers of parameters. We could have optimized Clifford architectures slightly more by e.g. using different numbers of hidden layers than the baseline models. However, this would (i) go slightly against the argument of having "plug-and-play" replacement layers, and (ii) would have added quite some computational overhead. Finally, we are quite confident that the architectures used are very close to the optimum for the current tasks.
Computational resources. All FNO and CFNO experiments used 4×16 GB NVIDIA V100 machines for training. All ResNet and Clifford ResNet experiments used 8×32 GB NVIDIA V100 machines. Average training times varied between 3 h and 48 h, depending on task and number of trajectories. Clifford runs on average took twice as long to train for equivalent architectures and epochs.
C.4 NAVIER-STOKES IN 2D
The incompressible Navier-Stokes equations are built upon momentum and mass conservation of fluids. Momentum conservation yields for the velocity flow field v:
∂v/∂t = −v · ∇v + µ∇²v − ∇p + f ,    (80)
where v · ∇v is the convection, µ∇²v the viscosity, ∇p the internal pressure and f an external force. Convection is the rate of change of a vector field along a vector field (in this case along itself); viscosity is the diffusion of a vector field, i.e. the net movement from higher-valued regions to lower-valued regions; µ is the viscosity coefficient. The incompressibility constraint yields mass conservation via
∇ · v = 0 .(81)
Additional to the velocity field v(x), we introduce a scalar field s(x) representing a scalar quantity that is being transported through the velocity field. For example, v might represent velocity of air inside a room, and s might represent concentration of smoke. As the vector field changes, the scalar field is transported along it, i.e. the scalar field is advected by the vector field. Similar to convection, advection is the transport of a scalar field along a vector field:
ds/dt = −v · ∇s .    (82)
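For illustration, the advection step of Equation 82 can be discretized with a simple first-order upwind scheme. The sketch below is not the semi-Lagrangian/MacCormack advection used by the solver described next; it assumes a periodic square grid with spacing dx and is meant only to make the transport term concrete.

import numpy as np

def advect_upwind(s, vx, vy, dx, dt):
    # one explicit upwind step of ds/dt = -v . grad(s) on a periodic grid
    ds_dx_m = (s - np.roll(s, 1, axis=1)) / dx    # backward difference in x
    ds_dx_p = (np.roll(s, -1, axis=1) - s) / dx   # forward difference in x
    ds_dy_m = (s - np.roll(s, 1, axis=0)) / dx
    ds_dy_p = (np.roll(s, -1, axis=0) - s) / dx
    ds_dx = np.where(vx > 0, ds_dx_m, ds_dx_p)    # pick the upwind side
    ds_dy = np.where(vy > 0, ds_dy_m, ds_dy_p)
    return s - dt * (vx * ds_dx + vy * ds_dy)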
We implement the 2D Navier-Stokes equation using ΦFlow 16 (Holl et al., 2020). Solutions are propagated in time; at each step we solve for the pressure field and subtract its spatial gradients afterwards. Semi-Lagrangian advection (convection) is used for v, and MacCormack advection for s. Additionally, we express the external buoyancy force f in Equation 80 as a force acting on the scalar field. Solutions are obtained using the Boussinesq approximation (Kleinstreuer, 1997), which ignores density differences except where they appear in terms multiplied by the acceleration due to gravity. The essence of the Boussinesq approximation is that the difference in inertia is negligible but gravity is sufficiently strong to make the specific weight appreciably different between the two fluids.
Equation details. We obtain data for the 2D Navier-Stokes equations on a grid with spatial resolution of 128 × 128 (∆x = 0.25, ∆y = 0.25), and temporal resolution of ∆t = 1.5 s. The equation is solved on a closed domain with Dirichlet boundary conditions (v = 0) for the velocity, and Neumann boundaries ∂s/∂x = 0 for the scalar smoke field. The viscosity parameter is set to ν = 0.01, and a buoyancy factor of (0, 0.5)^T is used. The scalar field is initialized with random Gaussian noise fluctuations, and the velocity field is initialized to 0. We run the simulation for 21 s and sample every 1.5 s. Trajectories contain scalar and vector fields at 14 different time points.
Results. Results are summarized in Figures 9 and 10, and detailed in Table 1. Figure 11 displays examples of Navier-Stokes rollouts of scalar and vector fields obtained by Clifford Fourier surrogates, and contrasts them with ground truth trajectories. For ResNet-like architectures, we observe that both CResNet and CResNet rot improve upon the ResNet baseline. Additionally, we observe that rollout losses are also lower for the two Clifford based architectures, which we attribute to better and more stable models that do not overfit to one-step predictions so easily. Lastly, while in principle CResNet and CResNet rot based architectures are equally flexible, CResNet rot ones in general perform better than CResNet ones. For FNO and respective Clifford Fourier based (CFNO) architectures, the loss is in general much lower than for ResNet based architectures. CFNO architectures improve upon FNO architectures for all dataset sizes, and for one-step as well as rollout losses.

C.5 SHALLOW WATER EQUATIONS

The shallow water equations (Vreugdenhil, 1994) describe a thin layer of fluid of constant density in hydrostatic balance, bounded from below by the bottom topography and from above by a free surface. For example, the deep water propagation of a tsunami can be described by the shallow water equations, and so can a simple weather model. The shallow water equations read:
∂v_x/∂t + v_x ∂v_x/∂x + v_y ∂v_x/∂y + g ∂η/∂x = 0 ,
∂v_y/∂t + v_x ∂v_y/∂x + v_y ∂v_y/∂y + g ∂η/∂y = 0 ,
∂η/∂t + ∂/∂x [ (η + h) v_x ] + ∂/∂y [ (η + h) v_y ] = 0 ,    (83)
where v_x is the velocity in the x-direction, or zonal velocity, v_y is the velocity in the y-direction, or meridional velocity, g is the acceleration due to gravity, η(x, y) is the vertical displacement of the free surface, which subsequently is used to derive pressure fields; h(x, y) is the topography of the earth's surface. We modify the implementation in SpeedyWeather.jl 17 (Klöwer et al., 2022) to further randomize initial conditions to generate our dataset. SpeedyWeather.jl combines the shallow water equations with spherical harmonics for the linear terms and a Gaussian grid for the non-linear terms with the appropriate spectral transforms. It internally uses a leapfrog time scheme with a Robert and Williams filter to dampen the computational modes and achieve 3rd order accuracy. SpeedyWeather.jl is based on the atmospheric general circulation model SPEEDY in Fortran (Molteni, 2003; Kucharski et al., 2013).
Equation details. We obtain data for the 2D shallow water equations on a grid with spatial resolution of 192 × 96 (∆x = 1.875°, ∆y = 3.75°), and temporal resolution of ∆t = 6 h. The equation is solved on a closed domain with periodic boundary conditions. We roll out the simulation for 20 days and sample every 6 h. Here, 20 days is of course not the actual simulation time but rather the simulated time. Trajectories contain scalar pressure and wind vector fields at 84 different time points.
Results. Results are summarized in Figures 12, 13, 14, and detailed in Tables 2, 3. Figure 15 displays examples of shallow water equation rollouts of scalar pressure and vector wind fields obtained by Clifford Fourier surrogate models, and contrasts them with ground truth trajectories. The predictions are fairly indistinguishable from ground truth trajectories. We observe similar results to the Navier-Stokes experiments. However, performance differences between baseline and Clifford architectures are even more pronounced, which we attribute to the stronger coupling of the scalar and the vector fields. For ResNet-like architectures, CResNet and CResNet rot improve upon the ResNet baseline, rollout losses are much lower for the two Clifford based architectures, and CResNet rot based architectures in general perform better than CResNet based ones. For Fourier based architectures, the loss is in general much lower than for ResNet based architectures (a training set size of 56 trajectories yields similar (C)FNO test set performance as a training set size of 896 trajectories for ResNet based architectures). CFNO architectures improve upon FNO architectures for all dataset sizes, and for one-step as well as rollout losses, which is especially pronounced for low numbers of training trajectories.

C.6 MAXWELL'S EQUATIONS IN MATTER IN 3D

Maxwell's equations in matter read:
∇ · D = ρ    (Gauss's law)    (84)
∇ · B = 0    (Gauss's law for magnetism)    (85)
∇ × E = −∂B/∂t    (Faraday's law of induction)    (86)
∇ × H = ∂D/∂t + j    (Ampère's circuital law)    (87)
In isotropic media, the displacement field D is related to the electric field via D = ε0 εr E, where ε0 is the permittivity of free space and εr is the permittivity of the media. Similarly, the magnetization field H in isotropic media is related to the magnetic field B via B = µ0 µr H, where µ0 is the permeability of free space and µr is the permeability of the media. Lastly, j is the electric current density and ρ the total electric charge density.
We propagate the solution of Maxwell's equation in matter using a finite-difference time-domain method 18 , where the discretized Maxwell's equations are solved in a leapfrog manner. First, the electric field vector components in a volume of space are solved at a given instant in time. Second, the magnetic field vector components in the same spatial volume are solved at the next instant in time.
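The leapfrog update can be sketched as follows. This is a simplified illustration in vacuum-like units and is not the fdtd package used to generate the data; grid staggering, light sources, and material constants are omitted, and the function names are placeholders.

import numpy as np

def curl(F, dx):
    # F: (3, X, Y, Z); forward-difference curl with periodic boundaries
    dFx_dy = (np.roll(F[0], -1, axis=1) - F[0]) / dx
    dFx_dz = (np.roll(F[0], -1, axis=2) - F[0]) / dx
    dFy_dx = (np.roll(F[1], -1, axis=0) - F[1]) / dx
    dFy_dz = (np.roll(F[1], -1, axis=2) - F[1]) / dx
    dFz_dx = (np.roll(F[2], -1, axis=0) - F[2]) / dx
    dFz_dy = (np.roll(F[2], -1, axis=1) - F[2]) / dx
    return np.stack([dFz_dy - dFy_dz, dFx_dz - dFz_dx, dFy_dx - dFx_dy])

def fdtd_step(E, H, dt, dx, eps=1.0, mu=1.0):
    # first advance E from the curl of H, then H from the curl of the updated E
    E = E + dt / eps * curl(H, dx)   # Ampere's law without free currents
    H = H - dt / mu * curl(E, dx)    # Faraday's law
    return E, H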
Equation details. We obtain data for the 3D Maxwell's equations on a grid with spatial resolution of 32 × 32 × 32 (∆x = ∆y = ∆z = 5 · 10^{−7} m), and temporal resolution of ∆t = 50 s. We randomly place 18 (6 in the x−y plane, 6 in the x−z plane, 6 in the y−z plane) different light sources outside a cube which emit light with different amplitudes and different phase shifts, causing the resulting D and H fields to interfere with each other. The wavelength of the emitted light is 10^{−5} m. The equation is solved on a closed domain with periodic boundary conditions. We run the simulation for 400 s and sample data every 50 s. Trajectories contain displacement field D and magnetization field H components. Exemplary trajectories are shown in Figure 16.

Results. Results are summarized in Figure 17 and detailed in Table 4.

18 https://github.com/flaport/fdtd

D RELATED WORK

A practical use case for neural PDE surrogates is replacing expensive classical PDE solvers. There is however a major chicken-and-egg problem here (Brandstetter et al., 2022a; Shi et al., 2022): obtaining high quality ground truth training data for neural PDE surrogates often requires using these expensive solvers. Minimizing this data requirement is beginning to be approached in recent works. Geneva & Zabaras (2020); Wandel et al. (2020; 2022) achieve "data-free" training in various settings. "Data-free" refers to the self-supervised training steps, which are done without ground truth data. The current state-of-the-art generic approach is introduced in Shi et al. (2022) as the mean squared residual (MSR) loss constructed by the discretized PDE itself. However, for e.g. generating realistic initial conditions, numerical solvers are still needed. Pestourie et al. (2021) identify how incorporating limited physical knowledge in the form of a low-fidelity "coarse" solver can allow training PDE surrogate models with an order of magnitude less data. Another direction to improve data efficiency is by exploiting the Lie point symmetries of the underlying PDEs, either via data augmentation (Brandstetter et al., 2022a) or by building equivariant PDE surrogates (Wang et al., 2020b). Our current work in a way also improves data efficiency by capturing the inductive bias appropriate for multivector fields. Overall we believe hybrids of such approaches are going to be necessary for making neural PDE surrogates of practical use in many domains.

Neural PDE surrogates for fluid flow and weather forecasting applications are gaining momentum. In weather forecasting, Pathak et al. (2022) introduced FourCastNet as high-resolution weather modeling built on Adaptive Fourier Neural Operators (Guibas et al., 2021), Keisler (2022) successfully applied a graph neural network based approach to weather forecasting, Rasp & Thuerey (2021) achieved data-driven medium-range weather prediction with a ResNet which was pretrained on climate simulations, Weyn et al. (2020) use CNNs on a cubed sphere for global weather prediction, Weyn et al. (2021) forecast weather sub-seasonally with a large ensemble of deep-learning weather prediction models, Arcomano et al. (2020) build a reservoir computing-based, low-resolution, global prediction model, and MetNet (Sønderby et al., 2020) takes as input radar and satellite data to forecast probabilistic precipitation maps. Finally, data assimilation is improved by deep learning techniques in Frerix et al. (2021) and Maulik et al. (2022). Similarly, in fluid dynamics, Ma et al. (2021a) applied U-Nets (Ronneberger et al., 2015) to achieve physics-driven learning of steady Navier-Stokes equations, Stachenfeld et al. (2021) learned coarse models for turbulence simulations, and TF-Net (Wang et al., 2020a) introduced domain-specific variations of U-Nets along with trainable spectral filters in a coupled model of Reynolds-averaged Navier-Stokes and Large Eddy Simulation.

This exhaustive list of neural PDE solver surrogates shows that many of the architectures are based on convolutional or Fourier layers. For these two, Clifford layers are applicable as a drop-in replacement in almost all cases. For graph neural network and attention based architectures, we leave the implementation of respective Clifford counterparts to future work.

Clifford convolutions are related to the work on complex networks by Trabelsi et al. (2017), and closely related to work on quaternion neural networks (Parcollet et al., 2018a; Parcollet et al., 2018b; Nguyen et al., 2021).
Geometric deep learning. The core idea of geometric deep learning (Bronstein et al., 2017; 2021) is to exploit underlying low-dimensionality and structure of the physical world, in order to design deep learning models which can better learn in high dimensional spaces. Incorporating underlying symmetries would be one way to achieve this. If done correctly, it can drastically shrink the search space, which has proven to be quite successful in multiple scenarios. The most obvious examples are CNNs (Fukushima & Miyake, 1982; LeCun et al., 1998), where the convolution operation commutes with the shift operator, and thus provides a way to equip layers and subsequently networks with translation equivariant operations. Group convolution networks (Cohen & Welling, 2016a; Kondor & Trivedi, 2018; Cohen et al., 2019) generalize equivariant layers beyond translations, i.e. provide a concept of how to build general layers that are equivariant to a broader range of groups, such as rotation groups. An appealing way of how to build such group equivariant layers is via so-called steerable basis functions (Hel-Or & Teo, 1998), which allow to write transformation by specific groups as a linear combination of a fixed, finite set of basis functions. This concept leads to steerable group convolution approaches (Cohen & Welling, 2016b; Worrall et al., 2017). Two concrete examples are: (i) circular harmonics, which are respective basis functions for building layers that are equivariant to the group SO(2), the rotation group in 2 dimensions (Worrall et al., 2017; Weiler & Cesa, 2019); (ii) spherical harmonics, which are respective basis functions for building layers that are equivariant to the group SO(3), the rotation group in 3 dimensions (Weiler et al., 2018; Geiger & Smidt, 2022; Brandstetter et al., 2021). The similarity to multivector fields becomes more obvious if we have a closer look at spherical harmonics, which are defined as homogeneous polynomials of degree l, where the l = 0 case corresponds to scalars, the l = 1 case to vectors, and l ≥ 2 to higher order objects. Finally, Jenner & Weiler (2021) built steerable PDE operators such as curl or divergence as equivariant neural network components.
Grouped convolution. In their seminal work, Krizhevsky et al. (2012) introduced filter grouping, which allowed them to reduce the parameters in CNNs. The respective grouped convolutions (not to be confused with group convolutions) divide the filter maps at channel dimension, as the channel dimension most of the time increases strongly for deeper layers, and thus dominates the parameter count. Subsequent work showed that it is beneficial to additionally shuffle the channels for each filter group (Zhang et al., 2018), and to adaptively recalibrate channel-wise feature responses (Hu et al., 2018). All these approaches can be seen in the wider spectrum of effective model scaling (Tan & Le, 2019;Sandler et al., 2018).
Clifford convolutions in contrast do not have groupings in the channel dimensions, but instead group together elements as multivectors. In Clifford convolution, the Clifford kernel is therefore a constrained object where weight blocks appear multiple times (due to the nature of the geometric product). Thus, Clifford convolutions are more parameter efficient than standard convolutions, and all tricks of effective model scaling could in principle be applied on top of Clifford convolutions. Findings from Hoffmann et al. (2020) with respect to higher compute density of alternative algebras are applicable to our work as well.
E GLOSSARY
This short appendix summarizes notations used throughout the paper (Table 5), and contrasts the most fundamental concepts which arise when using Clifford algebras.

Table 5: Notation used throughout the paper.
Notation : Meaning
e1, e2, e3 : Basis vectors of the generating vector space of the Clifford algebra.
e_i ∧ e_j : Wedge (outer) product of basis vectors e_i and e_j.
e_i · e_j = ⟨e_i, e_j⟩ : Inner product of basis vectors e_i and e_j.
e1 e2, e3 e1, e2 e3 : Basis bivectors of the vector space of the Clifford algebra.
e1 e2 e3 : Basis trivector of the vector space of the Clifford algebra.
i2 = e1 e2 : Pseudoscalar for Clifford algebras of grade 2.
i3 = e1 e2 e3 : Pseudoscalar for Clifford algebras of grade 3.
x : Euclidean vector ∈ R^n.
x ∧ y : Wedge (outer) product of Euclidean vectors x and y.
x · y = ⟨x, y⟩ : Inner product of vectors x and y.
a : Multivector.
ab : Geometric product of multivectors a and b.
î, ĵ, k̂ : Base elements of quaternions.
Geometric, Exterior, and Clifford algebras. A geometric algebra is a Clifford algebra of the real numbers. Since we are only using Cl 2,0 (R), Cl 0,2 (R), and Cl 3,0 (R), we are effectively working with geometric algebras. The exterior or Grassmann algebra is built up from the same concepts of scalars, vectors, bivectors, . . . , k-vectors, but only exterior (wedge) products exist. Therefore, the exterior algebra has a zero quadratic form (all base vectors square to zero). Clifford algebras are a generalization thereof with nonzero quadratic forms.
Complex numbers, quaternions, hypercomplex numbers. Hypercomplex numbers are elements of finite-dimensional algebras over the real numbers that are unital, i.e. contain a multiplicative identity element, but not necessarily associative or commutative. Elements are generated from a basis {î, ĵ, ...} such that î², ĵ², ... ∈ {−1, 0, 1}. Complex numbers, quaternions, octonions are all hypercomplex numbers which can be characterized by different Clifford algebras. The bivector, trivector (and higher objects) of the Clifford algebras directly translate into basis elements of the respective algebras. For example, quaternions (which are of the form a + bî + cĵ + dk̂, where î² = ĵ² = k̂² = −1) are isomorphic to the Clifford algebra Cl_{0,2}(R), where the basis elements e1, e2, and e1 e2 directly translate to î, ĵ, k̂.
Spinor. Spinors arise naturally in discussions of the Lorentz group, the group to describe transformations in special relativity. One could say that a spinor is the most basic sort of mathematical object that can be Lorentz-transformed. In its essence, a spinor is a complex two-component vectorlike quantity in which rotations and Lorentz boosts (relativistic translations) are built into the overall formalism. More generally, spinors are elements of complex vector spaces that can be associated with Euclidean vector spaces. However, unlike vectors, spinors transform to their negative when the space is rotated by 360 • . In this work, the subalgebra Cl 0 (2, 0)(R), spanned by even-graded basis elements of Cl 2,0 (R), i.e. 1 and e 1 e 2 , determines the space of spinors via linear combinations of 1 and e 1 e 2 . It is thus isomorphic to the field of complex numbers C. Most notably, spinors of Cl 2,0 (R) commute with the Fourier kernel, whereas vectors do not. For a detailed introduction to spinors we recommend Steane (2013), and the comprehensive physics book of Schwichtenberg (2015).
Pseudoscalar. A pseudoscalar, unlike a scalar, changes sign when you invert the coordinate axis. The easiest example of a pseudoscalar is the scalar triple product of three arbitrary vectors of a Euclidean vector space x, y, z ∈ R^n with inner product ⟨·, ·⟩. The scalar triple product becomes negative under any parity inversion, i.e. swapping any two of the three operands: x · (y × z) = −x · (z × y) = −y · (x × z) = −z · (y × x).
Scalar field, vector field. A field is any (physical) quantity which takes on different values at different points in space (space-time). A scalar field is a map D → R, where D ⊆ R^n. A vector field is a map D → R^n, where D ⊆ R^n. For example, n = 2 results in a vector field in the plane, and n = 3 in a vector field in space. For an interesting history of the evolution of the concept of fields in physics we recommend Mirowski (1991); McMullin (2002). In Table 6, we list various important vector and scalar fields for comparison.

Table 6: Important scalar and vector fields.
Field | Description | Type | Map
Pressure field | Force per unit square (N/m²) = energy per unit volume (J/m³) | Scalar | R³ → R
Mean sea level pressure | Pressure field at mean sea level | Scalar | R² → R
Flow velocity field | Change of a point along its streamline¹ (v) | Vector | R² → R², R³ → R³
Flow speed field | Length of the flow velocity vector (|v|) | Scalar | R² → R, R³ → R
Wind velocity field | Air flow velocity field (v) | Vector | R² → R², R³ → R³
Temperature field | Temperature at a space point (K) | Scalar | R³ → R
Signed distance field (SDF) | Signed distance | Scalar | R³ → R
Occupancy field | Occupancy | Scalar | R³ → R

¹ Streamlines are a family of curves whose tangent vectors constitute the velocity vector field of the flow. Streamlines differ over time when the flow of a fluid changes. The flow velocity vector field itself shows the direction in which a massless fluid element will travel at any spatial coordinate in time, and therefore describes and characterizes a fluid.
Figure 2: Multivector components of Clifford algebras.
Figure 3: e1 ∧ e2 = −e2 ∧ e1: antisymmetry of the bivector exterior (wedge) product.
Figure 5: Sketch of Fourier Neural Operator (FNO) and Clifford Fourier Operator (CFNO) layers.
Figure 7: Results for ResNet based (left) and Fourier based (right) architectures on the 2-dimensional Navier-Stokes and shallow water experiments. One-step and rollout loss are shown.
Figure 8: Results for Fourier based architectures on Maxwell's equations.
Figure 9: Results on Navier-Stokes equations obtained by ResNet based architectures. Unrolled loss, one-step loss, scalar loss and vector loss are reported for ResNet, CResNet, and CResNet rot architectures. Models are trained on training sets with increasing number of trajectories. ResNet based architectures have a much higher loss than FNO based architectures in the low data regime, where possibly smearing and averaging operations are learned first.
Figure 10: Results on Navier-Stokes equations obtained by Fourier based architectures. Rollout loss, one-step loss, scalar loss and vector loss are reported for FNO and CFNO architectures. Models are trained on three training sets with increasing number of trajectories.
Figure 11: Example rollouts of the scalar and vector field of the Navier-Stokes experiments, obtained by a Clifford Fourier PDE surrogate and the ground truth.
Figure 12: Results on the shallow water equations obtained by Fourier based architectures using a two timestep history input. Unrolled loss, one-step loss, scalar loss and vector loss are reported for FNO and CFNO architectures. Models are trained on three training sets with increasing number of trajectories.
Figure 13: Results on the shallow water equations obtained by Fourier based architectures using a four timestep history input. Rollout loss, one-step loss, scalar loss and vector loss are reported for FNO and CFNO architectures. Models are trained on three training sets with increasing number of trajectories.
Figure 14: Results on the shallow water equations obtained by ResNet based architectures using a two timestep history input. Rollout loss, one-step loss, scalar loss and vector loss are reported for ResNet, CResNet, and CResNet rot architectures. Models are trained on training sets with increasing number of trajectories. ResNet based architectures have a much higher loss than FNO based architectures in the low data regime, where possibly smearing and averaging operations are learned first.
Figure 15: Example rollouts of the scalar and vector field of the shallow water experiments, obtained by a Clifford Fourier PDE surrogate (top) and the ground truth (bottom).
Figure 16: An example propagation of the displacement field D and the magnetization field H. Shown are the field components for an arbitrary slice of the x−y plane.
Figure 17: Results on the Maxwell equations obtained by Fourier based architectures using a two timestep history input. Rollout loss, one-step loss, displacement field D loss, and magnetization field H loss are reported for FNO and CFNO architectures. Models are trained on four training sets with increasing number of trajectories.
Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: Going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, 2017.
Taco S Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant CNNs on homogeneous spaces. Advances in Neural Information Processing Systems (NeurIPS), 32, 2019.
James W Cooley and John W Tukey. An algorithm for the machine calculation of complex fourier series. Mathematics of Computation, 19(90):297-301, 1965.
Miles Cranmer, Sam Greydanus, Stephan Hoyer, Peter Battaglia, David Spergel, and Shirley Ho. Lagrangian neural networks. arXiv preprint arXiv:2003.04630, 2020.
Julia Ebling. Visualization and analysis of flow fields based on clifford convolution. 2006.
Julia Ebling and Gerik Scheuermann. Clifford convolution and pattern matching on vector fields. In IEEE Visualization 2003 (VIS 2003), pp. 193-200. IEEE, 2003.
David Hestenes and Garret Sobczyk. Clifford algebra to geometric calculus: a unified language for mathematics and physics, volume 5. Springer Science & Business Media, 2012.
Eckhard Hitzer. The clifford fourier transform in real clifford algebras. 2012.
Eckhard Hitzer. Quaternion and Clifford Fourier Transforms. Chapman and Hall/CRC, 2021.
Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7132-7141, 2018.
Rakhoon Hwang, Jae Yong Lee, Jin Young Shin, and Hyung Ju Hwang. Solving pde-constrained control problems using operator learning. In AAAI Conference on Artificial Intelligence, volume 36, pp. 4504-4512, 2022.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization.
Zijie Li, Kazem Meidani, and Amir Barati Farimani. Transformer for partial differential equations' operator learning. arXiv preprint arXiv:2205.13671, 2022a.
Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
Titouan Parcollet, Ying Zhang, Mohamed Morchid, Chiheb Trabelsi, Georges Linarès, Renato De Mori, and Yoshua Bengio. Quaternion convolutional neural networks for end-to-end automatic speech recognition. arXiv preprint arXiv:1806.07789, 2018b.
Titouan Parcollet, Mohamed Morchid, and Georges Linarès. Quaternion convolutional neural networks for heterogeneous image processing. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8514-8518, 2019. doi: 10.1109/ICASSP.2019.8682495.
Titouan Parcollet, Mohamed Morchid, and Georges Linarès. A survey of quaternion neural networks. Artificial Intelligence Review, 53(4):2957-2982, 2020.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems (NeurIPS), pp. 8024-8035, 2019.
Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, Pedram Hassanzadeh, Karthik Kashinath, and Animashree Anandkumar. Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators. arXiv preprint arXiv:2202.11214, 2022.
JK Pearson and DL Bisset. Neural networks in the clifford domain. In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94), volume 3, pp. 1465-1469. IEEE, 1994.
Justin Pearson. Clifford networks. In Complex-Valued Neural Networks: Theories and Applications, pp. 81-106. World Scientific, 2003.
Raphaël Pestourie, Youssef Mroueh, Chris Rackauckas, Payel Das, and Steven G Johnson. Physics-enhanced deep surrogates for pdes. arXiv preprint arXiv:2111.05841, 2021.
Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter W Battaglia. Learning mesh-based simulation with graph networks. arXiv preprint arXiv:2010.03409, 2020.
David Pfau, James S Spencer, Alexander GDG Matthews, and W Matthew C Foulkes. Ab initio solution of the many-electron schrödinger equation with deep neural networks.
Timothy Praditia, Matthias Karlbauer, Sebastian Otte, Sergey Oladyshkin, Martin V Butz, and Wolfgang Nowak. Finite volume neural network: Modeling subsurface contaminant transport. arXiv preprint arXiv:2104.06010, 2021.
Md Ashiqur Rahman, Manuel A Florez, Anima Anandkumar, Zachary E Ross, and Kamyar Azizzadenesheli. Generative adversarial neural operators. arXiv preprint arXiv:2205.03017, 2022a.
Md Ashiqur Rahman, Zachary E Ross, and Kamyar Azizzadenesheli. U-no: U-shaped neural operators. arXiv preprint arXiv:2204.11127, 2022b.
Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686-707, 2019.
Maziar Raissi, Alireza Yazdani, and George Em Karniadakis. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science, 367(6481):1026-1030, 2020.
Yongming Rao, Wenliang Zhao, Zheng Zhu, Jiwen Lu, and Jie Zhou. Global filter networks for image classification. Advances in Neural Information Processing Systems (NeurIPS), 34, 2021.
Stephan Rasp and Nils Thuerey. Data-driven medium-range weather prediction with a resnet pretrained on climate simulations: A new model for weatherbench. Journal of Advances in Modeling Earth Systems, 13(2):e2020MS002405, 2021.
Edoardo Mello Rella, Ajad Chhatkuli, Ender Konukoglu, and Luc Van Gool. Neural vector fields for surface representation and inference. arXiv preprint arXiv:2204.06552, 2022.
Pierre Renaud. Clifford algebras lecture notes on applications in physics, 2020.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-assisted Intervention, pp. 234-241. Springer, 2015.
Cornelis Boudewijn Vreugdenhil. Numerical methods for shallow-water flow, volume 13. Springer Science & Business Media, 1994.
Nils Wandel, Michael Weinmann, and Reinhard Klein. Learning incompressible fluid dynamics from scratch - towards fast, differentiable fluid models that generalize. arXiv preprint arXiv:2006.08762, 2020.
Nils Wandel, Michael Weinmann, Michael Neidlin, and Reinhard Klein. Spline-pinn: Approaching pdes without data using fast, physics-informed hermite-spline cnns. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 8529-8538, 2022.
Rui Wang, Karthik Kashinath, Mustafa Mustafa, Adrian Albert, and Rose Yu. Towards physics-informed deep learning for turbulent flow prediction. In ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1457-1466, 2020a.
Rui Wang, Robin Walters, and Rose Yu. Incorporating symmetry into deep dynamics models for improved generalization. arXiv preprint arXiv:2002.03061, 2020b.
Sifan Wang, Hanwen Wang, and Paris Perdikaris. Learning the solution operator of parametric partial differential equations with physics-informed DeepONets.
Maurice Weiler and Gabriele Cesa. General e(2)-equivariant steerable CNNs. Advances in Neural Information Processing Systems (NeurIPS), 32, 2019.
Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco S Cohen. 3d steerable CNNs: Learning rotationally equivariant features in volumetric data. Advances in Neural Information Processing Systems (NeurIPS), 31, 2018.
Gege Wen, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, and Sally M Benson. U-fno - an enhanced fourier neural operator-based deep-learning model for multiphase flow. Advances in Water Resources, 163:104180, 2022.
Jonathan A Weyn, Dale R Durran, and Rich Caruana. Improving data-driven global weather prediction using deep convolutional neural networks on a cubed sphere. Journal of Advances in Modeling Earth Systems, 12(9):e2020MS002109, 2020.
Jonathan A Weyn, Dale R Durran, Rich Caruana, and Nathaniel Cresswell-Clay. Sub-seasonal forecasting with a large ensemble of deep-learning weather prediction models. Journal of Advances in Modeling Earth Systems, 13(7):e2021MS002502, 2021.
Daniel E Worrall, Stephan J Garbin, Daniyar Turmukhambetov, and Gabriel J Brostow. Harmonic networks: Deep translation and rotation equivariance. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5028-5037, 2017.
Tailin Wu, Takashi Maruyama, and Jure Leskovec. Learning to accelerate partial differential equations via latent global evolution. arXiv preprint arXiv:2206.07681, 2022.
Yuxin Wu and Kaiming He. Group normalization. In European Conference on Computer Vision (ECCV), pp. 3-19, 2018.
Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, and Srinath Sridhar. Neural fields in visual computing and beyond. In Computer Graphics Forum, volume 41, pp. 641-676. Wiley Online Library, 2022.
Yan Yang, Angela F Gao, Jorge C Castellanos, Zachary E Ross, Kamyar Azizzadenesheli, and Robert W Clayton. Seismic wave propagation and inversion with neural operators. The Seismic Record, 1(3):126-134, 2021.
Di Zang, Xihao Chen, Juntao Lei, Zengqiang Wang, Junqi Zhang, Jiujun Cheng, and Keshuang Tang. A multi-channel geometric algebra residual network for traffic data prediction. IET Intelligent Transport Systems, 2022.
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6848-6856, 2018.
Xuanyu Zhu, Yi Xu, Hongteng Xu, and Changjian Chen. Quaternion convolutional neural networks. In European Conference on Computer Vision (ECCV), 2018.
Yinhao Zhu and Nicholas Zabaras. Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification. Journal of Computational Physics, 366:415-447, 2018.
Kirill Zubov, Zoe McCarthy, Yingbo Ma, Francesco Calisto, Valerio Pagliarino, Simone Azeglio, Luca Bottero, Emmanuel Luján, Valentin Sulzer, Ashutosh Bharambe, et al. Neuralpde: Automating physics-informed neural networks (pinns) with error approximations. arXiv preprint arXiv:2107.09443, 2021.
Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning:
Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021.
Sven Buchholz. A theory of neural computation with clifford algebras. 2005.
Sven Buchholz and Gerald Sommer. Introduction to neural computation in clifford algebra. In
Geometric computing with Clifford algebras, pp. 291-314. Springer, 2001.
Shuhao Cao. Choose a transformer: Fourier or galerkin. Advances in neural information processing
systems, 34:24924-24940, 2021.
Gengxiang Chen, Yingguang Li, Qinglu Meng, Jing Zhou, Xiaozhong Hao, et al. Residual fourier
neural operator for thermochemical curing of composites. arXiv preprint arXiv:2111.10262,
2021.
Taco Cohen and Max Welling. Group equivariant convolutional networks. In International Confer-
ence on Machine Learning (ICML), pp. 2990-2999. PMLR, 2016a.
Taco S Cohen and Max Welling. Steerable cnns. arXiv preprint arXiv:1612.08498, 2016b.
Miles Cranmer, Sam Greydanus, Stephan Hoyer, Peter Battaglia, David Spergel, and Shirley Ho. Lagrangian neural networks. arXiv preprint arXiv:2003.04630, 2020.
Leo Dorst, Daniel Fontijne, and Stephen Mann. Geometric algebra for computer science: an object-
oriented approach to geometry. Elsevier, 2010.
J. Ebling and G. Scheuermann. Clifford Fourier transform on vector fields. IEEE Transactions on
Visualization and Computer Graphics, 11(4):469-479, 2005. doi: 10.1109/TVCG.2005.54.
Todd A Ell. Quaternion-fourier transforms for analysis of two-dimensional linear time-invariant
partial differential systems. In Proceedings of 32nd IEEE Conference on Decision and Control,
pp. 1830-1841. IEEE, 1993.
Todd A Ell and Stephen J Sangwine. Hypercomplex fourier transforms of color images. IEEE
Transactions on image processing, 16(1):22-35, 2006.
Todd A Ell, Nicolas Le Bihan, and Stephen J Sangwine. Quaternion Fourier transforms for signal
and image processing. John Wiley & Sons, 2014.
Todd Anthony Ell. Hypercomplex spectral transformations. PhD thesis, University of Minnesota,
1992.
Thomas Frerix, Dmitrii Kochkov, Jamie Smith, Daniel Cremers, Michael Brenner, and Stephan
Hoyer. Variational data assimilation with a learned inverse observation operator. In International
Conference on Machine Learning (ICML), pp. 3449-3458. PMLR, 2021.
Kunihiko Fukushima and Sei Miyake. Neocognitron: A self-organizing neural network model for
a mechanism of visual pattern recognition. In Competition and cooperation in neural nets, pp.
267-285. Springer, 1982.
David Hestenes. New foundations for classical mechanics, volume 15. Springer Science & Business
Media, 2012.
Eckhard Hitzer and Stephen J Sangwine. Quaternion and Clifford Fourier transforms and wavelets.
Springer, 2013.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):
1735-1780, 1997.
Jordan Hoffmann, Simon Schmitt, Simon Osindero, Karen Simonyan, and Erich Elsen. Algebranets.
arXiv preprint arXiv:2006.07360, 2020.
Philipp Holl, Vladlen Koltun, Kiwon Um, and Nils Thuerey. phiflow: A differentiable pde solving
framework for deep learning via physical simulations. In NeurIPS Workshop, volume 2, 2020.
Jun-Ting Hsieh, Shengjia Zhao, Stephan Eismann, Lucia Mirabella, and Stefano Ermon. Learning
neural PDE solvers with convergence guarantees. arXiv preprint arXiv:1906.01200, 2019.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by
reducing internal covariate shift. In International Conference on Machine Learning (ICML), pp.
448-456. PMLR, 2015.
Erik Jenner and Maurice Weiler. Steerable partial differential operators for equivariant neural net-
works. arXiv preprint arXiv:2106.10163, 2021.
Yan-Bin Jia. Quaternions and rotations. Com S, 477(577):15, 2008.
Xiaowei Jin, Shengze Cai, Hui Li, and George Em Karniadakis. Nsfnets (navier-stokes flow nets):
Physics-informed neural networks for the incompressible navier-stokes equations. Journal of
Computational Physics, 426:109951, 2021.
Ryan Keisler. Forecasting global weather with graph neural networks. arXiv preprint arXiv:2202.07575, 2022.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Georgios Kissas, Jacob H Seidman, Leonardo Ferreira Guilhoto, Victor M Preciado, George J Pap-
pas, and Paris Perdikaris. Learning operators with coupled attention. Journal of Machine Learning
Research, 23(215):1-63, 2022.
Clement Kleinstreuer. Engineering fluid dynamics: an interdisciplinary systems approach. Cam-
bridge University Press, 1997.
Milan Klöwer, Tom Kimpson, Alistair White, and Mosè Giordano. milankl/speedyweather.jl:
v0.2.1, July 2022. URL https://doi.org/10.5281/zenodo.6788067.
Dmitrii Kochkov, Jamie A Smith, Ayya Alieva, Qing Wang, Michael P Brenner, and Stephan
Hoyer. Machine learning-accelerated computational fluid dynamics. Proceedings of the National
Academy of Sciences, 118(21):e2101784118, 2021.
Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural
networks to the action of compact groups. In International Conference on Machine Learning
(ICML), pp. 2747-2755. PMLR, 2018.
Nikola Kovachki, Samuel Lanthaler, and Siddhartha Mishra. On universal approximation and error
bounds for fourier neural operators. Journal of Machine Learning Research (JMLR), 22:Art-No,
2021.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep con-
volutional neural networks. Advances in Neural Information Processing Systems (NeurIPS), 25,
2012.
Fred Kucharski, Franco Molteni, Martin P. King, Riccardo Farneti, In-Sik Kang, and Laura Feu-
dale. On the need of intermediate complexity general circulation models: A "SPEEDY" exam-
ple. Bulletin of the American Meteorological Society, 94(1):25-30, January 2013. doi: 10.1175/
bams-d-11-00238.1. URL https://doi.org/10.1175/bams-d-11-00238.1.
Jack B Kuipers. Quaternions and rotation sequences: a primer with applications to orbits,
aerospace, and virtual reality. Princeton university press, 1999.
Yasuaki Kuroe. Models of clifford recurrent neural networks and their dynamics. In The 2011
international joint conference on neural networks, pp. 1035-1041. IEEE, 2011.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to
document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, An-
drew Stuart, and Anima Anandkumar. Markov neural operators for learning chaotic systems.
arXiv preprint arXiv:2106.06898, 2021a.
Zongyi Li, Hongkai Zheng, Nikola Kovachki, David Jin, Haoxuan Chen, Burigede Liu, Kamyar
Azizzadenesheli, and Anima Anandkumar. Physics-informed neural operator for learning partial
differential equations. arXiv preprint arXiv:2111.03794, 2021b.
Zongyi Li, Daniel Zhengyu Huang, Burigede Liu, and Anima Anandkumar. Fourier neural operator
with learned deformations for pdes on general geometries. arXiv preprint arXiv:2207.05209,
2022b.
Marten Lienen and Stephan Günnemann. Learning the dynamics of physical systems from sparse
observations with finite element networks. arXiv preprint arXiv:2203.08852, 2022.
Joowon Lim and Demetri Psaltis. Maxwellnet: Physics-driven deep neural network training based
on maxwell's equations. Apl Photonics, 7(1):011301, 2022.
Burigede Liu, Nikola Kovachki, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, An-
drew M Stuart, and Kaushik Bhattacharya. A learning-based multiscale method and its applica-
tion to inelastic impact problems. Journal of the Mechanics and Physics of Solids, 158:104668,
2022.
Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv
preprint arXiv:1608.03983, 2016.
Winfried Lötzsch, Simon Ohler, and Johannes S Otterbach. Learning the solution operator of bound-
ary value problems using graph neural networks. arXiv preprint arXiv:2206.14092, 2022.
Pertti Lounesto. Clifford algebras and spinors. In Clifford Algebras and Their Applications in
Mathematical Physics, pp. 25-37. Springer, 1986.
Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning
nonlinear operators via deeponet based on the universal approximation theorem of operators.
Nature Machine Intelligence, 3(3):218-229, 2021.
Lu Lu, Xuhui Meng, Shengze Cai, Zhiping Mao, Somdatta Goswami, Zhongqiang Zhang, and
George Em Karniadakis. A comprehensive and fair comparison of two neural operators (with
practical extensions) based on fair data. Computer Methods in Applied Mechanics and Engineer-
ing, 393:114778, 2022.
Michael Lutter, Christian Ritter, and Jan Peters. Deep lagrangian networks: Using physics as model
prior for deep learning. In International Conference on Learning Representations (ICLR), 2018.
Hao Ma, Yuxuan Zhang, Nils Thuerey, Xiangyu Hu, and Oskar J Haidn. Physics-driven learning
of the steady navier-stokes equations using deep convolutional neural networks. arXiv preprint
arXiv:2106.09301, 2021a.
Wei Ma, Zhaocheng Liu, Zhaxylyk A Kudyshev, Alexandra Boltasseva, Wenshan Cai, and Yongmin
Liu. Deep learning for the design of photonic structures. Nature Photonics, 15(2):77-90, 2021b.
Romit Maulik, Vishwas Rao, Jiali Wang, Gianmarco Mengaldo, Emil Constantinescu, Bethany
Lusch, Prasanna Balaprakash, Ian Foster, and Rao Kotamarthi. Efficient high-dimensional varia-
tional data assimilation with machine-learned reduced-order models. Geoscientific Model Devel-
opment, 15(8):3433-3445, 2022.
Andreas Mayr, Sebastian Lehner, Arno Mayrhofer, Christoph Kloss, Sepp Hochreiter, and Jo-
hannes Brandstetter. Boundary graph neural networks for 3d simulations. arXiv preprint
arXiv:2106.11299, 2021.
Ernan McMullin. The origins of the field concept in physics. Physics in Perspective, 4(1):13-39,
2002.
Pavlo Melnyk, Michael Felsberg, and Mårten Wadenbäck. Embed me if you can: A geometric
perceptron. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp.
1276-1284, 2021.
Philip Mirowski. More heat than light: Economics as social physics, physics as nature's economics.
Cambridge University Press, 1991.
Franco Molteni. Atmospheric simulations using a gcm with simplified physical parametrizations.
i: Model climatology and variability in multi-decadal experiments. Climate Dynamics, 20(2):
175-191, 2003.
C Eddie Moxey, Stephen J Sangwine, and Todd A Ell. Hypercomplex correlation techniques for
vector images. IEEE Transactions on Signal Processing, 51(7):1941-1953, 2003.
E Ulises Moya-Sánchez, Sebastià Xambó-Descamps, Abraham Sánchez Pérez, Sebastián Salazar-
Colores, and Ulises Cortés. A trainable monogenic convnet layer robust in front of large contrast
changes in image classification. IEEE access, 9:163735-163746, 2021.
Tu Dinh Nguyen, Dinh Phung, et al. Quaternion graph neural networks. In Asian Conference on
Machine Learning, pp. 236-251. PMLR, 2021.
Titouan Parcollet, Mirco Ravanelli, Mohamed Morchid, Georges Linarès, Chiheb Trabelsi, Re-
nato De Mori, and Yoshua Bengio. Quaternion recurrent neural networks. arXiv preprint
arXiv:1806.04418, 2018a.
David Pfau, James S Spencer, Alexander G D G Matthews, and W Matthew C Foulkes. Ab initio solution of the many-electron Schrödinger equation with deep neural networks. Physical Review Research, 2(3):033429, 2020.
Appendices

CONTENTS

1 Introduction
2 Background: Clifford algebras
3 Clifford neural layers
4 Experiments
5 Conclusion
A Mathematical background
  A.1 Clifford algebras
  A.2 Examples of low-dimensional Clifford algebras
    A.2.1 Clifford algebra Cl_{0,1}(R)
    A.2.2 Clifford algebra Cl_{2,0}(R)
    A.2.3 Clifford algebra Cl_{0,2}(R)
    A.2.4 Clifford algebra Cl_{3,0}(R)
  A.3 The electromagnetic field in 3 dimensions
B Clifford neural layers
  B.1 Clifford convolution layers
    B.1.1 Translation equivariance of Clifford convolutions
    B.1.2 Rotational Clifford CNN layers
    B.1.3 3D Clifford convolution layers
  B.2 Clifford normalization
  B.3 Clifford initialization
  B.4 Equivariance under rotations and reflections
  B.5 Clifford Fourier layers
    B.5.1 2D Clifford Fourier transform
    B.5.2 2D Clifford convolution theorem
    B.5.3 3D Clifford Fourier transform
    B.5.4 3D Clifford convolution theorem
    B.5.5 Implementation of Clifford Fourier layers
  B.6 Pseudocode
C Experiments
  C.1 Loss function and metrics
  C.2 Models
  C.3 Training and model selection
  C.4 Navier-Stokes in 2D
  C.5 Shallow water equations
  C.6 Maxwell's equations in matter in 3D
D Related work
E Glossary
When the batch sizes are small, it can be more appropriate to use Group Normalization or Layer Normalization. These can be derived with appropriate application of Eq. 50 along appropriate tensor dimensions. As such, batch, layer, and group normalization can be easily extended to 3-dimensional Clifford algebras.

B.3 CLIFFORD INITIALIZATION

Parcollet et al. (2018a); Gaudet & Maida (2018) introduced initialization schemes for quaternions which expand upon the deep network initialization schemes proposed by Glorot & Bengio (2010); He et al. (2015). Similar to Clifford normalization, quaternion initialization schemes can be adapted to Clifford layers in a straightforward way. Effectively, tighter bounds are required for the uniform distribution from which Clifford weights are sampled. However, despite intensive studies we did not observe any performance gains over default PyTorch initialization schemes for 2-dimensional experiments. Similar findings are reported in Hoffmann et al. (2020). However, 3-dimensional implementations necessitate much smaller initialization values (factor 1/8).

B.4 EQUIVARIANCE UNDER ROTATIONS AND REFLECTIONS

Clifford convolutions satisfy the property of equivariance under translation of the multivector inputs, as shown in this Appendix B. However, the current definition of Clifford convolutions is not equivariant under multivector rotations or reflections. Here, we derive a general kernel constraint which allows us to build generalized Clifford convolutions that are equivariant w.r.t. rotations or reflections of the multivectors. That is, we would like to prove equivariance of a Clifford layer under rotations and reflections (i.e. orthogonal transformations) provided the multivector filters {w^i}_{i=1}^{c_out} : Z^2 → (G)^{c_in} satisfy the corresponding kernel constraint.
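To make the transformation behaviour concrete, here is a short NumPy sketch (ours, not code from the paper): under a planar rotation, the scalar and pseudoscalar channels of a Cl(2,0)-style multivector field stay fixed, while the two vector channels mix through the usual 2x2 rotation matrix. A rotation-equivariant Clifford layer has to commute with exactly this component-wise transformation (combined with the corresponding rotation of the spatial grid).

```python
import numpy as np

def rotate_multivector_components(field: np.ndarray, theta: float) -> np.ndarray:
    """Rotate the vector part of a Cl(2,0)-style multivector field pointwise.

    `field` has shape (..., 4), holding (scalar, e1, e2, e12) components at every
    grid point. Under a planar rotation the scalar and pseudoscalar (e12) channels
    are invariant, while (e1, e2) transform with the standard 2x2 rotation matrix.
    The spatial grid itself is left untouched here for simplicity.
    """
    c, s = np.cos(theta), np.sin(theta)
    out = field.copy()
    v1, v2 = field[..., 1], field[..., 2]
    out[..., 1] = c * v1 - s * v2
    out[..., 2] = s * v1 + c * v2
    return out

# Toy check: scalar and pseudoscalar channels are untouched by the rotation.
f = np.random.randn(64, 64, 4)
g = rotate_multivector_components(f, np.pi / 3)
assert np.allclose(f[..., 0], g[..., 0]) and np.allclose(f[..., 3], g[..., 3])
```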
Table 1: Model comparison on four different metrics for neural PDE surrogates which are trained on Navier-Stokes training datasets of varying size. Error bars are obtained by running experiments with three different initial seeds.

METHOD | Trajs. | SMSE scalar | SMSE vector | SMSE one-step | SMSE rollout
ResNet | 2080 | 0.00300 ± 0.00003 | 0.01255 ± 0.00008 | 0.01553 ± 0.00011 | 0.11362 ± 0.00048
CResNet | 2080 | 0.00566 ± 0.00062 | 0.02252 ± 0.00284 | 0.02806 ± 0.00346 | 0.15844 ± 0.02677
CResNet_rot | 2080 | 0.00376 ± 0.00028 | 0.01413 ± 0.00116 | 0.01780 ± 0.00143 | 0.10681 ± 0.00476
Table 2: Model comparison on four different metrics for neural PDE surrogates which are trained on the shallow water equations training datasets of varying size. Results are obtained by using a two timestep history input. Error bars are obtained by running experiments with three different initial seeds.

METHOD | Trajs. | SMSE scalar | SMSE vector | SMSE one-step | SMSE rollout
ResNet | 192 | 0.0240 ± 0.0002 | 0.0421 ± 0.0010 | 0.0661 ± 0.0011 | 1.1195 ± 0.0197
CResNet | 192 | 0.0617 ± 0.0016 | 0.0823 ± 0.0027 | 0.1440 ± 0.0042 | 2.0423 ± 0.0494
CResNet_rot | 192 | 0.0319 ± 0.0003 | 0.0576 ± 0.0005 | 0.0894 ± 0.0007 | 1.4756 ± 0.0044
ResNet | 448 | 0.0140 ± 0.0003 | 0.0245 ± 0.0007 | 0.0385 ± 0.0010 | 0.7083 ± 0.0119
CResNet | 448 | 0.0238 ± 0.0007 | 0.0448 ± 0.0023 | 0.0685 ± 0.0030 | 1.1727 ± 0.0483
CResNet_rot | 448 | 0.0114 ± 0.0001 | 0.0221 ± 0.0001 | 0.0335 ± 0.0002 | 0.6127 ± 0.0018
ResNet | 896 | 0.0086 ± 0.0000 | 0.0156 ± 0.0003 | 0.0242 ± 0.0003 | 0.4904 ± 0.0080
CResNet | 896 | 0.0095 ± 0.0002 | 0.0183 ± 0.0004 | 0.0278 ± 0.0006 | 0.5247 ± 0.0101
CResNet_rot | 896 | 0.0055 ± 0.0000 | 0.0106 ± 0.0001 | 0.0161 ± 0.0001 | 0.3096 ± 0.0010
ResNet | 1792 | 0.0061 ± 0.0002 | 0.0123 ± 0.0009 | 0.0184 ± 0.0010 | 0.4780 ± 0.0062
CResNet | 1792 | 0.0039 ± 0.0000 | 0.0071 ± 0.0000 | 0.0111 ± 0.0001 | 0.2842 ± 0.0067
CResNet_rot | 1792 | 0.0025 ± 0.0000 | 0.0044 ± 0.0000 | 0.0069 ± 0.0000 | 0.2370 ± 0.0000
ResNet | 2048 | 0.0060 ± 0.0002 | 0.0121 ± 0.0003 | 0.0181 ± 0.0005 | 0.4480 ± 0.0058
CResNet | 2048 | 0.0039 ± 0.0001 | 0.0072 ± 0.0002 | 0.0111 ± 0.0003 | 0.2816 ± 0.0065
CResNet_rot | 2048 | 0.0028 ± 0.0005 | 0.0480 ± 0.0006 | 0.0075 ± 0.0011 | 0.2164 ± 0.0070
FNO | 56 | 0.0271 ± 0.0016 | 0.0345 ± 0.0007 | 0.0616 ± 0.0022 | 0.8032 ± 0.0043
CFNO | 56 | 0.0071 ± 0.0003 | 0.0177 ± 0.0004 | 0.0250 ± 0.0007 | 0.4323 ± 0.0046
FNO | 192 | 0.0021 ± 0.0002 | 0.0057 ± 0.0001 | 0.0077 ± 0.0003 | 0.1444 ± 0.0026
CFNO | 192 | 0.0012 ± 0.0000 | 0.0040 ± 0.0001 | 0.0053 ± 0.0001 | 0.0941 ± 0.0021
FNO | 448 | 0.0007 ± 0.0001 | 0.0026 ± 0.0000 | 0.0034 ± 0.0001 | 0.0651 ± 0.0014
CFNO | 448 | 0.0005 ± 0.0000 | 0.0020 ± 0.0000 | 0.0026 ± 0.0001 | 0.0455 ± 0.0009
FNO | 896 | 0.0004 ± 0.0000 | 0.0016 ± 0.0000 | 0.0020 ± 0.0001 | 0.0404 ± 0.0005
CFNO | 896 | 0.0003 ± 0.0000 | 0.0013 ± 0.0000 | 0.0017 ± 0.0001 | 0.0315 ± 0.0004
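The "two timestep history input" in Table 2 (and the four-timestep variant in Table 3 below) means that the surrogate receives the last k solution snapshots at once. One plausible way to build such inputs is sketched here; array names and shapes are illustrative rather than the paper's exact data pipeline.

```python
import numpy as np

def build_history_inputs(trajectory: np.ndarray, k: int):
    """Turn a trajectory of shape (T, C, H, W) into (input, target) pairs.

    Each input stacks the k most recent timesteps along the channel axis,
    giving shape (k * C, H, W); the target is the next timestep.
    """
    T = trajectory.shape[0]
    inputs, targets = [], []
    for t in range(k, T):
        inputs.append(trajectory[t - k:t].reshape(-1, *trajectory.shape[2:]))
        targets.append(trajectory[t])
    return np.stack(inputs), np.stack(targets)

# Example: 14 timesteps of a 3-channel (scalar + 2 vector components) field.
traj = np.random.randn(14, 3, 32, 32)
x, y = build_history_inputs(traj, k=2)
print(x.shape, y.shape)  # (12, 6, 32, 32) (12, 3, 32, 32)
```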
Table 3: Model comparison on four different metrics for neural PDE surrogates which are trained on the shallow water equations training datasets of varying size. Results are obtained by using a four timestep history input. Error bars are obtained by running experiments with three different initial seeds.

METHOD | Trajs. | SMSE scalar | SMSE vector | SMSE one-step | SMSE rollout
FNO | 56 | 0.0276 ± 0.0017 | 0.0388 ± 0.0023 | 0.0663 ± 0.0038 | 0.6821 ± 0.0379
CFNO | 56 | 0.0093 ± 0.0003 | 0.0252 ± 0.0005 | 0.0345 ± 0.0009 | 0.4357 ± 0.0056
FNO | 192 | 0.0033 ± 0.0007 | 0.0069 ± 0.0009 | 0.0102 ± 0.0015 | 0.1612 ± 0.0057
CFNO | 192 | 0.0015 ± 0.0001 | 0.0050 ± 0.0003 | 0.0065 ± 0.0003 | 0.1023 ± 0.0026
FNO | 448 | 0.0009 ± 0.0001 | 0.0023 ± 0.0002 | 0.0032 ± 0.0003 | 0.0687 ± 0.0023
CFNO | 448 | 0.0010 ± 0.0006 | 0.0039 ± 0.0027 | 0.0050 ± 0.0033 | 0.1156 ± 0.0913
FNO | 896 | 0.0004 ± 0.0001 | 0.0012 ± 0.0001 | 0.0016 ± 0.0001 | 0.0436 ± 0.0011
CFNO | 896 | 0.0003 ± 0.0000 | 0.0012 ± 0.0001 | 0.0015 ± 0.0001 | 0.0353 ± 0.0010

C.6 MAXWELL'S EQUATIONS IN MATTER IN 3D

Electromagnetic simulations play a critical role in understanding light-matter interaction and designing optical elements. Neural networks have already been successfully applied in inverse-designing photonic structures (Ma et al., 2021b; Lim & Psaltis, 2022).
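For these Maxwell experiments, the displacement field D and the magnetizing field H are the two reported field quantities (see Table 4 below). The sketch that follows shows one plausible way to pack such a pair of 3D vector fields into an 8-component Cl(3,0)-style multivector grid, assuming D fills the vector blades and H the bivector blades; the paper's exact convention is given in Appendix A.3 and may differ.

```python
import numpy as np

def pack_em_multivector(d_field: np.ndarray, h_field: np.ndarray) -> np.ndarray:
    """Pack D and H into an 8-component multivector grid.

    d_field, h_field: arrays of shape (3, X, Y, Z) holding the Cartesian
    components of D and H. The output has shape (8, X, Y, Z), ordered as
    (scalar, e1, e2, e3, e12, e13, e23, e123); the scalar and pseudoscalar
    channels are left at zero in this illustration.
    """
    grid = d_field.shape[1:]
    mv = np.zeros((8, *grid), dtype=d_field.dtype)
    mv[1:4] = d_field        # vector part
    mv[4:7] = h_field        # bivector part
    return mv

mv = pack_em_multivector(np.random.randn(3, 16, 16, 16), np.random.randn(3, 16, 16, 16))
print(mv.shape)  # (8, 16, 16, 16)
```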
Table 4: Model comparison on four different metrics for neural PDE surrogates which are trained on the Maxwell equations training datasets of varying size. Results are obtained by using a two timestep history input. Error bars are obtained by running experiments with three different initial seeds.

METHOD | Trajs. | SMSE D | SMSE H | SMSE one-step | SMSE rollout
FNO | 640 | 0.0030 ± 0.0006 | 0.00233 ± 0.00050 | 0.0054 ± 0.0011 | 0.0186 ± 0.0083
CFNO | 640 | 0.0006 ± 0.0001 | 0.00072 ± 0.00010 | 0.0013 ± 0.0002 | 0.0054 ± 0.0023
FNO | 1280 | 0.0010 ± 0.0002 | 0.00085 ± 0.00020 | 0.0019 ± 0.0004 | 0.0068 ± 0.0036
CFNO | 1280 | 0.0003 ± 0.0001 | 0.00041 ± 0.00010 | 0.0007 ± 0.0002 | 0.0029 ± 0.0016
FNO | 3200 | 0.0003 ± 0.0001 | 0.00025 ± 0.00010 | 0.0005 ± 0.0001 | 0.0020 ± 0.0011
CFNO | 3200 | 0.0002 ± 0.0000 | 0.00020 ± 0.00010 | 0.0004 ± 0.0001 | 0.0015 ± 0.0009
FNO | 6400 | 0.0001 ± 0.0000 | 0.00009 ± 0.00000 | 0.0002 ± 0.0000 | 0.0008 ± 0.0004
CFNO | 6400 | 0.0001 ± 0.0000 | 0.00009 ± 0.00000 | 0.0002 ± 0.0000 | 0.0007 ± 0.0004
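The one-step and rollout columns in Tables 1-4 compare, respectively, single predicted steps against the ground truth and an autoregressive rollout against the full trajectory. The sketch below computes such errors with a plain mean squared error; the paper's summed/scaled MSE (SMSE) is defined in Appendix C.1 and may normalize differently, so this is only an illustration.

```python
import numpy as np

def one_step_error(model, trajectory, k):
    """Mean squared error of single-step predictions along a trajectory.

    `model` maps a stack of k past states (k, C, H, W) to the next state (C, H, W).
    """
    errs = [np.mean((model(trajectory[t - k:t]) - trajectory[t]) ** 2)
            for t in range(k, len(trajectory))]
    return float(np.mean(errs))

def rollout_error(model, trajectory, k):
    """Mean squared error of an autoregressive rollout seeded with k true steps."""
    history = list(trajectory[:k])
    errs = []
    for t in range(k, len(trajectory)):
        pred = model(np.stack(history[-k:]))
        errs.append(np.mean((pred - trajectory[t]) ** 2))
        history.append(pred)  # feed the prediction back in
    return float(np.mean(errs))
```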
Table 5: Notations used throughout the paper.

Table 6: Examples of various vector and scalar fields. Vector fields ascribe a vector to each point in space, e.g. force, electric current (stream of charged particles), or velocity. Scalar fields, on the other hand, collate each field point with a scalar value such as temperature.

Example | Field quantity | Type | Coordinates
Gravitational field (strength) | Force per unit mass (N/kg) | Vector | R^3 → R^3
Electric field (strength) | Force per unit electric charge (N/C) | Vector | R^3 → R^3
Magnetic field (strength) | Electric current per meter (A/m) | Vector | R^3 → R^3
Pressure field | Force per unit area (N/m^2) | Scalar | R^3 → R
Footnotes:
Operations of addition and multiplication are associative.
In contrast to scalars, pseudoscalars change sign under reflections.
Note that the expansion coefficients for the feature map f^j and filters w^{i,j} in terms of the basis elements of G_2 and in terms of the quaternion elements î, ĵ, and k̂ are the same.
The FFT of a real-valued signal is Hermitian-symmetric, so the output contains only the positive frequencies below the Nyquist frequency for the last spatial dimension.
https://github.com/tum-pbs/PhiFlow
https://developer.nvidia.com/cufft
For alternative efficient GPU-accelerated multidimensional FFT libraries see e.g. https://github.com/DTolm/VkFFT
In contrast to scalars, pseudoscalars change sign under reflection.
In deep learning, a convolution operation in the forward pass is implemented as cross-correlation.
We could not find neural rotational quaternion convolutions in the existing literature; we however used the codebase of https://github.com/Orkis-Research/Pytorch-Quaternion-Neural-Networks as inspiration.
Note that the expansion coefficients for the feature map f^j and filters w^{i,j} in terms of the basis elements of G_2 and in terms of the quaternion elements î, ĵ, and k̂ are the same.
In terms of the rotational filter components ŵ^{i,j}_{rot,0}, ŵ^{i,j}_{rot,1}, ŵ^{i,j}_{rot,2}, and ŵ^{i,j}_{rot,12}, the corresponding rotation matrix reads
$$
\begin{pmatrix}
1 - 2\big((\hat{w}^{i,j}_{rot,2})^2 + (\hat{w}^{i,j}_{rot,12})^2\big) & 2\big(\hat{w}^{i,j}_{rot,1}\hat{w}^{i,j}_{rot,2} - \hat{w}^{i,j}_{rot,0}\hat{w}^{i,j}_{rot,12}\big) & 2\big(\hat{w}^{i,j}_{rot,1}\hat{w}^{i,j}_{rot,12} + \hat{w}^{i,j}_{rot,0}\hat{w}^{i,j}_{rot,2}\big) \\
2\big(\hat{w}^{i,j}_{rot,1}\hat{w}^{i,j}_{rot,2} + \hat{w}^{i,j}_{rot,0}\hat{w}^{i,j}_{rot,12}\big) & 1 - 2\big((\hat{w}^{i,j}_{rot,1})^2 + (\hat{w}^{i,j}_{rot,12})^2\big) & 2\big(\hat{w}^{i,j}_{rot,2}\hat{w}^{i,j}_{rot,12} - \hat{w}^{i,j}_{rot,0}\hat{w}^{i,j}_{rot,1}\big) \\
2\big(\hat{w}^{i,j}_{rot,1}\hat{w}^{i,j}_{rot,12} - \hat{w}^{i,j}_{rot,0}\hat{w}^{i,j}_{rot,2}\big) & 2\big(\hat{w}^{i,j}_{rot,2}\hat{w}^{i,j}_{rot,12} + \hat{w}^{i,j}_{rot,0}\hat{w}^{i,j}_{rot,1}\big) & 1 - 2\big((\hat{w}^{i,j}_{rot,1})^2 + (\hat{w}^{i,j}_{rot,2})^2\big)
\end{pmatrix}
$$
The default PyTorch initialization of linear and convolution layers is He Uniform initialization (He et al., 2015) for 2-dimensional problems. The gain is calculated using LeakyReLU activation functions with a negative slope of √5, which effectively results in Glorot Uniform initialization.
https://github.com/tum-pbs/PhiFlow
https://github.com/milankl/SpeedyWeather.jl
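As a quick illustration of the Hermitian-symmetry footnote above, which matters when implementing Fourier layers on real-valued field channels, the following NumPy snippet (an illustration added here, not part of the paper) shows that the real FFT stores only the non-negative frequencies along the last dimension.

```python
import numpy as np

x = np.random.randn(32, 32)          # real-valued 2D signal
full = np.fft.fftn(x)                # shape (32, 32), complex
half = np.fft.rfftn(x)               # shape (32, 17): last axis keeps 32 // 2 + 1 bins
print(full.shape, half.shape)

# The discarded negative frequencies are redundant for real input:
assert np.allclose(np.fft.irfftn(half, s=x.shape), x)
```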
Azzam Alfarraj and Guo-Wei Wei. Geometric algebra generation of molecular surfaces. Journal of the Royal Society Interface, 19(189):20220117, 2022.
Troy Arcomano, Istvan Szunyogh, Jaideep Pathak, Alexander Wikner, Brian R Hunt, and Edward Ott. A machine learning-based global atmospheric forecast model. Geophysical Research Letters, 47(9):e2020GL087776, 2020.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Yohai Bar-Sinai, Stephan Hoyer, Jason Hickey, and Michael P Brenner. Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences, 116(31):15344-15349, 2019.
Joshua Bassey, Lijun Qian, and Xianfang Li. A survey of complex-valued neural networks. arXiv preprint arXiv:2101.12249, 2021.
Jeff Bezanson, Alan Edelman, Stefan Karpinski, and Viral B Shah. Julia: A fresh approach to numerical computing. SIAM Review, 59(1):65-98, 2017. URL https://doi.org/10.1137/141000671.
Saakaar Bhatnagar, Yaser Afshar, Shaowu Pan, Karthik Duraisamy, and Shailendra Kaushik. Prediction of aerodynamic flow fields using convolutional neural networks. Computational Mechanics, 64(2):525-545, 2019.
Fred Brackx, Eckhard Hitzer, and Stephen J Sangwine. History of quaternion and Clifford-Fourier transforms and wavelets. Quaternion and Clifford Fourier Transforms and Wavelets, 27:XI-XXVII, 2013.
Johannes Brandstetter, Rob Hesselink, Elise van der Pol, Erik Bekkers, and Max Welling. Geometric and physical quantities improve E(3) equivariant message passing. arXiv preprint arXiv:2110.02905, 2021.
Johannes Brandstetter, Max Welling, and Daniel E Worrall. Lie point symmetry data augmentation for neural PDE solvers. arXiv preprint arXiv:2202.07643, 2022a.
Johannes Brandstetter, Daniel Worrall, and Max Welling. Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376, 2022b.
Susanne C Brenner and L Ridgway Scott. The mathematical theory of finite element methods, volume 3. Springer, 2008.
Victor Garcia Satorras, Zeynep Akata, and Max Welling. Combining generative and discriminative models for hybrid inference. In Advances in Neural Information Processing Systems (NeurIPS), pp. 13802-13812. Curran Associates, Inc., 2019.
Chase J Gaudet and Anthony S Maida. Deep quaternion networks. In International Joint Conference on Neural Networks (IJCNN), pp. 1-8. IEEE, 2018.
Mario Geiger and Tess Smidt. e3nn: Euclidean neural networks. arXiv preprint arXiv:2207.09453, 2022.
Nicholas Geneva and Nicholas Zabaras. Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks. Journal of Computational Physics, 403:109056, 2020.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 249-256. JMLR Workshop and Conference Proceedings, 2010.
Eleonora Grassucci, Aston Zhang, and Danilo Comminiello. Lightweight convolutional neural networks by hypercomplex parameterization. arXiv preprint arXiv:2110.04176, 2021.
Daniel Greenfeld, Meirav Galun, Ronen Basri, Irad Yavneh, and Ron Kimmel. Learning to optimize multigrid PDE solvers. In International Conference on Machine Learning (ICML), pp. 2415-2423, 2019.
David J Griffiths. Introduction to electrodynamics, 2005.
Steven Guan, Ko-Tsung Hsu, and Parag V Chitnis. Fourier neural operator networks: A fast and general solver for the photoacoustic wave equation. arXiv preprint arXiv:2108.09374, 2021.
John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, and Bryan Catanzaro. Adaptive Fourier neural operators: Efficient token mixers for transformers. arXiv preprint arXiv:2111.13587, 2021.
Xiaoxiao Guo, Wei Li, and Francesco Iorio. Convolutional neural networks for steady flow approximation. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 481-490, 2016.
Jayesh K Gupta, Kunal Menda, Zachary Manchester, and Mykel J Kochenderfer. A general framework for structured learning of mechanical systems. arXiv preprint arXiv:1902.08705, 2019.
Jiequn Han, Arnulf Jentzen, and Weinan E. Solving high-dimensional partial differential equations using deep learning. Proceedings of the National Academy of Sciences, 115(34):8505-8510, 2018.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In IEEE International Conference on Computer Vision (ICCV), pp. 1026-1034, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
Yacov Hel-Or and Patrick C Teo. Canonical decomposition of steerable functions. Journal of Mathematical Imaging and Vision, 9(1):83-95, 1998.
Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016.
Jan Hermann, Zeno Schätzle, and Frank Noé. Deep-neural-network solution of the electronic Schrödinger equation. Nature Chemistry, 12(10):891-897, 2020.
David Hestenes. Oersted medal lecture 2002: Reforming the mathematical language of physics, 2003.
Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter W Battaglia. Learning to simulate complex physics with graph networks. arXiv preprint arXiv:2002.09405, 2020.
Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4510-4520, 2018.
Stephen J Sangwine and Todd A Ell. Colour image filters based on hypercomplex convolution. IEE Proceedings - Vision, Image and Signal Processing, 147(2):89-93, 2000.
Jakob Schwichtenberg. Physics from symmetry. Springer, 2015.
Wenlei Shi, Xinquan Huang, Xiaotian Gao, Xinran Wei, Jia Zhang, Jiang Bian, Mao Yang, and Tie-Yan Liu. LordNet: Learning to solve parametric partial differential equations without simulated data. arXiv preprint arXiv:2206.09418, 2022.
Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. Advances in Neural Information Processing Systems, 28, 2015.
Justin Sirignano and Konstantinos Spiliopoulos. DGM: A deep learning algorithm for solving partial differential equations. Journal of Computational Physics, 375:1339-1364, 2018.
Casper Kaae Sønderby, Lasse Espeholt, Jonathan Heek, Mostafa Dehghani, Avital Oliver, Tim Salimans, Shreya Agrawal, Jason Hickey, and Nal Kalchbrenner. MetNet: A neural weather model for precipitation forecasting. arXiv preprint arXiv:2003.12140, 2020.
Matthew Spellings. Geometric algebra attention networks for small point clouds. arXiv preprint arXiv:2110.02393, 2021.
Kimberly Stachenfeld, Drummond B Fielding, Dmitrii Kochkov, Miles Cranmer, Tobias Pfaff, Jonathan Godwin, Can Cui, Shirley Ho, Peter Battaglia, and Alvaro Sanchez-Gonzalez. Learned coarse models for efficient turbulence simulation. arXiv preprint arXiv:2112.15275, 2021.
Andrew M Steane. An introduction to spinors. arXiv preprint arXiv:1312.3824, 2013.
Jaap Suter. Geometric algebra primer. http://www.jaapsuter.com/geometric-algebra.pdf, 2003.
Mingxing Tan and Quoc Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning (ICML), pp. 6105-6114. PMLR, 2019.
Roger Temam. Navier-Stokes equations: theory and numerical analysis, volume 343. American Mathematical Soc., 2001.
D RELATED WORK

This appendix supports detailed discussions of how our work relates to complex and quaternion neural networks, to work on Clifford algebras and Clifford Fourier transforms in computer vision, to Fourier Neural Operators, equivariant neural networks and geometric deep learning approaches, and to neural operator learning and neural PDE surrogates.

Clifford (geometric) algebra and Clifford Fourier transform. (Real) Clifford algebras (also known as geometric algebras), as an extension of elementary algebra to work with geometrical objects such as vectors, are extensively discussed in Suter (2003); Hestenes (2003); Dorst et al. (2010); Hestenes (2012); Renaud (2020). Compared to other formalisms for manipulating geometric objects, Clifford algebras are tailored towards vector manipulation of objects of different dimensions. Hypercomplex and quaternion Fourier transforms are extensively discussed in Ell (1992; 1993); Ell & Sangwine (2006); Ell et al. (2014). This work heavily builds on the concepts introduced in Ebling & Scheuermann (2003; 2005); Ebling (2006). Comprehensive summaries of Clifford and quaternion Fourier transforms can be found in Hitzer & Sangwine (2013); Brackx et al. (2013); Hitzer (2021). Clifford algebras and Clifford Fourier transforms are already deployed to solve PDEs numerically in Alfarraj & Wei (2022). More precisely, the Clifford-Fourier transform is used to solve the mode decomposition process in PDE transforms.

Neural networks in the Clifford domain were proposed already in 1994 by Pearson & Bisset (1994), and later by Pearson (2003). These works put the emphasis on the geometric perceptron (Melnyk et al., 2021), i.e. how to recast vanilla multilayer perceptrons (MLPs) as Clifford MLPs. Similarly, Hoffmann et al. (2020) generalized from complex numbers and quaternions to a set of alternative algebras. Besides Clifford MLPs, Clifford algebras have been used in recurrent neural networks (RNNs) (Kuroe, 2011), and have been used to formulate quantum neural networks (Trindade et al., 2022). Their applicability to neural computing has been studied in general (Buchholz & Sommer, 2001; Buchholz, 2005), exploring global exponential stabilities of Clifford MLPs with time-varying delays and impulsive effects. Probably the most related works are: (i) Zang et al. (2022), who build geometric algebra convolution networks to process spatial and temporal data of 3D traffic data. Multidimensional traffic parameters are encoded as multivectors, which allows to model correlation between traffic data in both spatial and temporal domains. (ii) Spellings (2021), who builds rotation- and permutation-equivariant graph network architectures based on geometric algebra products of node features. Higher order information is built from available node inputs.

In contrast to previous works, we are the first to introduce the multivector viewpoint of field components, which allows us to effectively connect Clifford neural layers with the geometric structure of the input data. We further connect neural Clifford convolutions on multivectors with various works on complex numbers and quaternions. We are further the first to introduce neural Clifford Fourier transforms.

Complex and quaternion neural networks. Trabelsi et al. (2017) introduced the key components for complex-valued deep neural networks. More precisely, they introduced convolutional (LeCun et al., 1998) feed-forward and convolutional LSTM (Shi et al., 2015; Hochreiter & Schmidhuber, 1997) networks, together with complex batch-normalization and complex weight initialization strategies. Quaternions are a natural extension of complex neural networks. Already in classical computer vision, quaternions as hypercomplex convolution (Sangwine & Ell, 2000) and hypercomplex correlation (Moxey et al., 2003) techniques were introduced for color image processing. Quaternion based deep learning architectures are a natural extension of complex neural networks. In quaternion neural networks (Zhu et al., 2018; Parcollet et al., 2018a;b; 2019; 2020; Nguyen et al., 2021; Gaudet & Maida, 2018; Moya-Sánchez et al., 2021), concepts such as complex convolution, complex batchnorm, and complex initialization are transferred from the complex numbers C, which are algebra-isomorph to Cl(0,1)(R), to Cl(0,2)(R), which is algebra-isomorph to the quaternions H. Although Hoffmann et al. (2020) generalized these from complex numbers and quaternions to a set of alternative algebras, their tasks did not really leverage any multivector structure in data.

Fourier Neural Operators. Fourier Neural Operators (FNOs) (Li et al., 2020) have had tremendous impact towards improving neural PDE solver surrogates. Efficient implementations of FNO layers come as physics-informed neural networks (PINO) (Li et al., 2021b), as U-shaped network architectures (UNO) (Rahman et al., 2022b), as spectral surrogates for vision transformer architectures (Rao et al., 2021; Guibas et al., 2021), as Markov neural operators (MNO) for chaotic systems (Li et al., 2021a), and as generative adversarial neural operators (GANOs) (Rahman et al., 2022a). Applications range from weather forecasting (Pathak et al., 2022), CO2-water multiphase problems (Wen et al., 2022), multiscale methods for crystal plasticity (Liu et al., 2022), seismic wave propagation (Yang et al., 2021), photoacoustic wave propagation (Guan et al., 2021), PDE-constrained control problems (Hwang et al., 2022), and thermochemical curing of composites (Chen et al., 2021). Recently, FNOs have been successfully applied to PDEs on general geometries (Li et al., 2022b). Furthermore, universal approximation and error bounds have been studied for FNOs (Kovachki et al., 2021).

Neural PDE solvers/surrogates. The intersection of PDE solving, deep learning, fluid dynamics, and weather forecasting has developed into a very active hub of research lately (Thuerey et al., 2021). We roughly group recent approaches to learn neural PDE surrogates and neural PDE solvers into three categories: (i) hybrid approaches, where neural networks augment numerical solvers or replace parts of numerical solvers; (ii) direct approaches, (a) where the mapping from an initial state to a solution is learned, i.e. the solution function of the underlying PDE is approximated, and (b) where the solution operator of the underlying PDE is approximated.

Ad (i): Neural networks augment numerical solvers by learning data-driven discretizations for PDEs (Bar-Sinai et al., 2019) or by controlling learned approximations inside the calculation of standard numerical solvers used for computational fluid dynamics (Kochkov et al., 2021). In Greenfeld et al. (2019), a prolongation is learned which maps from discretized PDE solutions to multigrid solutions, Hsieh et al. (2019) learn to modify the updates of an existing solver, Praditia et al. (2021) adopt the numerical structure of Finite Volume Methods (FVMs), and Um et al. (2020) learn a correction function of conventional PDE solvers to improve accuracy. All these approaches are hybrid approaches (Garcia Satorras et al., 2019), where the computational graph of the solver is preserved and heuristically-chosen parameters are predicted with a neural network. A different flavor of hybrid approaches can be assigned to the works of Sanchez-Gonzalez et al. (2020); Pfaff et al. (2020); Mayr et al. (2021), who predict accelerations of particles/meshes to numerically update the respective positions. Finally, PauliNet (Hermann et al., 2020) and FermiNet (Pfau et al., 2020) approximate wave-functions of many-electron systems, and thus replace the hand-crafted ansatz which is conventionally used in variational quantum Monte Carlo methods.

Ad (ii.a): Sirignano & Spiliopoulos (2018); Han et al. (2018) approximate the solution of high-dimensional Black-Scholes and Hamilton-Jacobi-Bellman equations, respectively. Physics-informed neural networks (PINNs) (Raissi et al., 2019) embed the underlying physics in the training process, and can be used to solve both forward (Jin et al., 2021) as well as backward (Raissi et al., 2020) dynamics. Zubov et al. (2021) allow automating many of these aspects under a single coherent interface.

Ad (ii.b): Guo et al. (2016) learned a surrogate CNN-based model to approximate steady-state flow field predictions, similarly Bhatnagar et al. (2019) trained a surrogate CNN-based model to predict solutions for unseen flow conditions and geometries, and Zhu & Zabaras (2018) used Bayesian CNNs for surrogate PDE modeling and uncertainty quantification. Fourier Neural Operators (FNOs) (Li et al., 2020) proposed the mapping from parameter space to solution spaces, and had tremendous impact towards improving neural PDE solver surrogates. In parallel, Lu et al. (2021) introduced DeepONet, which learns mappings between function spaces, and was successfully applied to many parametric ODEs and PDEs. Both FNOs and DeepONets have been combined with PINNs and trained in a physics-informed style (Li et al., 2022b; Wang et al., 2021). A comprehensive comparison of these two neural operator approaches is done by Lu et al. (2022). Other directions include the modeling of PDE solution operators via latent space models, transformers, and graph neural networks (GNNs). Wu et al. (2022) present the modeling of the system dynamics in a latent space with fixed dimension, where the latent modeling is done via MLPs, and the encoding and decoding via CNNs, which can also be replaced by graph neural networks. Cao (2021) proposes the Galerkin transformer, a simple attention-based operator learning method without softmax normalization, LOCA (Learning Operators with Coupled Attention) (Kissas et al., 2022) maps the input functions to a finite set of features and attends to them by output query locations, and Li et al. (2022a) propose a transformer which provides a flexible way to implicitly exploit the patterns within inputs. Brandstetter et al. (2022b) formulated a message passing neural network approach that representationally contains several conventional numerical PDE solving schemes. Further GNN-based approaches are Lötzsch et al. (2022), who learn the operator for boundary value problems on finite element method (FEM) (Brenner et al., 2008) ground truth data, and Lienen & Günnemann (2022), who derive their GNN models from FEM in a principled way.
"https://github.com/flaport/fdtd",
"https://github.com/tum-pbs/PhiFlow",
"https://github.com/Orkis-Research/Pytorch-Quaternion-Neural-Networks",
"https://github.com/tum-pbs/PhiFlow",
"https://github.com/milankl/SpeedyWeather.jl"
]
|
[
"Sim-to-Real via Sim-to-Sim: Data-efficient Robotic Grasping via Randomized-to-Canonical Adaptation Networks",
"Sim-to-Real via Sim-to-Sim: Data-efficient Robotic Grasping via Randomized-to-Canonical Adaptation Networks"
]
| [
"Stephen James ",
"Paul Wohlhart ",
"Mrinal Kalakrishnan ",
"Dmitry Kalashnikov ",
"Alex Irpan ",
"Julian Ibarz ",
"Sergey Levine ",
"Raia Hadsell ",
"Konstantinos Bousmalis [email protected] "
]
| []
| []
doi: 10.1109/cvpr.2019.01291 | arXiv: 1812.07252 | pdf: https://arxiv.org/pdf/1812.07252v2.pdf
Sim-to-Real via Sim-to-Sim: Data-efficient Robotic Grasping via Randomized-to-Canonical Adaptation Networks
Stephen James
Paul Wohlhart
Mrinal Kalakrishnan
Dmitry Kalashnikov
Alex Irpan
Julian Ibarz
Sergey Levine
Raia Hadsell
Konstantinos Bousmalis [email protected]
Sim-to-Real via Sim-to-Sim: Data-efficient Robotic Grasping via Randomized-to-Canonical Adaptation Networks
Real world data, especially in the domain of robotics, is notoriously costly to collect. One way to circumvent this can be to leverage the power of simulation to produce large amounts of labelled data. However, training models on simulated images does not readily transfer to real-world ones. Using domain adaptation methods to cross this "reality gap" requires a large amount of unlabelled real-world data, whilst domain randomization alone can waste modeling power. In this paper, we present Randomized-to-Canonical Adaptation Networks (RCANs), a novel approach to crossing the visual reality gap that uses no real-world data. Our method learns to translate randomized rendered images into their equivalent non-randomized, canonical versions. This in turn allows for real images to also be translated into canonical sim images. We demonstrate the effectiveness of this sim-to-real approach by training a vision-based closed-loop grasping reinforcement learning agent in simulation, and then transferring it to the real world to attain 70% zero-shot grasp success on unseen objects, a result that almost doubles the success of learning the same task directly on domain randomization alone. Additionally, by joint finetuning in the real world with only 5,000 real-world grasps, our method achieves 91%, attaining comparable performance to a state-of-the-art system trained with 580,000 real-world grasps, resulting in a reduction of real-world data by more than 99%.
Introduction
Deep learning for vision-based robotics tasks is a promising research direction [58]. However, it necessitates large amounts of real-world data, which is a severe limitation, since real-robot data collection is expensive and cumbersome, often requiring days or even months for a single task [34,44]. Due to the availability of affordable cloud computing services, it is becoming more attractive to leverage large-scale simulations to collect experience from a large number of agents in parallel. But with this comes the issue of transferring gained experience from simulation to the real world - a non-trivial task given the usually large domain shift.

Figure 1: We learn a generator that translates randomized simulation images to a chosen canonical simulation version which are then used to train a robot grasping agent (top). The system can then be used to translate real-world images to canonical images, and consequently allow for Sim-to-Real transfer of the agent (bottom). Feeding both source and target images to the agent allows for joint finetuning of the agent in the real world.

Affiliations: 1 Imperial College London (work done while Stephen James was at X). 2 X, Mountain View, California, United States. 3 Google Brain, United States. 4 DeepMind, London. 5 University of California Berkeley, Berkeley, California, United States.
Reducing the reality gap between simulation and reality is possible with recent advances in visual domain adaptation [14,36,5,55,4,66,71,30,54,59,21]. Such techniques usually require large amounts of unlabelled images from the real world. Although such unlabelled images are easier to capture than labelled, they can still be costly to collect in robotics tasks. Domain randomization [51,61,25,38,3,24] is another technique that is particularly popular in robotics, where an agent is trained on a wide range of variations of sensory inputs, with the intention that this forces the input processing layers of the network to extract semantically relevant features in a way that is agnostic to the superficial properties of the image (such as particular textures or particular ways shadows are cast from a constant light source). The intuition is that this leads to a network that extracts the same information from real-world images, featuring yet another variation of the input. However, performing randomization directly on the input of a learning algorithm, as done in related work, makes the task potentially harder than necessary, as the algorithm has to model the arbitrary changes in the visual domain while at the same time deciphering the dynamics of the task. Moreover, although randomization has been successful in the supervised learning setting, there is evidence that some popular reinforcement learning (RL) algorithms, such as DDPG [35] and A3C [39], can be destabilized by this transfer method [38,70].
In this paper, we investigate learning vision-based robotic closed-loop grasping, where a robotic arm is tasked with picking up a diverse range of unseen objects, with the help of simulation and the use of as little real-world data as possible. Robotic grasping is an important application area in robotics, but also an exceptionally challenging problem: since a grasping system must successfully pick up previously unseen objects, it is not enough simply to memorize grasps that work well for individual instances, but to generalize and extrapolate from an internal understanding of geometry and physics. This presents a particularly difficult challenge for simulation-to-real-world transfer: besides the distributional shift from simulated images and physics, the system must also handle domain shift in the distribution of objects themselves.
To that end, we propose Randomized-to-Canonical Adaptation Networks (RCAN), a novel approach to crossing the reality gap that translates real-world images into their equivalent simulated versions, but makes use of no realworld data. This is achieved by leveraging domain randomization in a unique way, where we learn to adapt from one heavily randomized scene to an equivalent non-randomized, canonical version. We are then able to train a robotic grasping algorithm in a pre-defined canonical version of our simulator, and then use our RCAN model to convert the realworld images to the canonical domain our grasping algorithm was trained on.
Using RCAN along with a grasping algorithm that uses QT-Opt, a recent reinforcement learning algorithm, we achieve almost double the performance in comparison to alternative methods of using randomization. Bootstrapping from this performance, and with the addition of only 5,000 real-world grasps, we are able to achieve higher performance than a system trained with 580,000 real-world grasps. In our particular experiment, none of the objects used during testing are seen during either simulated training or real-world joint finetuning.
Our results also show that RCAN (summarized in Figure 1) is superior to learning a grasping network directly with domain randomization. RCAN has additional advantages compared to other simulation-to-real-world transfer methods. Firstly, unlike domain adaptation methods, it does not need any real-world data in order to learn our reality-to-simulation translation function. Secondly, RCAN gives an interpretable intermediate output that would otherwise not be available when performing domain randomization directly on the policy. Finally, as our method is trained in a supervised manner and preprocesses the input to the downstream task, it enables the use of RL methods that currently suffer from stability issues when learning a policy directly from domain randomization [38,70].
In summary, our contributions are as follows:
• We present a novel approach of crossing the reality gap by using an image-conditioned generative adversarial network (cGAN) [23] to transform randomized simulation images into their non-randomized, canonical versions, which in turn enables real-world images to also be transformed to canonical simulation versions.
• We show that by using this approach, we are able to train a state-of-the-art vision-based grasping reinforcement learning algorithm (QT-Opt) purely in simulation and achieve 70% success on the challenging task of grasping previously unseen objects in the real world, almost double the performance obtained by naively using domain randomization on the input of the learning algorithm.
• We also show that by using RCAN and joint finetuning in the real world with only 5,000 additional grasping episodes, we are able to increase grasping performance to 91%, outperforming QT-Opt when trained from scratch in the real world with 580,000 grasps, a reduction of over 99% of required real-world samples.
Related Work
Robotic grasping is a well-studied problem [2]. Traditionally, grasping was solved analytically, where 3D meshes of objects would be used to compute the stability of a grasp against external wrenches [45,47] or constrain the object's motion [47]. These solutions often assume that the same, or similar, objects will be seen during testing, such that point clouds of the test objects can be matched with stored objects based on visual and geometric similarity [6,11,19,20,29]. Due to this limitation, data-driven methods have become the dominant way to solve grasping [33,37]. These methods commonly make use of either hand-labeled grasp positions [33,28], self-supervision [44], or predicting grasp outcomes [34]. State-of-the-art grasping systems typically either operate in an open-loop style, where grasping locations are chosen, and then a motion is executed to complete the grasp [69,41,37,60], or in a closed-loop manner, where grasp prediction is continuously run during motion, either explicitly [65], or implicitly [27].
Simulation-to-real-world transfer concerns itself with learning skills in simulation and then transferring them to the real world, which reduces the need for expensive realdata collection. However, it is often not possible to naively transfer such skills directly due to the visual and dynamics differences between the two domains [26]. Numerous works have looked into enabling such transfer both in computer vision and robotics. In the context of robotic manipulation in particular, Saxena et al. [53] used rendered objects to learn a vision-based grasping model. Rusu et al. [50] introduced progressive neural networks that help adapt an existing deep reinforcement learning policy trained from pixels in simulation to the real world for a reaching task. Other works have considered simulation-to-real world transfer using only depth images [64,18]. Although this may be an attractive option, using depth cameras alone is not suitable for all situations, and coupled with the low cost of simple RGB cameras, there is considerable value in studying transfer in systems that solely use monocular RGB images. Although in this work we use depth estimation from RGB input as an auxiliary task to aid with our randomized-to-canonical image translation model, we neither use depth sensors in the real world, nor do we use our estimated depth during training.
Data augmentation has been a standard tool in computer vision for decades. More recently, and as a way to avoid overfitting, the random application of cropping, flipping samples horizontally, and photometric variations to input images was used to train AlexNet [31] and many more subsequent deep learning models. In robotics, a number of recent works have examined using randomized simulated environments [61,25,38,3,24,52] specifically for simulation-to-real-world transfer for grasping and other similar manipulation tasks, building on prior work on randomization for collision-free robotic indoor flight [51]. These works apply randomization in the form of random textures, lighting, and camera position, allowing the resulting algorithm to become invariant to domain differences and applicable to the real world. There have been more robotics works that do not use vision, but that apply domain randomization on physical properties of the simulator to aid transferability [40,46,1,68,43]. Recently, Chebotar et al. [9] have specifically looked into learning, from few real-world trajectories, the optimal distribution of such simulation properties, for transfer of policies learned in simulation to the real world. All of these methods learn a policy directly on randomization, whilst our method instead utilizes domain randomization in a novel way in order to learn a randomized-to-canonical adaptation function to gain an interpretable intermediate representation and achieve superior results in comparison to learning directly on randomization.
Visual domain adaptation [42,13] is a process that allows a machine learning model trained with samples from a source domain to generalize to a target domain, by utilizing existing but (mostly) unlabeled target data. In simulation-to-reality transfer, the source domain is usually the simulation, whereas the target is the real world. Prior methods can be split into: (1) feature-level adaptation, where domain-invariant features are learned between source and target domains [17,15,57,7,14,36,5,55], or (2) pixel-level adaptation, which focuses on re-stylizing images from the source domain to make them look like images from the target domain [4,66,71,30,54,59,21]. Pixel-level domain adaptation differs from image-to-image translation techniques [23,10,67], which deal with the easier task of learning such a re-stylization from matching pairs of examples from both domains. Our technique can be seen as an image-to-image translation model that transforms randomized renderings from our simulator to their equivalent non-randomized, canonical ones.
In the context of robotics, visual domain adaptation has also been used for simulation-to-real-world transfer [62,56,3]. Bousmalis et al. [3] introduced the GraspGAN method, which combines pixel-level with feature-level domain adaptation to limit the amount of real data needed for learning grasping. Although the task is similar to ours, GraspGAN required significant amounts of unlabeled real-world data that were previously collected by a variety of pre-existing grasping networks. Our method can be viewed as orthogonal to existing domain adaptation methods and GraspGAN: the process of training the adapter could make use of unlabeled real-world data by incorporating ideas from domain adaptation in the form of additional auxiliary losses to improve performance further. Although in this work we do explore using our simulation-trained policy to collect labeled real-world data for joint finetuning, the combination with domain adaptation techniques is proposed as a promising future research direction.
The reverse, i.e. reality-to-simulation transfer, has been examined recently by Zhang et al. [70] in the context of a simple robotic driving task. The approach has certain advantages, namely the learning algorithm is trained only in simulation, and during inference the real-world images are adapted to look like simulated ones. This decouples adaptation from training and if the real-world environment changes, it is only the adaptation model that needs to be re-learned. We also explore reality-to-simulation transfer, but unlike [70], which uses CyCaDA [21] and unlabeled real-world data, we do so only in simulation, by learning to adapt randomized images from our simulator to their equivalent non-randomized versions, which allows data-efficient transfer of our model to the real-world.
Background
We demonstrate our approach by using a recent reinforcement learning algorithm, Q-function Targets via Optimization (QT-Opt) [27], though our method is compatible with any reinforcement learning or imitation learning algorithm, as we are only adapting the input. QT-Opt is a state-of-the-art method for vision-based grasping, which makes it an ideal choice as a baseline for a direct comparison. Below, we will cover the fundamentals of Q-learning and then provide an overview of QT-Opt.
In reinforcement learning, we assume an agent interacting with an environment consisting of states $s \in \mathcal{S}$, actions $a \in \mathcal{A}$, and a reward function $r(s_t, a_t)$, where $s_t$ and $a_t$ are the state and action at time step $t$ respectively. The goal of the agent is then to discover a policy that results in maximizing the total expected reward. One way to achieve such a policy is to use the recently proposed QT-Opt [27] algorithm. QT-Opt is an off-policy, continuous-action generalization of Q-learning, where the goal is to learn a parametrized Q-function (or state-action value function). This can be learned by minimizing the Bellman error:
$$\mathcal{E}(\theta) = \mathbb{E}_{(s,a,s') \sim p(s,a,s')}\left[ D\left( Q_\theta(s,a),\, Q_T(s,a,s') \right) \right], \qquad (1)$$

where $Q_T(s,a,s') = r(s,a) + \gamma V(s')$ is a target value, and $D$ is a divergence metric, defined as the cross-entropy function in this case. Much like other works in RL, stability was improved by the introduction of two target networks. The target value $V(s')$ was computed via a combination of Polyak averaging and clipped double Q-learning to give $V(s') = \min_{i=1,2} Q_{\bar{\theta}_i}\left(s', \arg\max_{a'} Q_{\bar{\theta}_1}(s',a')\right)$. QT-Opt differs from other methods primarily with regards to action selection. Rather than selecting actions based on the argmax, $\pi_{\bar{\theta}_1}(s) = \arg\max_a Q_{\bar{\theta}_1}(s,a)$, QT-Opt instead evaluates the argmax via a stochastic optimization algorithm over $a$; in this case, the cross-entropy method (CEM) [49].
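For illustration, the sketch below shows how such a CEM-based inner maximisation over actions could look. It is a minimal sketch, not the authors' implementation: the q_function(state, actions) callable, the Gaussian parameterisation, and the sample/elite/iteration counts are assumptions made for the example.

```python
import numpy as np

def cem_argmax_q(q_function, state, action_dim, iters=2, samples=64, elites=6):
    """Approximate argmax_a Q(state, a) with the cross-entropy method (CEM).

    q_function(state, actions) is a hypothetical callable that scores a batch
    of candidate actions and returns one Q-value per candidate.
    """
    mean = np.zeros(action_dim)   # parameters of the Gaussian sampling distribution
    std = np.ones(action_dim)
    for _ in range(iters):
        # Sample candidate actions, score them, and keep the best ones.
        actions = np.random.randn(samples, action_dim) * std + mean
        scores = np.asarray(q_function(state, actions))
        elite_actions = actions[np.argsort(scores)[-elites:]]
        # Refit the sampling distribution to the elite set.
        mean = elite_actions.mean(axis=0)
        std = elite_actions.std(axis=0) + 1e-6
    return mean   # approximate maximiser of Q(state, .)
```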
Method
Our method, Randomized-to-Canonical Adaptation Networks (RCAN), consists of an image-conditioned generative adversarial network (cGAN) [23] that transforms images from randomized simulated environments (an example is shown in Figure 2a) into images that seem similar to those obtained from a non-randomized, canonical one (Figure 2b). Once trained, the cGAN generator is also able to transform real-world images into images that seem as if they were obtained from the canonical simulation environment. We are then able to train a reinforcement learning algorithm (in this case QT-Opt) fully in simulation, and use such a generator to enable the trained policy to act in the real world.
The approach assumes 3 domains: the randomized simulation domain, the canonical simulation domain, and the real-world domain. Let $D = \{(x_s, x_c, m_c, d_c)_j\}_{j=1}^{N}$ be a dataset of $N$ training samples, where each sample is a tuple containing an RGB image $x_s$ from the randomization (source) domain, an RGB image $x_c$ from the canonical (target) domain (with semantic content, i.e. scene configuration, matching that of $x_s$), a segmentation mask $m_c$, and a depth image $d_c$. Both the segmentation mask and depth image are only used as auxiliary tasks during the training of our generator. The RCAN generator function $G(x) \rightarrow \{x_a, m_a, d_a\}$ maps an image $x$ from any domain to an adapted image $x_a$, segmentation mask $m_a$, and depth image $d_a$, such that they appear to belong to the canonical domain.
RCAN Data Generation
In order to learn this translation G, we need pairs of observations capturing the robot in interaction with the scene, with one observation showing the scene in its canonical version and the other one showing the same scene but with randomization applied, as shown in images (a) and (b) of Figure 2. Our simulated environments are based on the Bullet physics engine and use the default renderer [12]. They are built to roughly correspond to the real world, and include a Kuka IIWA, a tray, an over-the-shoulder camera aimed at the tray, and a set of graspable objects. Graspable objects consist of a combination of 1,000 procedurally generated objects (consisting of randomly merged geometric shapes), and 51,300 realistic objects from 55 categories obtained from the ShapeNet repository [8].
We create the trajectories from which we sample paired snapshots by running training of QT-Opt in simulation. At the beginning of each episode, the position of the divider in the tray is randomly sampled, and 5 randomly selected objects are dropped into the tray. Then, at each timestep we freeze the scene, apply a new arbitrary randomization (described below) to capture the randomized observation, reset to and capture an observation of the canonical version, and let QT-Opt proceed. In our case, observations consist of RGB images, depth, and segmentation masks, labeling each pixel with one of 5 categories: graspable objects, tray, tray divider, robot arm, and background.
The randomization includes applying at each timestep randomly selected textures from a set of over 5,000 images to all models, which includes the tray, graspable objects, arm segments, and floor. Additionally we randomize the position, direction and color of the lighting. To further increase the diversity of scene configurations beyond those that the normal robot operation during QT-Opt training gives us, we also slightly randomize the position and size of the arm and tray (sampling from a uniform distribution), applying the same transformation to both the canonical and the randomized scene when creating the snapshot, such that the semantics between the two still match.
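To make the paired-capture procedure concrete, the following PyBullet sketch randomizes textures and lighting on a frozen scene and renders the randomized and canonical observations of the same configuration. It is a simplified sketch under assumed names: the body IDs, texture paths, camera matrices, and the restore_canonical_appearance helper are placeholders rather than the code used in this work.

```python
import random
import pybullet as p

def render(view_matrix, proj_matrix, light_direction):
    # Render a 472x472 RGB observation with the requested light direction.
    _, _, rgb, _, _ = p.getCameraImage(472, 472, view_matrix, proj_matrix,
                                       lightDirection=light_direction, shadow=1)
    return rgb

def capture_pair(body_ids, texture_paths, view_matrix, proj_matrix,
                 restore_canonical_appearance):
    """Capture (randomized, canonical) images of the same frozen scene."""
    # Apply a randomly chosen texture to the base and every link of each model.
    for body in body_ids:
        for link in range(-1, p.getNumJoints(body)):
            texture = p.loadTexture(random.choice(texture_paths))
            p.changeVisualShape(body, link, textureUniqueId=texture)
    # Randomize the light direction for the randomized observation.
    random_light = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0),
                    random.uniform(0.5, 1.0)]
    randomized = render(view_matrix, proj_matrix, random_light)

    # Revert to the canonical appearance (placeholder helper) and a fixed light.
    restore_canonical_appearance(body_ids)
    canonical = render(view_matrix, proj_matrix, [0.0, 0.0, 1.0])
    return randomized, canonical
```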
One important question is: what should the canonical environment look like? In practice, the canonical environment can be defined in a number of ways. We opt for applying uniform colors to the background, tray and arm, while leaving the textures for the objects from the randomized version in-place, as this preserves the objects' identity and thus opens up the potential for instance-specific grasping in future works. Each link of the arm is colored independently to aid tracking of individual links of the arm. We opt for fixing the light source in the canonical version, requiring the network to learn some aspect of geometry in order to re-render any shadows in the correct shape and direction.
RCAN Training Method
We aim to learn $G(x_s) \rightarrow \{x_a, m_a, d_a\}$, which transforms randomized sim images into canonical sim images with matching semantics, with the intuition that the generator will generalize to accept an image from the real world $x_r$, and produce a canonical RGB image, segmentation mask, and depth image: $G(x_r) \rightarrow \{x_a, m_a, d_a\}$. To train the generator, we encourage visual equality between the generated $x_a$ and target $x_c$ through a loss function $l_{eq_x}$, semantic equality between $m_c$ and $m_a$ through a function $l_{eq_m}$, and depth equality between $d_c$ and $d_a$ through a function $l_{eq_d}$. Having experimented with L1, L2, and the mean pairwise squared error (MPSE), our solution uses MPSE for $l_{eq_x}$, which was found to converge faster with no loss in performance [5], along with the L2 distance for our auxiliary losses $l_{eq_m}$ and $l_{eq_d}$. This results in the following loss:
$$L_{eq}(G) = \mathbb{E}_{(x_s, x_c, m_c, d_c)}\big[\, \lambda_x\, l_{eq_x}(G_x(x_s), x_c) + \lambda_m\, l_{eq_m}(G_m(x_s), m_c) + \lambda_d\, l_{eq_d}(G_d(x_s), d_c) \,\big], \qquad (2)$$

where $G_x$, $G_m$, and $G_d$ denote the image, mask, and depth elements of the generator output, respectively. In addition, $\lambda_x$, $\lambda_m$, and $\lambda_d$ represent the respective weightings.
It is well known that these equality losses can lead to blurry images [32], and so we employ a sigmoid-cross entropy generative adversarial (GAN) objective [16] to encourage high-frequency sharpness. Let D(x) be a discriminator that outputs the likelihood that a given image x is from the canonical domain. With this, the GAN is trained with the following objective:
$$L_{GAN}(G, D) = \mathbb{E}_x\big[\log D(x)\big] + \mathbb{E}_x\big[\log\big(1 - D(G_x(x))\big)\big], \qquad (3)$$

where $G_x$ denotes the image element of the generator output. The final objective for the generator then becomes:
$$G^{*} = \arg\min_{G} \max_{D}\; L_{GAN}(G, D) + L_{eq}(G). \qquad (4)$$
The generator G and discriminator D are each parameterized by the weights of a convolutional neural network, details of which are presented in Appendix A. Qualitative results of our generator can be seen in Figure 3 and on the project web-page 6 .
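As an illustration of how Equations 2-4 could be assembled, the PyTorch-style sketch below computes the supervised term and the generator side of the adversarial term. It is a sketch only: the loss weights are placeholders, the MPSE is implemented via its reduction to the per-sample variance of the residual, and the generator's adversarial term uses the common non-saturating form rather than literally minimising Equation 3.

```python
import torch
import torch.nn.functional as F

def mean_pairwise_squared_error(pred, target):
    # For residual d = pred - target, the mean over pairs (i, j) of
    # (d_i - d_j)^2 / 2 equals the per-sample variance of d.
    d = (pred - target).flatten(start_dim=1)
    return (d.pow(2).mean(dim=1) - d.mean(dim=1).pow(2)).mean()

def generator_loss(G, D, x_s, x_c, m_c, d_c, lam_x=1.0, lam_m=1.0, lam_d=1.0):
    """Supervised losses of Eq. 2 plus the generator's adversarial term (Eqs. 3-4)."""
    x_a, m_a, d_a = G(x_s)                                   # randomized -> canonical
    l_eq = (lam_x * mean_pairwise_squared_error(x_a, x_c)    # image term (MPSE)
            + lam_m * F.mse_loss(m_a, m_c)                   # auxiliary segmentation term
            + lam_d * F.mse_loss(d_a, d_c))                  # auxiliary depth term
    logits = D(x_a)                                          # discriminator on generated image
    l_adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return l_eq + l_adv
```

The discriminator would be trained separately, with the same sigmoid cross-entropy objective applied to canonical images (labelled real) and generated images (labelled fake).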
Real World Grasping with QT-Opt
We use QT-Opt for our grasping algorithm, and follow the same state and action definition as Kalashnikov et al. [27], where the state is defined as $s_t = (x_t, g_{apt,t}, g_{height,t})$ at each timestep $t$, which includes a $472 \times 472$ image $x_t$ taken from a mounted over-the-shoulder camera overlooking the work space, a binary open/close indicator of gripper aperture $g_{apt,t}$, and the scalar height of the gripper above the bottom of the tray $g_{height,t}$.
In our case, rather than sending the image directly to the RL algorithm, the image $x_t$ is instead passed through the generator G, and the resulting generated image $x_a$ is extracted and concatenated, channel-wise, with the original source image $x_t$. This results in the state $s_t = ([G(x_t) + x_t], g_{apt,t}, g_{height,t})$, where $[G(x_t) + x_t]$ represents the concatenation. Note that we do not use the generated depth and segmentation masks of G as input to QT-Opt in order to make a fair comparison to Kalashnikov et al. [27], though these could also be added in practice. The action space of Kalashnikov et al. [27], which consists of gripper pose displacement and an open/close command, remains unchanged. A summary of the Q-function is shown in Figure 6 of the Appendix, and further details of the action space and architecture can be found in Appendix B.

(a) Randomized-to-canonical samples. (b) Real-to-canonical samples.
Figure 3: Sample outputs of our trained generator G when given randomized sim images (3a) and real images (3b). Note the accuracy of the reconstruction of the canonical images from real-world images in complex and cluttered scenes, along with shadows being re-rendered into the canonical representation. However, also note that randomized-to-canonical adaptation performs a noticeably better reconstruction of the gripper in comparison to the real-to-canonical adaptation. This leads to the failure cases discussed in Section 5. The generated depth and segmentation masks are used as auxiliaries during training of the generator. Further examples can be seen in Figure 8 of the Appendix.

In Kalashnikov et al. [27], the authors take their agent that was trained with 580,000 off-policy real-world grasps, and jointly finetune it with an additional 28,000 on-policy grasps. During this joint finetuning process, QT-Opt asynchronously updates target values, collects real on-policy data, reloads real off-policy (offline) data from past experiences, and then trains the Q-network on both the on- and off-policy data streams within a distributed optimization framework. In the case of jointly finetuning RCAN, we also collect real on-policy data, but rather than using real-world past experiences (which we assume we do not have), we instead leverage the power of our simulation to continuously generate on-policy simulation data, and train on these streams of data. During the real-world on-policy collection of both approaches, a selection of about 1,000 diverse training objects is used; a sample of these is shown in Figure 5 of the Appendix. Between 5 and 10 objects are randomly chosen every few hours to be placed in each of the trays until the desired number of joint finetuning grasps is reached.
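Putting the pieces of this section together, one real-world control step can be sketched as follows: adapt the camera image with the generator, concatenate it channel-wise with the raw image, and pick the action that maximises the Q-function via CEM. The function and variable names are illustrative (the generator, q_function, and the cem_argmax_q routine from the earlier sketch are assumed interfaces), not the released system.

```python
import numpy as np

def rcan_policy_step(camera_rgb, gripper_aperture, gripper_height,
                     generator, q_function, action_dim):
    """One closed-loop control step of the RCAN + QT-Opt pipeline (sketch)."""
    # Translate the real image into the canonical simulation domain.
    canonical_rgb, _seg, _depth = generator(camera_rgb)
    # Channel-wise concatenation of adapted and raw observations: 6 channels.
    stacked = np.concatenate([canonical_rgb, camera_rgb], axis=-1)
    state = {
        "image": stacked,                      # [G(x_t) + x_t]
        "gripper_aperture": gripper_aperture,  # binary open/close indicator
        "gripper_height": gripper_height,      # height above the tray bottom
    }
    # Approximate argmax_a Q(state, a) with CEM (cem_argmax_q from the earlier
    # sketch); action_dim follows the parameterisation described in Appendix B.
    return cem_argmax_q(q_function, state, action_dim)
```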
Experiments
Our experimental section aims to answer the following questions: (1) Can we train an agent to grasp arbitrary unseen objects without having seen any real-world images?
(2) How does QT-Opt perform with standard domain randomization, and can our method perform better than this? (3) Does the addition of real-world on-policy training of our method lead to higher grasping performance while still drastically reducing the amount of real-world data required? We answer these questions through a series of rigorous real-world vision-based grasping experiments across multiple Kuka IIWA robots.
Evaluation Protocol
During evaluation, each robot attempts 102 grasps on its own set of 5 to 6 previously unseen test objects (shown in Figure 5 of the Appendix), which are deposited into each robot's respective tray and remain constant across all evaluations. Each grasp attempt (episode) consists of at most 20 time steps. If after 20 time steps no object has been grasped, the attempt is regarded as a failure. Following a grasp attempt, the object is deposited back into the tray at a random location. Although grasping was done with replacement, in practice QT-Opt was not observed attempting a grasp on the same object multiple times in a row. All observations come from an over-the-shoulder RGB camera.
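The evaluation protocol above amounts to the following bookkeeping loop; grasp_attempt stands in for the robot-side execution of one episode and is a placeholder.

```python
def evaluate_robot(policy, grasp_attempt, num_attempts=102, max_steps=20):
    """Per-robot evaluation: 102 grasp attempts, each capped at 20 time steps."""
    successes = 0
    for _ in range(num_attempts):
        # grasp_attempt runs one episode and reports whether an object was lifted;
        # the object is then deposited back into the tray at a random location.
        if grasp_attempt(policy, max_steps):
            successes += 1
    return successes / num_attempts
```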
Results
We first focus on the first 4 columns of Table 1. The first row of this section shows the results of QT-Opt reported in Kalashnikov et al. [27], where, following 580,000 off-policy real-world grasps, a performance of 87% was achieved. The Canonical Sim data source (second row) takes QT-Opt trained in the canonical simulation environment and then runs this directly in the real world. The low success rate of 21% shows the existence of the reality gap. The following three rows show the result of training QT-Opt directly on varying degrees of randomization: mild, medium and heavy. Mild randomization consists of varying tray texture, object texture and color, robot arm color, lighting direction and brightness, and a background image consisting of 6 different images from the view of the real-world camera. Medium randomization adds a diverse mix of background images to the floor. Finally, heavy randomization uses the same scheme used to train RCAN, explained in Section 4.1.
An unexpected discovery was that QT-Opt responds well to heavy domain randomization during training (i.e. is not destabilized). This is contrary to other RL methods, such as DDPG [35] and A3C [39], where heavy domain randomization has been shown to cause training to fail [38,70]. Although QT-Opt was able to train stably with randomization, the results show that this does not lead to a successful transfer, achieving between 33% and 37% zero-shot grasping performance, whereas RCAN achieves 70%: over double the success in the real world. This success highlights that RCAN better utilizes domain randomization to achieve sim-to-real transfer, rather than training a policy directly on domain randomization.
We now focus on the remaining 2 columns, that is, the ability to jointly finetune on a small number of real-world on-policy grasps. We chose to use 5,000 to represent "small", which is less than 1% of the 580,000 grasps used in Kalashnikov et al. [27] for the off-policy training and takes only a day to collect, instead of months. To make comparison easier, in addition to reporting the 28,000 on-policy grasps for joint finetuning from [27], we also report the performance after 5,000 grasps. This baseline result of 85% suggests that 5,000 real-world grasps for joint finetuning a system already trained with 580,000 does not improve performance. For the next joint finetuning experiment, we take each of the agents that were trained directly on domain randomization, and jointly finetune them on 5,000 real grasps, achieving between 77% and 85% grasping success. The rapid increase of ∼50 p.p. is very surprising, and to the best of our knowledge, no other related works have shown such a dramatic performance increase from pre-training on domain randomization.
Finally, we look at joint finetuning RCAN with 5,000 and 28,000 real grasps, where the real images are adapted by the generator and then both the source and adapted image are passed to the grasping network; in this case, the gradients are only applied to the grasping network and not the generator network. The result of 91% for 5,000 shows that the improvement over learning directly on domain randomization holds, though for this result the difference is much smaller. What we believe is incredibly encouraging for the robotics community is that with 91% RCAN outperforms a version of QT-Opt that was trained on 580,000 real-world grasps, while using less than 1% of the data. Moreover, following joint finetuning with the same number of online grasps as Kalashnikov et al. [27] (28,000), we are able to achieve an almost equal grasp performance of 94%.
In order to understand how performance varies as we progress from 0 to 5,000 on-policy grasps, we repeat the evaluation protocol set above for intermediate checkpoints.
We re-evaluate both RCAN and Mild Randomization at every 1,000 grasps. The results, presented in Figure 4, show that the majority of the success is gained within the first 2,000 grasps for both approaches. This is encouraging, as we ultimately wish to limit the amount of real-world data that we are reliant on.
Failure cases
A large contributing factor to QT-Opt's 96% grasp success was its ability to perform corrective behaviors, regrasping, probing motions to ascertain the best grasp, and non-prehensile repositioning of objects. Much of this ability remained with our approach, except for the regrasping ability. This powerful ability allows the policy to detect when there is no object in the closed gripper, and thus, it can decide to re-open it in an attempt to try and re-grasp. Given that our method is not perfect at translating real-world images into simulation ones, artifacts may arise. As objects that we grasp are often small, it can be very difficult for the agent to tell whether it is seeing artifacts in the image or an object actually held in the gripper. We observe this to be detrimental to the agent's ability to perform regrasping, resulting in only a small number of regrasps. The main observation from joint finetuning our method with 5,000 real-world grasps is the re-emergence of regrasping. We believe this is aided by our decision to concatenate the source image to the generated ones, thus giving the grasping algorithm the option to choose which data source to extract information from for each part of the image as the joint finetuning continues. We hypothesize that, as the number of joint finetuning grasps increases, the network would eventually learn to solely rely on the source (real-world) image, rather than the adapted simulation image. However, we believe that, with a limited amount of labeled real-world data, feeding both the output of RCAN as well as the original image to the agent offers the best combination of a simplified, yet potentially incomplete adapted view and the complex, but complete original real-world view.
Discussion
A number of questions arise from these results. For example: why does our method perform better than learning a policy directly with domain randomization? We hypothesize that our method allows offloading visual complexity to the generator network, thus simplifying the task for the grasping network and, in turn, leading to a higher grasping success. Moreover, having a chosen canonical environment allows us to impose structure on the task, which may be beneficial for training the grasping network. Despite our method achieving over double the zero-shot performance in the real world in comparison to domain randomization, with 5,000 additional real-world grasps direct domain randomization also reaches a surprisingly high performance. This leads us to the hypothesis that learning a policy directly on domain randomization can act as a very powerful pre-training regime, where the network is forced to learn a very general feature extractor that can be easily jointly finetuned to a new environment. Having said that, our method outperforms this and has the added benefit of giving us an interpretable output for sim-to-real transfer.
Another question for future work would be: is there a way to better utilize the data collected during the 5,000 on-policy grasps? Given this real-world data, it is now possible to consider fusing ideas from other transfer methods that require some real-world data, such as PixelDA [5].
Conclusion
We have presented Randomized-to-Canonical Adaptation Networks (RCAN), a sim-to-real method that learns to translate randomized simulation images into a canonical representation, which in turn allows for real-world images to also be translated to this canonical representation. Given that our grasping algorithm (QT-Opt) is trained in this canonical environment, it is possible to run policies trained in simulation in the real world. We show that this approach is superior to the common domain randomization approach, and argue that it is a much more meaningful use of domain randomization. This general style of transfer has applications beyond just grasping, and can be used in other settings where real world data is expensive to collect, for example, producing segmentation masks for self-driving cars. For future work, we wish to explore further ways of introducing unlabelled real-world data in order to improve the real-to-canonical translation. Moreover, we are interested in exploring the effect of using the auxiliary outputs as additional inputs to the grasping network.
A. RCAN Architecture
The generator G is parameterized by weights of a convolutional neural network, summarized in Figure 7, and follows a U-Net style architecture [48] with downsampling performed via 3 × 3 convolutions with stride 2 for the first 2 layers, and average pooling with 3 × 3 convolution of stride 1 for the remaining layers. Upsampling was performed via bilinear upsampling, followed by a 3 × 3 convolution of stride 1, and skip connections were fused back into the network via channel-wise concatenation, followed by a 1 × 1 convolution. All layers were followed by instance normalization [63] and ReLU non-linearities. The discriminator D is also parameterized by weights of a convolutional neural network with 2 layers of 32 3 × 3 filters, followed by a layer of 64 3 × 3 filters, and finally a layer of 128 3 × 3 filters. The network follows a multi-scale patch-based design [3], where 3 scales of 472 × 472, 236 × 236, and 118 × 118 are used to produce domain estimates for all patches, which are then combined to compute the joint discriminator loss.
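The following PyTorch sketch shows one encoder/decoder stage with the bilinear upsampling and skip-connection fusion described above; channel counts, the number of stages, and the output heads (RGB, segmentation, depth) are placeholders rather than the exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_in_relu(c_in, c_out, stride=1):
    # 3x3 convolution followed by instance normalization and ReLU.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride, padding=1),
        nn.InstanceNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class UNetStage(nn.Module):
    """One downsample/upsample pair with channel-wise skip fusion (sketch)."""
    def __init__(self, c_in, c_mid, inner):
        super().__init__()
        self.down = conv_in_relu(c_in, c_mid, stride=2)      # downsampling path
        self.inner = inner                                   # deeper stages or bottleneck
        self.up = conv_in_relu(c_mid, c_in)                  # applied after upsampling
        self.fuse = nn.Conv2d(2 * c_in, c_in, kernel_size=1) # merge the skip connection

    def forward(self, x):
        skip = x
        h = self.inner(self.down(x))
        h = F.interpolate(h, scale_factor=2, mode="bilinear", align_corners=False)
        h = self.up(h)
        # Channel-wise concatenation of decoder features and the skip, then 1x1 conv.
        return self.fuse(torch.cat([h, skip], dim=1))
```

A small two-stage encoder/decoder could then be composed as, for example, UNetStage(64, 128, UNetStage(128, 256, conv_in_relu(256, 256))), with separate convolutional heads on top for the generated image, segmentation mask, and depth outputs.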
B. QT-Opt Architecture
The action space of [27], which consists of gripper pose displacement and an open/close command, remains unchanged in our paper, and is defined as $a_t = (t_t, r_t, g_{close,t}, g_{open,t}, e_t)$, containing Cartesian translation $t_t \in \mathbb{R}^3$, a sine-cosine rotation encoding $r_t \in \mathbb{R}^2$, a one-hot gripper open/close command vector $[g_{close,t}, g_{open,t}] \in \{0,1\}^2$, and a learned stopping criterion $e_t$. The reward function is sparse, consisting of a reward of 1 following a successful grasp, 0 for an unsuccessful grasp, and $-0.05$ on all other transitions. Summarized in Figure 6, the Q-function follows the same architecture as [27] (originally inspired by [34]).

Figure 6: The Q-function of the grasping algorithm. The source image $x$ (either from the randomized domain or real-world domain) and generated canonical image $x_a$ are concatenated (channel-wise) and processed by a convolutional neural network (and fused with action and state variables) to produce a scalar representing the Q value $Q_\theta(s, a)$.
Rather than a single RGB image input, our network takes in a 6-channel image, consisting of the channel-wise concatenation of the source image $x$ (either from the randomized domain or real-world domain) and the generated image $x_a$. Features are extracted from these images via 7 convolutional layers and then merged with a transformed action and state vector (which has passed through 2 fully-connected layers) via element-wise addition. The merged streams are then processed by a further 9 convolution layers and 2 fully-connected layers, resulting in a scalar output representing the Q value $Q_\theta(s, a)$. Each layer, excluding the final one, uses batch normalization [22] and ReLU non-linearities. A summary of the architecture can be seen in Figure 6.
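A minimal PyTorch sketch of a Q-function with this structure is given below; the filter counts, layer counts, and the size of the action/state vector are placeholders, and only the overall pattern (6-channel image input, action/state vector merged by element-wise addition, scalar output) follows the description above.

```python
import torch
import torch.nn as nn

class GraspQNetwork(nn.Module):
    """Q(s, a): 6-channel image plus action/state vector -> scalar value (sketch)."""
    def __init__(self, action_state_dim, width=64):
        super().__init__()
        # Image tower over the channel-wise concatenation of the source image
        # and the generated canonical image (3 + 3 = 6 channels).
        self.image_tower = nn.Sequential(
            nn.Conv2d(6, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Action and gripper-state vector processed by fully-connected layers.
        self.action_tower = nn.Sequential(
            nn.Linear(action_state_dim, width), nn.ReLU(),
            nn.Linear(width, width),
        )
        # Post-merge convolutions and the scalar value head.
        self.head = nn.Sequential(
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(width, 1),
        )

    def forward(self, stacked_image, action_state):
        img = self.image_tower(stacked_image)        # B x width x H' x W'
        act = self.action_tower(action_state)        # B x width
        merged = img + act[:, :, None, None]         # element-wise addition (broadcast)
        return self.head(merged).squeeze(-1)         # B scalar Q values
```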
Figure 2: The setup used in our approach. A dataset of observations from a randomized version of a simulated environment (a) are paired with observations from a canonical version of the same environment (b) in order to learn an adaptation function and allow observations from the real world (c) to be transformed into observations looking as if they came from the canonical simulation environment.
Figure 4: A graph showing how the performance of RCAN and directly learning a policy on domain randomization varies with the number of real-world on-policy grasps.
Figure 5: Real-world grasping objects that range greatly in size and appearance. Left: about 1,000 visually and physically diverse training objects used for joint finetuning. Right: the unseen test objects.
Figure 7: Network architecture of the generator function G. An RGB image from the source domain (either from the randomized domain or real-world domain) is processed via a U-Net style architecture [48] to produce a generated RGB image $x_a$, and auxiliaries that include a segmentation mask $m_a$ and depth image $d_a$. These auxiliaries force the generator to extract semantic and depth information about the scene and encode them in the intermediate latent representation, which is then available during the generation of the output image.
https://sites.google.com/view/rcan/
Acknowledgments

We would like to give special thanks to Ivonne Fajardo, Peter Pastor, Iñaki Gonzalo and Benjamin Swanson for overseeing the robot operations, Yunfei Bai for discussion on PyBullet, and Serkan Cabi for valuable comments on the paper.
Reinforcement learning for pivoting task. R Antonova, S Cruciani, C Smith, D Kragic, arXiv:1703.00472R. Antonova, S. Cruciani, C. Smith, and D. Kragic. Re- inforcement learning for pivoting task. arXiv:1703.00472, 2017.
Data-driven grasp synthesis-a survey. J Bohg, A Morales, T Asfour, D Kragic, IEEE Transactions on Robotics. 302J. Bohg, A. Morales, T. Asfour, and D. Kragic. Data-driven grasp synthesis-a survey. IEEE Transactions on Robotics, 30(2):289-309, 2014.
Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping. K Bousmalis, A Irpan, P Wohlhart, Y Bai, M Kelcey, M Kalakrishnan, L Downs, J Ibarz, P Pastor, K Konolige, S Levine, V Vanhoucke, IEEE Intl. Conference on Robotics and Automation. K. Bousmalis, A. Irpan, P. Wohlhart, Y. Bai, M. Kelcey, M. Kalakrishnan, L. Downs, J. Ibarz, P. Pastor, K. Konolige, S. Levine, and V. Vanhoucke. Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping. IEEE Intl. Conference on Robotics and Automation, 2018.
Unsupervised pixel-level domain adaptation with generative adversarial neural networks. K Bousmalis, N Silberman, D Dohan, D Erhan, D Krishnan, IEEE Conference on Computer Vision and Pattern Recognition. K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Kr- ishnan. Unsupervised pixel-level domain adaptation with generative adversarial neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
Domain separation networks. K Bousmalis, G Trigeorgis, N Silberman, D Krishnan, D Erhan, Advances in Neural Information Processing Systems. K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan. Domain separation networks. Advances in Neural Information Processing Systems, 2016.
Collaborative grasp planning with multiple object representations. P Brook, M Ciocarlie, K Hsiao, IEEE Intl. Conference on Robotics and Automation. P. Brook, M. Ciocarlie, and K. Hsiao. Collaborative grasp planning with multiple object representations. IEEE Intl. Conference on Robotics and Automation, 2011.
Beyond the shortest path: Unsupervised Domain Adaptation by Sampling Subspaces Along the Spline Flow. R Caseiro, J F Henriques, P Martins, J Batista, IEEE Conference on Computer Vision and Pattern Recognition. R. Caseiro, J. F. Henriques, P. Martins, and J. Batista. Be- yond the shortest path: Unsupervised Domain Adaptation by Sampling Subspaces Along the Spline Flow. In IEEE Con- ference on Computer Vision and Pattern Recognition, 2015.
A X Chang, T Funkhouser, L Guibas, P Hanrahan, Q Huang, Z Li, S Savarese, M Savva, S Song, H Su, J Xiao, L Yi, F Yu, arXiv:1512.03012ShapeNet: An information-rich 3D model repository. A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu. ShapeNet: An information-rich 3D model repository. arXiv:1512.03012, 2015.
Closing the sim-to-real loop: Adapting simulation randomization with real world experience. Y Chebotar, A Handa, V Makoviychuk, M Macklin, J Issac, N Ratliff, D Fox, arXiv:1810.05687Y. Chebotar, A. Handa, V. Makoviychuk, M. Macklin, J. Is- sac, N. Ratliff, and D. Fox. Closing the sim-to-real loop: Adapting simulation randomization with real world experi- ence. arXiv:1810.05687, 2018.
Photographic image synthesis with cascaded refinement networks. Q Chen, V Koltun, Q. Chen and V. Koltun. Photographic image synthesis with cascaded refinement networks. Intl. Conference on Com- puter Vision, 2017.
Towards reliable grasping and manipulation in household environments. M Ciocarlie, K Hsiao, E G Jones, S Chitta, R B Rusu, I A Ucan, Experimental Robotics. SpringerM. Ciocarlie, K. Hsiao, E. G. Jones, S. Chitta, R. B. Rusu, and I. A. Ş ucan. Towards reliable grasping and manipulation in household environments. In Experimental Robotics, pages 241-252. Springer, 2014.
Pybullet, a python module for physics simulation for games, robotics and machine learning. E Coumans, Y Bai, E. Coumans and Y. Bai. Pybullet, a python module for physics simulation for games, robotics and machine learn- ing. http://pybullet.org, 2016-2018.
Domain adaptation for visual applications: A comprehensive survey. G Csurka, arxiv:1702.05374G. Csurka. Domain adaptation for visual applications: A comprehensive survey. arxiv:1702.05374, 2017.
Domainadversarial training of neural networks. Y Ganin, E Ustinova, H Ajakan, P Germain, H Larochelle, F Laviolette, M Marchand, V Lempitsky, The Journal of Machine Learning Research. Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain- adversarial training of neural networks. The Journal of Ma- chine Learning Research, 2016.
Geodesic flow kernel for unsupervised domain adaptation. B Gong, Y Shi, F Sha, K Grauman, IEEE Conference on Computer Vision and Pattern Recognition. B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow ker- nel for unsupervised domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition, 2012.
Generative adversarial nets. I Goodfellow, J Pouget-Abadie, M Mirza, B Xu, D Warde-Farley, S Ozair, A Courville, Y Bengio, Advances in Neural Information Processing Systems. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Gen- erative adversarial nets. Advances in Neural Information Processing Systems, 2014.
Domain Adaptation for Object Recognition: An Unsupervised Approach. R Gopalan, R Li, R Chellappa, Intl. Conference on Computer Vision. R. Gopalan, R. Li, and R. Chellappa. Domain Adaptation for Object Recognition: An Unsupervised Approach. In Intl. Conference on Computer Vision, 2011.
High precision grasp pose detection in dense clutter. M Gualtieri, A Pas, K Saenko, R Platt, IEEE Intl. Conference on Intelligent Robots and Systems. M. Gualtieri, A. ten Pas, K. Saenko, and R. Platt. High pre- cision grasp pose detection in dense clutter. In IEEE Intl. Conference on Intelligent Robots and Systems, pages 598- 605, 2016.
Team delfts robot winner of the amazon picking challenge. C Hernandez, M Bharatheesha, W Ko, H Gaiser, J Tan, K Van Deurzen, M Vries, B Van Mil, J Van Egmond, R Burger, Robot World Cup. SpringerC. Hernandez, M. Bharatheesha, W. Ko, H. Gaiser, J. Tan, K. van Deurzen, M. de Vries, B. Van Mil, J. van Egmond, R. Burger, et al. Team delfts robot winner of the amazon picking challenge 2016. In Robot World Cup, pages 613- 624. Springer, 2016.
Multimodal templates for real-time detection of texture-less objects in heavily cluttered scenes. S Hinterstoisser, S Holzer, C Cagniart, S Ilic, K Konolige, N Navab, V Lepetit, Intl. Conference on Computer Vision. S. Hinterstoisser, S. Holzer, C. Cagniart, S. Ilic, K. Konolige, N. Navab, and V. Lepetit. Multimodal templates for real-time detection of texture-less objects in heavily cluttered scenes. Intl. Conference on Computer Vision, 2011.
Cycada: Cycle-consistent adversarial domain adaptation. J Hoffman, E Tzeng, T Park, J.-Y Zhu, P Isola, K Saenko, A A Efros, T Darrell, Intl. Conference on Machine Learning. J. Hoffman, E. Tzeng, T. Park, J.-Y. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell. Cycada: Cycle-consistent ad- versarial domain adaptation. In Intl. Conference on Machine Learning, 2018.
Batch normalization: Accelerating deep network training by reducing internal covariate shift. Intl. S Ioffe, C Szegedy, Conference on Machine Learning. S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Intl. Conference on Machine Learning, 2015.
Image-toimage translation with conditional adversarial networks. P Isola, J.-Y Zhu, T Zhou, A A Efros, IEEE Conference on Computer Vision and Pattern Recognition. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to- image translation with conditional adversarial networks. In IEEE Conference on Computer Vision and Pattern Recogni- tion, 2017.
Task-embedded control networks for few-shot imitation learning. S James, M Bloesch, A J Davison, Conference on Robot Learning. S. James, M. Bloesch, and A. J. Davison. Task-embedded control networks for few-shot imitation learning. Conference on Robot Learning, 2018.
Transferring end-toend visuomotor control from simulation to real world for a multi-stage task. S James, A J Davison, E Johns, Conference on Robot Learning. S. James, A. J. Davison, and E. Johns. Transferring end-to- end visuomotor control from simulation to real world for a multi-stage task. Conference on Robot Learning, 2017.
3d simulation for robot arm control with deep q-learning. S James, E Johns, NeurIPS Workshop on Deep Learning for Action and Interaction. S. James and E. Johns. 3d simulation for robot arm control with deep q-learning. NeurIPS Workshop on Deep Learning for Action and Interaction, 2016.
QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation. D Kalashnikov, A Irpan, P Pastor, J Ibarz, A Herzog, E Jang, D Quillen, E Holly, M Kalakrishnan, V Vanhoucke, S Levine, Conference on Robot Learning. D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakrishnan, V. Van- houcke, and S. Levine. QT-Opt: Scalable Deep Rein- forcement Learning for Vision-Based Robotic Manipulation. Conference on Robot Learning, 2018.
Leveraging big data for grasp planning. D Kappler, J Bohg, S Schaal, IEEE Intl. Conference on Robotics and Automation. D. Kappler, J. Bohg, and S. Schaal. Leveraging big data for grasp planning. IEEE Intl. Conference on Robotics and Automation, 2015.
Cloud-based robot grasping with the google object recognition engine. B Kehoe, A Matsukawa, S Candido, J Kuffner, K Goldberg, IEEE Intl. Conference on Robotics and Automation. B. Kehoe, A. Matsukawa, S. Candido, J. Kuffner, and K. Goldberg. Cloud-based robot grasping with the google object recognition engine. In IEEE Intl. Conference on Robotics and Automation, 2013.
Learning to discover cross-domain relations with generative adversarial networks. T Kim, M Cha, H Kim, J K Lee, J Kim, Conference on Machine Learning. T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim. Learning to discover cross-domain relations with generative adversarial networks. Intl. Conference on Machine Learning, 2017.
A Krizhevsky, I Sutskever, G E Hinton, Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems. A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. Ad- vances in Neural Information Processing Systems, 2012.
Autoencoding beyond pixels using a learned similarity metric. A B L Larsen, S K Sønderby, H Larochelle, O Winther, Conference on Machine Learning. A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. Intl. Conference on Machine Learning, 2016.
Deep learning for detecting robotic grasps. I Lenz, H Lee, A Saxena, The International Journal of Robotics Research. 344-5I. Lenz, H. Lee, and A. Saxena. Deep learning for detect- ing robotic grasps. The International Journal of Robotics Research, 34(4-5):705-724, 2015.
Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection. S Levine, P Pastor, A Krizhevsky, D Quillen, International Symposium on Experimental Robotics. S. Levine, P. Pastor, A. Krizhevsky, and D. Quillen. Learn- ing Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection. International Symposium on Experimental Robotics, 2016.
T P Lillicrap, J J Hunt, A Pritzel, N Heess, T Erez, Y Tassa, D Silver, D Wierstra, arXiv:1509.02971Continuous control with deep reinforcement learning. T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015.
Learning transferable features with deep adaptation networks. M Long, J Wang, Conference on Machine Learning. M. Long and J. Wang. Learning transferable features with deep adaptation networks. Intl. Conference on Machine Learning, 2015.
Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. J Mahler, J Liang, S Niyaz, M Laskey, R Doan, X Liu, J A Ojea, K Goldberg, Robotics: Science and Systems. J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg. Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. Robotics: Science and Systems, 2017.
Sim-to-real reinforcement learning for deformable object manipulation. J Matas, S James, A J Davison, Conference on Robot Learning. J. Matas, S. James, and A. J. Davison. Sim-to-real reinforce- ment learning for deformable object manipulation. Confer- ence on Robot Learning, 2018.
V Mnih, A P Badia, M Mirza, A Graves, T Lillicrap, T Harley, D Silver, K Kavukcuoglu, Asynchronous methods for deep reinforcement learning. Intl. Conference on Machine Learning. V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. Intl. Conference on Machine Learning, 2016.
Ensemble-cio: Full-body dynamic motion planning that transfers to physical humanoids. I Mordatch, K Lowrey, E Todorov, IEEE Intl. Conference on Intelligent Robots and Systems. I. Mordatch, K. Lowrey, and E. Todorov. Ensemble-cio: Full-body dynamic motion planning that transfers to phys- ical humanoids. IEEE Intl. Conference on Intelligent Robots and Systems, 2015.
The low-cost cartesian manipulator that won the amazon robotics challenge. D Morrison, A W Tow, M Mctaggart, R Smith, N Kelly-Boxall, S Wade-Mccue, J Erskine, R Grinover, A Gurman, T Hunn, IEEE Intl. Conference on Robotics and Automation. D. Morrison, A. W. Tow, M. McTaggart, R. Smith, N. Kelly- Boxall, S. Wade-McCue, J. Erskine, R. Grinover, A. Gur- man, T. Hunn, et al. Cartman: The low-cost cartesian ma- nipulator that won the amazon robotics challenge. IEEE Intl. Conference on Robotics and Automation, 2018.
Visual domain adaptation: A survey of recent advances. V M Patel, R Gopalan, R Li, R Chellappa, IEEE Signal Processing Magazine. 323V. M. Patel, R. Gopalan, R. Li, and R. Chellappa. Visual do- main adaptation: A survey of recent advances. IEEE Signal Processing Magazine, 32(3):53-69, 2015.
Sim-to-real transfer of robotic control with dynamics randomization. X B Peng, M Andrychowicz, W Zaremba, P Abbeel, IEEE Intl. Conference on Robotics and Automation. IEEEX. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel. Sim-to-real transfer of robotic control with dynamics ran- domization. In IEEE Intl. Conference on Robotics and Au- tomation. IEEE, 2018.
| []
|
[
"A non-linear dynamical systems approach to source compression for constrained sources",
"A non-linear dynamical systems approach to source compression for constrained sources"
]
| [
"Nithin Nagaraj [email protected] \nSchool of Natural and Engineering Sciences\nSchool of Natural and Engineering Sciences National Institute of Advanced Studies\nRajesh Sundaresan ECE Dept\nIndian Institute of Science\nNational Institute of Advanced Studies\n\n",
"Prabhakar G Vaidya [email protected] \nSchool of Natural and Engineering Sciences\nSchool of Natural and Engineering Sciences National Institute of Advanced Studies\nRajesh Sundaresan ECE Dept\nIndian Institute of Science\nNational Institute of Advanced Studies\n\n"
]
| [
"School of Natural and Engineering Sciences\nSchool of Natural and Engineering Sciences National Institute of Advanced Studies\nRajesh Sundaresan ECE Dept\nIndian Institute of Science\nNational Institute of Advanced Studies\n",
"School of Natural and Engineering Sciences\nSchool of Natural and Engineering Sciences National Institute of Advanced Studies\nRajesh Sundaresan ECE Dept\nIndian Institute of Science\nNational Institute of Advanced Studies\n"
]
| []
| We have recently established a strong connection between the Tent map (also known as Generalized Luroth Series or GLS which is a chaotic, ergodic and lebesgue measure preserving non-linear dynamical system) and Arithmetic coding which is a popular source compression algorithm used in international compression standards such as JPEG2000 and H.264. This was for independent and identically distributed binary sources. In this paper, we address the problem of compression of ergodic Markov binary sources with certain words forbidden from the message space. We shall show that GLS can be modified suitably to achieve Shannon's entropy rate for these sources.Constrained Source CodingIntroductionEmerging trends in data acquisition and imaging technologies have resulted in a rapid increase in the volume of data. Hence source coding (or data compression) continues to occupy an important area of research in the design of communication and storage systems. Shannon, the father of information theory, provided the definition of Entropy as the ultimate limit of lossless data compression. Ever since, there have been a number of compression algorithms which tries to achieve this limit.The source coding problem is stated as follows: Given an independent and identically distributed (i.i.d) binary source X emitting bits of information in the absence of noise, how do we obtain the shortest lossless representation of this information? Source coding is also known as entropy coding or data compression and is an important part of most communication systems[1].Shannon in his 1948 masterpiece [2] defined the most important concept of Information Theory, namely 'Entropy'. Shannon's Entropy of a source H(X) is defined as the amount of information content or the amount of uncertainty associated with the source, or equivalently the least number of bits required to represent the information content of a source without any loss. Shannon proposed a method (Shannon-Fano coding [3]) that achieves this limit as the block-length (number of symbols taken together) for coding increases asymptotically to infinity. Huffman[4]proposed what are called minimum-redundancy codes with integer code-word lengths that achieve Shannon's Entropy in the limit of the block-length tending to infinity. However, there are problems associated with both Shannon-Fano coding and Huffman coding (and other similar techniques). As the block-length increases, the number of alphabets exponentially increase, thereby increasing the memory needed for storing and handling. Also, the complexity of the encoding algorithm increases since these methods build code-words for all possible messages for a given length instead of designing the codes for a particular message at hand. Another disadvantage of all such methods is that they do not lend themselves easily to an adaptive coding technique[1]. The idea of adaptive coding is to update the probability model of the source during the coding process. Unfortunately, for both Huffman and Shannon-Fano coding, the updating of the probability model would result in re-computation of the code-words for all the symbols which is an expensive process.Recently, we have proposed a new approach to address the source coding problem using a dynamical systems perspective[5]. We modeled the information bits of the source X as measurements of a non-linear dynamical system. Since measurement is rarely accurate, we treat these measured bits of information as a symbolic sequence [6] of the Tent map[7]and their skewed cousins. 
Source coding is seen as determination of the initial condition that has generated the given symbolic sequence. Subsequently, we established that such an approach leads us to a well known entropy coding technique (Arithmetic Coding) which is optimal for compression. Furthermore, this new approach enabled a robust framework for joint source coding and encryption.In this paper, we focus on constrained sources where certain words are forbidden from its message space. Such sources are longer i.i.d because they violate the independence assumption (most natural sources are not independent, for e.g., english text is clearly not independent). We consider only ergodic Markov sources in this paper. We propose a nonlinear dynamical systems approach to compress messages from these constrained sources. | null | [
"https://export.arxiv.org/pdf/0709.1545v1.pdf"
]
| 16,375,326 | 0709.1545 | 22eace846324fd1f90999147f967bdcffcb1f00b |
A non-linear dynamical systems approach to source compression for constrained sources
11 Sep 2007 September 4, 2007
Nithin Nagaraj [email protected]
School of Natural and Engineering Sciences
School of Natural and Engineering Sciences National Institute of Advanced Studies
Rajesh Sundaresan ECE Dept
Indian Institute of Science
National Institute of Advanced Studies
Prabhakar G Vaidya [email protected]
School of Natural and Engineering Sciences
School of Natural and Engineering Sciences National Institute of Advanced Studies
Rajesh Sundaresan ECE Dept
Indian Institute of Science
National Institute of Advanced Studies
A non-linear dynamical systems approach to source compression for constrained sources
11 Sep 2007 September 4, 2007
We have recently established a strong connection between the Tent map (also known as Generalized Luröth Series or GLS, which is a chaotic, ergodic and Lebesgue-measure-preserving non-linear dynamical system) and Arithmetic coding, which is a popular source compression algorithm used in international compression standards such as JPEG2000 and H.264. This was for independent and identically distributed binary sources. In this paper, we address the problem of compression of ergodic Markov binary sources with certain words forbidden from the message space. We shall show that GLS can be modified suitably to achieve Shannon's entropy rate for these sources.
Constrained Source Coding
Introduction
Emerging trends in data acquisition and imaging technologies have resulted in a rapid increase in the volume of data. Hence source coding (or data compression) continues to occupy an important area of research in the design of communication and storage systems. Shannon, the father of information theory, provided the definition of Entropy as the ultimate limit of lossless data compression. Ever since, there have been a number of compression algorithms which try to achieve this limit. The source coding problem is stated as follows: Given an independent and identically distributed (i.i.d) binary source X emitting bits of information in the absence of noise, how do we obtain the shortest lossless representation of this information? Source coding is also known as entropy coding or data compression and is an important part of most communication systems [1]. Shannon in his 1948 masterpiece [2] defined the most important concept of Information Theory, namely 'Entropy'. Shannon's Entropy of a source H(X) is defined as the amount of information content or the amount of uncertainty associated with the source, or equivalently the least number of bits required to represent the information content of a source without any loss. Shannon proposed a method (Shannon-Fano coding [3]) that achieves this limit as the block-length (number of symbols taken together) for coding increases asymptotically to infinity. Huffman [4] proposed what are called minimum-redundancy codes with integer code-word lengths that achieve Shannon's Entropy in the limit of the block-length tending to infinity. However, there are problems associated with both Shannon-Fano coding and Huffman coding (and other similar techniques). As the block-length increases, the number of alphabets increases exponentially, thereby increasing the memory needed for storing and handling. Also, the complexity of the encoding algorithm increases since these methods build code-words for all possible messages of a given length instead of designing the codes for the particular message at hand. Another disadvantage of all such methods is that they do not lend themselves easily to an adaptive coding technique [1]. The idea of adaptive coding is to update the probability model of the source during the coding process. Unfortunately, for both Huffman and Shannon-Fano coding, the updating of the probability model would result in re-computation of the code-words for all the symbols, which is an expensive process. Recently, we have proposed a new approach to address the source coding problem using a dynamical systems perspective [5]. We modeled the information bits of the source X as measurements of a non-linear dynamical system. Since measurement is rarely accurate, we treat these measured bits of information as a symbolic sequence [6] of the Tent map [7] and their skewed cousins.
Source coding is seen as determination of the initial condition that has generated the given symbolic sequence. Subsequently, we established that such an approach leads us to a well-known entropy coding technique (Arithmetic Coding) which is optimal for compression. Furthermore, this new approach enabled a robust framework for joint source coding and encryption. In this paper, we focus on constrained sources where certain words are forbidden from the message space. Such sources are no longer i.i.d because they violate the independence assumption (most natural sources are not independent; e.g., English text is clearly not independent). We consider only ergodic Markov sources in this paper. We propose a non-linear dynamical systems approach to compress messages from these constrained sources.
We shall first start with an i.i.d source X which can be thought of as emitting a sequence of random variables {X 1 , X 2 , . . . , X N }. Each random variable takes values independently from the common alphabet A = {a 1 , a 2 , . . . , a |A| } with probabilities {p 1 , p 2 , . . . , p |A| } respectively. A message from this source is a particular sequence of values M = {x 1 , x 2 , . . . , x N } where x 1 is drawn from X 1 , x 2 from X 2 and so on. Since these are i.i.d, we can think of them as being drawn from the common random variable X (we use the notation X for both the source and the common random variable). We always deal with finite sized alphabets (|A| < ∞) and finite length messages (N < ∞) since real-world messages are always finite in length.
Our aim is to embed this i.i.d source into a non-linear discrete dynamical system. The reason for doing this will be clear soon. To this end, we model the i.i.d source as a 0-order Markov source (memoryless source). Each alphabet can be seen as a Markov state. Since these are independent, the transition probability from state i to state j is equal to the probability of being in state j. In other words, P ij = P (X r+1 = j|X r = i) = P (X r+1 = j).
We wish to embed this 0-order Markov source into a non-linear dynamical system (Ω, ℑ, T, µ) where Ω is the set [0, 1), ℑ is the Borel σ-algebra on [0, 1), T is the measure preserving transformation (yet to be defined) and µ is the invariant measure. Here we consider the probability measure (or Lebesgue measure) as the invariant measure. We now need to define T which preserves the Lebesgue measure and which can simulate the 0-order Markov source faithfully.
GLS embeds the i.i.d source
The non-linear discrete dynamical system known as Generalized Luröth series (GLS) [8] ( Figure 1) embeds the 0-order Markov source. We list the important properties of GLS which enable this embedding:
1. The number of partitions (disjoint intervals which cover the space, also known as cylinders) of the GLS is equal to the size of the alphabet.
2. Each alphabet is used to 'label' a partition.
3. The size of each partition is equal to the probability of the corresponding alphabet.
4. The map is linear and surjective on each of the partitions.
5. GLS preserves the Lebesgue measure [8].
6. Successive digits (or symbols) of the GLS are i.i.d. 7. Every unique sequence of digits (or symbols) maps to a unique point x in [0,1) under T . In other words, every point x has an unique representation in terms of the alphabets of GLS. We call x as the initial condition corresponding to the symbolic sequence (any sequence of digits composed from the alphabets associated with the partitions).
8. GLS is chaotic (positive Lyapunov exponent and positive topological entropy).
9. The GLS transformation T on [0,1) is isomorphic to the Bernoulli shift [8]. Hence GLS is ergodic with Lebesgue as the invariant measure.
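For concreteness, the following short Python sketch (an illustration of properties 5-7 above; the function name and the choice of a binary alphabet with positive-slope branches are ours and not taken from the text) iterates a binary GLS with partition lengths (p, 1 − p) and records the label of the partition visited at each step. With the initial condition drawn uniformly from [0, 1), the emitted symbols are i.i.d. with probabilities (p, 1 − p); in floating-point arithmetic only moderately long orbits should be trusted.

import random

def gls_symbols(x0, p, n):
    # Binary GLS: the partition [0, p) is labelled 0 and [p, 1) is labelled 1.
    # On each partition the map is linear and surjective onto [0, 1),
    # so Lebesgue measure is preserved (property 5).
    x, out = x0, []
    for _ in range(n):
        if x < p:
            out.append(0)
            x = x / p              # branch of slope 1/p on [0, p)
        else:
            out.append(1)
            x = (x - p) / (1 - p)  # branch of slope 1/(1-p) on [p, 1)
    return out

print(gls_symbols(random.random(), p=0.8, n=20))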
Invariant distribution and Lyapunov exponent of GLS
A probability density Π(x) on [0, 1) is invariant under T if, for every interval [c, d] ⊆ [0, 1), $\int_{T^{-1}([c,d])} \Pi(x)\,dx = \int_{c}^{d} \Pi(x)\,dx$, where $T^{-1}([c, d]) = \{x \,|\, c \leq T(x) \leq d\}$.
For the GLS, the above condition has the constant probability density on [0, 1) as the only solution. It then follows from Birkhoff's ergodic theorem [8] that the asymptotic probability distribution of the points of almost every trajectory is uniform. We can hence calculate the Lyapunov exponent as follows:
$$\lambda = \int_{0}^{1} \log_2(|T'(x)|)\, \Pi(x)\, dx. \quad \text{(a.e.)}$$
Here, we measure λ in bits/iteration.
Shannon's entropy = Lyapunov exponent for GLS
Since Π(x) is uniform with value 1 on [0, 1) and T'(x) is constant on each partition (T(x) being linear there), the above expression simplifies to:
$$\lambda = -\sum_{i=1,\, p_i \neq 0}^{|A|} p_i \log_2(p_i). \quad \text{(a.e.)}$$
This is nothing but Shannon's entropy of the source X. Thus the Lyapunov exponent of the GLS that embeds the i.i.d source X is equal to the Shannon entropy of the source. The Lyapunov exponent can be understood as the amount of information, in bits, revealed by the dynamical system in every iteration. The number of partitions together with the Lyapunov exponent completely characterizes the GLS (up to a permutation of the partitions and a flip of the graph in each partition; such a flip changes the sign of the slope, but not its magnitude).
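The equality between the Lyapunov exponent and Shannon's entropy can be checked numerically. The following sketch (ours; it simply estimates the integral defining λ by Monte Carlo sampling from the uniform invariant density of the binary GLS of the previous sketch) compares the two quantities.

import numpy as np

p = 0.8
x = np.random.rand(200_000)                         # samples from the invariant (uniform) density
slopes = np.where(x < p, 1.0 / p, 1.0 / (1.0 - p))  # |T'(x)| is constant on each partition
lyap = np.mean(np.log2(slopes))                     # Monte Carlo estimate of the integral for lambda
H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))    # Shannon's entropy of the source
print(lyap, H)                                      # the two numbers agree up to sampling error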
Coding the Initial Condition (GLS-coding)
In the previous section, we have seen how we can embed a stochastic i.i.d source X into a dynamical system (Generalized Luröth Series). The motivation for this is that modeling the stochastic source by embedding it into a non-linear dynamical system is a way to achieve compression.
We know that Huffman coding is not Shannon optimal [1]. This can be easily seen if the original source X takes only 2 values 'a' and 'b' with probabilities {p, 1 − p}. For p ≠ 0.5, Shannon's entropy of the source X is < 1 bit, whereas in Huffman coding we would allocate one bit each to encode 'a' and 'b'. That is the best Huffman coding can do. Thus, by using Huffman coding, we would be up to 1 bit away from Shannon's entropy per symbol. This can be very expensive for skewed sources (where p is close to 0 or 1) which have a very low Shannon entropy. This means we can do better for such sources.
We have already said that every sequence of measurements (the message) is a symbolic sequence on an appropriate GLS. It is a well-known fact about dynamical systems that the symbolic sequence contains as much information as the initial condition. Hence, we could as well find out the initial condition for every symbolic sequence and use that as our compressed stream. Hence, the task of capturing the essential information of the source X now translates to determining the initial condition on the GLS (the source model) and storing the initial condition in whatever base we wish (typically the initial condition is binary encoded). Thus, the task of source compression is now one of finding the initial condition. We shall henceforth refer to this method as GLS-coding. How good is GLS-coding when compared to Huffman coding?
GLS-coding = Arithmetic Coding
It turns out that the method just described is the popular Arithmetic coding algorithm which is used in international compression standards such as JPEG2000 and H.264. It is already known that Arithmetic coding always achieves Shannon's optimality without having to compute codewords for all possible messages, and this makes it better than Huffman coding. We have thus re-discovered Arithmetic coding using a dynamical systems approach to source coding. For full details of the proof of equivalence with Arithmetic coding, the reader is referred to [5].
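The interval-narrowing idea behind this equivalence can be illustrated with a minimal Python sketch (ours; the function name and the binary i.i.d source with symbol probabilities (p, 1 − p) are assumptions made for the example, and the use of floating point limits it to short messages). It computes the interval of initial conditions of the binary GLS whose symbolic sequence begins with a given message. Any point of this interval, written out in binary to sufficient precision, can serve as the compressed representation; the interval has length equal to the probability of the message, so about −log2(length) bits suffice, exactly as in Arithmetic coding.

def gls_encode_interval(message, p):
    # Interval of initial conditions whose binary-GLS symbolic sequence
    # starts with `message` (0 <-> partition [0, p), 1 <-> partition [p, 1)).
    low, high = 0.0, 1.0
    for bit in message:
        width = high - low
        if bit == 0:
            high = low + p * width      # keep the pre-image of [0, p)
        else:
            low = low + p * width       # keep the pre-image of [p, 1)
    return low, high

low, high = gls_encode_interval([0, 0, 1, 0, 0, 0, 0, 1], p=0.8)
print(low, high, high - low)            # interval length = p^6 * (1-p)^2 here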
Constrained Source Coding (no longer i.i.d.)
So far, we have dealt with an i.i.d source X. In the real world, most sources are not independent even if they are identically distributed. As an example, for a particle traveling in space, the measurements of its position and velocity are clearly not independent across successive time units. Thus, the assumption of independence needs to be relaxed.
In communications, the independence assumption is generally not true. For example, assume that the source X is an excerpt from an English text. The probability that the letter 'u' appears after 'q' is very high. Thus, given that a particular symbol has occurred, there is probably a very small set of alphabets that can occur with a very high probability.
In this paper, we consider another kind of dependence known as a constrained source. A constrained source is one where certain 'words' are forbidden from the message space. As an example, if the source is an English text, certain words are forbidden (words which are profane or even words which are just gibberish, like, e.g., 'QWZTY'). For the rest of the paper, we shall consider a binary constrained source X. The constraint is given in terms of a list of forbidden words (e.g., the words '101' and '011' may be known never to occur in any of the messages emitted by the source; both these words are then defined as forbidden words). This is clearly not an i.i.d source any more, because given that '10' has occurred, the next symbol can only be '0'. Thus the present symbol depends on what occurred in the last two instances. We are interested in compressing sequences of such a source in an optimal fashion.
Constrained Ergodic Markov Sources
We shall model constrained sources as a Markov source. In this paper, we shall consider only those Markov sources that are ergodic. A Markov source is ergodic if either the Markov chain itself is ergodic [9] or equivalently, the dynamical system in which the Markov chain is embedded is ergodic. We know by the theory of Markov chains [9] that all finite discrete time Markov chains that are ergodic (i.e. they are irreducible and aperiodic) have an unique stationary distribution, also known as invariant distribution. This means that given any arbitrary initial distribution on the Markov states, it eventually settles to an unique probability distribution. Once we have an unique stationary distribution with finite states, Shannon's entropy rate can be calculated. In general, it is hard to determine the invariant measure of the underlying dynamical system in which the Markov chain is embedded, but it is relatively easy to determine whether the Markov chain is ergodic (all we need to do is test for irreducibility and aperiodicity).
We already saw in Section 2 how an i.i.d source X can be seen as a 0-order Markov chain with finite number of states. We could do similarly for ergodic Markov sources. As an example, a binary ergodic Markov source with '101' as a forbidden word is shown in Figure 2.
Computing Shannon's entropy rate
In Figure 2, a general four state Markov source is also shown. The condition p 1 = p 2 = p 3 = p 4 = p implies that the source is i.i.d. The condition p 1 = p 2 = p 4 = p and p 3 = 1 corresponds to the ergodic Markov source with the forbidden word '101'. We shall deal with the general four state Markov source (assuming that it is ergodic).
The transition probability matrix for the general four state ergodic Markov source is given by:
$$P = \begin{pmatrix} p_1 & 1-p_1 & 0 & 0 \\ 0 & 0 & p_2 & 1-p_2 \\ p_3 & 1-p_3 & 0 & 0 \\ 0 & 0 & p_4 & 1-p_4 \end{pmatrix}.$$
The unique stationary probability distribution Π (a row vector of dimension 1 × |A|) can be determined by solving the equation:
$$\Pi P = \Pi. \qquad (1)$$
Since P is always a stochastic matrix (every row sums to 1) of dimension |A| × |A|, there exists a left eigenvector Π of P corresponding to the eigenvalue 1, unique up to scaling. We normalize Π so that its entries sum to one:
$$\tilde{\Pi} = \frac{\Pi}{\lVert \Pi \rVert_1}. \qquad (2)$$
Once we have $\tilde{\Pi} = \{\Phi_1, \Phi_2, \ldots, \Phi_{|A|}\}$, we can compute Shannon's entropy rate of the source X as follows:
$$H_X = -\frac{1}{\log_2(\mathrm{Num})} \sum_{i=1,\, \Phi_i \neq 0}^{|A|} \Phi_i \log_2(\Phi_i). \qquad (3)$$
The units of $H_X$ are bits/symbol and Num represents the number of Markov states. For the general 4-state ergodic Markov source, the factor $1/\log_2(\mathrm{Num})$ equals 1/2 since each state accounts for 2 bits of the message.
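As an illustration of equations (1)-(3), the following Python sketch (ours; the function name is an assumption and NumPy's eigendecomposition is used merely as one convenient way of solving ΠP = Π) computes the normalised stationary distribution and the entropy rate of the general 4-state source of Figure 2(b).

import numpy as np

def entropy_rate_4state(p1, p2, p3, p4):
    P = np.array([[p1,  1 - p1, 0.0, 0.0   ],
                  [0.0, 0.0,    p2,  1 - p2],
                  [p3,  1 - p3, 0.0, 0.0   ],
                  [0.0, 0.0,    p4,  1 - p4]])
    # Left eigenvector of P for the eigenvalue 1, i.e. a solution of Pi * P = Pi.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()                                       # normalise so the entries sum to one
    phi = pi[pi > 0]
    H = -np.sum(phi * np.log2(phi)) / np.log2(P.shape[0])    # eq. (3) with Num = 4
    return pi, H

# Forbidden word '101': p1 = p2 = p4 = 0.9 and p3 = 1 (cf. Figure 2).
print(entropy_rate_4state(0.9, 0.9, 1.0, 0.9))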
Modified GLS-coding
In this section, we shall embed the general 4-state ergodic Markov source into a non-linear dynamical system. We shall determine the initial condition for any given symbolic sequence (message) on the resulting dynamical system and use the initial condition to compress the message (similar to GLS-coding). We shall show that such a method achieves Shannon's entropy rate H X .
Embedding
We shall embed the general 4-state ergodic Markov source into a non-linear dynamical system similar to GLS for the i.i.d source. To each Markov state, we associate a Markov partition of the dynamical system (refer to Figure 3). We want the messages of this source to be symbolic sequences of the dynamical system. Draw straight lines with slopes yet to be determined connecting those states that communicate. For example, 00 communicates only with 01 and 10. Let the lengths of the Markov paritions be x 1 , x 2 , x 3 and x 4 . We then write the "measure-preserving" constraints as follows:
$$\begin{aligned} p_1 x_1 + p_3 x_3 &= x_1 &&\text{(for 00)}\\ (1-p_1)\, x_1 + (1-p_3)\, x_3 &= x_2 &&\text{(for 01)}\\ p_2 x_2 + p_4 x_4 &= x_3 &&\text{(for 10)}\\ (1-p_2)\, x_2 + (1-p_4)\, x_4 &= x_4 &&\text{(for 11)} \end{aligned}$$
These constraints automatically satisfy x 1 + x 2 + x 3 + x 4 = 1. Solving the first two of the above equation yields x 3 = x 2 . The above linear set of equations can be solved for the given set of {p 1 , p 2 , p 3 , p 4 } (if the Markov chain is ergodic, a unique solution always exists). The solution gives the length of the Markov partitions {x 1 , x 2 , x 3 , x 4 } which is unique. The slopes of the line segments are determined from the probabilities. We thus have a dynamical system (modified GLS) and we shall show that this embeds the ergodic Markov source.
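This linear system is straightforward to solve numerically. A minimal Python sketch (ours; the function name is an assumption, and the fourth, redundant constraint is replaced by the normalisation x1 + x2 + x3 + x4 = 1):

import numpy as np

def markov_partition_lengths(p1, p2, p3, p4):
    A = np.array([[p1 - 1, 0.0,  p3,     0.0],   # p1*x1 + p3*x3 = x1
                  [1 - p1, -1.0, 1 - p3, 0.0],   # (1-p1)*x1 + (1-p3)*x3 = x2
                  [0.0,    p2,   -1.0,   p4 ],   # p2*x2 + p4*x4 = x3
                  [1.0,    1.0,  1.0,    1.0]])  # normalisation (replaces the dependent 4th constraint)
    b = np.array([0.0, 0.0, 0.0, 1.0])
    return np.linalg.solve(A, b)

x = markov_partition_lengths(0.9, 0.9, 1.0, 0.9)
print(x)   # note that x[1] == x[2], i.e. x2 = x3, and that x equals the normalised stationary distribution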
Interpretation of the "measure-preserving" constraints
How do we understand the "measure-preserving" constraints? As an example, consider the Markov partition of length x_1 corresponding to the state 00. Since 00 communicates only with itself and 01, the range of the map on this partition is limited to the intervals 00 and 01. The slope of the line segment (in blue) which maps a fraction of x_1 to 00 is 1/p_1. This is because whenever we are in state 00, the probability that we end up in the same state on receiving the next symbol is p_1. Thus p_1 x_1 is the fraction of initial conditions which end up in the same state 00. The remaining fraction (1 − p_1) x_1 of initial conditions ends up in 01. Similarly, we do this for all the states. The "measure-preserving" constraint for, say, the state 00 indicates what fraction of initial conditions ends up in state 00 in one iteration. This is formed precisely as the sum of the fraction p_1 x_1 which comes from 00 and the fraction p_3 x_3 which comes from 10 (because these are the only two states that communicate with 00). This is how we get all the constraint equations.
4.6 "Measure-preserving" constraints are equivalent to ΠP = Π
The linear set of "measure-preserving" constraints can be written in matrix form as follows:
$$\begin{pmatrix} x_1 & x_2 & x_3 & x_4 \end{pmatrix} \begin{pmatrix} p_1 & 1-p_1 & 0 & 0 \\ 0 & 0 & p_2 & 1-p_2 \\ p_3 & 1-p_3 & 0 & 0 \\ 0 & 0 & p_4 & 1-p_4 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 & x_3 & x_4 \end{pmatrix}.$$
Notice that the above equation is the same as ΠP = Π, where P is the transition probability matrix of the Markov source. Thus {x_1, x_2, x_3, x_4} is nothing but the unique stationary probability distribution obtained from the equation ΠP = Π.
We have thus embedded the ergodic Markov source into a non-linear dynamical system.
Modified GLS-coding achieves Shannon's entropy rate
In order to prove that encoding the initial condition on the modified GLS will achieve Shannon's entropy rate, we compute the Lyapunov exponent of the modified GLS and show that this is the same as Shannon's entropy rate. In fact, the non-linear dynamical system is a faithful modelling of the general 4-state ergodic Markov source.
It is important to observe that the modified GLS shown in Figure 3 does not preserve the Lebesgue measure. However, if we restrict all the intervals on the y-axis to strictly lie in one of the four intervals labelled 00, 01, 10 and 11 only, then we see that the Lebesgue measure (=probability measure) is actually preserved (sum of the measures of all the inverse images of a particular interval on the y-axis is equal to the measure of that interval we started with). In fact, this is what we ensured by the "measure preserving" constraints in the first place. We could hence compute the Lyapunov exponent as if the map preserved the Lebesgue measure everywhere.
Computation of Lyapunov exponents is then straightforward and yields the following expression (the base of the logarithm in all our Lyapunov exponent computation is always 2, though the standard practice is to use e.)
$$\lambda = -\sum_{i=1}^{4} p_i \log_2(p_i) = 2 H_X.$$
This is nothing but twice the Shannon's entropy rate H X of the source. The factor 2 is because in every iteration, the symbolic sequence emitted by the source consists of two symbols. Thus, we have faithfully modelled the general 4-state ergodic Markov source as a non-linear dynamical system. Hence modified-GLS coding would achieve the Shannon's entropy rate and is optimal for compression.
Conclusions and Future Research Directions
In this paper, we have shown how one can embed an i.i.d source into a non-linear dynamical system, namely the Generalized Luröth Series or GLS. We then considered Markov sources, which are not independent anymore. Constrained sources were defined as ergodic Markov sources with certain forbidden words. We showed how to compute Shannon's entropy rate and also modified the GLS to compress messages from these sources. The modified GLS is a faithful embedding of constrained sources and hence achieves Shannon's entropy rate.
It is possible to generalize our method for a list of arbitrary forbidden words. Implementation issues were not discussed in this paper. It may be worthwhile to investigate joint compression and encryption for constrained sources.
Figure 1: GLS embeds an i.i.d source faithfully. The digits {a_1, a_2, . . . , a_{|A|}} are the alphabets from A. The lengths of the intervals are precisely the respective probabilities {p_1, p_2, . . . , p_{|A|}}.
Figure 2: (a) Left: Constrained ergodic Markov source (forbidden word = '101'). (b) Right: A general 4-state ergodic Markov source. Note: The case p_1 = p_2 = p_3 = p_4 = p yields an i.i.d source and the case p_1 = p_2 = p_4 = p, p_3 = 1 yields the source with forbidden word '101'. Thus the general 4-state ergodic Markov source captures both cases.
Figure 3: Embedding a constrained ergodic Markov source in a dynamical system with the map T : [0,1) → [0,1).
Figure 4: Embedding for the source with forbidden word '101'. Notice how one of the slopes becomes infinite since '10' is forbidden to communicate with '01'.
Acknowledgements
Nithin Nagaraj would like to express his sincere gratitude to the Department of Science and Technology (DST) for funding the Ph.D. fellowship program at the National Institute of Advanced Studies (NIAS). We gratefully acknowledge DST, Govt. of India and the Council of Scientific and Industrial Research (CSIR), Govt. of India for providing a travel grant to present this work at the "International Conference on Non-linear Dynamics and Chaos: Advances and Perspectives", held at the University of Aberdeen, Scotland, September 17-21, 2007.
[1] K. Sayood, Introduction to Data Compression, Morgan Kaufmann (1996).
[2] C.E. Shannon, A Mathematical Theory of Communication, Bell Sys. Tech. J. 27 (1948) 379-423.
[3] D. Salomon, Data Compression: The Complete Reference, Springer-Verlag, New York (2000).
[4] D.A. Huffman, A method for the construction of minimum-redundancy codes, Proceedings of the I.R.E. (1952) 1098-1102.
[5] N. Nagaraj, P.G. Vaidya, K.G. Bhat, Joint Entropy Coding and Encryption using Robust Chaos, arXiv.org:nlin.CD/0608051 (2006).
[6] P.G. Vaidya, N. Nagaraj, Foundational Issues of Chaos and Randomness: "God or Devil, Do We Have A Choice?", Proc. of Foundations of Sciences, Project of History of Indian Science, Philosophy and Culture, New Delhi (2006).
[7] K.T. Alligood, T.D. Sauer, J.A. Yorke, Chaos: An Introduction to Dynamical Systems, Springer, New York (1996).
[8] K. Dajani, C. Kraaikamp, Ergodic Theory of Numbers, 29, Mathematical Association of America, Washington, DC (2002).
[9] S.P. Meyn, R.L. Tweedie, Markov Chains and Stochastic Stability, Springer-Verlag (1993).
| []
|
[
"Higher-Page Hodge Theory of Compact Complex Manifolds",
"Higher-Page Hodge Theory of Compact Complex Manifolds"
]
| [
"Dan Popovici ",
"Jonas Stelzig ",
"Luis Ugarte "
]
| []
| []
| On a compact ∂∂-manifold X, one has the Hodge decomposition: the de Rham cohomology groups split into subspaces of pure-type classes as H k dR (X) = ⊕ p+q=k H p,q (X), where the H p,q (X) are canonically isomorphic to the Dolbeault cohomology groups H p,q ∂ (X). For an arbitrary nonnegative integer r, we introduce the class of page-r-∂∂-manifolds by requiring the analogue of the Hodge decomposition to hold on a compact complex manifold X when the usual Dolbeault cohomology groups H p, q ∂ (X) are replaced by the spaces E p, q r+1 (X) featuring on the (r + 1)-st page of the Frölicher spectral sequence of X. The class of page-r-∂∂-manifolds coincides with the usual class of ∂∂-manifolds when r = 0 but may increase as r increases. We give two kinds of applications. On the one hand, we give a purely numerical characterisation of the page-r-∂∂-property in terms of dimensions of various cohomology vector spaces. On the other hand, we obtain several classes of examples, including all complex parallelisable nilmanifolds and certain families of solvmanifolds and abelian nilmanifolds. Further, there are general results about the behaviour of this new class under standard constructions like blow-ups and deformations.Keywords: Hodge theory; cohomology theories of compact complex manifolds; Frölicher spectral sequence; deformations of complex structures; blow-up of a complex manifold; nilmanifolds and solvmanifolds. | 10.2422/2036-2145.202111_014 | [
"https://export.arxiv.org/pdf/2001.02313v3.pdf"
]
| 220,250,173 | 2001.02313 | b2c99016fc2cbaedd3e0bc9ec2aaee2e63edf75c |
Higher-Page Hodge Theory of Compact Complex Manifolds
Dan Popovici
Jonas Stelzig
Luis Ugarte
Higher-Page Hodge Theory of Compact Complex Manifolds
Hodge theorycohomology theories of compact complex manifoldsFrölicher spectral sequencedeformations of complex structuresblow-up of a complex manifoldnilmanifolds and solvmanifolds
On a compact ∂∂-manifold X, one has the Hodge decomposition: the de Rham cohomology groups split into subspaces of pure-type classes as H k dR (X) = ⊕ p+q=k H p,q (X), where the H p,q (X) are canonically isomorphic to the Dolbeault cohomology groups H p,q ∂ (X). For an arbitrary nonnegative integer r, we introduce the class of page-r-∂∂-manifolds by requiring the analogue of the Hodge decomposition to hold on a compact complex manifold X when the usual Dolbeault cohomology groups H p, q ∂ (X) are replaced by the spaces E p, q r+1 (X) featuring on the (r + 1)-st page of the Frölicher spectral sequence of X. The class of page-r-∂∂-manifolds coincides with the usual class of ∂∂-manifolds when r = 0 but may increase as r increases. We give two kinds of applications. On the one hand, we give a purely numerical characterisation of the page-r-∂∂-property in terms of dimensions of various cohomology vector spaces. On the other hand, we obtain several classes of examples, including all complex parallelisable nilmanifolds and certain families of solvmanifolds and abelian nilmanifolds. Further, there are general results about the behaviour of this new class under standard constructions like blow-ups and deformations.Keywords: Hodge theory; cohomology theories of compact complex manifolds; Frölicher spectral sequence; deformations of complex structures; blow-up of a complex manifold; nilmanifolds and solvmanifolds.
Introduction
Let X be an n-dimensional compact complex manifold. Recall the following notion that goes back to Deligne-Griffiths-Morgan-Sullivan [15] in the form (equivalent to that in [15]) and with the name given in [30,Definition 1.6]. Definition 1.1. A compact complex manifold X is said to be a ∂∂-manifold if for any d-closed pure-type form u on X, the following exactness properties are equivalent:
u is d-exact ⇐⇒ u is ∂-exact ⇐⇒ u is ∂̄-exact ⇐⇒ u is ∂∂̄-exact.
The classical ∂∂-lemma asserts that every compact Kähler manifold is a ∂∂-manifold. More generally, thanks to [15], every class C manifold (i.e. every compact complex manifold bimeromorphically equivalent to a compact Kähler manifold) is a ∂∂-manifold. However, there exist many ∂∂-manifolds that are not of class C (see e.g. [11], [24], [42], [30,Obs. 4.10], [3], [5,Thm 3.8], [16]).
On the other hand, every ∂∂-manifold admits canonically, namely in a way that depends only on the complex structure and does not involve arbitrary choices of metrics or other objects, a Hodge decomposition of the de Rham cohomology into a direct sum of subspaces of pure-type classes H^k_dR(X) = ⊕_{p+q=k} H^{p,q}(X), with canonical isomorphisms H^{p,q}_∂̄(X) ≅ H^{p,q}(X), and the conjugation induces the Hodge symmetry, i.e. an antilinear isomorphism H^{p,q}_∂̄(X) ≅ H^{q,p}_∂̄(X) (accounting for the fact that some authors call these manifolds cohomologically Kähler). In particular, the Frölicher spectral sequence (FSS) of any ∂∂-manifold degenerates at the first page. However, the converse fails. (Indeed, as is well known, any non-Kähler compact complex surface provides a counter-example to the converse.) Meanwhile, ∂∂-manifolds also have good deformation and modification properties. For a review of these and some further properties of ∂∂-manifolds, see e.g. [30]. Let us now mention only a few of these for the reader's convenience: (1) The ∂∂-property is deformation open in the following sense: if (X_t)_{t∈B} is a holomorphic family of compact complex manifolds X_t parametrised by an open disc B ⊂ C about the origin (or by any complex manifold B), whenever X_0 (or any fibre X_{t_0} with t_0 ∈ B) is a ∂∂-manifold, every X_t with t ∈ B sufficiently close to 0 (resp. to t_0 ∈ B) is again a ∂∂-manifold. (See [42] or [8].) (2) The ∂∂-property is stable under contractions in the following sense: if f : X̃ −→ X is a holomorphic bimeromorphic map (i.e. a modification) between compact complex manifolds and if X̃ is a ∂∂-manifold, then X is again a ∂∂-manifold. (See [15, Theorem 5.22].) In particular, f may be the blow-up of X along a smooth centre Z. However, it is still unknown whether X being a ∂∂-manifold implies that X̃ is a ∂∂-manifold in full generality. It has recently been shown (see e.g. [6], [37], [38], [35], [43] or also §5) that, on the one hand, X̃ is a ∂∂-manifold whenever both X and Z are, and, on the other hand, that the ∂∂-property is stable under blow-ups with smooth centres if and only if it is inherited by any submanifold of a ∂∂-manifold. So, the remaining open problem in this direction is this submanifold heredity issue.
(3) The ∂∂-property of compact Calabi-Yau manifolds implies the unobstructedness of the small deformations of the complex structure in the following sense: if X is a compact ∂∂-manifold whose canonical bundle K X is trivial, the Kuranishi family of X is unobstructed. (This statement in the case where X is Kähler is called the Bogomolov-Tian-Todorov Theorem -see [9], [39], [40].)
One of our main goals in this work is to relax the notion of ∂∂-manifold to accommodate a Hodge theory involving the higher pages of the Frölicher spectral sequence (FSS) of X when degeneration does not occur at the first page. As a result, we enhance the class of ∂∂-manifolds to classes of manifolds that seem worthy of further attention. This is motivated by the existence of many well-known non-Kähler and even non-∂∂ compact complex manifolds, such as the Iwasawa manifold and higher-dimensional analogues thereof, whose Frölicher spectral sequence degenerates only at the second page or later. Thus, we aim for a unified Hodge theoretical treatment of as large a class of compact complex manifolds as possible.
This approach enables us to get exact analogues of the usual Hodge decomposition and symmetry properties for what we call page-(r − 1)-∂∂-manifolds using the spaces E p, q r (X) featuring on the r-th page of the FSS, for some given r ≥ 2, rather than the usual spaces E p, q 1 (X) = H p, q ∂ (X) of the Dolbeault cohomology. The standard notion of ∂∂-manifold coincides with that of page-0-∂∂-manifold, while the class of page-r-∂∂-manifolds, our main find in this work, may grow as r ∈ N increases.
Overview of the results
The following statement sums up several results and definitions of sections 2 and 3.
Theorem and Definition 1.2. Let X be a compact complex manifold with dim C X = n. Fix an arbitrary r ∈ N . The following statements are equivalent.
(1) X has the E r -Hodge Decomposition property, i.e. for every k, there is a d-closed representative of any class in E p,q r (X) and sending such a representative to its de Rham class induces a well-defined isomorphism p+q=k E p, q r (X) −→ H k dR (X, C).
(2) The Frölicher spectral sequence of X degenerates at E r and the Hodge filtration induces a pure Hodge structure on the de Rham cohomology in every degree.
(3) The double complex (⊕_{p,q∈Z} C^∞_{p,q}(X), ∂, ∂̄) of smooth, complex-valued differential forms is isomorphic to a direct sum of indecomposable double complexes of the following types: squares, namely double complexes spanned by elements a, ∂a, ∂̄a and ∂∂̄a, all non-zero; dots, namely double complexes spanned by a single element a with ∂a = ∂̄a = 0; and zigzags of even length ≤ 2(r − 1).
(4) There is an equality
$$h_{BC}(X) = \sum_{i=1}^{r-1} e_i(X) - (r-2)\, b(X),$$
where h_{BC}(X) := ∑_{p,q∈Z} dim H^{p,q}_{BC}(X) is the total Bott-Chern cohomology dimension, e_i(X) := ∑_{p,q} dim E^{p,q}_i(X) is the total dimension of the i-th page of the Frölicher spectral sequence and b(X) := ∑_{i∈Z} b_i(X) is the total Betti number.
A compact complex manifold X that satisfies any of these equivalent conditions is said to be a page-(r − 1)-∂∂-manifold.
For example, when they are of lengths 2 and 4, the zigzags mentioned under (3) in the above statement are of the following types:
length-2 zigzags spanned by a pair a, ∂̄a or by a pair a, ∂a, and length-4 zigzags spanned by four elements a_1, ∂̄a_1, a_2, ∂̄a_2 in which ∂a_2 is a non-zero multiple of ∂̄a_1, together with the analogous configuration obtained by exchanging the roles of ∂ and ∂̄.
In all these cases, a and the a_i are non-zero elements of pure bidegrees, the differentials just listed induce isomorphisms between the corresponding one-dimensional spaces, and all other components of ∂ and ∂̄ vanish.
Note that the pure-Hodge condition of the de Rham cohomology imposes topological restrictions on the underlying manifold: as in the Kähler case, the odd-degree Betti numbers b 2k+1 (X) of page-r-∂∂-manifolds have to be even. Condition (4) is actually the equality case of the following general inequality, proved in section 3. Theorem 1.3. Let X be a compact complex manifold with dim C X = n. Then, for any r ∈ N , the following inequality holds
$$h_{BC}(X) \geq \sum_{i=1}^{r-1} e_i(X) - (r-2)\, b(X),$$
with equality if and only if X is a page-(r − 1)-∂∂-manifold.
Note that one can rewrite this inequality to obtain a lower bound for the total Betti number (a topological quantity) in terms of analytic invariants.
The statements for r = 1 (i.e. h_BC(X) ≥ b(X)) and for r = 2 (i.e. h_BC(X) ≥ h_∂̄(X)) were known (see [8]), but the characterisation of the equality case is new for r = 2. For r ≥ 3, both the inequality and the characterisation of the equality case are new.
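For instance, specialising Theorem 1.3 to r = 3 by direct substitution gives the bound
$$h_{BC}(X) \geq e_1(X) + e_2(X) - b(X),$$
with equality if and only if X is a page-2-∂∂-manifold.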
In section 4, we provide examples of primary (i.e. not derived from others of their kind) page-r-∂∂manifolds that are not page-0-∂∂-manifolds (i.e. not ∂∂-manifolds in the usual sense). We produce three different classes of such examples with r = 1:
(1) all the complex parallelisable nilmanifolds (see Theorem 4.7). These are not ∂∂-manifolds unless they are complex tori. This class of manifolds includes the complex 3-dimensional Iwasawa manifold, but is far wider;
(2) two families of nilmanifolds with abelian complex structures with members of arbitrarily high dimensions. (See Theorem 4.8 and Proposition 4.10.) In some sense, these form the opposite of the above class (1) among nilmanifolds (see Remarks 4.9 and 4.11);
(3) the Nakamura solvmanifolds (which are not nilmanifolds) considered in [29] and [4]. (See Corollary 4.12.)
In the subsequent work [20] by H. Kasuya and the second-named author, examples (1) and (3) were generalised; in particular, all complex parallelisable solvmanifolds were shown to be page-1-∂∂-manifolds.
In section 5, we study the behaviour of page-r-∂∂-manifolds under standard geometric operations. In particular, we obtain construction methods for new examples from given ones. These include:
(4) products of page-r i -∂∂-manifolds, with possibly different r i 's;
(5) blow-ups of page-r 1 -∂∂-submanifolds of page-r 2 -∂∂-manifolds, with possibly different r i 's;
(6) the projectivised bundle P(V) of any holomorphic vector bundle V on a page-r-∂∂-manifold;
(7) small deformations of a page-1-∂∂-manifold with fixed Hodge numbers.
Page-r-∂∂-manifolds
In this section, we give the main definitions and some of the basic properties of the new class of manifolds that we introduce herein. Unless otherwise stated, X will stand for an n-dimensional compact complex manifold. We refer to [17] for the setup of the Frölicher spectral sequence and [13] for a hands-on description of the groups appearing on the higher pages.
Preliminaries
We start by recalling some well-known facts in order to fix the setup and to spell out the relationship between the Frölicher spectral sequence and the (filtered) de Rham cohomology when the latter is pure in a sense that will be specified.
For all non-negative integers k ≤ 2n and p ≤ min{k, n}, it is standard to put
$$F^p C^\infty_k(X, \mathbb{C}) := \bigoplus_{i \geq p} C^\infty_{i,\, k-i}(X) \subset C^\infty_k(X, \mathbb{C}) \qquad (1)$$
and get a filtration of C ∞ k (X, C) for every k:
{0} ⊂ · · · ⊂ F p+1 C ∞ k (X, C) ⊂ F p C ∞ k (X, C) ⊂ · · · ⊂ C ∞ k (X, C).(2)
It is equally standard to put
$$F^p H^k_{dR}(X, \mathbb{C}) := \frac{F^p C^\infty_k(X, \mathbb{C}) \cap \ker d}{F^p C^\infty_k(X, \mathbb{C}) \cap \operatorname{im} d} \subset H^k_{dR}(X, \mathbb{C}),$$
the subspace of de Rham cohomology classes of degree k that are representable by forms in F p C ∞ k (X, C), to get a filtration of H k dR (X, C) for every k:
{0} ⊂ · · · ⊂ F p+1 H k dR (X, C) ⊂ F p H k dR (X, C) ⊂ · · · ⊂ H k dR (X, C).(3)
Let us now recall the following standard result (see e.g. [17,Lemma 2]).
Theorem 2.1. Let X be an n-dimensional compact complex manifold. For every p, q ∈ {0, . . . , n}, the vector space E p, q ∞ (X) of type (p, q) on the degenerating page of the Frölicher spectral sequence of X is naturally isomorphic to the graded module associated with the filtration (3):
$$E^{p,q}_\infty(X) \simeq G^p H^{p+q}_{dR}(X, \mathbb{C}) := \frac{F^p H^{p+q}_{dR}(X, \mathbb{C})}{F^{p+1} H^{p+q}_{dR}(X, \mathbb{C})},$$
where the isomorphism G^p H^{p+q}_dR(X, C) ≃ E^{p,q}_∞(X) is induced by the projection
$$F^p H^{p+q}_{dR}(X, \mathbb{C}) \ni \bigg\{\sum_{i \geq p} u^{i,\, p+q-i}\bigg\}_{dR} \longmapsto \{u^{p,q}\}_{E_\infty} \in E^{p,q}_\infty(X).$$
The following statement is immediate to prove.
Lemma 2.2. The following relations hold:
$$C^\infty_k(X, \mathbb{C}) = F^p C^\infty_k(X, \mathbb{C}) \oplus \bar{F}^{k-p+1} C^\infty_k(X, \mathbb{C}) \quad \text{for all } 0 \leq p \leq \min\{k, n\}; \qquad (4)$$
$$C^\infty_{p,q}(X) = F^p C^\infty_k(X, \mathbb{C}) \cap \bar{F}^{q} C^\infty_k(X, \mathbb{C}) \qquad (5)$$
for all p, q such that p + q = k.
On the other hand, for all p, q ∈ {0, . . . , n}, let us consider the following space of de Rham cohomology classes of degree p + q that are representable by pure-type (p, q)-forms:
$$H^{p,q}_{dR}(X) := \big\{ c \in H^{p+q}_{dR}(X, \mathbb{C}) \;\big|\; \exists\, \alpha \in C^\infty_{p,q}(X) \text{ s.t. } [\alpha] = c \big\} \subset H^{p+q}_{dR}(X, \mathbb{C}).$$
This definition makes it obvious that the analogue of the Hodge symmetry for the spaces H p, q dR (X) always holds. In other words, the conjugation induces an isomorphism
$$H^{p,q}_{dR}(X) \ni \{\alpha\}_{dR} \longmapsto \{\bar{\alpha}\}_{dR} \in H^{q,p}_{dR}(X) \quad \text{for all } 0 \leq p, q \leq n. \qquad (6)$$
The following analogue in cohomology of identity (5), resp. of one of the inclusions defining the filtration (1) of C ∞ k (X, C), can be immediately proved to hold. Lemma 2.3. The following relations hold:
$$H^{p,q}_{dR}(X) = F^p H^k_{dR}(X, \mathbb{C}) \cap \bar{F}^{q} H^k_{dR}(X, \mathbb{C}) \qquad (7)$$
for all p, q such that p + q = k;
$$H^{i,\, k-i}_{dR}(X) \subset F^p H^k_{dR}(X, \mathbb{C}) \qquad (8)$$
for all i ≥ p and all p ≤ k.
Proof. Everything is obvious, except perhaps the inclusion "⊃" in (7) which can be proved as follows.
Let $\{\alpha\}_{dR} = \{\beta\}_{dR} \in F^p H^k_{dR}(X, \mathbb{C}) \cap \bar{F}^{q} H^k_{dR}(X, \mathbb{C})$ with $\alpha = \sum_{i \geq p} \alpha^{i,\, k-i} \in F^p C^\infty_k(X, \mathbb{C})$ and $\beta = \sum_{s \leq p} \beta^{s,\, k-s} \in \bar{F}^{q} C^\infty_k(X, \mathbb{C})$. Since α and β are de Rham-cohomologous, there exists a form σ ∈ C^∞_{k−1}(X, C) such that α − β = dσ. This identity implies, after equating the terms with a holomorphic degree > p on either side, the second identity below:
$$\alpha - \alpha^{p,q} = \sum_{i > p} \alpha^{i,\, k-i} = d\bigg(\sum_{j \geq p} \sigma^{j,\, k-1-j}\bigg) - \bar\partial \sigma^{p,\, q-1},$$
which, in turn, implies that $\{\alpha\}_{dR} = \{\alpha^{p,q} - \bar\partial\sigma^{p,\, q-1}\}_{dR}$. Since $\alpha^{p,q} - \bar\partial\sigma^{p,\, q-1}$ is a (p, q)-form, we get $\{\alpha\}_{dR} \in H^{p,q}_{dR}(X)$ and we are done.
Note that, with no assumption on X, the subspaces H i, k−i dR (X) may have non-zero mutual intersections inside H k dR (X, C), i.e. they may not sit in a direct sum. Similarly, they may not fill out the whole space (i.e. their linear span could be a proper subspace). If they do, i.e. if
$$H^k_{dR}(X, \mathbb{C}) = \bigoplus_{p+q=k} H^{p,q}_{dR}(X),$$
then H^k_dR(X, C) is said to carry a pure Hodge structure (induced from the Hodge filtration), [14]. We also recall from [14] that purity in degree k is equivalent to the filtrations F and F̄ on H^k_dR(X, C) being k-opposed, which means that the natural map F^p H^k_dR(X, C) ⊕ F̄^q H^k_dR(X, C) −→ H^k_dR(X, C) is an isomorphism whenever p + q = k + 1, or equivalently that gr^p_F gr^q_F̄ H^k_dR(X, C) = 0 whenever p + q ≠ k. We now introduce the following shorthand terminology. Definition 2.4. Let X be an n-dimensional compact complex manifold. The de Rham cohomology of X is said to be pure if the Hodge filtration induces a pure Hodge structure in every degree, i.e. if H^k_dR(X, C) = ⊕_{p+q=k} H^{p,q}_dR(X) for all k ∈ {0, . . . , 2n}.
Note on terminology 2.5. Some authors call the property of the Hodge filtration inducing a pure Hodge structure in degree k complex-C ∞ -pure-and-full in degree k (cf. [25]). It was remarked in [7] that the complex-C ∞ -full property in degree k (i.e. the sum of the H p, q dR (X)'s is not necessarily direct but it fills out H k dR (X, C)) implies the complex-C ∞ -pure property in degree (2n − k) (i.e. the sum of the H p, q dR (X)'s is direct but it may not fill out H 2n−k dR (X, C)). We will show further down that the converse is also true, i.e. the complex-C ∞ -pure property in degree k implies the complex-C ∞ -full property in degree (2n − k). Therefore, compact complex manifolds satisfying either the complex-C ∞ -full property or the complex-C ∞ -pure property in every degree k are of pure de Rham cohomology in the sense of our Definition 2.4. Proposition 2.6. Suppose X is an n-dimensional compact complex manifold. Then its de Rham cohomology is pure if and only if
$$F^p H^k_{dR}(X, \mathbb{C}) = \bigoplus_{i \geq p} H^{i,\, k-i}_{dR}(X) \quad \text{for all } p \leq k. \qquad (9)$$
In particular, for pure X, the spaces E p, q ∞ (X) in the Frölicher spectral sequence of X are given by
$$E^{p,q}_\infty(X) \simeq H^{p,q}_{dR}(X) \quad \text{for all } p, q \in \{0, \ldots, n\}, \qquad (10)$$
where ≃ stands for the natural isomorphism induced by the identity.
Proof. Equation (9) for p = 0 (and any k) is just the definition of purity, so it remains to prove the converse. Assume X is pure. Inclusion "⊃" in (9) follows at once from (8) and from the de Rham purity assumption. To prove inclusion "⊂" in (9), let $\{\alpha\}_{dR} \in F^p H^k_{dR}(X, \mathbb{C})$ with $\alpha = \sum_{r \geq p} \alpha^{r,\, k-r} \in \ker d$. Since $F^p H^k_{dR}(X, \mathbb{C}) \subset H^k_{dR}(X, \mathbb{C}) = \bigoplus_{p+q=k} H^{p,q}_{dR}(X)$ (the last identity being due to the purity assumption), there exist pure-type d-closed forms $\beta^{r,\, k-r}$ such that $\{\alpha\}_{dR} = \{\sum_{0 \leq r \leq k} \beta^{r,\, k-r}\}_{dR}$. Hence, there exists a (k − 1)-form σ such that $\alpha - \sum_{0 \leq r \leq k} \beta^{r,\, k-r} = -d\sigma$, which amounts to
$$\alpha^{r,\, k-r} - \beta^{r,\, k-r} + \partial\sigma^{r-1,\, k-r} + \bar\partial\sigma^{r,\, k-r-1} = 0, \qquad r \in \{0, \ldots, k\},$$
with the understanding that $\alpha^{r,\, k-r} = 0$ whenever r < p. Therefore, $\beta^{r,\, k-r} = \partial\sigma^{r-1,\, k-r} + \bar\partial\sigma^{r,\, k-r-1}$ whenever r < p. Since every $\beta^{r,\, k-r}$ is d-closed (hence also ∂- and ∂̄-closed), we infer that $\sigma^{r-1,\, k-r}$ and $\sigma^{r,\, k-r-1}$ are $\partial\bar\partial$-closed for every r < p. Hence
$$\sum_{r=0}^{k} \beta^{r,\, k-r} - d\bigg(\sum_{r<p} \sigma^{r,\, k-r-1}\bigg) = \sum_{r<p}\big(\beta^{r,\, k-r} - \partial\sigma^{r-1,\, k-r} - \bar\partial\sigma^{r,\, k-r-1}\big) - \partial\sigma^{p-1,\, k-p} + \sum_{r \geq p} \beta^{r,\, k-r} = \sum_{r \geq p} \beta^{r,\, k-r} - \partial\sigma^{p-1,\, k-p}. \qquad (11)$$
Note that from the identity $\beta^{p-1,\, k-p+1} = \partial\sigma^{p-2,\, k-p+1} + \bar\partial\sigma^{p-1,\, k-p}$ and the d-closedness of $\beta^{p-1,\, k-p+1}$ we infer that $\partial\sigma^{p-1,\, k-p} \in \ker d$.
Thus, (11) shows that the k-form $\sum_{r \geq p} \beta^{r,\, k-r} - \partial\sigma^{p-1,\, k-p} \in F^p C^\infty_k(X, \mathbb{C})$, all of whose pure-type components are d-closed, is de Rham-cohomologous to $\sum_{0 \leq r \leq k} \beta^{r,\, k-r}$, hence to α. Consequently, we have
$$\{\alpha\}_{dR} = \bigg\{\sum_{r \geq p} \beta^{r,\, k-r} - \partial\sigma^{p-1,\, k-p}\bigg\}_{dR} \in \bigoplus_{i \geq p} H^{i,\, k-i}_{dR}(X).$$
The proof of (9) is complete. Identity (10) follows at once from (9) and from Theorem 2.1.
Definition of page-r-∂∂-manifolds
Recall that X is a fixed n-dimensional compact complex manifold and E p, q r (X) stands for the space of bidegree (p, q) on the r-th page of the Frölicher spectral sequence of X.
Definition 2.7. Fix r ∈ N and k ∈ {0, . . . , 2n}. We say that the identity induces an isomorphism between ⊕ p+q=k E p, q r (X) and H k dR (X, C) if the following two conditions are satisfied:
(1) for every bidegree (p, q) with p + q = k, every class {α p, q } Er ∈ E p, q r (X) contains a d-closed representative of pure type α p, q ∈ C ∞ p, q (X) ;
(2) the linear map
$$\bigoplus_{p+q=k} E^{p,q}_r(X) \;\ni\; \sum_{p+q=k}\{\alpha^{p,q}\}_{E_r} \;\longmapsto\; \Big\{\sum_{p+q=k}\alpha^{p,q}\Big\}_{dR} \in H^k_{dR}(X,\mathbb{C}),$$
defined using only d-closed pure-type representatives α p, q of the classes {α p, q } Er , whose existence is guaranteed by condition (1), is well-defined and bijective. Here, well-defined means that it does not depend on the choice of the d-closed representatives.
Moreover, if, for a fixed r ∈ N, the identity induces an isomorphism $\bigoplus_{p+q=k} E^{p,q}_r(X) \simeq H^k_{dR}(X,\mathbb{C})$ for every k ∈ {0, . . . , 2n}, we say that the manifold X has the E r -Hodge Decomposition property.
Note that whenever the identity induces a well-defined (not necessarily injective) linear map E p, q r (X) −→ H k dR (X, C), the image of this map is H p, q dR (X). Indeed, one inclusion is obvious. The reverse inclusion follows from the observation that any d-closed (p, q)-form defines an E r -cohomology class (i.e. it is E r -closed in the terminology of [32]). Further note that whenever X has the E r -Hodge Decomposition property, the Frölicher spectral sequence of X degenerates at E r (at the latest).
Definition 2.8. Fix r ∈ N and p, q ∈ {0, . . . , n}. We say that the conjugation induces an isomorphism between E p, q r (X) and the conjugate of E q, p r (X) if the following two conditions are satisfied:
(1) every class $\{\alpha^{p,q}\}_{E_r} \in E^{p,q}_r(X)$ contains a d-closed representative of pure type $\alpha^{p,q} \in C^\infty_{p,q}(X)$; (2) the linear map $E^{p,q}_r(X) \ni \{\alpha^{p,q}\}_{E_r} \mapsto \{\overline{\alpha^{p,q}}\}_{E_r} \in \overline{E^{q,p}_r(X)}$
is well-defined (in the sense that it does not depend on the choice of d-closed representative α p, q of the class {α p, q } Er ) and bijective.
Moreover, if, for a fixed r ∈ N, the conjugation induces an isomorphism $E^{p,q}_r(X) \simeq \overline{E^{q,p}_r(X)}$ for every p, q ∈ {0, . . . , n}, we say that the manifold X has the E r -Hodge Symmetry property.
We shall now see that the E r -Hodge Decomposition property implies the E r -Hodge Symmetry property. This follows from the following characterisation of the former property. Theorem 2.9. Let X be a compact complex manifold with dim C X = n. Fix an arbitrary r ∈ N . Then, the following two conditions are equivalent:
(1) X has the E r -Hodge Decomposition property;
(2) the Frölicher spectral sequence of X degenerates at E r (we will denote this by E r (X) = E ∞ (X)) and the de Rham cohomology of X is pure.
Proof. (1) =⇒ (2)
We have already noticed that the E r -Hodge Decomposition property implies E r (X) = E ∞ (X) and that the image of each E p, q r (X) in H p+q dR (X, C) under the map induced by the identity is H p, q dR (X). We get (2).
(2) =⇒ (1) Since the de Rham cohomology of X is supposed pure, we know from Proposition 2.6 that E p, q ∞ (X) H p, q dR (X) (isomorphism induced by the identity) for all bidegrees (p, q). On the other hand, E p, q ∞ (X) = E p, q r (X) for all bidegrees (p, q) since we are assuming that E r (X) = E ∞ (X). Combined with the de Rham purity assumption, these facts imply that X has the E r -Hodge Decomposition property. Definition 2.10. A compact complex manifold X that satisfies the equivalent conditions (1) and (2) of Theorem 2.9 is said to be a page-(r − 1)-∂∂-manifold.
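For quick reference, here is a compact restatement of the two equivalent conditions above (our own rewording, with no new content): X is a page-(r − 1)-∂∂-manifold if and only if
$$E_r(X) = E_\infty(X) \qquad \text{and} \qquad H^k_{dR}(X,\mathbb{C}) = \bigoplus_{p+q=k} H^{p,q}_{dR}(X) \quad \text{for all } k \in \{0,\dots,2n\}.$$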
Corollary 2.11. Any page-(r − 1)-∂∂-manifold has the E r -Hodge Symmetry property.
Proof. We have already noticed in (6) that the conjugation (trivially) induces an isomorphism between any space H p, q dR (X) and the conjugate of H q, p dR (X). Meanwhile, we have seen that the page-(r−1)-∂∂-assumption implies that the identity induces an isomorphism between any space E p, q r (X) and H p, q dR (X). Hence, the conjugation induces an isomorphism between any space E p, q r (X) and the conjugate of E q, p r (X).
Another obvious consequence of (2) of Theorem 2.9 and Definition 2.10 is that the page-r-∂∂-property becomes weaker and weaker as r increases.
Corollary 2.12. Let X be a compact complex manifold. Then, for every r ∈ N , the following implication holds:
X is a page-r-∂∂-manifold =⇒ X is a page-(r + 1)-∂∂-manifold.
Indeed, the purity of the de Rham cohomology is independent of r, while the property E r (X) = E ∞ (X) obviously implies E r+1 (X) = E ∞ (X) for every r ∈ N.
Characterisation in terms of squares and zigzags
The goal of this section is to relate the page-r-∂∂-property to structural results about double complexes. This degree of generality has the advantage of emphasising which aspects of the theory are purely algebraic. Even if one is only interested in the complex A X := (C ∞ p, q (X), ∂,∂) of C-valued forms on a complex manifold X, in the more general setting one can consider certain finite-dimensional subcomplexes on an equal footing.
Specifically, by double complexes we mean bigraded vector spaces A = p,q∈Z A p,q with endomorphisms ∂ 1 , ∂ 2 of bidegrees (1, 0), resp. (0, 1), satisfying d 2 = 0 for d := ∂ 1 + ∂ 2 . We do not require A to be finitedimensional. We will always assume our double complexes to be bounded, i.e. A p,q = 0 for all but finitely many (p, q) ∈ Z 2 .
There are now two Frölicher-style spectral sequences, starting from column, i.e. (∂ 2 -), resp. row, i.e. (∂ 1 -), cohomology and converging to the total (de Rham) cohomology of (A, d). We denote them by
$${}^i E^{p,q}_r(A) \;\Longrightarrow\; \big(H^{p+q}_{dR}(A), F_i\big), \qquad i = 1, 2.$$
In the case A = A X , the case i = 1 is the Frölicher spectral sequence and i = 2 its conjugate.
The following is a minor extension to general double complexes of the definition (based on its second characterisation) of the page-(r − 1)-∂∂-property of manifolds. The equivalence of the two conditions is seen just as before.
Definition 2.13. A double complex A is said to satisfy the page-(r − 1)-∂ 1 ∂ 2 -property if one (hence both) of the following equivalent conditions hold:
(1) Both Frölicher spectral sequences degenerate at page r and the de Rham cohomology is pure.
(2) For i = 1, 2, every i E p,q r (A)-class contains a d-closed representative and the corresponding map
$$\bigoplus_{p+q=k} {}^i E^{p,q}_r(A) \;\longrightarrow\; H^k_{dR}(A)$$
induced by the identity is well-defined and bijective.
The following observation will motivate the subsequent considerations.
Observation 2.14. The Frölicher spectral sequences, as well as H dR , H A and H BC , are compatible with direct sums. In particular, a sum A = B ⊕ C satisfies the page-r-∂ 1 ∂ 2 -property if and only if B and C do.
Recall that a (nonzero) double complex
A is called indecomposable if there exists no nontrivial decomposition A = B ⊕ C into subcomplexes B, C.
Theorem 2.15. ( [21,38]) For every bounded double complex over a field K, there exists an isomorphism
$$A \;\cong\; \bigoplus_{C} C^{\oplus\, \mathrm{mult}_C(A)},$$
where C runs over a set of representatives for the isomorphism classes of bounded indecomposable double complexes and mult C (A) are (not necessarily finite) cardinal numbers uniquely determined by A.
Moreover, each bounded indecomposable double complex is finite dimensional and isomorphic to a complex of one of the following types:
(1) square: a double complex generated by a single pure-(p, q)-type element a in a given bidegree with no further relations (i.e. ∂_1 a ≠ 0 ≠ ∂_2 a and ∂_1∂_2 a ≠ 0):
$$\begin{array}{ccc} \partial_2 a & \longrightarrow & \partial_2\partial_1 a \\ \uparrow & & \uparrow \\ a & \longrightarrow & \partial_1 a \end{array}$$
(2) even-length zigzag of type 1 and length 2l. This is a complex generated by elements a_1, ..., a_l and their differentials such that ∂_2 a_1 = 0 and ∂_1 a_1 = −∂_2 a_2, ∂_1 a_2 = −∂_2 a_3, ..., ∂_1 a_{l−1} = −∂_2 a_l and no further relations (i.e. ∂_1 a_i ≠ 0 for all i = 1, ..., l). Drawing only nonzero arrows, it has the shape:
$$a_1 \xrightarrow{\;\partial_1\;} \partial_1 a_1 \xleftarrow{\;\partial_2\;} a_2 \xrightarrow{\;\partial_1\;} \cdots \xleftarrow{\;\partial_2\;} a_l \xrightarrow{\;\partial_1\;} \partial_1 a_l.$$
Here, as in all the following examples, the length of a zigzag is the number of its vertices.
(3) even-length zigzag of type 2 and length 2l. This is a complex generated by elements a 1 , ..., a l and their differentials, such that ∂ 1 a 1 = −∂ 2 a 2 , ∂ 1 a 2 = −∂ 2 a 3 , ..., ∂ 1 a l−1 = −∂ 2 a l , ∂ 1 a l = 0 and no further relations. It is of the shape:
$$\partial_2 a_1 \xleftarrow{\;\partial_2\;} a_1 \xrightarrow{\;\partial_1\;} \partial_1 a_1 \xleftarrow{\;\partial_2\;} a_2 \xrightarrow{\;\partial_1\;} \cdots \xleftarrow{\;\partial_2\;} a_l.$$
(4) odd-length zigzag of type M and length 2l + 1. This is a complex generated by elements a_1, ..., a_{l+1} with the only relations ∂_1 a_i = −∂_2 a_{i+1}, ∂_2 a_1 = 0 and ∂_1 a_{l+1} = 0. It has the shape:
$$a_1 \xrightarrow{\;\partial_1\;} \partial_1 a_1 \xleftarrow{\;\partial_2\;} a_2 \xrightarrow{\;\partial_1\;} \cdots \xrightarrow{\;\partial_1\;} \partial_1 a_l \xleftarrow{\;\partial_2\;} a_{l+1}.$$
The special case where l = 0 is also called a dot.
(5) odd-length zigzag of type L and length 2l + 1 (l > 0). This is a complex generated by elements a_1, ..., a_l with the only relations ∂_1 a_i = −∂_2 a_{i+1} (i.e. ∂_2 a_i ≠ 0 ≠ ∂_1 a_i for all i). It has the shape:
$$\partial_2 a_1 \xleftarrow{\;\partial_2\;} a_1 \xrightarrow{\;\partial_1\;} \partial_1 a_1 \xleftarrow{\;\partial_2\;} \cdots \xleftarrow{\;\partial_2\;} a_l \xrightarrow{\;\partial_1\;} \partial_1 a_l.$$
Illustrating this result, we point out the following Example 2. 16. Any bounded complex of K-vector spaces (V ·, δ) is a direct sum of complexes of the following two forms:
$$\cdots \longrightarrow 0 \longrightarrow K^{\oplus n} \longrightarrow 0 \longrightarrow \cdots \qquad \text{and} \qquad \cdots \longrightarrow 0 \longrightarrow K^{\oplus m} \xrightarrow{\;\sim\;} K^{\oplus m} \longrightarrow 0 \longrightarrow \cdots$$
This can easily be seen directly by picking successive complements of im δ ⊆ ker δ ⊆ V^k in every degree.
A special case of Theorem 2.15 is obtained when one considers (V^•, δ) as a double complex concentrated on a single horizontal line, i.e. setting V^{p,0} := V^p and V^{p,q} := 0 for q ≠ 0, ∂_1 := δ and ∂_2 := 0.
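To make the zigzag bookkeeping concrete, here is a minimal worked case (our own toy example, not from the source). Consider the type-1 zigzag of length 2, spanned by a single generator $a_1$ and its image $\partial_1 a_1$. Then
$$H_{\partial_2}(\text{column}) = \big\langle [a_1], [\partial_1 a_1]\big\rangle, \qquad H_{\partial_1}(\text{row}) = 0, \qquad H_{dR} = 0,$$
and the page-1 differential sends $[a_1] \mapsto [\partial_1 a_1]$, killing both column classes. Hence this complex has $E_2 = E_\infty = 0$ but $E_1 \neq E_2$: it satisfies the page-1-$\partial_1\partial_2$-property but not the page-0 (i.e. the usual $\partial_1\partial_2$-) property.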
As an application of Theorem 2.15, we get the following result expressed in the language of this work.
Theorem 2.17. Let A be a bounded double complex over a field K. The following statements are equivalent.
(1) A satisfies the page-r-∂ 1 ∂ 2 -property.
(2) There exists an isomorphism between A and a direct sum of squares, even-length zigzags of length ≤ 2r and odd-length zigzags of length one (i.e. dots).
Proof. This is a special case of [38, Thm. C]. For the reader's convenience and because we will need this type of reasoning again in the next section, we recall the proof in our case. By Observation 2.14 and Theorem 2.15, it suffices to show that an indecomposable double complex satisfies the page-r-∂_1∂_2-property if and only if it is one of those listed under (2) in the statement. We now run through the list given in Theorem 2.15.
(1): For a square, both row and column cohomologies are zero. Therefore, the spectral sequences trivially degenerate and the total cohomology vanishes. In particular, the page-r-∂ 1 ∂ 2 -property is trivially satisfied.
(2) & (3): An even-length zigzag Z of type 1 and length 2l has vanishing row cohomology. Therefore, all terms in the second spectral sequence are identically zero and this spectral sequence degenerates for trivial reasons. On the other hand, the column cohomology is 2-dimensional, with the two classes at the endpoints generating a one-dimensional space each. Since the total cohomology has to vanish (it is the limit of both spectral sequences), there must be a non-trivial differential at some page of 1 E(Z) and for space reasons it can only be at page l. Hence, Z has the page-r-∂ 1 ∂ 2 -property if and only if l ≤ r. The case of an even length zigzag Z of type 2 is analogous, reversing the roles of row and column cohomology.
(4) & (5): For an odd-length zigzag Z, both row and column cohomologies are one-dimensional, so there is no space for non-trivial differentials and both spectral sequences degenerate at the first page. Moreover, the de Rham cohomology is one-dimensional. If Z is of type M, H_{dR}(Z) is identified with the one-dimensional space generated by $a := \sum_{i=1}^{l+1} a_i$, which is of pure type only if l = 0, namely if Z is a dot. If Z is of type L, it is generated by $[\partial_2 a_1]_{dR} = [-\partial_1 a_1]_{dR}$, where the representatives live in different bidegrees. Therefore, H_{dR}(Z) is not pure, so Z does not have the page-r-∂_1∂_2-property. Remark 2.18. This theorem also gives a quick alternative proof of Prop. 4.1 (equivalence of page-0-∂_1∂_2 with the usual ∂_1∂_2-property). Proof. Indeed, the page-0-∂_1∂_2-property means that there is a decomposition of A into squares and dots. Obviously, both satisfy the usual ∂_1∂_2-property. Conversely, in any zigzag of length ≥ 2 there is a closed element ('form') of pure type, which is ∂_1- or ∂_2-exact, but no non-zero element in a zigzag is ∂_1∂_2-exact. Hence, if A satisfies the usual ∂_1∂_2-property, in any decomposition of A into elementary complexes only squares and length-one zigzags can occur. Definition 2.19. A map f : A → B of double complexes is an E_r-isomorphism if {}^iE_r(f) is an isomorphism for i ∈ {1, 2}.
One writes $A \simeq_r B$ if there exists such an E_r-isomorphism. The usefulness of this notion stems from the statements recalled further below (Lemmas 2.20, 2.21 and 2.23 and Example 2.22), which apply in particular to H_{dR}, H^{p,q}_{BC}, E^{p,q}_i or H^{p,q}_A. Thanks to its explicit description given above, one sees that an indecomposable double complex C is determined up to isomorphism by its shape S(C) = {(p, q) ∈ Z² | C^{p,q} ≠ 0}. By a slight abuse of notation, we will sometimes conveniently write mult_S(A) instead of mult_C(A) when S = S(C).
We will need the following duality results in the special case A = A X . They follow from the real structure and the Serre duality. Then, conjugation ω → ω and integration ω → X ω∧ define an isomorphism, resp. an E 1 -isomorphism:
A ∼ =Ā, respectively A → DA.
In particular, the set of zigzags occuring in A X is symmetric under reflection along the diagonal and the anti-diagonal. More precisely, for any zigzag shape S, mult S (A) = mult rS (A) = mult dS (A), where rS = {(p, q) ∈ Z 2 | (q, p) ∈ S} and dS := {(p, q) ∈ Z 2 | (n − p, n − q) ∈ S}.
Recall the complex-C^∞-pure and -full properties from Remark 2.5. As a consequence of the above, we obtain the following equivalence (Corollary 2.25): for a compact complex manifold X, the statements below are equivalent. (1) X satisfies the complex-C^∞-pure property in all degrees;
(2) X satisfies the complex-C ∞ -full property in all degrees;
(3) The de Rham cohomology of X is pure (in the sense of Definition 2.4).
Numerical characterisation of page-r-∂∂-manifolds and applications
Let X be a compact connected complex manifold. Let b(X) = k∈Z b k (X), h BC (X) = p,q∈Z h p,q BC (X) and define h A (X), h ∂ (X) and h∂(X) analogously. Angella and Tomassini showed in [8] that there are inequalities:
$$h_{BC}(X) + h_A(X) \;\overset{(*)}{\ge}\; h_{\bar\partial}(X) + h_\partial(X) \;\overset{(**)}{\ge}\; 2\,b(X) \tag{12}$$
and that both of these inequalities are equalities if and only if X is a ∂∂-manifold.
It is a standard fact about spectral sequences that equality in ( * * ) is equivalent to the degeneration at E 1 of the Frölicher spectral sequence (and its conjugate). One application of our methods is a generalisation of inequality ( * ) and a characterisation of the equality case in terms of our new classes of manifolds introduced in this paper.
Remark 3.1. Since h p, q BC (X) = h n−p, n−q A (X) by duality, one gets h BC (X) = h A (X) and conjugation yields h ∂ (X) = h∂(X). Therefore, one can replace (12) with the equivalent inequalities h BC (X) ≥ h∂(X) ≥ b(X) and have the same characterisations for the equality cases.
The following general statement is new. Note that it was stated in the introduction as Theorem 1.3 for any r − 1 ∈ N. We now shift r − 1 ∈ N to r ∈ N in the notation. In both cases, we implicitly put $\sum_{i=1}^{0} e_i(X) = 0$ for the sake of notation consistency. Theorem 3.2. For every compact complex manifold X and for every r ∈ N, there is an inequality:
$$h_{BC}(X) \;\ge\; \sum_{i=1}^{r} e_i(X) - (r-1)\,b(X),$$
where $e_i := \sum_{p,q\in\mathbb{Z}} \dim E^{p,q}_i(X)$. Moreover, equality holds for some fixed r ∈ N if and only if X is a page-r-∂∂-manifold.
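To unpack the first two cases (our own rewriting, using the standard identification $E_1^{p,q}(X) \cong H^{p,q}_{\bar\partial}(X)$, so that $e_1(X) = h_{\bar\partial}(X)$):
$$r = 1:\quad h_{BC}(X) \ge e_1(X) = h_{\bar\partial}(X), \qquad\qquad r = 2:\quad h_{BC}(X) \ge e_1(X) + e_2(X) - b(X).$$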
In particular, for r = 1, we obtain the characterisation of the equality case in ( * ). Using the upper-semicontinuity of h BC and h A in families of manifolds, we infer from Theorem 3.2 applied with r = 1 the stability of page-1-∂∂-manifolds with fixed Hodge numbers under small deformations of the complex structure. The analogous statement for r ≥ 2 and constant e i 's with i ≤ r also holds. Corollary 3.3. If X 0 is a page-1-∂∂-manifold, then every sufficiently small deformation X t of X 0 which satisfies h∂(X t ) = h∂(X 0 ) is again page-1-∂∂.
If one drops the condition on constant Hodge numbers, one cannot say much in general. In fact, as we will see, the Iwasawa manifold is page-1-∂∂, but any small deformation with different Hodge numbers is not.
In order to prove Theorem 3.2 we will work with abstract (bounded) double complexes rather than double complexes of forms and prove the following (more general) statement.
For a bounded double complex A with finite-dimensional cohomology, let 1 e i (A), resp. 2 e i (A), be the total dimension of the i-th page of the row, resp. column, spectral sequence. There is always an inequality:
$$h_{BC}(A) + h_A(A) \;\ge\; \sum_{i=1}^{r} \big({}^1e_i(A) + {}^2e_i(A)\big) \;-\; 2(r-1)\,b(A)$$
and the equality is equivalent to the page-r-∂_1∂_2-property for A. Let us write as a shorthand $\mathrm{LHS} := h_A + h_{BC}$ and $\mathrm{RHS}_r := \sum_{i=1}^{r}({}^1e_i + {}^2e_i) - 2(r-1)\,b$.
Before spelling out the details, we state the general idea, which is very simple: In any decomposition of A into indecomposables, LHS counts all the zigzags occuring in A, weighted by their length, except for the dots, which are counted twice. When r = 1, RHS 1 counts all the zigzags twice. For an arbitrary r, the count on the right becomes slightly more involved.
For the actual proof, we first notice that both LHS and RHS r are additive under direct sums. Therefore, as in the proof of Theorem 2.17, using Theorem 2.15, we may reduce the problem to checking the statement on every possible indecomposable double complex individually. Let us run through the list of Theorem 2.15 using the notation introduced there. It remains to calculate LHS for zigzags. We distinguish two cases: For a dot D, one has h A (D) = h BC (D) = 1, so LHS(D) = 2 = RHS r (D) for any r. For a zigzag Z of length l ≥ 2, the Aeppli cohomology is the space of generators H A (Z) = {a i } i , i.e. all those spaces lying on the first nonzero anti-diagonal of Z, and the Bott-Chern cohomology is the space of their images H BC (Z) = ∂ 1 a i , ∂ 2 a i , i.e. all those spaces lying on the second anti-diagonal. For example,
$$H_A\big(a_1 \xrightarrow{\;\partial_1\;} \partial_1 a_1 \xleftarrow{\;\partial_2\;} a_2\big) = \langle a_1\rangle \oplus \langle a_2\rangle \qquad \text{and} \qquad H_{BC}\big(a_1 \xrightarrow{\;\partial_1\;} \partial_1 a_1 \xleftarrow{\;\partial_2\;} a_2\big) = \langle \partial_1 a_1\rangle.$$
Thus, one has LHS(Z) = l.
Summing up, we see that for any indecomposable complex I, one always has LHS(I) ≥ RHS r (I), but equality only holds for squares, dots and even length zigzags of length ≤ 2r. Using the characterisation of the page-r-∂∂-property given in Theorem 2.17, this completes the proof.
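As a quick arithmetic sanity check of this counting (our own illustration): for an even-length zigzag Z of length 2 one gets $\mathrm{LHS}(Z) = 2 = \min\{2r,2\} = \mathrm{RHS}_r(Z)$ for every $r \ge 1$, while for one of length 4,
$$\mathrm{LHS}(Z) = 4, \qquad \mathrm{RHS}_1(Z) = \min\{2,4\} = 2 < 4, \qquad \mathrm{RHS}_2(Z) = \min\{4,4\} = 4,$$
in agreement with the fact that a length-4 zigzag is allowed in the page-2 but not in the page-1 decomposition of Theorem 2.17.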
Examples of page-r-∂∂-manifolds and counterexamples
We shall organise our examples in several classes, each flagged by a specific heading.
The case r = 0 and low dimensions
The first observation is the following rewording of (5.21) in [15]. Proposition 4.1. For any compact complex manifold X, the following equivalence holds:
X is a ∂∂-manifold ⇐⇒ X is a page-0-∂∂-manifold.
In dimensions one and two, it follows from well-known results that the only possible examples of page-r-∂∂-manifolds are Kähler. Observation 4.2. Any compact complex curve is Kähler, hence a ∂∂-manifold. A compact complex surface is a page-r-∂∂-manifold (for some r) if and only if it is Kähler. Proof. It is standard that the Frölicher spectral sequence of any compact complex surface degenerates at E_1. It is equally standard that H^k_{dR} is always pure for k = 0, 2, 4, while it follows from the Buchdahl-Lamari results (see [10] and [22]) that H^1_{dR} (and hence H^3_{dR}) is pure iff the surface is Kähler.
Case of the Iwasawa manifold and its small deformations
Recall that the Iwasawa manifold I (3) is the nilmanifold of complex dimension 3 obtained as the quotient of the Heisenberg group of 3 × 3 upper triangular matrices with entries in C by the subgroup of those matrices with entries in Z[i].
It is well known that the Iwasawa manifold is not a ∂∂-manifold. In fact, its Frölicher spectral sequence is known to satisfy E_1 ≠ E_2 = E_∞. On the other hand, it is known that the de Rham cohomology of the Iwasawa manifold can be generated in every degree by de Rham classes of (d-closed) pure-type forms. (See e.g. [2].) Together with Cor. 2.25 this yields Proposition 4.3: the Iwasawa manifold is a page-1-∂∂-manifold. However, the situation is more complex for the small deformations of the Iwasawa manifold, all of which are already known to not be ∂∂-manifolds. The following result shows, in particular, that unlike the ∂∂-property, the page-1-∂∂-property is not deformation open.
Proposition 4.4. Let (X t ) t∈B be the Kuranishi family of the Iwasawa manifold X 0 (see [29], [2]). For every t ∈ B, we have:
(1) X t is a page-1-∂∂-manifold if and only if X t is complex parallelisable (i.e. lies in Nakamura's class (i));
(2) if X t lies in one of Nakamura's classes (ii) or (iii), the de Rham cohomology of X t is not pure, so X t is not a page-r-∂∂-manifold for any r ∈ N.
Proof. That deformations in Nakamura's class (i) are page-1-∂∂-manifolds can be proved in the same way as the Iwasawa manifold was proved to have this property in Proposition 4.3. This fact also follows from the far more general Proposition 4.7 since all the small deformations X t of X 0 are nilmanifolds. To show (2), we will actually prove a slightly more general result. Calculations of Angella [2] show that the hypotheses of the next Lemma are satisfied in this case. Lemma 4.5. Let X be a compact complex manifold with b 1 = 4, h 1,0 ∂ = h 0,1 ∂ = 2 and h 1,0 A = 3. Then, either H 1 dR (X, C) or H 2 dR (X, C) is not pure.
Proof. The proof is combinatorial. We will exploit the fact that the de Rham, Dolbeault and Aeppli cohomologies of indecomposable complexes are computable. This is spelt out in detail in [38]. Summarised briefly, an even-length zigzag has a nonzero differential in the Frölicher spectral sequence or its conjugate, but has no de Rham cohomology. Meanwhile, odd-length zigzags have no differentials in the Frölicher spectral sequence, but have a nonzero de Rham cohomology and h p,q A counts the zigzags that have a nonzero component in degree (p, q) with possibly outgoing but no incoming arrows.
Specifically, denote by $A = (C^\infty_{p,q}(X), \partial, \bar\partial)$ the double complex of C-valued forms on X. We investigate for which zigzag shapes S with (1, 0) ∈ S or (0, 1) ∈ S one can have mult_S(A) ≠ 0. Assume H^1_{dR}(X, C) is pure. That means that any odd zigzag contributing to the de Rham cohomology H^1_{dR}(X, C) is of length one, i.e. drawing only the odd zigzags and not squares or even ones, the lower part of the double complex looks like this:
[Diagram: in the (p, q)-plane, one dot in bidegree (0, 0) and two dots in each of the bidegrees (1, 0) and (0, 1), representing the length-one zigzags in total degrees 0 and 1.]
Here, a • denotes a zigzag of length one and multiplicity one. The symmetry along the diagonal comes from the real structure of A given by complex conjugation. A priori, there may be other zigzags passing through (1, 0) and (0, 1). Schematically, these would all arise by choosing some connected subgraph with at least one arrow of the diagram
[Diagram: the spots (0, 0), (0, 1), (1, 0) and (1, 1) joined by the possible arrows ∂ and ∂̄ into and out of the bidegrees (1, 0) and (0, 1).]
They could either be of even length, or of odd length not contributing to H^1_{dR}(X, C) but to H^2_{dR}(X, C). Note that the subdiagram
[Diagram: a connected three-vertex zigzag whose de Rham class would live in total degree 1 with components in both bidegrees (1, 0) and (0, 1).]
is not allowed since this would give rise to a nonzero class in H^1_{dR}(X, C), which we have ruled out already by purity.
However, since h 1,0 ∂ + h 0,1 ∂ = b 1 , there can be no differentials in the Frölicher spectral sequence starting or ending in degree (1, 0) or (0, 1). In terms of zigzags, this means no even-length zigzag passes through these bidegrees. This rules out the zigzags
[Diagrams: the even-length zigzags passing through the bidegrees (1, 0) or (0, 1) that are excluded by the absence of Frölicher differentials in these bidegrees.]
and their reflections along the diagonal (which have to occur with the same multiplicity since A is equipped with a real structure). So, the only options for zigzags passing through (1, 0) that are left are
[Diagrams: the two remaining possible zigzags through the bidegree (1, 0), each with at least one arrow leaving (1, 0) or (0, 1).]
and one of these has to occur since otherwise H 1,0 A would be of dimension 2, contradicting the assumptions. But the occurence of either one implies that H 2 dR (X, C) is not pure.
Case of complex parallelisable nilmanifolds
We will now prove that all complex parallelisable nilmanifolds are page-1-∂∂-manifolds. On the one hand, this generalises one implication in (1) of Proposition 4.4. On the other hand, it provides a large class of page-1-∂∂-manifolds that are not ∂∂-manifolds. Indeed, it is known that a nilmanifold Γ\G is never ∂∂ (or even formal in the sense of [15]) unless it is Kähler (i.e. a complex torus, or equivalently, the Lie group G is abelian) [19].
Recall that a compact complex parallelisable manifold X is a manifold whose holomorphic tangent bundle is trivial. By Wang's theorem [41], X is a quotient Γ\G of a complex Lie group G by a co-compact, discrete subgroup Γ. When G is nilpotent, the manifold X is a complex parallelisable nilmanifold. The Iwasawa manifold is an example of this type. We first need an algebraic result. Lemma 4.6. Let (A·, d A ) and (B·, d B ) be two complexes of vector spaces and C = A ⊗ B their tensor product, considered as a double complex, i.e.:
$$C^{p,q} := A^p \otimes B^q, \qquad \partial_1(a\otimes b) := d_A a \otimes b, \qquad \partial_2(a\otimes b) := (-1)^{|a|}\, a \otimes d_B b.$$
Then C satisfies the page-1-∂ 1 ∂ 2 -property.
Proof. First, we compute the first and second pages of the column Frölicher spectral sequence. (We only treat the column case, the row case being analogous.) The first page is the column cohomology:
$({}^1E^{\,\bullet,\bullet}_1, d_1) = \big(H^q(C^{p,\bullet}, \partial_2), \partial_1\big)$. Since $\partial_2$ is, up to sign, $\mathrm{Id}_A \otimes d_B$, one has $H^q(C^{p,\bullet}, \partial_2) = A^p \otimes H^q(B, d_B)$ and $d_1 = d_A \otimes \mathrm{Id}_{H(B)}$. Therefore, ${}^1E^{p,q}_2 = H^p(A, d_A) \otimes H^q(B, d_B)$. Now, for every $d_A$-closed element $a \in A^p$ and every $d_B$-closed element $b \in B^q$, the element $a \otimes b \in C^{p+q}$ is $d = \partial_1 + \partial_2$ closed. Similarly, if one of the two is $d_A$- or $d_B$-exact, the form $a \otimes b$ will be d-exact. Hence we get a natural map $\bigoplus_{p+q=k} H^p(A, d_A) \otimes H^q(B, d_B) \to H^k_{dR}(C)$. Since we are working over a field, the Künneth formula tells us that this is an isomorphism.
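A toy illustration of the lemma (our own, under the stated conventions): take A and B to be two-term complexes $K \xrightarrow{\ \delta\ } K$ in degrees 0 and 1. If both differentials vanish, $A \otimes B$ is a direct sum of four dots; if one differential is an isomorphism and the other vanishes, $A \otimes B$ splits into two even zigzags of length 2; if both are isomorphisms, $A \otimes B$ is a single square,
$$a \otimes b \ \longrightarrow\ d_A a \otimes b, \qquad a \otimes b \ \longrightarrow\ a \otimes d_B b, \qquad a \otimes b \ \longrightarrow\ d_A a \otimes d_B b.$$
In each case only squares, dots and length-2 even zigzags occur, which is exactly the page-1-$\partial_1\partial_2$-property in the language of Theorem 2.17.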
Given a complex parallelisable nilmanifold Γ\G, let g be the (real) Lie algebra of G, and denote by J : g → g the endomorphism induced by the complex structure of the Lie group G. Then J 2 = −Id and
[Jx, y] = J[x, y],(13)
for all x, y ∈ g. Let $\mathfrak{g}^*_{\mathbb{C}}$ be the dual of the complexification $\mathfrak{g}_{\mathbb{C}}$ of g and denote by $\mathfrak{g}^{1,0}$ (respectively $\mathfrak{g}^{0,1}$) the eigenspace of the eigenvalue i (resp. −i) of J considered as an endomorphism of $\mathfrak{g}^*_{\mathbb{C}}$. Condition (13) is equivalent to $[\mathfrak{g}^{0,1}, \mathfrak{g}^{1,0}] = 0$, which is equivalent to $d(\mathfrak{g}^{1,0}) \subset \Lambda^2(\mathfrak{g}^{1,0})$, i.e. there is no component of bidegree (1,1). Therefore, $\bar\partial$ is identically zero on $\Lambda^p(\mathfrak{g}^{1,0})$ and $\partial$ is identically zero on $\Lambda^q(\mathfrak{g}^{0,1})$, that is,
$$\partial|_{\Lambda^p(\mathfrak{g}^{1,0})} = d|_{\Lambda^p(\mathfrak{g}^{1,0})}, \quad \bar\partial|_{\Lambda^p(\mathfrak{g}^{1,0})} = 0, \quad \partial|_{\Lambda^q(\mathfrak{g}^{0,1})} = 0, \quad \bar\partial|_{\Lambda^q(\mathfrak{g}^{0,1})} = d|_{\Lambda^q(\mathfrak{g}^{0,1})}. \tag{14}$$
Theorem 4.7. Complex parallelisable nilmanifolds are page-1-∂∂-manifolds.
Proof. Sakane [36] showed that the inclusion of the double complex ( · , · g * C , ∂,∂) as left invariant forms into the complex of all forms on Γ\G induces an isomorphism of the respective first pages of the corresponding Frölicher spectral sequences (hence of all later pages). But the equations (14) mean that the double complex ( · , · g * C , ∂,∂) is the tensor product of the simple complexes ( · g 1,0 , d) and ( · , g 0,1 , d), so we can apply Lemma 4.6.
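For instance (our own specialisation; sign conventions for the structure equations vary in the literature), the Iwasawa manifold $I^{(3)}$ is the complex parallelisable nilmanifold whose invariant $(1,0)$-coframe $\omega^1, \omega^2, \omega^3$ satisfies
$$d\omega^1 = d\omega^2 = 0, \qquad d\omega^3 = -\,\omega^1 \wedge \omega^2,$$
so Theorem 4.7 recovers Proposition 4.3.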
Nilmanifolds with abelian complex structures
In this subsection, we construct two classes of page-1-∂∂-manifolds which are not biholomorphic to complex parallelisable nilmanifolds (see Remarks 4.9 and 4.11). Indeed, they are nilmanifolds endowed with an invariant complex structure that is abelian, which means that, in contrast to (14), ∂ vanishes on left-invariant (p, 0)-forms, i.e. $\partial|_{\Lambda^p(\mathfrak{g}^{1,0})} = 0$.
Theorem 4.8. Let n ≥ 3 and G be the nilpotent Lie group with abelian complex structure defined by the structure equations (Ab1_n) $d\omega^1 = 0$, $d\omega^2 = 0$, $d\omega^3 = \omega^2 \wedge \bar\omega^1$, ..., $d\omega^n = \omega^{n-1} \wedge \bar\omega^1$, or (Ab2_n) $d\omega^1 = 0$, ..., $d\omega^{n-1} = 0$, $d\omega^n = \omega^1 \wedge \bar\omega^2 + \omega^3 \wedge \bar\omega^4 + \cdots + \omega^{n-2} \wedge \bar\omega^{n-1}$ (only for odd n ≥ 3).
Then, any nilmanifold Γ\G is a page-1-∂∂-manifold.
Proof. For every 1 ≤ k ≤ n, we write ω k = e k + i f k , where e k and f k are the real part and the imaginary part of the complex (1, 0)-form ω k , respectively. We start by working out the real structure equations of the Lie group G in the basis of left-invariant 1-forms {e 1 , f 1 , . . . , e n , f n }.
In the case (Ab1 n ), we first notice that, for 3 ≤ k ≤ n,
$$\omega^{k-1} \wedge \bar\omega^1 = (e_{k-1} + i\, f_{k-1}) \wedge (e_1 - i\, f_1) = -(e_1 \wedge e_{k-1} + f_1 \wedge f_{k-1}) - i\,(e_1 \wedge f_{k-1} - f_1 \wedge e_{k-1}).$$
Hence, the real structure equations are, for 3 ≤ k ≤ n,
de 1 = df 1 = de 2 = df 2 = 0, de k = −e 1 ∧ e k−1 − f 1 ∧ f k−1 , df k = −e 1 ∧ f k−1 + f 1 ∧ e k−1 .
Since the structure constants in this basis are 0, ±1, in particular rational numbers, a result by Mal'cev [26] implies the existence of a lattice Γ for G. Thus, we get a nilmanifold Γ\G endowed with an abelian complex structure.
We now consider the case (Ab2_n). Writing n = 2m + 1, the last complex equation in (Ab2_n) is $d\omega^{2m+1} = \sum_{k=1}^{m} \omega^{2k-1} \wedge \bar\omega^{2k}$. A direct calculation shows that the real structure equations of the Lie group in the basis of left-invariant 1-forms {e_1, f_1, . . . , e_n, f_n} are
$$de_1 = df_1 = \cdots = de_{2m} = df_{2m} = 0, \quad de_{2m+1} = \sum_{k=1}^{m} (e_{2k-1} \wedge e_{2k} + f_{2k-1} \wedge f_{2k}), \quad df_{2m+1} = \sum_{k=1}^{m} (-e_{2k-1} \wedge f_{2k} + f_{2k-1} \wedge e_{2k})$$
. Again, the structure constants in this basis are 0, ±1 ∈ Q, so Mal'cev's theorem [26] implies the existence of a lattice Γ for G. This induces a nilmanifold Γ\G endowed with an abelian complex structure.
Recall that for abelian complex structures, just as for complex parallelisable ones, Dolbeault, Aeppli and Bott-Chern cohomology groups can be computed using only left-invariant forms (see [12,Remark 4] and [2,Theorem 3.8]).
(Ab1_n): First, we consider a complex structure defined by (Ab1_n). Let A be the exterior algebra over the vector space $\langle \omega^1, \dots, \omega^n, \bar\omega^1, \dots, \bar\omega^n \rangle$. It identifies naturally with the space of left-invariant C-valued forms on G. We write $A_1$ for $(A, \partial, \bar\partial)$, where the exterior algebra A is equipped with the differentials defined under (Ab1_n) in the statement. We can also equip A with a different differential $d_{P_1}$, acting as follows in degree 1:
d P 1 (ω 1 ) = 0, d P 1 (ω 2 ) = 0, d P 1 (ω 3 ) = ω 2 ∧ ω 1 , . . . , d P 1 (ω n ) = ω n−1 ∧ ω 1 .
One has d 2 P 1 = 0, d P 1 = ∂ P 1 +∂ P 1 , where ∂ P 1 and∂ P 1 denote the components of bidegrees (1, 0) and (0, 1), as well as d P 1 (A 1,0 ) ⊆ A 2,0 . So, (A, d P 1 ) can be considered as the space of left-invariant forms on a nilmanifold endowed with a complex parallelisable structure P 1 . By Theorem 4.7, A P 1 := (A, ∂ P 1 ,∂ P 1 ) has the page-1-∂ P 1∂ P 1 -property. So, by Theorem 3.2, h BC (
A P 1 ) + h A (A P 1 ) = h ∂ P 1 (A P 1 ) + h∂ P 1 (A P 1 ).
Define a C-linear involution C : A → A in degree 1 by $C(\omega^1) = \bar\omega^1$ and $C(\omega^i) = \omega^i$, $C(\bar\omega^i) = \bar\omega^i$ for i > 1, and in degree k by $C(\alpha_1 \wedge \dots \wedge \alpha_k) := C(\alpha_1) \wedge \dots \wedge C(\alpha_k)$. This is compatible with the total degree, but not with the bigrading. One checks that
C • ∂ =∂ P 1 • C and C •∂ = ∂ P 1 • C.
Indeed, this holds in degree 1 and then, thanks to the Leibniz rule, in higher degrees as well. Consequently, C induces isomorphisms:
H BC (A 1 ) ∼ = H BC (A P 1 ), H A (A 1 ) ∼ = H A (A P 1 ), H ∂ (A 1 ) ∼ = H∂ P 1 (A P 1 ), H∂(A 1 ) ∼ = H ∂ P 1 (A P 1 ).
at the level of the total cohomologies. For example, the notation means that
$$H_{BC}(A_1) = \frac{\ker\partial \cap \ker\bar\partial}{\mathrm{im}\,\partial\bar\partial}(A_1) = \bigoplus_{p,q} H^{p,q}_{BC}(A_1).$$
We stress that the induced maps are not assumed to be compatible with the bigrading. The existence of such isomorphisms implies that we also have an equality h BC (A 1 ) + h A (A 1 ) = h ∂ (A 1 ) + h∂(A 1 ), i.e. the page-1-∂∂-property holds for the space of left-invariant forms on G. Since G carries an abelian complex structure, this implies that Γ\G is a page-1-∂∂-manifold.
(Ab2 n ): Second, we consider a complex nilmanifold, of odd complex dimension n ≥ 3, defined by (Ab2 n ). We write A 2 for (A, ∂,∂), where the exterior algebra A is now equipped with the differentials defined under (Ab2 n ) in the statement.
As before, we may also equip A with a different differential d P 2 , acting as follows in degree 1:
d P 2 (ω 1 ) = 0, . . . , d P 2 (ω n−1 ) = 0, d P 2 (ω n ) = ω 1 ∧ ω 2 + ω 3 ∧ ω 4 + · · · + ω n−2 ∧ ω n−1 .
One has d P 2 (A 1,0 ) ⊆ A 2,0 , so (A, d P 2 ) can be considered as the space of left-invariant forms on a nilmanifold endowed with a complex parallelisable structure P 2 . Hence, A P 2 := (A, ∂ P 2 ,∂ P 2 ) has the page-1-∂ P 2∂ P 2 -property, by Theorem 4.7, so we have h BC (
A P 2 ) + h A (A P 2 ) = h ∂ P 2 (A P 2 ) + h∂ P 2 (A P 2 ), by Theorem 3.2.
Let us define a C-linear involution C : A → A in degree 1 by
$$C(\omega^{2i+1}) = \omega^{2i+1}, \quad C(\bar\omega^{2i+1}) = \bar\omega^{2i+1}, \quad C(\omega^{2i}) = \bar\omega^{2i}, \qquad \text{for } 0 \le i \le \tfrac{n-1}{2},$$
together with C(ω 2n ) = ω 2n and C(ω 2n ) = ω 2n . We extend C to degree k by C(α 1 ∧ . . . ∧ α k ) := C(α 1 ) ∧ . . . ∧ C(α k ). One checks that C • ∂ =∂ P 2 • C and C •∂ = ∂ P 2 • C, so we can conclude as before.
Remark 4.9. Note that $C \circ d = d_{P_1} \circ C$ (similarly for $P_2$), and C is compatible with the real structure. So it induces an isomorphism of the underlying real Lie groups. However, the corresponding complex nilmanifolds are not biholomorphic. Indeed, the Hodge number of bidegree (1, 0) is given by $h^{1,0}_{\bar\partial} = 2$ for (Ab1_n), and $h^{1,0}_{\bar\partial} = n - 1$ for (Ab2_n), whereas $h^{1,0}_{\bar\partial_P} = n$ for any complex parallelisable nilmanifold of complex dimension n.
Note that the abelian complex structures defined by (Ab1_n) and (Ab2_n) coincide precisely when n = 3. We denote this common complex structure on G by $\tilde{J}$ and we write $\tilde{X} = (\Gamma\backslash G, \tilde{J})$ for any nilmanifold endowed with the induced complex structure, still denoted by $\tilde{J}$, where Γ ⊂ G is a lattice.
In the following proposition we prove that, in complex dimension 3, the only complex nilmanifolds which are page-(r − 1)-∂∂ for some r ∈ N are, apart from a torus, the Iwasawa manifold $I^{(3)}$ and the nilmanifolds $\tilde{X}$.
Notice that these results generalise those in §4.2.
Proposition 4.10. Let X = (Γ\G, J) be a complex nilmanifold of complex dimension 3, different from a torus, endowed with an invariant complex structure J.
If there exists r ∈ N such that X is a page-(r − 1)-∂∂-manifold, then J is equivalent to the complex parallelisable structure of $I^{(3)}$ or to the abelian complex structure $\tilde{J}$ defined by (Ab1_n) in Theorem 4.8 for n = 3. In both cases r = 2, i.e. both of these manifolds are page-1-∂∂-manifolds.
Proof. We already know by Theorems 4.7 and 4.8 that $I^{(3)}$ and $\tilde{X}$ are page-1-∂∂-manifolds.
On the other hand, it is proved in [23] that for any other invariant complex structure J (i.e. not equivalent to $\tilde{J}$ or to the complex parallelisable structure of $I^{(3)}$), the nilmanifold X = (Γ\G, J) fails to be pure in degree 4 or 5, that is, the direct sum decomposition of Definition 2.4 is not satisfied for k = 4 or k = 5 (or both). So, such complex nilmanifolds X = (Γ\G, J) are not page-(r − 1)-∂∂-manifolds for any r ∈ N.
Remark 4.11. According to [31] and [34], a compact complex manifold X is called an sGG manifold if every Gauduchon metric ω on X is sG, i.e. ∂ω n−1 is∂-exact.
By the numerical characterisation proved in [34, Theorem 1.6], a compact complex manifold is sGG if and only if b 1 = 2h 0,1 ∂ . For instance, the Iwasawa manifold is sGG (see [34]), and more generally any complex parallelisable nilmanifold is sGG, due to (14).
For the nilmanifolds endowed with the abelian complex structures defined in Theorem 4.8, we have the following Betti and Hodge numbers:
$$b_1 = 4 \;\ne\; 2n = 2\, h^{0,1}_{\bar\partial} \quad \text{for (Ab1}_n\text{)}, \qquad \text{and} \qquad b_1 = 2(n-1) \;\ne\; 2n = 2\, h^{0,1}_{\bar\partial} \quad \text{for (Ab2}_n\text{)}.$$
Hence, such complex nilmanifolds are not sGG-manifolds.
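As a quick check for n = 3 (our own computation, using the standard fact, due to Nomizu, that the cohomology of a nilmanifold is computed by left-invariant forms): for (Ab1_3) the closed invariant real 1-forms are exactly $e_1, f_1, e_2, f_2$, so
$$b_1 = \#\{e_1, f_1, e_2, f_2\} = 4 \;\ne\; 6 = 2n = 2\, h^{0,1}_{\bar\partial},$$
consistently with the failure of the sGG criterion recalled above.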
On the other hand, all the sGG nilmanifolds of complex dimension n = 3 are identified in [34, Theorem 6.1]. In particular, there exist complex nilmanifolds different from the Iwasawa manifold andX which are sGG, so by Proposition 4.10, they are not page-(r − 1)-∂∂-manifolds for any r ∈ N .
Therefore, the page-1-∂∂ and the sGG properties of compact complex manifolds are unrelated.
Nakamura solvmanifolds
Consider $G := \mathbb{C} \ltimes_\phi \mathbb{C}^2$, where φ is either $\phi(z) = \begin{pmatrix} e^{z} & 0 \\ 0 & e^{-z} \end{pmatrix}$ or $\phi(z) = \begin{pmatrix} e^{\mathrm{Re}(z)} & 0 \\ 0 & e^{-\mathrm{Re}(z)} \end{pmatrix}$
(complex parallelizable, resp. completely solvable case). Define X to be the quotient of G by a lattice of the form $\Gamma = \Gamma' \ltimes_\phi \Gamma''$ with $\Gamma' \subset \mathbb{C}$, $\Gamma'' \subset \mathbb{C}^2$ lattices. These manifolds were studied in [29] and are called Nakamura manifolds. They are among the best known solvmanifolds, but are not nilmanifolds. In [4], Angella and Kasuya computed the Hodge, Bott-Chern and Aeppli numbers depending on the lattice Γ. (These numbers turn out to be independent of Γ.) In particular, their calculations yield the equality $h_{BC}(X) = h_{\bar\partial}(X)$. Hence, by Theorem 1.3, we obtain:
Corollary 4.12. The complex parallelisable and completely solvable Nakamura manifolds considered in [4] are page-1-∂∂-manifolds.
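Unpacking the deduction (our own summary of the argument just given): by Theorem 3.2, the section-3 form of Theorem 1.3, applied with r = 1,
$$h_{BC}(X) \;\ge\; e_1(X) = h_{\bar\partial}(X), \qquad \text{with equality if and only if } X \text{ is a page-1-}\partial\bar\partial\text{-manifold},$$
so the computed equality $h_{BC}(X) = h_{\bar\partial}(X)$ for the Nakamura manifolds forces the page-1-∂∂-property.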
Construction methods for page-r-∂∂-manifolds
Among the issues that we take up in this section, there is the behaviour of page-r-∂∂-manifolds under modifications and its link with the open problem of submanifold heredity of this class of manifolds.
Theorem 5.1. Let X and Y be compact complex manifolds.
(1) If X is a page-r-∂∂-manifold and Y is a page-r′-∂∂-manifold, the product X × Y is a page-$\tilde{r}$-∂∂-manifold, where $\tilde{r} = \max\{r, r'\}$.
Conversely, if the product is a page-r-∂∂-manifold, so are both factors.
(2) For any vector bundle V over X, the projectivised bundle P(V) is a page-r-∂∂-manifold if and only if X is.
(3) Suppose X is a page-r-∂∂-manifold. Let f : X → Y be a surjective holomorphic map and assume there exists a d-closed (l, l)-current Ω on X (with l = dim X − dim Y) such that $f_*\Omega \ne 0$. Then Y is also a page-r-∂∂-manifold. In particular, this implication always holds when dim X = dim Y, e.g. for contractions (take Ω to be a constant).
(4) Given a submanifold Z ⊂ X, denote by $\tilde{X}$ the blow-up of X along Z. If X is page-r-∂∂ and Z is page-r′-∂∂, then $\tilde{X}$ is a page-$\tilde{r}$-∂∂-manifold, where $\tilde{r} = \max\{r, r'\}$. Conversely, if $\tilde{X}$ is page-r-∂∂, so are X and Z.
(5) The page-r-∂∂-property of compact complex manifolds is a bimeromorphic invariant if and only if it is stable under passage to submanifolds.
Proof. The proofs are very similar to those in [38, Cor. 28]. We will be using the characterisation of the page-r-∂∂-property in terms of occurring zigzags (Theorem 2.17) and E_1-isomorphisms (Def. 2.19), in particular Lemma 2.21. Write $A_X$ as shorthand for the double complex $(C^\infty_{\bullet,\bullet}(X,\mathbb{C}), \partial, \bar\partial)$ and $A_X[i]$ for the shifted double complex with bigrading $(A_X[i])^{p,q} := A^{p-i,\,q-i}_X$. By [38, Sect. 4], [37] and [27], we have the following E_1-isomorphisms:
$$A_{X\times Y} \simeq_1 A_X \otimes A_Y, \tag{15}$$
$$A_{\mathbb{P}(\mathcal{V})} \simeq_1 \bigoplus_{i=0}^{\mathrm{rk}\,\mathcal{V}-1} A_X[i], \tag{16}$$
$$A_X \simeq_1 A_Y \oplus A_X/f^*A_Y, \tag{17}$$
$$A_{\tilde{X}} \simeq_1 A_X \oplus \bigoplus_{i=1}^{\mathrm{codim}\,Z-1} A_Z[i]. \tag{18}$$
Since the occurring zigzags get only shifted, $A_X[i]$ satisfies the page-r-∂∂-property if and only if $A_X$ does. Furthermore, a direct sum of complexes satisfies the page-r-∂∂-property if and only if each summand does. So, the second, third and fourth E_1-isomorphisms imply (2), (3) and (4). For the first part of (1), we use the first isomorphism and the fact that one knows how irreducible subcomplexes behave under tensor product (see [38, Prop. 16]). In particular, even-length zigzags do not get longer and the product of two length-one zigzags is again of length one. For the converse, note that $A_X$ and $A_Y$ are direct summands in their tensor product, so we can argue as before.
The 'if' statement in the last part of (5) is a direct consequence of (4) and the weak factorisation theorem [1], which says that every bimeromorphic map can be factored as a sequence of blow-ups and blow-downs with smooth centres. The 'only if' part also follows from (4) (cf. also [27]). Indeed, let X be page-r-∂∂ and let Z ⊂ X be a submanifold. If Z has codimension one, we replace X by X × P 1 C (which is still page-r-∂∂ by (1)) and Z by Z × {0}. By assumption, the blow-up is still page-r-∂∂ and one can apply (4) to infer that the same holds for Z.
Since for surfaces and threefolds, the centre of a nontrivial blow-up is a point or a curve, we get Corollary 5.2. Fix any r ∈ N. The page-r-∂∂-property of compact complex surfaces and threefolds is a bimeromorphic invariant.
In the remainder of the paper, because we can handle it with very similar methods, we point out a result about a class of manifolds that contains the class of page-(r − 1)-∂∂-manifolds. Given a compact complex n-dimensional manifold X, recall the following facts and definitions:
(1) For a fixed integer r ≥ 2 and a bidegree (p, q), a C^∞ form α of bidegree (p, q) on X is E_r-closed (in the sense that it represents a cohomology class $\{\alpha\}_{E_r} \in E^{p,q}_r(X)$ on the r-th page of the Frölicher spectral sequence of X) if and only if there exist forms $u_l \in C^\infty_{p+l,\,q-l}(X,\mathbb{C})$ with l ∈ {1, . . . , r − 1} such that
$$\bar\partial\alpha = 0, \quad \partial\alpha = \bar\partial u_1, \quad \partial u_1 = \bar\partial u_2, \quad \dots, \quad \partial u_{r-2} = \bar\partial u_{r-1}. \tag{19}$$
(See [13], taken up again in [32,Proposition 2.7].)
(2) For a fixed integer r ≥ 1, an E_r-sG metric on X is a Hermitian metric (i.e. a C^∞ positive definite (1, 1)-form) ω such that $\partial\omega^{n-1}$ is E_r-exact. (3) For a fixed integer r ≥ 1, X is said to be an E_r-sG manifold if an E_r-sG metric exists on X. (See [32, Definition 3.1, (ii)].) (4) For a fixed integer r ≥ 1, X is said to be an E_r-sGG manifold if every Gauduchon metric on X is an E_r-sG metric. (See [32, Definition 3.1, (iii)].) (5) For a fixed integer r ≥ 1, every page-(r − 1)-∂∂-manifold is an E_r-sGG manifold. (See [33, Proposition 5.2].) Let us now fix an integer r ≥ 1 and consider the canonical linear map on a compact complex n-dimensional manifold X: $T_r : H^{n-1,\,n-1}_A(X,\mathbb{C}) \longrightarrow E^{n,\,n-1}_r(X)$, $[\alpha]_A \mapsto \{\partial\alpha\}_{E_r}$. Note that this map is well defined since:
Whenever α represents an Aeppli class, we have α ∈ ker(∂∂), so∂(∂α) = 0 and ∂(∂α) = 0. Hence, ∂α satisfies the E r -closedness conditions (19) with u 1 = · · · = u r−1 = 0. Thus, the class {∂α} Er is well defined.
If two (n − 1, n − 1)-forms $\alpha, \beta \in \ker(\partial\bar\partial)$ represent the same Aeppli class, there exist forms $u \in C^\infty_{n-2,\,n-1}(X,\mathbb{C})$ and $v \in C^\infty_{n-1,\,n-2}(X,\mathbb{C})$ such that $\alpha - \beta = \partial u + \bar\partial v$. Hence, $\partial\alpha - \partial\beta = \partial\bar\partial v$. In particular, $\partial\alpha - \partial\beta \in \mathrm{im}\,\bar\partial$. In the language of [32, Definition 3.1], this means that ∂α − ∂β is E_1-exact, which implies that it is E_r-exact (i.e. it represents the zero class on the r-th page of the Frölicher spectral sequence of X) for every r ≥ 1. Thus, $\{\partial\alpha\}_{E_r} = \{\partial\beta\}_{E_r}$. This proves that the map T_r is independent of the choice of representative of the class $[\alpha]_A \in H^{n-1,\,n-1}_A(X,\mathbb{C})$. Lemma 5.3. An n-dimensional compact complex manifold X is E_r-sGG if and only if T_r = 0. Proof. The argument is the analogue in this context of the proof of Observation 5.3. in [31].
Let ω be a Gauduchon metric on X. This means that ω is a Hermitian metric such that ∂∂ω n−1 = 0 (cf. [18]). We see that ω is E r -sG if and only if [ω n−1 ] A ∈ ker T r . Thus, the set of all classes [ω n−1 ] A ∈ H n−1, n−1 A (X, C) that are representable by the (n − 1)-st power ω n−1 of an E r -sG metric ω is precisely the intersection G X ∩ ker T r ,
where G X ⊂ H n−1, n−1 A (X, R) is the Gauduchon cone of X (defined in [31,Definition 5.1] as the set of all classes [ω n−1 ] A ∈ H n−1, n−1 A (X, R) that are representable by the (n − 1)-st power ω n−1 of a Gauduchon metric ω).
Since the Gauduchon cone is open in H n−1, n−1 A (X, R) and non-empty, the equality G X ∩ ker T r = G X (which holds if and only if X is an E r -sGG manifold) is equivalent to ker T r = H n−1, n−1 A (X, C), so to the map T r vanishing identically.
As a consequence of this, we get the bimeromorphic invariance of the E r -sGG property.
Corollary 5.4. Let X and X be bimeromorphically equivalent compact complex manifolds. Then, every Gauduchon metric on X is E r -sG if and only if this is true on X.
Proof. By the weak factorisation theorem [1], it suffices again to check this for blow-ups $\tilde{X} \to X$ with d-dimensional smooth centers Z ⊂ X of codimension ≥ 2. After picking any isomorphism realising formula (18), any class $c \in H^{n-1,\,n-1}_A(\tilde{X})$ can be written as $c = c_X + c_Z$, with $c_X \in H^{n-1,\,n-1}_A(X)$ and $c_Z \in H^{d,\,d}_A(Z)$. Hence, $T_r c = T_r c_X + T_r c_Z = T_r c_X$ since ∂η = 0 for all (d, d)-forms η on Z for dimension reasons.
Note that the above map T r is given in all cases by applying ∂. Generally speaking, if A = B ⊕ C, then H A (A) = H A (B) ⊕ H A (C) and E r (A) = E r (B) ⊕ E r (C) and T A r = T B r + T C r . We omitted the superscripts on T r in the above proof for the sake of simplicity.
We end the paper with an obvious open problem suggested by the examples constructed so far.
Problem 5.5. For every r ≥ 2, construct page-r-∂∂-manifolds that are not page-(r−1)-∂∂.
We believe such examples exist and the difficulty of constructing them for r ≥ 2 is related to the general difficulty of constructing manifolds with very non-degenerate Frölicher spectral sequence.
where the representatives live in different bidegrees. Therefore, H dR (Z) is not pure, so Z does not have the page-r-∂ 1 ∂ 2 -property.Remark 2.18. This theorem also gives a quick alternative proof to Prop. 4.1 (equivalence of page-0-∂ 1 ∂ 2 with the usual ∂ 1 ∂ 2 -property).
Lemma 2 .
220. ([38, Prop. 12]) If H is a linear functor from the category of double complexes to the category of vector spaces which maps squares and even-length zigzags of length < 2r to 0, then H(f ) is an isomorphism for any E r -isomorphism f .Lemma 2.21. ([38, Prop. 11]) For two double complexes A, B one has A 1 B if and only if 'the same zigzags occur in A and B', i.e. mult Z (A) = mult Z (B) for all zigzags Z. Example 2.22. Examples of functors H satisfying the hypotheses of Lemma 2.20 are provided by H dR , H p,q BC , E p,q i
Lemma 2 .
223. ([38, Ch. 4]) Let A = A X for a compact complex manifold X and define the conjugate complex byĀ p,q = A p,q and the dual complex DA by DA p,q = Hom(A n−p,n−q , C), for all p, q.
Proposition 2 . 24 .
224Fix arbitrary integers 0 ≤ k ≤ 2n. A compact complex manifold X of dimension n satisfies the complex-C ∞ -pure property in degree k if and only if it satisfies the complex-C ∞ -full property in degree 2n − k Proof. Let Z be a zigzag with H k dR (Z) = 0. The sum of the subspaces H p,q dR (Z) with p + q = k is not direct if and only if Z is of odd length and of type L. Meanwhile, the sum of the subspaces H p,q dR (Z) with p + q = k is strictly contained in H k dR (Z) if and only if Z is of odd length > 1 (i.e. not a dot) and of type M . We have seen both statements in the course of the proof of Theorem 2.17, see also [38, Prop. 6, Cor. 7]. Hence, X is complex-C ∞ -pure in degree k if and only if mult Z (A X ) = 0 for all odd zigzags Z of type L with H k dR (Z) = 0 and X is complex C ∞ -full in degree k if and only if mult Z (A X ) = 0 for all odd zigzags Z of type M and length > 1 with H k dR (Z) = 0. The result now follows from Lemma 2.23 and Lemma 2.21 since zigzags of type L and those of type M and length greater than 1 are exchanged when forming the dual complex.
Corollary 2 . 25 .
225For a compact complex manifold X, the following statements are equivalent.
Let us write as a shorthand LHS := h A + h BC and RHS r := r i=1
( 1 )
1: For a square S, we have already noticed in the proof of Theorem 2.17 that one has 1 e i (S) = 1 e i (S) = b(S) = 0, so RHS r = 0 for any r. On the other hand, on S we have: ker ∂ 1∂ = ∂ 1 a, ∂ 2 a, ∂ 1 ∂ 2 a = im ∂ 1 +im ∂ 2 and ker ∂ 1 ∩ker ∂ 2 = ∂ 1 ∂ 2 a = im ∂ 1 ∂ 2 , so h BC (S) = h A (S) = 0 and LHS(S) = 0 = RHS r (S).
( 2 )
2& (3): For any even length zigzag Z of length l = 2k, we saw in the proof of Theorem 2.17 that b(Z) = 0 and 1 e i (Z) + 2 e i (ZRHS r (Z) = min{2r, l}.
( 4 )
4& (5): For an odd length zigzag, we also saw that 1 e i (Z) = 2 e i (Z) = b(Z) = 1 for all i, so RHS r (Z) = 2, for any r.
Observation 4 . 2 .
42Any compact complex curve is Kähler, hence a ∂∂-manifold. A compact complex surface is a page-r-∂∂-manifold (for some r) if any only if it is Kähler.
Proposition 4 . 3 .
43The Iwasawa manifold is a page-1-∂∂-manifold.
)
For a fixed integer r ≥ 1, X is said to be an E r -sG manifold if an E r -sG metric exists on X.(See [32, Definition 3.1, (ii)].) (4) For a fixed integer r ≥ 1, X is said to be an E r -sGG manifold if every Gauduchon metric on X is an E r -sG metric. (See [32, Definition 3.1, (iii)].) (5) For a fixed integer r ≥ 1, every page-(r − 1)-∂∂-manifold is an E r -sGG manifold. (See [33, Proposition 5.2].) Let us now fix an integer r ≥ 1 and consider the canonical linear map on a compact complex n-] A → {∂α} Er .
Lemma 5. 3 .
3An n-dimensional compact complex manifold X is E r -sGG if and only if T r = 0.
38, Thm C], which states in particular that the Frölicher spectral sequences degenerate at page r if and only if all even length zigzags of length ≥ 2r have multiplicity zero and that H k dR (A) is pure of weight k if and only if all odd-length zigzags of length ≥ 3 have multiplicity zero.
The page-r-∂∂-property of compact complex manifolds is a bimeromorphic invariant if and only if it is stable under passage to submanifolds.Proof. The proofs are very similar to those in[38, Cor. 28]. We will be using the characterisation of the page-r-∂∂-property in terms of occuring zigzags (Theorem 2.17) and E 1 -isomorphisms (Def. 2.19), in particular Lemma 2.21.Write A X as shorthand for the double complex (C ∞ · , · (X, C), ∂,∂) and A X [i] for the shifted doublecomplex with bigrading (A X [i]) p,q := A p−i,q−i X .By[38, Sect. 4],[37] and[27], we have the following E 1isomorphisms: 1
Cf. also[35],[43],[28] and[6] for different approaches to the blow-up question in the setting of particular cohomologies.
Acknowledgements. This work was partially supported by grant PID2020-115652GB-I00, funded by MCIN/AEI/10.13039/501100011033, and grant E22-17R "Álgebra y Geometría" (Gob. Aragón/FEDER). We are grateful to the referee for useful comments that helped us to improve the presentation of the paper.
D. Abramovich, K. Karu, K. Matsuki and J. Włodarczyk, Torification and factorization of birational maps, J. Amer. Math. Soc. 15 (2002), no. 3, 531-572.
The cohomologies of the Iwasawa manifold and of its small deformations. D Angella, J. Geom. Anal. 233D. Angella, The cohomologies of the Iwasawa manifold and of its small deformations, J. Geom. Anal. 23 (2013), no. 3, 1355-1378.
Cohomologies of deformations of solvmanifolds and closedness of some properties, North-West. D Angella, H Kasuya, Eur. J. Math. 3D. Angella and H. Kasuya, Cohomologies of deformations of solvmanifolds and closedness of some properties, North-West. Eur. J. Math. 3 (2017), 75-105.
Bott-Chern cohomology of solvmanifolds. D Angella, H Kasuya, Ann. Global Anal. Geom. 524D. Angella and H. Kasuya, Bott-Chern cohomology of solvmanifolds, Ann. Global Anal. Geom. 52 (2017), no. 4, 363-411.
Complex structures of splitting type. D Angella, A Otal, L Ugarte, R Villacampa, Rev. Mat. Iberoam. 334D. Angella, A. Otal, L. Ugarte and R. Villacampa, Complex structures of splitting type, Rev. Mat. Iberoam. 33 (2017), no. 4, 1309-1350.
Note on Dolbeault cohomology and Hodge structures up to bimeromorphisms. D Angella, T Suwa, N Tardini, A Tomassini, Complex Manifolds. 71D. Angella, T. Suwa, N. Tardini and A. Tomassini, Note on Dolbeault cohomology and Hodge structures up to bimeromorphisms, Complex Manifolds 7 (2020), no. 1, 194-214.
On cohomological decomposition of almost-complex manifolds and deformations. D Angella, A Tomassini, J. Symplectic Geom. 93D. Angella and A. Tomassini, On cohomological decomposition of almost-complex manifolds and defor- mations, J. Symplectic Geom. 9, no. 3 (2011), 403-428.
On the ∂∂-Lemma and Bott-Chern cohomology. D Angella, A Tomassini, Invent. Math. 1921D. Angella and A. Tomassini, On the ∂∂-Lemma and Bott-Chern cohomology, Invent. Math. 192 (2013), no. 1, 71-81.
Hamiltonian Kähler manifolds. F A Bogomolov, Soviet Math. Dokl. 19F. A. Bogomolov, Hamiltonian Kähler manifolds, Soviet Math. Dokl. 19 (1978), 1462-1465.
On Compact Kähler Surfaces. N Buchdahl, Ann. Inst. Fourier (Grenoble). 491N. Buchdahl, On Compact Kähler Surfaces, Ann. Inst. Fourier (Grenoble) 49 (1999), no. 1, 287-302.
The class C is not stable by small deformations. F Campana, Math. Ann. 2901F. Campana, The class C is not stable by small deformations, Math. Ann. 290 (1991), no. 1, 19-30.
Dolbeault cohomology of compact nilmanifolds. S Console, A Fino, Transform. Groups. 62S. Console and A. Fino, Dolbeault cohomology of compact nilmanifolds, Transform. Groups 6 (2001), no. 2, 111-124.
A general description of the terms in the Frölicher spectral sequence. L A Cordero, M Fernández, A Gray, L Ugarte, Differential Geom. Appl. 71L. A. Cordero, M. Fernández, A. Gray and L. Ugarte, A general description of the terms in the Frölicher spectral sequence, Differential Geom. Appl. 7 (1997), no. 1, 75-84.
P. Deligne, Théorie de Hodge: II, Publ. Math. Inst. Hautes Études Sci. 40 (1971), 5-57.
Real Homotopy Theory of Kähler Manifolds. P Deligne, Ph, J Griffiths, D Morgan, Sullivan, Invent. Math. 29P. Deligne, Ph. Griffiths, J. Morgan and D. Sullivan, Real Homotopy Theory of Kähler Manifolds, Invent. Math. 29, (1975), 245-274.
The ∂∂-lemma for general Clemens manifolds. R Friedman, Pure. Appl. Math. Q. 154R. Friedman, The ∂∂-lemma for general Clemens manifolds, Pure. Appl. Math. Q. 15 (2019), no. 4, 1001-1028.
Relations between the cohomology groups of Dolbeault and topological invariant. A Frölicher, Proc. Nat. Acad. Sci. U.S.A. 41A. Frölicher, Relations between the cohomology groups of Dolbeault and topological invariant, Proc. Nat. Acad. Sci. U.S.A. 41 (1955), 641-644.
Le théorème de l'excentricité nulle. P Gauduchon, C. R. Acad. Sci. Paris, Sér. A. 285P. Gauduchon, Le théorème de l'excentricité nulle, C. R. Acad. Sci. Paris, Sér. A, 285 (1977), 387-390.
Minimal models of nilmanifolds. K Hasegawa, Proc. Amer. Math. Soc. 1061K. Hasegawa, Minimal models of nilmanifolds, Proc. Amer. Math. Soc. 106 (1989), no. 1, 65-71.
Frölicher spectral sequence and Hodge structures on the cohomology of complex parallelisable manifolds. H Kasuya, J Stelzig, to appear in Transform. GroupsH. Kasuya and J. Stelzig, Frölicher spectral sequence and Hodge structures on the cohomology of complex parallelisable manifolds, to appear in Transform. Groups.
M Khovanov, Y Qi, A faithful braid group action on the stable category of tricomplexes. 16ppPaper No. 019M. Khovanov and Y. Qi, A faithful braid group action on the stable category of tricomplexes, SIGMA Symmetry Integrability Geom. Methods Appl. 16 (2020), Paper No. 019, 32 pp.
Courants kählériens et surfaces compactes. A Lamari, Ann. Inst. Fourier (Grenoble). 491A. Lamari, Courants kählériens et surfaces compactes, Ann. Inst. Fourier (Grenoble) 49 (1999), no. 1, 263-285.
Cohomological decomposition of complex nilmanifolds. A Latorre, L Ugarte, Topol. Methods Nonlinear Anal. 451A. Latorre and L. Ugarte, Cohomological decomposition of complex nilmanifolds, Topol. Methods Nonlinear Anal. 45 (2015), no. 1, 215-231.
Twistors, Kähler manifolds, and bimeromorphic geometry II. C Lebrun, Y S Poon, J. Amer. Math. Soc. 52C. LeBrun and Y. S. Poon, Twistors, Kähler manifolds, and bimeromorphic geometry II, J. Amer. Math. Soc. 5 (1992), no. 2, 317-325.
Comparing tamed and compatible symplectic cones and cohomological properties of almost complex manifolds. T.-J Li, W Zhang, Comm. Anal. Geom. 174T.-J. Li and W. Zhang, Comparing tamed and compatible symplectic cones and cohomological properties of almost complex manifolds, Comm. Anal. Geom. 17 (2009), no. 4, 651-683.
A. I. Mal'cev, On a class of homogeneous spaces, Amer. Math. Soc. Transl. 39 (1951), 33 pp.
L. Meng, The heredity and bimeromorphic invariance of the ∂∂-lemma property, C. R. Math. Acad. Sci. Paris 359 (2021), 645-650.
Leray-Hirsh theorem and blow-up formula for Dolbeault cohomology. L Meng, Ann. Mat. Pura Appl. 199L. Meng, Leray-Hirsh theorem and blow-up formula for Dolbeault cohomology, Ann. Mat. Pura Appl. 199, (2020), 1997-2014.
Complex parallelisable manifolds and their small deformations. I Nakamura, J. Differential Geometry. 10I. Nakamura, Complex parallelisable manifolds and their small deformations, J. Differential Geometry 10 (1975), 85-112.
D. Popovici, Deformation Openness and Closedness of Various Classes of Compact Complex Manifolds; Examples, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), Vol. XIII (2014), 255-305.
D Popovici, Aeppli Cohomology Classes Associated with Gauduchon Metrics on Compact Complex Manifolds. 143D. Popovici, Aeppli Cohomology Classes Associated with Gauduchon Metrics on Compact Complex Manifolds, Bull. Soc. Math. France 143 (2015), no. 3, 1-37.
D Popovici, arXiv e-print AG 1901.04087v2Adiabatic Limit and Deformations of Complex Structures. D. Popovici, Adiabatic Limit and Deformations of Complex Structures, arXiv e-print AG 1901.04087v2.
Higher-page Bott-Chern and Aeppli Cohomologies and Applications. D Popovici, J Stelzig, L Ugarte, J. Reine Angew. Math. 777D. Popovici, J. Stelzig and L. Ugarte, Higher-page Bott-Chern and Aeppli Cohomologies and Applica- tions, J. Reine Angew. Math. 777 (2021), 157-194.
Compact complex manifolds with small Gauduchon cone. D Popovici, L Ugarte, Proc. Lond. Math. Soc. 1165D. Popovici and L. Ugarte, Compact complex manifolds with small Gauduchon cone, Proc. Lond. Math. Soc. 116 (2018), no. 5, 1161-1186.
Dolbeault cohomologies of blowing up complex manifolds. S Rao, S Yang, X Yang, J. Math. Pures Appl. 130S. Rao, S. Yang and X. Yang, Dolbeault cohomologies of blowing up complex manifolds, J. Math. Pures Appl. 130, (2019), 68-92.
On compact complex parallelisable solvmanifolds. Y Sakane, Osaka Math. J. 131Y. Sakane, On compact complex parallelisable solvmanifolds, Osaka Math. J. 13 (1976), no. 1, 187-212.
The Double Complex of a Blow-up. J Stelzig, Int. Math. Res. Not. IMRN. 202114J. Stelzig, The Double Complex of a Blow-up, Int. Math. Res. Not. IMRN 2021, no. 14, 10731-10744.
On the Structure of Double Complexes. J Stelzig, J. Lond. Math. Soc. 1042J. Stelzig, On the Structure of Double Complexes, J. Lond. Math. Soc. 104 (2021), no. 2, 956-988.
Smoothness of the Universal Deformation Space of Compact Calabi-Yau Manifolds and Its Petersson-Weil Metric. G Tian, Adv. Ser. Math. Phys. 1, World Sci. Publishing. Mathematical Aspects of String TheoryG. Tian, Smoothness of the Universal Deformation Space of Compact Calabi-Yau Manifolds and Its Petersson-Weil Metric, Mathematical Aspects of String Theory (San Diego, 1986), Adv. Ser. Math. Phys. 1, World Sci. Publishing, Singapore (1987), 629-646.
The Weil-Petersson Geometry of the Moduli Space of SU (n ≥ 3) (Calabi-Yau) Manifolds I. A N Todorov, Comm. Math. Phys. 126A. N. Todorov, The Weil-Petersson Geometry of the Moduli Space of SU (n ≥ 3) (Calabi-Yau) Manifolds I, Comm. Math. Phys. 126 (1989), 325-346.
H.-C Wang, Complex Parallelisable Manifolds. 5H.-C. Wang, Complex Parallelisable Manifolds, Proc. Amer. Math. Soc. 5 (1954), 771-776.
On the Geometry of Superstrings with Torsion. C C Wu, Cambridge MA 02138Department of Mathematics, Harvard UniversityThesisC. C. Wu, On the Geometry of Superstrings with Torsion, Thesis (Ph.D.), Department of Mathematics, Harvard University, Cambridge MA 02138, (2006).
Bott-Chern blow-up formula and bimeromorphic invariance of the ∂∂-Lemma for threefolds. S Yang, X Yang, Trans. Amer. Math. Soc. 373S. Yang and X. Yang, Bott-Chern blow-up formula and bimeromorphic invariance of the ∂∂-Lemma for threefolds, Trans. Amer. Math. Soc. 373 (2020), 8885-8909.
| []
|
[
"Multi-View Substructure Learning for Drug-Drug Interaction Prediction",
"Multi-View Substructure Learning for Drug-Drug Interaction Prediction"
]
| [
"Zimeng Li \nCollege of Information Science and Engineering\nHunan University\n410086ChangshaChina\n\nMicrosoft Research Asia\n10080BeijingChina\n",
"Shichao Zhu \nMicrosoft Research Asia\n10080BeijingChina\n\nSchool of Cyber Security\nUniversity of Chinese Academy of Sciences\n100049BeijingChina\n\nInstitute of Information Engineering\nChinese Academy of Sciences\n100093BeijingChina\n",
"Tie-Yan Liu \nMicrosoft Research Asia\n10080BeijingChina\n",
"Xiangxiang Zeng [email protected] \nCollege of Information Science and Engineering\nHunan University\n410086ChangshaChina\n\nMicrosoft Research Asia\n10080BeijingChina\n",
"Tong Wang [email protected] \nMicrosoft Research Asia\n10080BeijingChina\n"
]
| [
"College of Information Science and Engineering\nHunan University\n410086ChangshaChina",
"Microsoft Research Asia\n10080BeijingChina",
"Microsoft Research Asia\n10080BeijingChina",
"School of Cyber Security\nUniversity of Chinese Academy of Sciences\n100049BeijingChina",
"Institute of Information Engineering\nChinese Academy of Sciences\n100093BeijingChina",
"Microsoft Research Asia\n10080BeijingChina",
"College of Information Science and Engineering\nHunan University\n410086ChangshaChina",
"Microsoft Research Asia\n10080BeijingChina",
"Microsoft Research Asia\n10080BeijingChina"
]
| []
| Drug-drug interaction (DDI) prediction provides a drug combination strategy for systemically effective treatment. Previous studies usually model drug information constrained on a single view such as the drug itself, leading to incomplete and noisy information, which limits the accuracy of DDI prediction. In this work, we propose a novel multiview drug substructure network for DDI prediction ("MSN-DDI"), which learns chemical substructures from both the representations of the single drug ("intra-view") and the drug pair ("inter-view") simultaneously and utilizes the substructures to update the drug representation iteratively. Comprehensive evaluations demonstrate that MSN-DDI has almost solved DDI prediction for existing drugs by achieving a relatively improved accuracy of 19.32% and an over 99% accuracy under the transductive setting. More importantly, MSN-DDI exhibits better generalization ability to unseen drugs with a relatively improved accuracy of 7.07% under more challenging inductive scenarios. Finally, MSN-DDI improves prediction performance for real-world DDI applications to new drugs.MSN-DDI | 10.48550/arxiv.2203.14513 | [
"https://arxiv.org/pdf/2203.14513v1.pdf"
]
| 247,762,875 | 2203.14513 | a0d24a815c7eadb3a661ac6b7e124a541b88cf97 |
Multi-View Substructure Learning for Drug-Drug Interaction Prediction
Zimeng Li
College of Information Science and Engineering
Hunan University
410086ChangshaChina
Microsoft Research Asia
10080BeijingChina
Shichao Zhu
Microsoft Research Asia
10080BeijingChina
School of Cyber Security
University of Chinese Academy of Sciences
100049BeijingChina
Institute of Information Engineering
Chinese Academy of Sciences
100093BeijingChina
Tie-Yan Liu
Microsoft Research Asia
10080BeijingChina
Xiangxiang Zeng [email protected]
College of Information Science and Engineering
Hunan University
410086ChangshaChina
Microsoft Research Asia
10080BeijingChina
Tong Wang [email protected]
Microsoft Research Asia
10080BeijingChina
Multi-View Substructure Learning for Drug-Drug Interaction Prediction
† These authors contributed equally to this work.drug-drug interactionsmulti-view learningsubstructure interactionmolecular graph
Drug-drug interaction (DDI) prediction provides a drug combination strategy for systemically effective treatment. Previous studies usually model drug information constrained on a single view such as the drug itself, leading to incomplete and noisy information, which limits the accuracy of DDI prediction. In this work, we propose a novel multiview drug substructure network for DDI prediction ("MSN-DDI"), which learns chemical substructures from both the representations of the single drug ("intra-view") and the drug pair ("inter-view") simultaneously and utilizes the substructures to update the drug representation iteratively. Comprehensive evaluations demonstrate that MSN-DDI has almost solved DDI prediction for existing drugs by achieving a relatively improved accuracy of 19.32% and an over 99% accuracy under the transductive setting. More importantly, MSN-DDI exhibits better generalization ability to unseen drugs with a relatively improved accuracy of 7.07% under more challenging inductive scenarios. Finally, MSN-DDI improves prediction performance for real-world DDI applications to new drugs.MSN-DDI
Introduction
Drug combinations can provide therapeutic benefits but also increase the risk of adverse side effects caused by the physicochemical incompatibility of the drugs [1,2,3]. The identification of drug-drug interactions (DDIs) remains a challenging task, since the huge number of possible drug combinations makes pharmaceutical research and clinical trials highly expensive and inefficient, even with high-throughput methods. Many computational methods for predicting side effects caused by DDIs have emerged and have proven to be an effective alternative that alleviates this challenge [4,5,6,7,8]. Most of these methods follow the assumption that drugs with similar features are more likely to have similar interactions. In order to make full use of the raw features of drugs, i.e., drug structures, chemical properties and molecular fingerprints, recent works mainly focus on utilizing the powerful feature extraction ability of deep neural networks [9,10,11,12]. As a drug can be represented as a graph based on its molecular structure, graph neural networks (GNNs) have shown impressive representation learning ability for drug molecules. Existing GNN-based methods for DDI [13,14,15] usually take advantage of GNNs' topological and semantic representation capabilities to model the drug itself, and then learn the representation of drug pairs based on the respective representation of each drug. Finally, the representations of drugs or drug pairs are used for the final DDI prediction.
Considering that a drug can be divided into several functional groups or chemical substructures which jointly determine its overall pharmacological properties [16], some studies were motivated to refine drugs into substructures for DDI prediction [17,18,19,20]. Existing works can be roughly classified into two categories, implicit and explicit, depending on how the substructure is used. The implicit manner usually takes substructure features as inputs of the model and does not explicitly learn a specific substructure through the neural network [17,18]. In contrast, the explicit approach, including SSI-DDI [19] and GMPNN-CS [20], extracts the respective substructures of a pair of drugs in the drug representation learning stage and predicts the DDI effect by identifying pairwise interactions between the two drugs' substructures in the final readout module, leading to an improvement in performance over previous methods. However, the extracted substructures of a drug pair are only combined and used in the readout module for the final DDI prediction instead of playing a direct role in drug representation learning.
In most DDI prediction algorithms, drug representation learning is a single-view process in the message passing module that only encodes information from the drug itself, which may hinder further improvement of DDI prediction accuracy. There have been some attempts to adopt multi-view representation learning for DDI prediction, such as MHCADDI [21], GoGNN [22] and MIRACLE [23]. For example, MHCADDI [21] considers an external message passing mechanism between drugs' structures to integrate joint drug-drug information during the representation learning phase for individual drugs. GoGNN [22] leverages a dual attention mechanism to hierarchically capture information from both entity graphs and the structured-entity interaction graph. However, multi-view learning in these methods only serves to learn better drug representations, while it could be further employed in the readout module for the final DDI prediction.
Therefore, unlike the above methods, by combining the advantages of both substructures and multiple views, we propose a novel multi-view substructure learning framework for DDI prediction (termed "MSN-DDI"), which learns substructures from the intra-view and inter-view simultaneously without depending on additional domain knowledge. This makes the model equally applicable to inductive settings where only the chemical structure of the drug itself is accessible. MSN-DDI consists of the following main components: repetitive multi-view substructure extraction blocks as the encoders (MSN encoders) to model different orders of neighboring information, layer-wise substructure pooling layers as the substructure extraction module to learn substructures from different perspectives, and a self-attention scoring function as the MSN decoder for the final DDI prediction. Specifically, we regard the drug representations as the intra-view and drug pair interactions as the inter-view, and thus define graph attention network layers for each view to learn two corresponding sets of substructures. The two sets of substructures are further used to update node representations for the next MSN encoder block. Our comprehensive evaluation on the DrugBank and Twosides datasets demonstrates that MSN-DDI achieves a 19.32% relative accuracy improvement in the transductive setting and a 7.07% relative improvement for unseen drugs in the inductive setting. Furthermore, the AUC reaches 99.47% on DrugBank and 99.90% on Twosides in the transductive setting, which indicates that MSN-DDI has almost solved the DDI prediction task for existing drugs. In addition, MSN-DDI demonstrates useful DDI prediction for newly approved drugs and can also provide clues for interpreting the DDI effect in terms of interactions among substructures of the drug pair. All these results suggest that MSN-DDI can act as a useful tool for DDI prediction and thus greatly facilitate the drug design and discovery process.
Results
Overview of MSN-DDI architecture
Inspired by recent advances in substructure and multi-view representation learning, our approach learns the drug representations and drug-drug interactions from the inter-view and intra-view simultaneously, which achieves better results than state-of-the-art methods for DDI prediction. As shown in Figure 1, MSN-DDI consists of the following components:
• MSN encoder: Following a bipartite graph that encodes the input features of a drug pair, a series of repetitive MSN encoder blocks capture the interactions within each drug (intra-view) and across drug boundaries (inter-view) simultaneously. For each drug, two dedicated GATs follow a shared GAT layer in each block to learn atom-level representations from both views. The inter-view and intra-view information are then aggregated to update node representations for the next MSN encoder (a simplified sketch of one block is given after this list).
• Substructure extraction module: Following an MSN encoder block, a self-attention graph pooling layer is designed to learn and extract substructure representations for both drugs. Since the series of encoders captures different orders of neighbouring information, the substructures following these encoders are extracted from different perspectives.
• MSN decoder: a co-attention scoring function that predicts the probability of the triplet (d_i, r, d_j), where d_i and d_j stand for the drug pair and r stands for a type of drug pair interaction. In this component, each pair of the drugs' substructures is weighted by how important or relevant it is to the final DDI prediction.
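To make the information flow of a single block concrete, the following is a minimal sketch of one MSN encoder block. It is an illustrative simplification rather than the authors' released implementation: the single-head GATConv layers, the dot-product cross-attention used for the inter-view messages, and the simple additive aggregation are all assumptions.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv


class MSNEncoderBlock(torch.nn.Module):
    """One multi-view block: shared GAT, then intra-view GAT and inter-view cross-attention."""

    def __init__(self, dim):
        super().__init__()
        self.shared_gat = GATConv(dim, dim)          # shared over both drugs
        self.intra_gat = GATConv(dim, dim)           # intra-view: message passing within a drug
        self.inter_proj = torch.nn.Linear(dim, dim)  # inter-view: projects cross-drug messages

    @staticmethod
    def cross_attention(h_src, h_tgt):
        # every atom of the source drug attends to all atoms of the other drug
        attn = torch.softmax(h_src @ h_tgt.t() / h_src.size(-1) ** 0.5, dim=-1)
        return attn @ h_tgt

    def forward(self, x_a, edge_a, x_b, edge_b):
        h_a = F.relu(self.shared_gat(x_a, edge_a))
        h_b = F.relu(self.shared_gat(x_b, edge_b))
        intra_a = F.relu(self.intra_gat(h_a, edge_a))
        intra_b = F.relu(self.intra_gat(h_b, edge_b))
        inter_a = self.inter_proj(self.cross_attention(h_a, h_b))
        inter_b = self.inter_proj(self.cross_attention(h_b, h_a))
        # aggregate both views to update node representations for the next block
        return intra_a + inter_a, intra_b + inter_b
```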
Performance evaluation in the transductive setting for existing drugs
We conduct experiments on two standard benchmarks, DrugBank and Twosides, to evaluate the performance of our method MSN-DDI. The statistics of the datasets are summarized in Table S1. For both datasets, each drug is associated with its SMILES string [24], and its molecular graphical representation is converted from SMILES using the python library RDKit [25], which contains 55-dimensional initial chemical features for each atom, such as atomic symbols and degrees of the atoms.
Table 1 Performance evaluation between MSN-DDI and baselines for the transductive setting on DrugBank and Twosides datasets (columns: Method; ACC(%), AUC(%), AP(%) and F1(%) for each dataset; rows: MR-GNN and the other compared methods). The highest value in each column is shown in bold. For performance improvement over the second-best approach, a relative improvement percentage is shown in the bracket.
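To make the input featurization concrete, a minimal SMILES-to-graph conversion with RDKit could look as follows. The feature set used here (element one-hot plus degree) is only a toy stand-in for the 55-dimensional atom features described above, and the helper name is hypothetical.

```python
from rdkit import Chem

SYMBOLS = ['C', 'N', 'O', 'S', 'F', 'Cl', 'Br', 'P', 'I']  # toy element vocabulary

def smiles_to_graph(smiles):
    mol = Chem.MolFromSmiles(smiles)
    # per-atom features: element one-hot and atom degree (MSN-DDI uses 55 dimensions per atom)
    nodes = [[int(a.GetSymbol() == s) for s in SYMBOLS] + [a.GetDegree()]
             for a in mol.GetAtoms()]
    # undirected bonds as atom-index pairs
    edges = [(b.GetBeginAtomIdx(), b.GetEndAtomIdx()) for b in mol.GetBonds()]
    return nodes, edges

nodes, edges = smiles_to_graph('CC(=O)OC1=CC=CC=C1C(=O)O')  # aspirin
```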
Similar to previous studies, we evaluate performance in two settings, transductive and inductive. In the transductive setting, drugs in the test set also exist in the training set, while the inductive setting contains drugs fully or partially absent from the training set to examine the model's generalization ability to new drugs. Following the setting in the related work [20], for the transductive evaluation on DrugBank and Twosides, we perform three randomized folds with the same data split ratio of training:validation:test = 6:2:2, where the stratified split on both datasets is performed over entire DDI triplets, including drugs and interactions. Furthermore, to make a fair comparison, MSN-DDI adopts the standard deep learning experiment process and the same positive and negative samples as all baselines. Experiment results are reported as the means and standard deviations of the following metrics across the three folds: the accuracy (ACC), the area under the receiver operating characteristic (AUC), the average precision (AP), and the F1 score. The detailed definitions of the metrics are given as follows.
• the accuracy (ACC): is defined as the number of correct predictions divided by the number of total predictions.
• the area under the receiver operating characteristic (AUC): is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one.
• the average precision (AP): is calculated by taking the mean average precision over all classes.
• the F1 score: is the harmonic mean of precision and recall (a computation sketch is given below).
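For reference, these metrics can be computed with scikit-learn as sketched below, treating DDI prediction as binary classification over positive and negative triplets; the threshold and averaging choices are illustrative assumptions rather than the exact evaluation code of the paper.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, average_precision_score, f1_score

def evaluate(y_true, y_score, threshold=0.5):
    # y_true: binary labels for DDI triplets; y_score: predicted probabilities
    y_pred = (y_score >= threshold).astype(int)
    return {
        'ACC': accuracy_score(y_true, y_pred),
        'AUC': roc_auc_score(y_true, y_score),
        'AP': average_precision_score(y_true, y_score),
        'F1': f1_score(y_true, y_pred),
    }

print(evaluate(np.array([1, 0, 1, 1]), np.array([0.9, 0.2, 0.7, 0.4])))
```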
As shown in Table 1, MSN-DDI achieves the best performance on all metrics compared with state-of-the-art (SoTA) algorithms on both DrugBank and Twosides. Specifically, although DDI prediction on DrugBank by the baseline models is already highly accurate, our model still makes further improvements on all evaluation metrics. Moreover, MSN-DDI achieves significant improvements on Twosides, with 19.32% and 17.54% relative gains in ACC and F1 score over the second-best approach, GMPNN-CS. Furthermore, the AUC reaches 99.47% on DrugBank and 99.90% on Twosides, which indicates that MSN-DDI makes near-perfect DDI predictions in the transductive setting and has basically solved the DDI prediction task for existing drugs. These results verify the powerful representational ability of MSN-DDI with its novel components for multi-view substructure learning.
Performance evaluation in the inductive setting for unseen drugs
The inductive setting is more challenging than the transductive setting since the DDI triplets in the test sets contain unseen drugs. This cold-start scenario is an extremely difficult test of the model's generalization ability, because no prior knowledge of the unseen drugs is available during training. In this setting, we split the dataset with respect to the drugs following the common definitions in [19,20,26]. Specifically, we randomly picked 20% of the drugs as unknown drugs and regarded the remaining drugs as existing ones. The positive and negative samples in the training set are all DDI triplets in which both drugs are existing drugs, while the test set follows one of two splitting strategies:
• S1 Partition: the positive and negative samples in the test set contain two unknown drugs. This task is to predict DDIs for a pair of new drugs for which no interaction with any other drug in the training set is known.
• S2 Partition: the positive and negative samples in the test set contain one unknown drug and one existing drug. This task is to predict DDIs for a new drug in combination with an existing drug (a splitting sketch is given below).
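The drug-wise split can be sketched as follows; the helper and the triplet representation (drug_i, relation, drug_j) are hypothetical, chosen only to illustrate how the S1 and S2 partitions are formed.

```python
import random

def inductive_split(triplets, drugs, unseen_frac=0.2, seed=0):
    # triplets: list of (drug_i, relation, drug_j) tuples; drugs: list of all drug ids
    rng = random.Random(seed)
    unseen = set(rng.sample(drugs, int(unseen_frac * len(drugs))))
    train = [t for t in triplets if t[0] not in unseen and t[2] not in unseen]
    s1 = [t for t in triplets if t[0] in unseen and t[2] in unseen]        # both drugs unseen
    s2 = [t for t in triplets if (t[0] in unseen) != (t[2] in unseen)]     # exactly one drug unseen
    return train, s1, s2
```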
Furthermore, to avoid potential bias in the selection of unknown drugs, we repeated this process three times in parallel and thus performed 3-fold cross-validation for the inductive setting. Previous studies show that the chemical structures of new drugs in the test set are very different from those of existing drugs in the training set due to large differences in scaffolds [19,20]. As shown in Table 2, all metrics are clearly lower than those in the transductive setting, which indicates that accurate DDI prediction for unseen drugs is much more difficult. Similar to the transductive setting, MSN-DDI achieves the best performance on all metrics when compared with state-of-the-art algorithms. Furthermore, our model outperforms the second-best algorithm by a large margin, e.g., relative AUC improvements of 9.11% and 7.27% on the S1 and S2 partitions, respectively. These results indicate that our model does not only consider the intra-structure of the drug itself but also learns generalizable properties through the multi-view substructure learning framework, which greatly compensates for the lack of prior knowledge and interaction information for unseen drugs.
Ablation study for the effectiveness of model design
To study where the performance gains come from, we perform detailed ablation studies to stress the importance of various components of MSN-DDI. Specifically, the following baselines are evaluated and compared with MSN-DDI:
• wo inter: an architecture where the inter-view message passing from the drug-drug interaction module is removed and only internal message passing is used. The side effect probability is then computed by concatenating the individual drug representations. This serves to demonstrate the importance of jointly learning drug embeddings (i.e., inter-view interactions).
• wo intra: an architecture where the intra-view message passing within the drug itself is removed and only external message passing is used. The side effect probability is again computed by concatenating the individual drug representations. Similar to the above, this serves to demonstrate the importance of simultaneously performing both inter-view and intra-view feature extraction.
• wo update: an architecture where the inter-view interactions are only considered in the substructure extraction module and have no direct influence on the node feature update. This serves to demonstrate the effect of the inter-view interaction on drug representation learning.
• wo SAGPool: an architecture where the readout function in the substructure extraction module is replaced by a simple sum function, without distinguishing the importance of nodes. In this setting, both the inter and intra drug embeddings are computed by the sum readout function. This serves to demonstrate the importance of performing self-attention graph pooling for substructure extraction.
• wo co-attention: an architecture where the final DDI prediction is still computed from the pairwise interactions between substructures of a pair of drugs, but without using the global attention among all substructures extracted from the MSN encoder blocks. As this setup has also been used in previous studies [19], it serves to evaluate the co-attention's contribution to our performance gains.
As shown in Table S3 and Table 3, the full MSN-DDI architecture outperforms all variants, which indicates the effectiveness of the proposed components. In particular, our method significantly outperforms all other variants in the inductive setting, showing the considerable modeling advantages of MSN-DDI. We further summarize the conclusions as follows: (1) The inter-view contributes most to MSN-DDI, since the performance of the variant wo inter decreases significantly. The fact that the performance of the two variants wo inter and wo intra declines to some extent implies that it is beneficial to learn drug-drug representations jointly from a multi-view perspective rather than from a single view.
(2) Comparing the variant wo update with MSN-DDI, the performance also declines by a large margin, which indicates that, besides its role in substructure extraction and the final DDI prediction, the inter-view information is useful for drug representation learning when it is directly incorporated into the node update process. (3) As we can see from the results of wo SAGPool in the inductive setting on the Twosides dataset, the performance of the model without the final SAGPool module does not decline significantly, which reflects that the improvement brought by our model in this task does not depend on the specific readout function. (4) The small performance drop of wo co-attention indicates that the co-attention mechanism only plays a minor role in the performance gains, which reflects that the substructures are distinct and robust and can be used directly for DDI prediction without such a complicated attention mechanism. Overall, the two new modules proposed in MSN-DDI, the multi-view interaction and update modules, greatly improve the performance of DDI prediction on the Twosides dataset.
Real-world DDI applications
To demonstrate the usefulness of MSN-DDI for real-world DDI applications, we first evaluate DDI prediction for newly FDA-approved drugs using a model trained only on the information of older drugs. We collected the FDA drug approval information [27] for all drugs in the DrugBank dataset and divided them into two parts according to whether the approval date was before or after 2017. The DDI triplets containing two old drugs form the training set, while the remaining DDI triplets containing at least one new drug form the test set (see Supplementary Table S5 for more details). We trained and evaluated MSN-DDI with the same hyperparameters adopted in the inductive setting. Furthermore, we also selected the two strongest DDI prediction algorithms from the above performance evaluation, SSI-DDI and GMPNN-CS, for comparison; both were reproduced on the same dataset with their default hyperparameters. As shown in Supplementary Table S6, MSN-DDI outperformed SSI-DDI and GMPNN-CS on all four metrics, ACC, AUROC, AP and F1, by a large margin. These results confirm that MSN-DDI has captured generalizable information about drug-drug interactions and is therefore applicable to newly approved drugs. Figure 2 illustrates the ROC and PR curves of the three algorithms on the test set. MSN-DDI achieves significantly larger areas under both the ROC and PR curves than SSI-DDI and GMPNN-CS, which also indicates that MSN-DDI distinguishes positive DDI effects from negative ones well.
Furthermore, we exhibit the usefulness of MSN-DDI with a case study of an anti-COVID-19 drug combination. We use the MSN-DDI model trained on the DrugBank dataset to predict the probability of the triplet (Hydroxychloroquine, increase the QTc prolongation, Azithromycin), where both Hydroxychloroquine and Azithromycin are known drugs for our model, i.e., a transductive case. These two drugs were recommended as a potential anti-COVID-19 combination at the outbreak of the pandemic but were later found to have a serious side effect: it has been verified that the risk or severity of QTc prolongation can be increased when Hydroxychloroquine is combined with Azithromycin [28]. Our model can effectively filter out such drug combinations and thus contribute to anti-COVID-19 therapies.
In Figure 3, we extract and illustrate the valid substructures of the two drugs for the event of increasing QTc prolongation from the inter-view, intra-view and multi-view perspectives, respectively. As in the ablation study, intra-view and inter-view are two variants of our model that remove the inter-view interaction and intra-view interaction, respectively, while the multi-view panel corresponds to the full MSN-DDI network. We first use the SAGPooling layers to obtain the contribution score of each atom and extract the top-k atoms of each block as important atoms based on these scores (k = 10 for Hydroxychloroquine and k = 15 for Azithromycin). Then, we select the atoms that appear among the important atoms in more than 3 blocks as the final learned substructures and highlight them with green shadows (Figure 3).
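The atom-selection procedure can be sketched as follows; block_scores is a hypothetical array holding the per-block SAGPooling contribution scores of one drug.

```python
import numpy as np

def select_substructure_atoms(block_scores, k, min_occurrence=3):
    # block_scores: shape (num_blocks, num_atoms), contribution score of each atom per block
    counts = np.zeros(block_scores.shape[1], dtype=int)
    for scores in block_scores:
        top_k = np.argsort(scores)[-k:]   # top-k important atoms of this block
        counts[top_k] += 1
    # keep atoms that appear among the top-k in more than `min_occurrence` blocks
    return np.where(counts > min_occurrence)[0]
```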
As shown in Figure 3 (A) and (B), the highlighted atoms learned from a single perspective (intra-view or inter-view) are markedly different, dispersed over the whole structure of the drug, and fail to form coherent substructures, which may lead to lower DDI prediction scores. In comparison, the important atoms learned by MSN-DDI are concentrated in certain regions of the drug's chemical structure and form stable substructures; these atoms can be regarded as the combination of the important atoms from both the inter-view and intra-view. As a result, MSN-DDI with the multi-view design makes a near-perfect prediction (a score of 0.99) for this case. This analysis implies that MSN-DDI is not only a good DDI predictor but can also provide clues for interpreting the DDI effect in terms of possible interacting atoms, which may help uncover the underlying mechanism of a drug pair combination.
Conclusions
In this work, we have presented a multi-view substructure learning framework for predicting the possible polypharmacy side effects of drug-drug combinations. Extensive experiments have verified its state-of-the-art performance for DDI prediction in both transductive and inductive settings. MSN-DDI achieves a relative accuracy improvement of 19.32% on Twosides in the transductive setting compared with the SoTA methods. More importantly, in the more challenging inductive scenarios, it achieves average accuracy improvements of 7.07% on DrugBank and 5.40% on Twosides compared with the SoTA methods. By performing intra-view message passing within each drug as well as inter-view message passing between the two drugs, we have demonstrated the benefit of integrating joint drug-drug information during the substructure representation learning phase for DDI prediction. Future work could focus on further improving the generalization of the model to new drugs in the inductive learning setting, which approximates the real-world scenario of a new drug with no known prior drug interactions.
Fig. 1 Schematic of the MSN-DDI architecture. (A) The framework of MSN-DDI. The input drug pair is encoded by a bipartite graph followed by a series of repetitive MSN encoder blocks. For each block, substructures are extracted by the substructure module, in which substructure-specific embeddings h(l) are summed up based on a SAGPooling layer. Finally, all substructures are fed into the MSN decoder, which is defined as a co-attention scoring function over a given triplet, for DDI prediction. (B) MSN encoder. The drugs are first encoded by a shared GAT layer, and then embedded by the intra-view and inter-view interaction modules through two dedicated GAT layers. The inter-view and intra-view information are then aggregated to update node representations for the next MSN encoder.
Fig. 2 Performance evaluation of SSI-DDI, GMPNN-CS and MSN-DDI for newly approved drugs. (A) The receiver operating characteristic (ROC) curves of the three algorithms evaluated on the test set. (B) The precision versus recall (PR) curves of the three algorithms evaluated on the test set.
Fig. 3 Visualization of DDI prediction for the triplet (Hydroxychloroquine, increase the QTc prolongation, Azithromycin) from the intra-view (A), inter-view (B) and multi-view (C), respectively. The learned substructures consist of the atoms highlighted with green shadows, which are selected in terms of high contribution scores in DDI prediction and high occurrence counts across all blocks of our model.
Table 2 Performance evaluation of MSN-DDI and baselines for the inductive setting on the DrugBank dataset (%). The highest value in each column is shown in bold. For performance improvement over the second-best approach, a relative improvement percentage is shown in the bracket.
Method | S1 Partition (new drug, new drug): ACC / AUC / AP / F1 | S2 Partition (new drug, existing drug): ACC / AUC / AP / F1
MR-GNN | 62.63±0.77 / 70.92±0.84 / 73.01±1.23 / 45.81±2.51 | 74.67±0.33 / 83.15±0.60 / 83.81±0.69 / 69.88±0.86
MHCADDI | 66.50±0.62 / 72.53±0.92 / 71.06±1.61 / 67.21±0.59 | 70.58±0.94 / 77.84±1.08 / 76.16±1.45 / 72.74±0.65
SSI-DDI | 65.40±1.30 / 73.43±1.81 / 75.03±1.42 / 54.12±3.46 | 76.38±0.92 / 84.23±1.05 / 84.94±0.76 / 73.54±1.50
GAT-DDI | 66.31±0.61 / 72.75±0.78 / 71.61±1.00 / 68.68±0.60 | 69.83±1.41 / 77.29±1.63 / 75.79±1.95 / 73.01±0.85
GMPNN-CS | 68.57±0.30 / 74.96±0.40 / 75.44±0.50 / 65.32±0.23 | 77.72±0.30 / 84.84±0.15 / 84.87±0.40 / 78.29±0.16
MSN-DDI | 73.42±1.29 / 81.79±1.12 / 81.82±1.48 / 70.34±0.98 | 81.92±1.20 / 91.01±0.76 / 91.09±0.93 / 80.18±1.49
Improvement | +4.85 (7.07%) / +6.83 (9.11%) / +6.38 (8.46%) / +1.66 (2.42%) | +4.2 (5.4%) / +6.17 (7.27%) / +6.15 (7.24%) / +1.89 (2.41%)
Table 3 Performance evaluation between MSN-DDI and its five variants on the DrugBank dataset in the inductive setting. The highest value in each column is shown in bold.
Method | S1 Partition: ACC / AUC / AP / F1 | S2 Partition: ACC / AUC / AP / F1
wo inter | 65.45 / 72.67 / 73.69 / 57.94 | 75.30 / 83.08 / 84.69 / 71.51
wo intra | 68.23 / 75.81 / 76.07 / 64.06 | 76.94 / 85.93 / 86.16 / 74.13
wo update | 66.52 / 74.05 / 75.40 / 58.71 | 75.57 / 84.13 / 85.13 / 71.74
wo SAGPool | 69.71 / 77.05 / 77.03 / 66.11 | 79.31 / 87.76 / 87.62 / 77.77
wo co-attention | 71.36 / 78.56 / 77.14 / 69.98 | 78.65 / 86.69 / 86.63 / 76.94
MSN-DDI | 73.42 / 81.79 / 81.82 / 70.34 | 81.92 / 91.01 / 91.09 / 80.18
[1] Tatonetti, N. P., Ye, P. P., Daneshjou, R. & Altman, R. B. Data-driven prediction of drug effects and interactions. Science translational medicine 4, 125ra31-125ra31 (2012).
[2] Jia, J. et al. Mechanisms of drug combinations: interaction and network perspectives. Nature reviews Drug discovery 8, 111-128 (2009).
[3] Han, K. et al. Synergistic drug combinations for cancer identified in a crispr screen for pairwise genetic interactions. Nature biotechnology 35, 463-474 (2017).
[4] Yu, H. et al. Predicting and understanding comprehensive drug-drug interactions via semi-nonnegative matrix factorization. BMC systems biology 12, 101-110 (2018).
[5] Shi, J.-Y., Mao, K.-T., Yu, H. & Yiu, S.-M. Detecting drug communities and predicting comprehensive drug-drug interactions via balance regularized semi-nonnegative matrix factorization. Journal of cheminformatics 11, 1-16 (2019).
[6] Li, X. et al. Prediction of synergistic anti-cancer drug combinations based on drug target network and drug induced gene expression profiles. Artificial intelligence in medicine 83, 35-43 (2017).
[7] Ferdousi, R., Safdari, R. & Omidi, Y. Computational prediction of drug-drug interactions based on drugs functional similarities. Journal of biomedical informatics 70, 54-64 (2017).
[8] Kastrin, A., Ferk, P. & Leskošek, B. Predicting potential drug-drug interactions on topological and semantic similarity features using statistical learning. PloS one 13, e0196865 (2018).
[9] Huang, K., Xiao, C., Hoang, T., Glass, L. & Sun, J. Caster: Predicting drug interactions with chemical substructure representation. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, 702-709 (2020).
[10] Deng, Y. et al. A multimodal deep learning framework for predicting drug-drug interaction events. Bioinformatics 36, 4316-4322 (2020).
[11] Chen, Y. et al. Muffin: multi-scale feature fusion for drug-drug interaction prediction. Bioinformatics 37, 2651-2658 (2021).
[12] Zhang, Y., Qiu, Y., Cui, Y., Liu, S. & Zhang, W. Predicting drug-drug interactions using multi-modal deep auto-encoders based network embedding and positive-unlabeled learning. Methods 179, 37-46 (2020).
[13] Xu, N., Wang, P., Chen, L., Tao, J. & Zhao, J. Mr-gnn: Multi-resolution and dual graph neural network for predicting structured entity interactions. arXiv preprint arXiv:1905.09558 (2019).
[14] Yu, Y. et al. Sumgnn: multi-typed drug interaction prediction via efficient knowledge graph summarization. Bioinformatics 37, 2988-2995 (2021).
[15] Feeney, A. et al. Relation matters in sampling: A scalable multi-relational graph neural network for drug-drug interaction prediction. arXiv preprint arXiv:2105.13975 (2021).
[16] Harrold MW, Z. R. Basic concepts in medicinal chemistry. Drug Dev Ind Pharm 0363-9045 (2014).
[17] Jin, B. et al. Multitask dyadic prediction and its application in prediction of adverse drug-drug interaction. In Proceedings of the AAAI conference on artificial intelligence, vol. 31 (2017).
[18] Xu, N., Wang, P., Chen, L., Tao, J. & Zhao, J. Mr-gnn: Multi-resolution and dual graph neural network for predicting structured entity interactions. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-19 (2019).
[19] Nyamabo, A. K., Yu, H. & Shi, J.-Y. Ssi-ddi: substructure-substructure interactions for drug-drug interaction prediction. Briefings in Bioinformatics (2021).
[20] Nyamabo, A. K., Yu, H., Liu, Z. & Shi, J.-Y. Drug-drug interaction prediction with learnable size-adaptive molecular substructures. Briefings in Bioinformatics (2021).
[21] Deac, A., Huang, Y.-H., Veličković, P., Liò, P. & Tang, J. Drug-drug adverse effect prediction with graph co-attention. arXiv preprint arXiv:1905.00534 (2019).
[22] Wang, H., Lian, D., Zhang, Y., Qin, L. & Lin, X. Gognn: Graph of graphs neural network for predicting structured entity interactions. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-20 (2020).
[23] Wang, Y., Min, Y., Chen, X. & Wu, J. Multi-view graph contrastive representation learning for drug-drug interaction prediction. In Proceedings of the Web Conference 2021, 2921-2933 (2021).
[24] Weininger, D. Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules. Journal of chemical information and computer sciences 28, 31-36 (1988).
[25] Landrum, G. et al. Rdkit: A software suite for cheminformatics, computational chemistry, and predictive modeling (2013).
[26] Dewulf, P., Stock, M. & De Baets, B. Cold-start problems in data-driven prediction of drug-drug interaction effects. Pharmaceuticals 14, 429 (2021).
[27] Mullard, A. 2020 fda drug approvals. Nature Reviews Drug Discovery 20, 85-91 (2021).
[28] Nguyen, L. S. et al. Cardiovascular toxicities associated with hydroxychloroquine and azithromycin: an analysis of the world health organization pharmacovigilance database. Circulation 142, 303-305 (2020).
| []
|
[
"Reflection and Rotation Symmetry Detection via Equivariant Learning",
"Reflection and Rotation Symmetry Detection via Equivariant Learning"
]
| [
"Ahyun Seo \nPohang University of Science and Technology (POSTECH)\nSouth Korea\n",
"Byungjin Kim \nPohang University of Science and Technology (POSTECH)\nSouth Korea\n",
"Suha Kwak \nPohang University of Science and Technology (POSTECH)\nSouth Korea\n",
"Minsu Cho \nPohang University of Science and Technology (POSTECH)\nSouth Korea\n"
]
| [
"Pohang University of Science and Technology (POSTECH)\nSouth Korea",
"Pohang University of Science and Technology (POSTECH)\nSouth Korea",
"Pohang University of Science and Technology (POSTECH)\nSouth Korea",
"Pohang University of Science and Technology (POSTECH)\nSouth Korea"
]
| []
| The inherent challenge of detecting symmetries stems from arbitrary orientations of symmetry patterns; a reflection symmetry mirrors itself against an axis with a specific orientation while a rotation symmetry matches its rotated copy with a specific orientation. Discovering such symmetry patterns from an image thus benefits from an equivariant feature representation, which varies consistently with reflection and rotation of the image. In this work, we introduce a group-equivariant convolutional network for symmetry detection, dubbed EquiSym, which leverages equivariant feature maps with respect to a dihedral group of reflection and rotation. The proposed network is built end-toend with dihedrally-equivariant layers and trained to output a spatial map for reflection axes or rotation centers. We also present a new dataset, DENse and DIverse symmetry (DENDI), which mitigates limitations of existing benchmarks for reflection and rotation symmetry detection. Experiments show that our method achieves the state of the arts in symmetry detection on LDRS and DENDI datasets. | 10.1109/cvpr52688.2022.00932 | [
"https://arxiv.org/pdf/2203.16787v1.pdf"
]
| 247,839,220 | 2203.16787 | 2309f2d997a2cf1f9b087e2f713c5d2f27020b10 |
Reflection and Rotation Symmetry Detection via Equivariant Learning
Ahyun Seo
Pohang University of Science and Technology (POSTECH)
South Korea
Byungjin Kim
Pohang University of Science and Technology (POSTECH)
South Korea
Suha Kwak
Pohang University of Science and Technology (POSTECH)
South Korea
Minsu Cho
Pohang University of Science and Technology (POSTECH)
South Korea
Reflection and Rotation Symmetry Detection via Equivariant Learning
The inherent challenge of detecting symmetries stems from arbitrary orientations of symmetry patterns; a reflection symmetry mirrors itself against an axis with a specific orientation while a rotation symmetry matches its rotated copy with a specific orientation. Discovering such symmetry patterns from an image thus benefits from an equivariant feature representation, which varies consistently with reflection and rotation of the image. In this work, we introduce a group-equivariant convolutional network for symmetry detection, dubbed EquiSym, which leverages equivariant feature maps with respect to a dihedral group of reflection and rotation. The proposed network is built end-toend with dihedrally-equivariant layers and trained to output a spatial map for reflection axes or rotation centers. We also present a new dataset, DENse and DIverse symmetry (DENDI), which mitigates limitations of existing benchmarks for reflection and rotation symmetry detection. Experiments show that our method achieves the state of the arts in symmetry detection on LDRS and DENDI datasets.
Introduction
From molecules to galaxies, from nature to man-made environments, symmetry is everywhere. Comprehensive perception and exploitation of real-world symmetry are instinctive abilities of humans and animals that have the potential to take intelligent systems to the next level. The focus of this paper is on the two most primitive symmetries, reflection and rotation symmetries. The goal of reflection and rotation symmetry detection is to find a reflection axis and a rotation center that remain invariant under reflection and rotation, respectively. Despite decades of effort [29,46], symmetry detection methods have been limited to well-defined symmetry patterns, and a remedy for real-world symmetry has yet to be thoroughly explored. The simplicity of the mathematical concept of symmetry encouraged early approaches to find keypoint pairs that satisfy pre-defined constraints for symmetry [1,3,32,37,39,43], leveraging hand-crafted local feature descriptors to detect sparse symmetry patterns. Recently, convolutional neural networks (CNNs) have been successfully applied to detect reflection symmetry and have surpassed the previous methods by learning score map regression [13] or symmetric matching [38] from data. The primary challenge in detecting symmetry patterns lies in the fact that a symmetry manifests itself with an arbitrary orientation and perceiving the pattern requires an analysis based on the orientation; a reflection symmetry mirrors itself against an axis with a specific orientation and a rotation symmetry matches its rotated copy with a specific orientation. Most methods for symmetry detection thus involve searching over the space of candidate orientations of symmetry patterns and developing a robust representation that is either invariant or equivariant with respect to rotation and reflection. Early approaches leverage an equivariant representation by extracting oriented keypoints and performing orientation normalization [1,32,37,39,43]. While this technique has proven effective for shallow gradient-based features, it cannot be applied to deep feature maps from standard neural networks, where rotation and reflection induce unpredictable variations in representation.
To address this challenge, we propose to learn a group-equivariant convolutional neural network for reflection and rotation symmetry detection, dubbed EquiSym. Recently, there has been active research on equivariant networks to explicitly incorporate equivariance for robust and sample-efficient representation learning [7,9,19,40,44,47]. Unlike standard neural networks, they induce predictable and structure-preserving representations with respect to geometric transformations, e.g., rotation or reflection, which is eminently suitable for symmetry detection. To detect consistent symmetry patterns over different orientations, we build a dihedrally-equivariant convolutional network [44], which is designed to be end-to-end equivariant to a group of reflections and rotations. The network effectively learns to output a score map of reflection axes for reflection symmetry or of rotation centers for rotation symmetry. We also present a new dataset, DENse and DIverse symmetry (DENDI), for reflection and rotation symmetry detection. DENDI contains real-world images with accurate and clean annotations for reflection and rotation symmetries and mitigates limitations of existing benchmarks [4,12,13,27,38]. First, the reflection symmetry axes are diverse in scale and orientation, while previous datasets mostly focus on dominant vertical or horizontal axes. Second, rotation centers are annotated for objects of polygon and ellipse shapes, not only circular objects. Third, the number of rotation folds for each rotation center is annotated, which is the first time in a large-scale dataset. Finally, the number of images is 1.7x and 2.0x larger than the second-largest reflection and rotation symmetry detection datasets, respectively.
The contribution of our work can be summarized as:
• We propose a novel group-equivariant symmetry detection network, EquiSym, which outputs group-equivariant score maps for reflection axes or rotation centers via end-to-end reflection- and rotation-equivariant feature maps.
• We present a new dataset, DENse and DIverse symmetry dataset (DENDI), containing images of reflection and rotation symmetries annotated in a broader range of typical real-world objects.
• We show the outstanding performance of EquiSym in reflection symmetry detection on SDRW [27], LDRS [38], and DENDI, and in rotation symmetry detection on DENDI.
Related Work
Equivariant deep learning
Equivariance is a desirable inductive bias that improves generalization and sample efficiency. Conventional convolution is equivariant to translations but not to other transformations such as rotations and reflections. Group equivariant CNNs [7,19] use group convolution to learn equivariant representations for symmetry groups. Marcos et al. [33] generate and propagate vector fields that maintain the maximum response along with the corresponding direction throughout the network. Worrall et al. [47] exploit circular harmonics to obtain rotational equivariance in a continuous domain. Cohen et al. [9] combine fixed base filters linearly, resulting in steerable filters with no interpolation artifacts. Equivariant CNNs on homogeneous spaces [5,6,8,45] have also been proposed. The aforementioned methods consider equivariance to specific transformations, until Weiler et al. [44] provide a general solution of the kernel space constraint for arbitrary group representations of the Euclidean group E(2). From the application perspective, Han et al. [16] and Gupta et al. [15] extract rotation-equivariant feature maps for oriented object detection and visual tracking, respectively. We leverage E(2)-equivariant CNNs [44] as a building block of our network to perceive consistent symmetry patterns across multiple orientations.
Symmetry detection
Symmetry detection deals with different kinds of symmetric patterns such as reflection axes [1,11,13,14,20,32,37,38,[41][42][43], rotation centers [13,20,22,23,32,37,42], and translation lattices [17,24,28,30,35,42,48].
Sparse prediction. Rotation and translation symmetries are often formulated as periodic signals and detected by autocorrelation [24,28] in the spatial domain, or by spectral density [22,23] and angular correlation [20] in the frequency domain. Meanwhile, there is a consistent need for an affine-invariant or equivariant feature descriptor for detecting symmetries, as matching local descriptors is the most common solution. Loy and Eklundh [32] use SIFT [31] descriptors and normalize each descriptor by its dominant orientation. Cho and Lee [3] also use SIFT [31] to match feature pairs and detect symmetry by discovering clusters of nearby matches. Contour and edge features [1,37,39,42,43] are also useful for determining the boundary of a symmetric object. Lee and Liu [23] propose to solve affine-skewed rotation symmetry group detection by rectifying the skewed patterns. In this paper, we tackle the task with a data-driven approach on our proposed dataset. Also, we use the dihedral group to interpret the symmetries, as in much of the symmetry detection literature [22,23].
Dense prediction. Recently proposed methods [11,13,38,41] predict pixel-wise symmetry scores. Fukushima and Kikuchi [11] build a neural network to extract edges from images and detect reflection symmetry. Tsogkas et al. [41] construct a bag of features using histogram, color, and texture for each pixel and adopt multiple instance learning when training the model. Gnutti et al. [14] adopt a two-stage approach, computing a symmetry score for each pixel using patch-wise correlation and then validating the obtained candidate axes using gradient direction and magnitude. Funk and Liu [13] are the first to adopt deep CNNs for detecting reflection and rotation symmetries. Seo et al. [38] propose a polar self-similarity descriptor for better rotation and reflection invariance. Their specially designed Polar Matching Convolution (PMC) performs region-wise feature matching to compute the symmetry score, but the model still relies heavily on standard CNNs. To discover consistent symmetry patterns w.r.t. the geometric transformations of rotation and reflection, we deploy group equivariant neural networks in our symmetry detection model.
Proposed Method
The symmetry patterns appearing in an image are invariant to 2D rigid transformations of the image; the symmetries detected in a transformed input image should be the same as the transformed detection results obtained from the original input image. In other words, reflection and rotation equivariance is crucial for a symmetry detection model. To this end, we propose a unified framework for detecting reflection and rotation symmetries via equivariant learning, EquiSym. The overall pipeline is briefly illustrated in Fig. 2. Given an input image, a shared encoder Enc and decoders Dec_ref and Dec_rot are applied for detecting reflection and rotation symmetries, respectively. We also perform auxiliary pixel-wise classification tasks, one for the orientation of the reflection axis and the other for the order of the rotation symmetry. The intermediate logits and the corresponding probabilities of the subtasks are denoted by S and P, respectively. The logits S are integrated with the sum of the foreground probabilities P to compute the final score map Y using a 1 × 1 group-equivariant convolution layer. The following sections cover the preliminaries and the proposed symmetry detection network.
Preliminaries
Group and equivariance. A group (G, ·) is a set G with a binary operation · under which its elements are closed; it has a unique identity element, an inverse for every element, and satisfies associativity. Equivariance of a map f : X → Y is formalized using a group G and two G-sets X and Y, where a G-set is a set S together with a group action of G on S. The map f is said to be equivariant iff
f(g · x) = g · f(x),    (1)
for all x ∈ X and all g ∈ G. In the 2D image domain, we focus on the Euclidean group E(2), the group of isometries of the plane R^2 consisting of translations, rotations, and reflections.
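The definition can be checked numerically on a toy example: an isotropic Gaussian blur commutes with 90-degree image rotations, i.e., it is equivariant to that cyclic subgroup. The sketch below assumes NumPy and SciPy are available and is independent of the proposed network.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

f = lambda img: gaussian_filter(img, sigma=2.0)   # isotropic filter (the map f)
g = lambda img: np.rot90(img)                     # group action: rotation by 90 degrees

x = np.random.rand(64, 64)
print(np.allclose(f(g(x)), g(f(x))))              # True: f(g . x) == g . f(x)
```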
E(2)-steerable feature field. Affine transformations of 2D or 3D coordinates are easily realized by matrix multiplications. Unlike for low-dimensional coordinate vectors, however, applying a reflection or rotation to high-dimensional feature vectors is non-trivial. The first step in building a steerable convolution is to define a steerable feature field f : R^2 → R^c that associates a feature vector f(x) ∈ R^c with each point x of a base plane. Given a group G, a group representation ρ : G → GL(R^c) specifies the transformation law for shuffling the c channels of each feature vector, where the general linear group GL is the set of c × c invertible matrices. Thus, applying a transformation to a feature map not only moves the feature vectors to new positions but also shuffles each vector via ρ(g), where g ∈ G. The group representations of the E(2) group are presented in [44].
E(2)-equivariant steerable convolution.
To preserve the transformation law of the steerable feature spaces in CNNs, equivariance under the group actions is required for each network layer. Convolutions with restricted G-steerable kernels [44] provide an equivariant linear mapping between steerable feature spaces. The input and output of the G-steerable layers are feature fields with group representations ρ_in(g) ∈ R^{c_in × c_in} and ρ_out(g) ∈ R^{c_out × c_out}, where g is a specified group element. A kernel k : R^2 → R^{c_out × c_in} that transforms under ρ_in and ρ_out becomes G-steerable when it satisfies the kernel constraint
k(gx) = ρ_out(g) k(x) ρ_in(g^{-1}),    (2)
for every g ∈ G and x ∈ R^2. E(2)-equivariant CNNs solve this constraint to obtain a basis of steerable kernels and compute the convolutional weights, which results in fewer learnable parameters compared to standard CNNs.
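Such constrained kernels need not be derived by hand; for example, the e2cnn library released with [44] provides them directly. The snippet below is a minimal sketch of a dihedral (D8) steerable convolution; the layer sizes are arbitrary and the exact API may differ across library versions.

```python
import torch
from e2cnn import gspaces, nn as enn

gspace = gspaces.FlipRot2dOnR2(N=8)                              # dihedral group D8
feat_in = enn.FieldType(gspace, 3 * [gspace.trivial_repr])       # RGB input as scalar fields
feat_out = enn.FieldType(gspace, 8 * [gspace.regular_repr])      # regular feature fields
conv = enn.R2Conv(feat_in, feat_out, kernel_size=5, padding=2)   # G-steerable convolution

x = enn.GeometricTensor(torch.randn(1, 3, 64, 64), feat_in)
y = conv(x)                                                      # equivariant feature field
```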
Symmetry detection network
Reflection and rotation equivariant modules. Since we aim to handle both reflection and rotation symmetry, we employ E(2)-equivariant CNNs over the dihedral group D_N, which contains N discrete rotations by angle multiples of 2π/N together with reflections. The encoder Enc consists of an E(2)-equivariant [16,44] ResNet [18] and an Atrous Spatial Pyramid Pooling (ASPP) [2] module. The decoder Dec is a 3-layer convolution module. The encoder and decoder designs follow [38], except that all convolution layers are replaced by E(2)-equivariant convolution layers. During the forward computation, the feature fields are transformed into the predefined fields of the group D_N. For the predictions S_ref and S_rot, the feature fields of the decoders Dec_ref and Dec_rot are transformed back to scalar fields.
Auxiliary classification. Instead of directly regressing the symmetry score maps [13, 38], we perform relevant subtasks that lead to the final prediction. The proposed auxiliary tasks are the pixel-wise classification of the orientation (angle) of the reflection axis and of the number of rotation folds. For simplicity, we assign the orientation to one of N_ref bins dividing 180 degrees. The ground-truth orientation S^ref_gt is then quantized to be one-hot. The auxiliary rotation label S^rot_gt is the order of the rotational symmetry, which is annotated with a positive integer in the case of discrete rotational symmetry. We allocate '0' to the continuous group since its ground-truth order is infinite. The size of the unique set of rotation orders in the dataset is denoted as N_rot. Meanwhile, we add a background class for pixels that are neither on an axis nor a center. Therefore, the classifiers predict scores over N_ref + 1 and N_rot + 1 channels, respectively. The classification logit S ∈ R^{H×W×(N+1)} is obtained by

S = Dec(Enc(I)).   (3)
The encoder Enc is shared, while the decoders Dec_ref and Dec_rot are task-specific. Note that we set the number of group orientations N equal to N_ref to further exploit the equivariance of the equivariant networks.
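As a hedged illustration of the two-branch design (the module names below are placeholders, not the exact classes from the released code), the forward pass can be organized as follows.

```python
import torch.nn as nn

class SymmetryHeads(nn.Module):
    """Shared encoder with task-specific decoders producing the auxiliary
    classification logits S_ref (N_ref+1 channels) and S_rot (N_rot+1 channels)."""

    def __init__(self, encoder, dec_ref, dec_rot):
        super().__init__()
        self.encoder = encoder  # E(2)-equivariant backbone + ASPP (assumed provided)
        self.dec_ref = dec_ref  # predicts axis-orientation classes + background
        self.dec_rot = dec_rot  # predicts rotation-fold classes + background

    def forward(self, image):
        feat = self.encoder(image)   # shared equivariant features
        s_ref = self.dec_ref(feat)   # (B, N_ref + 1, H, W)
        s_rot = self.dec_rot(feat)   # (B, N_rot + 1, H, W)
        return s_ref, s_rot
```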
Symmetry detection. The predicted score maps of the symmetry axes of the corresponding orientation are present in the N_ref foreground channels of the estimated orientation S^ref, whereas the background channel contains the background pixels. Similar to reflection, the N_rot foreground channels are the score maps of the rotation centers with that number of folds. We aggregate the sum of the foreground logits P ∈ R^{H×W×1} and the intermediate prediction S ∈ R^{H×W×(N+1)} to compute the final prediction Y ∈ R^{H×W} as

P_{h,w} = \sum_{k=1}^{N} \frac{\exp(S_{h,w,k})}{\sum_{c} \exp(S_{h,w,c})},   (4)

Y = conv_G([P || S]).   (5)
Note that || denotes the concatenation operation along the final channel dimension.
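A minimal PyTorch sketch of the aggregation in Eqs. (4)-(5) is shown below; the ordinary 1×1 convolution stands in for the group-equivariant conv_G, the background class is assumed to be the last channel, and the tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def aggregate(S, conv_1x1):
    """S: (B, N+1, H, W) classification logits; the last channel is assumed background."""
    probs = F.softmax(S, dim=1)                 # softmax over the N+1 classes
    P = probs[:, :-1].sum(dim=1, keepdim=True)  # Eq. (4): foreground probability, (B, 1, H, W)
    Y = conv_1x1(torch.cat([P, S], dim=1))      # Eq. (5): Y = conv([P || S]), (B, 1, H, W)
    return Y.squeeze(1)

# Example usage: N = 8 orientation bins, so S has 9 channels and conv takes 1 + 9 inputs.
conv_1x1 = nn.Conv2d(1 + 9, 1, kernel_size=1)
S = torch.randn(2, 9, 64, 64)
Y = aggregate(S, conv_1x1)                      # (2, 64, 64)
```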
Training objective
To train EquiSym, we optimize a combination of two loss terms for localization and classification. Following [38], we adopt the focal loss [25] as the localization loss L_loc for both reflection and rotation score maps. The classification loss L_cls for the intermediate predictions is the cross-entropy loss. The final objective L is expressed as
L_loc = L_focal(Y, Y_gt),   (6)
L_cls = L_ce(S, S_gt),   (7)
L = L_loc + L_cls.   (8)
The networks for reflection and rotation symmetry detection are denoted as EquiSym-ref and EquiSym-rot, respectively. To alleviate the class imbalance issue, the loss of the background class is weighted by w in L_cls. Note that the focal loss already alleviates the class imbalance issue for L_loc.
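The sketch below shows one way to combine the two terms in Eqs. (6)-(8). It is a simplified assumption of the actual implementation: torchvision's sigmoid_focal_loss is used as a stand-in for the focal loss of [25], and the background weight w follows the description above.

```python
import torch
import torch.nn.functional as F
from torchvision.ops import sigmoid_focal_loss

def equisym_loss(Y, Y_gt, S, S_gt, n_classes, w_bg=0.01, alpha=0.95, gamma=2.0):
    """Y, Y_gt: (B, H, W) final logits and binary ground truth.
    S: (B, C, H, W) auxiliary logits, S_gt: (B, H, W) class indices,
    where index n_classes is assumed to be the background class."""
    # Eq. (6): focal loss for the localization map.
    loss_loc = sigmoid_focal_loss(Y, Y_gt.float(), alpha=alpha, gamma=gamma,
                                  reduction="mean")
    # Eq. (7): cross-entropy for the auxiliary classification, background down-weighted.
    class_w = torch.ones(n_classes + 1, device=S.device)
    class_w[n_classes] = w_bg
    loss_cls = F.cross_entropy(S, S_gt, weight=class_w)
    # Eq. (8)
    return loss_loc + loss_cls
```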
New Symmetry Dataset (DENDI)
We present a new dataset for symmetry detection, named the DENse and DIverse symmetry dataset (DENDI), in the following.

Table 1. Sizes of the symmetry detection datasets (reflection / rotation; train/val/test): NYU [4] 239/-/- reflection only (low diversity); SymCOCO [12] 250/-/- and 250/-/- (high); DSW [12] 200/-/- rotation only (low); BPS* [13] 959/-/240 and 846/-/211 (high); LDRS [38]; and DENDI (ours).
Motivation
Limitations in existing datasets. The early reflection symmetry datasets [4, 27] contain a small number of images with few reflection axes and rotation centers. The recently proposed BPS [13] and LDRS [38] are large enough to train deep architectures, but their reflection axes still lack diversity in terms of length and orientation. For example, objects with multiple symmetry axes are often annotated with only a single dominant axis. Furthermore, no existing reflection symmetry dataset takes the continuous symmetry group into account, such as a circle with an infinite number of reflection symmetry axes. For rotation symmetry, the annotations of BPS [13] are limited to the rotation centers, while the pioneering unsupervised methods [22, 23] also tackle the rotation folds.
Proposed dataset. To address the concerns mentioned above, we present a new dataset for reflection and rotation symmetry detection that includes a wide range of geometries. We integrate 239 images of NYU [4] and 181 images of SDRW [27], and collect 2,080 images from the COCO [26] dataset. Both reflection and rotation annotations are labeled for each image, and we remove images without any labels. As a result, DENDI contains 2,493 and 2,079 images for the reflection and rotation splits, respectively. The sizes of the symmetry detection datasets are compared in Tab. 1. To add reflection axes with diverse lengths and orientations, we annotate objects with common shapes such as circles, ellipses, and polygons, as well as part-level symmetries. Also, the annotators are encouraged to exhaustively label the symmetries of each object, including non-dominant ones, e.g., the diagonals of a square. For reflection symmetry of a continuous symmetry group, we annotate with ellipse-shaped masks to represent an infinite number of line axes. For rotation symmetry, we additionally collect the number of rotation folds for each rotation center. As a result, the annotations in DENDI are denser and more diverse compared to those in the existing datasets.
Annotation
Reflection symmetry. A reflection symmetry axis is defined as a line formed by two points, following [4, 12, 13, 27, 38]. In contrast to the existing datasets, we also account for circular objects. A circular object, which is equivalent to a filled circle, has an infinite number of reflection symmetry axes through its center. We propose to annotate circular objects with 5 connected points, resembling the shape of the Arabic numeral '4'. We draw a '4'-shaped annotation from the circle's center towards the circle's boundary in the up, down, left, and right directions. The annotation rules and example annotations of reflection symmetry for generic shapes are shown in Fig. 3 (a) and Fig. 4 (a), respectively.
Rotation symmetry. We collect the rotation center coordinate, the object's boundary, and the number of folds (N) for each object. A circular object belongs to a continuous rotation group with infinite folds; thus, we set N to 0 for simplicity. We categorize the objects into ellipses and polygons based on their shape. Circular or elliptical objects are marked with a '4'-shaped annotation. We annotate polygons of V vertices with (V+1) consecutive points. Starting from the center of the object, we take the vertex closest to 12 o'clock as the 2nd point and link the vertices of the convex polygon counterclockwise. Note that the number of vertices (V) and the number of folds (N) do not always match. The annotation rules and example annotations of rotation symmetry for generic shapes are shown in Fig. 3 (b) and Fig. 4 (b), respectively.
Statistics
Reflection symmetry. Histograms of the scale and orientation of the reflection axes are presented in Fig. 5 (a) and (b). For scale, we measure the length of each line and normalize it by the length of the image diagonal. With the two points of each line annotation, we can also compute its orientation (tangent). The y-axis of Fig. 5 (a) and (b) is the ratio of the number of axes over the total number of axes. The number of axes increases as the length of each axis decreases, reflecting the characteristics of real-world environments. In addition, this suggests that part-level symmetry is densely annotated compared to other datasets. Note that our dataset ranks first at all orientations except for the three orientations closest to the vertical direction. Although LDRS [38] is also based on COCO [26], the distribution of axis orientations is more diverse in our case, since we specifically request the annotators not to omit the non-dominant axes.
Rotation symmetry. Histograms of the rotation folds and of the number of rotation centers are illustrated in Fig. 5 (c) and (d). The three most common folds in the dataset are 2, 0 (continuous), and 4. This result is expected, as there are many rectangles, circles, and squares in the images. Note that the dataset contains a notable number of objects with fold 8, which are mostly 'STOP' signs on the road. The complexity of our rotation symmetry dataset is high, as shown in Fig. 5 (d): even excluding a few outliers, many rotation centers are marked per image.
Experiments
Experimental settings
Datasets. We use SDRW [27], LDRS [38], and DENDI to evaluate the reflection symmetry detection model. We follow the training and evaluation settings of PMCNet [38] in Tab. a.1, with the additional use of NYU [4] and synthesized images. For rotation, we only use DENDI, which contains the images of the SDRW [27] rotation dataset.
Evaluation. To evaluate EquiSym, we use the F1-score computed from precision and recall as 2×prec×rec/(prec+rec). A convention [13, 38, 41] for scoring the output map is morphological thinning [34] followed by an off-the-shelf pixel-matching algorithm that compares against the ground-truth lines, which are also pixel-width. In contrast to the existing datasets, we take circular objects into account, which are annotated with filled circles. As DENDI contains annotations of filled circles, the thinning operation shrinks them to a single-pixel dot. Therefore, it is inevitable to come up with a new way to evaluate the predicted score maps. We dilate the ground-truth score maps of reflection and rotation symmetry with a maximum distance of 5 pixels, following [13], where the ground-truth dots are enlarged to 5-pixel-radius circles around the rotation centers. The predicted score map is also dilated, so that evaluating the ground truth against itself yields 1. The true positives are then computed by pixel-wise comparisons.
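A hedged sketch of this evaluation is given below. Both maps are dilated before pixel-wise matching; the 5-pixel radius and binary threshold follow the description above, while using max pooling (a square structuring element) as the dilation operator is an implementation assumption.

```python
import torch
import torch.nn.functional as F

def dilate(mask, radius=5):
    """Binary dilation of a (B, 1, H, W) mask via max pooling."""
    k = 2 * radius + 1
    return F.max_pool2d(mask.float(), kernel_size=k, stride=1, padding=radius)

def f1_score(pred, gt, thr=0.5, radius=5, eps=1e-8):
    """pred, gt: (B, 1, H, W); pred holds scores in [0, 1], gt is binary."""
    pred_bin = (pred > thr).float()
    tp_p = (pred_bin * dilate(gt, radius)).sum()   # predicted pixels near ground truth
    tp_r = (dilate(pred_bin, radius) * gt).sum()   # ground-truth pixels near a prediction
    precision = tp_p / (pred_bin.sum() + eps)
    recall = tp_r / (gt.sum() + eps)
    return 2 * precision * recall / (precision + recall + eps)
```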
Implementation details. As a backbone network, we adopt the ReResNet implementation of ReDet [16] of depth 50, which is based on PyTorch [36] and e2cnn [44]. The number of layers and the structure are the same as in the vanilla ResNet [18]. We pretrain the ReResNet50 for image classification on ImageNet-1K [10], following the procedure in [16]. For the symmetry group used to initialize the equivariant networks, we use a dihedral group with eight orientations (D_8). To provide multi-scale context, we also deploy the Atrous Spatial Pyramid Pooling (ASPP) module [2], which we re-implement by replacing all vanilla convolutions with E(2)-equivariant convolutions [44]. The number of classes N_ref is 8 and N_rot is 21. We train EquiSym for 100 epochs with an initial learning rate of 0.001 using the Adam [21] optimizer with a batch size of 32. For details, refer to the supplementary material.
Ablation studies
Ablation on the equivariant convolution. We study the effectiveness of the group-equivariant convolution in Tab. 2. Replacing the vanilla convolutions with the group-equivariant ones, the F1 scores of 55.1 and 17.7 increase to 63.1 and 21.2 for EquiSym-ref and EquiSym-rot, respectively.

Ablation on the auxiliary classification. To enhance the intermediate representation, we perform a relevant subtask for each branch of symmetry detection and compare them in Tab. 2. Without extra labels, the reflection-only model achieves an F1 score of 64.5, which is greater than the 63.1 obtained by training solely with the final task. The orientation estimation also requires rotation equivariance, which enhances the intermediate features. Rotation symmetry detection, on the other hand, requires extra annotation for the auxiliary task, as the original labels are a set of dots. The rotation-invariant subtask of classifying the rotation folds (N) compresses the information of the intermediate feature, increasing the F1 score from 21.2 to 22.5.

Ablation on the joint training. We investigate the effect of joint training of the reflection and rotation symmetries in Tab. 2. When training EquiSym-joint, the loss L is computed for both reflection and rotation symmetries. The joint symmetry detection network trained only with the final task achieves comparable F1 scores of 62.2 and 22.1 for reflection and rotation symmetry, respectively. However, the auxiliary task does not increase the accuracy of reflection symmetry in the joint training scenario. One probable explanation is that only the orientation estimation of the reflection symmetry axis requires rotation equivariance, while the fold classification only necessitates rotation invariance, resulting in network imbalance.
Comparison with the state-of-the-art methods
We compare EquiSym with the state-of-the-art methods in Tab. 3 and Tab. a.1. For both reflection and rotation symmetries, our proposed EquiSym achieves the state of the art, showing the effectiveness of the equivariant networks and the auxiliary classification. While PMCNet [38] is retrained on DENDI for a fair comparison, SymResNet [13] is compared using the weights provided by the authors, as fine-tuning SymResNet [13] degraded its performance. We follow the configurations of PMCNet [38] for the experiments on SDRW [27] and LDRS [38] in Tab. a.1. The training images consist of real images from SDRW, LDRS, and NYU [4] and synthetic images generated as in [38]. EquiSym-ref achieves the state of the art on LDRS, while the results on SDRW are still comparable. The additional use of synthetic images, as proposed in [38], is not helpful here. One possible reason is the imbalance in data distribution across splits. To mitigate this issue, we construct a new split, denoted as mixed, by merging all images and then randomly splitting them into train/val/test with a ratio of 4:1:1. EquiSym-ref outperforms PMCNet in that scenario, as shown in Tab. a.1. All the experiments in Tab. a.1 are evaluated with the legacy scheme.

Table 4. Comparison of the reflection symmetry detection methods on the LDRS [38] and SDRW [27]. Note that the real dataset consists of SDRW, LDRS, and NYU [4].
Qualitative results
The qualitative results of EquiSym-ref, PMCNet [38], and SymResNet [13] on the DENDI-ref test split are shown in Fig. 6. EquiSym-ref produces denser reflection symmetry score maps than the other methods, including non-dominant axes such as diagonals. Furthermore, EquiSym-ref predicts the masks of circular objects accurately, even for challenging samples where both lines and circles exist in the ground truth. We compare EquiSym-rot with SymResNet on the DENDI-rot test split in Fig. 7. EquiSym-rot is robust to scale and to the number of rotation centers. It detects the symmetry of polygons as well as circular objects in DENDI-rot, whereas SymResNet mainly detects circular objects.
Limitations
While EquiSym can be jointly trained to produce predictions comparable to those of the single-branch EquiSym, it has more room for improvement. In particular, the design of the auxiliary task of EquiSym-rot could be explored further to enhance the accuracy of reflection symmetry detection in the joint setting.
Conclusion
In this paper, we have proposed a novel symmetry detection framework, EquiSym, which uses equivariant learning to obtain group-equivariant and invariant scores for both reflection and rotation symmetries. In addition, we have introduced a new DENse and DIverse symmetry dataset (DENDI) for reflection and rotation symmetry. The proposed EquiSym achieves the state of the art on the LDRS and DENDI datasets.

Acknowledgements. This work was supported by Samsung Advanced Institute of Technology (SAIT) and also by the NRF grant (NRF-2021R1A2C3012728) and the IITP grant (No.2021-0-02068: AI Innovation Hub, No.2019-0-01906: Artificial Intelligence Graduate School Program at POSTECH) funded by the Korea government (MSIT). We would like to thank Yunseon Choi for her contribution to DENDI.

Appendix

A.1. EquiSym

The details that are omitted in the main paper are covered in this section. We show the consistency of the evaluation schemes. The implementation details are also given in the following.

A.1.1. Evaluation scheme
In the main paper, we propose to use a modified evaluation scheme that blurs the ground truth rather than thinning the prediction. The primary reason is that the thinning process transforms a circular mask prediction into a single dot. The pixel-matching algorithm determines whether the predicted lines are close enough to the ground-truth lines within a threshold, which becomes equivalent to blurring the ground truth itself with a radius equal to the threshold. In practice, we construct a filter with an 11 × 11 kernel, where the weights are set to 1 for a circle of diameter 11 and 0 otherwise. We convolve the ground-truth heatmaps of both symmetries with this filter, so that each heatmap is dilated up to a maximum distance of 5 pixels. The true positives are then computed by pixel-wise comparisons. We re-evaluate the experiments of Tab. 3 of the main paper in Tab. a.1. Note that one experiment from PMCNet [38] is excluded since the trained model is not accessible. As shown in Tab. a.1, the rankings produced by the two evaluation schemes are consistent, while the latter is significantly faster.
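The circular dilation described above can be implemented with a fixed convolution. The sketch below is our reading of that procedure; the final thresholding step that re-binarizes the result is an assumption.

```python
import torch
import torch.nn.functional as F

def make_disk_kernel(diameter=11):
    """Binary disk of the given diameter inside a diameter x diameter kernel."""
    r = (diameter - 1) / 2.0
    ys, xs = torch.meshgrid(torch.arange(diameter), torch.arange(diameter), indexing="ij")
    disk = ((ys - r) ** 2 + (xs - r) ** 2 <= r ** 2).float()
    return disk.view(1, 1, diameter, diameter)

def dilate_heatmap(heatmap, diameter=11):
    """heatmap: (B, 1, H, W) binary ground-truth map; returns the dilated binary map."""
    kernel = make_disk_kernel(diameter).to(heatmap.device)
    out = F.conv2d(heatmap.float(), kernel, padding=diameter // 2)
    return (out > 0).float()
```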
A.1.2. Implementation details
Construction of the orientation labels. EquiSym utilizes intermediate tasks to increase the accuracy of symmetry detection. In the case of reflection symmetry, the intermediate labels for the orientations of the reflection axes are obtained for free. The angle of a straight-line reflection symmetry axis can be expressed as a linear combination of the closest one or two angles among the 8 predetermined angles, which serves as an initial soft label. On the other hand, a circle-shaped symmetry axis receives an orientation label evenly divided over the 8 segments, determined by the orientation of the line crossing the center. The orientation label is then quantized for training.
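A sketch of the label construction for straight axes, under the stated assumption of N_ref = 8 bins over 180 degrees; interpolating between the two nearest bins and quantizing via argmax follow the description above and are our interpretation, not the exact released code.

```python
import numpy as np

def orientation_soft_label(theta_deg, n_bins=8):
    """Distribute an axis angle theta (degrees, modulo 180) over the two nearest
    of n_bins evenly spaced orientation bins, then quantize to a hard label."""
    bin_width = 180.0 / n_bins
    t = theta_deg % 180.0
    lo = int(np.floor(t / bin_width)) % n_bins
    hi = (lo + 1) % n_bins
    w_hi = (t - lo * bin_width) / bin_width      # linear weight toward the next bin
    soft = np.zeros(n_bins, dtype=np.float32)
    soft[lo], soft[hi] = 1.0 - w_hi, w_hi
    hard = int(soft.argmax())                    # quantized label used for training
    return soft, hard
```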
ImageNet pretrain. To be consistent with experiments on the vanilla ResNet [18] pre-trained on ImageNet [10], we pretrain the ReResNet50 on ImageNet-1K for the image classification. While ReResNet50 from ReDet is implemented with C 8 group, we use D 8 group instead. Furthermore, we adjust the stride and dilation of each layer in ReResNet50 to obtain a higher resolution feature map with a larger receptive field than the original one, which is a common procedure in the semantic segmentation [2]. The learning rate starts at 0.1 and decreases by 0.1 every 30 epochs, for a total of 100 epochs. We use a batch size of 512. The pretrained ReResNet50 achieves 69.06% top-1 and 87.25% top-5 accuracy on the ImageNet val.
Implementation details. Following [38], the hyperparameters α and β of the focal loss are set to 0.95 and 2, respectively. For training, we resize input images so that the maximum of the width and height is 417. The background weight w of L_cls is set to 0.01 and 0.001 for reflection and rotation symmetry, respectively. We use the PyTorch [36] and e2cnn [44] frameworks to build our model.
A.2. DENDI
The details of the data annotation for the DENDI dataset are described in this section. To identify symmetry, we disregard texture and focus only on the shape of the object. Partial occlusion of the boundary of a symmetric object is allowed up to a fourth of the object boundary. For both reflection and rotation symmetry, we exhaustively mark all symmetries in an image, including those of parts. This policy ensures that DENDI contains dense annotations. We present some examples in Fig. a.1 (d) and Fig. a.2 (d).
A.2.1. Reflection symmetry
A reflection symmetry axis is drawn as a line, following the previous datasets [4, 12, 13, 27, 38]. Notable examples are shown in Fig. a.1 (a). We annotate all reflection symmetry axes in an object, including non-dominant ones. Different from the existing datasets, we account for circular objects, which have an infinite number of reflection symmetry axes. Instead of an infinite number of symmetry axes, we represent a circular object with a '4'-shaped label consisting of 5 points, which is then converted to a circular mask, as shown in Fig. a.1 (b). Note that a semantically circular object that appears to be an ellipse due to the viewpoint is also annotated with a '4'-shaped label, as shown in the first two rows of Fig. a.1 (c). Likewise, a regular polygon skewed by perspective has the same reflection axes as a non-skewed regular polygon, as shown in the last two rows of Fig. a.1 (c); e.g., a regular STOP sign and a skewed STOP sign, which are both semantically regular octagons, have eight reflection symmetry axes. Furthermore, we annotate symmetry in characters such as A, B, C, D, E, H, I, K, M, O, T, U, V, W, X, and Y, as well as the numbers 0, 1, 2, 5, and 8, except for those that are too thin or indistinct. We also annotate symmetry in the D-shaped part of characters such as P and R. If multiple symmetry axes overlap, only the longest one is saved.
A.2.2. Rotation symmetry
For each object with rotation symmetry, we collect the coordinate of the rotation center, the boundary of the object, and the number of rotation folds (N). We again employ the '4'-shaped labels to denote circular or elliptical objects, as shown in Fig. a.2 (a). A semantically circular object also features an infinite number of rotation folds, indicated as 0 for simplicity, in addition to the '4'-shaped label. Similar to reflection symmetry, a semantically circular object with an elliptical shape due to the viewpoint has a rotation fold of 2, e.g., the third and fourth rows in Fig. a.2 (a). The minor axis takes precedence over the major axis when drawing '4'-shaped labels for elliptical objects. The rotation fold of a circular object, in particular, can be greater than 2 if the object contains cyclic symmetry. In the case of a non-circular object with rotation symmetry, such as those in Fig. a.2 (c), we draw a convex polygon starting from the center of the object and following the convex vertices. The vertex nearest to 12 o'clock takes priority among the convex vertices. Likewise, as in the reflection symmetry dataset, symmetry in characters such as H, I, N, O, S, X, and Z, as well as the numbers 0 and 8, is taken into account.
Figure 1. Symmetry detection examples of our method EquiSym: (a) an input image, (b) a score map of reflection symmetry axes, and (c) a score map of rotation symmetry centers. Best viewed in color.
Figure 2. Illustration of the proposed symmetry detection network, EquiSym. After an input image I passes the group-equivariant encoder Enc, the group-equivariant decoders Dec_ref and Dec_rot produce intermediate predictions S^ref and S^rot for reflection and rotation, respectively. The auxiliary tasks for reflection and rotation symmetry are the orientation of the reflection axis and the order (N) of the rotation fold. The foreground logits are pooled into P^ref and P^rot and stacked with the scores S^ref and S^rot, respectively. The final scores Y^ref and Y^rot for the reflection axis and the rotation center are predicted using a group-equivariant 1 × 1 convolution. For details, see Sec. 3.
Figure 3. Illustration of the generic shapes and their annotations. (a) and (b) indicate the annotation rules for reflection and rotation symmetry, respectively. For details, see Sec.
Figure 4. The images and labels of objects with generic shapes. (a) and (b) indicate the annotations of reflection and rotation symmetry, respectively. Best viewed in the electronic version.
Figure 5. Statistical analysis of DENDI. (a) and (b) represent the reflection symmetry dataset, while (c) and (d) represent the rotation symmetry dataset. Specifically, (a) and (b) are histograms of the scale and orientation of the reflection axes, and (c) and (d) are histograms of the rotation folds and the number of rotation centers.
Figure 6. Qualitative results of reflection symmetry detection on the DENDI-ref test split.
Figure 7. Qualitative results of rotation symmetry detection on the DENDI-rot test split.
Figure a.1. Illustration of examples in the reflection symmetry dataset. Samples with (a) multiple symmetry axes, (b) circular objects, (c) skewed objects, and (d) dense symmetry axes are shown. Green lines indicate the reflection axes and yellow lines indicate the '4'-shaped reflection-circle annotations. The reflection-circle annotations are then converted to masks.
Figure a.2. Illustration of examples in the rotation symmetry dataset. Samples with (a) circular objects, (b) circular objects with folds larger than 2, (c) polygons, and (d) dense symmetries are shown. Green lines indicate the circular annotations and yellow polygons indicate the polygon-type annotations. Only the center coordinates are used for evaluation.
Table 3. Comparison with the state-of-the-art methods on DENDI.

Table a.1. Comparison of the reflection symmetry detection methods on the LDRS [38] and SDRW [27]. Note that the real dataset consists of SDRW, LDRS, and NYU [4], and synthetic images are generated as in [38]. Values in the original column order (method; train dataset — mixed, real, synth; test dataset — SDRW, LDRS): PMCNet [38]: 61.6, 34.8, 61.2, 68.8, 37.3; EquiSym-ref: 67.4, 40.9, 71.4, 67.1, 39.4.
References

[1] Ibragim R. Atadjanov and Seungkyu Lee. Reflection symmetry detection via appearance of structure descriptor. In Eur. Conf. Comput. Vis., pages 3-18. Springer, 2016.
[2] Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. CoRR, abs/1706.05587, 2017.
[3] Minsu Cho and Kyoung Mu Lee. Bilateral symmetry detection via symmetry-growing. In Brit. Mach. Vis. Conf., pages 1-11. Citeseer, 2009.
[4] M. Cicconet, V. Birodkar, M. Lund, M. Werman, and D. Geiger. A convolutional approach to reflection symmetry. http://arxiv.org/abs/1609.05257, 2016.
[5] Taco Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant CNNs on homogeneous spaces. arXiv preprint arXiv:1811.02017, 2018.
[6] Taco Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional networks and the icosahedral CNN. In International Conference on Machine Learning, pages 1321-1330. PMLR, 2019.
[7] Taco Cohen and Max Welling. Group equivariant convolutional networks. In International Conference on Machine Learning, pages 2990-2999. PMLR, 2016.
[8] Taco S. Cohen, Mario Geiger, and Maurice Weiler. Intertwiners between induced representations (with applications to the theory of equivariant neural networks). arXiv preprint arXiv:1803.10743, 2018.
[9] Taco S. Cohen and Max Welling. Steerable CNNs. arXiv preprint arXiv:1612.08498, 2016.
[10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248-255. IEEE, 2009.
[11] Kunihiko Fukushima and Masayuki Kikuchi. Symmetry axis extraction by a neural network. Neurocomputing, 69(16):1827-1836, 2006.
[12] Christopher Funk, Seungkyu Lee, Martin R. Oswald, Stavros Tsogkas, Wei Shen, Andrea Cohen, Sven Dickinson, and Yanxi Liu. 2017 ICCV challenge: Detecting symmetry in the wild. In Int. Conf. Comput. Vis. Worksh., pages 1692-1701, 2017.
[13] Christopher Funk and Yanxi Liu. Beyond planar symmetry: Modeling human perception of reflection and rotation symmetries in the wild. In Int. Conf. Comput. Vis., pages 793-803, 2017.
[14] Alessandro Gnutti, Fabrizio Guerrini, and Riccardo Leonardi. Combining appearance and gradient information for image symmetry detection. IEEE Trans. Image Process., 2021.
[15] Deepak K. Gupta, Devanshu Arya, and Efstratios Gavves. Rotation equivariant siamese networks for tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12362-12371, 2021.
[16] Jiaming Han, Jian Ding, Nan Xue, and Gui-Song Xia. ReDet: A rotation-equivariant detector for aerial object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2786-2795, 2021.
[17] James Hays, Marius Leordeanu, Alexei A. Efros, and Yanxi Liu. Discovering texture regularity as a higher-order correspondence problem. In Eur. Conf. Comput. Vis., pages 522-535. Springer, 2006.
[18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016.
[19] Emiel Hoogeboom, Jorn W.T. Peters, Taco S. Cohen, and Max Welling. HexaConv. arXiv preprint arXiv:1803.02108, 2018.
[20] Yosi Keller and Yoel Shkolnisky. A signal processing approach to symmetry detection. IEEE Trans. Image Process., 15(8):2198-2207, 2006.
[21] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Int. Conf. Learn. Represent., 2015.
[22] Seungkyu Lee, Robert T. Collins, and Yanxi Liu. Rotation symmetry group detection via frequency analysis of frieze-expansions. In IEEE Conf. Comput. Vis. Pattern Recog., pages 1-8. IEEE, 2008.
[23] Seungkyu Lee and Yanxi Liu. Skewed rotation symmetry group detection. IEEE Trans. Pattern Anal. Mach. Intell., 32(9):1659-1672, 2009.
[24] Hsin-Chih Lin, Ling-Ling Wang, and Shi-Nine Yang. Extracting periodicity of a regular texture based on autocorrelation functions. Pattern Recognition Letters, 18(5):433-443, 1997.
[25] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 2980-2988, 2017.
[26] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer, 2014.
[27] Jingchen Liu, George Slota, Gang Zheng, Zhaohui Wu, Minwoo Park, Seungkyu Lee, Ingmar Rauschert, and Yanxi Liu. Symmetry detection from real-world images competition 2013: Summary and results. In IEEE Conf. Comput. Vis. Pattern Recog. Worksh., pages 200-205, 2013.
[28] Yanxi Liu, Robert T. Collins, and Yanghai Tsin. A computational model for periodic pattern perception based on frieze and wallpaper groups. IEEE Trans. Pattern Anal. Mach. Intell., 26(3):354-371, 2004.
[29] Yanxi Liu, Hagit Hel-Or, and Craig S. Kaplan. Computational symmetry in computer vision and computer graphics. Now Publishers Inc, 2010.
[30] Yanxi Liu, Wen-Chieh Lin, and James Hays. Near-regular texture analysis and manipulation. ACM Transactions on Graphics (TOG), 23(3):368-376, 2004.
[31] David G. Lowe. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis., 60(2):91-110, 2004.
[32] Gareth Loy and Jan-Olof Eklundh. Detecting symmetry and symmetric constellations of features. In Eur. Conf. Comput. Vis., pages 508-521. Springer, 2006.
[33] Diego Marcos, Michele Volpi, Nikos Komodakis, and Devis Tuia. Rotation equivariant vector field networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 5048-5057, 2017.
[34] David R. Martin, Charless C. Fowlkes, and Jitendra Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. Pattern Anal. Mach. Intell., 26(5):530-549, 2004.
[35] Minwoo Park, Kyle Brocklehurst, Robert T. Collins, and Yanxi Liu. Deformed lattice detection in real-world images using mean-shift belief propagation. IEEE Trans. Pattern Anal. Mach. Intell., 31(10):1804-1816, 2009.
[36] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc., 2019.
[37] V. Shiv Naga Prasad and Larry S. Davis. Detecting rotational symmetries. In Int. Conf. Comput. Vis., volume 2, pages 954-961. IEEE, 2005.
[38] Ahyun Seo, Woohyeon Shim, and Minsu Cho. Learning to discover reflection symmetry via polar matching convolution. In Int. Conf. Comput. Vis., 2021.
[39] Dinggang Shen, Horace H.S. Ip, and Eam Khwang Teoh. Robust detection of skewed symmetries by combining local and semi-local affine invariants. Pattern Recognition, 34(7):1417-1428, 2001.
[40] Ivan Sosnovik, Michał Szmaja, and Arnold Smeulders. Scale-equivariant steerable networks. arXiv preprint arXiv:1910.11093, 2019.
[41] Stavros Tsogkas and Iasonas Kokkinos. Learning-based symmetry detection in natural images. In Eur. Conf. Comput. Vis., pages 41-54. Springer, 2012.
[42] Zhaozhong Wang, Lianrui Fu, and Y.F. Li. Unified detection of skewed rotation, reflection and translation symmetries from affine invariant contour features. Pattern Recognition, 47(4):1764-1776, 2014.
[43] Zhaozhong Wang, Zesheng Tang, and Xiao Zhang. Reflection symmetry detection using locally affine invariant edge correspondence. IEEE Trans. Image Process., 24(4):1297-1301, 2015.
[44] Maurice Weiler and Gabriele Cesa. General E(2)-equivariant steerable CNNs. In Conference on Neural Information Processing Systems (NeurIPS), 2019.
[45] Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco Cohen. 3D steerable CNNs: Learning rotationally equivariant features in volumetric data. arXiv preprint arXiv:1807.02547, 2018.
[46] Hermann Weyl. Symmetry. Princeton University Press, Princeton, New Jersey, 1952.
[47] Daniel E. Worrall, Stephan J. Garbin, Daniyar Turmukhambetov, and Gabriel J. Brostow. Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5028-5037, 2017.
[48] Peng Zhao and Long Quan. Translation symmetry detection in a fronto-parallel view. In IEEE Conf. Comput. Vis. Pattern Recog., pages 1009-1016. IEEE, 2011.
| []
|
[
"On the Paradox of Certified Training",
"On the Paradox of Certified Training"
]
| [
"Nikola Jovanović [email protected] \nDepartment of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nETH Zurich\nETH Zurich\nETH Zurich\nETH Zurich\n\n",
"Mislav Balunović [email protected] \nDepartment of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nETH Zurich\nETH Zurich\nETH Zurich\nETH Zurich\n\n",
"Maximilian Baader [email protected] \nDepartment of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nETH Zurich\nETH Zurich\nETH Zurich\nETH Zurich\n\n",
"Martin Vechev [email protected] \nDepartment of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nETH Zurich\nETH Zurich\nETH Zurich\nETH Zurich\n\n"
]
| [
"Department of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nETH Zurich\nETH Zurich\nETH Zurich\nETH Zurich\n",
"Department of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nETH Zurich\nETH Zurich\nETH Zurich\nETH Zurich\n",
"Department of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nETH Zurich\nETH Zurich\nETH Zurich\nETH Zurich\n",
"Department of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nDepartment of Computer Science\nETH Zurich\nETH Zurich\nETH Zurich\nETH Zurich\n"
]
| []
| Certified defenses based on convex relaxations are an established technique for training provably robust models. The key component is the choice of relaxation, varying from simple intervals to tight polyhedra. Counterintuitively, loose interval-based training often leads to higher certified robustness than what can be achieved with tighter relaxations, which is a well-known but poorly understood paradox. While recent works introduced various improvements aiming to circumvent this issue in practice, the fundamental problem of training models with high certified robustness remains unsolved. In this work, we investigate the underlying reasons behind the paradox and identify two key properties of relaxations, beyond tightness, that impact certified training dynamics: continuity and sensitivity. Our extensive experimental evaluation with a number of popular convex relaxations provides strong evidence that these factors can explain the drop in certified robustness observed for tighter relaxations. We also systematically explore modifications of existing relaxations and discover that improving unfavorable properties is challenging, as such attempts often harm other properties, revealing a complex tradeoff. Our findings represent an important first step towards understanding the intricate optimization challenges involved in certified training. arXiv:2102.06700v3 [cs.LG] 12 Oct 2022 José Mario Martínez. Minimization of discontinuous cost functions by smoothing. Acta Applicandae Mathematica, 71(3):245-260, 2002. Matthew Mirman, Timon Gehr, and Martin Vechev. Differentiable abstract interpretation for provably robust neural networks. In Vechev. Prima: Precise and general neural network certification via multi-neuron convex relaxations, 2021.Panos M Pardalos and Stephen A Vavasis. Quadratic programming with one negative eigenvalue is np-hard. | null | [
"https://export.arxiv.org/pdf/2102.06700v3.pdf"
]
| 252,873,705 | 2102.06700 | 240cf80af61a1ac4969d593c8be1a3381016939c |
On the Paradox of Certified Training

Nikola Jovanović ([email protected]), Mislav Balunović ([email protected]), Maximilian Baader ([email protected]), Martin Vechev ([email protected])
Department of Computer Science, ETH Zurich
* Equal contribution
Certified defenses based on convex relaxations are an established technique for training provably robust models. The key component is the choice of relaxation, varying from simple intervals to tight polyhedra. Counterintuitively, loose interval-based training often leads to higher certified robustness than what can be achieved with tighter relaxations, which is a well-known but poorly understood paradox. While recent works introduced various improvements aiming to circumvent this issue in practice, the fundamental problem of training models with high certified robustness remains unsolved. In this work, we investigate the underlying reasons behind the paradox and identify two key properties of relaxations, beyond tightness, that impact certified training dynamics: continuity and sensitivity. Our extensive experimental evaluation with a number of popular convex relaxations provides strong evidence that these factors can explain the drop in certified robustness observed for tighter relaxations. We also systematically explore modifications of existing relaxations and discover that improving unfavorable properties is challenging, as such attempts often harm other properties, revealing a complex tradeoff. Our findings represent an important first step towards understanding the intricate optimization challenges involved in certified training.
Introduction
Recent years have witnessed an increased interest in developing methods for efficiently training provably robust machine learning models. Several core techniques are based on convex relaxations (e.g., CROWN , hBox (Mirman et al., 2018)), which provide robustness guarantees by approximating the effect of network layers on the input specification. A key property of a convex relaxation is its tightness, indicating how close it is to the non-convex shape it overapproximates.
The Paradox of Certified Training. As tighter relaxations are more desirable for certification (Salman et al., 2019b;Singh et al., 2019b), a natural belief is that tightness is also favorable for relaxations when used in training as part of a certified defense. Surprisingly, several prior works (Gowal et al., 2018;Balunovic & Vechev, 2020;Lyu et al., 2021;Lee et al., 2021) have noticed that training with IBP (Gowal et al., 2018), which is a loose relaxation that performs poorly for certification of undefended models, allows for higher certified robustness compared to training with tighter relaxations. We illustrate this on a real model in Table 1. We easily observe a paradox: tighter relaxations obtain worse results. More specifically, none of the tighter relaxations can consistently outperform the loose IBP. The paradox has strongly influenced the field of certified training. While some hypothesize that it occurs due to tighter relaxations introducing difficult optimization problems (Balunovic & Vechev, 2020;Lee et al., 2021), the underlying reasons for this difficulty remain unclear. Identifying these reasons and understanding how relaxations affect training is important yet very challenging as: (i) convex relaxations require more complex (symbolic) computations than those of standard (concrete) forward passes, and thus cannot directly benefit from existing convergence results, and (ii) relaxations come with different, previously unexplored properties, and identifying precisely those which affect certified training, is difficult. In light of this, recent advances primarily focus on mitigating the practical effects of the paradox by improving the underlying optimization (Balunovic & Vechev, 2020;Shi et al., 2021;Lyu et al., 2021). While these developments have advanced the state of the art, a large gap between empirical and provable robustness of models remains Croce et al., 2020), and we still lack principled investigations of the paradox.
This Work. In this work we take a step towards addressing this void and understanding the paradox of certified training. We hypothesize that two additional properties beyond tightness strongly impact certified training. First, we notice that some relaxations optimize discontinuous losses during training. Second, we find that some relaxations are sensitive to changes in weights, introducing locally non-linear loss landscapes. As they induce more complex losses, both of these properties can have negative impact on optimization, and consequently lead to low certified robustness.
While the results in Table 1 seem contradictory if considering only tightness, additionally considering continuity and sensitivity provides a more viable explanation and helps demystify the paradox. Concretely, tighter relaxations in Table 1 are harmed by discontinuity or high sensitivity of their loss, shedding light on why they do not outperform the continuous and non-sensitive IBP. On a range of datasets and architectures, our experimental evaluation further substantiates the importance of considering these two additional properties in order to gain a deeper understanding of certified training dynamics.
Main Contributions.
Our key contributions are:
• Two fundamental properties, continuity and sensitivity, that along with tightness influence the success of a convex relaxation when used in certified training (Section 4).
• Extensive experiments on a range of convex relaxations, substantiating our hypothesis that considering continuity and sensitivity is necessary to understand the paradox of certified training (Section 5).
• A study of systematic changes to existing relaxations, showing that improving an unfavorable property of a relaxation is challenging, as this often negatively affects other properties (Section 6).
We believe the ideas presented in our work benefit further investigations of the paradox, as well as future attempts to derive new certified defenses that obtain state-of-the-art experimental results. Our paper is structured as follows. First, we provide the necessary background (Section 2) and state the paradox more formally (Section 3). In Section 4, we present our core results on continuity and sensitivity of popular relaxations. In Section 5, we provide detailed experimental evidence supporting our findings. Finally, in Section 6 we present a study of relaxation modifications, demonstrating complex dependencies between properties which complicate the process of improving unfavorable properties of existing relaxations.
Background and Related Work
We now discuss related work and provide the necessary background on training and certifying with convex relaxations. We present this background within a common framework (Salman et al., 2019b), capturing various single neuron relaxations to simplify analysis and comparison.
The discovery that neural networks are not robust to small input perturbations (Szegedy et al., 2013) led to defenses based on adversarial training (Goodfellow et al., 2015;Madry et al., 2018), hardening the model by training with adversarial examples. While adversarial defenses attain good empirical robustness, they lack robustness guarantees. Popular certification methods leverage convex relaxations Gehr et al., 2018;Singh et al., 2018;Raghunathan et al., 2018b;Singh et al., 2019b;Dathathri et al., 2020;Xu et al., 2020;Lyu et al., 2021), a comprehensive exposition of which can be found in Salman et al. (2019b)-here, we only provide an overview needed to understand our work. We focus on linear relaxations, as they are scalable (e.g., can certify ResNet34 (Serre et al., 2021)), contrary to SDP (Raghunathan et al., 2018b;Dathathri et al., 2020) which is limited to smaller networks. Other, prohibitively costly approaches, use multi-neuron relaxations (Singh et al., 2019a;Tjandraatmadja et al., 2020;Müller et al., 2021;Wang et al., 2021a), or rely on SMT (Katz et al., 2017) and MILP (Tjeng et al., 2019) solvers. Two fundamentally different competitive approaches, not our focus, are l ∞ -distance nets (Zhang et al., 2021), used together with relaxations, that also give rise to optimization difficulties (recently tackled in Zhang et al. (2022)), and randomized smoothing (Cohen et al., 2019;Salman et al., 2019a) which is more scalable than convex relaxations but offers only probabilistic guarantees, and introduces work at inference time, making it unsuitable for certain applications.
Setting. We consider an L-layer feedforward ReLU network h = h_L ∘ h_{L-1} ∘ ··· ∘ h_1 with parameters θ, where h_i : R^{n_{i-1}} → R^{n_i} is the transformation applied at layer i. Let the network input be x_0 ∈ R^{n_0} and let x_i := h_i ∘ ··· ∘ h_1(x_0) be the result after layer i, for i ∈ [L], where [L] := {1, ..., L}. Each h_i is either a dense/convolutional layer, both of which can be viewed as an affine transformation x_i = W_i x_{i-1} + b_i, or a nonlinear ReLU layer x_i = max(x_{i-1}, 0), where max is applied componentwise. We assume the two layer types alternate, with h_1 and h_L being affine. We focus on classification, where inputs x_0 are classified to one of n_L classes based on the logit vector z ≡ x_L, and the case of ℓ∞ robustness. Namely, to certify robust classification to label y in an ℓ∞ ball of radius ε > 0 around x, we prove that for every y' ≠ y

c_{y'}^T z < 0,   z := h(x_0),   ∀ x_0 : ‖x − x_0‖_∞ < ε,   (1)

where c_{y'} = e_{y'} − e_y, by upper bounding the left-hand side with a negative value.
Given some ε, and a set D of input examples (x, y) ∈ R^{n_0} × [n_L], we use CR(θ, ε, m) ∈ [0, 1] to denote the certified robustness of a network with parameters θ under certification method m, i.e., the ratio of examples from D for which m is able to prove that the network satisfies Equation 1.
Convex Relaxations.
On an intuitive level, convex relaxations are a class of methods for robustness certification of neural networks that attempt to prove Equation 1 by deriving and propagating bounds on the possible values of intermediate results, overapproximating (i.e., relaxing) the effect of non-linear activations in the network to obtain a computationally efficient certificate.
More formally, certification with convex relaxations proceeds through the network layer by layer, producing elementwise lower and upper bounds of x_i, l_i ∈ R^{n_i} and u_i ∈ R^{n_i} respectively. Starting from l_0 = x − ε and u_0 = x + ε we aim to obtain l_L and u_L, which yields the desired upper bounds of c_{y'}^T z for all y', allowing us to verify the robustness property. To this end, all following methods (linear relaxations) maintain one upper and one lower linear bound for each neuron x_{i,j} of layer i:

\underline{a}_{ij}^T x_{i-1} + \underline{d}_{ij} ≤ x_{i,j} ≤ \overline{a}_{ij}^T x_{i-1} + \overline{d}_{ij},   (2)

where \underline{a}_{ij}, \overline{a}_{ij} ∈ R^{n_{i-1}} and \underline{d}_{ij}, \overline{d}_{ij} ∈ R for all j ∈ [n_i]. Excluding the IBP relaxation, all methods use x_{i,j} = (W_i x_{i-1})_j + b_{i,j} for linear layer bounds, x_{i,j} = 0 if u_{i-1,j} ≤ 0 and x_{i,j} = x_{i-1,j} if l_{i-1,j} ≥ 0 for stable ReLU bounds, and calculate l_i and u_i using the backsubstitution introduced next. Unstable ReLU (l_{i-1,j} < 0 < u_{i-1,j}) bounds are method-specific and can depend on l_{i-1} and u_{i-1}.
Backsubstitution.
Figure 1: Illustration of unstable ReLU convex relaxations for methods introduced in Section 2: (a) DeepZ, (b) IBP & hBox, (c) CROWN & CROWN-IBP (R); each panel plots x_{i,j} against x_{i-1,j}.

Starting with x_{i,j} ≤ \overline{a}_{ij}^T x_{i-1} + \overline{d}_{ij} (similarly for the lower bound), we substitute x_{i-1} by replacing each x_{i-1,j} with its respective upper bound \overline{a}_{i-1,j}^T x_{i-2} + \overline{d}_{i-1,j} if (\overline{a}_{ij})_j is positive and with its lower bound otherwise. This is repeated recursively through the layers until we reach constraints of the form
$\underline{p}_j^T x_0 + \underline{q}_j \le x_{i,j} \le \overline{p}_j^T x_0 + \overline{q}_j$,    (3)

where $\underline{p}_j, \overline{p}_j \in \mathbb{R}^{n_0}$ and $\underline{q}_j, \overline{q}_j \in \mathbb{R}$ for all j ∈ [n i ]. Here, we can in turn substitute the appropriate side of l 0 ≤ x 0 ≤ u 0 for each element in x 0 , to obtain a lower and upper bound l i,j and u i,j on x i,j solely w.r.t. the bounds of x 0 . We provide further details of this procedure in Appendix A.
Note that while some of the relaxations have more efficient implementations, they produce the same outputs as our formulation. We use this formulation as it allows us to capture all of the needed relaxations, and further, our results are conceptual and hold for any implementation.
Tightness. Given a network robust around x, the success of certification with a relaxation depends on its tightness. Intuitively, tighter relaxations utilize overapproximations that are closer to the approximated non-convex shapes, and produce tighter bounds l i,j and u i,j on each x i,j . For a small number of relaxation pairs (r 1 , r 2 ), e.g., hBox and IBP (see Appendix B.2), we can prove that r 1 is strictly tighter than r 2 (i.e., each bound is strictly tighter), implying that for any fixed network and perturbation, every example that is certified by r 2 is also certified by r 1 . For most other pairs there is a consistent empirical understanding of relative tightness (Salman et al., 2019b;Singh et al., 2019b), which we will aim to quantify in Section 3. We now proceed to introduce the specifics of commonly used relaxations.
DeepZ. The DeepZ relaxation (Singh et al., 2018), equivalent to CAP, Fast-Lin, and Neurify (Wang et al., 2018b), uses the following for unstable ReLUs (Figure 1a):
λx i−1,j ≤ x i,j ≤ λx i−1,j − λl i−1,j , where λ := u i−1,j /(u i−1,j − l i−1,j ).
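For illustration, the unstable-ReLU bounds above (Figure 1a) can be written as a tiny helper returning the slope and intercept of each linear bound; the helper name is ours.

```python
def deepz_unstable_relu(l, u):
    """Slope/intercept of the DeepZ bounds lam*x <= ReLU(x) <= lam*x - lam*l
    for an unstable ReLU with pre-activation bounds l < 0 < u."""
    lam = u / (u - l)
    return (lam, 0.0), (lam, -lam * l)   # (slope, intercept) of lower and upper bound

print(deepz_unstable_relu(-2.0, 2.0))    # ((0.5, 0.0), (0.5, 1.0))
```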
IBP/hBox. The IBP (Gowal et al., 2018) or Box (Mirman et al., 2018;Gehr et al., 2018) relaxation uses interval arithmetic instead of backsubstitution, ignoring other dependencies. For affine layers, the upper bound (similarly for the lower bound) is (W i h i−1 ) j + b i,j , where h i−1,j = u i−1,j if the corresponding element of W i is positive, and h i−1,j = l i−1,j otherwise. For ReLU, it uses ReLU(l i−1,j ) ≤ x i,j ≤ ReLU(u i−1,j ) (Figure 1b). hBox is an instantiation of a hybrid zonotope (Mirman et al., 2018), also called symbolic interval in Wang et al. (2018a). It uses the same bounds as IBP, 0 ≤ x i,j ≤ u i−1,j , for unstable ReLUs, replacing x i,j with these bounds in the rest of backsubstitution. For affine layers and stable ReLUs, as with all other methods except IBP, it uses x i,j = (W i x i−1 ) j + b i,j and x i,j = x i−1,j (x i,j = 0), respectively.

CROWN/CROWN-IBP (R). CROWN and DeepPoly (Singh et al., 2019b) have the same upper bound as DeepZ for unstable ReLUs, but choose the lower bound adaptively: 0 ≤ x i,j if −l i−1,j ≥ u i−1,j , or x i−1,j ≤ x i,j otherwise (Figure 1c). CROWN-IBP (R) is a variant which efficiently computes l i and u i using IBP at all layers except the last, which uses CROWN and performs a full backsubstitution.
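To make the IBP computation concrete, below is a minimal numpy sketch of interval propagation through alternating affine/ReLU layers, assuming the architecture of Section 2 (a ReLU after every affine layer except the last); the function name and the 2-2-2 example weights are ours, with the weights matching the toy network worked through later in Appendix A.

```python
import numpy as np

def ibp_bounds(weights, biases, l, u):
    """Propagate elementwise bounds [l, u] with IBP through alternating affine/ReLU
    layers (a ReLU follows every affine layer except the last)."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
        # Affine layer: match each weight's sign with the corresponding input bound.
        l, u = W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b
        if i < len(weights) - 1:
            l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)   # ReLU(l) <= x <= ReLU(u)
    return l, u

# The toy network used later in Appendix A: inputs in [-1, 1]^2, no biases.
W = np.array([[1.0, 1.0], [1.0, -1.0]])
l_out, u_out = ibp_bounds([W, W], [np.zeros(2), np.zeros(2)],
                          np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
print(l_out, u_out)   # [ 0. -2.] [4. 2.]
```

The resulting output box can be compared with the DeepZ bounds derived step by step in Appendix A.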
Certified Training. While adversarial training often improves empirical robustness, the certified robustness (the ratio of inputs for which we can guarantee robustness as in Equation 1) of adversarially trained networks remains low, a known observation reaffirmed in Appendix B.1. Certified training (Mirman et al., 2018;Gowal et al., 2018;Raghunathan et al., 2018a;Lyu et al., 2021) addresses this, aiming to produce networks amenable to certification by incorporating the certification method into training and minimizing the cross-entropy loss L CE (ẑ, y), where ẑ is the worst case logit, such that ẑ y = l L,y and ẑ y′ = u L,y′ for all y′ ≠ y.
All of the above relaxations can be used in certified training, but the certified robustness obtained this way is far from the theoretical limit: Baader et al. (2020) proved that IBP-certified networks can approximate any continuous function arbitrarily precisely. Note that in the following, we differentiate certified training with the CROWN-IBP (R) relaxation (β: 1 → 1) from the certified defense CROWN-IBP (β: 1 → 0), which combines the CROWN-IBP (R) and IBP losses in training.
The Paradox of Certified Training
We now present the paradox of certified training, a well-known observation that has limited the applicability of certified defenses in practice, and discuss existing hypotheses that attempt to explain it.
Tightness Should Help Training. Recall from Section 2 that while we can rarely prove that a relaxation is strictly tighter than another relaxation, there is a consistent empirical understanding of their relative tightness. We illustrate this in Figure 2 by comparing certified robustness (CR) curves of relaxations on a fixed naturally trained MNIST network. We further quantify empirical tightness as CR-AUC, the area under the CR curve. While CR-AUC varies based on network choice and the training method, for a fixed setting, it can be used to compare tightness of methods, and we use it in the following when referring to tightness. As tighter relaxations certify more examples when applied to naturally trained networks, it is natural to assume that this effect extends to certified training, i.e., training with a tighter method should lead to higher CR.
Training with Tighter Relaxations Leads to Worse Results. Surprisingly, it is well established (Gowal et al., 2018;Balunovic & Vechev, 2020;Lee et al., 2021) that this is not the case in practice, and tightness can in fact harm certified robustness when a relaxation is used in training. Most notably, it has been observed that IBP training often outperforms training with DeepZ and CROWN, which are (empirically) tighter. We refer to this phenomenon as the paradox of certified training. We illustrate this paradox in Table 1, where we report CR-AUC (from the experiment in Figure 2) and certified robustness (with ε_test = 0.3) after certified training of the same MNIST network with each relaxation.
Existing Hypotheses. While recent state-of-the-art certified defenses based on convex relaxations (Balunovic & Vechev, 2020;Shi et al., 2021;Lyu et al., 2021) focus on mitigating the paradox in practice, the fundamental reasons behind it were so far poorly understood. Some conjecture that tighter relaxations over-regularize the network, yield hard optimization problems (Balunovic & Vechev, 2020), or simply state that they unexpectedly underperform (Gowal et al., 2018;Lyu et al., 2021), but they have not investigated this further. Lee et al. (2021) provide limited theoretical results that attempt to give insights into the paradox, but are unable to explain the results of most relaxations (e.g., hBox, CROWN, CROWN-IBP (R)) as these are discontinuous (Section 4.2), thus directly violating their Lipschitz continuity assumptions.
Properties of Convex Relaxations
We investigate the reasons behind the paradox discussed in Section 3. Concretely, we introduce two key properties of relaxations, continuity and sensitivity, and use them alongside tightness, which was the main focus of prior work, to improve our understanding of the paradox.
Tightness of Convex Relaxations
While tightness alone cannot explain the performance differences, it still has a significant role in the final certified robustness. We highlight this with the following theorem:
Theorem 1. Let r 1 and r 2 be two convex relaxations, where r 1 is known to be strictly tighter than r 2 . For a network parametrized by θ and any ε ≥ 0, it holds that max θ CR(θ, ε, r 1 ) ≥ max θ CR(θ, ε, r 2 ).
The theorem (see the proof in Appendix B.3) tells us that with a perfect optimizer, tightness would be the sole performance factor of relaxations, provided that one is strictly tighter than the other. However, our results in Table 1 demonstrate that this does not happen in practice, e.g., despite hBox being strictly tighter than IBP, training with it results in worse certified robustness. Clearly, gradient-based optimization in practice leads to worse parameters for tighter relaxations, and the underlying reasons are unclear.

Continuity of Convex Relaxations

While convex relaxations represent layer constraints as convex sets, the training loss is not necessarily convex with respect to the network weights. Moreover, we observe that some relaxations create a discontinuous loss landscape, harming first-order optimization as gradients near the discontinuity do not provide any information about the function values after the discontinuity (see Section 4.4). We show that CROWN, CROWN-IBP (R), and hBox all suffer from this problem. In Section 5.1 we show that these relaxations have many discontinuities when instantiated on a realistic network, but here we focus on a minimal example to better illustrate the core issue. Note that our definition of continuity is binary and depends only on the convex relaxation, without requiring knowledge of the architecture or the training process. There could be other, more fine-grained numerical definitions, such as counting the number of discontinuities (see for example Figure 6 or Table 4) along a certain trajectory, but these may depend on the setting and necessarily require running the training, as they cannot be computed beforehand. See Appendix I for a further discussion.
We focus on the discontinuity of the output layer lower bounds l L , treating each l L,j as a function of the network weights. Note that all findings can be easily extended to the actual loss function L CE (ẑ, y). We construct a minimal example to produce the discontinuities: a 3-layer network with input x 0,1 ∈ [−1, 1], affine layer x 1,1 = x 1,2 = x 0,1 + w where w is the only network parameter, ReLU layer x 2,1 = ReLU(x 1,1 ), x 2,2 = ReLU(x 1,2 ), and the output layer given as x 3,1 = x 2,1 + 1 and x 3,2 = x 2,2 − x 2,1 (see Appendix C.1 for an illustration of the network). Figure 3 shows the discontinuities that arise when varying the parameter w.
Discontinuity of CROWN and CROWN-IBP (R).
For CROWN, the discontinuities arise due to its adaptive choice of the lower bound for unstable ReLUs (Figure 1c), used as a heuristic to tighten the bounds. In our example, assume we use CROWN to compute the lower bound l 3,1 of x 3,1 . For w ∈ [−1, 1], the ReLUs are unstable with the preactivation range [−1 + w, 1 + w]. Thus, for w ∈ [−1, 0], as −l 1,1 ≥ u 1,1 , CROWN picks the lower bound x 2,1 ≥ 0 so l 3,1 = 1, and for w ∈ (0, 1] the lower bound x 2,1 ≥ x 1,1 so l 3,1 = w. This creates a discontinuity when −l 1,1 = u 1,1 , i.e., at w = 0. This implies the discontinuity of CROWN-IBP (R) as it uses CROWN for its final bounds.
Discontinuity of hBox. The discontinuities of hBox are caused by hBox switching from simple IBP bounds (Figure 1b) to the tight relation x i,j = x i−1,j . Assume we are deriving l 3,2 . For w ∈ (−1, 1), the ReLUs are unstable, so we use IBP bounds 0 ≤ x 2,j ≤ u 1,j = 1 + w for j ∈ {1, 2}, obtaining l 3,2 = −1 − w, which approaches −2 as w approaches 1. However, for w ≥ 1, we tighten the bound using x 2,j = x 1,j , resulting in l 3,2 = 0, thus a discontinuity when l 1,1 = l 1,2 = 0, i.e., at w = 1.
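The two jumps derived above are easy to check numerically. The sketch below hand-codes the bound computations for this specific toy network, following the case analysis in the text (function names are ours); evaluating near w = 0 and w = 1 exhibits the discontinuities of the CROWN bound l 3,1 and the hBox bound l 3,2 .

```python
def crown_l31(w, lo=-1.0, hi=1.0):
    """CROWN lower bound of x_{3,1} = ReLU(x_{0,1} + w) + 1 for x_{0,1} in [lo, hi]."""
    l1, u1 = lo + w, hi + w              # pre-activation bounds of x_{1,1}
    if u1 <= 0:                          # stable inactive: x_{2,1} = 0
        return 1.0
    if l1 >= 0:                          # stable active: x_{2,1} = x_{1,1}
        return l1 + 1.0
    if -l1 >= u1:                        # unstable, adaptive choice: 0 <= x_{2,1}
        return 1.0
    return l1 + 1.0                      # unstable, adaptive choice: x_{1,1} <= x_{2,1}

def hbox_l32(w, lo=-1.0, hi=1.0):
    """hBox lower bound of x_{3,2} = x_{2,2} - x_{2,1}, with x_{2,1} = x_{2,2} = ReLU(x_{0,1} + w)."""
    l1, u1 = lo + w, hi + w
    if l1 >= 0 or u1 <= 0:               # stable: x_{2,j} kept symbolic, the difference is 0
        return 0.0
    return 0.0 - u1                      # unstable: interval bounds 0 <= x_{2,j} <= u1, dependency lost

eps = 1e-6
print(crown_l31(-eps), crown_l31(eps))   # 1.0  vs ~0.0   -> jump at w = 0
print(hbox_l32(1 - eps), hbox_l32(1.0))  # ~-2.0 vs 0.0   -> jump at w = 1
```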
As our example shows, few neurons are sufficient to produce discontinuities. Thus, we expect large networks to have a large number of discontinuities, appearing at any ReLU neuron x i,j , whenever −l i−1,j = u i−1,j (for CROWN) or l i−1,j = 0 (for hBox). Backsubstitution accumulates this effect, creating an unfavorable landscape. As mentioned earlier, we demonstrate this in practice on a realistic network in Section 5.1.
Continuity of Other Relaxations. In contrast, the bounds produced by IBP and DeepZ are continuous in the network parameters (Theorem 2; proof in Appendix C.2). For IBP, this follows since the bounds are compositions of continuous functions. For DeepZ, the unstable ReLU bounds are λx i−1,j and λx i−1,j − λl i−1,j , where λ = u i−1,j /(u i−1,j − l i−1,j ). Then, as l i−1,j → 0, λ → 1 and as u i−1,j → 0, λ → 0. For both, the unstable case bounds (in the limit) match the stable case ones; therefore, there is no discontinuity.
Sensitivity of Convex Relaxations
Next, we analyze the effect of small weight changes on the output loss by measuring the degree of change in the output when the first layer weights are shifted by δ in the gradient direction. While changes in other layers also matter, we consider only changes in the first layer to make the computation of the bounds tractable.
To this end, we define a set of rational functions of δ as R N (δ) = {p(δ)/q(δ) | p(δ), q(δ) ∈ P N (δ)}, where P N (δ) denotes the polynomials of degree up to N . Note that P N (δ) ⊆ R N (δ). We say that some neuron x i,j is in the set P N (δ) (or R N (δ)) if that set contains both l i,j and u i,j , now treated as functions of δ where δ = 0 corresponds to the concrete l i,j and u i,j used in Section 2. Everything else is treated as a constant. During backsubstitution for x i,j , all encountered x i ,j are repeatedly replaced with bound expressions from Equation 2, until we reach Equation 3 to obtain linear expressions for l i,j and u i,j . If the output neurons of the network are in R N (δ), we say that the sensitivity of a relaxation is N . Sensitivity is an undesirable property, as it introduces a complex loss landscape that hinders optimization, as further explained in Section 4.4. Note that while coefficients of the polynomials also matter, they are influenced by the weights which makes it difficult to compute the worst-case bound in closed form. In the following, we compute the sensitivity of convex relaxations (more detailed derivation is deferred to Appendix J), to show that DeepZ, CROWN and CROWN-IBP (R) are highly sensitive, while IBP and hBox are not, inducing more favorable landscapes. As before, while we focus on l L , the conclusions can be extended to the actual loss. We always consider the worst case w.r.t. all bound choices and ReLU stability, and assume all layers are of size M . While the sensitivity values we obtain represent an upper bound, they clearly demonstrate that some relaxations are highly sensitive, as opposed to IBP and hBox, for which our result on insensitivity is exact. This is summarized in Table 2.
Computing the Sensitivity. As the first layer is affine, we have x 1,j ∈ P 1 (δ) for all relaxations. To compute the sensitivity, we sequentially analyze the effect of each layer.
IBP/hBox. For IBP, assume that at layer i, all x i−1,j ∈ P N (δ). For an affine layer, as l i,j and u i,j are linear combinations of elements of u i−1 and l i−1 , we have x i,j ∈ P N (δ). For a ReLU layer, as u i,j = ReLU(u i−1,j ) (same for l i,j ) we again have x i,j ∈ P N (δ). Thus, all neurons are in P 1 (δ) ⊆ R 1 (δ) so the sensitivity of IBP is 1. For hBox, the only difference are affine layers, where now
x i,j = (W i x i−1 ) j + b i,j .
As linear combinations of elements of P N (δ) are in P N (δ), all neurons stay in P 1 (δ) and the sensitivity of hBox is also 1. DeepZ/CROWN. The ReLU bounds of DeepZ, λx i−1,j and λx i−1,j − λl i−1,j , significantly increase the sensitivity. After the first ReLU layer, we have that x 2,j ∈ R 2 (δ) as λ ∈ R 1 (δ). This changes the behavior of all following affine layers, as a linear combination of M elements of
R N (δ) is in R M N (δ). Thus, x 3,j ∈ R 2M (δ).
For the following ReLU layers, if we assume the inputs are in R N (δ), we have that λ ∈ R 2N (δ), and thus the outputs are in R 3N (δ). Putting this together, each ReLU-affine block from layer 4 onwards multiplies the sensitivity by 3M . As there are B ≡ L/2 − 1 such blocks, we obtain 2 · 3 B M B+1 for the final sensitivity. CROWN uses the same upper ReLU bounds as DeepZ, so we can apply the same analysis, and show that the sensitivity of CROWN is 2 · 3 B M B+1 as well. Thus, both DeepZ and CROWN are highly sensitive.
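The degree growth caused by λ can also be illustrated symbolically. In the sketch below, the concrete coefficients of the first-layer bounds are made up for illustration (any linear functions of δ giving an unstable ReLU would do); concretizing the DeepZ lower bound of the ReLU output already yields a rational function with numerator degree 2 and denominator degree 1, i.e., an element of R 2 (δ).

```python
import sympy as sp

delta = sp.symbols('delta')
# Hypothetical pre-activation bounds after the first affine layer, linear in the
# weight shift delta (the coefficients are made up; the ReLU is unstable for small delta).
l1 = -1 + 2 * delta          # l_{1,j}(delta) in P_1(delta)
u1 = 3 + delta               # u_{1,j}(delta) in P_1(delta)

lam = u1 / (u1 - l1)         # DeepZ slope for an unstable ReLU, in R_1(delta)

# Concretized lower bound of the ReLU output: substitute x_{1,j} >= l_{1,j}
# into the DeepZ lower bound lam * x_{1,j}.
l2 = sp.together(lam * l1)
num, den = sp.fraction(l2)
print(sp.degree(num, delta), sp.degree(den, delta))   # 2 1  ->  l_{2,j} in R_2(delta)
```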
CROWN-IBP (R).
Here, at ReLU layer i during the (only) backsubstitution, we have to consider l i−1,j and u i−1,j separately from x i−1,j . While the former were precomputed with IBP, and are thus in P 1 (δ), the latter get substituted as usual and can carry larger sensitivity. Assuming x i−1,j ∈ R N (δ) (N ≥ 1) and observing that we always have λ ∈ R 1 (δ), it follows that x i,j ∈ R N +1 (δ). As the affine layers have the same effect as before, each ReLU-affine block now increases sensitivity from N to (N + 1)M . As before, x 3,j ∈ R 2M (δ), so summing the arising geometric series gives the final sensitivity of
(2M^{B+2} − M^{B+1} − M)/(M − 1), which is in O(M^{B+1}). Clearly, CROWN-IBP (R) is also significantly more sensitive than IBP and hBox.
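As a quick sanity check of this closed form, one can iterate the per-block recurrence N → (N + 1)M from the derivation above, starting from the sensitivity 2M of x 3,j (a minimal sketch; the values of M and B are arbitrary).

```python
def crown_ibp_r_sensitivity(M, B):
    """Iterate N -> (N + 1) * M over B ReLU-affine blocks, starting from 2M."""
    N = 2 * M
    for _ in range(B):
        N = (N + 1) * M
    return N

M, B = 5, 3
closed_form = (2 * M**(B + 2) - M**(B + 1) - M) // (M - 1)
print(crown_ibp_r_sensitivity(M, B), closed_form)   # 1405 1405
```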
Continuity and Sensitivity Impact Optimization
We now discuss how discontinuity and high sensitivity negatively affect optimization with gradient descent (GD). We consider a randomly initialized network and optimize first layer weights via GD, trying to maximize the lower bound of one output neuron, produced using a particular relaxation. For plotting, we restrict the optimization to the direction of the gradient in the initial point δ = 0 (see Appendix D for details).
The Impact of Discontinuities. The key issue with discontinuous relaxations is that GD can, at a discontinuity, fall off a cliff in the landscape to a region from which it fails to recover-i.e., where gradients lead it to a suboptimal local maximum. Figure 4 (left) shows a manifestation of this issue. Even though the landscape of CROWN allows for a higher solution than IBP, GD with CROWN converges to a worse value than IBP. Contrary to this, continuous relaxations such as IBP allow GD to easily navigate the landscape. This matches the literature stating that optimizing discontinuous functions requires complex algorithms (Conn & Mongeau, 1998;Martínez, 2002;Wechsung & Barton, 2014).
The Impact of High Sensitivity. Sensitive relaxations introduce a complex loss landscape with a larger number of local optima and saddle points, where GD can get stuck. Figure 4 (right) is an example where DeepZ has a highly non-linear landscape that traps GD at a local maximum with a low objective value, not allowing it to progress to better solutions. While DeepZ is tighter for all δ, IBP has the minimum sensitivity and is thus piecewise linear, allowing GD to quickly converge to a higher value. Extensive theory (Pardalos & Vavasis, 1991;Jibetean & de Klerk, 2006;Zoej et al., 2007) confirms that high-degree polynomial and rational functions, which appear for sensitive relaxations, are hard to optimize.
Experimental Evaluation
In this section we perform an experimental evaluation, further substantiating our hypothesis on the properties of relaxations that explain the paradox of certified training. First, in Section 5.1 we show that discontinuities and high sensitivity appear in practice. Then, in Section 5.2, we provide a deeper insight into the paradox by evaluating certified training and confirming our claims regarding the effect of continuity and sensitivity.

Continuity and Sensitivity in Practice

First, we measure continuity and sensitivity on a naturally trained network. Namely, we train FC, a 5-layer network (see Appendix F.1), on MNIST. Then, we measure the change in the lower bound of one output neuron as we shift all first layer weights by δ in the gradient direction of that neuron. In Figure 5, we show the resulting bounds depending on δ, on a representative input with ε = 0.15. Similar results are observed for different choices of δ and ε. The experiment confirms our results: hBox, CROWN, and CROWN-IBP (R) indeed suffer from discontinuities, while IBP and DeepZ do not. We observe that CROWN has more discontinuities than other relaxations due to its adaptive lower bound. We further confirm our findings on more networks in Appendix E.1.
Additionally, in Figure 6 we measure continuity and sensitivity during certified training for each convex relaxation (complete details of the experiment are provided in Appendix E.2). We can observe (left) that hBox is discontinuous at the start of training, when more ReLUs are changing stability, and becomes more continuous as they stabilize, while CROWN is more discontinuous due to a larger percentage of consistently unstable ReLUs. These observations match our results on continuity from Section 4.2. While DeepZ is continuous, we can notice (right) that it is highly sensitive already very early in training, which explains its bad performance when used in certified training, contrary to what might be expected given its favorable tightness.
Evaluation of Certified Training
Next, we perform a thorough evaluation of certified training with all relaxations introduced in Section 2 on 4 widely used datasets (MNIST, FashionMNIST, SVHN, CIFAR-10) and 2 architectures: FC, a 5-layer dense network, and CONV, a 3-layer convolutional network. For CIFAR-10 we use the larger 4-layer CONV+, here necessary to obtain nontrivial accuracies after certified training. Note that further increasing the network size only marginally boosts the results, but prevents training with the time and memory intensive CROWN, which already cannot be trained on CONV+. We focus on well-established and challenging cases of strong adversaries, i.e., ε_test = 0.3 for MNIST/FashionMNIST and ε_test = 8/255 for SVHN/CIFAR-10.

[Table 3: Certified training of two MNIST networks with modifications of relaxations aimed at improving unfavorable properties. The first row shows the favorability of tightness (T), continuity (C) and sensitivity (S) for each relaxation; the following rows show certified robustness (in %). Columns: IBP, hBox, hBox-Diag, hBox-Diag-C, hBox-Switch, DeepZ, DeepZ-Box, DeepZ-Diag, DeepZ-Diag-C, DeepZ-Switch, DeepZ-Soft, DeepZ-IBP (R), CROWN, CROWN-0, CROWN-0-C, CROWN-0-Tria, CROWN-0-Tria-C, CROWN-1, CROWN-1-C, CROWN-1-Tria, CROWN-1-Tria-C, CROWN-Soft, CROWN-IBP (R), CROWN-Soft-IBP; rows: T/C/S favorability and certified robustness for FC and CONV.]

Reproducing the Paradox. Our main results are shown in Table 2. Whereas prior work provides certain evidence, our comprehensive experiments over 5 relaxations, 4 datasets and several networks confirm that the well-known paradox of certified training generally holds: tighter relaxations obtain worse results, and no tight relaxation can consistently outperform the loose IBP. Note that we aim to understand the behavior of certified training with a single relaxation. As previously noted, the paradox can in some cases be circumvented with advanced training schemes, e.g., the hybrid CROWN-IBP defense can often outperform IBP by combining the CROWN-IBP (R) and IBP relaxations in training (see Appendix F.4 for expanded results).
Understanding the Paradox. We use × to highlight cases when one of our two key properties, continuity and sensitivity, is unfavorable for training (discontinuous loss, high sensitivity), and ✓ when it is favorable. Considering tightness as a sole property of a relaxation led to a seemingly contradictory conclusion. Once we complement tightness with our two properties, the results are less puzzling, as we can explain the inferior performance of each method compared to IBP. As discontinuity and sensitivity manifest for realistic networks (Section 5.1), and can have a negative effect on gradient descent (Section 4.4), we can now expect that discontinuous and sensitive relaxations will not produce satisfactory results. This is confirmed in Table 2.
Namely, we can attribute the poor results of hBox and CROWN-IBP (R) to the discontinuities in their loss which harm gradient descent. While DeepZ is continuous, it is highly sensitive which again poses a difficulty for optimization and hurts the results. Notably, CROWN suffers from both issues, thus failing despite its tightness. We see that IBP, while loose, has favorable continuity and sensitivity, and achieves the best results. This provides novel insights on the paradox-relaxations with unfavorable properties get worse results.
Excluding Alternative Explanations. Exactly quantifying the impact of each property (tightness, continuity, and sensitivity) on the result is challenging, as it might heavily depend on the setting, e.g., dataset or network (see discussion in Section 6). However, to exclude the possibility that our conclusions are an artifact of a specific setting (e.g., they hold only for a particular weight initialization), we repeat a subset of 86.1 ± 0.5 82.0 ± 0.5 79.9 ± 0.5 76.7 ± 1.0 68.0 ± 2.5 20.1 ± 15.2 11.3 ± 0.0 LooseIBP-DC 300 86.2 ± 0.4 82.3 ± 0.2 79.0 ± 0.2 71.9 ± 1.2 24.9 ± 13.8 11.3 ± 0.0 11.3 ± 0.0 LooseIBP-DC 1000 86.1 ± 0.3 81.9 ± 0.3 57.0 ± 12.9 11.3 ± 0.0 11.3 ± 0.0 11.3 ± 0.0 11.3 ± 0.0 our experiments for a wider range of parameter choices, including various initializations, regularization norms, learning rates, optimizers, as well as training on subsets of the data. In all considered settings we reach the same conclusions as in Table 2, further strengthening our insights. The detailed results are in Appendix G.
Improving Unfavorable Properties of Relaxations
Given our previous results that demonstrate the negative effect of unfavorable properties on certified training, a natural follow-up question is: can we simply improve the unfavorable properties of a relaxation to make it more successful in certified training? To investigate this question we systematically explore and evaluate modifications of previously considered relaxations, and demonstrate that this does not immediately lead to better results, as such changes often harm other properties, inducing a complex tradeoff.
Discovering Modifications.
To obtain the modifications, we generate all combinations of suitable choices for lower and upper linear bounds (as in Equation 2) for all three ReLU stability cases, filtering out unsound candidates (i.e., those that do not properly overapproximate ReLU), and those for which there is a strictly more favorable relaxation (i.e., provably strictly tighter and not worse in continuity and sensitivity). Additionally, we include several relaxations obtained via (i) discretely switching between bound choices for unstable ReLU based on a CROWN-inspired heuristic; (ii) changing the same bounds in a soft way, eliminating the discontinuity introduced by the switching heuristic, and (iii) computing the intermediate bounds using IBP to reduce sensitivity, as in CROWN-IBP (R).
Properties are Entangled. The resulting relaxations are shown in Table 3. We interpret each relaxation as a modification of one of the relaxations from Table 2 aimed at improving one property, and name them accordingly. We show the favorability of each property, and certified robustness after certified training of FC and CONV networks. Our claims on tightness of these modifications are based on empirical CR-AUC measurements in the same setting as in Figure 2 (see Appendix H for details). We defer complete descriptions of each modification, including the intended as well as unintended effects on properties, to Appendix H.
The main observation from Table 3 is that properties are not independent: modifying a relaxation to improve a property often negatively affects another one. For example, by fixing the lower bound x i−1,j ≤ x i,j for unstable ReLUs in CROWN to eliminate the discontinuities due to heuristic switching, we obtain CROWN-1, which introduces a new kind of discontinuities at u i−1,j = 0 and is slightly looser. Further, using x i−1,j ≤ x i,j for the negative case creates CROWN-1-C, which is now continuous, but significantly looser. Both of these relaxations perform worse than CROWN, while some similar changes result in improvements, implying a complex tradeoff between properties, where they differently affect certified training in different scenarios, as previously observed in Section 5.2. Crucially, no modification is able to outperform IBP, strengthening the conclusion that modifying existing relaxations to improve unfavorable properties is not simple and does not directly lead to state-of-the-art results. Note that the same phenomenon affects most prior work on designing convex relaxations, where tightness was the sole focus in relaxation design, which unknowingly harmed the other two properties and caused bad results in certified training, leading to the paradox which we focus on in this work.
Towards Understanding the Tradeoff. To further demonstrate that properties can affect training in different ways for different settings, as opposed to the previously explored discrete modifications, we consider two parametrized modifications of IBP: the continuous LooseIBP-C(ω), which before every ReLU layer replaces the interval arithmetic bounds [l, u] with [l − ω, u + ω], and the discontinuous LooseIBP-DC_F(ω), which uses
[l − ω(⌈F·l⌉ − F·l), u + ω(F·u − ⌊F·u⌋)]
, and is further parametrized by F , where larger F leads to more discontinuities. Increasing the looseness parameter ω reduces the tightness of all relaxations. For fixed ω, all LooseIBP-DC F (ω) have intuitively comparable tightness, and are all strictly tighter than LooseIBP-C(ω).
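For concreteness, a minimal sketch of the two interval modifications, applied to the bounds [l, u] before each ReLU layer and following the ⌈·⌉/⌊·⌋ reading of the definition above (function names are ours):

```python
import numpy as np

def loose_ibp_c(l, u, omega):
    """LooseIBP-C(omega): widen the interval by a constant omega (continuous in l, u)."""
    return l - omega, u + omega

def loose_ibp_dc(l, u, omega, F):
    """LooseIBP-DC_F(omega): the widening jumps whenever F*l or F*u crosses an integer,
    so it is never wider than omega (strictly tighter than LooseIBP-C) but discontinuous."""
    return l - omega * (np.ceil(F * l) - F * l), u + omega * (F * u - np.floor(F * u))
```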
In Table 4 we present the certified training results for various values of ω and F , with the CONV network trained on the MNIST dataset, following the setting of Table 2. We perform 4 runs with different random seeds, and report the mean and the standard deviation. As all considered relaxations are strictly looser than IBP for any fixed ω, we do not expect improvements over IBP (which achieves 86.8% CR in this case, see Table 2). However, it is interesting to determine when sacrificing the continuity of LooseIBP-C(ω) for tightness of LooseIBP-DC F (ω) is beneficial (we highlight such cases for each ω in bold). Namely, we can see that in tight regimes (low ω), the advantage of tightness on average outweighs the harm of discontinuities, and we get comparable or slightly more favorable results for most F . As we move to looser regimes (high ω), the differences between relaxations become more pronounced, and discontinuities become more important relative to tightness, i.e., increasing tightness improves results only if the cost paid in discontinuities is not too large. Even taking into account the standard deviation of results for small ω, where some of the relaxations are comparable, our conclusion still stands. Note that for high ω, relaxations sometimes fully diverge in training, dropping CR to trivial 11.3%, which explains cells with unusually high standard deviation.
This illustrates that while the three properties of a relaxation are indicative of its performance, the underlying tradeoff can vary, and estimating the exact effects of each property in a particular setting is challenging.
Conclusions, Discussion and Future Work
Our theoretical (Section 4) and experimental (Section 5) results demonstrated that attempts to use tighter relaxations in certified training have led to unfavorable properties such as discontinuity and high sensitivity of the loss. These novel insights on the failure of these relaxations to outperform the loose IBP represent a first step towards a deeper understanding of this phenomenon. As the difference between the best empirical and certified robustness is more than 25% based on current leaderboards (Croce et al., 2020) on CIFAR-10, we now provide a brief outlook, in light of new evidence, on possible techniques that could help close this gap, and identify several high-level directions that could be explored in future work.
New Relaxations with Favorable Properties.
First, one could try to design a novel relaxation that is tight, and has both favorable continuity and favorable sensitivity. The results of our follow-up study in Section 6 indicate that this might be difficult, as trying to improve a property of an existing relaxation often negatively affects other properties, inducing a tradeoff with complex effects on training. Nevertheless, a relaxation with all favorable properties could still exist, as for instance Lyu et al. (2021) have obtained competitive results by using a new relaxation, and such search could be further guided by our findings.
New Training Methods for Existing Relaxations.
Second, one could attempt to utilize existing convex relaxations in certified training with a modified training procedure, which is designed to exploit the benefits of each relaxation. Examples of this in recent work include searching for counterexamples (Balunovic & Vechev, 2020), combining several relaxations or using better initialization (Shi et al., 2021), and have shown to be a promising way to obtain state-of-the-art certified robustness. Future work could attempt to explicitly incorporate the notions of continuity and sensitivity when designing such a training procedure.
Going Beyond Convex Relaxations. Finally, under the assumption that the tradeoff between tightness and other properties represents a fundamental obstacle for convex relaxations, a promising possibility could be to move away from training with convex relaxations altogether and adopt a fundamentally different approach. Recently, alternative methods based on the technique of randomized smoothing (Cohen et al., 2019;Salman et al., 2019a;Yang et al., 2020) or new certification-friendly model architectures (Zhang et al., 2021;2022) have achieved strong results, which may suggest that this avenue is the most promising. However, these methods come with their own set of challenges and tradeoffs, previously discussed in Section 2.

[Figure 7: The toy network used in the backsubstitution example of Appendix A, with neurons x 0,1 , x 0,2 (inputs), x 1,1 , x 1,2 , x 2,1 , x 2,2 , and x 3,1 , x 3,2 (outputs).]
A Backsubstitution Example
Here we illustrate the process of backsubstitution concretely on the toy network shown in Figure 7, using DeepZ. The same example but with the CROWN/DeepPoly relaxation is shown in (Singh et al., 2019b).
The components of the input x 0 , namely x 0,1 and x 0,2 , have bounds −1 and 1, meaning that −1 ≤ x 0,1 , x 0,2 ≤ 1. Because the first layer h 1 is an affine layer we have
x 1,1 = x 0,1 + x 0,2 ,    (4)
x 1,2 = x 0,1 − x 0,2 .    (5)
To obtain the bound l 1,1 we replace both x 0,1 and x 0,2 with l 0,1 = −1 and l 0,2 = −1, as their coefficients are both positive in Equation 4, and obtain l 1,1 = −2. To obtain l 1,2 we replace x 0,1 with l 0,1 = −1 and x 0,2 with u 0,2 = 1 in Equation 5, as the coefficient of x 0,2 is negative, and get l 1,2 = −2. Similarly we get u 1,1 = 2 and u 1,2 = 2.
The second layer h 2 is a ReLU layer. As both ReLUs are unstable, we need to calculate λ for each of them. As the bounds are equal, l 1,1 = l 1,2 = −2 and u 1,1 = u 1,2 = 2, we get that λ is also equal: λ = 1/2. Using the formula for unstable ReLUs we get 1/2 x 1,1 ≤ x 2,1 ≤ 1/2 x 1,1 + 1 and 1/2 x 1,2 ≤ x 2,2 ≤ 1/2 x 1,2 + 1. Now backsubstituting the expressions for x 1,1 and x 1,2 gives 1/2 (x 0,1 + x 0,2 ) ≤ x 2,1 ≤ 1/2 (x 0,1 + x 0,2 ) + 1 and 1/2 (x 0,1 − x 0,2 ) ≤ x 2,2 ≤ 1/2 (x 0,1 − x 0,2 ) + 1. The lower and upper bounds l 2,1 , u 2,1 and l 2,2 , u 2,2 for x 2,1 and x 2,2 respectively follow immediately:
x 2,1 ≥ 1/2 (l 0,1 + l 0,2 ) = −1 = l 2,1 ,
x 2,1 ≤ 1/2 (u 0,1 + u 0,2 ) + 1 = 2 = u 2,1 ,
x 2,2 ≥ 1/2 (l 0,1 − u 0,2 ) = −1 = l 2,2 ,
x 2,2 ≤ 1/2 (u 0,1 − l 0,2 ) + 1 = 2 = u 2,2 .
The third layer h 3 is again an affine layer, hence we get x 3,1 = x 2,1 + x 2,2 and x 3,2 = x 2,1 − x 2,2 . In the backsubstitution step, we replace x 2,1 and x 2,2 with their upper and lower bounds and arrive at
x 0,1 ≤ x 3,1 ≤ x 0,1 + 2,
x 0,2 − 1 ≤ x 3,2 ≤ x 0,2 + 1.
Again, the lower and upper bounds l 3,1 , u 3,1 and l 3,2 , u 3,2 for x 3,1 and x 3,2 respectively follow:
x 3,1 ≥ l 0,1 = −1 = l 3,1 ,
x 3,1 ≤ u 0,1 + 2 = 3 = u 3,1 ,
x 3,2 ≥ l 0,2 − 1 = −2 = l 3,2 ,
x 3,2 ≤ u 0,2 + 1 = 2 = u 3,2 .
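The example above can also be reproduced mechanically. DeepZ was originally formulated on zonotopes (affine arithmetic), which yields the same concrete output bounds as the backsubstitution view used in this appendix; the sketch below (class and function names are ours) recovers the output bounds [−1, 3] and [−2, 2] for this toy network.

```python
import numpy as np

class Zono:
    """Affine form c + sum_k g[k] * eps_k with each error symbol eps_k in [-1, 1]."""
    def __init__(self, c, g):
        self.c, self.g = float(c), dict(g)   # center, generator coefficients

    def bounds(self):
        r = sum(abs(v) for v in self.g.values())
        return self.c - r, self.c + r

def affine(zs, W, b):
    """Exact affine transformer: each output row is a linear combination of the inputs."""
    out = []
    for i in range(W.shape[0]):
        c = float(b[i]) + sum(W[i, j] * zs[j].c for j in range(len(zs)))
        g = {}
        for j, z in enumerate(zs):
            for k, v in z.g.items():
                g[k] = g.get(k, 0.0) + W[i, j] * v
        out.append(Zono(c, g))
    return out

def relu(zs, fresh):
    """DeepZ ReLU transformer: each unstable neuron gets slope lambda and one new error term."""
    out = []
    for z in zs:
        l, u = z.bounds()
        if u <= 0:                       # stable inactive
            out.append(Zono(0.0, {}))
        elif l >= 0:                     # stable active
            out.append(Zono(z.c, z.g))
        else:                            # unstable: lam*x <= ReLU(x) <= lam*x - lam*l
            lam = u / (u - l)
            mu = -lam * l / 2.0          # half the vertical gap between the two linear bounds
            g = {k: lam * v for k, v in z.g.items()}
            g[next(fresh)] = mu
            out.append(Zono(lam * z.c + mu, g))
    return out

# The toy network of Figure 7: inputs in [-1, 1]^2, both affine layers [[1, 1], [1, -1]], no biases.
fresh = (f"relu{n}" for n in range(10))
x0 = [Zono(0.0, {"e1": 1.0}), Zono(0.0, {"e2": 1.0})]
W, b = np.array([[1.0, 1.0], [1.0, -1.0]]), np.zeros(2)
x3 = affine(relu(affine(x0, W, b), fresh), W, b)
print([z.bounds() for z in x3])          # [(-1.0, 3.0), (-2.0, 2.0)]
```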
B Additional Results on Tightness
Here we present results related to tightness, namely our experiments measuring empirical tightness of relaxations, and two proofs omitted from the main paper.
B.1 Quantifying the Tightness of Relaxations
We present the complete results of our tightness experiments, one of which (experiment A) was shown in Table 1 and further in Figure 2.
Setup. We conduct 8 experiments (labeled A-H), varying the dataset (MNIST, FashionMNIST, CIFAR-10), the network (CONV and CONV+, described in Appendix F.1, and FC-S, a 100-100-10 fully-connected network), and the training method (natural training, adversarial training with PGD with ε = 0.1, and adversarial training with PGD with ε = 0.3). For PGD, we use 100 steps with step size 0.01. We train all models for 200 epochs. For PGD with ε = 0.3, as necessary for convergence, we use the first 10 epochs as warm-up (natural training), and the following 50 as ramp-up, where we slowly increase the perturbation radius from 0 to ε. We use L2 regularization with strength 5e-3 for experiment E, and 5e-5 for other experiments. The experiments are summarized in Table 5.
After training the networks, we sample 100 values of ε from 0 to 0.07 for natural, 0 to 0.15 for PGD ε = 0.1, and 0 to 0.4 for PGD ε = 0.3 models. For every radius ε, we use each relaxation to attempt to certify the examples from the test set. We calculate the certified robustness of each network under all relaxations, and plot the resulting CR curves. The CR curves shown in Figure 2 correspond to the results of experiment A. Further, we quantify the tightness of each relaxation as CR-AUC, the area under the CR curve, calculated using the trapezoidal rule. Table 1 contains CR-AUC values of curves from Figure 2 (experiment A), and certified robustness after certified training of the same network with ε_train = 0.3 (part of the results in Section 5.2).
Discussion.
As the curves are similar among experiments, we summarize the results (CR-AUC) in Table 6. From these results, we can confirm two claims given in the main paper:
It is necessary to use certified training to obtain a certifiably robust network. We can see that adversarial training, along with improving empirical robustness, also has a positive effect on certified robustness. However, these results are significantly worse than those that can be obtained using certified training. To illustrate this point, for experiment C (MNIST, CONV, PGD with ε = 0.3), the method that certifies the most at ε = 0.3 is CROWN, with 11.9% certified robustness (not visible from the results shown here). Using certified training in the same setting, all relaxations obtain significantly better results: from 69.8% (worst method) to 86.8% (best method), as seen in Table 2.
The relative tightness of relaxations is well established and can be empirically confirmed. While individual CR-AUC values may vary between settings, the conclusions are consistent across all experiments.
It is worth noting that while we cover all commonly used convex relaxations, extending the tightness discussion to more complex relaxation-based methods that do not necessarily fit the framework presented in Section 2 can be challenging. As an example, the recently introduced β-CROWN (Wang et al., 2021b) can in practice outperform the provably tighter Triangle relaxation (Ehlers, 2017), as discussed in Wang et al. (2021b) and previously in Salman et al. (2019b).
B.2 hBox is Strictly Tighter Than IBP
Here we sketch a proof that for any neural network parameters hBox certifies more than IBP. For a formal proof see Wang et al. (2018a) or Mirman et al. (2018).
Theorem 3 (Informal). Given a neural network architecture parametrized by θ ∈ R d , for any choice of θ, hBox can certify robust classification for more inputs than IBP.
Proof. To prove the statement, it is sufficient to show that the constraints introduced for each layer for hBox are tighter than IBP constraints (see Equation 3 in (Wang et al., 2018a) for more details on this claim). This is relatively straightforward for affine layers, so here we focus on ReLU layers. For unstable ReLU both use the same constraints: 0 ≤ x i,j ≤ u i−1,j . When u i−1,j < 0, both set x i,j = 0. The only difference is when l i−1,j > 0: in this case IBP uses l i−1,j ≤ x i,j ≤ u i−1,j , while hBox sets x i,j = x i−1,j . By definition we have l i−1,j ≤ x i−1,j ≤ u i−1,j , implying that hBox constraints are indeed tighter.
B.3 Strictly Tighter Relaxations have Better Certified Robustness Optima
Here we restate and prove Theorem 1. Theorem 1. Let r 1 and r 2 be two convex relaxations, where r 1 is known to be strictly tighter than r 2 . For a network parametrized by θ and any ε ≥ 0, it holds that max θ CR(θ, ε, r 1 ) ≥ max θ CR(θ, ε, r 2 ).
Proof. Let θ 1 := arg max θ CR(θ, ε, r 1 ) and analogously θ 2 := arg max θ CR(θ, ε, r 2 ). We can observe that: max θ CR(θ, ε, r 2 ) = CR(θ 2 , ε, r 2 ) ≤ CR(θ 2 , ε, r 1 ) ≤ CR(θ 1 , ε, r 1 ).
Here, the first inequality follows from the fact that r 1 is a strictly tighter relaxation than r 2 for any parameter choice. The second inequality follows from the definition of θ 1 as the maximizer, completing the proof.
C Omitted Details on Continuity
C.1 Network Used to Show Discontinuities
The sketch of the network described in Section 4.2, used to show discontinuities of hBox and CROWN relaxations is given in Figure 8.
C.2 Proof for Continuity of IBP and DeepZ
We expand on the proof sketch given in the main paper to provide a complete proof of Theorem 2.
Proof. IBP: Recall that for IBP, l i and u i are computed directly as a function of l i−1 , u i−1 , and θ. For affine layers, this function is a sum of products of elements in l i−1 , u i−1 and θ, which is continuous w.r.t. all variables. If layer i is a ReLU layer, the lower (upper) bound function is ReLU(l i−1,j ) (resp. ReLU(u i−1,j )), which is also clearly continuous. As compositions, sums, and products of continuous functions are continuous functions, this directly shows that the l L,j are ultimately continuous.
DeepZ: For DeepZ, computing each l i and u i includes backsubstitution, where to obtain the final expressions (as in Equation 3), we repeatedly substitute in lower/upper bound expressions, based on the values of θ and l i and u i from all previous layers. As before, it suffices to show that for some i, l i and u i are continuous w.r.t. θ and all previous l i and u i .
First, recall that during each step of backsubstitution we encounter terms of the form α · x i ,j , and based on the sign of α substitute the lower or the upper bound expression for x i ,j . When one such α = 0, α · x i ,j is continuous (both the left and the right limit equal the function value at that point, 0). Thus, we can reduce these cases to cases where no α values encountered are zero, i.e., all choices for the upper/lower bound to be substituted during backsubstitution are fixed.
Next, recall that even if we fix this choice, the actual expression we substitute in for the upper or lower bound may depend on the ReLU stability case. If all l i and u i are nonzero the stability is fixed, and by substituting affine and ReLU relaxation bounds during backsubstitution we can arrive at a closed form expression w.r.t. θ that uses only elementary operations, which is continuous.
It is left to discuss the behavior in points where some elements of some l i or u i are zero. In this case the ReLU is still stable, but switches to being unstable on one side of zero. If l i ,j = 0 both upper and lower bound expressions for x i +1,j are x i ,j , which is also the right limit. In the left limit, we use the unstable ReLU bounds λx i ,j and λx i ,j − λl i ,j and see that for l i ,j → 0, λ → 1, and thus these bounds approach x i ,j as well, so there is no discontinuity. Similarly, for u i ,j = 0 (the other stable case) the bounds (as well as the left limit) are 0. For the right limit, we again have the unstable case, but now λ → 0, so both bounds approach 0, implying that this is also not a discontinuity. To conclude, we showed that l i and u i are continuous w.r.t. θ and all previous l i and u i . This can be composed to conclude that l L,j is continuous w.r.t. θ, using the same argument we used for IBP.
D Details of Gradient Descent Experiments
In this section we provide details of the gradient descent experiments used to generate Figure 4, discussed in Section 4.4.
For both examples we consider an architecture consisting of 2 hidden layers with 10 neurons each, and 2 output neurons. The network receives a 1-dimensional input. We randomly sample the input, weights, and biases as integers in [−4, 4], and sample the perturbation radius between 0.1 and 4.1. For the experiment with the discontinuous CROWN relaxation, we set the initial learning rate to 0.02, learning rate decay to 0.99, and ran for 20 epochs. For the experiment with the sensitive DeepZ relaxation, we set the initial learning rate to 0.005, learning rate decay to 0.99, and ran for 100 epochs. We sampled a number of different networks for both scenarios, and chose the one which best illustrates the behavior of gradient descent.
E Continuity and Sensitivity in Practice
We present more results and discuss the details of experiments presented in Figure 5 and Figure 6.
E.1 Additional Figures
Here we show additional figures for continuity and sensitivity with the same setup as for Figure 5. In Figure 9 we show the bounds for each of the MNIST FC networks from Table 2, using ε = ε_test = 0.3, and a network trained with CROWN-IBP in the same setup. All plots are generated on the same example. We can highlight some differences compared to the naturally trained network. First, as explained earlier, the relaxation used for training typically obtains the tightest bounds. Next, we can see that if the network was trained with CROWN or hBox, there is a significantly smaller number of discontinuities than in the cases when the network was trained naturally or using some other relaxation. Even though there is a lack of discontinuities in these cases, these networks do not perform well (see Table 2), which suggests that, while the network learned to eliminate the discontinuities, the performance was still hurt by them earlier in the training. Finally, we see that evaluating IBP and hBox trained models using the DeepZ relaxation shows its increased sensitivity.
E.2 Measuring Continuity and Sensitivity in Training
We provide details on the experiment discussed in Section 5.1 where we measure continuity and sensitivity of relaxations during certified training. For this experiment, we use the FC network and train on MNIST, using the same hyperparameters used to produce our main results. We measure sensitivity at every training step for the first 4 epochs, and once per epoch afterwards. Continuity is measured every 50 steps. Next, we describe the exact methods used to compute the two quantities shown in Figure 6.
To measure continuity, we compute the gradient ∇ θ L w.r.t. current parameters θ, where L is the loss that each relaxation attempts to minimize at the current epoch for a fixed input sample. Then, we consider the line segment between θ − α∇ θ L and θ + α∇ θ L for α = 0.03. We discretize this line segment into n = 500 parameter points θ 1 , θ 2 , ..., θ n . For each parameter value, we compute the loss values l 1 , l 2 , ..., l n , and define the differences ∆ i = |l i+1 − l i |. Intuitively, there is a discontinuity between θ i and θ i+1 if ∆ i is significantly bigger than its neighbors ∆ i−1 and ∆ i+1 . Formally, we say there is a discontinuity if there exists i ∈ {1, . . . , n − 2} for which we have that ∆ i > C · ∆ i−1 + D and ∆ i > C · ∆ i+1 + D, where we set C = 10 and D = 10 −5 . We measure discontinuity using this approach for a batch of 100 input samples and in Figure 6 report the proportion of samples inside the batch for which we have found a discontinuity. For IBP and DeepZ, we could also use our theoretical results from Section 4.2 proving that they are always continuous.
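For reference, the detection heuristic just described amounts to a few lines of code (a direct transcription of the procedure above, with the same constants C and D; the function name is ours):

```python
import numpy as np

def has_discontinuity(losses, C=10.0, D=1e-5):
    """Flag a jump along a discretized parameter segment: a difference Delta_i that
    dominates both of its neighbours by a factor C (plus slack D)."""
    d = np.abs(np.diff(np.asarray(losses)))     # Delta_i = |l_{i+1} - l_i|
    for i in range(1, len(d) - 1):
        if d[i] > C * d[i - 1] + D and d[i] > C * d[i + 1] + D:
            return True
    return False

print(has_discontinuity([0.0, 0.01, 0.02, 1.5, 1.51, 1.52]))  # True: jump of ~1.5
print(has_discontinuity(np.linspace(0.0, 1.0, 6)))            # False: smooth ramp
```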
When computing sensitivity, recall from Section 4.3 that IBP and hBox always have the trivial sensitivity of 1, which corresponds to a log sensitivity of 0. For CROWN-IBP (R), we previously derived a worst case sensitivity of O(M^{B+1}), where B is the number of ReLU-affine blocks in the network. This bound assumes that all ReLU layers are unstable (meaning they contain at least one unstable ReLU neuron), which usually holds for trained networks and practical values of ε_test. As this is not always the case when observing the whole training procedure, we extend the previous analysis with the observation that a sequence of consecutive affine and stable ReLU layers can be treated as a single affine layer, and obtain a tighter sensitivity upper bound of O(M^{B′+1}), where B′ denotes the number of ReLU-affine blocks where the ReLU layer is unstable. In Figure 6 we report the log sensitivity B′ + 1, averaged across all samples in a single batch. Similarly, for DeepZ and CROWN, their sensitivity is O(3^{B′} M^{B′+1}); taking the logarithm and factoring out log M , we obtain that their log sensitivity is (B′ + 1) · (1 + log 3 / log M ). We set M = 400 as this is the biggest number of neurons in a layer for the network in this experiment.
F Details and Additional Results of Main Experiments
We provide all omitted details of our main experiments given in Section 5.2, including details of networks and training parameters (Appendix F.1), additional investigations into the effect of the seed (Appendix F.2), and certifying with different relaxations to those used in training (Appendix F.3), as well as complete results omitted from the main text (Appendix F.4), including the hybrid CROWN-IBP defense. Table 7: The architectures used in our main experiments. "FC n" denotes a dense (fully-connected) layer with n neurons. "CONV k w × h + s" denotes a convolutional layer with k kernels of size w × h and stride s. All activations are ReLU. CONV+ is equivalent to "small" in Gowal et al. (2018
F.1 Networks and Hyperparameters
Here we detail the setup of the main experiments shown in Table 2. All runs use a single GeForce RTX 2080 Ti GPU. The details of networks FC, CONV, and CONV+ are shown in Table 7. The hyperparameters vary by dataset.
For MNIST, we tune all hyperparameters thoroughly. We train all models for 200 epochs, starting with a warm-up (N w epochs) followed by a ramp-up period (N r epochs) to stabilize the training procedure (Gowal et al., 2018). During the warm-up we train the network naturally. During the ramp-up we gradually increase the perturbation radius from 0 to ε_train, decrease κ from κ start = 1 to κ end (shifting from natural to certified training), and for CROWN-IBP gradually shift from the CROWN-IBP (R) to the IBP loss. We use a batch size of 100 (50 for memory intensive models) and train using the Adam optimizer with the initial learning rate α. Finally, we use L 1 regularization with the strength hyperparameter λ. We tune (N w , N r , κ end , λ, α), as well as the learning rate schedule (milestones, where we reduce the learning rate 10× at epochs 130 and 190, or steps, where we halve it every 20 epochs), and the choice of last layer elision (where we elide the final layer h L of the network with the specifications c y′ , as in Gowal et al. (2018)). For FashionMNIST, we reuse the best hyperparameter choice of the corresponding MNIST model.
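For illustration, the warm-up/ramp-up schedule described above can be sketched as follows, assuming a linear ramp (the text only says the quantities are changed gradually); the function name and the concrete numbers in the usage example are illustrative rather than the tuned values.

```python
def ramp_schedule(epoch, n_warmup, n_rampup, eps_train, kappa_end, kappa_start=1.0):
    """Return (eps, kappa) for the given epoch: natural training during warm-up,
    then a linear ramp of eps from 0 to eps_train and kappa from kappa_start to kappa_end."""
    if epoch < n_warmup:
        return 0.0, kappa_start
    t = min(1.0, (epoch - n_warmup) / float(n_rampup))
    return t * eps_train, kappa_start + t * (kappa_end - kappa_start)

# Illustrative values: 10 warm-up and 50 ramp-up epochs, eps_train = 0.3, kappa_end = 0.5.
for e in (0, 10, 35, 60, 199):
    print(e, ramp_schedule(e, 10, 50, 0.3, kappa_end=0.5))
```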
For SVHN, we use the parameters given in prior work as a starting point, and introduce minimal changes. For IBP, CROWN-IBP (R), and CROWN-IBP, we start from the parameters given in Gowal et al. (2018): we train for 2200 epochs with batch size 50, warm-up for 10 epochs, ramp-up for 1100 epochs, use Adam with an initial learning rate of α = 1e-3 (reduced 10× at 60% and 90% of the training steps), and use ε_train = 1.1·ε_test. We do not use random translations (as we notice these harm the results for large ε_test), and we tune κ end (trying 0 and 0.5 for each method; 0 performs better for all methods except CROWN-IBP (R)), introduce L1 regularization (it improves the results only for IBP, with λ = 5e-5), and tune the initial learning rate (we end up using α = 5e-4 for IBP and CROWN-IBP). For hBox, DeepZ and CROWN, we use the parameters from prior work: batch size of 20, training for 100 epochs (training longer does not improve the results), using Adam with initial learning rate α = 1e-3 halved every 10 epochs, and a ramp-up w.r.t. ε of 50 epochs where we start from ε = 0.001. We introduce a ramp-up w.r.t. κ with κ end = 0. As before, we exclude the data transformations. For all three methods we use L1 regularization with λ = 5e-6.
For CIFAR-10, we similarly use the parameters from prior work. For IBP, CROWN-IBP (R), and CROWN-IBP, we use 3200 epochs, 320 of warm-up and 1600 of ramp-up (using κ end = 0 and ε_train = 1.1·ε_test for all methods, κ start = 1 for IBP and κ start = 0 for other methods), Adam with α = 5e-4 reduced 10× at epochs 2600 and 3040, and random horizontal flips and crops as augmentation.
We halve the batch size to 512 for all three methods. For DeepZ and hBox we use 50 random Cauchy projections and parameters based on prior work, but with an extended training length and an added ramp-up w.r.t. κ: we train with batch size 50 for 240 epochs, 80 of which are ramp-up, using the Adam optimizer with α = 5e-4, halved every 10 epochs. During ramp-up we start from ε = 0.001, and use κ start = 1 and κ end = 0.
F.2 Estimating the Effect of the Seed
To estimate variability and demonstrate that it does not significantly impact our main conclusions, we use one efficient method (IBP) and perform the same run with the best parameters from Appendix F.1 with 10 seed values, across two datasets (MNIST and FashionMNIST) and both networks (FC and CONV). In all experiments we use ε_test = 0.3. The results, with the mean and the standard deviation of the obtained results, are given in Table 9. Note that, for both MNIST networks, the results we report in Table 2 are the best out of all 10 seeds (74% and 86.8%, respectively), as expected given that the hyperparameters were tuned on this seed. As it is too expensive to run repetitions for relaxations other than IBP, we cannot estimate the confidence intervals of their results. Nonetheless, if we took a confidence interval of the size of two standard deviations for our IBP results, it would not change the conclusions we made based on the single experiment runs reported in Table 2. Namely, continuity and sensitivity, alongside tightness, can explain the results in Table 2.
F.3 Training and Certifying with Different Relaxations
In this section, we investigate the effect of varying the convex relaxation used to certify a network trained using some method M, to justify our choice of using the same relaxation for training and certification in our main experiments.
We use the MNIST models with ε_test = 0.3 from Table 2 and the corresponding models for ε_test = 0.1 from the full results given in Appendix F.4, on both FC and CONV architectures. We evaluate their certified robustness using all five introduced methods. The results are given in Table 10. Observe that almost all IBP trained models have extremely low certified robustness when certified with DeepZ, even though it is tighter, and vice versa. This confirms our previous statement that training with M produces a network particularly suitable to certification with M, and justifies our decision to focus on this case. The few exceptions, i.e., the instances where a method different than M achieved a better result (by more than the minimal 0.1% after rounding), are marked in bold. We can see that certification with the tighter CROWN often slightly improves DeepZ-trained networks. However, this improvement mostly leaves the relative order of methods unchanged and does not affect our conclusions. Note that if we are interested in the highest certified robustness of an already trained model, the best approach is to always use more expensive certifiers (Tjeng et al., 2019;Singh et al., 2019a;Tjandraatmadja et al., 2020) which are not fast enough to be used in training. However, here we focus on analyzing training properties of a single relaxation, and not on maximizing certified robustness.
F.4 Complete Results of Main Experiments
In Tables 11 to 13 we present complete certified training evaluation results, expanding the ones given in Section 5.2. In all tables, Acc denotes accuracy, PGD denotes empirical robustness against PGD attacks (we use 100 steps with step size 0.01), and CR denotes certified robustness. For the MNIST/FashionMNIST datasets we include two smaller perturbation radii, ε_test = 0.1 and ε_test = 0.2. Note that the paradox of certified training can rarely be observed for such small radii, and we thus in the main paper focus on the challenging case of strong adversaries, i.e., ε_test = 0.3 for MNIST/FashionMNIST and ε_test = 8/255 for SVHN/CIFAR-10, as it most clearly illustrates the differences between relaxations. Further, this case is of greater interest as it is a well-established benchmark for robustness even outside of the area of certified robustness (Madry et al., 2018;Croce et al., 2020). To explain the unusually high standard accuracy of CROWN-IBP (R) in our CIFAR-10 experiments, note that it is the only method that performs better with κ end = 0.5 (as opposed to κ end = 0). All other methods could reach similar standard accuracy with κ end = 0.5, but their certified robustness would drop.
G Excluding Alternative Explanations
Here we extend a subset of our evaluation given in Section 5.2 to a wider range of settings, aiming to show that our claims about continuity and sensitivity of existing relaxations are not strictly dependent on one of the concrete settings used in our main experiments. We use MNIST and both FC and CONV networks.
The results are presented in Table 14. Each column represents evaluation with one modified setting compared to the base run from Section 5.2, repeated in the first column. The next four columns correspond to different weight initializations: Kaiming Normal (He et al.), Orthogonal (Saxe et al., 2014), Xavier Normal (Glorot & Bengio, 2010), and IBPInit (Shi et al., 2021). The column marked L2 indicates the use of L2 regularization instead of L1. For the following two columns, BigLR and TinyLR, we increase (respectively, decrease) the learning rate two times. We further experiment with using the SGD optimizer instead of Adam, and training on stratified subsets of the training set with 10, 30, and 50 percent of data, respectively. We can see that the relative order of relaxations is fairly consistent across all settings, implying that our conclusions from Section 5.2 hold and are not tied to a specific choice of a setting.

H Descriptions of Relaxation Modifications

relaxations, as can be seen in Table 15. These two relaxations can be further made continuous: using x ≤ x − l for the negative and l ≤ x for the positive case in CROWN-0-Tria leads to CROWN-0-Tria-C, and using x ≤ x for the negative and x ≤ u for the positive case in CROWN-1-Tria leads to CROWN-1-Tria-C, both sacrificing most of their tightness to obtain two other favorable properties. Further, we investigate the continuous variant of CROWN, CROWN-Soft, obtained analogously to DeepZ-Soft, and CROWN-Soft-IBP, obtained by combining both the ideas of soft switching from CROWN-Soft and computing intermediate bounds with IBP in CROWN-IBP (R).
I Generalizing Continuity and Sensitivity
We now discuss how continuity and sensitivity could be generalized from binary to numerical metrics to compare any pair of relaxations. For continuity, this can be done by empirically measuring the number of discontinuities along one direction in the loss landscape as we have done in Figure 5 and Figure 6 (left). These experiments show that CROWN indeed has more discontinuities than CROWN-IBP (R). Similarly, sensitivity can be generalized by considering the exact upper bound degree (instead of only whether it is greater than 1), as derived in Section 4.3. This would explain that DeepZ, which has the sensitivity of O(3^B M^{B+1}), performs worse than CROWN-IBP (R) which has the sensitivity of O(M^{B+1}). Overall, while our properties were designed to compare IBP with other relaxations, they could be extended to compare any pair of relaxations.
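As an illustration of the numerical continuity metric discussed above, the sketch below counts large jumps of a certified loss along a single direction in parameter space. The helper `certified_loss`, the step grid, and the jump threshold are assumptions made for illustration; they are not the exact procedure used for Figures 5 and 6.

```python
import torch

def count_jumps(model, certified_loss, batch, direction, t_max=0.1, n_points=1000, jump_tol=10.0):
    """Walk theta + t * direction and count steps whose loss change is much
    larger than the typical local change (treated as discontinuities)."""
    flat = torch.nn.utils.parameters_to_vector(model.parameters()).detach().clone()
    losses = []
    for t in torch.linspace(0.0, t_max, n_points):
        torch.nn.utils.vector_to_parameters(flat + t * direction, model.parameters())
        with torch.no_grad():
            losses.append(float(certified_loss(model, batch)))
    torch.nn.utils.vector_to_parameters(flat, model.parameters())  # restore original weights
    losses = torch.tensor(losses)
    diffs = (losses[1:] - losses[:-1]).abs()
    return int((diffs > jump_tol * (diffs.median() + 1e-12)).sum())
```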
J Detailed Derivations of Sensitivity
In this section we provide more detailed derivations of sensitivity, expanding on those presented in Section 4.3.
As the first layer is always affine, we have x_{1,j} ∈ P_1(δ) for all relaxations. To compute the sensitivity of each relaxation, we will sequentially analyze the effect of each network layer.
IBP/hBox. For IBP, assume that at layer i, all x_{i−1,j} ∈ P_N(δ). For an affine layer, as l_{i,j} and u_{i,j} are linear combinations of elements of u_{i−1} and l_{i−1}, the degree stays the same and x_{i,j} ∈ P_N(δ). For a ReLU layer, as u_{i,j} = ReLU(u_{i−1,j}) (same for l_{i,j}), we again have x_{i,j} ∈ P_N(δ) (we consider two cases: u_{i,j} = 0 and u_{i,j} = u_{i−1,j}, both of which are in P_N(δ)). Thus, since the degree changes neither in affine nor ReLU operations, and in the first layer the degree is 1, all neurons are in P_1(δ) ⊆ R_1(δ), i.e., the sensitivity of IBP is 1. For hBox, the only difference is in the affine layers, where now x_{i,j} = (W_i x_{i−1})_j + b_{i,j}. As linear combinations of elements of P_N(δ) are in P_N(δ), all neurons stay in P_1(δ) and the sensitivity of hBox is also 1.
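For concreteness, a rough sketch of an hBox-style symbolic-interval propagation through Linear and ReLU layers is given below: each neuron keeps one affine expression of the input plus an accumulated error interval and falls back to concrete intervals at unstable ReLUs, so (like IBP) its bounds stay degree-1 in a weight perturbation. The structure and names are our simplification for illustration, not the exact hBox domain or the code used in the experiments.

```python
import torch
import torch.nn as nn

def hbox_bounds(layers, x, eps):
    """x: flattened 1-D input; returns concrete (lb, ub) for the last layer."""
    n = x.numel()
    A = torch.eye(n)              # symbolic part: one affine row per current neuron
    c = torch.zeros(n)
    e_lo = torch.zeros(n)         # accumulated interval error per neuron
    e_hi = torch.zeros(n)

    def concretize(A, c, e_lo, e_hi):
        slack = eps * A.abs().sum(dim=1)
        mid = A @ x + c
        return mid - slack + e_lo, mid + slack + e_hi

    for layer in layers:
        if isinstance(layer, nn.Linear):
            W, b = layer.weight, layer.bias
            A, c = W @ A, W @ c + b
            e_lo, e_hi = (W.clamp(min=0) @ e_lo + W.clamp(max=0) @ e_hi,
                          W.clamp(min=0) @ e_hi + W.clamp(max=0) @ e_lo)
        elif isinstance(layer, nn.ReLU):
            lb, ub = concretize(A, c, e_lo, e_hi)
            dead = ub <= 0
            unstable = (lb < 0) & (ub > 0)
            A, c, e_lo, e_hi = A.clone(), c.clone(), e_lo.clone(), e_hi.clone()
            A[dead] = 0.0; c[dead] = 0.0; e_lo[dead] = 0.0; e_hi[dead] = 0.0
            # unstable neurons lose their symbolic form; output lies in [0, ub]
            A[unstable] = 0.0; c[unstable] = 0.0
            e_lo[unstable] = 0.0; e_hi[unstable] = ub[unstable]
    return concretize(A, c, e_lo, e_hi)
```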
DeepZ/CROWN. The ReLU bounds of DeepZ, λx_{i−1,j} and λx_{i−1,j} − λl_{i−1,j}, significantly increase the sensitivity. Recall that here λ := u_{i−1,j}/(u_{i−1,j} − l_{i−1,j}). After the first ReLU layer, we have that x_{2,j} ∈ R_2(δ) because λ ∈ R_1(δ) and x_{1,j} ∈ P_1(δ). This changes the behavior of all following affine layers, as a linear combination of M elements of R_N(δ) is in R_{MN}(δ). Thus, x_{3,j} ∈ R_{2M}(δ). For the following ReLU layers, if we assume the inputs x_{i−1,j} are in R_N(δ), we have that λ ∈ R_{2N}(δ), and thus the outputs x_{i,j} are in R_{3N}(δ). This is because we are multiplying elements of R_N(δ) and R_{2N}(δ), and we get an expression in R_{3N}(δ). Putting this together, each ReLU-affine block from layer 4 onwards multiplies the sensitivity by 3M.
As there are B ≡ L/2 − 1 such blocks, we obtain 2 · 3^B M^{B+1} for the final sensitivity. Recall that CROWN has the same upper bound as DeepZ for unstable ReLUs, but chooses the lower bound adaptively: 0 ≤ x_{i,j} if −l_{i−1,j} ≥ u_{i−1,j}, or x_{i−1,j} ≤ x_{i,j} otherwise. Neither of these options changes the degree: if x_{i−1,j} is in R_N(δ), then also x_{i,j} is in R_N(δ). However, CROWN uses the same upper ReLU bounds as DeepZ with the slope λ := u_{i−1,j}/(u_{i−1,j} − l_{i−1,j}), so we can apply the same analysis as before for the upper bound, and show that the sensitivity of CROWN is 2 · 3^B M^{B+1} as well, despite having a simpler expression for the lower bound. Thus, both DeepZ and CROWN are highly sensitive.
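The unstable-neuron bounds discussed above can be written down directly; this tiny sketch shows the DeepZ and CROWN choices for a single neuron (our own wrapper, for illustration only) and makes the rational slope λ — the source of the high sensitivity — explicit.

```python
def unstable_relu_bounds(l, u, mode="deepz"):
    """Linear bounds for y = ReLU(x) with pre-activation bounds l < 0 < u."""
    lam = u / (u - l)                       # rational in (l, u): source of high sensitivity
    upper = lambda x: lam * x - lam * l     # upper bound shared by DeepZ and CROWN
    if mode == "deepz":
        lower = lambda x: lam * x           # DeepZ: parallel lower bound
    else:                                   # CROWN: adaptive 0 or identity lower bound
        lower = (lambda x: 0.0) if -l >= u else (lambda x: x)
    return lower, upper
```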
CROWN-IBP (R). Here, at ReLU layer i during the (only) back-substitution, we have to consider l_{i−1,j} and u_{i−1,j} separately from x_{i−1,j}. Recall that the former were precomputed with IBP, and are thus in P_1(δ) (using the same analysis explained earlier for IBP). Then, x_{i−1,j} get substituted as usual for CROWN and can carry larger sensitivity. The main difference here is that λ := u_{i−1,j}/(u_{i−1,j} − l_{i−1,j}) is in R_1(δ) because l_{i−1,j} and u_{i−1,j} were computed using IBP, meaning they are in P_1(δ), and this implies that λ ∈ R_1(δ). Assuming x_{i−1,j} ∈ R_N(δ) (N ≥ 1) and using the previous observation that we always have λ ∈ R_1(δ), it follows that x_{i,j} ∈ R_{N+1}(δ). As the affine layers have the same effect as in the case of CROWN, each of the B following ReLU-affine blocks now increases sensitivity from N to g(N) = (N + 1)M, and we have, as before, x_{3,j} ∈ R_{2M}(δ). Thus, the final sensitivity is g^B(2M) = 2M^{B+1} + M^B + ⋯ + M, which can be proven by induction. We have g^1(2M) = 2M^2 + M and assume the identity holds for g^{B−1}(2M). Now:

g^B(2M) = (g^{B−1}(2M) + 1)M = (2M^B + M^{B−1} + ⋯ + M + 1)M = 2M^{B+1} + M^B + ⋯ + M,

completing the proof. We further write g^B(2M) = M^{B+1} − 1 + (M^{B+1} + M^B + ⋯ + M + 1) and, using the closed-form formula for the sum of the first B + 2 terms of a geometric series (assuming M ≠ 1), simplify this to:
g^B(2M) = M^{B+1} − 1 + (M^{B+2} − 1)/(M − 1) = (2M^{B+2} − M^{B+1} − M)/(M − 1),    (6)
which is in O(M^{B+1}). Clearly, CROWN-IBP (R) is also significantly more sensitive than IBP and hBox.
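As a quick numeric sanity check of the closed form in Eq. (6), the few lines below verify that iterating g(N) = (N + 1)M, starting from 2M, agrees with (2M^{B+2} − M^{B+1} − M)/(M − 1) for several small values of M and B.

```python
def g_iter(M, B):
    """Apply g(N) = (N + 1) * M a total of B times, starting from 2 * M."""
    n = 2 * M
    for _ in range(B):
        n = (n + 1) * M
    return n

def g_closed(M, B):
    """Closed form derived in Eq. (6), valid for M != 1 (exact integer here)."""
    return (2 * M ** (B + 2) - M ** (B + 1) - M) // (M - 1)

for M in (2, 3, 5):
    for B in (1, 2, 3, 4):
        assert g_iter(M, B) == g_closed(M, B)
print("closed form matches the recurrence")
```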
Figure 2: CR (certified robustness) curves of relaxations on a convolutional network trained on MNIST.
Figure 3: Discontinuity of CROWN and hBox.
Figure 4: Convex relaxations with discontinuous (left) and highly sensitive (right) losses.
Figure 6: Discontinuity (left) and sensitivity (right) during certified training with different relaxations.
Figure 5: Continuity and sensitivity on a naturally trained MNIST network.
adversaries (Madry et al., 2018; Croce et al., 2020), i.e., ε_test = 0.3 for MNIST/FashionMNIST and ε_test = 8/255 for SVHN/CIFAR-10. We use the same relaxation for training and certification, as this is usually optimal (see Appendix F.3). Further experimental details are provided in Appendix F.1.
Figure 7: Toy network (from Singh et al. (2019b)).
Figure 8: The network used in Section 4.2 to show that the discontinuity of CROWN and hBox manifests already for small networks.
Figure 9: Lower bounds of convex relaxations for 5 FC networks from Table 2 with ε = ε_test = 0.3, and a network trained with CROWN-IBP, showing continuity and sensitivity of each relaxation. Each subfigure shows the name of the relaxation used to train the corresponding network.
Table 15: Tightness of relaxation modifications quantified as CR-AUC, calculated in the setting of Figure 2.
Figure 10: A display of explored relaxation modifications indicating the origin of each proposed modification.
Table 1: The Paradox of Certified Training: training with tighter relaxations leads to worse certified robustness, failing to outperform the loose IBP relaxation. Tightness formalization and further details given in Section 3.

Relaxation                       Tightness  Certified (%)
IBP / Box                        0.73       86.8
hBox / Symbolic Intervals        1.76       83.7
CROWN / DeepPoly                 3.36       70.2
DeepZ / CAP / FastLin / Neurify  3.00       69.8
CROWN-IBP (R)                    2.15       75.4
usually remains low for all relaxations, ...

[Figure 2 plot: certified robustness (20%–100%) versus perturbation radius (0.00–0.07) for IBP, hBox, CROWN, DeepZ, and CROWN-IBP (R).]
Relaxations. The remaining two relaxations, IBP and DeepZ, are always continuous, as formalized in the following theorem (full proof in Appendix C.2):

Theorem 2. The output bounds l_{L,j} of IBP and DeepZ are continuous w.r.t. network parameters θ.

Proof Sketch. For IBP, l_i and u_i depend only on the previous layer either linearly or via ReLU, both being continuous. For DeepZ, the key step is proving that the ReLU relaxation bounds are continuous in points where the ReLU changes stability. Recall that the unstable case bounds are λx_{i−1,j} and λx_{i−1,j} − λl_{i−1,j} ...
Table 2: Certified robustness (in %) after certified training with convex relaxations. Symbol × indicates that a property is unfavorable, i.e., the relaxation has a discontinuous loss landscape or is highly sensitive. We use ε_test = 0.3 for MNIST and FashionMNIST, and ε_test = 8/255 for SVHN and CIFAR-10 datasets.
Columns: MNIST | FashionMNIST | SVHN | CIFAR-10
Table 4: Mean and standard deviation (4 random seeds) of certified robustness after certified training of the CONV network on the MNIST dataset (as in Table 2) using loose parametrized relaxations described in Section 6, illustrating that the tradeoff between properties, namely continuity and tightness, can change across settings.

Looseness Parameter ω   0.01        0.5         1           1.5         2           3            5
LooseIBP-C              85.4 ± 0.5  82.0 ± 0.6  80.0 ± 0.6  77.3 ± 0.6  73.4 ± 2.0  45.4 ± 12.2  13.4 ± 0.9
LooseIBP-DC 1           85.8 ± 0.6  83.1 ± 0.4  81.8 ± 0.4  79.6 ± 0.4  78.3 ± 0.8  73.8 ± 0.6   21.1 ± 19.1
LooseIBP-DC 10          85.8 ± 0.3  82.5 ± 0.7  80.8 ± 0.5  77.6 ± 1.3  75.6 ± 1.3  32.7 ± 22.1  11.3 ± 0.0
LooseIBP-DC 100
Zhaoyang Lyu, Minghao Guo, Tong Wu, Guodong Xu, Kehuan Zhang, and Dahua Lin. Towards evaluating and training verifiably robust neural networks. In CVPR, 2021.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.
Table 5: The summary of our tightness experiments. Acc denotes standard accuracy (in %). ER denotes empirical robustness (in %), evaluated using PGD with the same ε used in training.

ID  Dataset       Network  Training      Acc   ER
A   MNIST         CONV     Natural       98.7  /
B   MNIST         CONV     PGD ε = 0.1   99.0  94.4
C   MNIST         CONV     PGD ε = 0.3   98.2  91.0
D   FashionMNIST  CONV     Natural       91.4  /
E   CIFAR-10      CONV+    Natural       70.0  /
F   MNIST         FC-S     Natural       98.2  /
G   MNIST         FC-S     PGD ε = 0.1   99.0  91.8
H   MNIST         FC-S     PGD ε = 0.3   92.3  78.3

Table 6: The results of tightness experiments, showing CR-AUC of each method.

Method          A     B     C      D     E     F     G     H
IBP             0.73  0.91  1.94   0.20  0.02  0.47  0.63  2.22
hBox            1.76  4.76  10.45  0.53  0.09  1.42  2.70  11.44
CROWN           3.36  9.77  22.75  1.19  0.28  2.86  6.74  22.63
DeepZ           3.00  9.21  20.97  1.11  0.25  2.63  6.21  21.57
CROWN-IBP (R)   2.15  4.20  8.85   0.70  0.04  1.50  2.73  13.55
relaxation in certification of naturally trained networks, which may seem contradictory. However, this holds in a setup in which the former optimizes the slopes for all bounds including the intermediate layer ones, while the latter uses fixed intermediate layer bounds. For the same set of fixed intermediate bounds, as we would expect, β-CROWN cannot produce better final bounds than Triangle (as stated in Corollary 3.2.1).

Network architectures:

FC       CONV            CONV+
FC 400   CONV 16 4x4+2   CONV 16 4x4+2
FC 200   FC 100          CONV 32 4x4+1
FC 100   FC 10           FC 100
FC 100                   FC 100
FC 10                    FC 10
Table 8: The best choice of hyperparameters for each MNIST model in our evaluation.

Net   ε_test  Method          ε_train  Nw  Nr   κ_end  λ     α     LR schedule  Elision
FC    0.1     IBP             0.2      10  100  0.5    5e-6  5e-4  milestones   yes
FC    0.2     IBP             0.2      10  100  0      5e-6  5e-4  milestones   yes
FC    0.3     IBP             0.3      10  100  0      5e-6  5e-4  milestones   yes
FC    0.1     hBox            0.1      0   50   0      5e-5  5e-4  milestones   yes
FC    0.2     hBox            0.2      10  50   0.5    5e-5  5e-4  milestones   yes
FC    0.3     hBox            0.3      0   50   0.5    5e-5  5e-4  milestones   yes
FC    0.1     CROWN           0.1      0   100  0      0     5e-4  milestones   yes
FC    0.2     CROWN           0.2      10  50   0      0     5e-4  milestones   yes
FC    0.3     CROWN           0.3      10  100  0      0     5e-4  milestones   yes
FC    0.1     DeepZ           0.1      10  50   0      5e-6  5e-4  milestones   yes
FC    0.2     DeepZ           0.2      0   50   0      5e-6  1e-3  steps        no
FC    0.3     DeepZ           0.3      0   50   0      5e-6  1e-3  steps        no
FC    0.1     CROWN-IBP (R)   0.2      0   50   0.5    5e-6  5e-4  milestones   yes
FC    0.2     CROWN-IBP (R)   0.3      0   50   0.5    5e-6  5e-4  milestones   yes
FC    0.3     CROWN-IBP (R)   0.3      0   50   0.5    5e-6  5e-4  milestones   yes
FC    0.1     CROWN-IBP       0.2      10  50   0.5    0     5e-4  milestones   yes
FC    0.2     CROWN-IBP       0.2      10  50   0      0     5e-4  milestones   yes
FC    0.3     CROWN-IBP       0.3      10  100  0      0     5e-4  milestones   yes
CONV  0.1     IBP             0.2      0   50   0.5    5e-6  5e-4  milestones   yes
CONV  0.2     IBP             0.3      0   50   0.5    5e-6  5e-4  milestones   yes
CONV  0.3     IBP             0.3      0   50   0.5    5e-6  5e-4  milestones   yes
CONV  0.1     hBox            0.3      0   50   0.5    5e-5  5e-4  milestones   yes
CONV  0.2     hBox            0.3      0   50   0.5    5e-5  5e-4  milestones   yes
CONV  0.3     hBox            0.3      0   50   0.5    5e-5  5e-4  milestones   yes
CONV  0.1     CROWN           0.2      10  100  0      0     5e-4  milestones   yes
CONV  0.2     CROWN           0.2      10  100  0      0     5e-4  milestones   yes
CONV  0.3     CROWN           0.3      10  100  0      0     5e-4  milestones   yes
CONV  0.1     DeepZ           0.2      0   50   0      5e-6  5e-4  milestones   yes
CONV  0.2     DeepZ           0.2      0   50   0      5e-6  5e-4  milestones   yes
CONV  0.3     DeepZ           0.3      0   50   0      5e-5  5e-4  milestones   yes
CONV  0.1     CROWN-IBP (R)   0.1      10  50   0      0     5e-4  milestones   yes
CONV  0.2     CROWN-IBP (R)   0.2      10  50   0      0     5e-4  milestones   yes
CONV  0.3     CROWN-IBP (R)   0.3      10  50   0.5    5e-6  5e-4  milestones   yes
CONV  0.1     CROWN-IBP       0.2      10  100  0.5    5e-6  5e-4  milestones   yes
CONV  0.2     CROWN-IBP       0.3      10  50   0.5    5e-6  5e-4  milestones   yes
CONV  0.3     CROWN-IBP       0.3      10  50   0      5e-6  5e-4  milestones   yes
Table 9: The variability of the results when changing the seed.
Seed
Table 10: The evaluation of models trained with certified training using different convex relaxations.
Method (certification)
Table 11: Complete evaluation results on the MNIST dataset.

                 ε_test = 0.1              ε_test = 0.2              ε_test = 0.3
Method           Acc (%)  PGD (%)  CR (%)  Acc (%)  PGD (%)  CR (%)  Acc (%)  PGD (%)  CR (%)
FC
IBP              94.8     90.7     89.5    92.6     86.6     82.4    88.7     80.8     74.0
hBox             95.6     90.6     88.4    93.2     82.1     76.6    89.2     65.7     57.0
CROWN            98.6     94.6     91.6    93.0     85.5     80.6    84.9     71.7     57.3
DeepZ            98.3     95.1     92.5    95.0     90.9     85.1    87.4     77.9     64.2
CROWN-IBP (R)    94.9     91.8     90.6    92.2     84.6     80.9    92.2     79.9     70.5
CROWN-IBP        95.5     92.0     91.2    93.3     88.6     86.0    90.8     83.6     77.9
CONV
IBP              97.2     95.0     94.6    95.9     92.3     91.3    95.9     89.7     86.8
hBox             95.1     93.0     92.7    95.1     90.9     89.5    95.1     87.9     83.7
CROWN            96.8     95.1     94.5    96.8     92.6     88.0    92.6     84.3     70.2
DeepZ            97.0     95.7     94.9    97.0     94.0     88.8    92.5     87.0     69.8
CROWN-IBP (R)    98.5     95.2     93.4    95.5     90.9     86.9    93.4     84.5     75.4
CROWN-IBP        96.9     94.8     94.4    95.6     92.0     90.9    94.8     89.1     86.6
Table 12: Complete evaluation results on the FashionMNIST dataset.

                 ε_test = 0.1              ε_test = 0.2              ε_test = 0.3
Method           Acc (%)  PGD (%)  CR (%)  Acc (%)  PGD (%)  CR (%)  Acc (%)  PGD (%)  CR (%)
FC
IBP              79.2     71.4     67.9    73.7     65.2     57.9    54.8     46.6     40.4
hBox             79.2     73.1     70.0    76.3     62.5     56.4    68.7     49.2     39.6
CROWN            80.8     70.6     67.8    70.9     53.7     49.5    52.4     35.9     30.2
DeepZ            81.2     73.7     70.2    71.2     57.6     51.6    51.0     39.2     35.0
CROWN-IBP (R)    76.6     68.1     66.5    69.6     54.7     51.3    69.6     46.7     41.1
CROWN-IBP        78.1     72.1     69.6    74.7     64.7     58.9    66.7     55.9     47.9
CONV
IBP              80.3     74.1     72.7    76.5     65.5     61.4    76.5     58.8     52.0
hBox             72.9     67.0     65.2    72.9     61.3     57.4    72.9     53.6     47.1
CROWN            71.9     64.3     62.8    71.9     55.2     49.4    56.4     39.9     31.5
DeepZ            72.3     66.6     64.8    72.3     60.1     53.4    56.3     40.7     34.0
CROWN-IBP (R)    81.0     72.5     69.9    71.8     58.2     54.5    68.9     50.2     40.0
CROWN-IBP        80.0     73.5     72.1    74.7     63.7     60.2    65.3     56.5     50.9
Table 13: Complete evaluation results on the SVHN dataset with the CONV network and ε_test = 8/255 (left), and on the CIFAR-10 dataset with the CONV+ network and ε_test = 8/255 (right).

SVHN (CONV)                                CIFAR-10 (CONV+)
Method          Acc (%)  PGD (%)  CR (%)   Method          Acc (%)  PGD (%)  CR (%)
IBP             41.6     30.9     28.9     IBP             39.8     32.4     29.0
hBox            35.0     26.9     23.6     hBox            36.2     25.9     20.0
CROWN           44.0     27.3     23.4     CROWN           OOM      OOM      OOM
DeepZ           38.7     27.7     24.5     DeepZ           36.5     28.6     22.8
CROWN-IBP (R)   65.8     37.2     27.5     CROWN-IBP (R)   37.4     28.7     24.3
CROWN-IBP       43.9     32.6     29.3     CROWN-IBP       41.1     32.8     30.3
Table 14: The extension of our main experimental results in various alternative settings. All experiments are done on the MNIST dataset.

Method          Original  Kaiming  Ortho  Xavier  IBPInit  L2    BigLR  TinyLR  SGD   10%   30%   50%
FC
IBP             74.0      61.7     71.2   67.3    67.6     71.0  61.1   74.3    23.8  49.0  64.3  69.4
hBox            57.0      37.4     48.7   41.5    11.3     47.7  44.3   52.4    10.0  17.9  35.2  45.8
CROWN           57.3      58.3     59.9   61.8    55.2     58.3  61.0   55.8    11.3  31.3  48.3  52.9
DeepZ           64.2      60.1     63.4   63.1    64.1     64.2  60.9   59.1    11.3  42.1  60.7  64.0
CROWN-IBP (R)   70.5      61.5     67.5   64.8    65.8     69.6  70.3   63.4    4.5   44.2  62.8  68.6
CROWN-IBP       77.9      76.4     76.7   77.2    77.7     77.4  76.0   77.9    19.0  67.7  73.5  75.4
CONV
IBP             86.8      86.6     85.7   86.1    85.8     85.8  85.7   86.6    83.3  80.7  84.2  85.0
hBox            83.7      82.4     69.3   69.5    83.1     85.0  69.0   85.9    80.7  81.8  83.4  85.3
CROWN           70.2      70.1     70.5   67.1    68.6     70.6  67.5   68.1    67.8  61.6  68.5  67.3
DeepZ           69.8      69.7     67.1   67.6    69.0     69.5  66.7   67.8    67.2  62.6  67.9  68.5
CROWN-IBP (R)   75.4      77.9     76.7   77.7    73.7     73.9  72.4   73.5    72.0  58.5  69.2  71.9
CROWN-IBP       86.6      87.0     86.8   86.7    86.5     86.4  86.5   86.3    82.8  78.4  84.7  85.5
H Details of Relaxation Modifications

We give more details on modifications of relaxations explored in Section 6. For each, we describe the relaxation used as a base (also shown in the diagram in Figure 10), the performed change, the intended effect on properties, and the resulting unintended effect on properties. The summary of CR-AUC scores for all modifications is given in Table 15, produced by an experiment in the same setting as Figure 2. We treat the relaxation as loose if its tightness is closer to IBP than hBox. While we choose names indicative of the origin of each modification, note that we rediscover some less prominent relaxations considered in prior work; as an example, DeepZ-Diag corresponds to zDiag from (Mirman et al., 2018). In the following, when describing linear bounds, we use x', x, l, u to refer to x_{i,j}, x_{i−1,j}, l_{i−1,j}, u_{i−1,j} for brevity. Further, we refer to the two stable ReLU cases as positive and negative.

hBox-based Modifications. Starting from hBox, and setting x ≤ x' ≤ x − l (unstable case) produces a theoretically non-comparable hBox-Diag which is empirically much less tight. Going further to remove the discontinuity by setting the same bounds for the negative case, we arrive at hBox-Diag-C, completely sacrificing tightness (0.00 AUC in Table 15). Finally, we partially remedy tightness by switching between hBox and hBox-Diag using a heuristic similar to the one in CROWN to get hBox-Switch, which as a side effect introduces a discontinuity.

DeepZ-based Modifications. Fixing 0 ≤ x' ≤ u and x ≤ x' ≤ x − l for the unstable case in DeepZ produces DeepZ-Box and DeepZ-Diag, both now not sensitive, but looser and discontinuous. We construct DeepZ-Diag-C and DeepZ-Switch analogously to hBox-Diag-C and hBox-Switch. Further, we introduce DeepZ-Soft to eliminate the discontinuity of DeepZ-Switch, by setting λ = σ(γ(l/u − u/l)) with parameter γ, where σ denotes the sigmoid function. Then, for the unstable case we set λx ≤ x' ≤ λx − λl when λ ≥ u/(u − l) and λx ≤ x' ≤ λx + u − λu otherwise. We find that γ = 1 works best. While this both resolves the discontinuity and makes DeepZ-Switch tighter, it reintroduces the problem of sensitivity, leading to properties comparable to the original DeepZ. Finally, we explore DeepZ-IBP (R), a relaxation analogous to CROWN-IBP (R) using IBP for intermediate bounds, which partially alleviates the sensitivity issue of DeepZ by slightly sacrificing tightness.

CROWN-based Modifications. We obtain CROWN-0 and CROWN-1 by fixing 0 ≤ x' (and x ≤ x', respectively) as the lower bound for the unstable case, in an attempt to remove the discontinuity caused by heuristically switching between them in CROWN. However, this introduces a new kind of discontinuity at l = 0 and u = 0 respectively and slightly harms tightness, making these relaxations still both discontinuous and highly sensitive. If we aim to obtain continuity, we can set l ≤ x' for the positive case in CROWN-0 to obtain CROWN-0-C, and analogously set x ≤ x' for the negative case in CROWN-1 to get CROWN-1-C. While the latter becomes prohibitively loose, the former retains some tightness (i.e., it is tighter than IBP). Alternatively, modifying CROWN-0 and CROWN-1 to alleviate high sensitivity instead of discontinuity by setting x' ≤ x − l for the unstable case of CROWN-0, or x' ≤ u for the unstable case of CROWN-1, we obtain CROWN-0-Tria and CROWN-1-Tria, both looser than their original relaxations, as can be seen in Table 15. These two relaxations can be further made continuous: using x' ≤ x − l for the negative and l ≤ x' for the positive case in CROWN-0-Tria leads to CROWN-0-Tria-C, and using x' ≤ x for the negative and x' ≤ u for the positive case in CROWN-1-Tria leads to CROWN-1-Tria-C, both sacrificing most of their tightness to obtain the two other favorable properties. Further, we investigate the continuous variant of CROWN, CROWN-Soft, obtained analogously to DeepZ-Soft, and CROWN-Soft-IBP, obtained by combining both the ideas of soft switching from CROWN-Soft and computing intermediate bounds with IBP in CROWN-IBP (R).
Universal approximation with certified networks. Maximilian Baader, Matthew Mirman, Martin T Vechev, International Conference on Learning Representations. Maximilian Baader, Matthew Mirman, and Martin T. Vechev. Universal approximation with certified networks. In International Conference on Learning Representations, 2020.
Adversarial training and provable defenses: Bridging the gap. Mislav Balunovic, Martin Vechev, International Conference on Learning Representations. Mislav Balunovic and Martin Vechev. Adversarial training and provable defenses: Bridging the gap. In International Conference on Learning Representations, 2020.
Certified adversarial robustness via randomized smoothing. Jeremy Cohen, Elan Rosenfeld, Zico Kolter, Proceedings of the 36th International Conference on Machine Learning. the 36th International Conference on Machine LearningJeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In Proceedings of the 36th International Conference on Machine Learning, 2019.
Discontinuous piecewise linear optimization. R Andrew, Marcel Conn, Mongeau, Mathematical programming. 803Andrew R Conn and Marcel Mongeau. Discontinuous piecewise linear optimization. Mathematical programming, 80(3):315-380, 1998.
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark. arXiv preprint arXiv:2010.09670, 2020.
Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming. Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, Jonathan Uesato, Rudy Bunel, Shreya Shankar, Jacob Steinhardt, Ian J Goodfellow, Percy Liang, Pushmeet Kohli, NeurIPS. Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, Jonathan Uesato, Rudy Bunel, Shreya Shankar, Jacob Steinhardt, Ian J. Goodfellow, Percy Liang, and Pushmeet Kohli. Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming. In NeurIPS, 2020.
Formal verification of piece-wise linear feed-forward neural networks. Rüdiger Ehlers, International Symposium on Automated Technology for Verification and Analysis. Rüdiger Ehlers. Formal verification of piece-wise linear feed-forward neural networks. In International Symposium on Automated Technology for Verification and Analysis, 2017.
Ai2: Safety and robustness certification of neural networks with abstract interpretation. Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev, 2018 IEEE Symposium on Security and Privacy (S&P). Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev. Ai2: Safety and robustness certification of neural networks with abstract interpretation. In 2018 IEEE Symposium on Security and Privacy (S&P), 2018.
Understanding the difficulty of training deep feedforward neural networks. Xavier Glorot, Yoshua Bengio, Yee Whye Teh and D. Mike TitteringtonAISTATSXavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Yee Whye Teh and D. Mike Titterington (eds.), AISTATS, 2010.
Explaining and harnessing adversarial examples. Ian Goodfellow, Jonathon Shlens, Christian Szegedy, International Conference on Learning Representations. Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.
On the effectiveness of interval bound propagation for training verifiably robust models. Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Timothy Mann, Pushmeet Kohli, arXiv:1810.12715arXiv preprintSven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Timothy Mann, and Pushmeet Kohli. On the effectiveness of interval bound propagation for training verifiably robust models. arXiv preprint arXiv:1810.12715, 2018.
A dual approach to verify and train deep networks. Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Timothy Mann, Pushmeet Kohli, Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19. the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Timothy Mann, and Pushmeet Kohli. A dual approach to verify and train deep networks. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, 2019.
Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In ICCV, 2015.
Global optimization of rational functions: a semidefinite programming approach. Dorina Jibetean, Etienne De Klerk, Mathematical Programming. 1061Dorina Jibetean and Etienne de Klerk. Global optimization of rational functions: a semidefinite programming approach. Mathematical Programming, 106(1):93-109, 2006.
Reluplex: An efficient smt solver for verifying deep neural networks. Guy Katz, Clark Barrett, L David, Kyle Dill, Julian, Kochenderfer, International Conference on Computer Aided Verification. Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. Reluplex: An efficient smt solver for verifying deep neural networks. In International Conference on Computer Aided Verification, 2017.
Towards better understanding of training certifiably robust models against adversarial examples. Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee, Advances in Neural Information Processing Systems. Sungyoon Lee, Woojin Lee, Jinseong Park, and Jaewook Lee. Towards better understanding of training certifiably robust models against adversarial examples. In Advances in Neural Information Processing Systems, 2021.
Linyi Li, Xiangyu Qi, Tao Xie, Bo Li Sok, arXiv:2009.04131Certified robustness for deep neural networks. arXiv preprintLinyi Li, Xiangyu Qi, Tao Xie, and Bo Li. Sok: Certified robustness for deep neural networks. arXiv preprint arXiv:2009.04131, 2020.
Formal security analysis of neural networks using symbolic intervals. Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana, USENIX Security Symposium. USENIX AssociationShiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Formal security analysis of neural networks using symbolic intervals. In USENIX Security Symposium, pp. 1599-1614. USENIX Association, 2018a.
Efficient formal safety analysis of neural networks. Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana, Advances in Neural Information Processing Systems 31. Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Efficient formal safety analysis of neural networks. In Advances in Neural Information Processing Systems 31. 2018b.
Beta-crown: Efficient bound propagation with per-neuron split constraints for complete and incomplete neural network verification. Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J Zico Kolter, abs/2103.06624CoRRShiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, and J. Zico Kolter. Beta-crown: Efficient bound propagation with per-neuron split constraints for complete and incomplete neural network verification. CoRR, abs/2103.06624, 2021a.
Beta-crown: Efficient bound propagation with per-neuron split constraints for complete and incomplete neural network verification. Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J Zico Kolter, Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, and J. Zico Kolter. Beta-crown: Efficient bound propagation with per-neuron split constraints for complete and incomplete neural network verification. 2021b.
Global optimization of bounded factorable functions with discontinuities. Achim Wechsung, Paul, Barton, Journal of Global Optimization. 581Achim Wechsung and Paul I Barton. Global optimization of bounded factorable functions with discontinuities. Journal of Global Optimization, 58(1):1-30, 2014.
Towards fast computation of certified robustness for ReLU networks. Lily Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Luca Daniel, Duane Boning, Inderjit Dhillon, Proceedings of the 35th International Conference on Machine Learning. the 35th International Conference on Machine LearningLily Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Luca Daniel, Duane Boning, and Inderjit Dhillon. Towards fast computation of certified robustness for ReLU networks. In Proceedings of the 35th International Conference on Machine Learning, 2018.
Provable defenses against adversarial examples via the convex outer adversarial polytope. Eric Wong, Zico Kolter, Proceedings of the 35th International Conference on Machine Learning. the 35th International Conference on Machine LearningEric Wong and Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In Proceedings of the 35th International Conference on Machine Learning, 2018.
Scaling provable adversarial defenses. Eric Wong, Frank Schmidt, Jan Hendrik Metzen, J Zico Kolter, Advances in Neural Information Processing Systems 31. Eric Wong, Frank Schmidt, Jan Hendrik Metzen, and J. Zico Kolter. Scaling provable adversarial defenses. In Advances in Neural Information Processing Systems 31. 2018.
Automatic perturbation analysis for scalable certified robustness and beyond. Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh, NeurIPS. Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien LinKaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, and Cho-Jui Hsieh. Automatic perturbation analysis for scalable certified robustness and beyond. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), NeurIPS, 2020.
Randomized smoothing of all shapes and sizes. Greg Yang, Tony Duan, J Edward Hu, Hadi Salman, P Ilya, Jerry Razenshteyn, Li, ICML. Greg Yang, Tony Duan, J. Edward Hu, Hadi Salman, Ilya P. Razenshteyn, and Jerry Li. Randomized smoothing of all shapes and sizes. In ICML, 2020.
Towards certifying l-infinity robustness using neural networks with l-inf-dist neurons. Bohang Zhang, Tianle Cai, Zhou Lu, Di He, Liwei Wang, Marina Meila and Tong Zhang2021Bohang Zhang, Tianle Cai, Zhou Lu, Di He, and Liwei Wang. Towards certifying l-infinity robustness using neural networks with l-inf-dist neurons. In Marina Meila and Tong Zhang (eds.), ICML, 2021.
Boosting the certified robustness of l-infinity distance nets. Bohang Zhang, Du Jiang, Di He, Liwei Wang, 2022Bohang Zhang, Du Jiang, Di He, and Liwei Wang. Boosting the certified robustness of l-infinity distance nets. ICLR, 2022.
Efficient neural network robustness certification with general activation functions. Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel, Advances in Neural Information Processing Systems 31. Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, and Luca Daniel. Efficient neural network robustness certification with general activation functions. In Advances in Neural Information Processing Systems 31, 2018.
Towards stable and efficient training of verifiably robust neural networks. Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, Cho-Jui Hsieh, International Conference on Learning Representations. Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, and Cho-Jui Hsieh. Towards stable and efficient training of verifiably robust neural networks. In International Conference on Learning Representations, 2020.
Rational function optimization using genetic algorithms. Mehdi Mj Valadan Zoej, Ali Mokhtarzade, Hamid Mansourian, S Ebadi, Sadeghian, International journal of applied earth observation and geoinformation. 94MJ Valadan Zoej, Mehdi Mokhtarzade, Ali Mansourian, Hamid Ebadi, and S Sadeghian. Rational function optimization using genetic algorithms. International journal of applied earth observation and geoinformation, 9(4):403-413, 2007.
Self-Supervised Learning of Depth and Ego-Motion from Video by Alternative Training and Geometric Constraints from 3D to 2D

Jiaojiao Fang, Guizhong Liu
Abstract—Self-supervised learning of depth and ego-motion from unlabeled monocular video has acquired promising results and drawn extensive attention. Most existing methods jointly train the depth and pose networks by photometric consistency of adjacent frames based on the principle of structure-from-motion (SFM). However, the coupling relationship of the depth and pose networks seriously influences the learning performance, and the re-projection relation is sensitive to scale ambiguity, especially for pose learning. In this paper, we aim to improve the depth-pose learning performance without auxiliary tasks, and we address the above issues by alternatively training each task and by incorporating the epipolar geometric constraints into the Iterative Closest Point (ICP) based point cloud matching process. Distinct from jointly training the depth and pose networks, our key idea is to better utilize the mutual dependency of these two tasks by alternatively training each network with its respective losses while fixing the other. We also design a log-scale 3D structural consistency loss to put more emphasis on the smaller depth values during training. To make the optimization easier, we further incorporate the epipolar geometry into the ICP based learning process for pose learning. Extensive experiments on various benchmark datasets indicate the superiority of our algorithm over the state-of-the-art self-supervised methods.

Index Terms—Self-supervised Learning, Monocular Depth Estimation, Pose Estimation, Epipolar Geometry, Iterative Closest Point.
I. INTRODUCTION
Dynamic 3D scene structure understanding is a key yet challenging problem in robotics and autonomous driving scenarios. Obtaining the accurate scene structure and objects' locations in the real world is essential for motion planning and decision making. The supervised methods require densely annotated ground-truth information from additional expensive sensors and precise calibration, which are costly and time-consuming. Thus, recent works seek to obtain the 3D scene geometric information and the ego-motion of the agents in a self-supervised manner from either stereo image pairs [10][33] or video sequences [34].

(The authors are with the School of Electronics and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China; e-mail: [email protected]; [email protected].)

The self-supervised learning framework that jointly optimizes the relative pose and the scene depth has caught the attention of academics as it depends much less on ground-truth labels. Previous methods mainly rely on minimizing the image brightness consistency error among adjacent views by re-projecting the back-projected 3D points into the source views, which may contain much systematic error in realistic scenes due to moving objects, repetitive textures, reflective surfaces, scale variations, and occlusions. Thus, some later works try to explicitly measure the inferred geometry of the whole scene by a 3D geometric alignment loss [22,40] or an epipolar geometric loss [3,41], which are important for self-supervised depth and pose learning. Although these frameworks have achieved excellent improvements, the following problems still exist: i) as the performances of the depth and pose estimations are inter-dependent, due to the scale ambiguity the relations of back-projection and re-projection can produce degenerated results, especially for the pose estimation; thus a more feasible optimization method is needed to obtain more reliable results. ii) The smaller depth values contain richer information and are more important for depth estimation, while the large depth values are less important and are usually tolerated with larger estimation errors in real applications; thus the smaller depth values should be given larger weights to avoid overly emphasizing the larger depth errors during training. iii) Although the ICP based geometric constraints can simultaneously optimize the depth and pose learning and are less affected by the scale ambiguity, the optimization objective towards the best-fit transformation seems tough to achieve and is short of exact linear mathematical constraint relations.
Inspired by the existing excellent work, in this paper we tackle the above issues by fusing the epipolar geometry into ICP registration to simplify the optimization process and obtain more reliable results, and by incorporating the log-scale 3D structural consistency loss for aligning depth with pose, in the self-supervised framework. To ensure that each network is directly optimized towards the gradient descent direction and to increase the convergence speed, we alternatively train the depth and pose networks to align depth with pose and align pose with depth by turns, according to the epipolar-geometry-fused ICP registration. Moreover, we also verify the effectiveness of the properties of the epipolar geometry, namely the low-rankness and the self-expression in union-of-epipolar-subspaces, for depth and pose learning. Our main contributions are the following:

Epipolar geometric constraints embedded with 3D structural consistency. Multi-view geometric consistency is vitally important for self-supervised depth and pose learning based on structure-from-motion. The ICP-based geometric constraints can simultaneously optimize the depth and pose networks, and are less affected by the scale-ambiguity problem, while directly using the best-fit transformation as the optimization objective makes it difficult to obtain ideal results, due to the indirect relation between the pose network learning and the ICP registration. In this paper, instead of constraining the best-fit transformation computed by ICP registration, we propose a more direct manner to optimize the pose network using the epipolar geometric constraint in consideration of the ICP registration, which is easier to optimize, and we embed the ICP registration into the depth and pose learning process; we call this the geometric constraints from 3D to 2D. By the combination of ICP registration and epipolar geometry, we can find a better registration for geometric consistency and an effective optimization objective. Furthermore, we also verify the effectiveness of the properties of epipolar geometry, namely the low-rankness and self-expression in union-of-subspaces, which can serve as a global regularization and deal with the moving objects, for the depth and pose learning.
A log-scale 3D structural consistency measurement. It is better to avoid excessive optimization of the relatively larger errors caused by the larger depth values during training, since such errors are tolerated more in practical applications. To this end, we use the log-scale mean squared error to measure the 3D structural consistency, which aims to alleviate the scale-inconsistency problem over samples and improve the learning performance.
Alternatively training the depth and pose networks with different losses. Inspired by the two-step optimization process of the ICP method, to ease the training procedure and better utilize the mutual dependency of the two tasks, we proposed to alternatively train the depth and pose networks with different geometric constraints for aligning depth with pose and aligning pose with depth by turns. We verify the effectiveness of the alternative training in the self-supervised depth and pose learning framework.
We show that the proposed geometric constraints can be explicitly incorporated into the training process without breaking the simplicity of inference. The proposed framework is extensively evaluated on various datasets and have achieved the state-of-the-art performance.
II. RELATED WORK

Monocular depth estimation based on deep neural networks, from either stereo image pairs or video sequences, has achieved great advancement. The respective methods mainly fall into two categories: the supervised methods and the self-supervised methods.
A. Supervised Monocular Depth Estimation
Supervised monocular depth estimation refers to the problem setting that trains with vast ground-truth data. Due to the superiority of deep learning and the availability of ground-truth data, the supervised learning frameworks [1,6,21,20,17] acquired advanced accuracy. Eigen et al. [6] firstly proposed the depth estimation from a single image by training a network on sparse labels provided by LiDAR scans. Liu et al. [21] used a convolutional neural network (CNN) combined with the conditional random field to learn monocular depth. Karsch et al. [17] proposed a technique that automatically generates plausible depth maps from videos using non-parametric depth sampling. Several works tried to further improve the accuracy of supervised depth estimation by using more robust losses [1,7,20]. But the superior performance of these supervised methods usually relied on high quality and pixel aligned ground-truth depth data for training, which is challenging to gain in various real-world environments. Fu et al. [7] introduced a spacing-increasing discretization (SID) strategy for supervised depth estimation.
B. Self-supervised Learning of Depth and Pose
Recently, many self-supervised methods have been proposed due to their less dependence on the ground truth depth data [42][43][44][45][46]. Self-supervised depth estimation methods mainly utilize the multi-view information from multiple cameras or video sequences by the methodology of structure-from-motion. Here we focus on the most related self-supervised monocular depth and pose estimation from videos.
Zhou et al. [34] first presented a joint learning of depth and ego-motion from unlabeled videos in the self-supervised manner with the static scene assumption. Yin et al. [32] added a refinement network to the depth and pose networks in for estimating the residual optical flow and used the forwardbackward consistency to account for the moving regions. Mahjourian et al. [22] proposed an Iterative Closest Point (ICP) based differentiable 3D loss, which directly penalizes the inconsistencies of the estimated depths without relying on reprojection. Chen et al. [2] proposed a method integrating both the geometric information and the semantic information for scene understanding. Ranjan et al. [28] jointly learned the motion segmentation, optical flow, camera motion, and depth to obtain the complete geometric structure and motion of the scene. Godard et al. [11] proposed an effective minimum photometric loss and an analytical binary mask to deal with the occlusions excluding the invalid regions. Bian et al. [40] proposed a geometry consistency loss for scale-consistent predictions and a self-discovered mask for handling moving objects and occlusions. Shen et al. [16,41] incorporated the epipolar geometry for more robust geometric constraints, while the pre-computed feature points matching is needed. Poggi et al. [26] focused on the uncertainty estimation for self-supervised monocular depth estimation and showed how this practice improves depth accuracy. Guizilini et al. [12] proposed a novel self-supervised monocular depth estimation method combining geometry with a novel network structure called PackNet. Guizilini et al. [13] adopted a novel network architecture using a pre-trained semantic segmentation network to guide the geometric representation learning in a self-supervised monocular learning framework.
C. Epipolar Geometry for Self-supervised Learning
The epipolar geometric constraints are popular for self-supervised optical flow and depth-pose learning. Valgaerts et al. [35] introduced a model to simultaneously estimate the fundamental matrix and the optical flow. Wedel et al. [36] used a fundamental matrix as a weak constraint for the optical flow training. These methods, however, assumed that the scene was mostly rigid, and treated the dynamic parts as outliers [36]. Garg et al. [24] used the subspace constraint as a regularization term for multi-frame optical flow estimation. Zhong et al. [18] proposed a low-rank constraint as well as a union-of-subspaces constraint for the self-supervised optical flow training and investigated multiple ways of enforcing the epipolar constraint. Chen et al. [3] captured multiple geometric constraints relating the optical flow, depth, camera pose and intrinsic parameters from monocular videos, and used epipolar geometry as a verification of putative correspondences given by optical flow.
In this work, we follow the previous good practices, with the major distinction that explore a new training policy to facilitate the training procedure of the depth and pose networks with a log-scale 3D structural consistency loss and the epipolar geometry and ICP based geometric constraint.
III. THE PROPOSED METHOD
In this section, our proposed method is described, including the log-scale 3D structural consistency loss, the geometric constraints from 3D to 2D, the properties of the epipolar geometry as regularizations, as well as the alternative training with different losses. Fig. 1 illustrates an overview of our method. This section starts with an introduction to the problem formulation.
A. Problem Formulation
The problem of self-supervised depth and pose network learning from monocular video sequences can be formalized as follows. Given a target frame I_t ∈ R^{H×W×3} and source frames I_s, where s ∈ {t − 1, t + 1}, collected by a potentially moving camera with intrinsics K, the 3D rigid transformation can be represented as a matrix T_{t→s} = [R_{t→s} | t_{t→s}], where R_{t→s} is the rotation matrix and t_{t→s} is the translation vector of the camera's ego-motion from time t to s, which is to be estimated by the pose network. Assuming a pixel p_t = [x, y]^T in the target frame with estimated depth D_t(p_t), the corresponding 3D location P_t(p_t) in the camera coordinate system at time t is the back-projection of p_t:

P_t(p_t) = D_t(p_t) K^{-1} [p_t^T, 1]^T,    (1)

where D_t represents the output of the depth network and θ_d refers to the parameters of the depth network.
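A small sketch of this back-projection is given below: it lifts every pixel of a depth map to 3D camera coordinates using the predicted depth and the intrinsics K (assumed pinhole). The function name and tensor layout are illustrative, not the paper's implementation.

```python
import torch

def backproject(depth, K):
    """depth: (H, W) predicted depth; K: (3, 3) intrinsics. Returns (3, H*W) 3D points."""
    H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    # homogeneous pixel coordinates [x, y, 1]^T, one column per pixel
    pix = torch.stack([xs.reshape(-1), ys.reshape(-1), torch.ones(H * W)], dim=0)
    rays = torch.linalg.inv(K) @ pix          # K^-1 [p, 1]^T
    return rays * depth.reshape(1, -1)        # D(p) * K^-1 [p, 1]^T, as in Eq. (1)
```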
Assuming P_t(p_t) is transformed rigidly from time t to s, its correspondence p_s' in the source frame can be calculated by re-projecting the back-projected 3D point P_t(p_t):

D_s(p_s') [p_s'^T, 1]^T = K T_{t→s} P_t(p_t),    (2)

where D_s(p_s') is the depth of the transformed point P_t(p_t), computed from T_{t→s}, in the camera coordinate system at time s. Note that the transformation applied to the 3D point is the inverse of the corresponding camera movement. Thus, the photometric loss between the target and source views can easily be computed from this displacement.
In this paper, we use the minimum error among adjacent views [11] as the photometric loss

ℒ_ph = min_s [ ph(I_t, I_{s→t}) ],    (3)

where I_{s→t} is the target image reconstructed from the source images through the re-projection relation in Eq. (2). Following [10], bilinear sampling [38] is used to reconstruct the target image, and a convex combination of the L1 distance and the structured similarity (SSIM) [39] is used to construct the photometric error ph(·,·):

ph(a, b) = α (1 − SSIM(a, b)) / 2 + (1 − α) ‖a − b‖_1.    (4)

As in [11], an edge-aware smoothness loss is used for the depth maps,

ℒ_s = |∂_x d_t^*| e^{−|∂_x I_t|} + |∂_y d_t^*| e^{−|∂_y I_t|},    (5)

and a mask is used to exclude the regions which are harmful to the photometric loss. Thus, the method of [11], trained on monocular videos with the loss ℒ_ph + ℒ_s, is the baseline of our method.
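A sketch of the photometric error of Eqs. (3)-(4) is shown below: a convex combination of SSIM and L1, followed by a per-pixel minimum over the reconstructed source views. The `ssim` callable is assumed to return a per-pixel SSIM (similarity) map, and α = 0.85 is a typical choice, not necessarily the exact value used here.

```python
import torch

def photometric_error(target, recon, ssim, alpha=0.85):
    """target, recon: (N, C, H, W) images; returns a (N, 1, H, W) error map."""
    l1 = (target - recon).abs().mean(dim=1, keepdim=True)
    return alpha * (1.0 - ssim(target, recon)) / 2.0 + (1.0 - alpha) * l1

def min_photometric_loss(target, recons, ssim):
    """Per-pixel minimum over all reconstructed source views (handles occlusion)."""
    errs = torch.stack([photometric_error(target, r, ssim) for r in recons], dim=0)
    return errs.min(dim=0).values.mean()
```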
B. 3D Structural Consistency in Adjacent Views
The scene structures obtained by depth estimations of adjacent views should be consistent [3]. Thus, enforcing the 3D structural consistency is necessary, which considers the whole scene structure. In this paper, to increase the importance of the smaller depth values and to avoid excessive optimization of the larger depth values, we introduce a log-scale 3D geometric consistency loss to penalize the structural variations in multiple views and enforce the scale consistency.
Given the relation between a pixel p_t in the target image and its correspondence p_s' in the source image obtained by the re-projection in Eq. (2), this relation can be used to penalize the structural inconsistency of adjacent views in a uniform coordinate system via bilinear sampling [38]. In consideration of occlusions, inspired by [11], we use the minimum error instead of the average error over all source views as the 3D geometric loss:

ℒ_3d = min_s ‖ log(D_s(p_s') K^{-1} [p_s'^T, 1]^T) − log(T_{t→s} P_t(p_t)) ‖.    (6)

In this way, the gradients of the larger depth values are smaller during training. This loss is similar to the geometric consistency loss in [40], which computes the relative error instead of the absolute error, while putting more emphasis on the smaller depth errors. The experiments verify the effectiveness of this simple logarithmic operation.
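The core log-space comparison can be sketched as below; it only shows the per-pixel logarithmic depth comparison, whereas the full loss of Eq. (6) operates on the back-projected 3D points and takes the minimum over source views. Names and the epsilon guard are our own.

```python
import torch

def log_depth_consistency(depth_src_sampled, depth_warped, eps=1e-6):
    """Both inputs: positive depths at matched locations. Log-space difference
    makes the gradient of large depth values proportionally smaller."""
    return (torch.log(depth_src_sampled + eps) - torch.log(depth_warped + eps)).abs().mean()
```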
C. Epipolar Geometry Embedded with 3D Structural Consistency
The epipolar geometric constraint is less affected by the depth estimation as it does not concern the depth in its formulation. Existing methods use either the pre-computed sparse feature matching [16] or the optical flow [3] combined with the estimated camera pose to construct the epipolar geometric constraint on the image planes, and improved the effectiveness of epipolar geometry.
In this paper, instead of using the optical flow, feature matching, or the re-projection of the back-projected 3D structure by the estimated depth of one image, we use the precise point-cloud matching method, ICP, to establish dense correspondences between the two sets of points, where p_t and p_s'' denote the point-to-point correspondences found by the closest-point distances of the two point clouds, and T_{t→s}^{(n−1)} represents the pose estimated at the (n−1)-th iteration. With the correspondences of the point clouds, the transformation between two adjacent views can be optimized by

argmin_{T_{t→s}} Σ_{i=1}^{N} ‖ P_s(p_s'') − T_{t→s} P_t(p_t) ‖_2^2.    (8)
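One ICP iteration — nearest-neighbour matching followed by the closed-form best-fit rigid transform of Eq. (8) via SVD (Kabsch) — can be sketched as below. This is written with NumPy for clarity and is not the training-time implementation; the brute-force matching assumes small point clouds.

```python
import numpy as np

def icp_step(P_t, P_s):
    """P_t: (N, 3), P_s: (M, 3) point clouds. Returns R (3x3), t (3,)."""
    # 1) closest-point correspondences from P_t into P_s
    d2 = ((P_t[:, None, :] - P_s[None, :, :]) ** 2).sum(-1)
    matches = P_s[d2.argmin(axis=1)]
    # 2) best-fit rigid transform minimizing sum ||matches - (R P_t + t)||^2
    mu_t, mu_m = P_t.mean(0), matches.mean(0)
    H = (P_t - mu_t).T @ (matches - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_t
    return R, t
```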
The 3D structural consistency loss in Eq. (6), which is similar to [3,40], is already an effective manner to obtain the 3D structural consistency of the estimated depth. Therefore, instead of using the matching of point clouds reconstructed from the estimated depths of adjacent views to further constrain the 3D structural consistency of the adjacent views as in [22], we just use the matching of the two sets of point clouds to indicate the pixel relations of the two adjacent views. To better utilize the mutual dependency of the depth and pose networks and to take the 3D geometric structural consistency into account, we embed the correspondences of the two sets of point clouds into the 2D image planes. Instead of directly computing a transformation from the corresponding point clouds, we use the outputs of the pose network and the correspondences embedded in the two adjacent views to construct an epipolar geometry, which transfers the ICP optimization into a linear problem. The optimization aims to transfer the consistency of depth to pose and to constrain them to be consistent with the epipolar geometry.
The correspondence between adjacent views in the image planes based on the ICP alignment is

p_s'' = p_t + [Δu, Δv],    (9)

where [Δu, Δv] is the 2D pixel displacement given by the ICP matching. The matched pixels then satisfy the epipolar constraint

[p_s''^T, 1] K^{-T} [t_{t→s}]_× R_{t→s} K^{-1} [p_t^T, 1]^T = 0,    (10)

where [·]_× is the skew-symmetric matrix of a 3-dimensional vector. The obtained correspondences in Eq. (9) are integer-valued and do not need any bilinear sampling. The roles of this loss are two-fold: one is a validation of the multi-view 3D structural consistency with an exact linear mathematical relation, and the other is the optimization of the pose network. The basic epipolar geometry is an over-constrained formulation, which is not robust to outliers or noises. To improve the robustness of the epipolar geometric constraint, in this paper we incorporate the low-rankness loss ℒ_lr and the self-expression in union-of-subspaces proposed in [18] into the self-supervised depth and pose learning.
To introduce the properties of the epipolar geometry, we first rewrite the epipolar geometric constraint in Eq. (10) in a vectorized form: e_t, the vectorized multiplication of the two matched points, is a 9-dimensional column vector lying in the epipolar subspace [4]. Then we can form a matrix E = [e_1, ⋯, e_N] from all the vectors e_t, t = 1, …, N, where N is the number of pixel points. Thus, the low-rankness loss of the matrix can be formulated by the nuclear norm

ℒ_lr = ‖E‖_*,    (12)

where ‖·‖_* is the nuclear norm, which can be computed by the singular value decomposition (SVD) [18]. By using this constraint, it is not necessary to explicitly compute the fundamental matrix, so it can be applied in degenerate cases where the fundamental matrix is unknown. Although the low-rankness constraint is rather loose, it can still improve performance to some extent [18].
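A minimal sketch of building the 9-dimensional epipolar vectors and the nuclear-norm regularizer of Eq. (12) follows. The exact vectorization order (here a Kronecker product of homogeneous pixel pairs) is an assumption for illustration.

```python
import torch

def epipolar_vector(p_t, p_s):
    """p_t, p_s: 2-D pixel coordinates (tensors of shape (2,)). Returns a 9-d vector."""
    a = torch.cat([p_t, torch.ones(1)])
    b = torch.cat([p_s, torch.ones(1)])
    return torch.kron(a, b)                     # vectorized product of the two points

def low_rank_loss(E):
    """E: (9, N) matrix stacking the epipolar vectors; nuclear norm via SVD."""
    return torch.linalg.svdvals(E).sum()
```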
To deal with the moving objects in the static scene, another constraint, the self-expression in union-of-subspaces, is introduced. This constraint implies that all vectors lying in the union-of-subspaces can be characterized by the self-expression property [18], i.e., each vector can be represented as a linear combination of the other vectors. The coefficients are nonzero among the vectors in the same subspace while remaining zero among the vectors in the other subspaces. The mathematical expression is [18]:

ℒ_su = (1/2) ‖(E^T E + εI)^{-1} E^T E‖^2 + (λ/2) ‖E (E^T E + εI)^{-1} E^T E − E‖^2,    (13)

where C = (E^T E + εI)^{-1} E^T E is the coefficient matrix of the self-expression in the union-of-subspaces and ε = 0.05 is a relaxing factor. In consideration of the GPU memory and the computational efficiency, we randomly sample 2000 point pairs to compute this loss, as in [18]. Even though these epipolar subspaces may be disjoint, this term can still serve as a global regularization.
D. Alternative Training by Different Optimization Objectives
The ICP method alternates between computing correspondences between two 3D point clouds by a simple closest-point heuristic, and computing a best-fit transformation between the two point clouds given those correspondences, until convergence. We embed this traditional optimization idea into the self-supervised depth and camera pose learning framework, which aligns the 3D structure of the adjacent views with the estimated pose, and learns the transformation between adjacent views by the pose network given the correspondences between the two sets of 3D points.
In this section, we describe the proposed training policy and losses used in the self-supervised depth and pose networks learning. To ensure that each of the networks is directly optimized towards the gradient descent direction, here we use an alternative training policy with different geometric constraints, the log-scale 3D geometric loss and the structural consistency embedded epipolar geometric constraint, and the properties of the epipolar geometry as regularizations for their learning respectively.
Pose Network Optimization. As the epipolar geometric constraints is less influenced by the depth estimation, it is a well way for pose estimation. Thus, in this paper we use the epipolar geometric constraint in Eq. (10) for better pose learning to ensure that each network is optimized towards the gradient descent direction. Here we use the epipolar geometric constraints in Eq. (8) to further constrain the pose network learning. Hence, we still use the minimum error to construct the epipolar geometric loss. ℒ = min([ ′′ , 1] [ , 1] ) (14) The epipolar geometric loss ℒ can be easily obtained by computing the distance map from each pixel to its corresponding epipolar line, as in [41]. As the camera pose estimation is more vulnerable to the moving objects in the static scene, the properties of self-expression in the union-ofsubspaces ℒ proposed in [18] is also used to regularize the pose network optimization by the relation of the re-projection in Eq. (2). Thereby, the consistency between the two kinds of correspondences ′ and ′′ would be guaranteed. Then the total loss of the pose network training is as follows: ℒ = ℒ ℎ + ℒ + ℒ + ℒ (15) where , , and are the weights for the different losses. Depth Network Optimization. The 3D geometric structural consistency loss can directly measure the 3D structure of the whole scene. Thus a 3D geometric loss in Eq. (6) is a better manner for directly optimizing the depth network. In consideration of the effectiveness of the low-rankness constraints ℒ reported in [18], here we also use it to regularize the depth learning. Thus the total loss for the depth network can be expressed as
ℒ_depth = ℒ_ph + λ_sm ℒ_sm + λ_3d ℒ_3d + λ_lr ℒ_lr,    (16)
where λ_sm, λ_3d and λ_lr are the weights of the different losses.
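A hedged PyTorch sketch of the point-to-epipolar-line distance that the epipolar loss of Eq. (14) minimizes. The per-pixel minimum over source views and the line normalization follow the description above, but the function names and reductions are assumptions of this sketch.

```python
import torch

def epipolar_distance(F, p, p2):
    """Distance from p2 (source view) to the epipolar line F p of p (target view).

    F  : (3, 3) fundamental matrix.
    p  : (N, 3) homogeneous pixels in the target view.
    p2 : (N, 3) homogeneous pixels in the source view (candidate matches).
    """
    lines = p @ F.t()                       # (N, 3): epipolar lines l = F p
    num = (p2 * lines).sum(dim=1).abs()     # |p2^T F p|
    den = lines[:, :2].norm(dim=1).clamp(min=1e-8)
    return num / den                        # (N,) point-to-line distances

def epipolar_loss(F_list, p, p2_list):
    """Eq. (14)-style loss: per pixel, keep the minimum distance over source views."""
    dists = torch.stack([epipolar_distance(F, p, p2)
                         for F, p2 in zip(F_list, p2_list)], dim=0)  # (S, N)
    return dists.min(dim=0).values.mean()
```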
These losses are averaged over all pixels, scales, and batches to train the networks in an end-to-end manner. The photometric loss here acts as the raw data term and as a global verification. To ease the training process and improve the learning performance, we alternately train one network while keeping the other fixed, each with its respective losses, which means that only a feed-forward pass is performed for the fixed network, without back-propagation.
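A schematic PyTorch training step for the alternating policy described above: the pose network is updated with its loss while the depth network is frozen (forward only), and vice versa. The loss callables and batch keys are placeholders for the terms of Eqs. (15) and (16), not the authors' exact code.

```python
import torch

def set_requires_grad(net, flag):
    for p in net.parameters():
        p.requires_grad_(flag)

def alternating_step(depth_net, pose_net, opt_depth, opt_pose, batch,
                     pose_loss_fn, depth_loss_fn):
    """One alternation: update the pose net, then the depth net (sketch)."""
    # --- pose step: depth net frozen, forward pass only ---
    set_requires_grad(depth_net, False)
    set_requires_grad(pose_net, True)
    with torch.no_grad():
        depth = depth_net(batch["target"])
    pose = pose_net(batch["target"], batch["source"])
    loss_pose = pose_loss_fn(depth, pose, batch)    # photometric + epipolar + self-expression
    opt_pose.zero_grad(); loss_pose.backward(); opt_pose.step()

    # --- depth step: pose net frozen, forward pass only ---
    set_requires_grad(depth_net, True)
    set_requires_grad(pose_net, False)
    with torch.no_grad():
        pose = pose_net(batch["target"], batch["source"])
    depth = depth_net(batch["target"])
    loss_depth = depth_loss_fn(depth, pose, batch)  # photometric + smoothness + 3D + low-rank
    opt_depth.zero_grad(); loss_depth.backward(); opt_depth.step()
    return loss_pose.item(), loss_depth.item()
```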
IV. EXPERIMENTS
In this section, we evaluate the performance of our models and compare it with published state-of-the-art self-supervised methods on the KITTI 2015 stereo dataset [9]. We also use the Make3D dataset [27] to evaluate the cross-dataset generalization ability.
A. Training Datasets
KITTI Raw Dataset.
We mainly used the raw KITTI dataset [9] for training and evaluation. The dataset contains 42,382 rectified stereo pairs from 61 scenes, with a typical resolution of 1242×375 pixels. We trained our model on the Eigen split [6], holding out 679 images for testing, and removed static frames following Zhou et al. [34]. This led to a total of 44,234 sequences, of which we used 39,810 for training and 4,424 for validation. To facilitate training and provide a fair evaluation, input images were resized to 640×192.
KITTI Visual Odometry (VO) Dataset. The KITTI odometry dataset contains 11 driving sequences with ground-truth labels and 11 sequences without ground-truth. As in the standard setting, we used sequences 00-08 for training and sequences 09 and 10 for testing.
Cityscapes Dataset. Since starting from a pre-trained model boosts performance [34], we also tried pre-training the model on the Cityscapes [5] dataset, which provides 88,084 images for training and 9,659 images for validation.
B. Implementation Details
Depth Network. The depth network was a fully convolutional encoder-decoder with skip connections, similar to DispNetS [23]. ResNet18 [14] was used as the encoder unless otherwise specified. The decoder had five deconvolution layers. The network output results at 4 different spatial scales. The lower-resolution depth maps were up-sampled to the input resolution for the photometric loss, as in [11].
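An abbreviated PyTorch sketch of such a DispNet-style encoder-decoder with a ResNet-18 encoder, skip connections, and sigmoid disparity heads at multiple scales. Layer counts, channel widths, and output resolutions are simplified assumptions, not the exact architecture used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as Fn
import torchvision.models as models

class DepthNet(nn.Module):
    """Simplified monocular disparity network: ResNet-18 encoder + skip-connected decoder."""
    def __init__(self):
        super().__init__()
        enc = models.resnet18(weights=None)
        self.stem = nn.Sequential(enc.conv1, enc.bn1, enc.relu)      # 1/2 resolution, 64 ch
        self.pool = enc.maxpool
        self.layers = nn.ModuleList([enc.layer1, enc.layer2, enc.layer3, enc.layer4])
        ch = [64, 64, 128, 256, 512]
        self.up, self.disp = nn.ModuleList(), nn.ModuleList()
        for i in range(4, 0, -1):                                    # decoder with skips
            self.up.append(nn.Sequential(
                nn.Conv2d(ch[i] + ch[i - 1], ch[i - 1], 3, padding=1), nn.ELU()))
            self.disp.append(nn.Conv2d(ch[i - 1], 1, 3, padding=1))  # one disparity per scale

    def forward(self, x):
        feats = [self.stem(x)]
        h = self.pool(feats[-1])
        for layer in self.layers:
            h = layer(h)
            feats.append(h)
        disps = []
        for i, (up, head) in enumerate(zip(self.up, self.disp)):
            h = Fn.interpolate(h, scale_factor=2, mode="nearest")
            h = up(torch.cat([h, feats[-2 - i]], dim=1))
            disps.append(torch.sigmoid(head(h)))                     # disparities at 4 scales
        return disps[::-1]
```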
Pose Network. The pose network took two adjacent frames as input and output the relative motion between the target view and the source view. The network consisted of 7 convolutional layers followed by a 1 × 1 convolution with 6 output channels, corresponding to the rotation angles and translations along the coordinate axes. Parameters Setting and Processing. For all experiments, we set the weights of the different loss components to λ_epi = 0.001, λ_sm = 0.002, λ_3d = 0.02, λ_lr = 0.001 and λ_su = 0.0001. We trained our model with the Adam [19] optimizer with β1 = 0.9, β2 = 0.999, Gaussian random initialization, a ResNet18 encoder pre-trained on ImageNet [29], and a mini-batch size of 4. The learning rate was initially set to 0.0001 and halved after every 10 epochs until the end of training. We used the same data augmentations as in [11]. For disparity maps, we followed a post-processing technique similar to [11] and capped depth at 80 m, as per standard practice during evaluation [10].
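A compact sketch of the pose network and the optimizer schedule as described: seven convolutions followed by a 1×1 convolution with six output channels (three rotation angles, three translations), Adam with the stated hyper-parameters, and a halving schedule every 10 epochs. Channel widths and the small-motion scaling factor are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """Pose network: 7 conv layers + 1x1 conv -> 6-DoF relative motion (sketch)."""
    def __init__(self):
        super().__init__()
        chans = [6, 16, 32, 64, 128, 256, 256, 256]   # input: target + source frames stacked
        layers = []
        for cin, cout in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(cin, cout, 3, stride=2, padding=1), nn.ReLU(inplace=True)]
        self.encoder = nn.Sequential(*layers)
        self.head = nn.Conv2d(256, 6, 1)              # 3 rotation angles + 3 translations

    def forward(self, target, source):
        x = self.encoder(torch.cat([target, source], dim=1))
        return 0.01 * self.head(x).mean(dim=(2, 3))   # (B, 6); 0.01 is an assumed scaling

pose_net = PoseNet()
optimizer = torch.optim.Adam(pose_net.parameters(), lr=1e-4, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)  # halve every 10 epochs
```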
Equipment and Efficiency. The algorithm was implemented in the PyTorch [25] framework, compiled with CUDA 9.0 and CuDNN 7.0, on a computer with an Intel Xeon(R) E5-1660v4 HP-Z440 8-core 3.2 GHz CPU and a Titan Xp GPU. With a single Titan Xp, training took almost 3.1 hours per epoch, compared with 0.8 hours for the baseline method, while the runtime of our model at test time was the same as the baseline.
Unless otherwise specified, the models were trained under these conditions.
C. Main Results of Depth and Pose Evaluations
Depth evaluation. The evaluation of depth estimation followed previous works [11,34,22]. We compare our depth estimation with state-of-the-art self-supervised methods [3,10,11,13,16,22,33,34,37] and with classical supervised methods [6,21]. To be fair to all methods, we used the same crop as [34] and evaluated the predictions at the same resolution as the input image. The evaluation criteria conform to those used in [11]. As shown in Table 1, with the same underlying network structure, the proposed method outperforms state-of-the-art methods by a large margin. Pre-training the network on the larger Cityscapes dataset [5] and then fine-tuning on the KITTI dataset [9] results in a slight further improvement. The final post-processing step leads to an accuracy increase and fewer visual artifacts at the expense of doubling the test time. To examine close-range depth estimation, we also provide separate results with depth capped at 50 m, which again show the advantage of our method. Qualitative results compared with the predictions of Godard et al. [11] and Xue et al. [37] can be seen in Fig. 2. Our method reduces artifacts in low-texture regions of the image and improves the accuracy of close-range objects. The performance improvements mainly come from three factors: the 3D geometric consistency is further verified by the 2D geometric constraints on the image planes; alternately aligning depth with pose and pose with depth lets the two networks correct each other; and using the minimum rather than the average loss is also beneficial for the geometric constraints.
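For reference, a NumPy sketch of the standard error and accuracy metrics reported in Table I (Abs Rel, Sq Rel, RMSE, RMSE log, and the δ thresholds), computed on valid ground-truth pixels with the depth cap applied; exact masking details may differ from the benchmark scripts.

```python
import numpy as np

def depth_metrics(gt, pred, max_depth=80.0, min_depth=1e-3):
    """Standard monocular depth metrics on valid ground-truth pixels."""
    mask = (gt > min_depth) & (gt < max_depth)
    gt, pred = gt[mask], np.clip(pred[mask], min_depth, max_depth)
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel": float(np.mean(np.abs(gt - pred) / gt)),
        "sq_rel": float(np.mean((gt - pred) ** 2 / gt)),
        "rmse": float(np.sqrt(np.mean((gt - pred) ** 2))),
        "rmse_log": float(np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))),
        "a1": float(np.mean(thresh < 1.25)),
        "a2": float(np.mean(thresh < 1.25 ** 2)),
        "a3": float(np.mean(thresh < 1.25 ** 3)),
    }
```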
Generalization ability on the Make3D dataset. To illustrate the generalization ability of the trained model to other, unseen datasets, we evaluated on the Make3D dataset a network trained only on the KITTI dataset. Qualitative results are shown in Fig. 3. Despite the dissimilarities between the datasets, both in content and in camera parameters, we still achieve reasonable results.
Pose Evaluation. Although our method mainly concentrates on better depth estimation, we also evaluated the relative pose estimation against competing methods on the official KITTI odometry benchmark, using the Absolute Trajectory Error (ATE) metric over N-frame snippets (N = 3 or 5), as in [11]. The pose estimation results in Table 2 show the improvement over existing methods. We observed that with the epipolar geometric constraints the pose estimation results improve notably, which is consistent with the report in [16].
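A short NumPy sketch of the N-frame snippet ATE evaluation used above: predicted snippet translations are scale-aligned to the ground truth before the RMSE is computed. This follows the common protocol; the exact alignment details may differ from the official benchmark scripts.

```python
import numpy as np

def snippet_ate(gt_xyz, pred_xyz):
    """Absolute trajectory error of one N-frame snippet after scale alignment.

    gt_xyz, pred_xyz : (N, 3) camera positions with the first frame at the origin.
    """
    # optimal scale aligning predicted to ground-truth translations
    scale = np.sum(gt_xyz * pred_xyz) / max(np.sum(pred_xyz ** 2), 1e-12)
    err = gt_xyz - scale * pred_xyz
    return np.sqrt((err ** 2).sum(axis=1).mean())

def mean_ate(gt_snippets, pred_snippets):
    """Mean and standard deviation of ATE over all snippets, as reported in Table II."""
    ates = [snippet_ate(g, p) for g, p in zip(gt_snippets, pred_snippets)]
    return float(np.mean(ates)), float(np.std(ates))
```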
D. KITTI Ablation Study
Performance of different losses. To analyze the individual impact of each loss, we provide an ablation study over different combinations of losses. The models for depth and pose evaluation were trained only on the KITTI raw dataset and the odometry dataset, respectively. We chose an incremental order for the proposed techniques to avoid too many loss combinations. As shown in Table 3, we made the following observations.
1) The 3D structural consistency was essential for improving the depth estimation performance, and the log-scale 3D structural consistency loss improved it further. Our method was more stable over all metrics, with the gains especially noticeable on metrics that are sensitive to large depth errors, e.g., the Squared Relative Error and RMSE.
2) Although the properties of epipolar geometry, namely the low-rankness and union-of-subspaces constraints, are well suited to self-supervised optical flow estimation, the improvements they bring to self-supervised depth estimation are limited. In summary, the epipolar geometry helped both the pose and the depth estimation, while the log-scale 3D geometric consistency terms further improved the depth estimation.
Alternative Training Policy. We also conducted an ablation study of the proposed alternating training policy against joint training. The results show that the alternating policy is as effective as joint training, and becomes more effective when the two networks are trained with their respective geometric constraints.
Different Depth Network Structures.
For the sake of completeness, similarly to [13], we also provide an ablative analysis of the generalization ability of our method to different depth networks. To this end, we considered two variations on well-performing structures: ResNet50 [14] as the encoder, and PackNet [12]. The results in Table 4 show that the proposed method consistently improves the performance with different depth networks, for all considered metrics.
V. CONCLUSIONS
In this paper, we put forward a self-supervised depth and pose estimation architecture that incorporates both geometric principles and photometric learning metrics. Our main contributions are to better exploit the mutual dependency of depth and pose learning by training the two networks alternately with different geometries, and to simplify ICP-registration-based optimization by incorporating epipolar geometry. The log-scale 3D structural consistency loss and the epipolar-geometry-embedded ICP registration are adopted in the respective tasks. To make the results more robust and reliable, we incorporate regularizers derived from the properties of epipolar geometry, namely the low-rankness and the self-expression in union-of-subspaces constraints, for the depth and pose networks respectively. Our method also mitigates the negative effects of possible object motion through the self-expression in the union of epipolar subspaces. The experimental results demonstrate that our method obtains depth maps with better contours for foreground targets and generalizes well to unseen datasets. Further directions include enforcing consistency across the whole dataset and applying the method to uncalibrated data. The weights of the various loss functions are currently set by experience and trial and error; adaptive weighting schemes could be considered in the future.
P_t = {P_t(x_1), ⋯, P_t(x_N)} and P_s = {P_s(x_1), ⋯, P_s(x_N)} from adjacent views, where N is the total number of points. The ICP method alternately computes correspondences between the two sets of 3D point clouds by searching for the minimum point-to-point distance of each point pair, and computes a best-fit transformation between the two point clouds with these correspondences; the next iteration then re-computes the correspondences with the previous iteration's transformation applied. Thus the optimization objective of the n-th iteration is min ‖T(P_s(x″_i)) − P_t(x_i)‖ over the correspondences x″_i and the transformation T, for i ∈ {1, ⋯, N}.
Assuming a pinhole imaging model, the correspondence p and p″ in adjacent views should satisfy the epipolar geometry relation with a fundamental matrix F = K⁻ᵀ [t]_× R K⁻¹, i.e., the cross product of the rotation matrix and the translation vector multiplied with the camera intrinsics.
f is the vectorized fundamental matrix along the column direction, and w = ([p′, 1] ⊗ [p, 1]).
Fig. 2. Qualitative results of our proposed architecture on the KITTI dataset with the Eigen split [6]. The columns from left to right show, respectively, input images, the state-of-the-art predicted depth maps (Godard et al., 2019 [11]; Xue et al., 2020 [37]), and the depth maps obtained by our proposed architecture. Our method recovers more subtle details such as trees, trunks and advertising boards.
TABLE I: Results of comparison with the state-of-the-art methods on the KITTI dataset [9] with the split of Eigen [6], where the error metrics are lower-is-better and the accuracy metrics are higher-is-better. The best results are in bold; the second best are underlined. 'K' represents the KITTI raw dataset and 'CS' represents the Cityscapes training dataset. M refers to methods that train using monocular sequences, S refers to methods that train using stereo pairs, D refers to methods that use ground-truth depth supervision, 'Sem' refers to methods that include semantic information. 'p-p' refers to post-processing.
…where d*(p) = d(p)/d̄ is the mean-normalized inverse depth, used to prohibit shrinking of the estimated depth [11]. A per-pixel binary mask proposed by Godard et al. [11] is…
Fig. 1. An overview of our method. Besides photometric consistency, we explore the epipolar geometric consistency, the log-scale 3D structural consistency, and an alternating training policy with a different geometric consistency for each network, to improve the depth and pose estimation performance. Here the two depth CNNs share parameters.
TABLE II: Odometry results on the KITTI [9] odometry dataset. Results show the average Absolute Trajectory Error and standard deviation, in meters.

Methods              Sequence 09      Sequence 10      # frames
Garg et al. [8]      0.013±0.010      0.012±0.011      3
Zhou et al. [34]     0.021±0.017      0.020±0.015      5
Vid2depth [22]       0.013±0.010      0.012±0.011      3
GeoNet [32]          0.012±0.007      0.012±0.009      5
Ranjan et al. [28]   0.012±0.007      0.012±0.008      5
Monodepth2 [11]      0.017±0.008      0.015±0.010      2
Shen et al. [16]     0.009±0.005      0.008±0.007      3
Ours                 0.008±0.005      0.008±0.006      3

Fig. 3. An illustration of examples of depth predictions on the Make3D dataset. Note that our model is only trained on the KITTI dataset and directly tested on Make3D.
TABLE III: Ablation studies. Evaluation of different training loss configurations and training policies. All models are trained solely on the KITTI raw dataset, without pre-training on Cityscapes [5] and without post-processing [11]. The depth estimation performance is evaluated with maximum depth capped at 80 m; the error metrics (Abs Rel, Sq Rel, RMSE, RMSE log) are lower-is-better and the accuracy metrics (δ < 1.25, δ < 1.25², δ < 1.25³) are higher-is-better. The best results of each metric are in bold. The loss-configuration columns indicate which of ℒ_3d (optionally log-scale), ℒ_epi, ℒ_lr and ℒ_su are enabled on top of the baseline, and whether alternating training is used.
Rows follow the incremental order of the proposed loss configurations, from the baseline (first row) to the full model with alternating training (last row); columns are Abs Rel, Sq Rel, RMSE, RMSE log, δ < 1.25, δ < 1.25², δ < 1.25³:

0.115  0.903  4.863  0.193  0.877  0.959  0.981
0.114  0.885  4.845  0.192  0.877  0.960  0.980
0.115  0.896  4.844  0.193  0.876  0.959  0.981
0.114  0.884  4.834  0.192  0.877  0.959  0.982
0.113  0.879  4.828  0.191  0.876  0.960  0.981
0.113  0.872  4.825  0.189  0.882  0.960  0.981
0.114  0.870  4.812  0.192  0.876  0.960  0.981
0.113  0.868  4.809  0.192  0.877  0.959  0.982
0.112  0.835  4.748  0.189  0.878  0.960  0.982
TABLE IV: Ablative analysis of the generalization of our proposed network to variant network structures. All models are trained solely on the monocular images of the KITTI raw dataset, without pre-training on Cityscapes [5] and without post-processing [11].

Networks    Abs Rel  Sq Rel  RMSE   RMSE log  δ<1.25  δ<1.25²  δ<1.25³
ResNet-18   0.112    0.835   4.748  0.189     0.878   0.960    0.981
ResNet-50   0.107    0.792   4.661  0.182     0.887   0.962    0.983
PackNet     0.103    0.698   4.274  0.172     0.894   0.964    0.985
Estimating depth from monocular images as classification using deep fully convolutional residual networks. Y Cao, Z Wu, C Shen, IEEE Transactions on Circuits and Systems for Video Technology. 2811Cao, Y., Wu, Z., Shen, C.: Estimating depth from monocular images as classification using deep fully convolutional residual networks. IEEE Transactions on Circuits and Systems for Video Technology 28(11), 3174-3182 (2017)
Towards scene understanding: Unsupervised monocular depth estimation with semantic-aware representation. P Chen, A H Liu, Y Liu, Y F Wang, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionChen, P., Liu, A.H., Liu, Y., Wang, Y.F.: Towards scene understanding: Unsupervised monocular depth estimation with semantic-aware representation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2624-2632 (2019)
Self-supervised learning with geometric constraints in monocular video: Connecting flow, depth, and camera. Y Chen, C Schmid, C Sminchisescu, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionChen, Y., Schmid, C., Sminchisescu, C.: Self-supervised learning with geometric constraints in monocular video: Connecting flow, depth, and camera. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 7063-7072 (2019)
Robust multibody feature tracker: a segmentation-free approach. Pan Ji, Hongdong Li, Mathieu Salzmann, Yiran Zhong, Proc. IEEE Conf. Comp. Vis. Patt. Recogn. IEEE Conf. Comp. Vis. Patt. RecognPan Ji, Hongdong Li, Mathieu Salzmann, and Yiran Zhong. Robust multi- body feature tracker: a segmentation-free approach. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 3843-3851, 2016
The cityscapes dataset for semantic urban scene understanding. M Cordts, M Omran, S Ramos, T Rehfeld, M Enzweiler, R Benenson, U Franke, S Roth, B Schiele, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionCordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., Schiele, B.: The cityscapes dataset for semantic urban scene understanding. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3213-3223 (2016)
Depth map prediction from a single image using a multi-scale deep network. D Eigen, C Puhrsch, R Fergus, Advances in neural information processing systems. Eigen, D., Puhrsch, C., Fergus, R.: Depth map prediction from a single image using a multi-scale deep network. In: Advances in neural information processing systems. pp. 2366-2374 (2014)
Deep Ordinal Regression Network for Monocular Depth Estimation. H Fu, M Gong, C Wang, K Batmanghelich, D Tao, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UTFu H., Gong M., Wang C., Batmanghelich K. and Tao D., "Deep Ordinal Regression Network for Monocular Depth Estimation," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, pp. 2002-2011(2018).
Unsupervised cnn for single view depth estimation: Geometry to the rescue. R Garg, V K Bg, G Carneiro, I Reid, European Conference on Computer Vision. Garg, R., BG, V.K., Carneiro, G., Reid, I.: Unsupervised cnn for single view depth estimation: Geometry to the rescue. In: European Conference on Computer Vision. pp. 740-756 (2016)
Are we ready for autonomous driving? the kitti vision benchmark suite. A Geiger, P Lenz, R Urtasun, 2012 IEEE Conference on Computer Vision and Pattern Recognition. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? the kitti vision benchmark suite. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition. pp. 3354-3361 (2012)
Unsupervised monocular depth estimation with left-right consistency. C Godard, O Mac Aodha, G J Brostow, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionGodard, C., Mac Aodha, O., Brostow, G.J.: Unsupervised monocular depth estimation with left-right consistency. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 270-279 (2017)
Digging into selfsupervised monocular depth estimation. C Godard, O Mac Aodha, M Firman, G J Brostow, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionGodard, C., Mac Aodha, O., Firman, M., Brostow, G.J.: Digging into self- supervised monocular depth estimation. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3828-3838 (2019)
3d packing for self-supervised monocular depth estimation. V Guizilini, R Ambrus, S Pillai, A Raventos, A Gaidon, IEEE/CVF Conference on Computer Vision and Pattern Recognition. Guizilini, V., Ambrus, R., Pillai, S., Raventos, A., Gaidon, A.: 3d packing for self-supervised monocular depth estimation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. (2020)
Semanticallyguided representation learning for self-supervised monocular depth. V Guizilini, R Hou, J Li, R Ambrus, A Gaidon, International Conference on Learning Representations. Guizilini, V., Hou, R., Li, J., Ambrus, R., Gaidon, A.: Semantically- guided representation learning for self-supervised monocular depth. In: International Conference on Learning Representations. (2020)
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionHe, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770-778 (2016)
Batch normalization: Accelerating deep network training by reducing internal covariate shift. S Ioffe, C Szegedy, arXiv:1502.03167arXiv preprintIoffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015)
Self-Supervised Learning of Depth and Motion Under Photometric Inconsistency. T Shen, L Zhou, Z Luo, Y Yao, S Li, J Zhang, T Fang, L Quan, International Conference on Computer Vision Workshops (ICCVW). 2019Shen, T., Zhou, L., Luo, Z., Yao, Y., Li, S., Zhang, J., Fang, T., Quan, L.: Self-Supervised Learning of Depth and Motion Under Photometric Inconsistency. International Conference on Computer Vision Workshops (ICCVW) 2019: 6359-6365.
Depth transfer: Depth extraction from video using non-parametric sampling. K Karsch, C Liu, S B Kang, IEEE transactions. 3611Karsch, K., Liu, C., Kang, S.B.: Depth transfer: Depth extraction from video using non-parametric sampling. IEEE transactions on pattern analysis and machine intelligence 36(11), 2144-2158 (2014)
Unsupervised Deep Epipolar Flow for Stationary or Dynamic Scenes. Y Zhong, P Ji, J Wang, Computer vision and pattern recognition. Zhong, Y., Ji, P., Wang, J., et al. Unsupervised Deep Epipolar Flow for Stationary or Dynamic Scenes[C]. Computer vision and pattern recognition, 2019: 12095-12104.
Adam: A method for stochastic optimization. D P Kingma, J Ba, arXiv:1412.6980arXiv preprintKingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Deeper depth prediction with fully convolutional residual networks. I Laina, C Rupprecht, V Belagiannis, F Tombari, N Navab, International conference on 3D vision. Laina, I., Rupprecht, C., Belagiannis, V., Tombari, F., Navab, N.: Deeper depth prediction with fully convolutional residual networks. In: International conference on 3D vision. pp. 239-248 (2016)
Learning depth from single monocular images using deep convolutional neural fields. F Liu, C Shen, G Lin, I Reid, IEEE Transactions on Pattern Analysis and Machine Intelligence. 3810Liu, F., Shen, C., Lin, G., Reid, I.: Learning depth from single monocular images using deep convolutional neural fields. IEEE Transactions on Pattern Analysis and Machine Intelligence. 38(10), 2024-2039 (2015)
Unsupervised learning of depth and ego-motion from monocular video using 3d geometric constraints. R Mahjourian, M Wicke, A Angelova, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionMahjourian, R., Wicke, M., Angelova, A.: Unsupervised learning of depth and ego-motion from monocular video using 3d geometric constraints. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5667-5675(2018)
A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. N Mayer, E Ilg, P Hausser, P Fischer, D Cremers, A Dosovitskiy, T Brox, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionMayer, N., Ilg, E., Hausser, P., Fischer, P., Cremers, D., Dosovitskiy, A., Brox, T.: A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4040-4048 (2016)
Dense multi-frame optic flow for non-rigid objects using subspace constraints. R Garg, L Pizarro, D Rueckert, L Agapito, Proc. Asian Conf. Comp. Vis. Asian Conf. Comp. VisGarg, R., Pizarro, L., Rueckert, D. and Agapito, L.: Dense multi-frame optic flow for non-rigid objects using subspace constraints. In Proc. Asian Conf. Comp. Vis., pages 460-473, 2011.
Automatic differentiation in pytorch. A Paszke, S Gross, S Chintala, G Chanan, E Yang, Z Devito, Z Lin, A Desmaison, L Antiga, A Lerer, Advances in Neural Information Processing Systems Workshop. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in pytorch. In: Advances in Neural Information Processing Systems Workshop. (2017)
On the uncertainty of selfsupervised monocular depth estimation. M Poggi, F Aleotti, F Tosi, S Mattoccia, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Poggi, M., Aleotti, F., Tosi, F., Mattoccia, S.: On the uncertainty of self- supervised monocular depth estimation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020)
Make3d: Learning 3d scene structure from a single still image. A Saxena, M Sun, A Y Ng, IEEE Transactions on Pattern Analysis and Machine Intelligence. 315Saxena, A., Sun, M., Ng, A.Y.: Make3d: Learning 3d scene structure from a single still image. IEEE Transactions on Pattern Analysis and Machine Intelligence. 31(5), 824-840 (2008)
Competitive collaboration: Joint unsupervised learning of depth, camera motion, optical flow and motion segmentation. A Ranjan, V Jampani, L Balles, K Kim, D Sun, J Wulff, M J Black, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionRanjan, A., Jampani, V., Balles, L., Kim, K., Sun, D., Wulff, J., Black, M.J.: Competitive collaboration: Joint unsupervised learning of depth, camera motion, optical flow and motion segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 12240-12249 (2019)
Imagenet large scale visual recognition challenge. O Russakovsky, J Deng, H Su, J Krause, S Satheesh, S Ma, Z Huang, A Karpathy, A Khosla, M Bernstein, International Journal of Computer Vision. 1153Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115(3), 211-252 (2015)
Refine and Distill: Exploiting Cycle-Inconsistency and Knowledge Distillation for Unsupervised Monocular Depth Estimation. A Pilzer, S Lathuiliè Re, N Sebe, E Ricci, 10.1109/CVPR.2019.010002019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USAPilzer, A., Lathuiliè re, S., Sebe, N. and Ricci, E.: "Refine and Distill: Exploiting Cycle-Inconsistency and Knowledge Distillation for Unsupervised Monocular Depth Estimation," 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 9760-9769, doi: 10.1109/CVPR.2019.01000.
Multiple view geometry in computer vision. R Hartley, A Zisserman, Cambridge university pressHartley, R., and Zisserman, A.. Multiple view geometry in computer vision. Cambridge university press, 2003.
Geonet: Unsupervised learning of dense depth, optical flow and camera pose. Z Yin, J Shi, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionYin, Z., Shi, J.: Geonet: Unsupervised learning of dense depth, optical flow and camera pose. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1983-1992 (2018)
Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. H Zhan, R Garg, C Saroj Weerasekera, K Li, H Agarwal, I Reid, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionZhan, H., Garg, R., Saroj Weerasekera, C., Li, K., Agarwal, H., Reid, I.: Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 340-349 (2018)
Unsupervised learning of depth and ego-motion from video. T Zhou, M Brown, N Snavely, D G Lowe, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionZhou, T., Brown, M., Snavely, N., Lowe, D.G.: Unsupervised learning of depth and ego-motion from video. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1851-1858 (2017)
A variational model for the joint recovery of the fundamental matrix and the optical flow. L Valgaerts, A Bruhn, J Weickert, Proceedings of DAGM Symposium on Pattern Recognition. Valgaerts, L., Bruhn, A., and Weickert, J.: A variational model for the joint recovery of the fundamental matrix and the optical flow. In Proceedings of DAGM Symposium on Pattern Recognition, pp. 314-324, (2008)
Structureand motionadaptive regularization for high accuracy optic flow. A Wedel, D Cremers, T Pock, H Bischof, Proc. IEEE International Conference on Computer Vision. IEEE International Conference on Computer VisionSeptWedel, A., Cremers, D., Pock, T., and Bischof, H.: Structureand motion- adaptive regularization for high accuracy optic flow. In Proc. IEEE International Conference on Computer Vision, pp. 1663-1668, Sept (2009).
Toward Hierarchical Self-Supervised Monocular Absolute Depth Estimation for Autonomous Driving Applications. F Xue, G Zhuo, Z Huang, W Fu, Z Wu, AngJr, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Xue, F., Zhuo, G., Huang, Z., Fu, W., Wu, Z., Ang Jr.. Toward Hierarchical Self-Supervised Monocular Absolute Depth Estimation for Autonomous Driving Applications. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). (2020)
Spatial transformer networks. M Jaderberg, K Simonyan, A Zisserman, NIPS. Jaderberg, M., Simonyan, K., Zisserman, A., et al. Spatial transformer networks. In NIPS, pages 2017-2025, 2015.
Image quality assessment: from error visibility to structural similarity. Zhou Wang, Alan C Bovik, R Hamid, Eero P Sheikh, Simoncelli, IEEE Transactions on Image Processing. Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, pp. 600-612, (2004).
Unsupervised scale-consistent depth and ego-motion learning from monocular video. J Bian, Z Li, N Wang, H Zhan, C Shen, M Cheng, Reid , I , Advances in Neural Information Processing Systems. Bian, J., Li, Z., Wang, N., Zhan, H., Shen, C., Cheng, M., and Reid, I.: Unsupervised scale-consistent depth and ego-motion learning from monocular video. Advances in Neural Information Processing Systems, pp. 35-45, (2019).
Beyond photometric loss for self-supervised ego-motion estimation. T Shen, IEEE International Conference on Robotics and Automation (ICRA). Shen, T., et al. Beyond photometric loss for self-supervised ego-motion estimation. IEEE International Conference on Robotics and Automation (ICRA), (2019).
Unsupervised Monocular Depth Learning in Dynamic Scenes. H Li, A Gordon, H Zhao, V Casser, A Angelova, International Conference on Robot Learning (CoRL). Li, H., Gordon, A., Zhao, H., Casser, V. and Angelova, A.: Unsupervised Monocular Depth Learning in Dynamic Scenes. International Conference on Robot Learning (CoRL), (2020).
Depth prediction without the sensors: Leveraging structure for unsupervised learning from monocular videos. V Casser, S Pirk, R Mahjourian, Angelova , A , Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33Casser, V., Pirk, S., Mahjourian, R., and Angelova, A.: Depth prediction without the sensors: Leveraging structure for unsupervised learning from monocular videos. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 8001-8008. (2019)
Feature-metric loss for selfsupervised learning of depth and egomotion. C Shu, K Yu, Z Duan, K Yang, European Conference on Computer Vision. Shu, C. , Yu, K. , Duan, Z. , & Yang, K. .: Feature-metric loss for self- supervised learning of depth and egomotion. In: European Conference on Computer Vision. pp. (2020)
Selfsupervised monocular depth estimation: Solving the dynamic object problem by semantic guidance. M Klingner, J Termohlen, J Mikolajczyk, T Fingscheidt, European Conference on Computer Vision. Klingner, M., Termohlen, J., Mikolajczyk, J., and Fingscheidt, T.: Self- supervised monocular depth estimation: Solving the dynamic object problem by semantic guidance. In: European Conference on Computer Vision, (2020).
Towards Better Generalization: Joint Depth-Pose Learning Without PoseNet. W Zhao, S Liu, Y Shu, Y.-J Liu, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Zhao, W., Liu, S., Shu, Y., and Liu, Y.-J.: Towards Better Generalization: Joint Depth-Pose Learning Without PoseNet. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9148-9158, (2020).
| []
|
[
"Characterizing Crisp Simulations and Crisp Directed Simulations between Fuzzy Labeled Transition Systems by Using Fuzzy Modal Logics",
"Characterizing Crisp Simulations and Crisp Directed Simulations between Fuzzy Labeled Transition Systems by Using Fuzzy Modal Logics"
]
| [
"Linh Anh Nguyen [email protected] \nInstitute of Informatics\nFaculty of Computer Science and Management Wroclaw University of Science and Technology Wroclaw\nUniversity of Warsaw Warsaw\nPoland, Poland\n",
"Ngoc-Thanh Nguyen [email protected] \nInstitute of Informatics\nFaculty of Computer Science and Management Wroclaw University of Science and Technology Wroclaw\nUniversity of Warsaw Warsaw\nPoland, Poland\n"
]
| [
"Institute of Informatics\nFaculty of Computer Science and Management Wroclaw University of Science and Technology Wroclaw\nUniversity of Warsaw Warsaw\nPoland, Poland",
"Institute of Informatics\nFaculty of Computer Science and Management Wroclaw University of Science and Technology Wroclaw\nUniversity of Warsaw Warsaw\nPoland, Poland"
]
| []
| We formulate and prove logical characterizations of crisp simulations and crisp directed simulations between fuzzy labeled transition systems with respect to fuzzy modal logics that use a general t-norm-based semantics. The considered logics are fragments of the fuzzy propositional dynamic logic with the Baaz projection operator. The logical characterizations concern preservation of positive existential (respectively, positive) modal formulas under crisp simulations (respectively, crisp directed simulations), as well as the Hennessy-Milner property of such simulations. | 10.1109/fuzz45933.2021.9494504 | [
"https://arxiv.org/pdf/2109.02334v1.pdf"
]
| 236,939,711 | 2109.02334 | 74cb656f3d5ffa08812ffd616bd5c7dccb3d2a6e |
Characterizing Crisp Simulations and Crisp Directed Simulations between Fuzzy Labeled Transition Systems by Using Fuzzy Modal Logics
Sep 2021
Linh Anh Nguyen [email protected]
Institute of Informatics
Faculty of Computer Science and Management Wroclaw University of Science and Technology Wroclaw
University of Warsaw Warsaw
Poland, Poland
Ngoc-Thanh Nguyen [email protected]
Institute of Informatics
Faculty of Computer Science and Management Wroclaw University of Science and Technology Wroclaw
University of Warsaw Warsaw
Poland, Poland
Characterizing Crisp Simulations and Crisp Directed Simulations between Fuzzy Labeled Transition Systems by Using Fuzzy Modal Logics
Sep 2021. Index Terms: fuzzy labeled transition systems, fuzzy modal logics, simulation, directed simulation
We formulate and prove logical characterizations of crisp simulations and crisp directed simulations between fuzzy labeled transition systems with respect to fuzzy modal logics that use a general t-norm-based semantics. The considered logics are fragments of the fuzzy propositional dynamic logic with the Baaz projection operator. The logical characterizations concern preservation of positive existential (respectively, positive) modal formulas under crisp simulations (respectively, crisp directed simulations), as well as the Hennessy-Milner property of such simulations.
I. INTRODUCTION
Like bisimulation, simulation is a well-known notion for comparing observational behaviors of automata and labeled transition systems (LTSs) [1], [2]. Given states x and x ′ , we say that x ′ simulates x if the label of x is a subset of the label of x ′ and, for every σ-transition from x to a state y, there exists a σ-transition from x ′ to a state y ′ that simulates y, where σ is any action. Such defined simulations preserve positive existential modal formulas, which are modal formulas without implication, negation and universal modal operators. That is, if x ′ simulates x, then it satisfies all positive existential modal formulas that x does. Conversely, given image-finite LTSs S and S ′ , if a state x ′ of S ′ satisfies all positive existential modal formulas that a state x of S does, then the pair x, x ′ belongs to the largest simulation between S and S ′ (cf. [3], [4]). This is called the Hennessy-Milner property of simulation.
Directed simulation is a stronger notion than simulation. A state x ′ directedly simulates a state x if:
• the label of x is a subset of the label of x ′ ; • for every σ-transition from x to a state y, there exists a σtransition from x ′ to a state y ′ that directedly simulates y; • for every σ-transition from x ′ to a state y ′ , there exists a σ-transition from x to a state y that is directedly simulated by y ′ . Thus, directed simulation requires both the "forward" and "backward" conditions of bisimulation and is weaker than bisimulation only in that x and x ′ are not required to have the same label as in the case of bisimulation. Directed simulation was introduced and studied by Kurtonina and de Rijke [5] for modal logic. They proved that directed simulation characterizes the class of positive modal formulas, like simulation characterizes the class of positive existential modal formulas. Positive modal formulas are modal formulas without implication and negation. Directed simulation has also been formulated and studied for description logics [6].
For fuzzy structures like fuzzy automata and fuzzy labeled transition systems (FLTSs), researchers have studied both crisp simulations [7]- [11] and fuzzy simulations [8], [12]- [15]. Crisp/fuzzy bisimulations have also been studied for fuzzy structures by a considerable number of researchers [7], [9], [12], [14], [16]- [26]. However, as far as we know, only the work [11] has concerned crisp directed simulations for fuzzy structures. It only deals with computational aspects.
The current paper concerns logical characterizations of crisp simulations and crisp directed simulations for FLTSs. As related works on logical characterizations of crisp/fuzzy bisimulations or simulations for fuzzy structures, the notable ones are the papers [17], [19], [20], [23], [26] on fuzzy bisimulations, [9], [19], [21]- [23] on crisp bisimulations, [8], [13] on fuzzy simulations, and [8]- [10] on crisp simulations. We discuss below only the last three works.
In [8] Pan et al. studied simulations for quantitative transition systems (QTSs), which are a variant of FLTSs without labels for states. The authors provided logical characterizations of cut-based crisp simulations between finite QTSs w.r.t. an existential cut-based crisp Hennessy-Milner logic. A fuzzy threshold, used as the cut for the fuzzy equality relation between actions, is a parameter for both the crisp simulations and the crisp Hennessy-Milner logic under consideration. The main results of [8] are formulated only for the case when the underlying residuated lattice is a finite Heyting algebra.
In [9] Wu and Deng provided a logical characterization of crisp simulations for FLTSs w.r.t. a crisp Hennessy-Milner logic, which uses values from the interval [0, 1] as thresholds for modal operators. States of FLTSs considered in [9] are not labeled. The logical characterization of crisp simulations provided in [9] is the Hennessy-Milner property formulated w.r.t. a crisp modal logic with a minimal set of constructors, namely with ⊤, ∧ and a p , where a is an action and p ∈ [0, 1].
In [10] Nguyen introduced and studied cut-based crisp simulations between fuzzy interpretations in fuzzy description logics under the Zadeh semantics. He provided results on preservation of information under such simulations and the Hennessy-Milner property of such simulations w.r.t. fuzzy description logics under the Zadeh semantics.
As seen from the above discussion, logical characterizations of crisp simulations studied in [8], [9] are formulated w.r.t. crisp modal logics, whereas logical characterizations of crisp simulations studied in [10] are formulated w.r.t. fuzzy description logics under the Zadeh semantics. In addition, crisp simulations studied in [8], [10] are cut-based and, indeed, a form of fuzzy simulations. There was the lack of logical characterizations of crisp simulations between fuzzy structures w.r.t. fuzzy modal/description logics that use a residuated lattice or a t-norm-based semantics. Furthermore, logical characterizations of crisp/fuzzy directed simulations between fuzzy structures have not yet been studied.
In this paper, we formulate and prove logical characterizations of crisp simulations and crisp directed simulations between FLTSs w.r.t. fuzzy modal logics that use a general tnorm-based semantics. The considered logics are fragments of the fuzzy propositional dynamic logic with the Baaz projection operator (fPDL △ ). The logical characterizations concern preservation of positive existential (resp. positive) modal formulas under crisp simulations (resp. crisp directed simulations), as well as the Hennessy-Milner property of such simulations.
The rest of this paper is structured as follows. Section II contains definitions about fuzzy sets, fuzzy operators, FLTSs and fPDL △ . In Section III (resp. IV), we define crisp simulations (resp. crisp directed simulations) between FLTSs, formulate and prove their logical characterizations w.r.t. the positive existential (resp. positive) fragments of fPDL △ . Conclusions are given in Section V.
II. PRELIMINARIES
A. Fuzzy Sets and Fuzzy Operators
A fuzzy subset of a set X is a function f : X → [0, 1]. Given a fuzzy subset f of X, f (x) for x ∈ X means the fuzzy degree of that x belongs to the subset. For {x 1 , . . . , x n } ⊆ X and {a 1 , . . . , a n } ⊂ [0, 1], we write {x 1 : a 1 , . . . , x n : a n } to denote the fuzzy subset f of X such that f (x i ) = a i for 1 ≤ i ≤ n and f (x) = 0 for x ∈ X \ {x 1 , . . . , x n }. Given fuzzy subsets f and g of a set X, we write f ≤ g to denote that f (x) ≤ g(x) for all x ∈ X. A fuzzy subset of X × Y is called a fuzzy relation between X and Y . The three well-known t-norms named after Gödel, Łukasiewicz and product are specified below:
x ⊗_G y = min{x, y},  x ⊗_Ł y = max{0, x + y − 1},
x ⊗_P y = x · y.
The corresponding residua are specified below:
x ⇒ G y = if x ≤ y then 1 else y,
x ⇒ Ł y = min{1, 1 − x + y}, x ⇒ P y = if x ≤ y then 1 else y/x.
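A small Python sketch of the three t-norms and their residua listed above, which can be used to experiment with the semantics given below; only elementary arithmetic is involved, and the function names are our own.

```python
def t_godel(x, y):       return min(x, y)
def t_lukasiewicz(x, y): return max(0.0, x + y - 1.0)
def t_product(x, y):     return x * y

def r_godel(x, y):       return 1.0 if x <= y else y
def r_lukasiewicz(x, y): return min(1.0, 1.0 - x + y)
def r_product(x, y):     return 1.0 if x <= y else y / x

# The residuum is the adjoint of its t-norm: (x (t) z) <= y  iff  z <= (x => y).
assert r_godel(0.7, 0.4) == 0.4
assert abs(r_lukasiewicz(0.7, 0.4) - 0.7) < 1e-9
assert abs(r_product(0.8, 0.4) - 0.5) < 1e-9
```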
From now on, let ⊗ be an arbitrary left-continuous t-norm and ⇒ be its residuum. It is known that ⇒ is decreasing w.r.t. the first argument and increasing w.r.t. the second argument. Furthermore, (x ⇒ y) = 1 iff x ≤ y.
Let
B. Fuzzy Labeled Transition Systems
Let Σ A be a non-empty set of actions and Σ L a non-empty set of state labels. We use ̺ to denote an element of Σ A and p to denote an element of Σ L .
An FLTS is a triple S = S, δ, L , where S is a nonempty set of states, δ : S × Σ A × S → [0, 1] is called the transition function, and L : S → (Σ L → [0, 1]) is called the state labeling function. For x, y ∈ S, ̺ ∈ Σ A and p ∈ Σ L , δ(x, ̺, y) means the fuzzy degree of that there is a transition of the action ̺ from the state x to the state y, whereas the fuzzy subset L(x) of Σ L is the label of x and L(x)(p) means the fuzzy degree of that p belongs to the label of x.
An FLTS S = S, δ, L is image-finite if, for every x ∈ S and ̺ ∈ Σ A , the set {y | δ(x, ̺, y) > 0} is finite. It is finite if S, Σ A and Σ L are finite.
C. Fuzzy PDL with the Baaz Projection Operator
We use Σ A , Σ L as the signature for the logical languages considered in this article. By fPDL △ we denote the fuzzy propositional dynamic logic with the Baaz projection operator. Programs and formulas of fPDL △ over the signature Σ A , Σ L are defined as follows:
• actions from Σ A are programs of fPDL △ , • if α and β are programs of fPDL △ , then α • β, α ∪ β and α * are also programs of fPDL △ , • if ϕ is a formula of fPDL △ , then ϕ? is a program of fPDL △ , • values from the interval [0, 1] and propositions from Σ L are formulas of fPDL △ , • if ϕ and ψ are formulas of fPDL △ and α is a program of fPDL △ , then △ϕ, ϕ ∧ ψ, ϕ ∨ ψ, ϕ → ψ, [α]ϕ and α ϕ are formulas of fPDL △ . Note that we ignore negation, as a formula ¬ϕ is usually defined to be ϕ → 0.
Given a finite set Γ = {ϕ 1 , . . . , ϕ n } of formulas with n ≥ 0, we define ⋀Γ = ϕ 1 ∧ . . . ∧ ϕ n ∧ 1.
Definition 1:
Treating an FLTS S = S, δ, L as a fuzzy Kripke model, a program α is interpreted in S as a fuzzy relation α S : S×S → [0, 1], whereas a formula ϕ is interpreted in S as a fuzzy subset ϕ S : S → [0, 1]. The functions α S and ϕ S are specified as follows.
̺ S (x, y) = δ(x, ̺, y)
(ϕ?) S (x, y) = (if x = y then ϕ S (x) else 0)
(α • β) S (x, y) = sup{α S (x, z) ⊗ β S (z, y) | z ∈ S}
(α ∪ β) S (x, y) = max{α S (x, y), β S (x, y)}
(α * ) S (x, y) = sup{⊗{α S (x i , x i+1 ) | 0 ≤ i < n} | n ≥ 0, x 0 , . . . , x n ∈ S, x 0 = x, x n = y}
a S (x) = a
p S (x) = L(x)(p)
(△ϕ) S (x) = (if ϕ S (x) = 1 then 1 else 0)
(ϕ ∧ ψ) S (x) = min{ϕ S (x), ψ S (x)}
(ϕ ∨ ψ) S (x) = max{ϕ S (x), ψ S (x)}
(ϕ → ψ) S (x) = (ϕ S (x) ⇒ ψ S (x))
([α]ϕ) S (x) = inf{α S (x, y) ⇒ ϕ S (y) | y ∈ S}
( α ϕ) S (x) = sup{α S (x, y) ⊗ ϕ S (y) | y ∈ S}.
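To make the semantics concrete, here is a minimal Python evaluator for the fK ∃,0 △ fragment (propositions, △, ∧, a → ϕ, and ⟨̺⟩ϕ) over a finite FLTS, parameterized by a t-norm and its residuum. The tuple-based formula encoding and the data-structure choices are our own assumptions, not part of the paper.

```python
def t_godel(x, y): return min(x, y)
def r_godel(x, y): return 1.0 if x <= y else y

def evaluate(phi, x, S, delta, L, tnorm=t_godel, resid=r_godel):
    """Fuzzy degree phi^S(x) for the fK(exists,0,triangle) fragment on a finite FLTS.

    Formulas are nested tuples:
      ("prop", p) | ("delta", phi) | ("and", phi, psi)
      | ("impl", a, phi)   # a -> phi with a in [0, 1]
      | ("dia", rho, phi)  # <rho> phi, rho an action
    """
    kind = phi[0]
    if kind == "prop":
        return L[x].get(phi[1], 0.0)
    if kind == "delta":
        return 1.0 if evaluate(phi[1], x, S, delta, L, tnorm, resid) == 1.0 else 0.0
    if kind == "and":
        return min(evaluate(phi[1], x, S, delta, L, tnorm, resid),
                   evaluate(phi[2], x, S, delta, L, tnorm, resid))
    if kind == "impl":
        return resid(phi[1], evaluate(phi[2], x, S, delta, L, tnorm, resid))
    if kind == "dia":
        rho, psi = phi[1], phi[2]
        vals = [tnorm(delta.get((x, rho, y), 0.0),
                      evaluate(psi, y, S, delta, L, tnorm, resid)) for y in S]
        return max(vals) if vals else 0.0   # sup = max on a finite state set
    raise ValueError(kind)
```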
III. CRISP SIMULATIONS BETWEEN FLTSS AND THEIR LOGICAL CHARACTERIZATIONS
In this section, we first recall crisp simulations between FLTSs, then define the positive existential fragments fPDL ∃ △ and fK ∃,0 △ of fPDL △ , and finally formulate and prove logical characterizations of crisp simulations between FLTSs w.r.t. these positive existential fragments of fPDL △ .
A. Crisp Simulations between FLTSs
This subsection is a reformulation of the corresponding one of [11] (which concerns fuzzy graphs).
Let S = S, δ, L and S ′ = S ′ , δ ′ , L ′ be FLTSs. A binary relation Z ⊆ S × S ′ is called a (crisp) simulation between S and S ′ if the following conditions hold for every x, y ∈ S, x ′ ∈ S ′ and ̺ ∈ Σ A :
Z(x, x ′ ) → (L(x) ≤ L ′ (x ′ )) (1) [Z(x, x ′ ) ∧ (δ(x, ̺, y) > 0)] → ∃y ′ ∈ S ′ [(δ(x, ̺, y) ≤ δ ′ (x ′ , ̺, y ′ )) ∧ Z(y, y ′ )]. (2)
Here, → and ∧ denote the usual crisp logical connectives. Thus, the above conditions mean that:
(1) if Z(x, x ′ ) holds, then L(x) ≤ L ′ (x ′ );
(2) if Z(x, x ′ ) holds and δ(x, ̺, y) > 0, then there exists y ′ ∈ S ′ such that δ(x, ̺, y) ≤ δ ′ (x ′ , ̺, y ′ ) and Z(y, y ′ ) holds.
and δ = { u 1 , ̺, u 2 : 0.6, u 2 , ̺, u 3 : 0.5, u 2 , ̺, u 4 : 0.6, u 3 , ̺, u 4 : 0.4, u 4 , ̺, u 2 : 0.5}. • S ′ = S ′ , δ ′ , L ′ , where S ′ = {v 1 , v 2 }, L ′ (v 1 )(p) = 0.7, L ′ (v 2 )(p) = 0.8 and δ ′ = { v 1 , ̺, v 1 : 0.5, v 1 , ̺, v 2 : 0.6, v 2 , ̺, v 1 : 0.5}. It can be checked that { u 2 , v 1 , u 3 , v 1 , u 4 , v 2 } is the largest simulation between S and S ′ .
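The largest simulation can be computed by starting from the full relation and repeatedly deleting pairs that violate Conditions (1)-(2). The Python sketch below does this for the two FLTSs of the example, writing the action ̺ as "r" and the proposition as "p". The labels of u1-u4 are not given in the text above, so the values used here are hypothetical; with them, the sketch recovers the stated largest simulation.

```python
def largest_simulation(S, Sp, L, Lp, delta, deltap, actions, props):
    """Greatest fixpoint: start from S x S' and remove pairs violating (1)-(2)."""
    Z = {(x, xp) for x in S for xp in Sp
         if all(L[x].get(p, 0.0) <= Lp[xp].get(p, 0.0) for p in props)}        # condition (1)
    changed = True
    while changed:
        changed = False
        for (x, xp) in sorted(Z):
            ok = all(                                                          # condition (2)
                any(delta[(x, r, y)] <= deltap.get((xp, r, yp), 0.0) and (y, yp) in Z
                    for yp in Sp)
                for r in actions for y in S if delta.get((x, r, y), 0.0) > 0)
            if not ok:
                Z.discard((x, xp))
                changed = True
    return Z

S, Sp = ["u1", "u2", "u3", "u4"], ["v1", "v2"]
L = {"u1": {"p": 0.5}, "u2": {"p": 0.5}, "u3": {"p": 0.5}, "u4": {"p": 0.8}}   # hypothetical labels
Lp = {"v1": {"p": 0.7}, "v2": {"p": 0.8}}
delta = {("u1", "r", "u2"): 0.6, ("u2", "r", "u3"): 0.5, ("u2", "r", "u4"): 0.6,
         ("u3", "r", "u4"): 0.4, ("u4", "r", "u2"): 0.5}
deltap = {("v1", "r", "v1"): 0.5, ("v1", "r", "v2"): 0.6, ("v2", "r", "v1"): 0.5}
print(sorted(largest_simulation(S, Sp, L, Lp, delta, deltap, ["r"], ["p"])))
# [('u2', 'v1'), ('u3', 'v1'), ('u4', 'v2')]
```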
A (crisp) auto-simulation of S is a simulation between S and itself.
Proposition 1: Let S, S ′ and S ′′ be FLTSs and let S = S, δ, L .
1) The relation Z = { x, x | x ∈ S} is an auto-simulation of S. 2) If Z 1 is a simulation between S and S ′ , and Z 2 is a simulation between S ′ and S ′′ , then Z 1 • Z 2 is a simulation between S and S ′′ . 3) If Z is a set of simulations between S and S ′ , then Z is also a simulation between S and S ′ .
The proof of this proposition is straightforward.
Corollary 1: The largest simulation between two arbitrary FLTSs exists. The largest auto-simulation of a FLTS is a preorder.
B. The Positive Existential Fragment of fPDL △
By fPDL ∃ △ we denote the largest sublanguage of fPDL △ that disallows the formula constructor [α]ϕ and allows implication (→) only in formulas of the form a → ϕ with a ∈ [0, 1]. We call fPDL ∃ △ the positive existential fragment of fPDL △ . By fK ∃,0 △ we denote the largest sublanguage of fPDL ∃ △ that disallows the program constructors (α•β, α∪β, α * and ϕ?), the disjunction operator (∨) and the constructor a (with a ∈ [0, 1]). That is, only actions from Σ A are programs of fK ∃,0 △ , and formulas of fK ∃,0 △ are of the form p, △ϕ, ϕ ∧ ψ, a → ϕ or ̺ ϕ, where p ∈ Σ L , a ∈ [0, 1], ̺ ∈ Σ A , and ϕ and ψ are formulas of fK ∃,0
△ . An FLTS S = S, δ, L is said to be witnessed w.r.t. fPDL ∃ △ if, for every formula ϕ (resp. program α) of fPDL ∃ △ and every x, y ∈ S, if the definition of ϕ S (x) (resp. α S (x, y)) in Definition 1 uses supremum, then the set under the supremum has the biggest element if it is non-empty. The notion of whether an FLTS S = S, δ, L is witnessed w.r.t. fK ∃,0 △ is defined analogously. Observe that if an FLTS S = S, δ, L is finite, then it is witnessed w.r.t. fPDL ∃ △ and fK ∃,0 △ . If S is image-finite, then it is witnessed w.r.t. fK ∃,0 △ .
C. Logical Characterizations of Simulations between FLTSs
A formula ϕ is said to be preserved under simulations between FLTSs if, for every FLTSs S = S, δ, L and S ′ = S ′ , δ ′ , L ′ that are witnessed w.r.t. fPDL ∃ △ , for every simulation Z between them, and for every x ∈ S and
x ′ ∈ S ′ , if Z(x, x ′ ) holds, then ϕ S (x) ≤ ϕ S ′ (x ′ ).
Theorem 1: All formulas of fPDL ∃ △ are preserved under simulations between FLTSs.
This theorem follows immediately from the first assertion of the following lemma.
Lemma 1: Let S = S, δ, L and S ′ = S ′ , δ ′ , L ′ be FLTSs witnessed w.r.t. fPDL ∃ △ and Z be a simulation between them. Then, the following assertions hold for every x, y ∈ S, x ′ ∈ S ′ , every formula ϕ and every program α of fPDL ∃ △ , where → and ∧ are the usual crisp logical connectives:
Z(x, x ′ ) → (ϕ S (x) ≤ ϕ S ′ (x ′ )) (3) [Z(x, x ′ ) ∧ (α S (x, y) > 0)] → ∃y ′ ∈ S ′ [(α S (x, y) ≤ α S ′ (x ′ , y ′ )) ∧ Z(y, y ′ )]. (4)
Proof: We prove this lemma by induction on the structure of ϕ and α.
Consider the assertion (3). Assume that Z(x, x ′ ) holds. We need to show that ϕ S (x) ≤ ϕ S ′ (x ′ ). The cases when ϕ is a constant a ∈ [0, 1] or a proposition p ∈ Σ L are trivial. The cases when ϕ is of the form △ψ, ψ ∧ ξ or ψ ∨ ξ are also straightforward by using the induction assumptions about ψ and ξ and the definition of ϕ S (x) and ϕ S ′ (x ′ ). The remaining cases are considered below.
• Case ϕ = a → ψ: By the induction assumption,
ψ S (x) ≤ ψ S ′ (x ′ )
. It is known that the residuum ⇒ of every continuous t-norm is increasing w.r.t. the second argument. Hence,
ϕ S (x) ≤ ϕ S ′ (x ′ ). • Case ϕ = α ψ: For a contradiction, assume that ϕ S (x) > ϕ S ′ (x ′ ). Since S is witnessed w.r.t. fPDL ∃ △ , there exists y ∈ S such that ϕ S (x) = α S (x, y) ψ S (y). Since ϕ S (x) > ϕ S ′ (x ′ )
, it follows that ϕ S (x) > 0 and therefore α S (x, y) > 0 (since 0 a ≤ 0 1 = 0 for all a ∈ [0, 1]). By the induction assumption of (4), there exists y ′ ∈ S ′ such that α S (x, y) ≤ α S ′ (x ′ , y ′ ) and Z(y, y ′ ) holds. Since Z(y, y ′ ) holds, by the induction assumption, ψ S (y) ≤ ψ S ′ (y ′ ). Since is increasing w.r.t. both the arguments, it follows that
α S (x, y) ψ S (y) ≤ α S ′ (x ′ , y ′ ) ψ S ′ (y ′ ).
This contradicts the assumption ϕ S (x) > ϕ S ′ (x ′ ).
Consider the assertion (4). The case when α is an action ̺ ∈ Σ A follows from Condition (2). The other cases are considered below.
• Case α = β 1 • β 2 : Suppose that Z(x, x ′ ) holds and α S (x, y) > 0. Thus,
α S (x, y) = sup{β S 1 (x, z) β S 2 (z, y) | z ∈ S} > 0. Since S is witnessed w.r.t. fPDL ∃ △ , there exists z ∈ S such that α S (x, y) = β S 1 (x, z) β S 2 (z, y) > 0.
Since a 0 = 0 a = 0 for all a ∈ [0, 1], we must have that β S 1 (x, z) > 0 and β S 2 (z, y) > 0. Since Z(x, x ′ ) holds, by the induction assumption, there exists z ′ ∈ S ′ such that Z(z, z ′ ) holds and β S 1 (x, z) ≤ β S ′ 1 (x ′ , z ′ ). Since Z(z, z ′ ) holds and β S 2 (z, y) > 0, by the induction assumption, there exists y ′ ∈ S ′ such that Z(y, y ′ ) holds and β S 2 (z, y) ≤ β S ′ 2 (z ′ , y ′ ). Since is increasing, it follows that
β S 1 (x, z) β S 2 (z, y) ≤ β S ′ 1 (x ′ , z ′ ) β S ′ 2 (z ′ , y ′ ). Therefore, α S (x, y) ≤ α S ′ (x ′ , y ′ ) and the induction hypothesis (4) holds. • Case α = β 1 ∪ β 2 : Suppose that Z(x, x ′ ) holds and α S (x, y) > 0. Thus, max{β S 1 (x, y), β S 2 (x, y)} > 0. W.l.o.g. we assume that β S 1 (x, y) > 0.
Since Z(x, x ′ ) holds, by the induction assumption, it follows that there exists y ′ ∈ S ′ such that β S 1 (x, y) ≤ β S ′ 1 (x ′ , y ′ ) and Z(y, y ′ ) holds. Therefore, α S (x, y) = β S 1 (x, y) ≤ β S ′ 1 (x ′ , y ′ ) ≤ α S ′ (x ′ , y ′ ) and the induction hypothesis (4) holds.
• Case α = β * : Suppose that Z(x, x ′ ) holds and α S (x, y) > 0. If x = y, then by taking y ′ = x ′ , α S ′ (x ′ , y ′ ) = 1 = α S (x, y) and Z(y, y ′ ) holds. Assume that x = y. Since S is witnessed w.r.t. fPDL ∃ △ , there exist n ≥ 1 and x 0 , . . . , x n ∈ S such that x 0 = x, x n = y and (β * ) S (x, y) = β S (x 0 , x 1 ) · · · β S (x n−1 , x n ). Since α S (x, y) > 0, we must have that β S (x i , x i+1 ) > 0 for all 0 ≤ i < n (because a 0 = 0 a = 0 for all a ∈ [0, 1]). Let
x ′ 0 = x ′ . For each i from 0 to n − 1, since Z(x i , x ′ i ) holds and β S (x i , x i+1 ) > 0, by the induction assumption, there exists x ′ i+1 ∈ S ′ such that β S (x i , x i+1 ) ≤ β S ′ (x ′ i , x ′ i+1 ) and Z(x i+1 , x ′ i+1 ) holds. Take y ′ = x ′ n . Thus, Z(y, y ′ ) holds. Since is increasing, β S (x 0 , x 1 ) · · · β S (x n−1 , x n ) ≤ β S ′ (x ′ 0 , x ′ 1 ) · · · β S ′ (x ′ n−1 , x ′ n )
. Hence, α S (x, y) ≤ α S ′ (x ′ , y ′ ) and the induction hypothesis (4) holds. • Case α = (ψ?): Suppose that Z(x, x ′ ) holds and α S (x, y) > 0. Thus, x = y and α S (x, y) = ψ S (x).
Since Z(x, x ′ ) holds, by the induction assumption (3), ψ S (x) ≤ ψ S ′ (x ′ ). By choosing y ′ = x ′ , we have that α S (x, y) = ψ S (x) ≤ ψ S ′ (x ′ ) = α S ′ (x ′ , y ′ ) and the induction hypothesis (4) holds.
The following lemma is a counterpart of Lemma 1 for fK ∃,0 △ (instead of fPDL ∃ △ ). Its proof can obtained from the proof of the assertion (3) of Lemma 1 by simplification, using (2) instead of (4).
Lemma 2: Let S = S, δ, L and S ′ = S ′ , δ ′ , L ′ be FLTSs witnessed w.r.t. fK ∃,0 △ and Z be a simulation between them. Then, for every x ∈ S and x ′ ∈ S ′ , if Z(x, x ′ ) holds, then ϕ S (x) ≤ ϕ S ′ (x ′ ) for all formulas ϕ of fK ∃,0
△ . An FLTS S = S, δ, L is said to be modally saturated w.r.t. fK ∃,0 △ if, for every x ∈ S, a ∈ (0, 1], ̺ ∈ Σ A and every infinite set Γ of formulas of fK ∃,0 △ , if for every finite subset Ψ of Γ there exists y ∈ S such that ̺ S (x, y) ( Ψ) S (y) ≥ a, then there exists y ∈ S such that ̺ S (x, y) ≥ a and ϕ S (y) > 0 for all ϕ ∈ Γ. The notion of modal saturatedness is a technical extension of image-finiteness. This is confirmed by the following proposition.
Proposition 2: Every image-finite FLTS is modally saturated w.r.t. fK ∃,0 △ . Proof: Let S = S, δ, L be an image-finite FLTS. Let x ∈ S, a ∈ (0, 1], ̺ ∈ Σ A and let Γ be an infinite set of formulas of fK ∃,0 △ . We prove that S is modally saturated w.r.t. fK ∃,0 △ by contraposition. Suppose that, for every y ∈ S, there exists ϕ y ∈ Γ such that ̺ S (x, y) < a or ϕ S y (y) = 0. We need to prove that there exists a finite subset Ψ of Γ such that, for every y ∈ S, ̺ S (x, y) ( Ψ) S (y) < a. Let Ψ = {ϕ y | y ∈ S and ̺ S (x, y) > 0}. Since S is image-finite, Ψ is finite. For every y ∈ S, since either ̺ S (x, y) < a or ϕ y ∈ Ψ and ϕ S y (y) = 0, we must have that ̺ S (x, y) ( Ψ) S (y) < a. This completes the proof.
The following theorem states the Hennessy-Milner property of crisp simulations between FLTSs.
Theorem 2: Let S = S, δ, L and S ′ = S ′ , δ ′ , L ′ be FLTSs witnessed and modally saturated w.r.t. fK ∃,0 △ . Then,
Z = { x, x ′ ∈ S × S ′ | ϕ S (x) ≤ ϕ S ′ (x ′ ) for all formulas ϕ of fK ∃,0
△ } is the largest simulation between S and S ′ . Proof: By Lemma 2, it is sufficient to prove that the considered Z is a simulation between S and S ′ . Let x, y ∈ S, x ′ ∈ S ′ and ̺ ∈ Σ A . We need to prove Conditions (1) and (2).
Condition (1) holds by the definition of Z. Consider Condition (2) and suppose that Z(x, x ′ ) holds and
δ(x, ̺, y) = a > 0. Let Y ′ = {y ′ ∈ S ′ | ̺ S ′ (x ′ , y ′ ) ≥ a}.
We need to show that there exists y ′ ∈ Y ′ such that Z(y, y ′ ) holds. For a contradiction, suppose that, for each y ′ ∈ Y ′ , Z(y, y ′ ) does not hold, which means there exists a formula ϕ y ′ of fK ∃,0 △ such that ϕ S y ′ (y) > ϕ S ′ y ′ (y ′ ). For every y ′ ∈ Y ′ , let ψ y ′ = △(ϕ S y ′ (y) → ϕ y ′ ), which is a formula of fK ∃,0 △ . Observe that, for every y ′ ∈ Y ′ , ψ S y ′ (y) = 1 and ψ S ′ y ′ (y ′ ) = 0. Let Γ = {ψ y ′ | y ′ ∈ Y ′ }. Observe that, for every y ′ ∈ S ′ , either ̺(x ′ , y ′ ) < a or there exists ψ = ψ y ′ ∈ Γ such that ψ S ′ (y ′ ) = 0. Since S ′ is modally saturated w.r.t. fK ∃,0 △ , there exists a finite subset Ψ of Γ such that, for every y ′ ∈ S ′ , ̺ S ′ (x ′ , y ′ ) ( Ψ) S ′ (y ′ ) < a. Let ϕ = ̺ Ψ. It is a formula of fK ∃,0 △ . Since S ′ is witnessed w.r.t. fK ∃,0 △ , it follows that ϕ S ′ (x ′ ) < a. Since ψ S (y) = 1 for all ψ ∈ Ψ, ( Ψ) S (y) = 1 and ϕ S (x) ≥ a. Thus, ϕ S (x) > ϕ S ′ (x ′ ), which contradicts the assumption that Z(x, x ′ ) holds. Let S = S, δ, L and S ′ = S ′ , δ ′ , L ′ be FLTSs and let x ∈ S and x ′ ∈ S ′ . We write x s x ′ to denote that there exists a simulation Z between S and S ′ such that Z(x, x ′ ) holds. We also write x ≤ ∃ x ′ (resp. x ≤ ∃,0 K x ′ ) to denote that ϕ S (x) ≤ ϕ S ′ (x ′ ) for all formulas ϕ of fPDL ∃ △ (resp. fK ∃,0 △ ). The following corollary follows immediately from Theorems 2 and 1.
Corollary 2: Let S = S, δ, L and S ′ = S ′ , δ ′ , L ′ be FLTSs and let x ∈ S and x ′ ∈ S ′ . 1) If S and S ′ are witnessed and modally saturated w.r.t. fK ∃,0 △ , then
x s x ′ iff x ≤ ∃,0 K x ′ .
2) If, in addition, S and S ′ are witnessed w.r.t. fPDL ∃ △ , then
x ≤ ∃ x ′ iff x s x ′ iff x ≤ ∃,0 K x ′ ,
and therefore whether x ≤ ∃ x ′ or not does not depend on the used t-norm .
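For finite FLTSs, the conditions defining crisp simulations can be checked mechanically, and the largest simulation can be obtained by a standard greatest-fixpoint refinement. The following Python sketch is our own illustration, not part of the paper: the two structures and all degrees are hypothetical stand-ins (Example 1 is only partially reproduced in this excerpt), Condition (1) is taken here to be the label condition L(x)(p) ≤ L′(x′)(p), and Condition (2) is the "forth" condition used in the proof of Theorem 2.

```python
# A minimal sketch (not from the paper): computing the largest crisp simulation
# between two finite FLTSs by greatest-fixpoint refinement of Conditions (1)-(2).
# States, labels and transition degrees below are hypothetical.

# FLTS S: fuzzy labelling L and fuzzy transition function delta as dictionaries.
S_states = ["u1", "u2", "u3", "u4"]
S_label = {"u1": {"p": 0.7}, "u2": {"p": 0.7}, "u3": {"p": 0.6}, "u4": {"p": 0.5}}
S_delta = {("u1", "rho", "u2"): 0.8, ("u2", "rho", "u3"): 0.9, ("u3", "rho", "u4"): 0.7}

# FLTS S': a second structure to be compared with S (also hypothetical).
S2_states = ["v1", "v2"]
S2_label = {"v1": {"p": 0.8}, "v2": {"p": 0.9}}
S2_delta = {("v1", "rho", "v2"): 0.9, ("v2", "rho", "v1"): 0.9}

ACTIONS = ["rho"]
PROPS = ["p"]

def deg(delta, x, a, y):
    return delta.get((x, a, y), 0.0)

def label_ok(x, xp):
    # Condition (1): labels may only increase from S to S'.
    return all(S_label[x].get(p, 0.0) <= S2_label[xp].get(p, 0.0) for p in PROPS)

def forth_ok(x, xp, Z):
    # Condition (2): every positive transition from x is matched by a transition
    # from x' with at least the same degree, ending in a Z-related state.
    for a in ACTIONS:
        for y in S_states:
            d = deg(S_delta, x, a, y)
            if d > 0 and not any(deg(S2_delta, xp, a, yp) >= d and (y, yp) in Z
                                 for yp in S2_states):
                return False
    return True

def largest_simulation():
    # Start from all pairs satisfying the label condition and refine until stable.
    Z = {(x, xp) for x in S_states for xp in S2_states if label_ok(x, xp)}
    changed = True
    while changed:
        changed = False
        for pair in list(Z):
            if not forth_ok(pair[0], pair[1], Z):
                Z.discard(pair)
                changed = True
    return Z

print(sorted(largest_simulation()))
```

Since removing a pair can only create new violations, the refinement is monotone and terminates with the largest relation satisfying both conditions.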
IV. CRISP DIRECTED SIMULATIONS BETWEEN FLTSS AND THEIR LOGICAL CHARACTERIZATIONS
In this section, we first recall crisp directed simulations between FLTSs, then define the positive fragments fPDL pos △ and fK pos △ of fPDL △ , and finally formulate and prove logical characterizations of crisp directed simulations between FLTSs w.r.t. these positive fragments of fPDL △ .
A. Crisp Directed Simulations between FLTSs
This subsection is a reformulation of the corresponding one of [11] (which concerns fuzzy graphs).
Let S = S, δ, L and S ′ = S ′ , δ ′ , L ′ be FLTSs. A binary relation Z ⊆ S × S ′ is called a (crisp) directed simulation between S and S ′ if it satisfies Conditions (1) and (2) (of simulations) and the following one for every x ∈ S, x ′ , y ′ ∈ S ′ and ̺ ∈ Σ A , where → and ∧ denote the usual crisp logical connectives:
[Z(x, x ′ ) ∧ (δ ′ (x ′ , ̺, y ′ ) > 0)] → ∃y ∈ S [(δ ′ (x ′ , ̺, y ′ ) ≤ δ(x, ̺, y)) ∧ Z(y, y ′ )]. (5)
Example 2: Let Σ A = {̺} and Σ L = {p}. Reconsider the FLTSs S and S ′ specified in Example 1. It can be checked that ∅ is the unique directed simulation between S and S ′ .
Let S 2 and S ′ 2 be the FLTSs illustrated in Fig. 2 and specified in a similar way as done for S and S ′ in Example 1. It is straightforward to show that Z = {u 2 , u 3 , u 4 } × {v 1 , v 2 } is the largest directed simulation between S 2 and S ′ 2 . A (crisp) directed auto-simulation of an FLTS S is a directed simulation between S and itself. The following proposition is a counterpart, for directed simulations, of the corresponding proposition for simulations. Its proof is also straightforward.
Proposition 3: Let S, S ′ and S ′′ be FLTSs and let S = S, δ, L .
1) The relation Z = { x, x | x ∈ S} is a directed autosimulation of S. 2) If Z 1 is a directed simulation between S and S ′ , and Z 2 is a directed simulation between S ′ and S ′′ , then Z 1 • Z 2 is a directed simulation between S and S ′′ .
3) If Z is a set of directed simulations between S and S ′ , then the union ⋃ Z is also a directed simulation between S and S ′ .
The proof of this proposition is straightforward. Corollary 3: The largest directed simulation between two arbitrary FLTSs exists. The largest directed auto-simulation of an FLTS is a pre-order.
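Corollary 3 guarantees that the largest directed simulation exists; for finite FLTSs it can again be computed by refinement, now additionally enforcing the "back" requirement (5). The sketch below is a hypothetical illustration in the same spirit as the previous one; it is not taken from the paper, and the structures and degrees are invented.

```python
# A sketch (hypothetical data) of checking the extra "back" requirement (5) of
# directed simulations and computing the largest directed simulation by refinement.
S_states = ["u1", "u2"]
S_label = {"u1": {"p": 0.7}, "u2": {"p": 0.9}}
S_delta = {("u1", "rho", "u2"): 0.8}

S2_states = ["v1", "v2"]
S2_label = {"v1": {"p": 0.8}, "v2": {"p": 1.0}}
S2_delta = {("v1", "rho", "v2"): 0.7}

ACTIONS, PROPS = ["rho"], ["p"]
d1 = lambda x, a, y: S_delta.get((x, a, y), 0.0)
d2 = lambda x, a, y: S2_delta.get((x, a, y), 0.0)

def cond1(x, xp):
    # label condition: values may only increase from S to S'
    return all(S_label[x].get(p, 0.0) <= S2_label[xp].get(p, 0.0) for p in PROPS)

def cond2(x, xp, Z):
    # forth: every positive transition of S is matched by S' with at least the same degree
    return all(not (d1(x, a, y) > 0) or
               any(d2(xp, a, yp) >= d1(x, a, y) and (y, yp) in Z for yp in S2_states)
               for a in ACTIONS for y in S_states)

def cond5(x, xp, Z):
    # back (condition (5)): every positive transition of S' is covered by a transition of S
    return all(not (d2(xp, a, yp) > 0) or
               any(d1(x, a, y) >= d2(xp, a, yp) and (y, yp) in Z for y in S_states)
               for a in ACTIONS for yp in S2_states)

def largest_directed_simulation():
    Z = {(x, xp) for x in S_states for xp in S2_states if cond1(x, xp)}
    changed = True
    while changed:
        changed = False
        for x, xp in list(Z):
            if not (cond2(x, xp, Z) and cond5(x, xp, Z)):
                Z.discard((x, xp))
                changed = True
    return Z

print(sorted(largest_directed_simulation()))
```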
B. The Positive Fragment of fPDL △
If we disallow the test operator (?), then the positive fragment of fPDL △ would simply be defined to be the largest fragment of fPDL △ that (disallows the test operator and) allows implication (→) only in formulas of the form a → ϕ with a ∈ [0, 1]. Allowing the test operator makes the matter more sophisticated, as shown below (cf. [6]).
Formulas of fPDL pos △ and programs of fPDL pos✸ △ and fPDL pos✷ △ are defined inductively as follows:
• actions from Σ A are programs of fPDL pos✸ △ and fPDL pos✷ △ ;
• if α and β are programs of fPDL pos✸ △ (resp. fPDL pos✷ △ ), then α • β, α ∪ β and α * are also programs of fPDL pos✸ △ (resp. fPDL pos✷ △ );
• if ϕ is a formula of fPDL pos △ , then ϕ? is a program of fPDL pos✸ △ and (ϕ → a)? is a program of fPDL pos✷ △ , for a ∈ [0, 1];
• values from the interval [0, 1] and propositions from Σ L are formulas of fPDL pos △ ;
• if ϕ and ψ are formulas of fPDL pos △ , then
- △ϕ, ϕ ∧ ψ and ϕ ∨ ψ are formulas of fPDL pos △ ,
- if a ∈ [0, 1], then a → ϕ is a formula of fPDL pos △ ,
- if α is a program of fPDL pos✸ △ , then α ϕ is a formula of fPDL pos △ ,
- if α is a program of fPDL pos✷ △ , then [α]ϕ is a formula of fPDL pos △ .
We call fPDL pos △ the positive fragment of fPDL △ . By fK pos △ we denote the largest sublanguage of fPDL pos △ that disallows all the program constructors. That is, only actions from Σ A are programs of fK pos △ , and formulas of fK pos △ are of the form a, p, △ϕ, ϕ ∧ ψ, ϕ ∨ ψ, a → ϕ, [̺]ϕ or ̺ ϕ, where a ∈ [0, 1], p ∈ Σ L , ̺ ∈ Σ A , and ϕ and ψ are formulas of fK pos △ . Note that fPDL ∃ △ is the sublanguage of fPDL pos △ that disallows the formula constructor [α]ϕ, whereas fK ∃,0 △ is the sublanguage of fK pos △ that disallows the formula constructors [α]ϕ, ϕ ∨ ψ and a (with a ∈ [0, 1]).
An FLTS S = S, δ, L is said to be witnessed w.r.t. fPDL pos △ if:
• for every formula ϕ of fPDL pos △ and every x ∈ S, if the definition of ϕ S (x) in Definition 1 uses supremum (resp. infimum), then the set under the supremum (resp. infimum) has the biggest (resp. smallest) element if it is non-empty, • for every program α of fPDL pos✸ △ or fPDL pos✷ △ and every x, y ∈ S, if the definition of α S (x, y) in Definition 1 uses supremum (resp. infimum), then the set under the supremum (resp. infimum) has the biggest (resp. smallest) element if it is non-empty.
The notion of whether an FLTS S is witnessed w.r.t. fK pos △ is defined analogously.
Observe that if an FLTS S = S, δ, L is finite, then it is witnessed w.r.t. fPDL pos △ and fK pos △ . If S is image-finite, then it is witnessed w.r.t. fK pos △ .
C. Logical Characterizations of Crisp Directed Simulations between FLTSs
A formula ϕ is said to be preserved under directed simulations between FLTSs if, for every FLTSs S = S, δ, L and S ′ = S ′ , δ ′ , L ′ that are witnessed w.r.t. fPDL pos △ , for every directed simulation Z between them, and for every x ∈ S and x ′ ∈ S ′ , if Z(x, x ′ ) holds, then ϕ S (x) ≤ ϕ S ′ (x ′ ).
Theorem 3: All formulas of fPDL pos △ are preserved under directed simulations between FLTSs.
This theorem follows immediately from the first assertion of the following lemma, which is a counterpart of Lemma 1.
Lemma 3: Let S = S, δ, L and S ′ = S ′ , δ ′ , L ′ be FLTSs witnessed w.r.t. fPDL pos △ and Z be a directed simulation between them. Then, the following assertions hold for every x, y ∈ S, x ′ , y ′ ∈ S ′ , every formula ϕ of fPDL pos △ , every program α of fPDL pos✸ △ and every program γ of fPDL pos✷ △ , where → and ∧ are the usual crisp logical connectives:
Z(x, x ′ ) → (ϕ S (x) ≤ ϕ S ′ (x ′ )) (6)
[Z(x, x ′ ) ∧ (α S (x, y) > 0)] → ∃y ′ ∈ S ′ [(α S (x, y) ≤ α S ′ (x ′ , y ′ )) ∧ Z(y, y ′ )] (7)
[Z(x, x ′ ) ∧ (γ S ′ (x ′ , y ′ ) > 0)] → ∃y ∈ S [(γ S ′ (x ′ , y ′ ) ≤ γ S (x, y)) ∧ Z(y, y ′ )]. (8)
Proof: We prove this lemma by induction analogously as done for Lemma 1.
In comparison with the proof of Lemma 1, for the assertion (6), we only need to consider the additional case when ϕ = [γ]ψ (and γ is a program of fPDL pos✷ △ ). Consider this case. For a contradiction, suppose that
ϕ S (x) > ϕ S ′ (x ′ ). Since S ′ is witnessed w.r.t. fPDL pos △ , there exists y ′ ∈ S ′ such that ϕ S ′ (x ′ ) = (γ S ′ (x ′ , y ′ ) ⇒ ψ S ′ (y ′ )). Since ϕ S (x) > ϕ S ′ (x ′ ), we have ϕ S ′ (x ′ ) < 1, which implies that γ S ′ (x ′ , y ′ ) > 0.
By the induction assumption (8), there exists y ∈ S such that γ S ′ (x ′ , y ′ ) ≤ γ S (x, y) and Z(y, y ′ ) holds. Since Z(y, y ′ ) holds, by the induction assumption, ψ S (y) ≤ ψ S ′ (y ′ ). Since ⇒ is decreasing w.r.t. the first argument and increasing w.r.t. the second argument, it follows that
(γ S (x, y) ⇒ ψ S (y)) ≤ (γ S ′ (x ′ , y ′ ) ⇒ ψ S ′ (y ′ )), which means ϕ S (x) ≤ ϕ S ′ (x ′ ), which contradicts the as- sumption ϕ S (x) > ϕ S ′ (x ′ )
. This completes the proof of the assertion (6).
The proof of the assertion (7) is obtained from the proof of the assertion (4) of Lemma 1 by replacing the occurrences of fPDL ∃ △ , (3) and (4) with fPDL pos △ , (6) and (7), respectively. The proof of the assertion (8) is dual to the proof of the assertion (7). The only special difference is that instead of the case α = (ψ?) we need to consider the case γ = (ψ → a)?, with a ∈ [0, 1]. Consider this case. Suppose that Z(x, x ′ ) holds and γ S ′ (x ′ , y ′ ) > 0. Thus, x ′ = y ′ and γ S ′ (x ′ , y ′ ) = (ψ S ′ (x ′ ) ⇒ a). Since Z(x, x ′ ) holds, by the induction assumption (6), ψ S (x) ≤ ψ S ′ (x ′ ). Since ⇒ is decreasing w.r.t. the first argument, by choosing y = x, we have that
γ S ′ (x ′ , y ′ ) = (ψ S ′ (x ′ ) ⇒ a) ≤ (ψ S (x) ⇒ a) = γ S (x, y),
and the induction hypothesis (8) holds.
The following lemma is a counterpart of Lemma 3 for fK pos △ (instead of fPDL pos △ ). Its proof can be obtained from the proof of the assertion (6) of Lemma 3 by simplification, using (2) and (5) instead of (7) and (8), respectively. Lemma 4: Let S = S, δ, L and S ′ = S ′ , δ ′ , L ′ be FLTSs witnessed w.r.t. fK pos △ and Z be a directed simulation between them. Then, for every x ∈ S and x ′ ∈ S ′ , if Z(x, x ′ ) holds, then ϕ S (x) ≤ ϕ S ′ (x ′ ) for all formulas ϕ of fK pos △ . The following theorem is a counterpart of Theorem 2 devoted to the Hennessy-Milner property of crisp directed simulations between FLTSs. It is formulated only for image-finite FLTSs.
Theorem 4: Let S = S, δ, L and S ′ = S ′ , δ ′ , L ′ be image-finite FLTSs. Then,
Z = { x, x ′ ∈ S × S ′ | ϕ S (x) ≤ ϕ S ′ (x ′ )
for all formulas ϕ of fK pos △ } is the largest directed simulation between S and S ′ .
Proof: Clearly, all image-finite FLTSs are witnessed w.r.t. fK pos △ . By Lemma 4, it is sufficient to prove that Z is a directed simulation between S and S ′ . Let x ∈ S, x ′ ∈ S ′ , p ∈ Σ L and ̺ ∈ Σ A . We need to prove Conditions (1), (2) and (5). Condition (1) clearly holds by the definition of Z. Condition (2) can be proved analogously as done for Theorem 2 by replacing "witnessed w.r.t. fK ∃,0 △ " and "modally saturated w.r.t. fK ∃,0 △ " with the assumption that S ′ is imagefinite and replacing the remaining occurrence of fK ∃,0 △ with fK pos △ . We now prove Condition (5). Let y ′ ∈ S ′ and suppose that Z(x, x ′ ) holds and ̺ S ′ (x ′ , y ′ ) = a > 0. Let Y = {y ∈ S | ̺ S (x, y) ≥ a}. Since S is image-finite, Y is finite. We need to prove that there exists y ∈ Y such that Z(y, y ′ ) holds. For a contradiction, suppose that, for every y ∈ Y , Z(y, y ′ ) does not hold, which means there exists a formula ϕ y of fK pos △ such that ϕ S y (y) > ϕ S ′ y (y ′ ). For every y ∈ Y , let ψ y = △(ϕ S y (y) → ϕ y ), which is a formula of fK pos △ . Let Ψ = {ψ y | y ∈ Y }. Observe that, for every y ∈ Y , ψ S y (y) = 1 and ψ S ′ y (y ′ ) = 0. Let Y l = {y ∈ S | 0 < ̺ S (x, y) < a} and a l = sup{̺ S (x, y) | y ∈ Y l }, where the subscript l stands for "left". Note that, if Y l = ∅, then a l = 0, else a l = max{̺ S (x, y) | y ∈ Y l } (since S is image-finite). In any case, a l < a. Let a c = (a l + a)/2. Thus, a l < a c < a. Let ϕ = [̺]( Ψ ∨ a c ). It is a formula of fK pos △ . Since ψ S ′ y (y ′ ) = 0 for all y ∈ Y , ( Ψ) S ′ (y ′ ) = 0. Hence, ( Ψ ∨ a c ) S ′ (y ′ ) = a c and ϕ S ′ (x ′ ) ≤ (a ⇒ a c ). It follows that ϕ S ′ (x ′ ) < 1. Let's estimate ϕ S (x). For every y ∈ Y , since ψ S y (y) = 1, ( Ψ ∨ a c ) S (y) = 1. In addition, ( Ψ ∨ a c ) S (y) ≥ a c for all y ∈ Y l . Since a l < a c , we can conclude that ϕ S (x) = 1. This contradicts the facts that Z(x, x ′ ) holds and ϕ S ′ (x ′ ) < 1. This completes the proof.
Let S = S, δ, L and S ′ = S ′ , δ ′ , L ′ be FLTSs and let x ∈ S and x ′ ∈ S ′ . We write x ds x ′ to denote that there exists a directed simulation Z between S and S ′ such that Z(x, x ′ ) holds. We also write x ≤ pos x ′ (resp. x ≤ pos K x ′ ) to denote that ϕ S (x) ≤ ϕ S ′ (x ′ ) for all formulas ϕ of fPDL pos △ (resp. fK pos △ ). The following corollary follows immediately from Theorems 4 and 3.
Corollary 4: Let S = S, δ, L and S ′ = S ′ , δ ′ , L ′ be image-finite FLTSs and let x ∈ S and x ′ ∈ S ′ .
• Then,
x ds x ′ iff x ≤ pos K x ′ , and therefore whether x ≤ pos K x ′ or not does not depend on the used t-norm .
• If S and S ′ are witnessed w.r.t. fPDL pos △ , then
x ≤ pos x ′ iff x ds x ′ iff x ≤ pos K x ′ , and therefore whether x ≤ pos x ′ or not does not depend on the used t-norm .
V. CONCLUSIONS
Simulation and directed simulation are useful notions for comparing observational behaviors of automata and LTSs. Before the current work, there was a lack of logical characterizations of crisp simulations between fuzzy structures w.r.t. fuzzy modal logics (or their variants) that use a residuated lattice or a t-norm-based semantics. Furthermore, logical characterizations of crisp directed simulations for fuzzy structures had not been studied.
In this paper, we have provided and proved logical characterizations of crisp simulations and crisp directed simulations between FLTSs w.r.t. fragments of the fuzzy modal logic fPDL △ under a general t-norm-based semantics. The preservation result for crisp simulations (resp. crisp directed simulations) has been formulated for fPDL ∃ △ (resp. fPDL pos △ ), whereas the Hennessy-Milner property for them has been formulated for a minimized fragment of fPDL ∃ △ (resp. fPDL pos △ ) in order to increase the generality.
An operator : [0, 1] × [0, 1] → [0, 1] is called a t-norm if it is commutative and associative, has 1 as the neutral element, and is increasing w.r.t. both the arguments. If L = [0, 1], is a left-continuous t-norm and ⇒ : [0, 1] × [0, 1] → [0, 1] is the operator defined by (x ⇒ y) = sup{z | z x ≤ y}, then ⇒ is called the residuum of .
Let △ : [0, 1] → [0, 1] be the operator defined by △x = (if x = 1 then 1 else 0).
Example 1: Let Σ A = {̺}, Σ L = {p} and let S and S ′ be the FLTSs specified below and depicted in Fig. 1. • S = S, δ, L , where S = {u 1 , u 2 , u 3 , u 4 }, L(u 1 )(p) = 0.7, L(u 2 )(p) = 0.7, L(u 3 )(p) = 0.6, L(u 4 )(p)
Fig. 1. An illustration for Example 1.
Fig. 2. An illustration for Example 2.
Concurrency and automata on infinite sequences. D Park, Proceedings of the 5th GI-Conference, ser. LNCS, P. Deussen. the 5th GI-Conference, ser. LNCS, P. DeussenSpringer104D. Park, "Concurrency and automata on infinite sequences," in Pro- ceedings of the 5th GI-Conference, ser. LNCS, P. Deussen, Ed., vol. 104. Springer, 1981, pp. 167-183.
Process simulation and refinement. J He, Formal Aspects Comput. 13J. He, "Process simulation and refinement," Formal Aspects Comput., vol. 1, no. 3, pp. 229-241, 1989.
Extending modal logic. M De Rijke, Ph.D. dissertation, ILLC, University of Amsterdam. M. de Rijke, "Extending modal logic," Ph.D. dissertation, ILLC, Uni- versity of Amsterdam, 1993.
Modal Logic, ser. Cambridge Tracts in Theoretical Computer Science. P Blackburn, M De Rijke, Y Venema, Cambridge University PressP. Blackburn, M. de Rijke, and Y. Venema, Modal Logic, ser. Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, 2001, no. 53.
Simulating without negation. N Kurtonina, M De Rijke, J. Log. Comput. 74N. Kurtonina and M. de Rijke, "Simulating without negation," J. Log. Comput., vol. 7, no. 4, pp. 501-522, 1997.
On directed simulations in description logics. A Divroodi, L Nguyen, J. Log. Comput. 277A. Divroodi and L. Nguyen, "On directed simulations in description logics," J. Log. Comput., vol. 27, no. 7, pp. 1955-1986, 2017.
Bisimulations for weighted automata over an additively idempotent semiring. N Damljanović, M Ćirić, J Ignjatović, Theor. Comput. Sci. 534N. Damljanović, M.Ćirić, and J. Ignjatović, "Bisimulations for weighted automata over an additively idempotent semiring," Theor. Comput. Sci., vol. 534, pp. 86-100, 2014.
Lattice-valued simulations for quantitative transition systems. H Pan, Y Li, Y Cao, Int. J. Approx. Reason. 56H. Pan, Y. Li, and Y. Cao, "Lattice-valued simulations for quantitative transition systems," Int. J. Approx. Reason., vol. 56, pp. 28-42, 2015.
Logical characterizations of simulation and bisimulation for fuzzy transition systems. H Wu, Y Deng, Fuzzy Sets Syst. 301H. Wu and Y. Deng, "Logical characterizations of simulation and bisimulation for fuzzy transition systems," Fuzzy Sets Syst., vol. 301, pp. 19-36, 2016.
Bisimilarity in fuzzy description logics under the Zadeh semantics. L Nguyen, IEEE Trans. Fuzzy Systems. 276L. Nguyen, "Bisimilarity in fuzzy description logics under the Zadeh semantics," IEEE Trans. Fuzzy Systems, vol. 27, no. 6, pp. 1151-1161, 2019.
Computing crisp simulations and crisp directed simulations for fuzzy graph-based structures. abs/2012.01845CoRR. --, "Computing crisp simulations and crisp directed simulations for fuzzy graph-based structures," CoRR, vol. abs/2012.01845, 2020. [Online]. Available: https://arxiv.org/abs/2012.01845
Bisimulations for fuzzy automata. M Ćirić, J Ignjatović, N Damljanović, M Basic, Fuzzy Sets and Systems. 1861M.Ćirić, J. Ignjatović, N. Damljanović, and M. Basic, "Bisimulations for fuzzy automata," Fuzzy Sets and Systems, vol. 186, no. 1, pp. 100- 139, 2012.
Simulation for lattice-valued doubly labeled transition systems. H Pan, Y Cao, M Zhang, Y Chen, Int. J. Approx. Reason. 553H. Pan, Y. Cao, M. Zhang, and Y. Chen, "Simulation for lattice-valued doubly labeled transition systems," Int. J. Approx. Reason., vol. 55, no. 3, pp. 797-811, 2014.
Bisimulations in fuzzy social network analysis. J Ignjatović, M Ćirić, I Stanković, Proceedings of IFSA-EUSFLAT-15. IFSA-EUSFLAT-15Atlantis PressJ. Ignjatović, M.Ćirić, and I. Stanković, "Bisimulations in fuzzy social network analysis," in Proceedings of IFSA-EUSFLAT-15. Atlantis Press, 2015.
Computing fuzzy bisimulations for fuzzy structures under the Gödel semantics. L Nguyen, D Tran, IEEE Trans. Fuzzy Syst. 297L. Nguyen and D. Tran, "Computing fuzzy bisimulations for fuzzy structures under the Gödel semantics," IEEE Trans. Fuzzy Syst., vol. 29, no. 7, pp. 1715-1724, 2021.
Bisimulations for fuzzy-transition systems. Y Cao, G Chen, E Kerre, IEEE Trans. Fuzzy Systems. 193Y. Cao, G. Chen, and E. Kerre, "Bisimulations for fuzzy-transition systems," IEEE Trans. Fuzzy Systems, vol. 19, no. 3, pp. 540-552, 2011.
Notions of bisimulation for Heyting-valued modal languages. P Eleftheriou, C Koutras, C Nomikos, J. Log. Comput. 222P. Eleftheriou, C. Koutras, and C. Nomikos, "Notions of bisimulation for Heyting-valued modal languages," J. Log. Comput., vol. 22, no. 2, pp. 213-235, 2012.
A behavioral distance for fuzzy-transition systems. Y Cao, S Sun, H Wang, G Chen, IEEE Trans. Fuzzy Systems. 214Y. Cao, S. Sun, H. Wang, and G. Chen, "A behavioral distance for fuzzy-transition systems," IEEE Trans. Fuzzy Systems, vol. 21, no. 4, pp. 735-747, 2013.
Logical characterizations of regular equivalence in weighted social networks. T Fan, C Liau, Artif. Intell. 214T. Fan and C. Liau, "Logical characterizations of regular equivalence in weighted social networks," Artif. Intell., vol. 214, pp. 66-88, 2014.
Fuzzy bisimulation for Gödel modal logic. T.-F Fan, IEEE Trans. Fuzzy Systems. 236T.-F. Fan, "Fuzzy bisimulation for Gödel modal logic," IEEE Trans. Fuzzy Systems, vol. 23, no. 6, pp. 2387-2396, 2015.
Bisimulations for fuzzy transition systems revisited. H Wu, T Chen, T Han, Y Chen, Int. J. Approx. Reason. 99H. Wu, T. Chen, T. Han, and Y. Chen, "Bisimulations for fuzzy transition systems revisited," Int. J. Approx. Reason., vol. 99, pp. 1-11, 2018.
Algorithmic and logical characterizations of bisimulations for non-deterministic fuzzy transition systems. H Wu, Y Chen, T Bu, Y Deng, Fuzzy Sets Syst. 333H. Wu, Y. Chen, T. Bu, and Y. Deng, "Algorithmic and logical character- izations of bisimulations for non-deterministic fuzzy transition systems," Fuzzy Sets Syst., vol. 333, pp. 106-123, 2018.
Bisimulation and bisimilarity for fuzzy description logics under the Gödel semantics. L Nguyen, Q.-T Ha, N Nguyen, T Nguyen, T.-L Tran, Fuzzy Sets and Systems. 388L. Nguyen, Q.-T. Ha, N. Nguyen, T. Nguyen, and T.-L. Tran, "Bisim- ulation and bisimilarity for fuzzy description logics under the Gödel semantics," Fuzzy Sets and Systems, vol. 388, pp. 146-178, 2020.
Minimizing interpretations in fuzzy description logics under the Gödel semantics by using fuzzy bisimulations. L Nguyen, N.-T Nguyen, Journal of Intelligent and Fuzzy Systems. 376L. Nguyen and N.-T. Nguyen, "Minimizing interpretations in fuzzy description logics under the Gödel semantics by using fuzzy bisimu- lations," Journal of Intelligent and Fuzzy Systems, vol. 37, no. 6, pp. 7669-7678, 2019.
Computing crisp bisimulations for fuzzy structures. L Nguyen, D Tran, CoRR. L. Nguyen and D. Tran, "Computing crisp bisimulations for fuzzy structures," CoRR, vol. abs/2010.15671, 2020. [Online]. Available: https://arxiv.org/abs/2010.15671
Logical characterizations of fuzzy bisimulations in fuzzy modal logics over residuated lattices. L Nguyen, Fuzzy Sets and SystemsL. Nguyen, "Logical characterizations of fuzzy bisim- ulations in fuzzy modal logics over residuated lat- tices," Fuzzy Sets and Systems, 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S016501142100289X
| []
|
[
"epl draft Coexistence of dilute and densely packed domains of ligand-receptor bonds in membrane adhesion",
"epl draft Coexistence of dilute and densely packed domains of ligand-receptor bonds in membrane adhesion"
]
| [
"Daniel Schmidt \nII. Institut für Theoretische Physik\nUniversität Stuttgart\nGermany\n",
"Timo Bihr \nII. Institut für Theoretische Physik\nUniversität Stuttgart\nGermany\n",
"Udo Seifert \nII. Institut für Theoretische Physik\nUniversität Stuttgart\nGermany\n",
"Ana-Sunčana Smith \nInstitut für Theoretische Physik and Excellence Cluster: Engineering of Advanced Materials\nUniversität Erlangen-Nürnberg\nGermany\n"
]
| [
"II. Institut für Theoretische Physik\nUniversität Stuttgart\nGermany",
"II. Institut für Theoretische Physik\nUniversität Stuttgart\nGermany",
"II. Institut für Theoretische Physik\nUniversität Stuttgart\nGermany",
"Institut für Theoretische Physik and Excellence Cluster: Engineering of Advanced Materials\nUniversität Erlangen-Nürnberg\nGermany"
]
| []
| We analyze the stability of micro-domains of ligand-receptor bonds that mediate the adhesion of biological model membranes. After evaluating the effects of membrane fluctuations on the binding affinity of a single bond, we characterize the organization of bonds within the domains by theoretical means. In a large range of parameters, we find the commonly suggested dense packing to be separated by a free energy barrier from a regime in which bonds are sparsely distributed. If bonds are mobile, a coexistence of the two regimes should emerge, which agrees with recent experimental observations. p-6 | 10.1209/0295-5075/99/38003 | [
"https://arxiv.org/pdf/1207.2575v1.pdf"
]
| 118,833,072 | 1207.2575 | d4ce11379295b66d2a09fbeff40939dd3a4fd8df |
epl draft Coexistence of dilute and densely packed domains of ligand-receptor bonds in membrane adhesion
11 Jul 2012
Daniel Schmidt
II. Institut für Theoretische Physik
Universität Stuttgart
Germany
Timo Bihr
II. Institut für Theoretische Physik
Universität Stuttgart
Germany
Udo Seifert
II. Institut für Theoretische Physik
Universität Stuttgart
Germany
Ana-Sunčana Smith
Institut für Theoretische Physik and Excellence Cluster: Engineering of Advanced Materials
Universität Erlangen-Nürnberg
Germany
epl draft Coexistence of dilute and densely packed domains of ligand-receptor bonds in membrane adhesion
11 Jul 2012. arXiv:1207.2575v1 [cond-mat.soft]. PACS 87.16.dj - Membrane: dynamics and fluctuations; PACS 87.16.dt - Membrane: domains and rafts; PACS 87.17.Rt - Cellular adhesion
We analyze the stability of micro-domains of ligand-receptor bonds that mediate the adhesion of biological model membranes. After evaluating the effects of membrane fluctuations on the binding affinity of a single bond, we characterize the organization of bonds within the domains by theoretical means. In a large range of parameters, we find the commonly suggested dense packing to be separated by a free energy barrier from a regime in which bonds are sparsely distributed. If bonds are mobile, a coexistence of the two regimes should emerge, which agrees with recent experimental observations. p-6
The key step in the recognition process of living cells is the establishment of adhesive contacts either between opposing membranes of two cells or between the membrane of a cell and the extracellular matrix (ECM). It has been shown previously that the organization of bonds within domains has strong effects on the adhesion of cells and the consequent active response [1]. Most insightful were the experiments with cells binding to substrates containing ligands organized on a hexagonal lattice of a characteristic length between 40 and 150 nm. A distance of 58 to 73 nm between bonds was shown necessary for a successful formation of domains [2], and at distances larger than 90 nm, domains would not form [3].
Instead of using living cells, the so-called bottom up approach [4] has been successfully used to elucidate various elements relevant to cell membranes and adhesion [5]. The main protagonists of this research are giant unilamellar vesicles that are functionalized with ligands to interact with receptors immobilized on the surface [6]. Depending on the density of binders on the substrate and in the vesicle, as well as on the intrinsic binding affinity of the binding pair, either domains with densely packed bonds have been observed to grow radially from a nucleation center, or no specific adhesion was reported. In the context of the formation of these densely packed domains, valuable information on their nucleation and the growth [7], equilibrium [8], cooperative effects [9][10][11], and membrane roughness [12][13][14] have been discussed over the years from both the experimental and the theoretical points of view.
With the development of experimental methods [15], more detailed imaging of the distribution of bonds within the adhesion domains became possible. Consequently, large domains consisting of sparsely distributed bonds have been identified in coexistence with densely packed domains [16,17]. It was reported that sparse domains may become densely packed by a gradual increase of density of ligand receptor bonds within an area of the domain of several square microns. However, some sparse domains were also found stable on time scales of the experiment (several hours) [17].
The coexistence between the sparse and dense domains driven by membrane-mediated interactions was first shown within an effective model [18]. About the same time, an adhesion-stabilized phase separation induced by attractive interactions of binders within the same membrane was suggested [19]. A somewhat similar phase diagram emerged from considering the interplay of a binding bond potential and a non-specific repulsion [20], but in the absence of membrane-transmitted correlations. More recently, a complex phase behavior was suggested for active binders and binders of different length coexisting within the same membrane [11,21]. Here we show that the stability of sparse domains and their coexistence with dense domains emerges from basic principles, at low membrane tension. Thereby, the cost of deforming the membrane in an effective non-specific potential is balanced with the energy gain associated with the formation of bonds.
The model. -We consider bonds placed on a regular square (sq) or central hexagonal (ch) lattice. Thereby, we assume that the lateral size of the membrane adhesion domain is significantly larger than the size of a lattice unit cell. We model the total free energy of a domain (containing N bonds) and investigate its dependence on the distance d between the bonds.
The membrane deformation energy (in units of k B T , where k B is the Boltzmann constant and T the temperature) is described by the Helfrich Hamiltonian [18,20,22]
H_0 = \int_A d\mathbf{r} \left[ \frac{\kappa}{2} \left( \nabla^2 h(\mathbf{r}) \right)^2 + \frac{\sigma}{2} \left( \nabla h(\mathbf{r}) \right)^2 + \frac{\gamma}{2} \left( h(\mathbf{r}) - h_0 \right)^2 \right] . \quad (1)
Thereby, the Monge parametrization is used to represent the membrane of a bending stiffness κ as a surface of projected area A placed above the substrate at the height h(r), r = (x, y) being the in-plane position vector. The first term is the bending energy, whereas the second term accounts for the tension σ in the membrane. In a simplistic manner, the last term models the generic membranesubstrate interaction potential with the harmonic potential of a strength γ with a minimum at the height h 0 . In the context of mimetic systems, this interaction potential encompasses for a number of contributions such as the van-der-Waals attraction, or the steric repulsion emerging from both repeller and membrane shape fluctuations [22]. In the case of cells, numerous other factors associated with actin (de)polymerization, active forces, the glycocalix and the ECM, may all contribute to this potential depending on the cell type and the treatment of the substrate. The receptors are modeled as thermalized harmonic springs of rest length l 0 and spring constant λ, fixed for all bonds on the lattice. When the receptor is relatively stiff such as a bulky protein, it is modeled with a very large spring constant (λ → ∞). If the receptor is a soft polymer, deforming to form a bond, λ is set finite.
Ligands and receptors interact through a square-well potential [20,23], of a very short range α and depth ǫ b (1− 35 k B T ), the latter associated with the intrinsic binding affinity. Thus, the total Hamiltonian H of a domain with N bonds situated at position r j b reads
H = H_0 + \frac{\lambda}{2} \sum_{j=1}^{N} \left( l(\mathbf{r}^b_j) - l_0 \right)^2 + N \epsilon_b \equiv N H_d + N \epsilon_b . \quad (2)
Here, l(r j b ) is the extension of the j-th spring. When α → 0, obviously h(r j b ) → l(r j b ). Furthermore, H d denotes the deformation energy per bond stored in the membrane and all receptors. The last term is the binding enthalpy.
Free energy. -The stability of the domain is determined from the difference ∆F N between the free energy of the domain with N formed bonds F N b , and the free energy of the reference state in which receptors and the membrane fluctuate freely F N ub . Both F N b and F N ub are calculated from the partition function Z, comprising all possible conformations of N receptors and the membrane. Thereby, the partition functions of the reference and the bound state (Z ub and Z b ), and hence F N ub ≡ ln Z ub and F N b ≡ ln Z b , are associated with the conformations in which the membrane is outside or within the range α of the square potential at the position of all receptors, respectively. The domain is
stable if ∆F N = F N b − F N ub < 0.
Here we have clearly omitted the change in the mixing entropy of binders that could, in principle affect the results. However, we assume the adhesion to be mediated by large domains of bonds. Such domains typically form when the concentration of the mobile ligands is significantly larger than the concentration of the immobilized receptors [8]. In the context of the free energy of the system, in this regime the mixing entropy will provide a constant, proportional to the number of bonds and to the chemical potential of the free ligands, but independent on the distance between bonds. Consequently, in this regime, the mixing entropy will act to re-scale the effective binding affinity of a single bond [18], without affecting the general phase behavior of the domains.
The reference state for N receptors.
The partition function for the free membrane and N unbound fluctuating receptors is, up to the normalization,
Z_{ub} = \prod_{j=1}^{N} \int dl(\mathbf{r}^b_j)\, \exp\!\left[ -\frac{\lambda}{2} \left( l(\mathbf{r}^b_j) - l_0 \right)^2 \right] \times \int \mathcal{D}[h'(\mathbf{r})]\, e^{-H_0[h'(\mathbf{r})]} \equiv C \left( \frac{2\pi}{\lambda} \right)^{N/2} \quad (3)
where C denotes the result of the functional integral over h ′ (r). In this state, the membrane is free, on average flat, and positioned in the minimum of the nonspecific potential h(r) = h 0 . The fluctuation amplitude is that of a membrane under tension [24]
\Sigma_0^2 \equiv \frac{1}{A} \sum_{\mathbf{q}} \frac{1}{\kappa q^4 + \sigma q^2 + \gamma} = \frac{\arctan\!\left( \sqrt{4\kappa\gamma - \sigma^2}/\sigma \right)}{2\pi \sqrt{4\kappa\gamma - \sigma^2}} \quad (4)
where the sum runs over all possible wave vectors q with q ≡ |q|, set by the system size. The spatial correlation function [24,25] is simply given by
G(\mathbf{r}_0 - \mathbf{r}) \equiv \frac{1}{A} \sum_{\mathbf{q}} \frac{\cos\left( \mathbf{q} \cdot (\mathbf{r}_0 - \mathbf{r}) \right)}{\kappa q^4 + \sigma q^2 + \gamma} , \quad (5)
where r 0 and r are arbitrary positions on the membrane. In the tensionless limit,
Σ 2 0 = 1/(8 √(κγ)) and G(r) ≈ −4π −1 ξ 2 ⊥ kei(r/ξ),
with kei signifying the Kelvin function. Thereby, ξ 2 ⊥ ≡ 1/(8 √(κγ)) and ξ ≡ (κ/γ)^{1/4} are the vertical roughness and the lateral correlation length of a tensionless unbound membrane, respectively [22,24,25], setting the length and other scales of the system.
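As a quick numerical illustration (not part of the paper), the tensionless expressions quoted above can be evaluated directly. The bending rigidity and potential strength used below are arbitrary example values, scipy's Kelvin function kei is used for G(r), and the reading ξ ≡ (κ/γ)^{1/4} for the lateral correlation length is an assumption made for this sketch.

```python
# Illustrative numerics (not from the paper): membrane roughness and spatial
# correlation function in the tensionless limit, using the closed forms above.
import numpy as np
from scipy.special import kei  # Kelvin function kei(x)

kappa = 20.0      # bending rigidity in units of k_B T (assumed value)
gamma = 2.0e-6    # strength of the nonspecific potential, k_B T / nm^4 (assumed)

xi_perp2 = 1.0 / (8.0 * np.sqrt(kappa * gamma))   # vertical roughness^2, nm^2
xi_par = (kappa / gamma) ** 0.25                  # lateral correlation length, nm (assumed reading)

def G(r):
    """Tensionless correlation function G(r) ~ -(4/pi) * xi_perp^2 * kei(r/xi_par)."""
    return -4.0 / np.pi * xi_perp2 * kei(r / xi_par)

print(f"Sigma_0^2 = xi_perp^2 = {xi_perp2:.1f} nm^2, xi_par = {xi_par:.1f} nm")
for r in (1.0, 0.5 * xi_par, xi_par, 2 * xi_par, 5 * xi_par):
    print(f"G({r:7.1f} nm) = {G(r):8.2f} nm^2")
```

Since kei(0) = −π/4, this reproduces G(0) = Σ 2 0 , which provides a quick consistency check of the prefactor.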
An isolated bond -N = 1.
The shape of the membrane bound to the substrate by one bond is typified by the Kelvin function [25]. The associated membrane de-
formation energy U(σ) = (h 0 − h(r b )) 2 /(2Σ 2 0 )
is quadratic with respect to the offset from the minimum of the nonspecific potential . When λ → ∞ the entire deformation is stored in the membrane. If, furthermore, σ = 0, one finds U 0 = (h 0 − l 0 ) 2 /2ξ 2 ⊥ , as previously determined [18]. Because both, the membrane and the receptor deformations are quadratic with the elongation, we map the problem of forming one bond to a problem of two onedimensional thermalized springs of stiffness k 1 and k 2 (Fig. 2). The springs are said to interact if their relative distance falls within a square-well potential of a (short) range α and depth V 0 . The the free energy difference between the bound and the unbound state
\Delta F_{sp} = \frac{1}{2} \frac{k_1 k_2 L^2}{k_1 + k_2} - V_0 + \frac{1}{2} \ln\!\left[ \frac{2\pi (k_1 + k_2)}{k_1 k_2} \frac{1}{\alpha^2} \right] \quad (6)
is calculated as described previously, by subdividing the configurational space of the receptor and the membrane. The first term on the right side is identified with the deformation energy of the two springs, characterized by a reduced spring constant of the coupled system consisting of two springs in series k 1 k 2 /(k 1 + k 2 ) elongated to meet the system size L. The second and the third terms are the enthalpy gain and the entropy loss due to the formation of a bond. Thereby, the last contribution only affects the depth of the potential. By analogy (Fig. 2), for a membrane binding to a single receptor (indicated by the superscript 1)
\Delta F^1 = \frac{1}{2} \frac{(h_0 - l_0)^2}{\Sigma_0^2 + 1/\lambda} - \bar{\epsilon}_b \equiv H_d^1 - \bar{\epsilon}_b . \quad (7)
Thereby, H 1 d is the total deformation energy associated with an isolated bond and sets the energy scale of the problem. For σ = 0 and λ → ∞, H 1 d = U 0 . Furthermore, ǫ b is the effective binding affinitȳ which is the contribution of a single bond to the free energy. In essence, it is the intrinsic binding affinity decreased by the entropic cost related to the change in the fluctuations of the membrane and the receptor upon binding. At room temperature and typical parameters this cost amounts to several k B T . If mixing entropy of the binders is considered in the regime of large domains, the chemical potential of free ligands would need to be subtracted on the right hand side of the eq. (8).
\bar{\epsilon}_b \equiv \epsilon_b - \frac{1}{2} \ln\!\left[ \frac{2\pi}{\alpha^2} \left( \frac{1}{\lambda} + \Sigma_0^2 \right) \right] , \quad (8)
A domain with N bonds. The free energy of an arbitrary bond configuration of N bonds emerges from the partition function for the bound system Z b
Z_b = \int \mathcal{D}[h'(\mathbf{r})] \prod_{j=1}^{N} \int_{h'(\mathbf{r}^b_j) - \alpha}^{h'(\mathbf{r}^b_j)} dl(\mathbf{r}^b_j)\, e^{-H[h'(\mathbf{r}), \{ l(\mathbf{r}^b_j) \}]} , \quad (9)
that accounts for all conformations in which all receptors and the membrane are simultaneously within the bond potential range α. With the Hamiltonian from eq. (2), one gets
Z_b = C \alpha^N \frac{\exp\!\left( - H_d^1 \sum_{i,j=1}^{N} M^{-1}_{ij} - N \epsilon_b \right)}{\sqrt{ (1 + \lambda \Sigma_0^2)^N \det M }} , \quad (10)
with M ij ≡ (δ ij + λG(r i b − r j b ))/(1 + λΣ 2 0 ) accounting for the membrane-coupled deformations of bound receptors on positions r i b and r j b . The free energy difference becomes
\Delta F^N = H_d^1 \sum_{i,j=1}^{N} M^{-1}_{ij} - N \bar{\epsilon}_b + \frac{1}{2} \ln\left( \det M \right) . \quad (11)
Similarly to eq. (7), the first term in eq. (11) is the total deformation energy of the receptors and the membrane. The second term is proportional to the effective binding affinity of a single bond and to the total number of bonds within the domain. The last term calculates the fluctuation-induced interactions between the bonds, which interestingly, can be fully decoupled from other contributions. At large bond separations, d ≫ ξ , H d ≈ H 1 d and ∆F N /N ≈ ∆F 1 .
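A rough numerical sketch of how eq. (11) can be evaluated is given below; it is our own illustration rather than the authors' code. It places N bonds on a square patch, builds M from the tensionless correlation function G, and uses eqs. (7) and (8) for H 1 d and ǭ b . All parameter values (κ, γ, λ, h 0 , l 0 , α, ǫ b ) are invented for the example.

```python
# A rough numerical sketch (not the authors' code) of the domain free energy per
# bond, eq. (11)/N, for bonds on a square lattice with spacing d (tensionless case).
import numpy as np
from scipy.special import kei

kappa, gamma = 20.0, 2.0e-6          # k_B T and k_B T/nm^4 (assumed)
lam = 0.5                            # receptor spring constant, k_B T/nm^2 (assumed)
h0, l0, alpha = 40.0, 20.0, 1.0      # nm (assumed)
eps_b = 8.0                          # intrinsic binding affinity, k_B T (assumed)

Sigma0_2 = 1.0 / (8.0 * np.sqrt(kappa * gamma))
xi_par = (kappa / gamma) ** 0.25

def G(r):
    r = np.asarray(r, dtype=float)
    return -4.0 / np.pi * Sigma0_2 * kei(np.where(r > 0, r, 1e-9) / xi_par)

def dF_per_bond(d, n_side=5):
    """Free energy per bond for an n_side x n_side square patch of bonds."""
    xs = np.arange(n_side) * d
    pos = np.array([(x, y) for x in xs for y in xs])
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    Gmat = np.where(dist > 0, G(dist), Sigma0_2)          # G(0) = Sigma0^2
    M = (np.eye(len(pos)) + lam * Gmat) / (1.0 + lam * Sigma0_2)
    H1d = 0.5 * (h0 - l0) ** 2 / (Sigma0_2 + 1.0 / lam)   # eq. (7)
    eps_eff = eps_b - 0.5 * np.log(2 * np.pi / alpha**2 * (1.0 / lam + Sigma0_2))  # eq. (8)
    Minv_sum = np.linalg.inv(M).sum()
    N = len(pos)
    sign, logdet = np.linalg.slogdet(M)
    return (H1d * Minv_sum - N * eps_eff + 0.5 * logdet) / N

for d in (20.0, 50.0, 100.0, 200.0, 400.0):
    print(f"d = {d:6.1f} nm : Delta F^N / N = {dF_per_bond(d):8.3f} k_B T")
```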
p-3 Phase diagram. To compare domains with the same number of bonds that due to packing, cover different area, we analyze the free energy density ∆f ≡ ∆F N /A that if negative, signifies stable domains ( fig. 3). Three distinct regimes are apparent even though a global minimum is found in all cases. This minimum, however, appears at such small d that it could be rendered inaccessible by the finite size of the binders.
In the limit when d → ∞, the ∆f → 0 limit is approached either from positive values ( fig. 3, red-dashed lines) or negative values (blue and yellow-dotted lines), signifying unstable and stable binding, respectively. In the former case (regime (i) in the phase diagram), by a maximum at intermediate distances precedes either stable (regime (ii)) or unstable domains at large distances between the bonds (regime (i)), in which case the domains are stable only close to the boundary minimum. The phase transition line separating the regimes (i) and (ii) is found in the limit of d → ∞ asǭ b = H 1 d Following the maximum, ∆f curves of the region (ii) possess a shallow secondary minimum that is always negative, with locally stable domains. The maximum, on the other hand, penetrates toward positive values of ∆f for smaller values ofǭ b , making the domains unstable at intermediate bond separations. At larger values ofǭ b , the maximum may remain with fully negative free energies, hence stable domains can take place for any distances between bonds. This is also true for the regime (iii) in which ∆f remains negative, despite its monotonous increase. The border between the regimes (ii) and (iii) is given by the (dis)appearance of a root in the second derivative of the free energy density with respect to the bond distance, and has to be determined numerically.
This phase diagram should be applicable to the situation in which the distance between the bonds is predefined, such as when one of the binder type is immobilized on the substrate [3,6]. The domains should be observable for any distance in which the total free energy density is smaller than zero. Consequently, the commonly observed densely packed domains [16] can be found within our model at low free energies. However, when both receptors and ligands are mobile, the distance between the bonds within the domain becomes a free parameter. In this case, domains should be found only at the distances at which the minima in the total energy density appear. More specifically, apart from the densely packed agglomerates, domains associated with the minimum in the region (ii) at intermediate distances between the bonds, should be seen. Indeed, coexistence between densely packed domains and domains with a sparse distribution of bonds has been observed recently in experiments with mobile ligand-receptor pairs [17,26].
Optimum deformation. -The understanding of the above phase diagram evolves from the analysis of the mean membrane shape and the fluctuations amplitude. They emerge as moments of the height probability distribution p(h(r)) of the membrane within the domain with a fixed bond configuration. The latter is a functional integral over all appropriately weighed realizations of the membrane profile
p\left( h(\mathbf{r}) \right) \sim \int \mathcal{D}[h'(\mathbf{r})]\, \exp\left( -H[h'(\mathbf{r})] \right) \delta\left( h'(\mathbf{r}) - h(\mathbf{r}) \right) \sim \exp\!\left[ - \frac{1}{2} \frac{\left( h(\mathbf{r}) - \langle h(\mathbf{r}) \rangle \right)^2}{\Sigma^2(\mathbf{r})} \right] . \quad (12)
Because of the quadratic form of eq. (2), p (h(r)) is a Gaussian distribution with the expectation value giving the equilibrium shape
h(r) ≡ h 0 − (h 0 − l 0 ) Σ 2 0 /Σ 2 (r) ij G(r i b − r)L −1 ij ,(13)
and variance Σ 2 (r)
Σ 2 (r) ≡ Σ 4 0 Σ 2 0 + ij G(r i b − r)L −1 ij G(r j b − r) ,(14)
being the fluctuation amplitude. Thereby
L_{ij} \equiv \frac{\delta_{i,j}}{\lambda} + G(\mathbf{r}^b_i - \mathbf{r}^b_j) - \frac{G(\mathbf{r}^b_i - \mathbf{r})\, G(\mathbf{r}^b_j - \mathbf{r})}{\Sigma_0^2} . \quad (15)
By setting N = 0 and N = 1 one recovers the results presented in previous sections. The equilibrium shape can be also determined by direct minimization of H from eq. (2), with constrained extension of bonds, and periodic boundary conditions. Consequently, the effect of the lattice is explicit, and one obtains
\langle h(\mathbf{r}) \rangle = h_0 - \frac{g(\mathbf{r})}{\phi + 1/\lambda} \, (h_0 - l_0) . \quad (16)
Here φ ≡ a −1 q (κq 4 + σq 2 + γ) −1 and g(r) ≡ a −1 q cos(qr)(κq 4 + σq 2 + γ) −1 , with a being the area of a unit cell. For the squared lattice, the sums run over wave vectors q ≡ (q 1 , q 2 ) = (2πz 1 /d, 2πz 2 /d), with z 1 , z 2 being integers, while for the hexagonal lattice, q 1 = 2π(2z 1 + z 2 )/( √ 3d) and q 2 = 2πz 2 /d. Combining the total Hamiltonian, eq. (2), and the equilibrium shape of the membrane, eq. (16), results in the total membrane and spring deformation energy per bond
H_d[\langle h(\mathbf{r}) \rangle] = \frac{(h_0 - l_0)^2}{2\,[\phi + 1/\lambda]} . \quad (17)
Thereby, all bonds have the same extension and the membrane is at h(r i b ) = h b , for all r i b . This result is consistent with the first term in eq.(11) as well as with the 1D energy profile found for a membrane deformed by two infinite cylinders [27], and the potential calculated for the interaction between two bonds [18].
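The lattice sum φ and the resulting deformation energy per bond, eq. (17), can be evaluated numerically; the following sketch (ours, with arbitrary parameter values and a finite cutoff on the reciprocal-lattice sum) does this for a square lattice of stiff bonds and scans the bond spacing d.

```python
# Illustrative evaluation (our own sketch) of the lattice sum phi and of the
# deformation energy per bond, eq. (17), for a square lattice of stiff bonds
# (lambda -> infinity). Parameter values and cutoffs are arbitrary choices.
import numpy as np

kappa, gamma, sigma = 20.0, 2.0e-6, 0.0   # k_B T, k_B T/nm^4, k_B T/nm^2 (assumed)
h0, l0 = 40.0, 20.0                        # nm (assumed)

def phi_square_lattice(d, zmax=60):
    """phi = a^-1 sum_q 1/(kappa q^4 + sigma q^2 + gamma), q = 2*pi*(z1, z2)/d."""
    z = np.arange(-zmax, zmax + 1)
    q1, q2 = np.meshgrid(2 * np.pi * z / d, 2 * np.pi * z / d)
    q2_tot = q1**2 + q2**2
    a = d * d                              # area of the square unit cell
    return np.sum(1.0 / (kappa * q2_tot**2 + sigma * q2_tot + gamma)) / a

def H_d(d):
    """Deformation energy per bond, eq. (17), in the stiff-bond limit."""
    return (h0 - l0) ** 2 / (2.0 * phi_square_lattice(d))

ds = np.linspace(50, 1200, 24)
energies = [H_d(d) for d in ds]
d_min = ds[int(np.argmin(energies))]
print("d [nm]   H_d [k_B T]")
for d, e in zip(ds, energies):
    print(f"{d:7.1f}  {e:8.3f}")
print(f"minimum of the scanned values at d ~ {d_min:.0f} nm")
```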
It is instructive to first analyze the case of stiff bonds when λ(h(r b ) − l 0 ) 2 → 0, and H d , eq. (17), depends only on the lattice type and d ( fig. 4). The optimum deformation energy has a shallow minimum at intermediate distances between the bonds, causing the minimum in the free energy density. This minimum can be understood by analyzing the shape of the membrane. Namely, when the bonds are far apart, they act as isolated bonds produc- ing a local deformation in which the membrane, prior to flattening into the minimum of the nonspecific potential at h 0 , overshoots h 0 ( fig. 4). As the bonds come closer, the overshoots become shared by neighboring bonds, decreasing the overall cost in bending. When the overshoots fully overlap, the shallow minimum appears in ∆f (shapes (c) in fig. 4). Bringing the bonds even closer, again increases the bending energy providing an energy barrier (shapes (b)). When the overshoots disappear the membrane starts to flatten between the bonds. Consequently, the energy slides towards a boundary minimum H d → 0, as d → 0. With the increasing tension (at constant γ) the overshoots in the shape become less pronounced, but the cost for deforming the membrane rise ( fig. 4). The secondary minimum becomes shallower and appears at larger d, shrinking the parameter space for the phase (ii).
In the limit of d → 0, flattening of the membrane caused by the large density of bonds makes both the deformation energy and fluctuations insensitive to the tension ( fig. 5). If, on the other hand, d → ∞, bonds on any lattice become independent from one another and the maximal fluctuation amplitude Σ 2 m of the domain fluctuation profile Σ(r) 2 , takes the limit Σ 2 m → Σ 2 0 . Decreasing the spring constant λ transmits the deformation from the membrane to the receptors, again affecting the size of the region (ii) in the phase diagram. For very soft bonds (λ → 0), the entire deformation is stored in the springs and (h(r b ) → h 0 ). In this case, the membrane is in average flat as evidenced by the mean membrane heighth within the domain and the bond extensions ( fig. 6). Furthermore, in this regime, the membrane fluctuates as an unbound one, irrespectively of d.
Conclusions. -We analyzed the properties of large adhesion domains forming between a membrane and a flat substrate, when the ligand-receptor adhesion competes with the nonspecific adhesion. While some aspects of our model have been investigated previously [18,20], our work provides a unifying framework within which the stability of the domains can be fully explored. Our first conclusion is that the energetics of the domains forming on different lattices including the simple hexagonal one (data not shown) is qualitatively the same. This suggests that the modeling on commonly used sq lattices [24,28] will well reproduce the behavior of domains that most likely form on the ch lattice. This result emerges from the decoupling of the entropic free energy contributions associated with the correlations between bonds and contributions of individual bonds. At room temperatures, the correlation contributions seem to be small and have no qualitative effects on the phase diagram. However, on a level of individual bonds, the effect of fluctuations may be significant resulting in considerable differences between the intrinsic and the effective binding affinity. Furthermore, another important result is the evaluation of several regimes in which domains are stable. Consequently, our model provides a physical explanation for the recently observed coexistence of densely packed and sparse domains of bonds. While we have demonstrated the power of our approach with a very simple model of receptors and of the bond potential, the quadratic nature of the membrane deformation energy allows for stability analysis of domains formed from bonds with a very wide range of potentials. Such extensions of the model may be important for quantitative comparison with measured data, which is a task that may be challenging, both from experimental and theoretical points of view. However, in order to fully comprehend the stability of adhesion domains, this comparison needs to be performed. In this light, the current work could become the necessary foundation for the understanding of the stability of domains in cellular and mimetic systems.
Fig. 1: The system under investigation: a large patch of a bonded membrane that deforms and fluctuates in a harmonic potential. Bonds are separated by a distance d.
Fig. 2: Mapping of the model for two fluctuating interacting springs to the membrane-bond model.
Fig. 3: Free energy density as a function of the distance between the bonds for the ch lattice (left), and the respective phase diagram (right). All variables are rendered dimensionless. σ = 0. For a detailed discussion, see the main text.
Fig. 4: For σ = 0, stiff bonds, and H 1 d (σ = 0) = 5.6 k B T, shapes and the free energy per bond ∆F N /N + ǭ b for the ch and the sq lattice (black and red dotted line, respectively) are displayed. The deformation energy per bond H d on the ch lattice is presented with the blue dashed line. The upper inset highlights the minimum at d = 9.5ξ. For tensions σ = (0.125, 0.25, 0.375, 0.5) ξ⊥^{-2}, shapes (ch lattice, d = 10ξ) and the free energy are shown. All curves are scaled by H 1 d, which depends on the tension as shown in the lower inset.
Fig. 5: Maximal fluctuation amplitude for the ch and sq lattice (black and red-dashed lines, respectively). Fluctuation maps at σ = 0 (horizontal array), and finite tensions at d = 8ξ (vertical array) are also shown. Parameters as in Fig. 4.
Fig. 6: Top: average height of the membrane h̄ (blue) and the extension of the bonds (red) as a function of λ, and the corresponding shapes at d = 6ξ. Bottom: the fluctuation amplitude of a bond as a function of λ. Full and dotted lines are obtained with σ = 0 and σ = 1/(4 ξ⊥^2), respectively.
Acknowledgements. -We thank K. Sengupta and S. Fenz for helpful discussions. US, A-SS and TB acknowledge the support from DFG-SE 1119/2-1.
. B Geiger, J P Spatz, A D Bershadsky, Nat. Rev. Mol. Cell Bio. 10Geiger B., Spatz J. P., and Bershadsky A. D. Nat. Rev. Mol. Cell Bio. 10:21-33, 2009
. J Ranzinger, Nano Lett. 9Ranzinger J. et al. Nano Lett. 9:4240-4245, 2009
. J A Deeg, Nano Lett. 11Deeg J. A. et al. Nano Lett. 11:1469-1476, 2011
. A P Liu, D A Fletcher, Nat. Rev. Mol. Cell Bio. 10Liu A. P. and Fletcher D. A. Nat. Rev. Mol. Cell Bio. 10:644-650, 2009
. A.-S Smith, E Sackmann, ChemPhysChem. 10Smith A.-S. and Sackmann E. ChemPhysChem 10:66-78, 2009
. E Sackmann, R F Bruinsma, ChemPhysChem. 3Sackmann E. and Bruinsma R. F. ChemPhysChem 3:262- 269, 2002;
. B Lorz, 23Lorz B. et al. Langmuir, 23:12293-12300, 2007.
. A Raudino, M Pannuzzo, J. Chem. Phys. 13245103Raudino A. and Pannuzzo M. J. Chem. Phys., 132:045103, 2010.
. A.-S Smith, U Seifert, Soft Matter. 3Smith A.-S. and Seifert U. Soft Matter, 3:275-289, 2007.
. T Speck, E Reister, U Seifert, Phys. Rev. E. 8251919Phys. Rev. ESpeck T., Reister E., and Seifert U. Phys. Rev. E, 82:021923, 2010; Farago O. Phys. Rev. E, 78:051919, 2008.
. E Reister, New J. Phys. 1325003Reister E. et al. New J. Phys., 13:025003, 2011.
. H Krobath, B Rozycki, R Lipowsky, T R Weikl, PLoS One. 6823284Krobath H., Rozycki B., Lipowsky R., and Weikl T. R. PLoS One, 6(8):e23284, 2011.
. H Krobath, G J Schütz, R Lipowsky, T R Weikl, Epl, 7838003Krobath H., Schütz G. J., Lipowsky R., and Weikl T. R. EPL, 78:38003, 2007.
. S F Fenz, Adv. Mater. 232622Fenz S. F. et al. Adv. Mater., 23:2622, 2011.
. L C Lin, -L Groves, J T Brown, F L H , Biophys. J. 91Lin L. C.-L., Groves J. T., and Brown F. L. H. Biophys. J., 91:3600-3606, 2006.
. L Limozin, K Sengupta, C Chemphyschem ; Monzel, S F Fenz, R Merkel, K Sengupta, Chemphyschem, 10Limozin L. and Sengupta K. ChemPhysChem, 10:2752- 2768, 2009; Monzel C., Fenz S. F., Merkel R., and Sen- gupta K. ChemPhysChem, 10:2828-2838, 2009.
. A.-S Smith, S F Fenz, K Sengupta, Epl, 89Smith A.-S., Fenz S. F., and Sengupta K. EPL, 89:28003:1-6, 2010.
. A.-S Smith, Proc. Natl. Acad. Sci. U. S. A. 105Smith A.-S. et al. Proc. Natl. Acad. Sci. U. S. A., 105(19):6906-6911, 2008.
. R Bruinsma, A Behrisch, Sackmann E , Phys. Rev. E. 61Bruinsma R., Behrisch A., and Sackmann E. Phys. Rev. E, 61:4253-4267, 2000.
. S Komura, D Andelman, Eur. Phys. J. E. 3Komura S. and Andelman D. Eur. Phys. J. E., 3:259-271, 2000.
. T R Weikl, D Andelman, S Komura, Lipowsky R , Eur. Phys. J. B. 8Weikl T. R., Andelman D., Komura S., and Lipowsky R. Eur. Phys. J. B, 8:59-66, 2002.
. T R Weikl, Soft Matter. 53273Weikl T. R. et. al. Soft Matter, 5:3273, 2009.
. J O Raedler, T J Feder, H H Strey, Sackmann E , Phys. Rev. E. 515Raedler J. O., Feder T. J., Strey H. H., and Sackmann E. Phys. Rev. E, 51(5):4523-4536, 1995.
. M Breidenich, R R Netz, Lipowsky R , Eur. Phys. J. E. 5Breidenich M., Netz R. R., and Lipowsky R. Eur. Phys. J. E, 5:403-411, 2001.
R Lipowsky, Structure and Dynamics of Membranes. Elsevier11Lipowsky R. in Structure and Dynamics of Membranes. Editors Lipowsky R. and Sackmann E., chapter 11, Else- vier, 1995.
. R Bruinsma, M Goulian, Pincus P , Biophys. J. 67Bruinsma R., Goulian M., and Pincus P. Biophys. J., 67:746-750, 1994.
. S F Fenz, A.-S Smith, R Merkel, K Sengupta, Soft Matter. 73Fenz S. F., Smith A.-S., Merkel R., and Sengupta K. Soft Matter, 7(3):952-962, 2011.
. T R Weikl, Eur. Phys. J. E. 12Weikl T. R. Eur. Phys. J. E, 12:265-273, 2003.
. R.-J Merath, U Seifert, Phys. Rev. E. 7310401Merath R.-J. and Seifert U. Phys. Rev. E, 73:010401, 2006.
| []
|
[
"ANALYSIS OF DISCRETE SIGNALS WITH STOCHASTIC COMPONENTS USING FLICKER NOISE SPECTROSCOPY",
"ANALYSIS OF DISCRETE SIGNALS WITH STOCHASTIC COMPONENTS USING FLICKER NOISE SPECTROSCOPY"
]
| [
"Serge F Timashev [email protected] \nKarpov Institute of Physical Chemistry\n103064MoscowRussia\n",
"Yuriy S Polyakov [email protected] \nUSPolyResearch\n17921AshlandPAUSA\n"
]
| [
"Karpov Institute of Physical Chemistry\n103064MoscowRussia",
"USPolyResearch\n17921AshlandPAUSA"
]
| []
| The problem of information extraction from discrete stochastic 1 time series, produced with some finite sampling frequency, using flicker-noise spectroscopy, a general framework for information extraction based on the analysis of the correlation links between signal irregularities and formulated for continuous signals, is discussed. It is shown that the mathematical notions of Dirac δ -and Heaviside θ − functions used in the analysis of continuous signals may be interpreted as high-frequency and low-frequency stochastic components, respectively, in the case of discrete series. The analysis of electroencephalogram measurements for a teenager with schizophrenic symptoms at two different sampling frequencies demonstrates that the "power spectrum" and difference moment contain different information in the case of discrete signals, which was formally proven for continuous signals. The sampling interval itself is suggested as an additional parameter that should be included in general parameterization procedures for real signals. 1 The term "stochastic" in this paper refers to the presence of random variability in the signals of complex systems | 10.1142/s0218127408022020 | [
"https://export.arxiv.org/pdf/0812.2141v1.pdf"
]
| 18,724,091 | 0812.2141 | 8457528341a777bf67bc3cfb2610443b70c8e4f2 |
ANALYSIS OF DISCRETE SIGNALS WITH STOCHASTIC COMPONENTS USING FLICKER NOISE SPECTROSCOPY
Serge F Timashev [email protected]
Karpov Institute of Physical Chemistry
103064MoscowRussia
Yuriy S Polyakov [email protected]
USPolyResearch
17921AshlandPAUSA
ANALYSIS OF DISCRETE SIGNALS WITH STOCHASTIC COMPONENTS USING FLICKER NOISE SPECTROSCOPY
1
The problem of information extraction from discrete stochastic 1 time series, produced with some finite sampling frequency, using flicker-noise spectroscopy, a general framework for information extraction based on the analysis of the correlation links between signal irregularities and formulated for continuous signals, is discussed. It is shown that the mathematical notions of Dirac δ -and Heaviside θ − functions used in the analysis of continuous signals may be interpreted as high-frequency and low-frequency stochastic components, respectively, in the case of discrete series. The analysis of electroencephalogram measurements for a teenager with schizophrenic symptoms at two different sampling frequencies demonstrates that the "power spectrum" and difference moment contain different information in the case of discrete signals, which was formally proven for continuous signals. The sampling interval itself is suggested as an additional parameter that should be included in general parameterization procedures for real signals. 1 The term "stochastic" in this paper refers to the presence of random variability in the signals of complex systems
Introduction
Stochastic time and space series of dynamic variables that arise in studies of various natural processes and structure are often an important source of information about the system state and features of its evolution and structure [Réfrégier, 2004]. Therefore, a reliable tool to extract and analyze the information contained in the series could help understand many processes occurring in nature, for example, the preparation stages for a major earthquake or development of a human disease.
The raw data about the dynamics of complex real systems are usually obtained as discrete time series V(t k ) produced with some finite sampling frequency f d , where t k are time moments separated by a fixed interval ∆t = f d -1 . When extracting information from these discrete time series, one first needs to answer the fundamental question: how complete and reliable is the information contained in the signals recorded with some finite sampling frequency, considering that real systems also generate signals at much higher frequencies? The Nyquist-Shannon-Kotelnikov theorem implies that in order to obtain reliable information about a resonant (regular) component with frequency f r , the inequality f d ≥ 2 f r must be true [Kotelnikov, 1933]. But the stochastic components are characterized by much higher frequencies than we can measure with. The purpose of this paper is to discuss how the analysis of discrete signals produced with some finite sampling frequencies may be carried out with Flicker-Noise Spectroscopy (FNS), a phenomenological framework for extracting information from stochastic series [Timashev, 2006; Timashev & Polyakov, 2007].
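The constraint f d ≥ 2 f r can be illustrated with a few lines of code: sampling a resonant component below the Nyquist rate makes it reappear at an aliased frequency. The example below is our own and uses arbitrary frequencies.

```python
# A small illustration (not from the paper) of the sampling constraint f_d >= 2 f_r:
# a resonant component sampled below the Nyquist rate appears at an aliased frequency.
import numpy as np

f_r = 12.0                             # resonant frequency, Hz (arbitrary)
for f_d in (100.0, 20.0):              # sampling frequencies, Hz: adequate / too low
    t = np.arange(0, 4.0, 1.0 / f_d)   # 4 s of data sampled at f_d
    v = np.sin(2 * np.pi * f_r * t)
    spectrum = np.abs(np.fft.rfft(v))
    freqs = np.fft.rfftfreq(t.size, d=1.0 / f_d)
    f_peak = freqs[int(np.argmax(spectrum[1:])) + 1]
    print(f"f_d = {f_d:5.1f} Hz  ->  dominant spectral peak at {f_peak:5.2f} Hz")
```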
FNS principles
The basic idea of FNS is to treat the correlation links present in sequences of different irregularities, such as spikes, "jumps", discontinuities in derivatives of different orders, on all levels of the spatiotemporal hierarchy of the system under study as the main information carriers. It is further assumed that according to the Self-Organized Criticality (SOC) paradigm [Bak, 1997], the stochastic dynamics of real processes is characterized by intermittency, consecutive alternation of rapid changes in the values of dynamic variables on small time intervals with small variations of the values on longer time intervals. It was demonstrated that the origins of such intermittency, which occurs on every hierarchical level of the system evolution, are associated with the occurrence of complex (multiparticle, nonlinear) interactions, dissipation, and inertia [Bak, 1997]. To illustrate the idea, consider the model process of one-dimensional "random walk" with small "kinematic viscosity" ν ( Fig. 1). The small value of ν implies that when the signal changes from position V i to V i+1 , which are |V i+1 -V i | apart (in value) from each other, the system first overleaps ("overreacts") due to inertia and then "relaxes". We assume that the relaxation time is small compared to the residence time in a "fluctuation" position and the signal does not contain "enveloping" low-frequency curves, i.e., the resonant components are absent. The signal has all the main features of intermittent behavior: "laminar" phases with small variations in the dynamic variable V(t) on characteristic time intervals T l are followed by short-term spikes in the dynamic variable on characteristic time intervals
τ_s (τ_s ≪ T_l)
. The latter ones accompany step-like changes in V(t), which determine the value of the dynamic variable during the next "laminar" phase. In order to extract the information about the dynamics of the signal in Fig. 1, one may analyze the correlation links between the signal irregularities: step-like jumps in the dynamic variable, i.e., Heaviside θfunctions, and inertial spikes, i.e., Dirac δfunctions, accompanying the step-like changes [Schuster, 1984].
In FNS, the information parameters are assumed to be related to the autocorrelation function, one of the basic concepts in statistical physics, which is defined as
ψ(τ) = ⟨V(t) V(t + τ)⟩_{T−τ} ,    (1)
where τ is the time lag parameter, and the angular brackets denote averaging:
⟨(...)⟩_T = (1/T) ∫_{−T/2}^{T/2} (...) dt .    (2)
Signal V(t) generally contains the "resonant" components that are specific to the evolution of the system, and the stochastic components that are related to various irregularities. The resonant modes, which may correspond to natural, external, or interferential frequencies, manifest themselves as low-frequency (slow-varying) "envelope" components of the signal. The stochastic components are characterized by wide high-frequency bands.
To extract the information contained in ψ(τ), it is convenient to analyze some transforms ("projections") of this function, for example, "power spectrum" S(f):
S(f) = ∫_{−T_M/2}^{T_M/2} ψ(t_1) cos(2π f t_1) dt_1 ,    (3)
where
T_M ≤ T/2
is the part of the averaging interval T that can be used to calculate "reliable" estimates of the actual power spectrum.
This particular transform was chosen because S(f) is most effective in separating out the resonances (main components of the signal) of the analyzed functions, which are represented as a set of N r peaks characterized by positions f 0i and "half-widths" γ i (i = 1, 2, …, N r ).
The resonance contribution S r (f) to the overall power spectrum S(f) can be extracted by rewriting the latter as
S(f) = S_c(f) + S_r(f) ,    (4)
where S c (f) is the continuous power-spectrum component associated with the stochastic component of dynamic variable V(t). Additive representation (4) is justified because the contributions of resonant and stochastic components to dynamic variable V(t) usually correspond to different time scales. In the frequency range from 1/T to f d /2, where f d is the sampling frequency, the resonant components mostly contribute to the low-frequency range while all the irregularities manifest themselves in the high-frequency range [Timashev, 2006]. Let us note that the parameterization of S r (f) by finding the positions, "half-widths", and partial weights А i of the fixed resonances can be done rather easily. At the same time, the parameterization of S c (f) is a much more difficult task.
To solve the latter problem, we also consider difference moments ("transient structural functions") Φ (p) (τ) of different orders p (p = 2, 3, …):
Φ^(p)(τ) = ⟨[V(t) − V(t + τ)]^p⟩_{T−τ} ,    (5)
where τ ≤ T_M .
Function (5) can also be written as a linear combination of stochastic Φ c (p) (τ) and resonant Φ r (p) (τ) components [Timashev, 2006]:
Φ^(2)(τ) = Φ_c^(2)(τ) + Φ_r^(2)(τ) .    (6)
Equation (6) may be used because the contribution of resonant components to the function Φ^(2)(τ) is mostly seen at intermediate and large values of τ, while the stochastic components contribute over the whole interval 0 ≤ τ ≤ T_M. The functions Φ_c^(p)(τ) are formed exclusively by "jumps" of the dynamic variable, while S_c(f) are formed by both "spikes" and "jumps" on every level of the hierarchy, which was formally proven by Timashev [2006]. It is obvious from Fig. 1 that when the number of walks is large, the functions Φ^(p)(τ) will not depend on the values of "inertial skipovers" of the system, but will be determined only by the algebraic sum of walk "jumps". At the same time, the functions S(f), which characterize the "energy side" of the process, will be determined by both spikes and jumps. It should be underlined that such separation of information stored in various irregularities is attributed to the intermittent character of the evolution dynamics. In other words, the information contents of S_c(f) and Φ_c^(2)(τ) coincide if there is no intermittency.
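To make Eqs. (1)-(6) concrete for a finite record, the following is a minimal numerical sketch (not the authors' code) of discrete estimators for the autocorrelation, the power spectrum S(f) obtained as a cosine transform of the autocorrelation, and the second-order difference moment Φ^(2)(τ). The toy random-walk signal, the sampling frequency and the choice T_M = T/4 are illustrative assumptions.

```python
import numpy as np

def autocorrelation(v, max_lag):
    """psi(tau): average of V(t) * V(t + tau) over the available window, cf. Eq. (1)."""
    n = len(v)
    return np.array([np.mean(v[:n - k] * v[k:]) for k in range(max_lag)])

def power_spectrum(v, dt, max_lag, freqs):
    """S(f) as a discrete cosine transform of the autocorrelation, cf. Eq. (3)."""
    psi = autocorrelation(v, max_lag)
    lags = np.arange(max_lag) * dt
    return np.array([np.sum(psi * np.cos(2.0 * np.pi * f * lags)) * dt for f in freqs])

def difference_moment(v, max_lag, p=2):
    """Phi^(p)(tau): average of [V(t) - V(t + tau)]^p, cf. Eq. (5)."""
    n = len(v)
    return np.array([np.mean((v[:n - k] - v[k:]) ** p) for k in range(1, max_lag)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fd = 256.0                                   # assumed sampling frequency, Hz
    v = np.cumsum(rng.standard_normal(4096))     # toy random-walk signal, not real data
    t_m = len(v) // 4                            # T_M = T/4, as in the EEG example below
    freqs = np.linspace(0.5, fd / 2.0, 200)
    print(power_spectrum(v, 1.0 / fd, t_m, freqs)[:3])
    print(difference_moment(v, t_m)[:3])
```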
Discrete time series analysis
Equations (1)-(6) were formulated for an ideal continuous series generated with an infinitely large frequency. The real series that need to be analyzed are always produced with some finite sampling frequency f d . How reliable and accurate are the information parameters calculated based on the power spectrums and difference moments for the signals with that sampling frequency? Moreover, what do the mathematical notions of Dirac δ -and Heaviside θ -functions mean in the case of real discrete signals and how can the theory derived for continuous signals be applied to the analysis of discrete ones?
The whole frequency range for a discrete signal can usually be split into three nonoverlapping ranges: (1) highest-frequency spike (high-frequency stochastic) range; (2) highfrequency jump (low-frequency stochastic) range; (3) low-frequency resonant range [Timashev & Polyakov, 2007]. In this case, the stochastic range includes both spikes and jumps, i.e., high frequencies. As δ -functions may be recorded only when the sampling interval is infinitely large, in discrete signals Dirac δfunctions can be interpreted as high-frequency stochastic components. It is clear that Heaviside θ -functions can then be associated with lowfrequency stochastic components. We can expect that when the autocorrelation function is numerically restored using the inverse cosine transform of the "power spectrum", originally calculated as the forward cosine transform of the autocorrelator, without the high-frequency spike components, any differences in the autocorrelator may only be noticed at very small values of τ, which would make the contribution of the high-frequency spike components to the overall difference moment negligible. At the same time, the contribution of the highfrequency spike components to power spectrum S(f) should be significant. In other words, if we calculate the power spectrum and difference moment at several different sampling frequencies, the power spectrum should change and the difference moment should keep its values almost the same. It is obvious that the spread in sampling frequencies should not be large (less than one order or so), as otherwise we would be comparing the evolutions at different scales.
Consider the example of electroencephalogram (EEG) measurements taken at the C4 electrode for a teenager with schizophrenic symptoms (marked as 545W patient) [Timashev et al., 2005]. The values of the electric potential were recorded with respect to the electrode placed on the left ear lobe of the patient. The sampling frequency was f_d = 256 Hz. The signal was recorded for approximately 60 seconds and contained N = 15,459 measurements (Fig. 2). The power spectrum and difference moment of the second order calculated for the original signal and the signal derived by taking every fifth point from the original signal using the discrete versions of Eqs. (3) and (5) [Timashev & Polyakov, 2007] are shown in Fig. 3. In both cases, the averaging interval T was set to the total duration of the measurements and the interval T_M to T/4. The power spectrum values were normalized by dividing the power spectrum in Eq. (3) by
N_M = ⌊N T_M / T⌋ .
It can be seen that the power spectra differ substantially in the high-frequency area of Fig. 3a. The main reason is that the frequency range for the derived signal ends at 51.2 Hz and does not contain any data for higher frequencies. The high-frequency range is often associated with flicker noise, the key parameter for which is the exponent n in the 1/f^n interpolation. A similar exponent is present in the FNS parameterization algorithm [Timashev & Polyakov, 2007]. It can be seen that the slope of the flicker-noise "tail" changes in the range of highest frequencies, which in turn changes the value of the interpolation parameter n. Thus, the information contents of the power spectra displayed in Fig. 3a are different. This discrepancy is mostly attributed to the behavior of the high-frequency "spike" components, which are the discrete "versions" of Dirac δ-functions. On the other hand, Figure 3b shows only minor differences (at small τ) in the values of the difference moment function calculated for both signals. It is clear that the standard deviation, an important parameter in the analysis of difference moments [Timashev & Polyakov, 2007], is practically the same for both curves. So, the difference moment appears to be stable under minor changes in the sampling interval. The same observations are valid for many other examples of real series that we have analyzed.
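The decimation experiment just described can be sketched as follows, reusing the estimator functions from the sketch above (spectrum_fn and phi2_fn stand for them). The comparison quantities, namely the slope n of the high-frequency 1/f^n tail and a rough standard deviation implied by Φ^(2), mirror the EEG example, while the fitting details are simplifying assumptions.

```python
import numpy as np

def compare_decimation(v, fd, spectrum_fn, phi2_fn, step=5):
    """Recompute S(f) and Phi^(2)(tau) after keeping every `step`-th sample."""
    out = {}
    for label, sig, rate in (("original", v, fd), ("decimated", v[::step], fd / step)):
        t_m = len(sig) // 4
        freqs = np.linspace(0.5, rate / 2.0, 100)
        s = spectrum_fn(sig, 1.0 / rate, t_m, freqs)
        phi2 = phi2_fn(sig, t_m)
        hi = slice(len(freqs) // 2, None)        # upper half of the frequency range
        n_exp = -np.polyfit(np.log(freqs[hi]), np.log(np.abs(s[hi]) + 1e-12), 1)[0]
        out[label] = {"flicker_exponent_n": n_exp,                 # slope of the 1/f^n tail
                      "sigma_estimate": float(np.sqrt(phi2[-1] / 2.0))}
    return out
```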
Conclusions
Based on the above, we can conclude that:
(1) The traditional approach to extracting information from discrete signals, which is based on the Nyquist-Shannon-Kotelnikov (sampling) theorem, can be applied only to discrete signals without stochastic components. Discrete signals with stochastic components should be analyzed using phenomenological approaches like flickernoise spectroscopy.
(2) The information contained in the power spectrum estimates and difference moments is different in the case of discrete signals, a fact that so far had been established only for continuous signals [Timashev, 2006].
(3) Changes in the sampling interval lead to changes in some information parameters. Thus the sampling interval should be considered as another parameter in general parameterization algorithms for real signals.
(4) The parameterization algorithm based on the theoretical derivations for continuous signals [Timashev & Polyakov, 2007] can be used for the analysis of real discrete signals produced with some finite sampling frequencies, as it is based on the second conclusion, which is valid for both continuous and discrete signals.
Fig. 1. Schematic of "random walk" evolution.
Fig. 2. Original EEG signal (C4 electrode) for 545W patient.
Fig. 3. Power spectrum (a) and difference moment of the second order (b) for the signal in Fig. 2 and the signal derived from it by taking every fifth point.
This study was supported in part by the Russian Foundation for Basic Research, project no. 05-02-17079.
How Nature Works. The Science of Self-Organized Criticality. P Bak, Oxford University PressOxfordBak, P. [1997] How Nature Works. The Science of Self-Organized Criticality (Oxford University Press, Oxford).
On the transmission capacity of the 'ether' and of cables in electrical communications. V A Kotelnikov, Procs. of the first All-Union Conference on the technological reconstruction of the communications sector and low-current engineering. s. of the first All-Union Conference on the technological reconstruction of the communications sector and low-current engineeringMoscowRussianKotelnikov, V. A. [1933] "On the transmission capacity of the 'ether' and of cables in electrical communications", In: Procs. of the first All-Union Conference on the technological reconstruction of the communications sector and low-current engineering (Izd. Red. Upr. Svyazi RKKA, Moscow) [Russian].
P Réfrégier, Noise Theory and Application to Physics: From Fluctuations to Information. New YorkSpringerRéfrégier, P. [2004] Noise Theory and Application to Physics: From Fluctuations to Information (Springer, New York).
Deterministic Chaos: An Introduction. H G Schuster, Physik-VerlagWeinheimSchuster H. G. [1984] Deterministic Chaos: An Introduction (Physik-Verlag, Weinheim).
Timashev, S. F., Vstovsky, G. V., Kaplan, A. Ya. & Solovieva, A. B. [2005] "What information is hidden in stochastic signals of biological systems?" In: Noise and Fluctuations - ICNF-2005, AIP Conf. Proc. 780, eds. Gonzalez, T., Mateos, J. & Pardo, D. (AIP, Melville, NY), pp. 579-582.
Flicker Noise spectroscopy and its application: Information hidden in stochastic signals. S F Timashev, Rus. J. Electrochem. 42Timashev, S. F. [2006] "Flicker Noise spectroscopy and its application: Information hidden in stochastic signals", Rus. J. Electrochem. 42, 422-466.
Review of flicker noise spectroscopy in electrochemistry. S F Timashev, S Yu, Fluctuations & Noise Letters. 72Timashev, S. F. & Polyakov Yu. S. [2007] "Review of flicker noise spectroscopy in electrochemistry", Fluctuations & Noise Letters 7(2), R15-R47.
| []
|
[
"IDEALS WHICH GENERALIZE (v 0 )",
"IDEALS WHICH GENERALIZE (v 0 )"
]
| [
"Piotr Kalemba ",
"Szymon Plewik "
]
| []
| []
| We consider ideals d 0 (V) which are generalizations of the ideal (v 0 ). We formulate couterparts of Hadamard's theorem. Then, adopting the base tree theorem and applying Kulpa-Szymański Theorem, we obtain cov(d 0 (V)) ≤ add(d 0 (V)) + . | 10.2478/s11533-010-0074-8 | [
"https://arxiv.org/pdf/1001.5400v1.pdf"
]
| 115,164,404 | 1001.5400 | 402d3efb513b74746f083a21aa019610740875d2 |
IDEALS WHICH GENERALIZE (v 0 )
29 Jan 2010
Piotr Kalemba
Szymon Plewik
IDEALS WHICH GENERALIZE (v 0 )
29 Jan 2010
We consider ideals d 0 (V) which are generalizations of the ideal (v 0 ). We formulate counterparts of Hadamard's theorem. Then, adopting the base tree theorem and applying the Kulpa-Szymański Theorem, we obtain cov(d 0 (V)) ≤ add(d 0 (V)) + .
Introduction
Let ω denotes the set of all natural numbers and |X| denotes the cardinality of a set X. If F is a family of sets, then add(F ) is the least cardinality of a subfamily G ⊆ F such that the union G is not in F , but cov(F ) is the least cardinality of a subfamily G ⊆ F such that G = F . Also, d 0 (F ) denotes an ideal defined as follows. A set X ∈ d 0 (F ), whenever X ⊂ F and for each V ∈ F there exists U ∈ F such that U ⊆ V and U ∩ X = ∅.
In this note, products X i are examined where i ∈ ω. In fact, it is assumed that each X i is a finite discrete space with more than one point. For infinite X i , we make some comments, only. Considerations are related to some trees which we call trimmed trees. Each trimmed tree T [A, α] is uniquely determined by two parameters A ∈ [ω] ω and α ∈ X i , and it is a subset of the union {X 0 × X 1 × . . . × X n : n ∈ ω} = f in X i . As usual, elements of a trimmed tree T [A, α] ⊆ f in X i are called nodes. Unions of nodes which belong to X i are called branches. The family of all branches of a trimmed tree is the perfect subset [T [A, α]] ⊆ X i . So, our terminology is standard, compare [5] or [7].
In [3], [6] or [9] were considered properties of the ideal (v 0 ). Ideals d 0 ({[T ] : T ∈ V}) are generalizations of (v 0 ). In fact,
(v 0 ) is d 0 ({[T ] : T ∈ V}),
where it is assumed that X n = {0, 1} for all n ∈ ω. Here, we generalize properties of the ideal (v 0 ) for sets X n being finite and with more than one point. Albeit, we do not consider properties which depend on the assumption that sets X n have less than k-points for a fixed k ∈ ω, compare Theorem 3 in [10], or that the set {n : |X n | = k} is finite for each k ∈ ω, compare Example 1 in [10]. We adopt the base tree theorem, see [1] and [2]. Applying so called Kulpa-Szymański Theorem, we obtain the final result of this paper, i.e. inequalities add(d 0 (V)) ≤ cov(d 0 (V)) ≤ add(d 0 (V)) + under the hypothesis that factors X i of a product X i are finite and have more than one point.
Trimmed trees
Suppose X 0 , X 1 , . . . be an infinite sequence of sets with more than one point. Let X i denotes the Cartesian product of theses sets. Fix a function α ∈ X i and an infinite subset A ⊆ ω. The subset {β ∈ X i : α| ω\A = β| ω\A } is a perfect subset of X i equipped with product topology, where each X n is considered with the discrete topology. We will denote this subset [T [A, α]], since it can be described as the set of all infinite branches of the tree which we call a trimmed tree.
Any trimmed tree T [A, α] ⊆ f in X i is the union of sets of nodes T [A, α, n], where n ∈ ω. We assume that the empty set is a node of any T [A, α], too. One obtains sets of nodes T [A, α, n] by the following procedure T [A, α, n] = {s ∪ {(n, α(n))} : s ∈ T [A, α, n − 1]}, whenever n / ∈ A, but T [A, α, n] = {s ∪ {(n, x)} : s ∈ T [A, α, n − 1] and x ∈ X n } for n ∈ A. This procedure is inductive, so we assume that T [A, α, 0] = {{(0, α(0))}}, whenever 0 / ∈ A, and T [A,
α, 0] = {{(0, x)} : x ∈ X 0 }, for 0 ∈ A. In consequence T [A, α] = {T [A, α, n] : n ∈ ω} ∪ {∅} and [T [A, α]] = {β ∈ X i : α| ω\A = β| ω\A }.
Obviously, T [A, α] does not depend on the restriction α| A . Also,
T [A, α] = T [B, β] implies α| ω\A = β| ω\B , what means A = B and α(n) = β(n) for n ∈ ω \ A. Suppose X i is fixed. Put V = {T [A, α] : A ∈ [ω] ω and α ∈ X i }.
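Since the node sets T[A, α, n] are produced by a simple induction, a finite-prefix illustration may help; the sets X_i, the set A and the function α below are small hypothetical stand-ins, as the objects treated in the paper are infinite.

```python
def node_sets(X, A, alpha):
    """Return the node sets T[A, alpha, n] for n = 0, ..., len(X) - 1.

    X     : list of finite sets X_0, ..., X_{N-1}
    A     : set of 'free' coordinates (restricted to 0..N-1 here)
    alpha : dict fixing the value on coordinates outside A
    """
    levels = []
    prev = [frozenset()]                      # the empty node
    for n, X_n in enumerate(X):
        choices = X_n if n in A else {alpha[n]}
        level = [s | {(n, x)} for s in prev for x in choices]
        levels.append(level)
        prev = level
    return levels

# Example: X_0 = X_1 = X_2 = {0, 1}, A = {1}, alpha constantly 0.
for n, level in enumerate(node_sets([{0, 1}, {0, 1}, {0, 1}], {1}, {0: 0, 1: 0, 2: 0})):
    print(n, [sorted(s) for s in level])
```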
If α ∈ X i and s ∈ f in X i , then let α s denotes the unique function
β ∈ X i such that s ⊂ β ⊆ α ∪ s. If Y ⊆ X i and s ∈ f in X i , then Y s denotes the set {α s : α ∈ Y }, but Y * denotes the family of all functions α s , where α ∈ Y and s ∈ f in X i . Thus Y * = {Y s : s ∈ f in X i }. Finally, put V * = {[T ] * : T ∈ V}.
From now on, we assume that a Cartesian product X i is fixed where each X i is a set with more than one point. This will be applied to symbols V or V * , always. A few facts will need that each set X i is finite, additionally. Lemma 1. Each member of V * contains continuum many pairwise disjoint members of V * .
Proof. Fix [T ] * ∈ V * , where T = T [A, α].
Let R(A) be an almost disjoint family of the cardinality continuum consisting of infinite subsets of A. Fix a function γ ∈ X i such that γ(n) = α(n), for any n ∈ A. For each C ∈ R(A) choose an infinite subset V C ⊂ C such that C \ V C is infinite, too. Let α C ∈ X i be a function such that α C (n) = γ(n), for n ∈ C; α(n), for n / ∈ C.
The family {[T [V C , α C ]] * : C ∈ R(A)} is a desired one.
If T ∈ V and s ∈ f in X i , then T s denotes the tree {α s |n : α ∈ [T ] and n ∈ ω}. Note that, notions T s and Y s are used in different contexts. Each tree T s consists of nodes, but any Y s consists of infinite sequences. We have assumed that each X n has at least two points, hence any tree T [A, α] has continuum many branches.
If T = T [A, α]
and s ∈ f in X i , then T s = T [A \ |s|, α s ]. Therefore, T ∈ V implies T s ∈ V.
Fusion relations
Let a 0 , a 1 , . . . and b 0 , b 1 , . . . be increasing enumerations of all points of A = {a n : n ∈ ω} ∈ [ω] ω and B ∈ [ω] ω , respectively. Put
T [A, α] ⊆ n T [B, β], whenever T [A, α] ⊆ T [B, β] and a 0 = b 0 , a 1 = b 1 , .
. . , a n = b n . Thus, the decreasing sequence of relations (⊆ n ) n∈ω is defined. These relations hold between elements of V. Always, ⊆ n+1 is contained in ⊆ n . So, we can apply the method of fusion, compare [5], using these relation to trimmed trees. In many papers, facts about fusion are presented without proof. Since details considered here are not so obvious, we run full proofs here.
Lemma 2. Let (T [A n , α n ]) n∈ω be a sequence of elements of V. If always T [A n+1 , α n+1 ] ⊆ n T [A n , α n ], then there exists C ∈ [ω] ω and α ∈ X i such that T [C, α] ⊆ T [A n , α n ] for any n ∈ ω.
Proof. An inclusion T [A n+1 , α n+1 ] ⊆ T [A n , α n ] implies A n+1 ⊆ A n and α n | ω\An = α n+1 | ω\An and α n | ω\An = α n+1 | ω\An ⊆ α n+1 | ω\A n+1 . Hence, the union {α n | ω\An : n ∈ ω} is a function. Fix α ∈ X i which extends this union. Functions α and α n coincide on the set ω \ A n , thus T [A n , α] = T [A n , α n ]. Let a n 0 , a n 1 , . . . be the increasing enumeration of points of A n . Put C = {a n n : n ∈ ω}. If k ≥ n, then a k k ∈ A k ⊆ A n . If k < n, then
T [A n , α n ] = T [A n , α] ⊆ n−1 T [A n−1 , α] ⊆ n−2 . . . ⊆ k T [A k , α], hence T [A n , α] ⊆ k T [A k , α]. This implies a k k ∈ A n . Therefore C ⊆ A n . Finally, T [C, α] ⊆ T [A n , α] = T [A n , α n ] for any n ∈ ω.
From now on, the ideal d 0 ({[T ] : T ∈ V}) will be shortly denoted d 0 (V).
Lemma 3. Let s, t ∈ f in X i and Y ⊆ X i and assume that |X k | < add(d 0 (V)) for any k ∈ ω. Then, Y s ∈ d 0 (V) if and only, if Y t ∈ d 0 (V). Proof. Suppose |s| = |t|. Fix T ∈ V. Let Y s ∈ d 0 (V). If t / ∈ T , then [T ] ∩ Y t = ∅. If t ∈ T , then choose P s ∈ V such that [P s ] ∩ Y s = ∅ and P s ⊆ T s . Hence, P t ⊆ T t ⊆ T and [P t ] ∩ Y t = ∅. Therefore Y t ∈ d 0 (V). Suppose s ∈ f in X i and Y s ∈ d 0 (V). Since Y is a subset of the union {Y u : u ∈ f in X i and |u| = |s|}, then it is contained in an union of less than add(d 0 (V)) many elements of d 0 (V). Hence Y ∈ d 0 (V). Now, suppose that Y ∈ d 0 (V) and s ∈ f in X i . If t ∈ f in X i and |s| = |t|, then {α ∈ Y : α| |s| = t} = Y t ∩ Y = (Y t ∩ Y ) t ⊆ Y. Thus (Y t ∩ Y ) s ∈ d 0 (V). Also, Y s = {(Y t ∩ Y ) s : t ∈ f in X i and |s| = |t|}. Hence Y s is an union of less than add(d 0 (V)) many elements of d 0 (V). Therefore Y s ∈ d 0 (V). Lemma 4. Suppose that |X i | < add(d 0 (V)) for all i ∈ ω. Let k ∈ ω and T ∈ V. If Y ∈ d 0 (V), then there exists a tree P ∈ V such that P ⊆ k T and [P ] ∩ Y = ∅. Proof. Let T = T [A, α] and a 0 , a 1 , . . . be the increasing enumeration of all points of A. Consider the union U = {Y s : s ∈ X 0 × . . . × X a k }.
It consists of less than add(d 0 (V)) many sets, each one from d 0 (V) by
Lemma 3. Thus U ∈ d 0 (V). Take Q ∈ V such that Q ⊆ T and [Q] ∩ U = ∅. Check that [Q s ] ∩ Y = ∅ for any s ∈ X 0 × . . . × X a k . Finally put P = {Q s : s ∈ T ∩ X 0 × . . . × X a k }.
We do not know, whether lemmas 2, 3 and 4 are valid for X i = ω ω . Their proofs would work if d 0 (V) was a σ-ideal. If all sets X i are finite, then hypotheses of these lemmas are fulfilled. Moreover, then add(d 0 (V)) is an uncountable cardinal.
Theorem 5. If each X i is a finite set, then d 0 (V) is a σ-ideal. Proof. Assume S 0 , S 1 , . . . is an increasing sequence of elements of the ideal d 0 (V). Fix T ∈ V and put T 0 = T . Using Lemma 4, choose inductively trees T k ∈ V such that T k+1 ⊆ k T k and [T k ] ∩ S k = ∅.
Thus, it has been defined a sequence of elements of V satisfying hypotheses of Lemma 2. So, there exists a tree P ∈ V such that P ⊆ T and [P ] ∩ k∈ω S k = ∅.
Corollary 6. If each X i is a finite set and Y ∈ d 0 (V), then Y * ∈ d 0 (V). Proof. By Lemma 3, Y s ∈ d 0 (V) for each s ∈ f in X i . Since Y * = {Y s : s ∈ f in X i }, it is a countable union of elements of d 0 (V). Thus Y * ∈ d 0 (V) by Theorem 5.
Counterparts of Hadamard's theorem
Two sequences of countable sets (a n ) n∈ω and (b n ) n∈ω form a (ω, ω)gap, whenever
a 0 ⊂ * a 1 ⊂ * . . . a n ⊂ * . . . ⊂ * b n ⊂ * . . . ⊂ * b 1 ⊂ * b 0
and no set c fulfills a n ⊂ * c ⊂ * b n for all n ∈ ω. The famous Hadamard's theorem says that there are no (ω, ω)-gaps, compare [4] or [11]. This theorem can be formulated in our's notations. Indeed, assume that always X i = {0, 1} and identify each subset Y ⊆ ω with its characteristic function which belongs to X i . Then one can check that Hadamard's theorem is equivalent to the property that any decreasing sequence of elements of V * has a lower bound. Theorem 8 extends this property.
If T = T [A, α] ∈ V, then the tree T is determined by the function
δ(T [A, α])(n) =
{α(n)}, whenever n / ∈ A; X n , whenever n ∈ A.
For a tree P ∈ V, we have β ∈ [P ] if and only, if β(n) ∈ δ(P )(n) for each n ∈ ω. Also, β ∈ [P ] * if and only, if β(n) ∈ δ(P )(n) for all, but finitely many n ∈ ω. One can check that Proof. Fix k 0 ∈ ω such that δ(P )(k) ⊆ δ(T )(k), for any k ≥ k 0 . If T = T [A, α], then let a n be the n-th element of A. Put
δ(Q)(m) = δ(T )(m), for m ≤ max{a n , k 0 }, δ(P )(m), for m > max{a n , k 0 }.
The function δ(Q) uniquely determines the tree Q ∈ V which is a desired one.
Theorem 8. Let (W n ) n∈ω be a sequence of elements of V * . If W n+1 ⊆ W n for any n ∈ ω, then there exists W ∈ V * such that W ⊆ W n , for any n ∈ ω.
Proof. Choose T n ∈ V such that W n = [T n ] * . Inductively, construct a sequence of trees (Q n ) n∈ω such that Q n+1 ⊆ n Q n ∈ V and [Q n ] * = [T n ] * , using Lemma 7. By Lemma 2, there exists T ∈ V such that T ⊆ Q n and W = [T ] * ⊆ [Q n ] * = [T n ] * = W n , for all n ∈ ω.
Note that, Lemma 7 and Theorem 8 do not require that sets X i are finite. Theorem 8 immediately follows that d 0 (V * ) is a σ-ideal.
Corollary 9. If each X i is a finite set, then d 0 (V) = d 0 (V * ). Proof. If Y ∈ d 0 (V) and [T ] * ∈ V * , then Y * ∈ d 0 (V) by Corollary 6. There exists a tree P ⊆ T such that [P ] ∩ Y * = ∅. Hence [P ] * ∩ Y * = ∅ and [P ] * ∩ Y = ∅ and finally Y ∈ d 0 (V * ).
If Y ∈ d 0 (V * ) and T ∈ V, then there exists [P ] * ⊆ [T ] * such that [P ] * ∩ Y = ∅. Therefore [P ] ∩ Y = ∅. By Lemma 7 one can assume that P ⊆ T , so Y ∈ d 0 (V).
Families V and V * are not isomorphic with respect to the inclusion. Any decreasing sequence of elements of V * has a lower bound, see Theorem 8. But a sequence of trees (T [ω \ n, α]) n∈ω has no lower bound in V.
A version of base tree
Base Matrix Lemma, see [1], or Base Matrix Tree, compare [2] or [6], are adopted to trimmed trees in this part. We omit some proofs, since they are completely analogical to these which are in [1], [2], [ A family P is called v-partition, whenever P is a maximal family, with respect to the inclusion, of pairwise incompatible elements of V * . Any collection of v-partitions is called v-matrix. We say that a vpartition P refines a v-partition Q (briefly P ≺ Q), if for every [P ] * ∈ P there exists [Q] * ∈ Q such that [P ] * ⊆ [Q] * . A matrix H is called shattering if for any [T ] * ∈ V * there exists a v-partition P ∈ H such that at least two elements of P are compatible with [T ] * . The least cardinality of a shattering matrix we denote κ( X i ).
The poset (V * , ⊆) is separative, i.e. if [P ] * is not contained in [T ] * , then there exists [Q] * ∈ V * such that [Q] * ⊆ [P ] * and [Q] * is incompatible with [T ] * . Indeed, if [P ] * \ [T ] * = ∅, then the set Z = {n ∈ ω : δ(P )(n) \ δ(T )(n) = ∅} is infinite. Fix a set N ∈ [Z] ω such that there exist infinitely many n ∈ ω \N for, which δ(P )(n) = X n . Fix also, a function α with domain N such that α(n) ∈ δ(P )(n) \ δ(T )(n) for n ∈ N. Put δ(Q)(n) = δ(P )(n), for n / ∈ N; {α(n)},
for n ∈ N.
The function δ(Q) uniquely determines the tree Q ∈ V which is a desired one.
φ : A → ω determines these isomorphisms. Indeed, for T [B, β] ⊆ T [A, α], put Φ(T [B, β]) = T [φ(B), β • φ −1 ]. If T [C, γ] ∈ V, then put Ψ(T [C, γ]) = T [φ −1 (C), γ • φ ∪ α| ω\A ] ⊆ T [A, α]. We have Φ • Ψ(T [C, γ]) = T [C, γ] and Ψ • Φ(T [B, β]) = T [B, β].
Thus Φ and Ψ are mutually inverse bijections. Moreover, Φ and Ψ preserve the relation of inclusion between trees, hence they are isomorphisms. Now, we can define isomorphism Φ * of posets Theorem 11. κ( X i ) is a regular uncountable cardinal number.
Proof. Using Theorem 8, one can proceed analogous as in [1] (corollaries 2.8 and 2.9) or as in [2] (Proposition 3.3).
Theorem 12. There exists a v-matrix H = {P α : α < κ( X i )} such that if α < β < κ( X i ), then P β ≺ P α . Moreover, for any [T ] * ∈ V * there exists [P ] * ∈ H where [P ] * ⊆ [T ] * .
Proof. A proof is completely analogous to the proof of Lemma 2.11 in [1], or to the proof of Theorem 3.4 in [2].
Cardinal invariants
In this part results hold for the σ-ideal d 0 (V * ). One can check d 0 (V * ) ⊆ d 0 (V) similarly as in the proof of Corollary 9. We do not know whether d 0 (V * ) = d 0 (V). To obtain results for the ideal d 0 (V) we have to assume that sets X i are finite.
Lemma 13. If P is a v-partition, then the complement of the union P belongs to d 0 (V * ). Observe that, if P is a v-partition and S ⊆ X i is a selector of P, i.e. S ∩ [P ] * has exactly one point for each [P ] * ∈ P, then S ∈ d 0 (V). Theorem 15. κ( X i ) = add(d 0 (V * )).
Proof. If
Proof. Let F ⊆ d 0 (V * ) and |F | < κ( X i ). By Lemma 14, for each W ∈ F choose a v-partition P W such that P W ∩ W = ∅. By Lemma 10 there exists a v-partition P which refines each v-partition P W for W ∈ F . The set X i \ P is element of d 0 (V * ) and contains F . Thus, we have showed that add(d 0 (V * )) ≥ κ( X i ).
Let {P α : α < κ( X i )} be a v-matrix like in the Theorem 12. We will construct inductively v-matrix {Q α : α < κ( X i )} such that for any α < κ( X i ), Q α ≺ P α and if V ∈ Q α , then V \ Q α+1 = ∅. Suppose, that we have already defined a v-partition Q β and assume that α = β + 1 < κ( X i ). Fix a selector S of v-partition Q β . By Lemma 14 there exists a v-partition Q such that Q ∩ S = ∅. Let Q α be any v-partition which refines the v-partition Q and the v-partition P α . The remaining inductive steps are obvious. For any T ∈ V the intersection [T ] * ∩ { X i \ Q α : α < κ( X i )} is nonempty. Indeed, for any T ∈ V choose α < κ( X i ) and [P ] * ∈ Q α such that [P ] * ⊆ [T ] * . Thus ∅ = [P ] * \ Q α+1 ⊆ { X i \ Q α : α < κ( X i )}. Therefore, the family { X i \ Q α : α < κ( X i )} witnesses that add(d 0 (V * )) ≤ κ( X i ).
Theorem 16. If all sets X i are countable, then ω 1 ≤ κ( X i ) = add(d 0 (V * )) ≤ cov(d 0 (V * )) ≤ cf(c).
Proof. By Lemma 1 each set S ⊆ X i of the cardinality less than continuum belongs to d 0 (V). Choose sets S α ⊆ X i for α < cf(c) such that X i = {S α : α < cf(c)} and |S α | < c. Thus cov(d 0 (V)) ≤ cf(c).
If P is a v-partition, then one can choose subsets N C ⊂ C ∈ P of the cardinality less than c such that sets A \ N A and B \ N B are disjoint for any distinct members A, B of P. One can do this by the induction using the fact that the intersection of any distinct members of P is countable. But, if {P α : α < κ( X i )} is a v-matrix like in Theorem 12, then put M C = {N C β : β ≤ α} whenever C ∈ P α and C ⊆ C β ∈ P β . If all sets X i are countable, then sets M C have cardinalities less than the continuum, by Theorem 16. Under such assumptions, the family {C \ M C : C ∈ {P α : α < κ( X i )}} is called base matrix. Any base matrix consists of sets which are either disjoint or one is contained in the other. The topology on X i generated by a base matrix is called matrix topology. So, the family
{ X i } ∪ {C \ M C : C ∈ {P α : α < κ( X i )}}
is a base for the matrix topology.
Lemma 17. Suppose sets X i are countable. A subset Y ⊂ X i is nowhere dense with respect to a matrix topology if and only, if Y ∈ d 0 (V * ).
Proof. Fix a base matrix
B = {C \ M C : C ∈ {P α : α < κ( X i )}}.
At first, let Y ⊂ X i be a nowhere dense with respect to the matrix topology. Fix a set V ∈ V * and choose W ⊆ V such that W ∈ B and W ∩ Y = ∅. Since W = C \ M C , where C ∈ V * and |M C | < c, the set W contains some D ∈ V * . Any such D witnesses that Y ∈ d 0 (V * ). On the other hand, let U be a non-empty open set. Choose V ∈ {P α : α < κ( X i )} such that V \ M V ⊆ U. If Y ∈ d 0 (V * ), then there exists W ⊆ V such that W ∩ Y = ∅ and W ∈ {P α : α < κ( X i )}. Since
W \ M W ⊆ V \ M V ⊆ U we conclude that Y ∩ W \ M W = ∅.
One can find the next theorem in the paper by W. Kulpa and A. Szymański [8]. It is presented with a proof in [1].
Theorem. Let W be a collection of families consisting of open subsets of a topological space Y . Suppose that:
W is a π-base; any family in W consists of pairwise disjoint sets; |W| < τ , where τ is a regular cardinal number; each set belonging to W contains τ many pairwise disjoint open sets. Then there exists an increasing family of nowhere dense subsets {Y α : α < τ } such that {Y α : α < τ } = Y.
Thus, we can estimate cov(d 0 (V)), more accurate than these in [6].
Theorem 18. If sets X i are countable, then add(d 0 (V * )) ≤ cov(d 0 (V * )) ≤ add(d 0 (V * )) + .
Proof. Obviously, if κ( X i ) = c, then c = add(d 0 (V * )) = cov(d 0 (V * )). Suppose that κ( X i ) < c, them the above theorem, i.e. the theorem by Kulpy and Szymański, works. Let W be a base matrix. Put τ = κ( X i ) + . Then W is a π-base for the topology generated by itself on X i . Each V ∈ W contains c-many elements of W. By Corollary 9 and Lemma 17 and the theorem by Kulpy and Szymański we obtain cov(d 0 (V * )) ≤ κ( X i ) + . We are done, since Theorem 15.
Thus, if sets X i are countable, then cov(d 0 (V)) ≤ cov(d 0 (V * )). If sets X i are finite, then add(d 0 (V)) ≤ cov(d 0 (V)) ≤ add(d 0 (V)) + .
, if [P ] * , [T ] * ∈ V * , then [P ] * ⊆ [T ] * if and only, if δ(P )(k) ⊆ δ(T )(k) for all, but finitely many k ∈ ω. Lemma 7. If P, T ∈ V and [P ] * ⊆ [T ] * , then for each n ∈ ω there exists a tree Q ∈ V such that Q ⊆ n T and [Q] * = [P ] * .
For
any tree T ∈ V a poset ({[P ] * ∈ V * : [P ] * ⊆ [T ] * }, ⊆) is isomorphic with the poset (V * , ⊆). Moreover, posets ({P ∈ V : P ⊆ T }, ⊆) and (V, ⊆) are isomorphic, too. Additionally, if T = T [A, α], then each bijection
(
{[P ] * ∈ V * : [P ] * ⊆ [T ] * }, ⊆) and (V * , ⊆) as follows Φ * ([P ] * ) = [Φ(P )] * , for any [P ] * ⊆ [T ] * and [P ] * ∈ V * . The next lemma is a counterpart of the lemma 2.6 in [1]. Lemma 10. If H is a v-matrix of the cardinality less than κ( X i ), then there exists a v-partition P which refines each v-partition Q ∈ H. Proof. Orders ({[Q] * ∈ V * : [Q] * ⊆ [T ] * }, ⊆) and (V * , ⊆) are isomorphic and separative. So, one obtains a proof analogous to the proof of the lemma 2.6 in [1].
[T ] * ∈ V * , then take [P ] * ∈ P such that[P ] * ∩ [T ] * ∈ V * . Since [P ] * ∩ [T ] * ⊆ P and [P ] * ∩ [T ] * ⊆ [T ] * , we are done.Lemma 14. If S ∈ d 0 (V * ), then there exists a v-partition P such that P ∩ S = ∅. Proof. For each [T ] * ∈ V * fix [P ] * ∈ V * such that [P ] * ⊆ [T ] * and [P ] * ∩ S = ∅. Any v-partition consisting of just fixed [P ] * is a desired one.
Indeed, for any [T ] * ∈ V * , there exist [P ] * ∈ P such that [T ] * ∩ [P ] * ∈ V * . Then [T ] * ∩[P ] * ∩S has no more than one point. By Lemma 1, there exists [Q] * ∈ V * which is disjoint with S and such that [Q] * ⊆ [T ] * .
6], etc. From now on, assme that all X i are countable. Elements [Q] * , [P ] * ∈ V * are incompatible whenever the intersection [P ] * ∩ [Q] * contains no element of V * . Thus, if [Q] * and [P ] * are incompatible, then [P ] * ∩ [Q] * is countable. But, if [P ] * ∩ [Q] * is uncountable, then [P ] * ∩ [T ] * ∈ V * . Indeed, the intersection [P ] * ∩ [Q] * is a countable union of closed sets [P s ] ∩ [Q t ], where s, t ∈ f in X i . Therefore, some [P s ] ∩ [Q t ] has to be uncountable and hence P s ∩Q t is a trimmed tree, moreover [P s ∩Q t ] * = [P ] * ∩ [Q] * .
Piotr Kalemba, Institute of Mathematics, University of Silesia, ul. Bankowa 14, 40-007 Katowice E-mail address: [email protected] Szymon Plewik, Institute of Mathematics, University of Silesia, ul. Bankowa 14, 40-007 Katowice E-mail address: [email protected]
The space of ultrafilters on N covered by nowhere dense sets. B Balcar, J Pelant, P Simon, Fund. Math. 1101B. Balcar, J. Pelant and P. Simon, The space of ultrafilters on N covered by nowhere dense sets, Fund. Math. 110 (1980), no. 1, 11 -24.
Disjoint refinement, Handbook of Boolean algebras. B Balcar, P Simon, 2North-Holland, AmsterdamB. Balcar and P. Simon, Disjoint refinement, Handbook of Boolean algebras, North-Holland, Amsterdam, (1989) Vol. 2, 333 -388.
Strolling through paradise. J Brendle, Fund. Math. 1481J. Brendle, Strolling through paradise, Fund. Math. 148 (1995), no. 1, 1 -25.
Sur les caracteres de convergence des series a termes positifs et sur les fonctions indefiniment croissantes. J Hadamard, Acta Mathematica. 18J. Hadamard, Sur les caracteres de convergence des series a termes positifs et sur les fonctions indefiniment croissantes, Acta Mathematica 18 (1894), 319 - 336.
The third millennium edition, revised and expanded. T Jech, Set theory. BerlinSpringer-VerlagT. Jech, Set theory, The third millennium edition, revised and expanded. Springer Monographs in Mathematics. Springer-Verlag, Berlin, (2003).
On the ideal (v 0 ), Cent. P Kalemba, Sz, A Plewik, Wojciechowska, Eur. J. Math. 62P. Kalemba and Sz. Plewik and A. Wojciechowska, On the ideal (v 0 ), Cent. Eur. J. Math. 6 (2008), no. 2, 218 -227.
Classical descriptive set theory. A Kechris, Graduate Texts in Mathematics. 156Springer-VerlagA. Kechris, Classical descriptive set theory, Graduate Texts in Mathematics 156, Springer-Verlag, New York, (1995).
Decomposition into nowhere dense sets. W Kulpa, A Szymański, Bull. Acad. Polon. Sci. 25W. Kulpa and A. Szymański, Decomposition into nowhere dense sets, Bull. Acad. Polon. Sci. 25 (1977), 37 -39.
Special subsets of the reals and tree forcing notions. M Kysiak, A Nowik, T Weiss, Proc. Amer. Math. Soc. 1359M. Kysiak, A. Nowik and T. Weiss, Special subsets of the reals and tree forcing notions, Proc. Amer. Math. Soc. 135 (2007), no. 9, 2975 -2982.
Countable partitions of product spaces. G Moran, D Strauss, Mathematika. 272G. Moran and D. Strauss, Countable partitions of product spaces, Mathematika 27 (1980), no. 2, 213 -224.
M Scheepers, Gaps in ( ω ω, ≺), Set theory of the reals. Ramat Gan; Bar-Ilan Univ., Ramat Gan6M. Scheepers, Gaps in ( ω ω, ≺), Set theory of the reals (Ramat Gan, 1991), 439 -561, Israel Math. Conf. Proc., 6, Bar-Ilan Univ., Ramat Gan, (1993).
| []
|
[
"Deterministic Replacement Path Covering *",
"Deterministic Replacement Path Covering *"
]
| [
"Karthik C S \nWeizmann Institute of Science\nTel Aviv University\n\n",
"Merav Parter [email protected] \nWeizmann Institute of Science\nTel Aviv University\n\n"
]
| [
"Weizmann Institute of Science\nTel Aviv University\n",
"Weizmann Institute of Science\nTel Aviv University\n"
]
| []
| In this article, we provide a unified and simplified approach to derandomize central results in the area of fault-tolerant graph algorithms. Given a graph G, a vertex pair (s, t) | 10.1137/1.9781611976465.44 | [
"https://export.arxiv.org/pdf/2008.05421v3.pdf"
]
| 221,103,553 | 2008.05421 | 19695ccade49a758ea24a4338eea34f0c2ed4e3c |
Deterministic Replacement Path Covering *
Karthik C S
Weizmann Institute of Science
Tel Aviv University
Merav Parter [email protected]
Weizmann Institute of Science
Tel Aviv University
Deterministic Replacement Path Covering *
In this article, we provide a unified and simplified approach to derandomize central results in the area of fault-tolerant graph algorithms. Given a graph G, a vertex pair (s, t)
Introduction
Resilience of combinatorial graph structures to faults is a major requirement in the design of modern graph algorithms and data structures. The area of fault tolerant (FT) graph algorithms is a rapidly growing subarea of network design in which resilience against faults is taken into consideration. The common challenge addressed in those algorithms is to gain immunity against all possible fault events without losing out on the efficiency of the computation. Specifically, for a given graph G and some bound f on the number of faults, the FT-algorithm is required, in principle, to address all |E(G)| f fault events, but (usually) using considerably less space and time. The traditional approach to mitigate these challenges is based on a combinatorial exploration of the structure of the graph under faults. While this approach has led to many exciting results in the area, it is however limited in two aspects. First, in many cases the combinatorial characterization is considerably harder when moving from a single failure event to events with two or more failures. Second, this characterization is mostly problem specific and rarely generalizes to more than one class of problems.
One of the most notable techniques in this area which overcomes the aforementioned two limitations is the fault-tolerant sampling technique introduced by Weimann and Yuster [WY13]. This technique is inspired by the color-coding technique [AYZ95], and provides a general recipe for translating a given fault-free algorithm for a given task into a fault-tolerant one while paying a relatively small overhead in terms of computation time and other complexity measures of interest (e.g., space). Indeed this approach has been applied in the context of distance sensitivity oracles [GW20, GW20,CC20b], fault-tolerant spanners [DK11,BCPS15,DR20a], fault-tolerant reachability preservers [CC20a], distributed minimum-cut computation [Par19a], and resilient distributed computation [PY19b, PY19a, CPT20, HP20]. The high-level idea of this technique is based on sampling a (relatively) small number of subgraphs G 1 , . . . , G of the input graph G by oversampling edges (or nodes) to act as faulty-edges, in a way that a single sampled subgraph accounts for potentially many fault events. An additional benefit of this approach is that it smoothly extends to accommodate multiple edge and vertex faults.
Two central applications of the above approach that we focus on are distance sensitivity oracles and fault-tolerant spanners. An f -sensitivity distance oracle (f -DSO) is a data-structure that reports shortest path distances when at most f edges of the graph fail. Weimann and Yuster [WY13] employed the above technique to provide the first randomized construction of f -DSO for n-vertex directed graphs accomodating f = O(log n/ log log n) many number of faults. Their datastructure has subcubic preprocessing time and subquadratic query time, and these bounds are still the state-of-the-art results for a wide range of parameters. Recently, van-den Brand and Saranurak [vdBS19] presented a randomized monte-Carlo DSO that can handle f ≥ log n updates. For small edge weights, their bounds improve over [WY13]. For the single failure case, Grandoni and Williams [GW20] also employed the sampling technique to provide an improved 1-DSO with subquadratic preprocessing time and sublinear query time. Very recently, Chechik and Cohen [CC20b] Another important application of this sampling technique appears in the context of faulttolerant spanners. Given an n-vertex graph G, and integer parameters f and k, an f -fault-tolerant k-spanner H ⊆ G is a subgraph that contains a k-spanner in G \ F for any set F ⊆ V of at most f vertices in G. The problem of designing sparse fault-tolerant spanners resilient to vertex faults was introduced by Chechik et al. [CLPR10]. Using a careful combinatorial construction they showed that one can build such spanners while paying an additional overhead of k f in the size of the output spanner (when compared to the standard k-spanner). Dinitz and Krauthgamer [DK11] simplified and improved their construction. Using the sampling technique with the right setting of parameters, they provided a meta-algorithm for constructing fault-tolerant spanners where the time and size overheads are bounded by the factor O(k 2−1/f ). Their approach was later extended by Braunschvig et al. [BCPS15] to provide the first (and currently state-of-the-art) constructions of nearly-additive fault-tolerant spanners. Very recently, Chakraborty and Choudhary [CC20a] employed this technique to provide a randomized construction of strong-connectivity preservers of directed graphs under f failures with O(f 2 f · n 2−1/f ) edges. To this date, there are no known efficient deterministic constructions that match the size bounds of these above-mentioned randomized constructions.
In this work we provide a unified and simplified approach for derandomizing the above mentioned central results. We introduce the notion of replacement path covering (RPC) which captures the key properties of the collection of sampled subgraphs obtained by the FT-sampling technique. Given a graph G, a vertex pair (s, t) ∈ V (G) × V (G), and a set of edge faults F ⊆ E(G), a replacement path P (s, t, F ) is an s-t shortest path in G \ F . To avoid repetitive descriptions, we mostly consider in this paper the setting of edge faults. However, all our definitions of RPC and their constructions naturally extend to vertex faults.
Definition 1 (Replacement Path Covering (RPC)). A subgraph G ⊆ G covers a replacement path P (s, t, F ) if P (s, t, F ) ⊆ G and F ∩ E(G ) = ∅.
A collection of subgraphs of G, say G L,f , is an (L, f )-RPC if for every s, t ∈ V and every F ⊆ E such that |F | ≤ f , we have that each P (s, t, F ) replacement path 1 with at most L edges is covered by some subgraph G in G L, f In some algorithmic applications of (L, f )-RPC, we have that L ≤ f and in others applications we have L > f . However, for simplicity of the discussion of this paragraph, we assume that L > f . The FT-sampling technique provides an efficient randomized procedure for computing an (L, f )-RPC of covering value r = c · f L f log n for some constant c (e.g., Lemma 2 in [GW20]): Sample r subgraphs G 1 , . . . , G r where each G i ⊆ G is formed by sampling each edge e ∈ E(G) into G i independently with probability p = 1 − 1/L. By taking c to be large enough, it is easy to show that a subgraph G i covers a fixed P (s, t, F ) with probability of Ω(1/L f ). Thus by using Chernoff and employing the union bound over all n O(f ) distinct P (s, t, F ) paths, one gets that this graph collection is an (L, f )-RPC, with high probability (see Lemma 7 for a formal proof). The computation time of this randomized procedure is O(r · m) (where m := |E(G)|). Alon, Chechik and Cohen [ACC19] noted that in many settings, the deterministic computation of (L, f )-RPC poses the main barrier for derandomization, and raised the following question: 1 In case there are multiple s-t shortest paths in G \ F with at most L edges, it is sufficient to cover one of them.
"What is the minimum r such that we can deterministically compute such graphs G 1 . . . , G r in O(n 2 r) time such that for every P (s, t, F ) on at most L nodes there is a subgraph G i that does not contain F but contains P (s, t, F )?" [ACC19] also mentioned that it is not clear how to efficiently derandomize a degenerated version of the above construction and proposed some relaxation of these requirements, for which we indeed obtain improved bounds in this paper.
Independently to the work of [ACC19], Parter [Par19a] recently provided 2 a deterministic construction of (L, f )-RPC for the purposes of providing an efficient distributed computation of small cuts in a graph. These RPCs are obtained by introducing the notion of (n, k) universal hash functions. For the purpose of small cuts computation, L was taken to be the diameter of the graph, and f was considered to be constant. The goal in [Par19a] was to provide an (L, f )-RPC of value poly(L). Their construction in fact yields a value of L 4f +1 . This value is already too large for several applications such as the DSO by [WY13]. Indeed, for our centralized applications, it is desirable to improve both the computation time as well as the covering value of these (L, f )-RPC constructions, and to match (to the extent possible) the bounds of their randomized counterparts.
Our Contributions
We take a principled approach for efficiently computing almost optimal (L, f )-RPC for a wide range of parameters of interest. Our algorithms extend the approach of [Par19a] and are based on the introduction of a novel notion of hash families that we call Hit and Miss (HM) hash families. We show how any Boolean alphabet HM hash family can be used to build a RPC, and in turn give near optimal constructions of HM hash family based on (algebraic) error correcting codes such as Reed-Solomon codes and Algebraic-Geometric codes. Our key result is as follows:
Theorem 2 ((L, f )-RPC)
. Given a graph G on m edges, length parameter L, and fault parameter f , there is a deterministic algorithm A for computing an (L, f )-RPC of G denoted by G L,f such that,
CV(G_{L,f}) ≤
(αcLf)^{b+1}, if a ≥ m^{1/c} for some constant c ∈ N,
(αLf)^{b+2} · log m, if a = m^{o(1)} and b = Ω(log m),
(αLf)^{b+2} · log m, if a ≤ log m,
(αLf log m)^{b+1}, otherwise,
where a = max{L, f }, b = min{L, f }, and α ∈ N is some small universal constant. Moreover, the running time of A denoted by T (A) is,
T(A) =
m^{1+o(1)} · CV(G_{L,f}), if a = m^{o(1)} and b = Ω(log m),
m · (log m)^{O(1)} · CV(G_{L,f}), otherwise.
This resolves the open problem of Alon, Chechik and Cohen [ACC19] and considerably improves over the bounds of the second author [Par19a] in the entire range of parameters. We further improve on the parameters of Theorem 2 (see Theorem 48) when instead of accounting for all fault events, we only have to be resilient to a list of fault events that are given to us. Even this relaxed version was mentioned in [ACC19].
At a meta level, RPCs are designed to handle faults in graphs, and error correcting codes are constructed to handle errors in messages. Both do this by adding redundancy to the underlying information in some way: the encoding of a message adds many new coordinates to the message without adding any new additional information, and similarly RPC of a graph is a redundant way to represent a graph, as we only store subgraphs of the same original graph. In this work, we formalize this meta-connection to an extent through the ideas involved in proving Theorem 2.
Lower Bound for (L, f )-RPCs. We also prove lower bounds on the covering value of RPC, which to the best of our knowledge had not been addressed before. That is, despite the ubiquity of the FT-sampling approach to build (L, f )-RPCs, it is still unclear whether the bound that it provides on the covering value is the best possible. This question is interesting even if the items to be covered correspond to arbitrary subsets of edges. The question becomes even more acute in our setting where the covered items are structured, i.e., correspond to shortest-paths in some underlying subgraphs. The optimality of the randomized procedure in this context is even more questionable, as it is totally invariant to the structure of the graph. In principle, one might hope to improve these bounds by taking the graph structure into account.
Perhaps surprisingly we show that the covering values obtained by the randomized FT-sampling procedure are nearly optimal, at least for the setting where L ≥ f . Since our deterministic bounds almost match the randomized ones, we obtain almost-optimality for our bounds. Interestingly, the lower bound graph is obtained by employing slight modifications to the lower bound graphs used by [Par15] in the context of fault-tolerant FT-BFS structures. For a given (possibly weighted) graph G = (V, E) and a source vertex s ∈ S, a subgraph H ⊆ G is an f -fault-tolerant (FT)-BFS if dist(s, t, H \ F ) = dist(s, t, G \ F ) for every vertex t ∈ V and every sequence of F edge faults. The definition can be naturally extended to vertex faults as well. The second author and Peleg [PP16] presented a lower-bound construction for f = 1 with Ω(n 3/2 ) edges. The second author extended this lower bound construction to any f ≥ 1 faults with size bounds of Ω(n 2−1/(f +1) ) edges [Par15]. We show that a slight modification to the (unweighted) lower-bound graph of [Par15] by means of introducing weights, naturally implies a lower bound for the covering value of an (L, f )-RPC.
Derandomization of the Algebraic DSO by Weimann- Yuster. Our key application of the construction of efficient (L, f )-RPC is for implementing the algebraic DSO of [WY13]. [ACC19] presented a derandomization of the combinatorial f -DSO of [WY13], resulting with a preprocessing time of O(n 4−α ) and a query time of O(n 2−2α/f ), matching the randomized bounds of [WY13]. In this paper we focus on derandomizing the algebraic algorithm of [WY13] as the latter can be implemented in subcubic preprocessing time and subquadratic query time. We show: There exists a deterministic algorithm that given G and parameters f = O(log n/ log log n) and 0 < α < 1, constructs an f -sensitivity distance oracle in time
1. O(Mn^{3.373+2/f−α} · (c′f)^{f+1}) if α = 1/c for some constant c,
2. O(Mn^{3.373+2/f−α} · (c′f log n)^{f+1}) if α = o(1),
for some constant c′. Given a query (s, t, F ) with s, t ∈ V and F ⊆ E ∪ V being a set of at most f edges or vertices, the deterministic query algorithm computes in O(n^{2−2(1−α)/f}) time the distance from s to t in the graph G \ F .
Observe that for constant number of at least f ≥ 7 faults, the preprocessing time of our construction even improves over that of Weimann-Yuster when fixing the query time to be O(n 2−2(1−α)/f ). This is because our algorithm also integrates ideas and optimizations from [ACC19] and [CC20b].
This resolves the open problem of [ACC19] concerning existence of deterministic DSO with subquadratic preprocessing time and subquadratic query time (at least with small edge weights).
While the deterministic (L, f )-RPC of Theorem 2 constitutes the key tool for the derandomization, the final algorithm requires additional effort. Specifically, we use the notion of FT-trees introduced in [ACC19] for the purpose of the deterministic combinatorial DSO. We provide an improved algebraic construction of these trees using the (L, f )-RPCs. One obstacle that we need to handle is that the approach of [ACC19] assumed that shortest path are unique by providing an algorithm that breaks the ties in a consistent manner. In our setting, the computation time of this algorithm is too heavy and thus we avoid this assumption, by making more delicate arguments.
Derandomization of Fault-Tolerant Spanner Constructions. Finally, we show that the integration of the (L, f )-RPC of Theorem 2 into the existing algorithms for (vertex) fault-tolerant spanners provide the first deterministic constructions of these structures. The running time and the size bounds of the spanners nearly match the one obtained by the randomized counter parts.
Specifically, for f -fault tolerant multiplicative spanners, we provide a nearly-optimal derandomization of the Dinitz and Krauthgamer's construction [DK11]. This follows directly by using our vertex variant of (L, f )-RPC of Theorem 2 with L = 2.
A subgraph H ⊆ G is an f -fault tolerant t-spanner if dist(s, t, H \ F ) ≤ t · dist(s, t, G \ F ) for every s, t, F ⊆ V , |F | ≤ f . We show:
Theorem 5 (Derandomized of Theorem 2.1 of [DK11], Informal). If there is a deterministic algorithm A that on every n-vertex m-edge graph builds a t-spanner of size s(n) and time τ (n, m, t), then there is an algorithm that on any such graph builds an f -fault tolerant t-spanner of size O(f 3 · s(2n/f )) and time O(f 3 (τ (2n/f, m, t) + m)).
The above derandomization matches the randomized construction of [DK11] upto a multiplicative factor of log 3 n in the size and time bounds. In the same manner, we also apply derandomization for the nearly-additive fault-tolerant spanners of Braunschvig et al. [BCPS15]. This provides the first deterministic constructions of nearly additive spanners.
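A minimal sketch of the pattern behind Theorem 5: the vertex subsets supplied by a (2, f) vertex replacement path covering are fed to any fault-free spanner routine, and the outputs are unioned. The helper names and the edge-list interface are assumptions for illustration, not the implementation used in [DK11].

```python
def fault_tolerant_spanner(edges, vertex_subsets, spanner_alg):
    """Union of fault-free spanners, one per vertex subset of a (2, f) vertex RPC.

    edges          : iterable of (u, v) pairs describing G
    vertex_subsets : sets V_1, ..., V_r; for every edge (u, v) and fault set F with
                     |F| <= f and u, v not in F, some V_i contains u and v and avoids F,
                     so the stretch guarantee survives in G \ F
    spanner_alg    : any deterministic t-spanner routine mapping an edge list to an edge list
    """
    spanner_edges = set()
    for V_i in vertex_subsets:
        induced = [(u, v) for (u, v) in edges if u in V_i and v in V_i]
        spanner_edges.update(spanner_alg(induced))
    return sorted(spanner_edges)
```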
Comparison with a recent independent work of [BDR20]. Independently of our work, [BDR20] presented a new slack version of the greedy algorithm from [BDPW18,DR20b] to obtain (vertex) fault-tolerant spanners with optimal size bounds. Their main algorithm is randomized, and the emphasis there is on optimizing the size of the output spanner. To derandomize their construction, [BDR20]
Key Techniques
In this section, we detail some of the key techniques introduced in this paper.
Deterministic (L, f )-Replacement Path Covering
While the introduction of the notion of RPC is our key conceptual contribution, we elaborate in this subsection on our framework for constructing deterministic RPCs, which we also believe will be of independent interest. The central combinatorial tool is the notion of Hit and Miss (HM) hash families (Definition 13): families of hash functions such that for any two disjoint sets A and B of sizes at most a and b, respectively, some function in the family separates every pair in A × B. We show that every error correcting code with relative distance greater than 1 − 1/(ab) can be seen as an HM hash family. This insight yields a systematic way to construct HM hash families.
Connection to Replacement Path Covering. We then consider HM hash family over the Boolean alphabet and associate the domain of the hash family with the edges (or vertices) of the graph for which we would like to design a RPC. We observe that every hash function of the Boolean HM hash family immediately gives a subgraph in RPC, where we view the function as a Boolean vector of length equal to the number of edges in the graph, and thus the hash function acts as an indicator vector of whether to pick the edge or not in the subgraph. Moreover, the property of a RPC always avoiding faults but containing the replacement path in at least one of the subgraphs (see Definition 1) exactly coincides with the definition of a Boolean HM hash family, and thus a Boolean HM hash family yields a RPC.
Overview. We now provide a short summary of our deterministic construction of an (L, f)-RPC (assuming L ≥ f) for a graph G with m edges. We start from an error correcting code C over an alphabet of size q, block length ℓ, message length log_q m, and relative distance greater than 1 − 1/(Lf). Next, we interpret C as an HM hash family from [m] to [q] with ℓ hash functions. Then we apply the alphabet reduction lemma to obtain an HM hash family from [m] to {0, 1} with ℓ · q^f many hash functions. Finally, using the connection between Boolean HM hash families and Replacement Path Coverings, we construct an (L, f)-RPC G_{L,f} with covering value 2 · q^f · ℓ in time CV(G_{L,f}) · O(m). In other words, the alphabet size and the block length of the starting code C directly determine the covering number of our RPC. Depending on the relationship between L and f, we use either just a Reed-Solomon code, or a concatenation of an Algebraic-Geometric code (as outer code) with a Reed-Solomon code (as inner code), to obtain the parameters given in Theorem 2.
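The following is a minimal, self-contained sketch (not the paper's implementation) of the bridge between a Boolean HM hash family and a replacement path covering: each Boolean hash function over the edge labels acts as an indicator vector, and the two subgraphs it induces are added to the covering. The brute-force hash family and the toy parameters below are purely illustrative.

```python
from itertools import combinations

def rpc_from_boolean_family(edges, hash_matrix):
    """hash_matrix[i][e] in {0,1}; each row induces two edge subsets."""
    subgraphs = []
    for row in hash_matrix:
        g0 = {e for e in edges if row[e] == 0}   # edges mapped to 0
        g1 = {e for e in edges if row[e] == 1}   # edges mapped to 1
        subgraphs.extend([g0, g1])
    return subgraphs

def covers(subgraphs, path_edges, fault_edges):
    """The covering property of Definition 1: some subgraph contains the whole
    path and avoids every fault."""
    return any(path_edges <= g and not (fault_edges & g) for g in subgraphs)

if __name__ == "__main__":
    m, L, f = 8, 3, 2
    edges = range(m)
    # Toy 'hash family': all 0/1 rows (exponentially many; for illustration only --
    # the paper replaces this by small code-based families).
    family = [[(x >> i) & 1 for i in edges] for x in range(2 ** m)]
    subgraphs = rpc_from_boolean_family(edges, family)
    for P in combinations(edges, L):
        for F in combinations(set(edges) - set(P), f):
            assert covers(subgraphs, set(P), set(F))
    print("every (path, fault) pair of sizes (L, f) is covered")
```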
Derandomization of Weimann-Yuster DSO
Our key contribution is in utilizing the (L, f)-RPC to compute fault-tolerant trees with improved time bounds compared to those of [ACC19]. Fault-tolerant trees were introduced by [CCFK17, ACC19]; specifically, in [ACC19] they served as the basis for implementing the combinatorial DSO of [WY13]. For a given vertex pair s, t and integer parameters L, f, the FT-tree FT_{L,f}(s, t) consists of O(L^f) nodes, where each node is labeled by a pair ⟨P, F⟩ such that P is an s-t path in G \ F with at most L edges and F is a sequence of at most f faults which P avoids. Let d_L(s, t, G′) denote the weight of the shortest s-t path in G′ among all s-t paths with at most L edges. The key application of FT-trees is that given a query (s, t, F) and the FT-tree FT_{L,f}(s, t), one can compute d_L(s, t, G \ F) in time O(f^2 log n). [ACC19] provided an efficient combinatorial construction of all the FT-trees in time O(m · n · L^{f+1}), thus super-cubic time for dense graphs.
By using our (L, f)-RPC family G_{L,f}, we provide an improved (algebraic) construction of these trees in sub-cubic time for graphs with small integer weights. The construction of these trees boils down to a simple computational task which we can efficiently solve using the (L, f)-RPC. The task is as follows: given a triplet (s, t, F), compute d_L(s, t, G \ F). To build the trees, it is required to solve this task for O(n^2 · L^f) triplets. Our algorithm starts by applying a variant of the All-Pairs-Shortest-Path (APSP) algorithm in each of the subgraphs G′ ∈ G_{L,f}. This variant, denoted AP SP_{≤L} [CC20b], restricts attention to computing only the shortest paths that contain at most L edges, which can be done in time O(MLn^ω) using matrix multiplications. Then, to compute d_L(s, t, G \ F) for a given triplet (s, t, F), we show that it is sufficient to consider a small collection of subgraphs G_F ⊆ G_{L,f} where |G_F| = O(fL log n), and to return the minimum d_L(s, t, G′) over every G′ ∈ G_F. Since the d_L(s, t, G′) distances are precomputed by the AP SP_{≤L} algorithm, each d_L(s, t, G \ F) can be computed in Õ(L) time.
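A schematic sketch of this query task follows; the helper names (dist_tables, indices_avoiding) are assumptions of the sketch and stand in for the precomputed AP SP_{≤L} tables and the subgraph-detection routine, respectively.

```python
INF = float("inf")

def d_L_after_faults(s, t, F, dist_tables, indices_avoiding):
    """dist_tables[i][(s, t)] holds the precomputed <=L-edge distance in the
    i-th subgraph of the covering; indices_avoiding(F) returns the indices of
    the subgraphs that contain no edge of F.  The faulty distance is then a
    minimum over those precomputed values."""
    return min((dist_tables[i].get((s, t), INF) for i in indices_avoiding(F)),
               default=INF)
```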
Gap between Det. and Randomized (L, f )-Replacement Path Covering
For the sake of discussion assume that f = O(1) and L = n^ε for some constant ε. Our current deterministic constructions provide an (L, f)-RPC with covering value O(L^{f+1}), whereas the randomized constructions obtain a value of O(L^f). This gap is rooted in the following distinction between the randomized and deterministic constructions. For the purposes of the randomized construction, the (L, f)-RPC should cover n^{O(f)} replacement paths. The reason is that there are n^{O(f)} possible fault events, and for each sequence F of faults, the subgraph G \ F contains n^2 shortest paths (i.e., replacement paths avoiding F). In particular, if there are multiple s-t shortest paths in G \ F, it is sufficient for the RPC to cover one of them. Since a single sampled subgraph G_i covers a given path P(s, t, F) with probability c/L^f, by taking r = O(f L^f log n) subgraphs, we get that P(s, t, F) is covered by at least one of the subgraphs with probability 1 − 1/n^{c·f}. Applying the union bound over all n^{O(f)} replacement paths establishes the correctness of the construction. In contrast, our deterministic construction provides a covering for any P(s, t, F) path, and also for any arbitrary collection of L edges A and f edges B with A ∩ B = ∅. That is, since our construction does not exploit the structure of the paths, it provides a covering for n^{Ω(L)} paths. Note that if the randomized construction were required to cover n^{Ω(L)} paths rather than n^{O(f)}, we would have ended up with O(L^{f+1}) subgraphs in that covering as well. In other words, the current gap in the bounds can be explained by the number of replacement paths that the (L, f)-RPC is required to cover. Since in the deterministic constructions it is a-priori unknown which replacement paths will be required to be covered, they cover all n^{Ω(L)} possible paths.
Importantly, in Appendix C, we consider a relaxed variant of the (L, f)-RPC problem, introduced by [ACC19], for which we are able to provide nearly matching bounds to the randomized construction. Specifically, in that setting, we are given as input a collection of k pairs {(P, F)} where P is a path with at most L edges and F is a set of at most f faults which P avoids. We then provide an efficient deterministic construction of a restricted (L, f)-RPC family G of value O(log k · L^f), i.e., of the same value as obtained by the randomized construction. The graph collection G then satisfies that for every pair (P, F) in the input set, there is a subgraph G′ ∈ G such that P ⊆ G′ and G′ ∩ F = ∅. This further demonstrates that the only reason for the gap between our deterministic and randomized bounds is rooted in the gap in the number of replacement paths that those constructions are required to cover.
Preliminaries
Notations. Throughout this paper, G denotes a (possibly weighted) graph, V(G) denotes the vertex set of a graph G, and E(G) denotes the edge set of a graph G. In case the graph is weighted, the weights are integers in [−M, M]. For u, v ∈ V and a subgraph G′, let dist(u, v, G′) denote the shortest u-v path distance in G′. For an x-y path P and a y-w path P′, let P • P′ denote the concatenation of the two paths. Also, for any n ∈ N and j ∈ N, we denote by $\binom{[n]}{j}$ the collection of all subsets of [n] of size exactly j, by $\binom{[n]}{\leq j}$ the collection of all subsets of size at most j, and by $\binom{n}{\leq j}$ the sum $\sum_{i \in [j]} \binom{n}{i}$.
Replacement Paths and Randomized (L, f ) Covering
For a weighted graph G = (V, E, w) and a path P ⊆ G, let |P| be the number of edges in P and let w(P) = Σ_{e∈P} w(e) be the weighted sum of the edges in P. Let SP_G(s, t, F) be the collection of all s-t shortest paths in G \ F. Every path P_G(s, t, F) ∈ SP_G(s, t, F) is called a replacement path. For a given integer L, let SP^L_G(s, t, F) be the collection of all the shortest s-t paths in G \ F that contain at most L edges. A path in SP^L_G(s, t, F) is referred to as P^L_G(s, t, F). Let d_L(s, t, G \ F) = w(P^L_G(s, t, F)). If SP^L_G(s, t, F) = ∅, i.e., there is no path from s to t in G \ F containing at most L edges, then define P^L_G(s, t, F) = ∅ and d_L(s, t, G \ F) = ∞. For F = ∅, we abbreviate P^L_G(s, t, ∅) = P^L_G(s, t) as the shortest s-t path with at most L edges, and d_L(s, t, G) = w(P^L_G(s, t)) is the length of the path. When the graph G is clear from the context, we may omit it and write P(s, t, F) and P^L(s, t, F).
The following lemma is obtained via the doubling method of [YZ05], recently used in [CC20b]. (The algorithm provided in [YZ05] is randomized; Section 8 of [YZ05] describes how to derandomize it with essentially no loss in efficiency.)

Lemma 6 (Lemma 5 of [CC20b]). For every n-vertex subgraph G′ ⊆ G, there is an algorithm that computes {d_L(s, t, G′), P_L(s, t, G′)}_{s,t∈V} in time O(LMn^ω).
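The following is a naive sketch of hop-bounded all-pairs distances by repeated min-plus relaxation; it illustrates the quantity d_L(s, t, G′) only, and runs in O(L·n^3) rather than the algebraic O(LMn^ω) bound of the lemma, which relies on fast matrix multiplication over small integer weights.

```python
import numpy as np

def hop_bounded_apsp(weight, L):
    """d_L(s, t): shortest s-t distance among paths with at most L edges.
    weight[u][v] is the edge weight (np.inf if the edge is absent)."""
    base = weight.copy()
    np.fill_diagonal(base, 0.0)          # paths with zero edges
    dist = base.copy()                    # after this line: <= 1 edge
    for _ in range(L - 1):
        # min-plus product with the weight matrix: allow one more hop
        dist = np.minimum(dist, (dist[:, :, None] + weight[None, :, :]).min(axis=1))
    return dist
```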
The next lemma summarizes the quality of the randomized (L, f )-RPC procedures as obtained in [WY13] and [DK11]. The proof is deferred to Appendix B.
Lemma 7 (Randomized (L, f)-RPC). For every n-vertex graph G = (V, E) and integer parameters L, f ≤ n, one can compute a collection G = {G_1, …, G_r} of r subgraphs such that w.h.p. G is an (L, f)-RPC, where r = O(f · max{L, f}^{min{L,f}} · log n). The computation time is O(r · |E|).
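A minimal sketch of the sampling behind this lemma (the L ≥ f case, as in the appendix proof) is given below; the constant c and the use of len(edges) as a stand-in for n are illustrative assumptions.

```python
import math
import random

def randomized_rpc(edges, L, f, c=4):
    """Sample r = c * f * L^f * log n subgraphs, each keeping every edge
    independently with probability 1 - 1/L; w.h.p. the collection is an
    (L, f)-RPC for L >= f."""
    n_guess = max(2, len(edges))                  # stand-in for n in the log factor
    r = int(c * f * (L ** f) * math.log(n_guess)) + 1
    keep_prob = 1.0 - 1.0 / L
    return [{e for e in edges if random.random() < keep_prob} for _ in range(r)]
```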
Error Correcting Codes
In this subsection, we recall the definition of error correcting codes and some standard code constructions known in literature. We define below a notion of distance used in coding theory (called Hamming distance) and then define error correcting codes with its various parameters.
Definition 8 (Distance). Let Σ be a finite set and ℓ ∈ N. The distance between x, y ∈ Σ^ℓ, denoted by ∆(x, y), is defined to be:

∆(x, y) = (1/ℓ) · |{i ∈ [ℓ] | x_i ≠ y_i}|.
Definition 9 (Error Correcting Code). Let Σ be a finite set. For every ℓ ∈ N, a subset C ⊆ Σ^ℓ is said to be an error correcting code with block length ℓ, message length k, and relative distance δ if |C| ≥ |Σ|^k and for every distinct x, y ∈ C, ∆(x, y) ≥ δ. We then denote ∆(C) = δ. Moreover, we say that C is a [k, ℓ, δ]_q code to mean that C is a code defined over an alphabet set of size q and is of message length k, block length ℓ, and relative distance δ. Finally, we refer to the elements of a code C as codewords.
For the results in this article, we require codes with certain extremal properties. First, we recall Reed-Solomon codes whose codewords are simply the evaluation of univariate polynomials over a finite field.
Theorem 10 (Reed-Solomon Codes [RS60]). For every prime power q and every k ≤ q, there exists a [k, q, 1 − (k−1)/q]_q code.
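As a small illustration of the evaluation view of Reed-Solomon codewords used above, the sketch below (toy prime field, illustrative only) encodes a message as the evaluations of its polynomial; two distinct codewords of degree < k then agree on at most k − 1 coordinates.

```python
def reed_solomon_codeword(message, q):
    """Evaluate the degree-(k-1) polynomial with coefficients `message` at all
    points of F_q (q prime in this toy sketch), giving a block of length q."""
    return [sum(a * pow(x, i, q) for i, a in enumerate(message)) % q
            for x in range(q)]

# toy usage: two distinct degree-<3 polynomials over F_11 agree on <= 2 points
c1 = reed_solomon_codeword([1, 2, 3], 11)
c2 = reed_solomon_codeword([4, 0, 3], 11)
assert sum(a == b for a, b in zip(c1, c2)) <= 2
```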
These codes achieve the best possible tradeoff between the rate of the code (i.e., the ratio of message length to block length) and the relative distance of the code in the large alphabet regime as they meet the Singleton bound [Sin64]. However, if we desire codes with alphabet size much smaller than the block length then, Algebraic-Geometric codes [Gop70,TVZ82] are the best known construction of codes achieving a good tradeoff between rate and relative distance (but do not meet the Singleton bound). We specify below a specific construction of such codes.
Theorem 11 (Algebraic-Geometric Codes [GS96]). Let p be a prime square greater than or equal to 49, and let q := p^c for any c ∈ N. Then for every k ∈ N, there exists a [k, k · √q, 1 − 3/√q]_q code.
Finally, we recall here a well-known fact about code concatenation (for example, see Chapter 10.1 of [GRS19]).

Fact 12. Let k, ℓ_1, ℓ_2, c, q ∈ N and let δ_1, δ_2 ∈ [0, 1]. Suppose we are given a [k, ℓ_1, δ_1]_{q^c} outer code C_1 and a [c, ℓ_2, δ_2]_q inner code C_2. Then the concatenation of the two codes C_1 • C_2 is a [k, ℓ_1 · ℓ_2, δ_1 · δ_2]_q code.
Hit and Miss Hash Families
In this section, we show the construction of a certain class of hash families which will subsequently be used to design a deterministic algorithm for computing an (L, f )-RPC with a small CV. Below we define the notion of Hit and Miss hash families.
Definition 13 (Hit and Miss Hash Family). For every N, a, b, ℓ, q ∈ N such that b ≤ a, we say that H := {h_i : [N] → [q] | i ∈ [ℓ]} is an [N, a, b, ℓ]_q-HM hash family if for every two disjoint sets A, B ⊆ [N] with |A| ≤ a and |B| ≤ b, there exists some i ∈ [ℓ] such that

∀(x, y) ∈ A × B, h_i(x) ≠ h_i(y).     (1)
In the cases when N, a, b are clear from the context, we simply refer to H as an [ℓ]_q-HM hash family. Moreover, the computation time of an [ℓ]_q-HM hash family is defined to be the time needed to output the ℓ × N matrix with entries in [q] whose (i, x)-th entry is simply h_i(x) (for h_i ∈ H).

Theorem 14. For all N, a, b ∈ N with b ≤ a, there is a deterministic algorithm A that computes an [N, a, b, ℓ]_2-HM hash family with

ℓ ≤ (αcab)^{b+1}, if a ≥ N^{1/c} for some constant c ∈ N,
ℓ ≤ (αab)^{b+2} · log N, if a = N^{o(1)} and b = Ω(log N),
ℓ ≤ (αab)^{b+2} · log N, if a ≤ log N,
ℓ ≤ (αab log N)^{b+1}, otherwise,

for some small universal constant α ∈ N. Moreover, the running time of A, denoted by T(A), is

T(A) = N^{1+o(1)} · ℓ, if a = N^{o(1)} and b = Ω(log N), and T(A) = N · (log N)^{O(1)} · ℓ otherwise.
Note that the above theorem significantly improves on the naive [$\binom{N}{\leq b}$]_2-HM hash family whenever ab ≪ N. Before we formally prove the above theorem, let us briefly outline our proof strategy. Our approach is to start from the naive [1]_N-HM hash family and first construct an [ℓ]_q-HM hash family (for some q, ℓ ∈ N) where we try to minimize the quantity $\binom{q}{\leq b}$ · ℓ (which is roughly q^b · ℓ). The reason for minimizing q^b · ℓ is that we show below how to start from an [ℓ]_q-HM hash family and trade off the size of the range of the hash functions for the size of the hash family, in order to obtain an ℓ · $\binom{q}{\leq b}$-sized Boolean HM hash family.

Lemma 15 (Alphabet reduction). Given an [N, a, b, ℓ]_q-HM hash family H, one can compute an [N, a, b, ℓ · $\binom{q}{\leq b}$]_2-HM hash family H′ in time O(q^b · T_H), where T_H is the time needed to compute H.

Proof. Given H := {h_i : [N] → [q] | i ∈ [ℓ]}, we define H′ := {h_{i,S} : [N] → {0, 1} | i ∈ [ℓ], S ⊆ [q], |S| ≤ b} as follows:
∀(i, S) ∈ [ℓ] × $\binom{[q]}{\leq b}$, ∀x ∈ [N]: h_{i,S}(x) = 0 if h_i(x) ∈ S, and h_{i,S}(x) = 1 otherwise.
It is clear that there are ℓ · $\binom{q}{\leq b}$ many hash functions in H′, and therefore in order to show that H′ is an [N, a, b, ℓ · $\binom{q}{\leq b}$]_2-HM hash family, it suffices to show that (1) holds. To see this, fix any two disjoint sets A, B ⊆ [N] such that |A| ≤ a and |B| ≤ b. Since H is an [N, a, b, ℓ]_q-HM hash family, there exists some i* ∈ [ℓ] such that

∀(x, y) ∈ A × B, we have h_{i*}(x) ≠ h_{i*}(y).     (2)

Consider the subset S* := {h_{i*}(y) | y ∈ B}. Clearly |S*| ≤ |B| ≤ b. Therefore we have that for every y ∈ B, h_{i*,S*}(y) = 0. On the other hand, from (2) we have that for all x ∈ A, h_{i*}(x) ∉ S*. Therefore, for every x ∈ A, h_{i*,S*}(x) = 1. Thus we have established (1). The computation time of H′ follows from noting that $\binom{q}{\leq b}$ ≤ (1 + q)^b.
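The alphabet reduction above is mechanical enough to spell out in a short sketch; the brute-force enumeration of all subsets S of size at most b below is only for illustration.

```python
from itertools import combinations

def alphabet_reduction(hash_matrix, q, b):
    """From hash functions h_i : [N] -> [q] (rows of hash_matrix) build the
    Boolean functions h_{i,S}, one per pair of an index i and a subset S of [q]
    with |S| <= b, where h_{i,S}(x) = 0 iff h_i(x) lies in S."""
    boolean_family = []
    for row in hash_matrix:
        for size in range(b + 1):
            for S in combinations(range(q), size):
                S = set(S)
                boolean_family.append([0 if v in S else 1 for v in row])
    return boolean_family
```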
As a simple demonstration of how we will use the above lemma, notice that if we combine the above lemma with the naive [1]_N-HM hash family, then we obtain the [$\binom{N}{\leq b}$]_2-HM hash family.
Following the proof strategy we mentioned before the statement of Lemma 15, we focus now on constructing non-trivial [ℓ]_q-HM hash families, with the goal of minimizing the quantity $\binom{q}{\leq b}$ · ℓ. As a warm up, we show below a simple construction that achieves very good parameters.

Lemma 16. Given integers N, a, b such that b ≤ a, there exists an [N, a, b, 1 + ab log N]_{O(ab(log N)^2)}-HM hash family.

Proof. Fix any two disjoint sets A, B ⊆ [N] with |A| ≤ a and |B| ≤ b, and let α_{A,B} := ∏_{(x,y)∈A×B} |y − x|. Note that since |y − x| ∈ [0, N] for every (x, y) ∈ A × B, we have that α_{A,B} ≤ N^{ab}. It is known that the product of the first m primes (the primorial function) is upper bounded by e^{m(1+o(1))}. Let α ∈ [1, α_{A,B}] be the number with the most prime factors. It is clear then that the number of prime factors of α is the largest m for which e^{m(1+o(1))} ≤ α ≤ N^{ab}. This implies m ≤ ab log N. Thus, α_{A,B} has at most ab log N distinct prime factors. Therefore, given any set of 1 + ab log N prime numbers there must exist a prime that does not divide α_{A,B}. On the other hand, note that for (x, y) ∈ A × B and a prime p, x ≡ y (mod p) implies that p divides α_{A,B}. Thus, among the first 1 + ab log N prime numbers there must exist a prime p for which x ≢ y (mod p) for all (x, y) ∈ A × B; the hash functions x ↦ x mod p over the first 1 + ab log N primes therefore form the claimed family, and these primes are bounded by O(ab(log N)^2).
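A toy sketch of this warm-up family follows (the choice of log base and the trial-division prime generator are illustrative, not the paper's code).

```python
import math

def first_primes(count):
    """Return the first `count` prime numbers (simple trial division)."""
    primes, candidate = [], 2
    while len(primes) < count:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def prime_mod_hash_family(N, a, b):
    """Hash functions x -> x mod p for the first 1 + a*b*log N primes; for any
    disjoint A, B (|A| <= a, |B| <= b) some prime separates every pair."""
    count = 1 + int(a * b * math.log2(N))
    return [(p, lambda x, p=p: x % p) for p in first_primes(count)]
```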
We remark the above proof strategy of using (modulo) prime numbers has been used many times in literature, for example [AN96]. Next, we show a systematic way to construct a HM hash family from error correcting codes and then use specific codes to improve on the parameters of the above lemma.
Proposition 17. Let N, a, b, ℓ ∈ N and δ ∈ [0, 1] be such that δ > 1 − 1/(ab). Then, every [log_q N, ℓ, δ]_q code can be seen as an [N, a, b, ℓ]_q-HM hash family.

Proof. Given a [log_q N, ℓ, δ]_q code C, where for every i ∈ [N], C(i) denotes the i-th codeword (under some canonical labeling of the codewords of C), we define the hash family H := {h_i : [N] → [q] | i ∈ [ℓ]} as follows: ∀i ∈ [ℓ], ∀x ∈ [N], h_i(x) = C(x)_i, where C(x)_i denotes the i-th coordinate of C(x). Fix any two disjoint sets A, B ⊆ [N] with |A| ≤ a and |B| ≤ b. For every (x, y) ∈ A × B we have

Pr_{i∼[ℓ]}[h_i(x) ≠ h_i(y)] = ∆(C(x), C(y)) ≥ δ.     (3)

By a simple union bound we have that

Pr_{i∼[ℓ]}[∀(x, y) ∈ A × B, h_i(x) ≠ h_i(y)] ≥ 1 − ab · (1 − δ).     (4)

Finally, (1) follows by noting that δ > 1 − 1/(ab).
By a direct application of the parameters of Reed-Solomon codes (Theorem 10) to the above proposition we obtain the following.

Corollary 18. Given integers N, a, b such that b ≤ a, there exists an [N, a, b, O(ab log N / log a)]_{O(ab log N / log a)}-HM hash family.

Proof. Let q be the smallest prime greater than ab log N / log a (note that q ∈ (ab log N / log a, 2ab log N / log a)). Let C be the [log_q N, q, 1 − (log_q N − 1)/q]_q Reed-Solomon code guaranteed by Theorem 10. From Proposition 17 we can think of C as an [N, a, b, q]_q-HM hash family since

∆(C) ≥ 1 − (log N)/(q log q) > 1 − (log N / log a)/(ab log N / log a) = 1 − 1/(ab).

By noting that q < 2ab log N / log a, we may say that C is an [N, a, b, O(ab log N / log a)]_{O(ab log N / log a)}-HM hash family.

It is known that the generator matrix of the Reed-Solomon codes mentioned in Theorem 10 can be constructed in time near-linear in the size of the generator matrix [RS60]. Once we are given the generator matrix of C, outputting any codeword can be done in O(q log log N) time using the Fast Fourier Transform. Therefore the computation of the corresponding HM hash family can be done in time O(qN log log N) = O(abN log N log log N).
In fact, we obtain an [N, a, b, 1 + ab log N/(log a + log b + log log N)]_{1 + ab log N/(log a + log b + log log N)}-HM hash family from Reed-Solomon codes, but chose to write a less cumbersome version in the corollary statement. Note that while the sizes of the hash families of Lemma 16 and the above corollary are the same when a = N^{o(1)}, even in that case we save a log N factor in the alphabet size of the hash functions.
In order to explore further savings in the alphabet size of the hash functions, we apply the parameters of Algebraic-Geometric codes (Theorem 11) to Proposition 17 and obtain the following.

Corollary 19. Given integers N, a, b such that b ≤ a, there exists an [N, a, b, O(ab log N / log a)]_{O(a^2 b^2)}-HM hash family.

Proof. Let p be the smallest prime greater than 3ab (note that p ∈ (3ab, 6ab)) and let q = p^2. Let C be the [log_q N, √q · log_q N, 1 − 3/√q]_q code guaranteed by Theorem 11. From Proposition 17 we can think of C as an [N, a, b, √q · log_q N]_q-HM hash family since

∆(C) = 1 − 3/p > 1 − 1/(ab).

By noting that q ≤ 36a^2 b^2, we may say that C is an [N, a, b, O(ab log N / log a)]_{O(a^2 b^2)}-HM hash family.

It is known that the generator matrix of the Algebraic-Geometric codes mentioned in Theorem 11 can be constructed in time near-cubic in the block length of the code [SAK+01]. Therefore the computation of the corresponding HM hash family can be done in time O((ab log N)^3 + N ab log^3 N).
However, these parameters are worse than the parameters of Corollary 18 whenever ab ≳ log N. We construct below a specific code concatenation of Reed-Solomon codes and Algebraic-Geometric codes that does indeed improve on the parameters of Corollary 18 for the setting when a, b are not too small.
Lemma 20. Let p be a prime square greater than or equal to 49, and let q := p^c for any c ∈ N. Then for every k ∈ N, there exists a [k, k · q, 1 − 4/√q]_{√q} code.

Proof. We concatenate the [k, k · √q, 1 − 3/√q]_q code from Theorem 11 (treated as the outer code) with the [2, √q, 1 − 1/√q]_{√q} code from Theorem 10 (treated as the inner code). From Fact 12, this gives us the desired code.
It is worth noting that while concatenation codes obtained by combining Reed-Solomon codes and Algebraic-Geometric codes have appeared many times in literature, to the best of our knowledge, this is the first time that Algebraic-Geometric codes are the outer code and Reed-Solomon codes are the inner code (as Algebraic-Geometric codes are typically used for their small alphabet size).
An immediate corollary of Proposition 17 and Lemma 20 is the following.

Corollary 21. Given integers N, a, b such that b ≤ a, there exists an [N, a, b, O(a^2 b^2 log N / log(ab))]_{O(ab)}-HM hash family.

Proof. Let p be the smallest prime greater than 4ab (note that p ∈ (4ab, 8ab)). Let q := p^2 and let C be the [log_q N, q · log_q N, 1 − 4/p]_p code guaranteed by Lemma 20. From Proposition 17 we can think of C as an [N, a, b, q · log_q N]_p-HM hash family since

∆(C) = 1 − 4/p > 1 − 1/(ab).

By noting that p ≤ 8ab, we may say that C is an [N, a, b, O(a^2 b^2 log N / log(ab))]_{O(ab)}-HM hash family.
Combining the hash family of Corollary 21 with the alphabet reduction of Lemma 15, the resulting Boolean HM hash family can be computed in time

(βab)^b · O(N (ab log N)^3) = O(N · log N · (βab)^{b+2} · (ab · (log N)^2)),

for some constant β. Notice that if a ≤ log N then the expression (ab · (log N)^2) is O(log^4 N). Otherwise, if a = N^{o(1)} then the expression (ab · (log N)^2) is still N^{o(1)}. In every other case, consider the [N, a, b, O(ab log N / log a)]_{O(ab log N / log a)}-HM hash family of Corollary 18; applying Lemma 15 to it yields a Boolean HM hash family computable in time

O((βab log N)^b · N ab · (log N)^2) = O(N · log N · (βab log N)^{b+1}).
In order to facilitate the applications in the next section we introduce the notation HM 2 (C) to denote the following: given a code C, we first interpret it as a HM hash family in accordance with Proposition 17 and then apply Lemma 15 to this hash family to obtain a Boolean HM hash family, denoted by HM 2 (C).
Optimality of the Reed-Solomon based HM hash family. We digress for a short discussion on the optimality of the parameters of the HM hash family constructed from Reed-Solomon codes. There are two reasons why one might suspect that the parameters of Corollary 18 can be improved. First is the union bound applied in (4). Second is the bounding of the number of disagreements between two codewords by the relative distance in (3). It seems intuitively unreasonable that there exist two subsets of codewords, say A and B, such that for every pair of codewords in A × B there is a unique set of coordinates on which they agree. Additionally, the expected fraction of disagreements between any two Reed-Solomon codewords is 1 − 1/q, and bounding it instead by the relative distance, particularly when a union bound is later taken in (4), raises the concern that the analysis has slack. Therefore we ask:
Open Question 1. Let a, b, d ∈ N. What is the smallest prime q such that the following holds? For every two disjoint subsets of degree d polynomials over F q , denoted by A and B, where |A| = a and |B| = b, we have that there exists some α ∈ F q such that no pair of polynomials in A × B evaluate to the same value at α.
Clearly, from Proposition 17, we have that if q is at least dab + 1 then it suffices. But can we get away with a smaller value of q?
Perfect Hash Families. We conclude the discussion on HM hash families by noting the connection between HM hash families and the notion of perfect hash families, which has received considerable attention in the literature (for example, see [FK84, FKS84, SS90, AAB+92, Nil94, AYZ95, NSS95, AN96, FN01, AG10]). If we replace (1) in Definition 13 with

∀x, y ∈ S with x ≠ y, we have h_i(x) ≠ h_i(y),     (5)

where S ⊆ [N], then it coincides with the notion of perfect hash families. In other words, HM hash families can be seen as a bichromatic variant of perfect hash families. Indeed, a connection between error correcting codes and perfect hash families (much like Proposition 17) was already known in the literature [Alo86]. We also remark that a construction of perfect hash families based on AG codes was also known in the literature [WX01], but to the best of our knowledge, the construction of hash families based on the concatenated AG codes (with the specific parameters of Lemma 20) is a novel contribution of this paper.
Additionally, one may see the randomized construction of RPC in Lemma 7 as coloring each edge with a random color in [L] if L ≥ f (resp. in [f] if f ≥ L) and then randomly choosing one of the colors in [L] (resp. [f]) and deleting (resp. retaining) all the edges corresponding to that color. The randomized procedure stated in this way is very closely related to the celebrated color coding technique [AYZ95], and a well-known way to derandomize the color coding technique is via perfect hash functions. However, using the derandomization objects developed for color coding yields HM hash families with suboptimal parameters, as they do not use the product structure of the constraints given in the definition of HM hash families. Consequently, they lead to worse constructions than the ones we give in this paper (to see this, set a ≫ b and note that ab ≪ (a + b)^2). The use of k-restriction sets [AMS06] also yields HM hash families with suboptimal parameters for the same reason.
Strong Hit and Miss Hash Families
In order to have certain applications, we introduce the following strengthening of Definition 13.

Definition 22 (Strong Hit and Miss Hash Family). For every N, a, b, ℓ, q ∈ N such that b ≤ a, we say that H := {h_i : [N] → [q] | i ∈ [ℓ]} is an [N, a, b, ℓ]_q-Strong HM hash family if for every two disjoint sets A, B ⊆ [N] with |A| ≤ a and |B| ≤ b we have

Pr_{i∼[ℓ]}[∀(x, y) ∈ A × B, h_i(x) ≠ h_i(y)] ≥ 1/2.     (6)

Theorem 23. The bounds of Theorem 14 (up to the value of the universal constant α) hold also for computing Strong HM hash families.

Proof Sketch. The proof follows by noting the following. First, Proposition 17 can be strengthened to say that if δ ≥ 1 − 1/(2ab) then every [log_q N, ℓ, δ]_q code can be seen as an [N, a, b, ℓ]_q-Strong HM hash family. Second, Corollaries 18, 19, and 21 can be modified to yield Strong HM hash families (instead of just HM hash families) by simply choosing the alphabet size of the underlying code currently in the proofs to be at least twice (for Reed-Solomon codes) or four times (for AG codes concatenated with Reed-Solomon codes) as large as what is currently written.
In order to facilitate the applications in the next section we introduce the notation SHM 2 (C) to denote the following: given a code C, we first interpret it as a Strong HM hash family and then apply Lemma 15 to this hash family to obtain a Boolean Strong HM hash family, denoted by SHM 2 (C).
(L, f )-Replacement Path Covering
Equipped with the construction of Boolean Hit and Miss hash families from the previous section, we show in this section how to use them in order to efficiently construct RPCs.

Proposition 24. Let G be a graph with m edges, let L, f be integer parameters, and let H = {h_i : [m] → {0, 1} | i ∈ [ℓ′]} be an [m, max{L, f}, min{L, f}, ℓ′]_2-HM hash family over the edge labels of G. For every i ∈ [ℓ′], let G_{i,0} (resp. G_{i,1}) be the subgraph of G containing exactly the edges e with h_i(e) = 0 (resp. h_i(e) = 1). Then G^H_{L,f} := {G_{i,0}, G_{i,1} | i ∈ [ℓ′]} is an (L, f)-RPC with CV(G^H_{L,f}) = 2ℓ′, and it can be computed in time O(ℓ′ · m) given the ℓ′ × m matrix representation of H.

Theorem 2 then follows by providing Proposition 24 with the Boolean HM hash families obtained above, with N = m, a = max{L, f} and b = min{L, f}.

Remark 25. For all the applications in this paper, we never use the construction of the (L, f)-RPC given in Theorem 2 when a = m^{o(1)} and b = Ω(log m) (mainly because it has a prohibitive running time), and the result for that regime is merely of interest for bounding the covering number.
Useful properties of the (L, f)-RPC when L ≥ f. A crucial property of the (L, f)-RPC that is needed for the applications in the following sections is that for every fixed set of faults F there will be only a very small set of subgraphs in the covering set G_{L,f} that avoid F. As we see below, the construction of the (L, f)-RPC of Theorem 2 gives this additional property for free.

Theorem 26. Let L ≥ f. Then one can compute an (L, f)-RPC G_{L,f} with the same CV and time bounds as in Theorem 2 that in addition satisfies the following property. Let F be a set of d ≤ f edge failures. Then, there exists a collection G_F of at most fL · polylog(m) subgraphs in G_{L,f} that satisfy the following:

• Every subgraph in G_F does not contain any of the edges in F.

• For every vertex pair (s, t) and every P(s, t, F) path of length at most L, there exists a subgraph G′ ∈ G_F that contains P(s, t, F).

Finally, given F and G_{L,f}, one can detect the subgraphs in G_F in time fdL · polylog(m).
The proof of the above theorem follows by the more general statement below about code based constructions of HM hash family, and applying to it the parameters of specific codes.
Lemma 27. Given a graph G on m edges, integer parameters L, f, q, ℓ, and a [log_q m, ℓ, δ]_q code C with relative distance δ > 1 − 1/(Lf), the (L, f)-RPC G_{L,f} given by Proposition 24 on providing HM_2(C) has the following property. Let F be a set of d ≤ f edge failures. Then, there exists a collection G_F of at most ℓ subgraphs in G_{L,f} that satisfy the following:
• Every subgraph in G F does not contain any of the edges in F .
• For every vertex pair (s, t) and every P (s, t, F ) path of length at most L, there exists a subgraph G ∈ G F that contains P (s, t, F ).
Moreover, given F and G_{L,f}, one can detect the subgraphs in G_F in time O(d · (ℓ + en(C))), where en(C) is the time needed to encode a message using C.
Proof. For every i ∈ [ℓ] let S_i ⊆ [q] be defined as

S_i := {C(r_j)_i | j ∈ [d]},

where F = {e_{r_1}, …, e_{r_d}}, and C(r_j)_i is the i-th coordinate of the r_j-th codeword of C. For every i ∈ [ℓ] we include the subgraph G_i in G_F if and only if the only edges of G removed in G_i are the ones whose i-th codeword coordinate lies in S_i. It is clear that |G_F| by definition is at most ℓ. Moreover, the computation time of the indices of the graphs in G_F is O(d · (ℓ + en(C))), as once we encode the d edges of F using C, we can specify the indices of the subgraphs in G_F explicitly as defined above.

To note that G_F is a subset of G_{L,f}, notice that for every i ∈ [ℓ] and every S_i as defined above, we have in HM_2(C) a hash function h : [m] → {0, 1} which maps to 0 exactly those edges (labels of edges) whose corresponding codeword on the i-th coordinate is contained in S_i (see the proof of Lemma 15 to verify this). Then, when HM_2(C) is provided to Proposition 24, the graph G_{i,1} in G^{HM_2(C)}_{L,f} in the proof of Proposition 24 is precisely the graph G_i in G_F.
All that is left to show are the structural properties of G_F. By the definition of G_i, it is clear that all the edges in F are removed in each G_i. Furthermore, for every vertex pair (s, t) ∈ V(G) × V(G) and every replacement path P(s, t, F) = {e_{j_1}, …, e_{j_t}} with at most L edges, we have from (1) that there is some i* ∈ [ℓ] such that for all κ ∈ [t], C(j_κ)_{i*} ∉ S_{i*} (i.e., we apply Proposition 17 on C to obtain an HM hash family and use (1) with A = {j_1, …, j_t} and B = {r_1, …, r_d}). Hence the path P(s, t, F) is fully contained in the subgraph G_{i*} ∈ G_F, as required.
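A small sketch of the detection step in this proof follows; codeword_of is an assumed helper that returns the codeword of an edge label, and the pair (i, S_i) is used as the identifier of the selected subgraph.

```python
def subgraphs_avoiding_faults(fault_edge_ids, codeword_of, ell):
    """For each coordinate i, collect the symbols S_i taken by the codewords of
    the faulty edges; the i-th selected subgraph removes exactly the edges whose
    codeword has its i-th symbol in S_i, hence it avoids all of F."""
    selected = []
    for i in range(ell):
        S_i = {codeword_of(e)[i] for e in fault_edge_ids}
        selected.append((i, frozenset(S_i)))   # the index (i, S_i) identifies G_i
    return selected
```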
Useful properties of the (L, f)-RPC when L ≤ f. Parts of the next theorem may morally be seen as the analog of Theorem 26, only that for the setting of L ≤ f, we bound the number of subgraphs that fully contain a given path segment with at most L edges.

Theorem 28. Let L ≤ f. Then one can compute an (L, f)-RPC G_{L,f} with the same CV and time bounds as in Theorem 2 that in addition satisfies the following property. Let P be a replacement path segment of at most L edges. Then, there exists a collection G_P of subgraphs in G_{L,f} that satisfy the following:
(I1) |G P | = f L · polylog(m).
(I2) Given P and G L,f , one can detect the subgraphs in G P in time f dL · polylog(m).
(I3) Every subgraph in G P fully contains P .
(I4) For every set F ⊆ E of at most f edges, there are at least |G P |/2 subgraphs in G P that fully avoid F .
(I5) Every subgraph in G_{L,f} has at most O(m/f) many edges.

(I6) Computing the subset of edges in each G_i ∈ G_{L,f} takes Õ(m/f) time.

Additionally, (I5) and (I6), when applied to the vertex variant RPC G^v_{L,f} over a graph G on n vertices with vertex fault parameter f, yield the following: (I5v) every subgraph has at most O(n/f) many vertices, and (I6v) computing the subset of vertices in each G_i ∈ G^v_{L,f} takes Õ(n/f) time.
The proofs of (I1) to (I4) of the above theorem follow by the more general statement below about code based constructions of Strong HM hash family, and applying to it the parameters of specific codes. The proofs of (I5) and (I6) follows by a nice property of linear codes.
Lemma 29. Given a graph G on m edges, integer parameters L, f, q, ℓ, and a [log_q m, ℓ, δ]_q code C with relative distance δ > 1 − 1/(2Lf), the (L, f)-RPC G_{L,f} given by Proposition 24 on providing SHM_2(C) has the following property. Let P be a replacement path segment of d ≤ L edges. Then, there exists a collection G_P of at most ℓ subgraphs in G_{L,f} that satisfy the following:
• Every subgraph in G P fully contains P .
• For every set F ⊆ E of at most f edges, there are at least |G P |/2 subgraphs in G P that fully avoid F .
Moreover, given P and G_{L,f}, one can detect the subgraphs in G_P in time O(d · (ℓ + en(C))), where en(C) is the time needed to encode a message using C.
Proof. For every i ∈ [ℓ] let S_i ⊆ [q] be defined as

S_i := {C(r_j)_i | j ∈ [d]},

where P = {e_{r_1}, …, e_{r_d}}. For every i ∈ [ℓ] we include the subgraph G_i in G_P if and only if the only edges of G preserved in G_i are the ones whose i-th codeword coordinate lies in S_i. It is clear that |G_P| by definition is at most ℓ. Moreover, the computation time of the indices of the graphs in G_P is O(d · (ℓ + en(C))), as once we encode the d edges of P using C, we can specify the indices of the subgraphs in G_P explicitly as defined above.
To note that G_P is a subset of G_{L,f}, notice that for every i ∈ [ℓ] and every S_i as defined above, we have in SHM_2(C) a hash function h : [m] → {0, 1} which maps to 0 exactly those edges (labels of edges) whose corresponding codeword on the i-th coordinate is contained in S_i (see the proof of Lemma 15 to verify this). Then, when SHM_2(C) is provided to Proposition 24, the graph G_{i,0} in G^{SHM_2(C)}_{L,f} in the proof of Proposition 24 is precisely the graph G_i in G_P.
All that is left to show are the structural properties of G P . By definition of G i , it is clear that all the edges in P are preserved in each G i . Furthermore, for every set F := {e j 1 , . . . , e jt } ⊆ E of at most f edges, we have from (6) that
Pr_{i∼[ℓ]}[∀(x, y) ∈ [d] × [t], C(r_x)_i ≠ C(j_y)_i] ≥ 1/2.
Therefore all the edges of F are avoided in at least half the graphs in G P .
Proof of Theorem 28. Since we have L ≤ f , the bounds in Theorem 2 follow here as well with setting a = f and b = L, while we avoid the case when b = Ω(log m) in order to get the right bounds (we consider this case to be covered by the 'otherwise' case construction in Theorem 2). In order to see that (I1) to (I4) holds, we only need to verify that for the Reed Solomon code C RS and the concatenated code C AG•RS (from Lemma 20) when we plug in SHM 2 (C RS ) and SHM 2 (C AG•RS ) respectively into Lemma 29, that the parameters are as claimed in the theorem statement.
The block lengths of C_RS and C_{AG•RS} are bounded as in Corollaries 18 and 21, respectively. Plugging the bounds on the block lengths of the two codes into Lemma 29 gives (I1) to (I4) in the theorem statement. Note that the encoding time of C_RS is ℓ · polylog(m) = Lf · polylog(m), while the encoding time of a codeword of C_{AG•RS} is O(ℓ^3); since L ≤ f ≤ log m, we have that the encoding time of C_{AG•RS} is also polylog(m).
Thus we now look towards proving (I5) and (I6). Notice that since L ≤ f, and the Boolean HM hash family provided to Proposition 24 in the proof of Theorem 2 arises from the alphabet reduction of Lemma 15, we know that we can even exclude all the subgraphs G_{i,1} (for all indices i) in the proof of Proposition 24, and only keep half as many subgraphs in G_{L,f}. We will use this simplification later in this proof.
In order to see that every subgraph in G_{L,f} has at most O(m/f) many edges (i.e., (I5)), we only need to verify that the Reed-Solomon code C_RS and the concatenated code C_{AG•RS} are 1-wise independent: in that case, for every coordinate, each symbol of [q] is attained by at most m/q of the m codewords, and hence a subgraph that retains only the edges whose coordinate lies in a set S of size at most L contains at most mL/q = O(m/f) edges (since q ≥ Lf for both codes).

We now return our focus to showing that C_RS and C_{AG•RS} are 1-wise independent. In fact we will show a stronger statement: every linear code C (with non-zero generator-matrix rows) is 1-wise independent. Let A be the generator matrix of C, with non-zero rows a_1, …, a_ℓ, and let y be a uniformly random message vector. We now rewrite (Ay)_i as ⟨a_i, y⟩, and since a_i is not the zero vector the claim follows (by even just a simple induction argument on the dimension).
Now we show (I6). Fix some G_i in G_{L,f}. By the construction of G_{L,f} we may interpret the index i as some (j, S) ∈ [ℓ] × $\binom{[q]}{\leq L}$ such that the edge e_x in G is retained in G_i if and only if C(x)_j ∈ S. Let A_C := (a_1, …, a_ℓ) be the generator matrix of C. We can determine the subset T of [q]^{log_q m} defined as follows:

T := {x ∈ [q]^{log_q m} | ⟨a_j, x⟩ ∈ S}.

Then interpreting T as a subset of [m] simply gives us the edge set of G_i. To compute T efficiently, we first compute, for every r ∈ [q]^{(log_q m)−1} and every z ∈ S, the value

α := (z − Σ_{w=1}^{(log_q m)−1} a_j(w) · r_w) · (a_j(log_q m))^{−1}.

Then we include the vector (r, α) ∈ [q]^{log_q m} into T. Thus T can be computed in time Õ(m|S|/q) = Õ(mL/q). And as before, if C = C_RS then q ≥ Lf log m, and thus Õ(mL/q) = Õ(m/(f log m)); and if C = C_{AG•RS} then q ≥ Lf, and thus Õ(mL/q) = Õ(m/f). This proves (I6).
Lower Bounds for (L, f )-Replacement Path Covering
In this section we provide a lower bound construction for the covering value of (L, f)-RPCs and establish Theorem 3. Our lower bound graph is based on a modification of the graph construction used to obtain a lower bound on the size of f-failure FT-BFS structures, defined as follows. For a graph G = (V, E) and a source vertex s ∈ V, a subgraph H ⊆ G is an f-failure FT-BFS structure with respect to s if dist(s, t, H \ F) = dist(s, t, G \ F) for every t ∈ V, F ⊆ E(G), |F| ≤ f.
FT-BFS structures were introduced by the second author and Peleg [PP16] for a single (edge or vertex) failure. It was shown that for any unweighted n-vertex graph and any source node s, one can compute a 1-failure FT-BFS subgraph with O(n^{3/2}) edges. This was complemented by a matching lower bound graph. In [Par15], the lower bound graph construction was extended to any number of faults f, and it serves as the basis for our (L, f)-RPC lower bound argument.

Fact 31. [Par15] For large enough n and f ≥ 1, there exists an n-vertex graph G*_f and a source vertex s such that any f-failure FT-BFS structure with respect to s has Ω(n^{2−1/(f+1)}) edges.
In the high level, the lower bound graph G*_f consists of a dense bipartite subgraph B with Ω(n^{2−1/(f+1)}) edges, and a collection of {s} × V paths that serve as replacement paths from s to all other vertices in G. The collection of paths is defined in a careful manner, in a way that forces any f-failure FT-BFS for s to include all the edges of the bipartite graph B. To translate this construction into one that yields an (L, f)-RPC of large CV, our key idea is to shortcut the edge-length of the {s} × V replacement paths of G*_f by means of introducing weights to the edges. As a result, we get a weighted graph G^w_f all of whose {s} × V replacement paths have at most L edges, for any given parameter L ≤ (n/f)^{1/(f+1)}. By setting the weights carefully, one can show that any f-failure FT-BFS for the designated source s must have Ω(L^f · n) edges. To complement the argument, consider the optimal (L, f)-RPC G of minimal value for G^w_f. Since all the {s} × V paths are of length at most L, the replacement paths are resiliently covered by G. This yields the following simple construction of an f-failure FT-BFS H ⊆ G: compute a shortest-path tree in each subgraph G′ ∈ G, and take the union of these trees as the output subgraph H. Since this construction yields an f-failure FT-BFS with O(|G| · n) edges, we conclude that |G| = Ω(L^f). We next explain this construction in detail.
In the next description, we use the notation of [Par15] and introduce several key adaptations along the way. Our lower bound graph G^w_f, similarly to Fact 31, is based on a graph G_f(d) which is defined inductively. Note that whereas in [Par15] the graph G_f(d) is unweighted, for our purposes (making all replacement paths short in terms of number of edges) some edges will be given weights. In the base case, the graph G_1(d) consists of a path P_1 = [u^1_1, …, u^1_d] and, for every i ∈ [d], a path Q^1_i attached to u^1_i whose other endpoint is the leaf z_i; the leaf set is Leaf(G_1(d)) = Z = {z_1, …, z_d}. Each leaf node z_i ∈ Leaf(G_1(d)) is assigned a label based on a labeling function Label_1 : Leaf(G_1(d)) → E(G_1(d)). The label of the leaf corresponds to a set of edge faults under which the path from the root to the leaf is still maintained. Specifically, Label_1(z_i, G_1(d)) = (u^1_i, u^1_{i+1}) for i ≤ d − 1 and Label_1(z_d, G_1(d)) = ∅. In addition, define P(z_i, G_1(d)) = P_1[r(G_1(d)), u^1_i] • Q^1_i to be the path from the root u^1_1 to the leaf z_i. We next describe the inductive construction of the graph
G_f(d) = (V_f, E_f), for every f ≥ 2, given the graph G_{f−1}(d) = (V_{f−1}, E_{f−1}). The graph G_f(d) consists of a path P_f = [u^f_1, …, u^f_d] and d copies G_1, …, G_d of the graph G_{f−1}(d); each vertex u^f_i is connected to the root of the i-th copy G_i via an edge e^f_i of weight w(e^f_i) = (d − i) · Depth(G_{f−1}(d)).
In the construction of [Par15], each edge e^f_i is replaced by a path Q^f_i of length w(e^f_i). This is the only distinction compared to [Par15]. Note that by replacing a path Q^f_i by a single edge e^f_i of weight |Q^f_i|, the weighted length of the replacement paths is preserved, but their length in terms of number of edges is considerably shorter. The leaf set of the graph G_f(d) is the union of the leaf sets of the G_j's, Leaf(G_f(d)) = ∪_{j=1}^d Leaf(G_j). See Fig. 1 for an illustration of the special case of f = 2.
Finally, it remains to define the labels Label f (z i ) for each z i ∈ Leaf(G f (d)). For every j ∈ {1, . . . , d−1} and any leaf z j ∈ Leaf(G j ), let
Label f (z j , G f (d)) = (u f j , u f j+1 )•Label f −1 (z j , G j )
Denote the size (number of nodes) of G_f(d) by N(f, d), its depth (maximal weighted distance between two nodes) by Depth(f, d), and its number of leaves by nLeaf(f, d) = |Leaf(G_f(d))|. Note that for f = 1, N(1, d) = 2d + Σ_{i=1}^d (4 + 2·(d − i)) ≤ 7d^2, Depth(1, d) = 6 + 2(d − 1) (corresponding to the length of the path Q^1_1), and nLeaf(1, d) = d. Since in our construction we only shortcut the length of the paths, the following inductive relations hold as in [Par15].
Observation 32 (Observation 4.2 of [Par15]). (a) Depth(f, d) = O(d^f). (b) nLeaf(f, d) = d^f. (c) N(f, d) = c · d^{f+1} for some constant c.
Consider the set of λ = nLeaf(f, d) leaves in G_f(d), Leaf(G_f(d)) = ∪_{i=1}^d Leaf(G_i) = {z_1, …, z_λ}, ordered from left to right according to their appearance in G_f(d).
Lemma 33 (Slight modification of Lemma 4.3 of [Par15]). For every z j it holds that:
(1) The path P (z j , G f (d)) is the only u f 1 − z j path in G f (d). (2) P (z j , G f (d)) ⊆ G \ Label f (z j , G f (d)). (3) P (z i , G f (d)) ⊆ G \ Label f (z j , G f (d)) for every i > j.
(4) w(P (z i , G f (d))) > w(P (z j , G f (d))) for every i < j.
In Lemma 4.3 of [Par15], the fourth claim discusses the length of the paths P(z_i, G_f(d)). In our case, since we shortcut the path by introducing an edge weight that equals the length of the removed sub-path, the same claim holds only for the weighted length of the path. We next show that, thanks to our modifications, the hop-diameter (i.e., measured in number of edges) of G_f(d) is bounded, and consequently, all {s} × V replacement paths are short.
Claim 34. The hop-diameter of G f (d) is O(f · d).
Proof. The claim is shown by induction on
f . For f = 1, the hop-diameter of G 1 (d) is |P 1 | = d.
Assume that the claim holds up to f − 1 and that the hop-diameter of
G f −1 (d) is at most (f − 1)d. The graph G f (d) is then connected to G f −1 (d) via the path P f = [u f 1 , . . . , u f d ] of hop-length d. Each u f i is connected to the root of the ith copy of G f −1 (d) via an edge. Thus the hop-diameter of G f (d) is at most f · d.
Finally, we turn to describe the graph G w f which establishes our lower bound. The graph G w f consists of three components. The first is the modified weighted graph
G_f(d) for d ≤ (n/2c)^{1/(f+1)}, where c is some constant to be determined later.

[Figure 1: An illustration of the graph G_2(d), in comparison to the construction of [Par15]: each red line in [Par15] corresponds to a path, which in our construction is replaced by a weighted edge whose weight equals the length of that path. As a result, the weights of all replacement paths are preserved, but their length in edges is bounded by O(fd).]

By Obs. 32, n/2 ≤ |V(G_f(d))|. Note that d ≤ (5/4)^{1/(f+1)} · (n/2c)^{1/(f+1)} = (5n/8c)^{1/(f+1)} for sufficiently large n, hence N(f, d) = c · d^{f+1} ≤ 5n/8. The second component of G^w_f is
6 Derandomization of the Algebraic DSO by Weimann and Yuster

In this section, we prove Theorem 4 by providing a derandomization of the algebraic construction of the distance sensitivity oracle of [WY13]. This construction has sub-cubic preprocessing time and sub-quadratic query time. We will use the following lemma from [ACC19].

Lemma 36 (Lemma 2 of [ACC19]). Let D_1, D_2, …, D_q ⊆ V satisfy that |D_i| > L for every 1 ≤ i ≤ q, and |V| = n. One can deterministically find in O(q · L) time a set R ⊂ V such that |R| = O(n log n/L) and D_i ∩ R ≠ ∅ for every 1 ≤ i ≤ q.
We start by providing a short overview of the randomized algebraic construction of [WY13]. As we will see, despite the fact that the query algorithm of [WY13] is in fact deterministic, due to the derandomization of the preprocessing part, the query algorithm will be similar to that of [ACC19]. Following [ACC19], it will be convenient to set ε = 1 − α. Throughout, we describe the construction for 0 < ε < 1, f = O(log n / log log n) and a bound L = n^{ε/f}. We need the following definition.
Definition 37 (Long and Short (s, t, F )).
A triplet (s, t, F ) ∈ V ×V ×E(G) f is L-short if d L (s, t, G\ F ) = dist(s, t, G \ F ).
That is, there exists a P(s, t, F) replacement path with at most L edges in G. Otherwise, (s, t, F) is L-long. When L is clear from the context, we may omit it and write short (or long) (s, t, F). For a query (s, t, F), the query algorithm first computes a collection of O(f log n) graphs G_F ⊆ G_{L,f} that avoid all edges of F. For an L-short query, the distance dist(s, t, G \ F) is obtained by taking the minimum s-t distance over all subgraphs G′ ∈ G_F. To support L-long queries (s, t, F), the algorithm uses the matrices A_j (or the matrix pairs D_j, B_j) to compute a dense graph G_F with vertex set V(G_F) = R ∪ {s, t}. The edge weight of (x, y) for every x, y ∈ V(G_F) is set to be the minimum x-y distance over all the subgraphs in G_F. The answer to the (s, t, F) query is obtained by computing the s-t distance in G_F. In the preprocessing variant that computes the A_j matrices, the query algorithm takes O(n^{2−2ε/f}) time. In the variant that computes the B_j, D_j matrices, the query time is O(n^{2−ε/f}). In the following subsections, we explain how to derandomize the preprocessing algorithm and combine it with the modified query algorithm of [ACC19].
The structure of the remainder of the section is as follows. In Sec. 6.1, we present an improved construction of a structure called fault-tolerant trees. Then, in Subsec. 6.2, we provide a complete description of the preprocessing and query time algorithms, both of which will be based on the construction of the FT-trees.
Algebraic Construction of Fault-Tolerant Trees
For a given vertex pair s, t, the FT-tree FT_{L,f}(s, t) consists of O(L^f) nodes. Each node is labeled by a pair ⟨P, F⟩ where P is an s-t path in G \ F with at most L edges, and F is a sequence of at most f faults which P avoids. [ACC19] described a construction of the FT-trees FT_{L,f}(s, t) for every pair s, t and used it to implement the combinatorial DSO of [WY13]. The computation time of the FT-trees algorithm of [ACC19] is O(m · n · L^{f+1}), which is too costly for our purposes (e.g., the implementation of the algebraic DSO of [WY13]).
Defining FT-Trees. Fix a pair s, t ∈ V . For every i ∈ {0, . . . , f }, and every sequence of faults F ⊆ E, |F | ≤ f − i, the tree FT L,i (s, t, F ) is defined in an inductive manner. Throughout, the paths P L (s, t, F ) refer to some shortest s-t path in G \ F with at most L edges. If there are several such paths, the algorithm picks one as will be described later.
Base case: The tree FT L,0 (s, t, F ) for every F ⊆ E and |F | ≤ f is defined as follows. If d L (s, t, G \ F ) = ∞ (i.e., there is no s-t path with at most L edges in G \ F ), then FT L,0 (s, t, F ) is empty. Otherwise, FT L,0 (s, t, F ) consists of a single node (root node) labeled by P L (s, t, F ), F . This root node is associated with a binary search tree which stores the edges of the path P L (s, t, F ).
Inductive step: Assume the construction of FT L,j (s, t, F ) for every j up to i, and every F ⊆ E, |F | ≤ f − j. The tree FT L,i+1 (s, t, F ) is defined as follows for every set F of f − (i + 1) faults in E. If d L (s, t, G \ F ) = ∞, then FT L,i+1 (s, t, F ) is empty. Assume from now on that d L (s, t, G \ F ) < ∞. The root node r of FT L,i+1 (s, t, F ) is labeled by P L (s, t, F ), F , and the edges of P L (s, t, F ) are stored in a binary search tree. This root node is connected to the roots of the trees FT L,i (s, t, F ∪{a j }) for every a j ∈ P L (s, t, F ) satisfying that d L (s, t, G\(F ∪{a j })) < ∞. Letting, r j be the root node FT L,i (s, t, F ∪ {a j }) (if such exists), we have:
FT L,i+1 (s, t, F ) = {FT L,i (s, t, F ∪{a j })∪{(r, r j )} | a j ∈ P L (s, t, F ), d L (s, t, G\(F ∪{a j })) < ∞} . For i = f , we abbreviate FT L,f (s, t, ∅) = FT L,f (s, t).
Observation 38. Each tree FT_{L,f}(s, t) has at most L^f nodes (in the case of vertex faults, it has at most (L + 1)^f nodes). Proof. The depth of the tree FT_{L,f}(s, t) is at most f. For the case of edge faults, each node in FT_{L,f}(s, t) has at most L children, as each node is labeled by a path of at most L edges. In the case of vertex faults, a path of at most L edges has L + 1 vertices.
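The recursive definition above can be summarized in a short sketch; restricted_shortest_path is an assumed helper standing in for the d_L/P_L computation, and the dictionary node layout is purely illustrative.

```python
def build_ft_tree(s, t, F, depth, restricted_shortest_path):
    """Each node stores <path, F>; a child is created for every edge of the
    stored path, with that edge added to the fault set, up to depth f."""
    path = restricted_shortest_path(s, t, F)    # shortest s-t path, <= L edges, avoiding F
    if path is None:                            # d_L(s, t, G \ F) = infinity
        return None
    node = {"path": path, "faults": frozenset(F), "children": {}}
    if depth == 0:
        return node
    for edge in path:
        child = build_ft_tree(s, t, F | {edge}, depth - 1, restricted_shortest_path)
        if child is not None:
            node["children"][edge] = child
    return node
```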
Algebraic Construction of FT-Trees. We now turn to provide a new algorithm for computing the FT-Trees FT L,f (s, t) based on the (L, f )-RPC of Thm. 2. This algorithm will be applied in the preprocessing phase of the f -DSO. The next theorem improves upon the O(m · n · L f +1 )-time algorithm provided in [ACC19] for dense graphs. The key difference from [ACC19] is that the algorithm of [ACC19] is combinatorial (e.g., uses Dijkstra for shortest path computations), and our algorithm is algebraic (e.g., uses matrix multiplication).
Theorem 39 (Improved Computation of FT-Trees). For every L and f = O(log n/ log log n), there exists a deterministic algorithm that computes s,t∈V FT L,f (s, t) in time:
1. O((αcLf)^{f+1} · LMn^ω) if L ≥ m^{1/c} for some constant c, and

2. O((αLf log n)^{f+1} · LMn^ω) otherwise,
where α is the universal constant of Theorem 2.
The first step of the algorithm applies Theorem 2 to compute an (L, f)-RPC G_{L,f}. Then, it applies the AP SP_{≤L} algorithm of Lemma 6 to compute, in each G′ ∈ G_{L,f}, the collection of all V(G′) × V(G′) shortest paths P^L_{G′}(s, t) with at most L edges, for every s, t ∈ V(G′). This computation serves as the basis for the following key task in the construction of the FT-trees: given a triplet (s, t, F), compute d_L(s, t, G \ F) and some path P_L(s, t, F) if such exists.

Lemma 40. Given the (L, f)-RPC G_{L,f} and the AP SP_{≤L} outputs computed in each of its subgraphs, for every triplet (s, t, F) one can compute d_L(s, t, G \ F), along with a path P_L(s, t, F) if such exists, in Õ(L) time.

Proof. By Theorem 26, given the (L, f)-RPC G_{L,f}, one can compute in Õ(L) time a collection of subgraphs G_F that fully avoid F. In addition, it holds that for any s-t path P in G \ F with at most L edges, there must exist a subgraph G′ ∈ G_F that fully contains P. In particular, letting P* be the shortest s-t path with at most L edges in G \ F (breaking ties in an arbitrary manner), there is a subgraph in G_F that fully contains P*. Since the algorithm AP SP_{≤L} is applied on each of the subgraphs G′ ∈ G_F, we have that
d_L(s, t, G \ F) = min_{G′ ∈ G_F} d_L(s, t, G′).     (7)
The desired path P L (s, t, F ) corresponds to the output path of algorithm AP SP ≤L in the subgraph G ∈ G F that minimizes the distance of Eq. (7).
The computation of the FT-tree FT_{L,f}(s, t) for every s, t ∈ V is described as follows. The root node is simply labeled by P_L(s, t), as computed by applying algorithm AP SP_{≤L} in G. If d_L(s, t, G) = ∞, then FT_{L,f}(s, t) is empty. The binary search tree for storing P_L(s, t) can be computed in O(L) time. Now, for every labeled node ⟨P_L(s, t, F), F⟩, the algorithm computes its child nodes ⟨P_L(s, t, F ∪ {a_j}), F ∪ {a_j}⟩ for every a_j ∈ P_L(s, t, F). For that purpose, it applies the algorithm of Lemma 40 with input (s, t, F ∪ {a_j}) for every a_j ∈ P_L(s, t, F). We are now ready to complete the proof of Theorem 39.

Lemma 41. Given the FT-tree FT_{L,f}(s, t), for every query (s, t, F) with |F| ≤ f one can compute d_L(s, t, G \ F), along with a path P_L(s, t, F) attaining it, in time O(f^2 log L).

Proof. Given (s, t, F), we query the FT-tree FT_{L,f}(s, t) as follows. First check if the path P_L(s, t) labeled at the root of the tree intersects F. If not, then output P_L(s, t). Otherwise, letting a_j ∈ P_L(s, t) ∩ F, we continue with the child node labeled by P_L(s, t, {a_j}). Again, if P_L(s, t, {a_j}) ∩ F = ∅, we output that path, and otherwise continue to its child node P_L(s, t, {a_j, a_j′}) for some a_j′ ∈ P_L(s, t, {a_j}) ∩ F. Using the binary search tree at each node P_L(s, t, F′), finding some edge e ∈ P_L(s, t, F′) ∩ F can be done in O(f log L) time. Since the depth of the tree is f, the total time is O(f^2 log L).
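A compact sketch of this query walk follows, reusing the illustrative node layout of the earlier FT-tree construction sketch (so the node fields are assumptions, not the paper's data structure).

```python
def query_ft_tree(node, F):
    """Walk down the FT-tree: at each node, look for an edge of its stored path
    that lies in the query fault set F; if none exists, the stored path already
    avoids F and is returned."""
    while node is not None:
        hit = next((e for e in node["path"] if e in F), None)
        if hit is None:
            return node["path"]            # path avoiding F found
        node = node["children"].get(hit)   # descend into the subtree for this edge
    return None                            # no surviving path with <= L edges
```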
Deterministic Preprocessing and Query Algorithms
The randomized preprocessing algorithm of Weimann and Yuster has two randomized ingredients. The first is the computation of the (L, f)-RPC given by the subgraphs G_1, …, G_r. The second is the computation of the set R which, w.h.p., hits every L-length segment of every long P(s, t, F) path. Our deterministic preprocessing algorithm replaces both ingredients: it computes the (L, f)-RPC of Theorem 2, builds the FT-trees of Theorem 39, and applies Lemma 36 to deterministically compute a hitting set R for the collection D_L of the computed replacement paths that contain at least L/4 edges. This completes the description of the preprocessing algorithm. We note that the computation of the FT-trees substitutes the A_j, B_j, D_j matrices used in [WY13]. By setting the matrix multiplication exponent to ω = 2.373 and ε = 1 − α, Lemma 42 achieves the bound of Theorem 4.
The Query Algorithm. Once the FT-trees are computed, the query algorithm is the same as in [ACC19]; for completeness, we describe it here. Note that in contrast to [ACC19], we do not assume here that shortest path ties are decided in a consistent manner, thus the correctness of the procedure is somewhat more delicate. Given a short query (s, t, F), i.e., d_L(s, t, G \ F) = dist(s, t, G \ F), the desired distance d_L(s, t, G \ F) can be computed in time O(f^2 log L) by using the query algorithm of Lemma 41. From now on assume that the query (s, t, F) is long. Unlike [WY13], we would not be able to show that there are few subgraphs in the (L, f)-RPC G_{L,f} that fully avoid F. Nevertheless, we will still be able to efficiently compute the dense graph G_F, e.g., within nearly the same time bounds as in [WY13]. Recall that R is the hitting-set of the critical set of replacement paths. The vertex set of the graph G_F is given by V_F = R ∪ {s, t}, and the weight of each edge (x, y) ∈ V_F × V_F is given by w(x, y) = d_L(x, y, G \ F). This weight can be computed by applying the query algorithm of Lemma 41 on the FT-tree FT_{L,f}(x, y) with the query (x, y, F).
To answer the (s, t, F) query it remains to compute the s-t distance in the dense graph G_F. Using the method of feasible price functions, and in the exact same manner as in [WY13], this computation is done in O(|E(G_F)|) = O(n^{2−2ε/f}). This completes the description of the query algorithm. Given the computation of the FT-trees in the preprocessing step, by Lemma 41 the computation of the graph G_F takes O(|E(G_F)| · f^2 log L) = O(n^{2−2ε/f}) time. This matches the query time of Weimann and Yuster [WY13] (up to poly-logarithmic terms). We finalize the section by showing the correctness of the query algorithm. Due to the fact that we do not assume uniqueness of shortest paths as in [ACC19], the argument is more delicate.
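The following is a schematic sketch of the long-query handling just described; d_L_query stands in for the FT-tree query of Lemma 41, Dijkstra is used assuming non-negative weights (the paper handles general weights via price functions), and all names are illustrative.

```python
import heapq

def long_query_distance(s, t, F, R, d_L_query):
    """Build the dense graph on R + {s, t} whose edge weights are the faulty
    L-hop-bounded distances, then compute the s-t distance in it."""
    nodes = list(dict.fromkeys(list(R) + [s, t]))
    w = {(x, y): d_L_query(x, y, F) for x in nodes for y in nodes if x != y}
    dist = {v: float("inf") for v in nodes}
    dist[s] = 0.0
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v in nodes:
            if v != u and d + w[(u, v)] < dist[v]:
                dist[v] = d + w[(u, v)]
                heapq.heappush(heap, (dist[v], v))
    return dist[t]
```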
Claim 43. dist(s, t, G F ) = dist(s, t, G \ F ).
Proof. The correctness for the short queries (s, t, F) follows by the correctness of Lemma 41. Consider a long query (s, t, F) and let P(s, t, F) be the s-t shortest path in G \ F with the minimal number of edges. If there are several such paths, pick one in an arbitrary manner. By definition, P = P(s, t, F) has at least L edges. Partition it into segments of length in [L/4, L/2] and let s_i, t_i be the endpoints of the i-th segment. That is, P = P[s_1 = s, t_1 = s_2] • P[s_2, t_2] • … • P[s_ℓ, t_ℓ = t].

By the definition of P, every s_i-t_i shortest path in G \ F must have at least L/4 edges. To see this, assume towards contradiction that there exists a pair s_i, t_i with a shorter (in number of edges) s_i-t_i shortest path in G \ F. This implies that we can obtain an s-t shortest path of the same weight but with fewer edges, contradicting the minimality (in edges) of P. Since d_{L/2}(s_i, t_i, G \ F) = dist(s_i, t_i, G \ F) for every i ∈ {1, …, ℓ}, there is an s_i-t_i path P_L(s_i, t_i, F) of length at most L in the FT-tree FT_{L,f}(s_i, t_i). Specifically, this path can be found by applying the query algorithm of Lemma 41 with the query (s_i, t_i, F). By Lemma 41, this results in the distance d_L(s_i, t_i, G \ F) along with a path P_L(s_i, t_i, F).
Consider now an alternative s-t path P′ = P_L(s_1, t_1, F) • P_L(s_2, t_2, F) • … • P_L(s_ℓ, t_ℓ, F). Since d_{L/2}(s_i, t_i, G \ F) = dist(s_i, t_i, G \ F) for every i ∈ {1, …, ℓ}, we have that P′ ∩ F = ∅ and w(P′) = w(P) = dist(s, t, G \ F).

By definition, every P_L(s_i, t_i, F) ∈ D_{L,f}, and since P_L(s_i, t_i, F) has at least L/4 edges and at most L edges, P_L(s_i, t_i, F) ∈ D_L. Since R is a hitting-set of all paths in D_L, there exists some x_i ∈ P_L(s_i, t_i, F) ∩ R for every i. This implies that P′ can be written as a concatenation of replacement path segments, each with at most L edges and with both endpoints in V(G_F) = R ∪ {s, t}. Let {s = x_0, x_1, …, x_k, x_{k+1} = t} be the ordered set of the representatives of the V(G_F) vertices on P′. By the description of the query algorithm, for every i ∈ {0, …, k}, it holds that w(x_i, x_{i+1}) = d_L(x_i, x_{i+1}, G \ F). By the above argument, d_L(x_i, x_{i+1}, G \ F) = dist(x_i, x_{i+1}, G \ F). In addition, for every pair x, y ∈ V(G_F), w(x, y) = d_L(x, y, G \ F) ≥ dist(x, y, G \ F). We therefore conclude that dist(s, t, G_F) = w(P′) = dist(s, t, G \ F).
Derandomization of Fault Tolerant Spanners
We next consider applications of the (L, f)-RPC to deterministic constructions of fault-tolerant spanners resilient to at most f vertex faults. For a given n-vertex (possibly weighted) graph G = (V, E), a subgraph H ⊆ G is an f-fault tolerant (α, β)-spanner if dist(s, t, H \ F) ≤ α · dist(s, t, G \ F) + β for every s, t ∈ V and every F ⊆ V with |F| ≤ f.
When β = 0, the spanner is multiplicative, denoted an f-fault tolerant t-spanner for short, where t is the stretch factor. When α = 1, the spanner is additive.
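For intuition only, the following brute-force Python checker tests the definition above on tiny graphs (adjacency dicts {u: {v: weight}}); it enumerates every vertex fault set of size at most f and compares distances in H \ F and G \ F. It is exponential in f and is a sanity-check sketch, not part of any construction in the paper.

import heapq
from itertools import combinations

def dijkstra(adj, src):
    """Single-source distances in an undirected weighted graph {v: {u: w}}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def remove_vertices(adj, faults):
    """Return the subgraph obtained by deleting the fault vertices."""
    return {u: {v: w for v, w in nbrs.items() if v not in faults}
            for u, nbrs in adj.items() if u not in faults}

def is_ft_spanner(G, H, f, alpha, beta=0.0):
    """Check: for every |F| <= f and surviving pair s, t,
    dist(s, t, H \\ F) <= alpha * dist(s, t, G \\ F) + beta."""
    V = list(G)
    for k in range(f + 1):
        for F in combinations(V, k):
            Fset = set(F)
            dG = {s: dijkstra(remove_vertices(G, Fset), s) for s in V if s not in Fset}
            dH = {s: dijkstra(remove_vertices(H, Fset), s) for s in V if s not in Fset}
            for s in dG:
                for t, d in dG[s].items():
                    if dH[s].get(t, float("inf")) > alpha * d + beta + 1e-9:
                        return False
    return True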
Multiplicative Vertex Fault-Tolerant Spanners
Chechik, Langberg, Peleg, and Roditty [CLPR10] presented the first non-trivial construction of f-fault-tolerant multiplicative spanners resilient to vertex faults. The size overhead of their construction (compared to a standard spanner) is k^f, that is, exponential in the number of faults. Dinitz and Krauthgamer [DK11] provided a simpler and sparser solution by using the notion of RPCs. They showed:

Theorem 44 (Theorem 1.1 of [DK11]). For every graph G = (V, E) with positive edge lengths and odd t ≥ 3, there is an f-fault tolerant t-spanner with size O(f^{2−2/(t+1)} · n^{1+2/(t+1)} log n).

This theorem is a consequence of a general conversion scheme that turns any τ(n, m)-time algorithm for constructing t-spanners of size s(n) into an algorithm for constructing f-fault tolerant t-spanners with size O(f^3 log n · s(2n/f)) and time complexity O(f^3 log n · τ(2n/f, m)). Specifically, applying this conversion to the greedy spanner algorithm yields an f-fault tolerant (2k−1)-spanner with O(f^3 log n · (n/f)^{1+1/k}) edges in time O(f^3 log n · k · m · (2n/f)^{1+1/k}). In this section we provide a derandomization of Theorem 2.1 of [DK11] (which is used to obtain Theorem 1.1 of [DK11], i.e., Theorem 44) and show:
Theorem 45 (Derandomization of Theorem 2.1 of [DK11]). If there is a deterministic algorithm A that on every n-vertex m-edge graph builds a t-spanner of size s(n) in time τ(n, m, t), then there is a deterministic algorithm that on any such graph builds an f-fault tolerant t-spanner of:
• size O(f^3 · s(n/f)) and time O(f^3 (τ(n/f, m, t) + m));
• size O((f log n)^3 · s(n/f)) and time O((f log n)^3 (τ(n/f, m, t) + m)), if f ∈ [log n, n^{o(1)}].

Proof. The algorithm applies the vertex variant of Theorem 28 to compute an (L = 2, f)-RPC G. Then, it applies the fault-free algorithm A to compute a t-spanner H_j for each subgraph G_j ∈ G. The output spanner H = ∪_j H_j is simply the union of all these spanner subgraphs. We first consider correctness. Fix a replacement path P(s, t, F). It is required to show that dist(s, t, H \ F) ≤ t · dist(s, t, G \ F), and thus it is sufficient to show that dist(u, v, H \ F) ≤ t · w(u, v) for every edge (u, v) ∈ P(s, t, F), where w(u, v) is the weight of the edge (u, v) in G. Since G is a (2, f)-RPC, there exists a subgraph G_j ∈ G satisfying that (u, v) ∈ G_j and F ∩ V(G_j) = ∅. Thus, the t-spanner H_j ⊆ H satisfies dist(u, v, H_j \ F) = dist(u, v, H_j) ≤ t · w(u, v), as desired. We now turn to show that the computation time is O(|G| · (τ(n/f, m, t) + m)) and that the size of the spanner is O(|G| · s(n/f)). By property (I5) of (the vertex variant of) Theorem 28, we get that |V(G_j)| = O(n/f) for every G_j ∈ G. The bounds then follow by plugging in the covering value |G| and the computation time of the covering from Theorem 2.

Nearly Additive Fault-Tolerant Spanners

In [BCPS15], the approach of [DK11] was extended to provide vertex fault-tolerant spanners with nearly additive stretch.

Theorem 46 (Derandomization of Theorem 3.1 of [BCPS15]). Let A be an algorithm for computing a (µ, α)-spanner of size O(n^{1+δ}) in time τ for an n-vertex m-edge graph G = (V, E). Set L = α · ε^{−1} + 1. Then, for any ε > 0 and f ≤ L, one can compute an f-vertex fault-tolerant (µ + ε, α)-spanner with:
1. O((cfL)^{f+1} · n^{1+δ}) edges in time O((cfL)^{f+1} · τ), if L ≥ n^{1/c} for some constant c ∈ N;
2. O((c'fL)^{f+2} · log n · n^{1+δ}) edges in time O((c'fL)^{f+2} · log n · τ), if L ≤ log n;
3. O((c'fL log n)^{f+1} · n^{1+δ}) edges in time O((c'fL log n)^{f+1} · τ), otherwise, for some constant c'.

Proof. The proof follows the exact same lines as Theorem 3.1 of [BCPS15], only using Theorem 2 to build an (L + 1, f)-RPC G. It then applies algorithm A on each of these subgraphs, obtaining a (µ, α)-spanner H_i for each G_i ∈ G, and takes the union H of the output spanners as the final subgraph. The size and time bounds are immediate by Theorem 2. To see the stretch argument, it is sufficient to show that for any path of length at most L in G \ F there is a corresponding path in H \ F of bounded length; the stretch argument for longer paths is obtained by decomposing them into L-length segments (except perhaps for the last segment) and accumulating the additive stretch from each segment. Fix an L-length path P' ⊆ P(s, t, F), and let u, v be the endpoints of P'. Since G is an (L + 1, f)-RPC, there exists a subgraph G_i ∈ G such that P' ⊆ G_i and F ∩ G_i = ∅. Since H_i is a (µ, α)-spanner for G_i, we have that dist(u, v, H_i \ F) = dist(u, v, H_i) ≤ µ · L + α. Partitioning any path P(s, t, F) into at most (1/L) · dist(s, t, G \ F) segments, each of length at most L, we then have dist(s, t, H \ F) ≤ µ · dist(s, t, G \ F) + α · (1/L) · dist(s, t, G \ F). Since 1/L < ε/α, the stretch bound holds.
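The conversion scheme behind Theorem 45 can be sketched as follows in Python. The randomized vertex-sampling routine below is only a stand-in for the deterministic covering of Theorem 28, and the repetition count, the keep probability, and the interface of spanner_algo (an assumed fault-free spanner routine returning a set of edges) are illustrative choices, not the paper's parameters.

import math
import random

def random_two_f_rpc(vertices, f, seed=0, c=3):
    """Randomized stand-in for a (2, f)-RPC over vertex faults: each subgraph keeps
    every vertex independently with probability about 1/f, so both endpoints of a
    fixed edge survive and a fixed fault set of size <= f is fully deleted with
    probability Omega(1/f^2); O(f^2 log n) repetitions then suffice w.h.p."""
    rng = random.Random(seed)
    n = max(2, len(vertices))
    p_keep = 1.0 / (f + 1)
    reps = int(math.ceil(c * (f + 1) ** 2 * math.log(n)))
    return [{v for v in vertices if rng.random() < p_keep} for _ in range(reps)]

def ft_spanner_from_rpc(G, f, spanner_algo):
    """Run the fault-free spanner algorithm on the subgraph induced by every vertex
    set of the covering and output the union of the resulting spanners."""
    covering = random_two_f_rpc(list(G), f)
    H_edges = set()
    for S in covering:
        induced = {u: {v: w for v, w in G[u].items() if v in S}
                   for u in G if u in S}
        H_edges |= set(spanner_algo(induced))
    return H_edges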
B Missing Proofs
Proof of Lemma 7. First consider the case where L ≥ f. Let G = {G_1, ..., G_r} be a collection of independently sampled subgraphs, for r = c · f · L^f · log n where c is a sufficiently large constant. Each subgraph G_i is obtained by sampling each edge e ∈ E(G) into G_i independently with probability p = 1 − 1/L. We now show that G is indeed an (L, f)-RPC. Fix a replacement path P(s, t, F) of length at most L that avoids a set F of at most f edges. The probability that a subgraph G_i covers P(s, t, F) is at least q = p^L · (1/L)^f ≥ 1/(e · L^f). Thus the probability that none of the r subgraphs covers P(s, t, F) is at most (1 − q)^r ≤ (1 − 1/(e · L^f))^{c · f · L^f · log n} ≤ 1/n^{c'f} for a sufficiently large constant 1 < c' < c. By taking c to be a sufficiently large constant and applying the union bound over all n^{4f+2} triplets (s, t, F), we get that w.h.p. G is an (L, f)-RPC.
Next, assume that L ≤ f. The definition of G is almost the same, up to a small modification in the choice of parameters. Set r = c · f^{L+1} · log n and let p = 1/f. To see the correctness, fix a replacement path P(s, t, F) with at most L edges. The probability that G_i covers P(s, t, F) is at least q = p^L · (1 − p)^f ≥ 1/(e · f^L). Thus the probability that none of the r subgraphs covers P(s, t, F) is at most (1 − q)^r ≤ (1 − 1/(e · f^L))^{c · f^{L+1} · log n} ≤ 1/n^{c'f} for a sufficiently large constant 1 < c' < c. By taking c to be a sufficiently large constant and applying the union bound over all n^{2f+2} triplets (s, t, F), we get that w.h.p. G is an (L, f)-RPC.
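The random sampling in both regimes of the proof can be written as the short sketch below; the constant c is illustrative, and the routine is meant only for very small parameters (for interesting values of L and f the number of subgraphs is astronomically large, which is exactly the point of the deterministic constructions).

import math
import random

def randomized_rpc(edges, L, f, n, c=4, seed=0):
    """Sketch of the randomized (L, f)-RPC of Lemma 7 for edge faults: each subgraph
    keeps every edge independently with probability p, with (p, r) chosen according
    to the two regimes L >= f and L < f from the proof above."""
    rng = random.Random(seed)
    if L >= f:
        p = 1.0 - 1.0 / L
        r = math.ceil(c * f * (L ** f) * math.log(max(2, n)))
    else:
        p = 1.0 / f
        r = math.ceil(c * (f ** (L + 1)) * math.log(max(2, n)))
    return [[e for e in edges if rng.random() < p] for _ in range(r)]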
C Improved RPC given Input Sets
In this section, we show an improved RPC computation based on a given input set D. Specifically, we consider a relaxed notion of the problem as suggested by Alon, Chechik, and Cohen [ACC19] and provide an (L, f )-RPC for this relaxed notion with nearly optimal covering value. The main result of this section is the following.
Theorem 48. Let L, f be integer parameters such that L ≥ f. There exists an algorithm A that takes as input a graph G on n vertices and m edges and a list D = {(P_1, F_1), ..., (P_k, F_k)} of k pairs, each consisting of an L-length replacement path P_i and the set of faults F_i that it avoids (in the problem statement of [ACC19], k = O(n^{2+ε})), and outputs a restricted (L, f)-RPC G(D) satisfying that for every (P_i, F_i) ∈ D, there is a subgraph G' ∈ G(D) that contains P_i and avoids F_i. Moreover, the running time of A is (m + k) · (log m)^{O(1)} · (αLf log m)^f, where α ∈ N is some small universal constant.

Towards the goal of proving Theorem 48, we start by showing that for every a, b, N, given an explicit set S = {(A, B) | A, B ⊆ [N], |A| ≤ a, |B| ≤ b, A ∩ B = ∅}, there exists a considerably smaller set of hash functions H_S = {h : [N] → [q]} with the following property: for every (A, B) ∈ S, there exists a function h ∈ H_S that does not collide on (A, B). The corresponding statement, Lemma 49, should be compared with Corollary 18: the latter works for any pair of disjoint sets A, B, while Lemma 49 satisfies the collision-free property only for the pairs (A, B) ∈ S, which allows us to obtain a considerably smaller family of functions.
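The guarantee of Theorem 48 can be stated operationally by the following small verifier; it is a brute-force sanity check of the restricted covering property (each subgraph given as a set of edges), not part of the construction itself.

def covers_list(covering, D):
    """Check the guarantee of Theorem 48 directly: for every pair (P, F) in the
    input list D (P a set of path edges, F the fault set it avoids), some subgraph
    G in `covering` contains all of P and none of F."""
    return all(
        any(set(P) <= G and not (set(F) & G) for G in covering)
        for P, F in D
    )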
Figure 1: Illustration of the lower-bound graph G^w_f for f = 2.
In [Par19a], the term (L, f)-RPC is not used; instead, the deterministic algorithm is referred to as a derandomization of the FT-sampling technique.
We use the normalized notion of distance for the sake of exposition. In the coding theory literature, our notion of distance is referred to as relative distance.
The reasoning behind naming them Hit and Miss hash families is as follows. Fix A and B. There exists a hash function h in the family and a subset S of [q] of size at most b such that S completely hits h(B) and completely misses h(A). All other interpretations of the name "Hit and Miss" hash family are for the entertainment of the reader.
We recall Remark 25 to say that when a = m^{o(1)} and b = Ω(log m) in the statement of Theorem 2, the covering number we aim to achieve is (αLf log m)^{b+1} instead of (αLf)^{b+2} · log m.
In particular, for an L-long (s, t, F) triplet it holds that every s-t shortest path in G \ F has at least L + 1 edges.
To avoid confusion, we call the vertices of the FT-trees nodes.
Acknowledgment

We would like to thank Swastik Kopparty, Gil Cohen, and Amnon Ta-Shma for discussion on coding theory, Moni Naor for discussion on universal hash functions, and Eylon Yogev for various discussions.
References

[AAB+92] Miklós Ajtai, Noga Alon, Jehoshua Bruck, Robert Cypher, Ching-Tien Ho, Moni Naor, and Endre Szemerédi. Fault tolerant graphs, perfect hash functions and disjoint paths. In 33rd Annual Symposium on Foundations of Computer Science, Pittsburgh, Pennsylvania, USA, 24-27 October 1992, pages 693-702, 1992.
Noga Alon, Shiri Chechik, and Sarel Cohen. Deterministic combinatorial replacement paths and distance sensitivity oracles. In 46th International Colloquium on Automata, Languages, and Programming, ICALP 2019, Patras, Greece, pages 12:1-12:14, 2019.
Balanced families of perfect hash functions and their applications. Noga Alon, Shai Gutner, 54:1-54:12ACM Trans. Algorithms. 63Noga Alon and Shai Gutner. Balanced families of perfect hash functions and their applications. ACM Trans. Algorithms, 6(3):54:1-54:12, 2010.
Explicit construction of exponential sized families of k-independent sets. Noga Alon, Discret. Math. 582Noga Alon. Explicit construction of exponential sized families of k-independent sets. Discret. Math., 58(2):191-193, 1986.
Algorithmic construction of sets for k -restrictions. Noga Alon, Dana Moshkovitz, Shmuel Safra, ACM Trans. Algorithms. 22Noga Alon, Dana Moshkovitz, and Shmuel Safra. Algorithmic construction of sets for k -restrictions. ACM Trans. Algorithms, 2(2):153-177, 2006.
Derandomization, witnesses for boolean matrix multiplication and construction of perfect hash functions. Noga Alon, Moni Naor, Algorithmica. 164-5Noga Alon and Moni Naor. Derandomization, witnesses for boolean matrix multiplica- tion and construction of perfect hash functions. Algorithmica, 16(4-5):434-449, 1996.
Color-coding. Noga Alon, Raphael Yuster, Uri Zwick, Journal of the ACM (JACM). 424Noga Alon, Raphael Yuster, and Uri Zwick. Color-coding. Journal of the ACM (JACM), 42(4):844-856, 1995.
Fault tolerant additive and (µ, α)-spanners. Gilad Braunschvig, Shiri Chechik, David Peleg, Adam Sealfon, Theor. Comput. Sci. 580Gilad Braunschvig, Shiri Chechik, David Peleg, and Adam Sealfon. Fault tolerant additive and (µ, α)-spanners. Theor. Comput. Sci., 580:94-100, 2015.
Optimal vertex fault tolerant spanners (for fixed stretch). Greg Bodwin, Michael Dinitz, Merav Parter, Virginia Vassilevska Williams, Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms. the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete AlgorithmsSIAMGreg Bodwin, Michael Dinitz, Merav Parter, and Virginia Vassilevska Williams. Optimal vertex fault tolerant spanners (for fixed stretch). In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1884- 1900. SIAM, 2018.
Greg Bodwin, Michael Dinitz, and Caleb Robelle. Optimal vertex fault-tolerant spanners in polynomial time. CoRR, abs/2007.08401, 2020.
Diptarka Chakraborty and Keerti Choudhary. New extremal bounds for reachability and strong-connectivity preservers under failures. In 47th International Colloquium on Automata, Languages, and Programming, ICALP 2020, July 8-11, 2020, Saarbrücken, Germany (Virtual Conference), pages 25:1-25:20, 2020.
Distance sensitivity oracles with subcubic preprocessing time and fast query time. Shiri Chechik, Sarel Cohen, Proccedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020. cedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020Chicago, IL, USAShiri Chechik and Sarel Cohen. Distance sensitivity oracles with subcubic preprocessing time and fast query time. In Proccedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020, Chicago, IL, USA, June 22-26, 2020, pages 1375-1388, 2020.
(1+eps)-approximate fsensitive distance oracles. Shiri Chechik, Sarel Cohen, Amos Fiat, Haim Kaplan, Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms. the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete AlgorithmsSIAMShiri Chechik, Sarel Cohen, Amos Fiat, and Haim Kaplan. (1+eps)-approximate f- sensitive distance oracles. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1479-1496. SIAM, 2017.
Fault tolerant spanners for general graphs. Shiri Chechik, Michael Langberg, David Peleg, Liam Roditty, SIAM Journal on Computing. 397Shiri Chechik, Michael Langberg, David Peleg, and Liam Roditty. Fault tolerant span- ners for general graphs. SIAM Journal on Computing, 39(7):3403-3423, 2010.
On packing low-diameter spanning trees. Julia Chuzhoy, Merav Parter, Zihan Tan, 47th International Colloquium on Automata, Languages, and Programming. Saarbrücken, Germany2020Virtual Conference)Julia Chuzhoy, Merav Parter, and Zihan Tan. On packing low-diameter spanning trees. In 47th International Colloquium on Automata, Languages, and Programming, ICALP 2020, July 8-11, 2020, Saarbrücken, Germany (Virtual Conference), pages 33:1-33:18, 2020.
Fault-tolerant spanners: better and simpler. Michael Dinitz, Robert Krauthgamer, Proceedings of the 30th annual ACM SIGACT-SIGOPS symposium on Principles of distributed computing. the 30th annual ACM SIGACT-SIGOPS symposium on Principles of distributed computingACMMichael Dinitz and Robert Krauthgamer. Fault-tolerant spanners: better and simpler. In Proceedings of the 30th annual ACM SIGACT-SIGOPS symposium on Principles of distributed computing, pages 169-178. ACM, 2011.
Efficient and simple algorithms for fault tolerant spanners. Michael Dinitz, Caleb Robelle, Michael Dinitz and Caleb Robelle. Efficient and simple algorithms for fault tolerant spanners. 2020.
Efficient and simple algorithms for fault-tolerant spanners. Michael Dinitz, Caleb Robelle, PODC '20: ACM Symposium on Principles of Distributed Computing, Virtual Event. ItalyMichael Dinitz and Caleb Robelle. Efficient and simple algorithms for fault-tolerant spanners. In PODC '20: ACM Symposium on Principles of Distributed Computing, Virtual Event, Italy, August 3-7, 2020, pages 493-500, 2020.
On the size of separating systems and families of perfect hash functions. L Michael, János Fredman, Komlós, SIAM Journal on Algebraic and Discrete Methods. 51Michael L. Fredman and János Komlós. On the size of separating systems and families of perfect hash functions. SIAM Journal on Algebraic and Discrete Methods, 5(1):61- 68, 1984.
Storing a sparse table with 0(1) worst case access time. Michael L Fredman, János Komlós, Endre Szemerédi, J. ACM. 313Michael L. Fredman, János Komlós, and Endre Szemerédi. Storing a sparse table with 0(1) worst case access time. J. ACM, 31(3):538-544, 1984.
Recursive bounds for perfect hashing. Emanuela Fachini, Alon Nilli, Discret. Appl. Math. 1113Emanuela Fachini and Alon Nilli. Recursive bounds for perfect hashing. Discret. Appl. Math., 111(3):307-311, 2001.
Valerii Denisovich Goppa. A new class of linear correcting codes. Problemy Peredachi Informatsii, 6(3):24-30, 1970.
Essential Coding Theory. Venkatesan Guruswami, Atri Rudra, Madhu Sudan, Venkatesan Guruswami, Atri Rudra, and Madhu Sudan. Essential Coding The- ory. 2019. Available at http://www.cse.buffalo.edu/faculty/atri/courses/ coding-theory/book.
On the asymptotic behaviour of some towers of function fields over finite fields. Arnaldo Garcia, Henning Stichtenoth, Journal of Number Theory. 612Arnaldo Garcia and Henning Stichtenoth. On the asymptotic behaviour of some towers of function fields over finite fields. Journal of Number Theory, 61(2):248 -273, 1996.
Faster replacement paths and distance sensitivity oracles. Fabrizio Grandoni, Virginia Vassilevska Williams, 15:1-15:25ACM Trans. Algorithms. 161Fabrizio Grandoni and Virginia Vassilevska Williams. Faster replacement paths and distance sensitivity oracles. ACM Trans. Algorithms, 16(1):15:1-15:25, 2020.
Round-efficient distributed byzantine computation. CoRR, abs. Yael Hitron, Merav Parter, Yael Hitron and Merav Parter. Round-efficient distributed byzantine computation. CoRR, abs/2004.06436, 2020.
Perfect hashing and probability. Alon Nilli, Comb. Probab. Comput. 3Alon Nilli. Perfect hashing and probability. Comb. Probab. Comput., 3:407-409, 1994.
Splitters and near-optimal derandomization. Moni Naor, Leonard J Schulman, Aravind Srinivasan, 36th Annual Symposium on Foundations of Computer Science. Milwaukee, Wisconsin, USAMoni Naor, Leonard J. Schulman, and Aravind Srinivasan. Splitters and near-optimal derandomization. In 36th Annual Symposium on Foundations of Computer Science, Milwaukee, Wisconsin, USA, 23-25 October 1995, pages 182-191, 1995.
Dual failure resilient bfs structure. Merav Parter, Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing. the 2015 ACM Symposium on Principles of Distributed ComputingMerav Parter. Dual failure resilient bfs structure. In Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing, pages 481-490, 2015.
Small cuts and connectivity certificates: A fault tolerant approach. Merav Parter, 33rd International Symposium on Distributed Computing. Merav Parter. Small cuts and connectivity certificates: A fault tolerant approach. In 33rd International Symposium on Distributed Computing, 2019.
Small cuts and connectivity certificates: A fault tolerant approach. CoRR, abs. Merav Parter, Merav Parter. Small cuts and connectivity certificates: A fault tolerant approach. CoRR, abs/1908.03022, 2019.
Nearly optimal vertex fault-tolerant spanners in optimal time: sequential, distributed, and parallel. Merav Parter, STOC '22: 54th Annual ACM SIGACT Symposium on Theory of Computing. Stefano Leonardi and Anupam GuptaRome, ItalyACMMerav Parter. Nearly optimal vertex fault-tolerant spanners in optimal time: sequen- tial, distributed, and parallel. In Stefano Leonardi and Anupam Gupta, editors, STOC '22: 54th Annual ACM SIGACT Symposium on Theory of Computing, Rome, Italy, June 20 -24, 2022, pages 1080-1092. ACM, 2022.
Sparse fault-tolerant BFS structures. Merav Parter, David Peleg, 11:1-11:24ACM Trans. Algorithms. 131Merav Parter and David Peleg. Sparse fault-tolerant BFS structures. ACM Trans. Algorithms, 13(1):11:1-11:24, 2016.
Low congestion cycle covers and their applications. Merav Parter, Eylon Yogev, Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019. the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019San Diego, California, USAMerav Parter and Eylon Yogev. Low congestion cycle covers and their applications. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 1673-1692, 2019.
Secure distributed computing made (nearly) optimal. Merav Parter, Eylon Yogev, Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, PODC 2019. the 2019 ACM Symposium on Principles of Distributed Computing, PODC 2019Toronto, ON, CanadaMerav Parter and Eylon Yogev. Secure distributed computing made (nearly) optimal. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, PODC 2019, Toronto, ON, Canada, July 29 -August 2, 2019, pages 107-116, 2019.
Polynomial codes over certain finite fields. Irving S Reed, Gustave Solomon, Journal of the Society for Industrial and Applied Mathematics (SIAM). 82Irving S. Reed and Gustave Solomon. Polynomial codes over certain finite fields. Journal of the Society for Industrial and Applied Mathematics (SIAM), 8(2):300 -304, 1960.
[SAK+01] Kenneth W. Shum, Ilia Aleshnikov, P. Vijay Kumar, Henning Stichtenoth, and Vinay Deolalikar. A low-complexity algorithm for the construction of algebraic-geometric codes better than the Gilbert-Varshamov bound. IEEE Trans. Information Theory, 47(6):2225-2241, 2001.
Maximum distance q -nary codes. Richard C Singleton, IEEE Trans. Information Theory. 102Richard C. Singleton. Maximum distance q -nary codes. IEEE Trans. Information Theory, 10(2):116-118, 1964.
The spatial complexity of oblivious k-probe hash functions. Jeanette P Schmidt, Alan Siegel, SIAM J. Comput. 195Jeanette P. Schmidt and Alan Siegel. The spatial complexity of oblivious k-probe hash functions. SIAM J. Comput., 19(5):775-786, 1990.
Modular curves, shimura curves, and goppa codes, better than varshamov-gilbert bound. M A Tsfasman, S G Vlȃdutx, Th Zink, Mathematische Nachrichten. 1091M. A. Tsfasman, S. G. Vlȃdutx, and Th. Zink. Modular curves, shimura curves, and goppa codes, better than varshamov-gilbert bound. Mathematische Nachrichten, 109(1):21-28, 1982.
Sensitive distance and reachability oracles for large batch updates. Jan Van Den Brand, Thatchaphol Saranurak, 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS). IEEEJan van den Brand and Thatchaphol Saranurak. Sensitive distance and reachability oracles for large batch updates. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pages 424-435. IEEE, 2019.
Explicit constructions of perfect hash families from algebraic curves over finite fields. Huaxiong Wang, Chaoping Xing, J. Comb. Theory, Ser. A. 931Huaxiong Wang and Chaoping Xing. Explicit constructions of perfect hash families from algebraic curves over finite fields. J. Comb. Theory, Ser. A, 93(1):112-124, 2001.
Oren Weimann and Raphael Yuster. Replacement paths and distance sensitivity oracles via fast matrix multiplication. ACM Transactions on Algorithms (TALG), 9(2):14, 2013.
Answering distance queries in directed graphs using fast matrix multiplication. Raphael Yuster, Uri Zwick, 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05). IEEERaphael Yuster and Uri Zwick. Answering distance queries in directed graphs using fast matrix multiplication. In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05), pages 389-396. IEEE, 2005.
A Comparison with [Par19a] and [BDR20]
In [Par19a], the second author provided the first deterministic constructions of (L, f)-RPC for L ≥ f. The notion of (L, f)-RPC is introduced for the first time in the current paper, and in [Par19a] the construction is referred to as a derandomization of the FT-sampling technique. The construction of [Par19a, Par19b] is based on a computation of a family of perfect hash functions H = {h : [n] → [2(L + f)^2]} with poly(Lf log n) functions. The covering subgraph family G of [Par19a, Par19b] consists of |H| · (4Lf)^{2f} = (4Lf log n)^{O(1)+2f} subgraphs. In the context of [Par19a], it was sufficient for the value of the covering to be polynomial in L, and for the computation time to be polynomial in n. Also note that despite the fact that [Par19a, Par19b] explicitly considers the setting where L ≥ f, their construction can be extended to provide a covering of value poly(f log n) also for the case of L ≤ f (this parallels the randomized construction of (L, f)-RPC, where the sampling probability also differs between L ≤ f and L > f). Specifically, this can be done by applying very minor modifications to Lemma 17 of [Par19b]: set a = f and b = L, and let the set S_{h,i_1,i_2,...,i_b} of the lemma be given by

S_{h,i_1,i_2,...,i_b} = {ℓ ∈ [n] | h(ℓ) ∈ {i_1, i_2, ..., i_b}}, for all h ∈ H and i_1, i_2, ..., i_b ∈ [2(L + f)^2].   (8)
That is, the only modification for L ≤ f is in replacing the ∉ sign with ∈ in Eq. (8); the argument then follows in a symmetric manner as in the proof of Lemma 17 of [Par19b]. To summarize, the construction of [Par19a, Par19b] provides an (L, f)-RPC of value poly(min{L, f} log n).
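The family of subgraphs obtained from Eq. (8) can be sketched as follows; the interface (a list of hash functions over edge labels and a range size standing for 2(L + f)^2) is an illustrative abstraction of the perfect hash family of [Par19b], and the membership test shown is the ∈ variant used for L ≤ f (the L ≥ f variant of the lemma keeps the labels hashing outside the chosen buckets instead).

from itertools import combinations

def subgraphs_from_hash_family(edges, hash_family, range_size, b):
    """For every hash function h and every choice of b buckets {i_1, ..., i_b},
    keep exactly the edges whose label (position in `edges`) hashes into the
    chosen buckets, mirroring the sets S_{h,i_1,...,i_b} of Eq. (8)."""
    subgraphs = []
    for h in hash_family:
        for buckets in combinations(range(range_size), b):
            bucket_set = set(buckets)
            keep = {edges[x] for x in range(len(edges)) if h(x) in bucket_set}
            subgraphs.append(keep)
    return subgraphs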
In this work, we considerably optimize the construction of [Par19a] in several ways. First, we almost match the optimal values of (L, f)-RPCs for a wide range of parameters (e.g., when f = O(1)), providing a polynomial improvement in max{L, f} compared to [Par19a, Par19b].
Second, we establish several key properties of (L, f)-RPCs (e.g., Theorem 28) which have extensive applications. Those properties follow immediately from the randomized construction, and are proven in a quite natural manner in our deterministic setting as well. For example, in order to provide a "perfect" derandomization of the Weimann and Yuster DSO [WY13], as provided in this paper, we must use our nearly optimal constructions of (L, f)-RPCs; using the suboptimal (L, f)-RPC constructions of [Par19a, Par19b] leads to a polynomially larger query time compared to that of [WY13].
Third, we provide the first lower bound for the covering value of (L, f)-RPCs. We also note that our techniques differ from [Par19a, Par19b] and are based on various coding schemes. Independently of our work, very recently [BDR20] presented a (randomized) slack version of the greedy algorithm to obtain (vertex) fault-tolerant spanners of optimal size. To derandomize their construction, [BDR20] provided a deterministic construction of an (L = 2, f)-RPC (using our terminology) with additional properties. The work of [BDR20] leaves a gap in the running time depending on the number of faults f: for f ≥ n^c for some constant c, their derandomization matches the bounds of their randomized construction, while for smaller values of f there is a gap of a poly(f) factor in the running time. In our work, using the generalized construction of (L, f)-RPC with L = 2, and in particular using Theorem 28 (instead of Theorem 5.3 of [BDR20]), we remove this gap and obtain the analogue of Lemma 5.2 of [BDR20] (see Lemma 47). We also point the reader to subsequent work by Parter [Par22].
Elaborating, their non-optimality in derandomization stems from a not completely tight analysis of some additional properties of the RPC that they construct, and in order to compensate for this analysis, they rely on using "bulkier" objects such as almost k-wise independent families in a black-box manner. More formally, by using Theorem 28 instead of Lemma 5.3 of [BDR20], we prove Lemma 47 as follows.
Proof. Let G_{2,f} be the (2, f)-RPC of Theorem 28. By claim (I2) of Theorem 28, for every edge e there is a collection G_e of O(f) subgraphs that contain both endpoints of e; this is the analogue of the set L_e defined by [BDR20]. For every fixed set F of at most f vertex faults, let G_{e,F} be the subset of subgraphs in G_e that fully avoid F. To obtain a spanner of optimal size from Alg. 2 of [BDR20], it is required that for every e, F the ratio |G_{e,F}|/|G_e| is at least some constant c. Indeed, by claim (I4) it holds that |G_{e,F}| ≥ |G_e|/2 for every F. By setting τ to 1/3 in Alg. 2 of [BDR20], the correctness and the size of the spanner follow from Lemmas 5.4 and 5.5 in [BDR20]. (In contrast, in [BDR20] the ratio |G_{e,F}|/|G_e| depends also on some parameter δ of their universal hash function.) Computing the vertices of each subgraph takes O(n/f) time per subgraph and O(fn) time in total; the rest of the time argument works line by line as in Lemma 5.6 of [BDR20].
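The quantity bounded in the argument above can be computed directly by the following helper; it assumes a generic vertex-fault covering given as a list of vertex sets and is purely illustrative of the ratio |G_{e,F}|/|G_e| that the algorithm of [BDR20] needs to be constant.

def avoid_ratio(covering, e, F):
    """For a covering given as a list of vertex sets, return the fraction of
    subgraphs containing both endpoints of e = (u, v) that also avoid every
    fault vertex in F (claim (I4) above guarantees this is at least 1/2)."""
    u, v = e
    G_e = [S for S in covering if u in S and v in S]
    if not G_e:
        return 0.0
    G_eF = [S for S in G_e if not (set(F) & S)]
    return len(G_eF) / len(G_e)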
Lemma 49. Let b ≤ a ≤ N all be integers. There is an algorithm A which, given as input a set S = {(A, B) | A, B ⊆ [N], |A| ≤ a, |B| ≤ b, A ∩ B = ∅} and an [N, a, b, ℓ]_q-Strong HM hash family H, outputs a collection of hash functions H_S = {h : [N] → [q]} such that the following holds:
• (P1) For every (A, B) ∈ S, there exists h ∈ H_S such that for all (x, y) ∈ A × B, h(x) ≠ h(y).
• (P2) |H_S| = O(log |S|).
Moreover, A runs in time O(T_H + a · ℓ · |S|), where T_H is the computation time of H.
Proof. For every (A, B) ∈ S, let H_{A,B} = {i ∈ [ℓ] | ∀(x, y) ∈ A × B, h_i(x) ≠ h_i(y)}. Since H is a Strong HM hash family, we have |H_{A,B}| ≥ ℓ/2. The desired collection of hash functions H_S is obtained by computing a small hitting set for the sets {H_{A,B} | (A, B) ∈ S}; this can be done by the algorithm of Lemma 36. We next analyze the computation time. First we compute the ℓ × N matrix M_H corresponding to H, where the (i, x)-th entry of M_H is simply h_i(x). After the computation of M_H we simply go over each (A, B) ∈ S and compute the sets H_{A,B}. The computation of all the H_{A,B} sets takes O(|S| · ℓ · (a + b)) time. Then, the set H_S is computed by applying the hitting set algorithm of Lemma 36 with parameters n = ℓ, L = ℓ/2, and q = |S|. Thus the total computation time is O(T_H + a · ℓ · |S|).
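The selection step of the proof can be mirrored by the greedy sketch below. The predicate collides(h, A, B) is an assumed helper reporting whether some pair in A × B collides under h; the greedy rule is only in the spirit of the hitting-set algorithm of Lemma 36 and is not its exact implementation.

def select_hash_functions(H, S, collides):
    """Greedy sketch of the selection in Lemma 49: repeatedly pick the hash
    function separating the most still-uncovered pairs (A, B) in S.  Since a
    Strong HM family separates any fixed pair with at least half of its
    functions, O(log |S|) picks suffice."""
    uncovered = list(S)
    chosen = []
    while uncovered:
        gains = [sum(not collides(h, A, B) for A, B in uncovered) for h in H]
        best = max(range(len(H)), key=lambda i: gains[i])
        if gains[best] == 0:           # no remaining pair can be separated
            break
        chosen.append(H[best])
        uncovered = [(A, B) for A, B in uncovered if collides(H[best], A, B)]
    return chosen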
Finally, we show how to compute a covering graph family G * L,f for the critical set D L,f . The proof of the following lemma is similar to that of Theorem 2, but it is based on Lemma 23 and Lemma 49 rather than on Theorem 14. For the sake of brevity, we only prove the below for Reed-Solomon codes, as it suffices to give the claim in Theorem 48.
Lemma 50. Given a critical set D, there is a deterministic algorithm for computing an (L, f )-RPC
G(D) of cardinality O((2Lf log N ) f · log(|D|)) in time O((2Lf log N ) f +1 · m + (n · L f ) · (L · f ) 2 ).
Proof. Set a = L, b = f and N = m and let S = D L,f . Note that since each pair in D is given by (P, F ) where P ∩ F = ∅, |P | ≤ L and |F | ≤ f , the set S is a legal input to Claim 49 combined with the Reed-Solomon Strong HM hash family from Lemma 23. We then safely apply Claim 49 to compute a collection of hash functions H S = {h : [N ] → [2ab log N ]} that satisfies properties (P1) and (P2). For every h ∈ H S and for every subset i 1 , . . . , i b ∈ [1, 2ab log N ], define: G h,i 1 ,i 2 ,...,i b = {e ∈ E(G) | h( ) / ∈ {i 1 , i 2 , . . . , i b }} .
Overall, G(D) = {G h,i 1 ,i 2 ,...,i b | h ∈ H S , i 1 , i 2 , . . . , i b ∈ [1, 2ab log N ]}. The cardinality of G w L,f is bounded by O(|H S | · (2Lf log N ) b ) = O((2Lf log N ) f · log(|D|)). To show that G(D) satisfies the properties of Theorem 48, it is sufficient to show that it resiliently covers all the pairs in the critical set D L,f . Fix (P, F ) ∈ D where P is a u-v path. We will show that there exists at least one subgraph G ∈ G(D) satisfying that P ⊆ G and F ∩ G = ∅. Letting A = E(P ) and B = F , we have that (A, B) ∈ S. By property (P1) of H S , there exists a function h that does not collide on A, B. That is, there exists a function h ∈ H such that h(i) ≠ h(j) for every i ∈ A and j ∈ B. Thus, letting B = {s 1 , . . . , s b } and i 1 = h(s 1 ), . . . , i b = h(s b ), we have that h(e) / ∈ {i 1 , . . . , i b } for every e ∈ A. Therefore, the subgraph G h,i 1 ,i 2 ,...,i b satisfies that A ⊆ S h,i 1 ,i 2 ,...,i b and B ∩ S h,i 1 ,i 2 ,...,i b = ∅.
Finally, we analyze the computation time. By Cl. 49, the computation of H S takes O(Lf · m + (n · L f ) · (L · f ) 2 ) time. Next, consider the evaluation of all functions in H S on all the elements in [m]. This takes O(log(|D|) · m) = O(m · log(|D|)). Next, for a fixed hash function h ∈ H S and i 1 , i 2 , . . . , i b ∈ [1, 2ab log N ], the computation of the subgraph G h,i 1 ,i 2 ,...,i b can be done in O(m) time. Thus, the computation of all the subgraphs takes O((L · f log N ) f · log(n · L f ) · m) time.
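The construction of the subgraphs G h,i 1 ,...,i b admits a direct transcription; the sketch below is illustrative only. It assumes the graph is given as an edge list, that the selected hash functions act on edge indices, and it enumerates unordered bucket choices (ordered choices with repetition only duplicate subgraphs). The names build_rpc_subgraphs, num_buckets, etc. are not from the text.

```python
from itertools import combinations

def build_rpc_subgraphs(edges, hash_functions, num_buckets, f):
    """Enumerate the subgraphs G_{h, i_1..i_f} of the covering family (sketch).

    edges:          list of edges of G; the hash functions act on the edge index
    hash_functions: the collection H_S produced by the hitting-set step
    num_buckets:    size of the hash range (2ab log N in the text)
    f:              number of faults to exclude (b in the text)
    Yields ((h index, excluded buckets), subgraph) pairs, where the subgraph
    keeps every edge whose hashed index avoids the excluded buckets.
    """
    for h_idx, h in enumerate(hash_functions):
        hashed = [h(e_idx) for e_idx in range(len(edges))]  # one evaluation per edge
        for excluded in combinations(range(num_buckets), f):
            banned = set(excluded)
            subgraph = [e for e_idx, e in enumerate(edges) if hashed[e_idx] not in banned]
            yield (h_idx, excluded), subgraph
```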
Finally, we show that for every (P, F ) ∈ D, there are at most O(log N ) subgraphs in G(D) that contain no edge from F .
Lemma 51. Fix (P, F ) ∈ D. Then, |{G ∈ G w L,f | F ∩ G = ∅}| = O(log N ).
Proof. Consider the construction of G * L,f described in the proof of Lemma 50. Let S = D L,f and let H S = {h : [N ] → [2ab log N ]} be the covering graph family for S. Fix (P, F ) ∈ D L,f . We claim that the only subgraphs in G * L,f that fully avoid a fixed set of exactly F = {e j 1 , . . . , e j f } edge faults are given by the subset of subgraphs G F = {G h,h(e j 1 ),...,h(e j f ) | h ∈ H S }. To see this, consider a subgraph G = G h,i 1 ,...,i f where there exists e j such that h(e j ) / ∈ {i 1 , . . . , i f }. In this case, we have that e j ∈ G . Since G F consists of exactly one subgraph per hash function in H S , we get that |G F | = O(log(|D L,f |)) = O(log N ).
| []
|
[
"A Slow Shifting Concerned Machine Learning Method for Short-term Traffic Flow Forecasting",
"A Slow Shifting Concerned Machine Learning Method for Short-term Traffic Flow Forecasting",
"A Slow Shifting Concerned Machine Learning Method for Short-term Traffic Flow Forecasting",
"A Slow Shifting Concerned Machine Learning Method for Short-term Traffic Flow Forecasting"
]
| [
"Zann Koh [email protected] \nEngineering and Product Development\nEngineering and Product Development\nSingapore University of Technology and Design Singapore\nSingapore\n",
"Yan Qin [email protected] \nSchool of Electrical and Electronics Engineering\nSingapore University of Technology and Design Singapore\nSingapore\n",
"Yong Liang Guan \nEngineering and Product Development\nNanyang Technological University Singapore\nSingapore\n",
"Chau Yuen [email protected] \nSingapore University of Technology and Design Singapore\nSingapore\n",
"Zann Koh [email protected] \nEngineering and Product Development\nEngineering and Product Development\nSingapore University of Technology and Design Singapore\nSingapore\n",
"Yan Qin [email protected] \nSchool of Electrical and Electronics Engineering\nSingapore University of Technology and Design Singapore\nSingapore\n",
"Yong Liang Guan \nEngineering and Product Development\nNanyang Technological University Singapore\nSingapore\n",
"Chau Yuen [email protected] \nSingapore University of Technology and Design Singapore\nSingapore\n"
]
| [
"Engineering and Product Development\nEngineering and Product Development\nSingapore University of Technology and Design Singapore\nSingapore",
"School of Electrical and Electronics Engineering\nSingapore University of Technology and Design Singapore\nSingapore",
"Engineering and Product Development\nNanyang Technological University Singapore\nSingapore",
"Singapore University of Technology and Design Singapore\nSingapore",
"Engineering and Product Development\nEngineering and Product Development\nSingapore University of Technology and Design Singapore\nSingapore",
"School of Electrical and Electronics Engineering\nSingapore University of Technology and Design Singapore\nSingapore",
"Engineering and Product Development\nNanyang Technological University Singapore\nSingapore",
"Singapore University of Technology and Design Singapore\nSingapore"
]
| []
| The ability to predict traffic flow over time for crowded areas during rush hours is increasingly important as it can help authorities make informed decisions for congestion mitigation or scheduling of infrastructure development in an area. However, a crucial challenge in traffic flow forecasting is the slow shifting in temporal peaks between daily and weekly cycles, resulting in the nonstationarity of the traffic flow signal and leading to difficulty in accurate forecasting. To address this challenge, we propose a slow shifting concerned machine learning method for traffic flow forecasting, which includes two parts. First, we take advantage of Empirical Mode Decomposition as the feature engineering to alleviate the nonstationarity of traffic flow data, yielding a series of stationary components. Second, due to the superiority of Long-Short-Term-Memory networks in capturing temporal features, an advanced traffic flow forecasting model is developed by taking the stationary components as inputs. Finally, we apply this method on a benchmark of real-world data and provide a comparison with other existing methods. Our proposed method outperforms the state-of-art results by 14.55% and 62.56% using the metrics of root mean squared error and mean absolute percentage error, respectively. | 10.1109/sm57895.2023.10112492 | [
"https://export.arxiv.org/pdf/2303.17782v1.pdf"
]
| 257,900,947 | 2303.17782 | 13cbc1fb95e3eeaae9c9eb2d5b18588fe609f7b2 |
A Slow Shifting Concerned Machine Learning Method for Short-term Traffic Flow Forecasting
Zann Koh [email protected]
Engineering and Product Development
Engineering and Product Development
Singapore University of Technology and Design Singapore
Singapore
Yan Qin [email protected]
School of Electrical and Electronics Engineering
Singapore University of Technology and Design Singapore
Singapore
Yong Liang Guan
Engineering and Product Development
Nanyang Technological University Singapore
Singapore
Chau Yuen [email protected]
Singapore University of Technology and Design Singapore
Singapore
Index Terms-Traffic flow forecasting, Long-short term memory, Empirical mode decomposition
The ability to predict traffic flow over time for crowded areas during rush hours is increasingly important as it can help authorities make informed decisions for congestion mitigation or scheduling of infrastructure development in an area. However, a crucial challenge in traffic flow forecasting is the slow shifting in temporal peaks between daily and weekly cycles, resulting in the nonstationarity of the traffic flow signal and leading to difficulty in accurate forecasting. To address this challenge, we propose a slow shifting concerned machine learning method for traffic flow forecasting, which includes two parts. First, we take advantage of Empirical Mode Decomposition as the feature engineering to alleviate the nonstationarity of traffic flow data, yielding a series of stationary components. Second, due to the superiority of Long-Short-Term-Memory networks in capturing temporal features, an advanced traffic flow forecasting model is developed by taking the stationary components as inputs. Finally, we apply this method on a benchmark of real-world data and provide a comparison with other existing methods. Our proposed method outperforms the state-of-art results by 14.55% and 62.56% using the metrics of root mean squared error and mean absolute percentage error, respectively.
I. INTRODUCTION
With the recent improvement of technologies in sensing as well as data analysis, there is an increasing number of real-world applications, one of which is the field of smart mobility. Most humans need to travel from place to place daily, whether for work commute or leisure. To accommodate and ensure smoothness of travel, the local governments would need to have a clearer picture of the travel patterns of the general population. If the traffic flow at a certain location over different times of day can be predicted, then authorities can take measures against events such as congestion or traffic jams, or even schedule road works or expansions to adapt to the traffic flow in the relevant regions.
With a large amount of data currently available to be gathered for traffic flow, it is feasible to examine the use of advanced machine learning algorithms that can be used to make sense of such data. Various machine learning algorithms have been reported in traffic flow forecasting. However, the most common choice of recurrent neural network (RNN) used in the field of traffic flow forecasting is the Long-Short Term Memory (LSTM) network according to Tedjopurnomo et al. [1]. Literature using LSTM networks makes up 18 out of the 37 surveyed works using RNN in their paper. LSTM has also been extensively used in other fields such as indoor temperature modeling [2], charge estimation for battery health [3], forecasting tourism demand [4], as well as for prediction of the next location of a vehicle's trajectory [5]. An additional advantage of LSTM is that it is easily transferable [6].
Works using LSTM in traffic flow forecasting include the work by Zou et al. [7] where they adopted a simple LSTM model to predict the citywide traffic flows. Luo et al. [8] use a combination of k-nearest neighbors (KNN) and LSTM in their work to predict the traffic flow of roads at certain detector stations located along the roads of interest. They applied KNN to select a fixed number of nearest stations that are closely related to the target station. Using the data of those related stations in their model, the prediction accuracy is improved. Shin et al. [9] and Tian et al. [10] made use of the patterns found in missing data as part of their application of LSTM in traffic prediction. LSTM has also been used in conjunction with other neural networks for improved forecasting performance, such as with graph neural networks by Lu et al. [11] and with convolutional neural networks by Liu et al. [12] and Wang et al. [13].
In [14], LSTM and a spatial convolutional neural network were used to harness the additional information in the spatial dimension and improve the prediction performance. They also made use of attention mechanisms to capture temporal shifting in peaks from day to day and week to week. Attention mechanisms have also been used in works such as those by Zhao et al. [15], Yang et al. [16], and Guo et al. [17] in conjunction with LSTMs for improvement in prediction results. Although the traffic forecasting algorithms proposed in the aforementioned works perform well, they mostly include other external data sources to aid their prediction, such as spatial data and weather data. However, this kind of external information may be difficult to obtain. Our target is to come up with a traffic flow forecasting method that does not rely on external data, yet has comparable results to such methods.
To improve the performance of classical LSTM, feature engineering can also be applied. There are various forms of feature engineering such as calculating a meaningful index [18], using canonical variate analysis [19], or decomposition of the raw signal [20]. To determine the most appropriate feature engineering method, we examine the properties of the relevant data. Daily traffic flow data roughly approximates a periodic nature but is not exactly periodic, due to the slow shifting of temporal peaks in daily and weekly cycles. The challenge of this slow shifting nature is present in other fields as well [21]. To address this slow shifting nature, we decided to explore the usage of Empirical Mode Decomposition (EMD). EMD is a method for finding an intuitive representation of the different frequency components within complicated and dynamic signals. In the case of representing the daily variations in traffic flow, EMD is more advantageous over the commonly used Fourier transform [22] as traffic flow is not perfectly periodic. There can be general daily patterns such as 'peak in the morning, valley in the evening' but these peaks and valleys do not necessarily occur at the same timing of every day or even the same day of every week. Representing these dynamic signals with the Fourier transform may require a large number of different frequencies of sinusoidal basis functions. With EMD, the fluctuations within a time series are automatically and adaptively selected from the time series itself, without requiring a predetermined set of mathematical functions.
EMD has also been used to improve results in several time series prediction applications. Tian et al. used it in their work [23] to extract Intrinsic Mode Functions (IMFs) from simulated network traffic flows and combine them with the Autoregressive Moving Average (ARMA) algorithm for prediction in the field of Internet of Vehicles. In the field of hydrology, it has been used by Agana and Homaifar [24] with deep belief networks for drought forecasting. For traffic flow forecasting, Feng et al. [25] used it in conjunction with wavelet neural networks, while Chen et al. [26] used it as part of an ensemble framework that includes deep learning. However, in these cases, they have mainly utilized EMD as a means of noise removal rather than as a feature extraction method. In the case of the work by Wang et al. [27], EMD was used to decompose the original time series signal and used multiple AutoRegressive Integrated Moving Average (ARIMA) to forecast each decomposed IMF individually. Other works including those by Hao et al [28] and Chen et al [29] use LSTM to separately forecast the decomposed IMFs, then combined the forecasting results of all the IMFs to form the final result.
In the present research, many works make use of LSTM and EMD for time series prediction. However, to the best of the authors' knowledge, no existing work utilizes EMD as a feature extraction method, rather than a denoising method, and combines it with an LSTM model for the prediction of traffic flow. In this study, we propose a method named Slow-shifting Temporal Traffic flow Forecasting (STTF). STTF makes use of the IMFs obtained through the process of EMD to extract features from the traffic flow data at a given location for machine learning. These features are then fed into an LSTM model, and the output of the model is the forecast value of the traffic volume at the next time interval. This differs from previous works, which use LSTM to predict the next values of the individual IMFs and then combine those predicted values, whereas our model directly predicts the traffic flow value at the next interval. Our method aims to improve the prediction results for the traffic flow of the next time interval in terms of root-mean-squared error (RMSE) as well as mean absolute percentage error (MAPE). We show that our method gives comparable results to state-of-the-art technology while making use of only the temporal data, thus reducing the need for external sources of data.
The remainder of the paper is organized as follows: Section II describes the adopted dataset. Section III details our proposed methodology, the Slow-shifting Temporal Traffic flow Forecasting. The experimental results and comparison of Slow-Shift Temporal Prediction to other methods are shown in Section IV. Finally, Section V concludes the paper.
II. DATASET DESCRIPTION
To perform traffic flow forecasting, having a good source of mobility data is crucial. One viable source of mobility data for the prediction of traffic flow is taxis that are equipped with Global Positioning System (GPS) receivers. Such taxicabs generate large volumes of data daily, as noted in the work of Zheng et al. [30], in which they state that there are around 67,000 licensed taxicabs that generate over 1.2 million occupied trips per day in Beijing alone. If actionable mobility insights can be extracted from these large volumes of taxi data, it would save governing bodies time and resources which they would otherwise have to spend on installing specific infrastructure and collecting specific data.
For this study, we used the NYC taxi dataset, which was collected by the New York Taxi Limousine Company 1 and processed by Yao et al. [14]. This dataset represents the end flow volume at each 1km-by-1km square in their 10km-by-20km selected study area in 30min intervals over 60 days. The first 40 days of data were used as the training set, while the remaining 20 days were used as the testing set.
For our study, the time series data of the single 1km-by-1km square with the highest average traffic flow value was selected. To help the signal be more compatible with EMD, we took the difference between the mean of the signal and the original signal and used that as an input for EMD. The mean value of the signal was noted and added back to the predicted value afterward, before comparing it with the original labels of the testing dataset.
To prepare the data for the LSTM input, we used a sliding window to extract the values of two time steps before as well as the current time step as the features and the value of the next time step as the label. This means that for a time t, we have the values of all the separated IMFs at times t − 2, t − 1, and t as the features, and we want to predict the actual value of the traffic flow at time t + 1.
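A minimal sketch of this windowing step is given below, assuming the IMFs are stacked in a NumPy array of shape (number of IMFs, series length) and that the mean-subtracted traffic flow series is available as the label source; the function name and the lookback default are illustrative only, and adding the stored mean back to predictions is left out.

```python
import numpy as np

def make_windows(imfs, flow, lookback=3):
    """Build LSTM samples: IMF values at t-2, t-1, t as features, flow at t+1 as label.

    imfs: array of shape (n_imfs, T), the decomposed components of the series
    flow: array of shape (T,), the (mean-subtracted) traffic flow series
    Returns X of shape (T - lookback, lookback, n_imfs) and y of shape (T - lookback,).
    """
    n_imfs, T = imfs.shape
    X, y = [], []
    for t in range(lookback - 1, T - 1):
        X.append(imfs[:, t - lookback + 1 : t + 1].T)  # window of shape (lookback, n_imfs)
        y.append(flow[t + 1])                          # value at the next time interval
    return np.asarray(X), np.asarray(y)
```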
III. SLOW-SHIFTING TEMPORAL TRAFFIC FLOW FORECASTING
This section introduces our proposed methodology as well as a brief overview of the algorithms used. The overall structure of our data flow is illustrated in Fig. 1. To address the slow shifting nature of the traffic flow data, the time series data is firstly decomposed into its constituent Intrinsic Mode Functions (IMFs) using the process of EMD. The separated IMFs are then used as individual features and fed into an LSTM model with attention layers. For each sample, the features comprise the values of each IMF at that timestep, as well as the IMFs at the timestep before that, up to 2 prior timesteps in this study. Thus, we call our proposed method Slow-shifting Temporal Traffic flow Forecasting (STTF).
A. Feature Engineering
One of the main challenges in traffic flow forecasting is the slow shifting in the peak locations in the time signal of the traffic flow data. Traffic may peak at slightly different timings each day, and this slow shifting in peak timings leads to nonstationarity, which contributes to the difficulty of prediction. To address this, we choose to use EMD. EMD was proposed by Huang et al. [31] in order to separate the representations of different frequency components within a signal or time series.
EMD operates under the following assumptions:
• The data has at least two extreme values, one maximum, and one minimum.
• The local time-domain characteristics of the raw data are uniquely determined by the time scale between the extreme points.
• If there is no extreme point in the data but there is an inflection point, the result can be obtained by taking the derivative of the data one or more times to obtain the extreme value and then integrating it.
The different frequency components within a signal or time series are called Intrinsic Mode Functions (IMFs). Each IMF has to satisfy the following conditions:
• The total number of local extrema (local maxima and local minima) and the number of zero-crossings must either be equal or differ at most by one.
• At any point, the mean value of the envelope defined by the local maxima and the envelope defined by the local minima is near zero.
The EMD process of extracting IMFs from a time signal X(t) is as follows. Firstly, all the local extrema (maxima and minima) are located. Next, all the local maxima are connected by a cubic spline line to form an upper envelope e up (t). A similar process happens with all the local minima to obtain a lower envelope e lo (t). These two envelopes should cover all the data in between.
Next, a mean value m(t) is computed by taking the mean of these two envelopes.

m(t) = (e up (t) + e lo (t))/2    (1)
The first test component d(t) is then extracted as the difference between the original data X(t) and the mean of the envelopes m(t).
d(t) = X(t) − m(t)    (2)
At this point, the properties of d(t) are checked to determine if d(t) fulfills the conditions of an IMF. If it does, d(t) is taken to be an IMF, and the original signal X(t) is replaced by the residual r(t) = X(t) − d(t) for further computation of subsequent IMFs. If it does not, then X(t) is replaced directly by d(t) for further computation. This iterative process continues until the last residual function becomes a monotonic function or the number of extrema is less than or equal to one. This means that no more IMFs can be extracted.
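For concreteness, a bare-bones version of this sifting loop is sketched below. It uses cubic-spline envelopes and a fixed number of sifting passes instead of the full IMF acceptance test, and it omits end-point handling, so it is an illustration of the procedure rather than a production EMD implementation; in practice an established EMD package would be used.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def envelope_mean(x, t):
    """Mean of the upper and lower cubic-spline envelopes, Eq. (1); None if too few extrema."""
    max_idx = argrelextrema(x, np.greater)[0]
    min_idx = argrelextrema(x, np.less)[0]
    if len(max_idx) < 4 or len(min_idx) < 4:
        return None
    e_up = CubicSpline(t[max_idx], x[max_idx])(t)
    e_lo = CubicSpline(t[min_idx], x[min_idx])(t)
    return 0.5 * (e_up + e_lo)

def emd_sketch(signal, max_imfs=10, sift_passes=8):
    """Extract IMFs by repeated sifting until the residual is (near-)monotonic."""
    t = np.arange(len(signal), dtype=float)
    residual = np.asarray(signal, dtype=float).copy()
    imfs = []
    for _ in range(max_imfs):
        if envelope_mean(residual, t) is None:   # too few extrema left: stop
            break
        d = residual.copy()
        for _ in range(sift_passes):             # simplified sifting, Eq. (2) repeated
            m = envelope_mean(d, t)
            if m is None:
                break
            d = d - m
        imfs.append(d)
        residual = residual - d
    return imfs, residual
```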
A plot of IMFs that were extracted via EMD is shown in Fig. 2. The signal in black at the top is the sum of all the IMFs, which ideally would reproduce the original signal. The individual component IMFs are plotted in the rows below from highest to lowest frequency. Fig. 3. Diagram of an LSTM unit.
B. Development of the Forecasting Model with LSTM
After obtaining IMFs as features, the next step is to develop a predictive learning model. LSTM is one such predictive model that is particularly suited to time-series predictions. The LSTM was proposed by Hochreiter and Schmidhuber [32] to improve the long-term temporal prediction ability from a basic RNN. The LSTM targets the problem of vanishing gradients for long-term predictions in RNNs by allowing the 'forgetting' or ignoring of data that is not useful for the prediction in the network.
A diagram of an LSTM unit is shown in Fig. 3. The LSTM unit has three gates -a forget gate, an input gate, and an output gate, denoted by the f , i, and o, respectively, in the following equations. W and b represent weights and biases, respectively. σ represents a sigmoid function and the subscript t denotes the value at time t.
Firstly, the forget gate decides which parts of the previous cell state to forget by computing f t .
f t = σ(W f · [h t−1 , x t ] + b f )    (3)
The new information from the current sample to store in the cell state is decided by the input state i t .
i t = σ(W i · [h t−1 , x t ] + b i )    (4)
Next, a vector of new candidate values for the cell state, C t , is created.

C t = tanh(W C · [h t−1 , x t ] + b C )    (5)
The cell state C t is then updated with the sum of the element-wise product of f t with the previous cell state C t−1 and the element-wise product of i t with the candidate cell state vector C t .

C t = f t * C t−1 + i t * C t    (6)

The output gate o t decides which parts of the information are output.

o t = σ(W o · [h t−1 , x t ] + b o )    (7)
Lastly, the hidden state is updated with the element-wise product of o t with the tanh of the cell state C t .
h t = o t * tanh(C t )    (8)
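The gate equations (3)-(8) can be written out directly; the small NumPy sketch below does exactly that for a single time step, with the weights stored per gate and acting on the concatenation [h_{t-1}, x_t]. The dictionary layout and function names are illustrative, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM update following Eqs. (3)-(8); W['f'] etc. have shape (hidden, hidden + inputs)."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W['f'] @ z + b['f'])       # forget gate, Eq. (3)
    i_t = sigmoid(W['i'] @ z + b['i'])       # input gate, Eq. (4)
    c_hat = np.tanh(W['c'] @ z + b['c'])     # candidate cell state, Eq. (5)
    c_t = f_t * c_prev + i_t * c_hat         # cell state update, Eq. (6)
    o_t = sigmoid(W['o'] @ z + b['o'])       # output gate, Eq. (7)
    h_t = o_t * np.tanh(c_t)                 # hidden state, Eq. (8)
    return h_t, c_t
```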
For this study, we used a sequential model with two LSTM layers and an attention layer after each LSTM layer, followed by a densely connected layer and an output layer. The number of neurons in each of the LSTM and dense layers was selected from a set of candidate values. We found that ten neurons in each LSTM layer, as well as in the densely connected layer, gave the best result.
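A possible Keras rendering of this architecture is sketched below. The paper does not specify the exact attention formulation, so the TemporalAttention layer here (a learned softmax weighting over time steps, with pooling only after the second LSTM) is one plausible simplification; the function names, the number of IMFs, and the training settings are assumptions for the example.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

class TemporalAttention(layers.Layer):
    """Scores each time step and re-weights the sequence; optionally pools it to a vector."""
    def __init__(self, pool=False, **kwargs):
        super().__init__(**kwargs)
        self.pool = pool
        self.score = layers.Dense(1)

    def call(self, x):                                # x: (batch, timesteps, features)
        w = tf.nn.softmax(self.score(x), axis=1)      # attention weights over time
        return tf.reduce_sum(w * x, axis=1) if self.pool else w * x

def build_sttf_model(n_timesteps=3, n_imfs=8):
    inp = layers.Input(shape=(n_timesteps, n_imfs))
    x = layers.LSTM(10, return_sequences=True)(inp)   # first LSTM layer (10 neurons)
    x = TemporalAttention()(x)                        # attention after the first LSTM
    x = layers.LSTM(10, return_sequences=True)(x)     # second LSTM layer
    x = TemporalAttention(pool=True)(x)               # attention after the second LSTM
    x = layers.Dense(10, activation="relu")(x)        # densely connected layer
    out = layers.Dense(1)(x)                          # forecast for the next interval
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model
```

In use, the X and y arrays from the windowing sketch above would be passed to model.fit, and the stored mean of the original series added back to the model's predictions before evaluation.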
IV. EXPERIMENTAL RESULTS AND DISCUSSION
This section presents the results of performing STTF on our test dataset as well as the results of comparison against previous time series methods and state-of-the-art methods.
For this experiment, the time series data from the end flow of the grid square with the highest average traffic flow was used. STTF and each of the other methods were used to forecast the values of each subsequent timestep for the length of the testing data. A visual prediction comparison between the proposed STTF and pure LSTM is shown in Fig. 4. From Fig. 4, it can be observed that although the pure LSTM can detect the locations of the fluctuations reasonably well, its predictions still have a larger margin of error as compared to STTF.
For numerical comparison of model performance, the metrics root-mean-squared-error (RMSE) and mean absolute percentage error (MAPE) are selected. The definitions of RMSE and MAPE are shown in (9) and (10), respectively. In these equations, i refers to the time step of the prediction, n is the total number of predictions made, y true,i is the true value at the time step i, and y pred,i is the predicted value at the time step i.
RMSE = √( (1/n) Σ_{i=1}^{n} (y_true,i − y_pred,i)^2 )    (9)

MAPE = (1/n) Σ_{i=1}^{n} |(y_true,i − y_pred,i) / y_true,i| × 100    (10)
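Equivalently, in NumPy (note that MAPE as defined requires the true values to be non-zero):

```python
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))            # Eq. (9)

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100   # Eq. (10)
```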
For both of these metrics, a lower value indicates better performance as the error is lower. We perform a comparison with the state-of-the-art STDN model, against pure LSTM without the EMD portion, as well as two well-known time series prediction algorithms, ARMA and ARIMA [33]. The respective RMSE and MAPE values for each model type are shown in Table II. We can observe that the proposed STTF model outperforms both the STDN as well as the pure LSTM model. The pure LSTM model may not have been able to capture the full extent of the peaks and valleys, thus resulting in a higher RMSE value. The ARMA and ARIMA models performed relatively poorly, which may have been caused by the predictions flatlining due to an extended prediction horizon.
V. CONCLUSION
In this paper, we have proposed a method for traffic flow forecasting called Slow-shifting Temporal Traffic flow Forecasting that aims to address the challenge of nonstationarity in the traffic flow time signal. We incorporate EMD as a means of feature extraction by extracting the IMFs of the traffic flow data to serve as feature inputs of each sample. LSTM with attention layers is then used as a prediction algorithm. The proposed method was evaluated on a benchmark dataset and performed better in terms of RMSE and MAPE as compared to the state-of-the-art and some other common algorithms. The proposed method can be applied to time series data from different locations and at different granularities.
Fig. 1. The overall structure of the proposed Slow-shifting Temporal Traffic flow Forecasting.
Fig. 2. Decomposition of a sample time signal from the data using EMD. Summed IMFs in the top row in black correspond to the original signal.
Fig. 4. Forecasting trend comparison between the proposed method and LSTM using the data from time intervals 565 to 665.
TABLE I: DETAILS OF DATASET USED IN THIS STUDY
Dataset Characteristics | Value
Time span (DD/MM/YYYY) | 01/01/2015 - 01/03/2015
Time interval | 30 min
Range of traffic flow values per interval (min - max) | 0 - 954
Median traffic flow value | 431
TABLE II: COMPARISON BETWEEN THE PROPOSED METHOD AND ITS COUNTERPARTS REGARDING PREDICTION ACCURACY
Model | RMSE | MAPE (%)
ARMA | 315.19 | 73.51
ARIMA | 187.29 | 96.42
LSTM | 57.53 | 15.13
STDN [14] | 19.05 | 15.60
STTF (Proposed) | 16.25 | 5.84
https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page
This research is supported by A*STAR under its RIE2020 Advanced Manufacturing and Engineering (AME) Industry Alignment Fund -Pre Positioning (IAF-PP) (Grant No. A19D6a0053). Any opinions, findings and conclusions, or recommendations expressed in this material are those of the author(s) and do not reflect the views of A*STAR.
[1] D. A. Tedjopurnomo, Z. Bao, B. Zheng, F. Choudhury, and A. K. Qin, "A survey on modern deep neural network for traffic prediction: Trends, methods and challenges," IEEE Transactions on Knowledge and Data Engineering, 2020.
[2] F. Elmaz, R. Eyckerman, W. Casteels, S. Latré, and P. Hellinckx, "Cnn-lstm architecture for predictive indoor temperature modeling," Building and Environment, vol. 206, p. 108327, 2021.
[3] Y. Qin, S. Adams, and C. Yuen, "Transfer learning-based state of charge estimation for lithium-ion battery at varying ambient temperatures," IEEE Transactions on Industrial Informatics, vol. 17, no. 11, pp. 7304-7315, 2021.
[4] K. He, L. Ji, C. W. D. Wu, and K. F. G. Tso, "Using sarima-cnn-lstm approach to forecast daily tourism demand," Journal of Hospitality and Tourism Management, vol. 49, pp. 25-33, 2021.
[5] Y. Qin, Y. L. Guan, and C. Yuen, "Spatiotemporal capsule neural network for vehicle trajectory prediction," IEEE Transactions on Vehicular Technology, 2023.
[6] Y. Qin, C. Yuen, X. Yin, and B. Huang, "A transferable multi-stage model with cycling discrepancy learning for lithium-ion battery state of health estimation," IEEE Transactions on Industrial Informatics, 2022, in press.
[7] Z. Zou, P. Gao, and C. Yao, "City-level traffic flow prediction via lstm networks," in Proceedings of the 2nd International Conference on Advances in Image Processing, 2018, pp. 149-153.
[8] X. Luo, D. Li, Y. Yang, and S. Zhang, "Spatiotemporal traffic flow prediction with KNN and LSTM," Journal of Advanced Transportation, vol. 2019, 2019.
[9] D.-H. Shin, K. Chung, and R. C. Park, "Prediction of traffic congestion based on lstm through correction of missing temporal and spatial data," IEEE Access, vol. 8, pp. 150784-150796, 2020.
[10] Y. Tian, K. Zhang, J. Li, X. Lin, and B. Yang, "Lstm-based traffic flow prediction with missing data," Neurocomputing, vol. 318, pp. 297-305, 2018.
[11] Z. Lu, W. Lv, Y. Cao, Z. Xie, H. Peng, and B. Du, "Lstm variants meet graph neural networks for road speed prediction," Neurocomputing, vol. 400, pp. 34-45, 2020.
[12] Y. Liu, H. Zheng, X. Feng, and Z. Chen, "Short-term traffic flow prediction with conv-lstm," in 2017 9th International Conference on Wireless Communications and Signal Processing (WCSP). IEEE, 2017, pp. 1-6.
[13] J. Wang, W. Zhu, Y. Sun, and C. Tian, "An effective dynamic spatiotemporal framework with external features information for traffic prediction," Applied Intelligence, vol. 51, pp. 3159-3173, 2021.
[14] H. Yao, X. Tang, H. Wei, G. Zheng, and Z. Li, "Revisiting spatial-temporal similarity: A deep learning framework for traffic prediction," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, 2019, pp. 5668-5675.
[15] H. Zhao, H. Yang, Y. Wang, D. Wang, and R. Su, "Attention based graph bi-lstm networks for traffic forecasting," in 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2020, pp. 1-6.
[16] B. Yang, S. Sun, J. Li, X. Lin, and Y. Tian, "Traffic flow prediction using lstm with feature enhancement," Neurocomputing, vol. 332, pp. 320-327, 2019.
[17] J. Guo, K. Tian, K. Ye, and C.-Z. Xu, "Ma-lstm: A multi-attention based lstm for complex pattern extraction," in 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021, pp. 3605-3611.
[18] H. Yi, K.-H. N. Bui, and H. Jung, "Implementing a deep learning framework for short term traffic flow prediction," in Proceedings of the 9th International Conference on Web Intelligence, Mining and Semantics, 2019, pp. 1-8.
[19] K. Q. Zhou, Y. Qin, and C. Yuen, "Transfer-learning-based state-of-health estimation for lithium-ion battery with cycle synchronization," IEEE/ASME Transactions on Mechatronics, 2022, in press.
[20] J. Liu, N. Wu, Y. Qiao, and Z. Li, "Short-term traffic flow forecasting using ensemble approach based on deep belief networks," IEEE Transactions on Intelligent Transportation Systems, 2020.
[21] Y. Qin, C. Yuen, Y. Shao, B. Qin, and X. Li, "Slow-varying dynamics-assisted temporal capsule network for machinery remaining useful life estimation," IEEE Transactions on Cybernetics, 2022, in press.
[22] R. N. Bracewell, The Fourier transform and its applications. McGraw-Hill, New York, 1986.
[23] M. Tian, C. Sun, and S. Wu, "An EMD and ARMA-based network traffic prediction approach in sdn-based internet of vehicles," Wireless Networks, pp. 1-13, 2021.
[24] N. A. Agana and A. Homaifar, "EMD-based predictive deep belief network for time series prediction: an application to drought forecasting," Hydrology, vol. 5, no. 1, p. 18, 2018.
[25] T. Feng, X. Wang, and Y. He, "Prediction of traffic flow based on the EMD and wavelet neural network," in 2015 2nd International Conference on Electrical, Computer Engineering and Electronics. Atlantis Press, 2015, pp. 933-938.
[26] X. Chen, H. Chen, Y. Yang, H. Wu, W. Zhang, J. Zhao, and Y. Xiong, "Traffic flow prediction by an ensemble framework with data denoising and deep learning model," Physica A: Statistical Mechanics and Its Applications, vol. 565, p. 125574, 2021.
[27] H. Wang, L. Liu, S. Dong, Z. Qian, and H. Wei, "A novel work zone short-term vehicle-type specific traffic speed prediction model through the hybrid emd-arima framework," Transportmetrica B: Transport Dynamics, vol. 4, no. 3, pp. 159-186, 2016.
[28] W. Hao, X. Sun, C. Wang, H. Chen, and L. Huang, "A hybrid emd-lstm model for non-stationary wave prediction in offshore china," Ocean Engineering, vol. 246, p. 110566, 2022.
[29] L. Chen, Y. Chi, Y. Guan, and J. Fan, "A hybrid attention-based emd-lstm model for financial time series prediction," in 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD). IEEE, 2019, pp. 113-118.
[30] Y. Zheng, Y. Liu, J. Yuan, and X. Xie, "Urban computing with taxicabs," in Proceedings of the 13th International Conference on Ubiquitous Computing, 2011, pp. 89-98.
[31] N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N.-C. Yen, C. C. Tung, and H. H. Liu, "The empirical mode decomposition and the hilbert spectrum for nonlinear and non-stationary time series analysis," Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, vol. 454, no. 1971, pp. 903-995, 1998.
[32] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[33] G. E. Box, G. M. Jenkins, G. C. Reinsel, and G. M. Ljung, Time series analysis: forecasting and control. John Wiley & Sons, 2015.
| []
|
[
"Study of the inner dust envelope and stellar photosphere of the AGB star R Doradus using SPHERE/ZIMPOL",
"Study of the inner dust envelope and stellar photosphere of the AGB star R Doradus using SPHERE/ZIMPOL"
]
| [
"T Khouri \nDepartment of Earth and Space Sciences\nChalmers University of Technology\nOnsala Space Observatory\n439 92OnsalaSweden\n\nAstronomical Institute \"Anton Pannekoek\"\nUniversity of Amsterdam\nPO Box 942491090 GEAmsterdamThe Netherlands\n",
"M Maercker \nDepartment of Earth and Space Sciences\nChalmers University of Technology\nOnsala Space Observatory\n439 92OnsalaSweden\n",
"L B F M Waters \nAstronomical Institute \"Anton Pannekoek\"\nUniversity of Amsterdam\nPO Box 942491090 GEAmsterdamThe Netherlands\n\nSRON Netherlands Institute for Space Research\nSorbonnelaan 23584 CAUtrechtThe Netherlands\n",
"W H T Vlemmings \nDepartment of Earth and Space Sciences\nChalmers University of Technology\nOnsala Space Observatory\n439 92OnsalaSweden\n",
"P Kervella \nDepartamento de Astronomía\nCNRS UMI 3386)\nUnidad Mixta Internacional Franco-Chilena de Astronomía\nUniversidad de Chile\nCamino El Observatorio 1515\n\nLas Condes\nSantiagoChile\n\nLESIA (UMR 8109\nObservatoire de Paris\nPSL\nCNRS\nUPMC\nUniv. Paris-Diderot\n5 place Jules Janssen92195MeudonFrance\n",
"A De Koter \nAstronomical Institute \"Anton Pannekoek\"\nUniversity of Amsterdam\nPO Box 942491090 GEAmsterdamThe Netherlands\n",
"C Ginski \nSterrewacht Leiden\nNiels Bohrweg 2P.O. Box 95132300RALeidenThe Netherlands\n",
"E De Beck \nDepartment of Earth and Space Sciences\nChalmers University of Technology\nOnsala Space Observatory\n439 92OnsalaSweden\n",
"L Decin \nInstituut voor Sterrenkunde\nCelestijnenlaan200D B-2401, 3001Leuven, LeuvenKUBelgium\n",
"M Min \nAstronomical Institute \"Anton Pannekoek\"\nUniversity of Amsterdam\nPO Box 942491090 GEAmsterdamThe Netherlands\n\nSRON Netherlands Institute for Space Research\nSorbonnelaan 23584 CAUtrechtThe Netherlands\n",
"C Dominik \nAstronomical Institute \"Anton Pannekoek\"\nUniversity of Amsterdam\nPO Box 942491090 GEAmsterdamThe Netherlands\n",
"E O'gorman \nDepartment of Earth and Space Sciences\nChalmers University of Technology\nOnsala Space Observatory\n439 92OnsalaSweden\n",
"H.-M Schmid \nInstitute for Astronomy\nETH Zurich\n8093ZurichSwitzerland\n",
"R Lombaert \nDepartment of Earth and Space Sciences\nChalmers University of Technology\nOnsala Space Observatory\n439 92OnsalaSweden\n",
"E Lagadec \nLaboratoire Lagrange\nUniversité Côte d'Azur\nObservatoire de la Côte d'Azur\nCNRS\nBlvd de l'Observatoire34229, 06304Nice cedex 4CSFrance\n"
]
| [
"Department of Earth and Space Sciences\nChalmers University of Technology\nOnsala Space Observatory\n439 92OnsalaSweden",
"Astronomical Institute \"Anton Pannekoek\"\nUniversity of Amsterdam\nPO Box 942491090 GEAmsterdamThe Netherlands",
"Department of Earth and Space Sciences\nChalmers University of Technology\nOnsala Space Observatory\n439 92OnsalaSweden",
"Astronomical Institute \"Anton Pannekoek\"\nUniversity of Amsterdam\nPO Box 942491090 GEAmsterdamThe Netherlands",
"SRON Netherlands Institute for Space Research\nSorbonnelaan 23584 CAUtrechtThe Netherlands",
"Department of Earth and Space Sciences\nChalmers University of Technology\nOnsala Space Observatory\n439 92OnsalaSweden",
"Departamento de Astronomía\nCNRS UMI 3386)\nUnidad Mixta Internacional Franco-Chilena de Astronomía\nUniversidad de Chile\nCamino El Observatorio 1515",
"Las Condes\nSantiagoChile",
"LESIA (UMR 8109\nObservatoire de Paris\nPSL\nCNRS\nUPMC\nUniv. Paris-Diderot\n5 place Jules Janssen92195MeudonFrance",
"Astronomical Institute \"Anton Pannekoek\"\nUniversity of Amsterdam\nPO Box 942491090 GEAmsterdamThe Netherlands",
"Sterrewacht Leiden\nNiels Bohrweg 2P.O. Box 95132300RALeidenThe Netherlands",
"Department of Earth and Space Sciences\nChalmers University of Technology\nOnsala Space Observatory\n439 92OnsalaSweden",
"Instituut voor Sterrenkunde\nCelestijnenlaan200D B-2401, 3001Leuven, LeuvenKUBelgium",
"Astronomical Institute \"Anton Pannekoek\"\nUniversity of Amsterdam\nPO Box 942491090 GEAmsterdamThe Netherlands",
"SRON Netherlands Institute for Space Research\nSorbonnelaan 23584 CAUtrechtThe Netherlands",
"Astronomical Institute \"Anton Pannekoek\"\nUniversity of Amsterdam\nPO Box 942491090 GEAmsterdamThe Netherlands",
"Department of Earth and Space Sciences\nChalmers University of Technology\nOnsala Space Observatory\n439 92OnsalaSweden",
"Institute for Astronomy\nETH Zurich\n8093ZurichSwitzerland",
"Department of Earth and Space Sciences\nChalmers University of Technology\nOnsala Space Observatory\n439 92OnsalaSweden",
"Laboratoire Lagrange\nUniversité Côte d'Azur\nObservatoire de la Côte d'Azur\nCNRS\nBlvd de l'Observatoire34229, 06304Nice cedex 4CSFrance"
]
| []
| On the asymptotic giant branch (AGB) low-and intermediate-mass stars eject a large fraction of their envelope, but the mechanism driving these outflows is still poorly understood. For oxygen-rich AGB stars, the wind is thought to be driven by radiation pressure caused by scattering of radiation off dust grains. We use high-angular-resolution images obtained with SPHERE/ZIMPOL to study the photosphere, the warm molecular layer, and the inner wind of the close-by oxygen-rich AGB star R Doradus and its inner envelope. We present observations in filters V, cntHα, and cnt820 and investigate the surface brightness distribution of the star and of the polarised light produced in the inner envelope. Thanks to second-epoch observations in cntHα, we are able to see variability on the stellar photosphere. We study the polarised-light data using a continuum-radiative-transfer code that accounts for direction-dependent scattering of photons off dust grains. We find that in the first epoch the surface brightness of R Dor is asymmetric in V and cntHα, the filters where molecular opacity is stronger, while in cnt820 the surface brightness is closer to being axisymmetric. The second-epoch observations in cntHα show that the morphology of R Dor has changed completely in a timespan of 48 days to a more axisymmetric and compact configuration. This variable morphology is probably linked to changes in the opacity provided by TiO molecules in the extended atmosphere. The observations show polarised light coming from a region around the central star. The inner radius of the region from where polarised light is seen varies only by a small amount with azimuth. The value of the polarised intensity, however, varies by between a factor of 2.3 and 3.7 with azimuth for the different images. We fit the radial profile of the polarised intensity using a spherically symmetric model and a parametric description of the dust density profile, ρ(r) = ρ • r −n . On average, we find exponents of −4.5 ± 0.5 that correspond to a much steeper density profile than that of a wind expanding at constant velocity. The dust densities we derive imply an upper limit for the dust-to-gas ratio of ∼ 2 × 10 −4 at 5.0 R . Considering all the uncertainties in observations and models, this value is consistent with the minimum values required by wind-driving models for the onset of a wind, of ∼ 3.3 × 10 −4 . However, if the steep density profile we find extends to larger distances from the star, the dust-to-gas ratio will quickly become too small for the wind of R Dor to be driven by the grains that produce the scattered light. | 10.1051/0004-6361/201628435 | [
"https://arxiv.org/pdf/1605.05504v1.pdf"
]
| 53,050,398 | 1605.05504 | 02188b5ba1bd9ad0b07a0764670f3322149abf1f |
Study of the inner dust envelope and stellar photosphere of the AGB star R Doradus using SPHERE/ZIMPOL
May 19, 2016
T Khouri
Department of Earth and Space Sciences
Chalmers University of Technology
Onsala Space Observatory
439 92OnsalaSweden
Astronomical Institute "Anton Pannekoek"
University of Amsterdam
PO Box 942491090 GEAmsterdamThe Netherlands
M Maercker
Department of Earth and Space Sciences
Chalmers University of Technology
Onsala Space Observatory
439 92OnsalaSweden
L B F M Waters
Astronomical Institute "Anton Pannekoek"
University of Amsterdam
PO Box 942491090 GEAmsterdamThe Netherlands
SRON Netherlands Institute for Space Research
Sorbonnelaan 23584 CAUtrechtThe Netherlands
W H T Vlemmings
Department of Earth and Space Sciences
Chalmers University of Technology
Onsala Space Observatory
439 92OnsalaSweden
P Kervella
Departamento de Astronomía
CNRS UMI 3386)
Unidad Mixta Internacional Franco-Chilena de Astronomía
Universidad de Chile
Camino El Observatorio 1515
Las Condes
SantiagoChile
LESIA (UMR 8109
Observatoire de Paris
PSL
CNRS
UPMC
Univ. Paris-Diderot
5 place Jules Janssen92195MeudonFrance
A De Koter
Astronomical Institute "Anton Pannekoek"
University of Amsterdam
PO Box 942491090 GEAmsterdamThe Netherlands
C Ginski
Sterrewacht Leiden
Niels Bohrweg 2P.O. Box 95132300RALeidenThe Netherlands
E De Beck
Department of Earth and Space Sciences
Chalmers University of Technology
Onsala Space Observatory
439 92OnsalaSweden
L Decin
Instituut voor Sterrenkunde
Celestijnenlaan200D B-2401, 3001Leuven, LeuvenKUBelgium
M Min
Astronomical Institute "Anton Pannekoek"
University of Amsterdam
PO Box 942491090 GEAmsterdamThe Netherlands
SRON Netherlands Institute for Space Research
Sorbonnelaan 23584 CAUtrechtThe Netherlands
C Dominik
Astronomical Institute "Anton Pannekoek"
University of Amsterdam
PO Box 942491090 GEAmsterdamThe Netherlands
E O'gorman
Department of Earth and Space Sciences
Chalmers University of Technology
Onsala Space Observatory
439 92OnsalaSweden
H.-M Schmid
Institute for Astronomy
ETH Zurich
8093ZurichSwitzerland
R Lombaert
Department of Earth and Space Sciences
Chalmers University of Technology
Onsala Space Observatory
439 92OnsalaSweden
E Lagadec
Laboratoire Lagrange
Université Côte d'Azur
Observatoire de la Côte d'Azur
CNRS
Blvd de l'Observatoire34229, 06304Nice cedex 4CSFrance
On the asymptotic giant branch (AGB) low-and intermediate-mass stars eject a large fraction of their envelope, but the mechanism driving these outflows is still poorly understood. For oxygen-rich AGB stars, the wind is thought to be driven by radiation pressure caused by scattering of radiation off dust grains. We use high-angular-resolution images obtained with SPHERE/ZIMPOL to study the photosphere, the warm molecular layer, and the inner wind of the close-by oxygen-rich AGB star R Doradus and its inner envelope. We present observations in filters V, cntHα, and cnt820 and investigate the surface brightness distribution of the star and of the polarised light produced in the inner envelope. Thanks to second-epoch observations in cntHα, we are able to see variability on the stellar photosphere. We study the polarised-light data using a continuum-radiative-transfer code that accounts for direction-dependent scattering of photons off dust grains. We find that in the first epoch the surface brightness of R Dor is asymmetric in V and cntHα, the filters where molecular opacity is stronger, while in cnt820 the surface brightness is closer to being axisymmetric. The second-epoch observations in cntHα show that the morphology of R Dor has changed completely in a timespan of 48 days to a more axisymmetric and compact configuration. This variable morphology is probably linked to changes in the opacity provided by TiO molecules in the extended atmosphere. The observations show polarised light coming from a region around the central star. The inner radius of the region from where polarised light is seen varies only by a small amount with azimuth. The value of the polarised intensity, however, varies by between a factor of 2.3 and 3.7 with azimuth for the different images. We fit the radial profile of the polarised intensity using a spherically symmetric model and a parametric description of the dust density profile, ρ(r) = ρ • r −n . On average, we find exponents of −4.5 ± 0.5 that correspond to a much steeper density profile than that of a wind expanding at constant velocity. The dust densities we derive imply an upper limit for the dust-to-gas ratio of ∼ 2 × 10 −4 at 5.0 R . Considering all the uncertainties in observations and models, this value is consistent with the minimum values required by wind-driving models for the onset of a wind, of ∼ 3.3 × 10 −4 . However, if the steep density profile we find extends to larger distances from the star, the dust-to-gas ratio will quickly become too small for the wind of R Dor to be driven by the grains that produce the scattered light.
Introduction
The asymptotic giant branch (AGB) is one of the final stages of the evolution of low- and intermediate-mass stars, when a slow and dense outflow develops (Habing & Olofsson 2003). The wind is thought to be driven by radiation pressure on dust grains that can only form because pulsations enhance the density scale-height of the stellar atmospheres. For oxygen-rich AGB stars (where the carbon-to-oxygen ratio is lower than one), it has been proposed that large, translucent dust grains provide the required opacity and drive the wind through scattering of photons (Höfner 2008). However, many intricacies of the formation and processing of the oxygen-rich dust grains remain poorly constrained from observations. The translucent nature of these grains implies that they are not expected to produce significant infrared emission and, hence, most likely cannot be identified from infrared spectra. The best way to study such grains is through the scattered stellar light they are expected to produce.
To advance our understanding of the AGB mass loss, we use high-angular-resolution observations acquired with SPHERE/ZIMPOL (Beuzit et al. 2008) on the Very Large Telescope (VLT) to investigate light polarised through scattering off dust grains in the inner wind of the AGB star R Doradus. This oxygen-rich star with spectral type M8 has a large angular diameter in the sky (≈ 57 mas) and a relatively low mass-loss rate, between 0.9 and 2.0 × 10 −7 M yr −1 (Olofsson et al. 2002; Maercker et al. 2008; Khouri 2014). Its pulsation properties switch between one mode with a period of 332 days and ∆V = 1.5 mag, and another with a period of 175 days and ∆V < 1 mag (Bedding et al. 1998). Polarised light from a region very close to the star (≈ 1.5 R ) has been recently detected using NACO (Norris et al. 2012). The authors found that a model with grain radii of 0.3 µm and an inner radius of the dust envelope of 43.3 mas gives the best fit to the data.
Observations
2.1. Data acquisition and data reduction

R Dor was observed with ZIMPOL during the SPHERE science verification time using three filters: V, cntHα, and cnt820. The observations were taken in two epochs, 10 and 11 December 2014 (V, cntHα, and cnt820) and 28 January 2015 (cntHα). The total integration times for each filter and epoch are given in Table 1. Observations using filter cnt820 were done both with and without a neutral density (ND) filter. In the images obtained without the ND filter, the star saturated the detector. When the first-epoch cntHα images were taken the seeing was too high (> 1.2″) for optimal behaviour of the instrument, and therefore these data have to be interpreted with care.
The observations of R Dor resulted in individual data cubes, containing the frames recorded with the two cameras of the instrument (both equipped with the same filter). We processed these cubes individually using the data reduction pipeline of the instrument, in its pre-release version 0.14.0 (1). Each cube produced Stokes +Q, −Q, +U and −U frames for the two cameras, together with intensity frames I Q and I U . We then aligned and de-rotated the resulting average frames using custom python routines. We adopt a pixel scale of 3.602 ± 0.011 mas pix −1 and a position angle of the vertical axis with respect to North of 357.95 ± 0.55 deg (Ch. Ginski, in prep.). Recently, new direction-dependent corrections to the pixel scale of less than 0.5% have been determined (see Ginski et al., in prep.). Since these are much smaller than the uncertainties from choosing the central pixel (see Section 3.2) for the region we consider, we have not included them.
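To make these reduction steps concrete, the sketch below shows how Stokes frames of this kind are typically combined into polarised intensity, polarisation degree, and polarisation angle, and how a detector offset maps onto the sky with the quoted pixel scale and north offset. This is an illustrative outline only: the array names and the simple difference combination are assumptions, not the actual ESO pipeline or our custom routines.

import numpy as np

PIX_SCALE_MAS = 3.602      # mas per pixel (Section 2.1)
NORTH_PA_DEG = 357.95      # position angle of the vertical axis w.r.t. North

def combine_stokes(q_plus, q_minus, u_plus, u_minus, i_q, i_u):
    """Combine +/-Q and +/-U frames into Q, U, I and polarimetric maps."""
    Q = 0.5 * (q_plus - q_minus)
    U = 0.5 * (u_plus - u_minus)
    I = 0.5 * (i_q + i_u)
    pol_int = np.hypot(Q, U)                        # polarised intensity
    pol_deg = pol_int / I                           # polarisation degree
    pol_ang = 0.5 * np.degrees(np.arctan2(U, Q))    # polarisation angle [deg]
    return Q, U, I, pol_int, pol_deg, pol_ang

def detector_to_sky_offset(dx_pix, dy_pix):
    """Rotate a detector offset (pixels) to an on-sky offset (mas)."""
    theta = np.radians(NORTH_PA_DEG)
    dx, dy = dx_pix * PIX_SCALE_MAS, dy_pix * PIX_SCALE_MAS
    return (dx * np.cos(theta) - dy * np.sin(theta),
            dx * np.sin(theta) + dy * np.cos(theta))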
We deconvolved the total intensity images (only) using the Lucy-Richardson (L-R) algorithm implemented in the IRAF (2) software package. The point-spread function (PSF) reference images of ψ 2 Ori were taken on the night of 31 March 2015, under seeing conditions (σ = 0.9″) comparable to those of the R Dor observations. For the cnt820 filter, we adopted the PSF observation in the TiO717 filter (with λ • = 716.8 nm and ∆λ = 19.7 nm), as no other PSF observation in the cnt820 filter was available. We stopped the L-R deconvolution after 80 iterations, as the deconvolved images do not show a significant evolution in additional processing steps. Since the PSF images were not acquired simultaneously with the images of R Dor, we only show the deconvolved images to illustrate what the underlying source morphology might be. All the quantities that we present and model were extracted from the observed images and not from the deconvolved ones.
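For completeness, a minimal sketch of the deconvolution step, here using the Richardson-Lucy implementation in scikit-image rather than the IRAF task used for the paper (so the call below is an assumption about tooling, not a record of the actual processing), with the 80 iterations quoted above:

from skimage import restoration

def deconvolve_total_intensity(image, psf, n_iter=80):
    psf = psf.astype(float)
    psf /= psf.sum()                 # Richardson-Lucy expects a normalised PSF
    image = image.astype(float)
    # The keyword is `iterations` in older scikit-image releases.
    return restoration.richardson_lucy(image, psf, num_iter=n_iter, clip=False)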
Observational results
The stellar photosphere
We first discuss the total intensity (Stokes I) images. The high spatial resolution achieved with SPHERE/ZIMPOL (≈ 20 mas) allows us to resolve the stellar disc of R Dor in the total intensity images (Figs. 1 and 2). The images acquired in the first epoch reveal a very asymmetrical source, with a horseshoe-shaped morphology both in V and in cntHα. In these images, more emission arises from the northern hemisphere and there is a region of low surface brightness in the south-west. We find an azimuthally-averaged full-width at half maximum (FWHM) in V and cntHα in this epoch of ≈ 71 mas (see Table 1). This asymmetric morphology is not as prominent in the cnt820 images and the source is also smaller at this wavelength, with FWHM ≈ 58.3 mas.
The surface brightness distribution of R Dor in cntHα changes between the two epochs of observation (Fig. 2). In Jan 2015, emission is concentrated in the central region and the intensity distribution is more axisymmetric, different from what is seen in the first epoch. In the second epoch the disc of R Dor has a FWHM ≈ 59.4 mas in cntHα. For reference, the 48 days that separate the two epochs correspond to slightly more than one-fourth of the shortest pulsation period of R Dor (175 days).
The values of the FWHM measured in the second epoch in filters cnt820 and cntHα are comparable to the stellar radius for a uniform disc obtained by Norris et al. (2012) of ≈ 27.2 mas, from observations that probe the stellar continuum in the near-infrared. However, the values of the FWHM in the first epoch in V and in cntHα are significantly larger than that (see Table 1).
A strong dependence of the size of the stellar disc on wavelength is a known feature of AGB stars. For instance, the measured uniform-disc diameters in the near-infrared are found to correlate with molecular spectral bands of CO and H 2 O (see, e.g., Wittkowski et al. 2008; Woodruff et al. 2009). In the visible wavelength range, TiO is expected to be the main source of opacity for a late-type M star such as R Dor. This molecule is found to dominate the spectrum in the wavelength range of V and cntHα, but its opacity is lower in the wavelength range of filter cnt820. For comparison, the molecular contribution is expected to be very small at the wavelengths at which Norris et al. performed their observations. For examples of spectra of late-type stars with band identification, see, e.g., Lançon & Wood (2000). Jacob et al. (1997) measured the diameter of R Dor in the pseudo-continuum region around 0.82 µm and in a TiO absorption band at 0.85 µm and found the stellar diameter to be 20% larger in the TiO band. Follow-up observations between 0.65 µm and 0.99 µm (Jacob et al. 2004) confirmed that the stellar disc size increases in the TiO bands. Ireland et al. (2004) found asymmetries in the stellar disc of R Dor in the same wavelength range observed by Jacob et al. (2004). The authors, however, could not determine whether these asymmetries were caused by variations in molecular excitation or in the light scattered by dust grains. Since we find that the size and the morphology of the stellar disc change considerably while the polarisation degree does not decrease significantly (see Section 2.2.2), we conclude that the difference in FWHM, the asymmetries, and the variation in morphology are mainly caused by variability of molecular (TiO) opacity.
The variability in TiO opacity can be caused by changes in the gas density and/or in the molecular abundance or excitation. Hydrodynamical models calculated with the code CO5BOLD (Freytag 2013) for a star with a few solar masses show that convective motions can produce large-scale density variations on time scales comparable to the one we find (3). However, these models lack a realistic wavelength-dependent radiative transfer that takes into account molecular opacity. As molecular excitation, and hence the surface brightness of the stellar disc, is significantly affected by density variations, by episodic dissipation of energy carried by shocks, and by variations in the stellar radiation field, a quantitative comparison between observations and such models is not yet possible.
The polarised light
Polarised light thought to be produced by scattering of stellar light off dust grains was detected from around the central star (see Fig. 3). The observed polarisation vectors are in the plane of the sky and tangential to a circle centred on the star, as expected for grains distributed in a circumstellar envelope illuminated by a central star. Fig. 4 shows that the directions of the observed polarisation vectors match this expected behaviour up to distances of about 160 mas in the images obtained in the V filter, about 130 mas in the first-epoch cntHα image, and about 145 mas in the second-epoch cntHα image. At these radii the polarisation signal disappears in the noise. We did not include the observations in cnt820 in this analysis because these were either saturated or had to be acquired with a neutral density (ND) filter. The region where the detector was saturated includes the inner rim of the ring from where polarised light is seen, and the ND filter can introduce uncalibrated instrumental polarisation.
The region from where we see polarised light is very similar for V and cntHα in the first epoch, but the brightness distribution is somewhat different. In V the polarised intensity is slightly more concentrated in the southern part of the image (60%), while in cntHα both hemispheres show roughly the same emission, with 53% of the polarised intensity originating from the southern hemisphere. There is no obvious correlation between the directions of maximum or minimum polarised intensity and those with maximum or minimum total intensity. Interestingly, although the total intensity distribution in cntHα changes drastically from one epoch to the other, the polarised intensity distribution does not change significantly. The departure from axisymmetry is stronger in Jan 2015, but the location of the maxima and minima of the polarised flux does not change much between the two epochs. We divided the image into octants to facilitate our analysis and to minimise errors introduced by asymmetries of the dust envelope. Octant 1 is limited by the north and north-east directions and the numbering proceeds clockwise. The value of the polarised intensity changes considerably between the different octants. In Table 2 we list the fraction of the polarised light measured per octant per filter. The ratio between the intensity from the octants with maximum and minimum polarised intensity is roughly 2.3 for V and cntHα in the first epoch and 3.7 for cntHα in the second epoch.
Analysis and modelling of the polarised light
We now focus on our modelling efforts to derive the density radial profile of the grains that produce the scattered light.
Modelling approach
We calculated spherically symmetric models using the continuum radiative-transfer code MCMax (Min et al. 2009). The code calculates the direction-dependent scattering of radiation by dust grains and outputs images of the Stokes parameters. We convolved the Q and U images produced by the models with the PSF images. This typically caused a decrease of the output integrated polarised flux by a factor of between one and two. This is because in the Q and U images the negative and positive lobes from a symmetrical envelope can overlap when the resolution is lowered, which causes part of the polarised signal in the images to cancel out. The poorer the angular resolution, the larger the effect. The convolved Q and U images were then combined to obtain the polarised intensity for each model.
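The order of operations matters here: the model Q and U frames are convolved with the PSF first and only then combined, which is precisely what allows positive and negative lobes to overlap and cancel. A schematic version of this step (the array names and the FFT-based convolution are assumptions, not MCMax code):

import numpy as np
from scipy.signal import fftconvolve

def model_polarised_intensity(model_Q, model_U, psf):
    psf = psf / psf.sum()
    # Convolve Q and U separately; the lobe cancellation happens at this stage.
    Q_conv = fftconvolve(model_Q, psf, mode="same")
    U_conv = fftconvolve(model_U, psf, mode="same")
    # Combining before convolving would miss that cancellation entirely.
    return np.hypot(Q_conv, U_conv)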
We calculated the radial profile of the polarised intensity from the observations for each octant and compared these profiles independently to the models. We considered two different envelope structures: with dust grains distributed in a thin halo with constant density (model with no outflow), and with a dust density gradient given by ρ(r) = ρ • (R • /r) n , R • being the inner radius of the dust envelope and ρ • = ρ(R • ). We varied R • between 1.2 R and 2.0 R for both envelope structures and the halo thicknesses between 0.25 R and 1.5 R for the thin-halo scenario, where R = 27.2 mas (Norris et al. 2012).
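The two envelope structures described above can be summarised by the following sketch (radii in units of the stellar radius; the function names are ours, not part of MCMax):

def density_power_law(r, rho_0, R_0, n):
    """Outflow-like envelope: rho(r) = rho_0 * (R_0 / r)**n for r >= R_0."""
    return rho_0 * (R_0 / r) ** n if r >= R_0 else 0.0

def density_thin_halo(r, rho_0, R_0, thickness):
    """Constant-density halo extending from R_0 to R_0 + thickness."""
    return rho_0 if R_0 <= r <= R_0 + thickness else 0.0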
We use amorphous Mg 2 SiO 4 grains with the optical constants obtained by Jäger et al. (2003), as this dust species is one of the main candidates for driving the winds of oxygen-rich AGB stars (see, e.g., Bladh & Höfner 2012). We have also experimented with optical constants for MgSiO 3 and Al 2 O 3 , as discussed in Section 4.1.2. To calculate the absorption opacities and the direction-dependent scattering properties, we approximate the actual shapes of the particles by a distribution of hollow spheres (Min et al. 2003). We consider the radii, a, to be given by the standard distribution for grains in the interstellar medium, n(a) ∝ a −3.5 (Mathis et al. 1977). The minimum, a min , and maximum, a max , grain radii of the distribution were varied to fit the observations (see Section 4.1). Our fits are only sensitive to grains with a ≳ 0.1 µm, since smaller grains do not provide significant scattering opacity. For distributions with a min = 0.01 µm and a max between 0.2 µm and 0.5 µm, the amount of mass in grains with a ≳ 0.1 µm is roughly between 40% and 70%.
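The quoted 40-70% mass fractions follow directly from the adopted size distribution: for n(a) ∝ a^-3.5 the cumulative mass scales as the square root of the grain radius, so the fraction of mass above a given radius has a simple closed form. A quick check (a_min, a_max, and the 0.1 µm threshold are the values quoted above):

def mass_fraction_above(a_split, a_min, a_max):
    """Mass fraction in grains with a >= a_split for n(a) ~ a**-3.5
    (the cumulative mass scales as sqrt(a)); radii in micron."""
    return (a_max ** 0.5 - a_split ** 0.5) / (a_max ** 0.5 - a_min ** 0.5)

for a_max in (0.2, 0.5):
    print(a_max, round(mass_fraction_above(0.1, 0.01, a_max), 2))
# gives ~0.38 for a_max = 0.2 and ~0.64 for a_max = 0.5,
# consistent with the 40-70% range quoted in the text.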
Observational constraints
Our models only consider opacity due to dust grains, while molecular absorption may also be very important close to the star. Neglecting this source of opacity will probably cause us to overestimate the inner radius of the dust envelope. This is because photons scattered close to the star, where the gas densities are high, have a larger chance of being absorbed before escaping the envelope. Hence, the inner radius of the dust envelope would appear larger.
Instrumental polarisation may also affect the observations, though data reduction procedures reduce it to about 0.5%. This residual instrumental polarisation mainly affects the data at projected distances > 100 mas from the central source. To minimise the errors introduced, we only analysed the polarised light from the inner region up to 127 mas from the star, where the polarisation degree is larger than 1.5% and where we see the expected behaviour of the polarisation angles for the image in V and the two images in cntHα.
We fit our models to the integrated polarised fraction (IPF), the maximum polarised fraction (MPF), and the radial profile of the polarised intensity. We define the IPF as the polarised intensity integrated within a radius of 127 mas divided by the total intensity integrated over the same region. The MPF is the maximum value of the polarised fraction per octant integrated within a radius of 127 mas divided by the total intensity integrated over a circle with a 127 mas radius. These observed quantities are given in Table 1. The errors given are the 1-σ uncertainties derived from the uncertainties on the polarised intensity. We also compared our models to the radial profile of the total intensity to set upper limits on the dust densities.
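A sketch of how the IPF and MPF defined above can be evaluated on the images; the array and variable names are assumptions, as is the reading of the MPF as the polarised fraction of the octant with the highest polarisation (the mapping of octant number onto sky direction also depends on the image orientation and is not taken from the paper):

import numpy as np

PIX_SCALE_MAS = 3.602    # mas per pixel (Section 2.1)

def ipf_and_mpf(pol_int, total_int, centre, r_max_mas=127.0, n_oct=8):
    y, x = np.indices(pol_int.shape)
    dy, dx = y - centre[0], x - centre[1]
    inside = np.hypot(dx, dy) * PIX_SCALE_MAS <= r_max_mas
    ipf = pol_int[inside].sum() / total_int[inside].sum()
    # Split the 127 mas aperture into eight 45-degree octants.
    azimuth = np.degrees(np.arctan2(dx, dy)) % 360.0
    octant = (azimuth // 45).astype(int)
    per_octant = [pol_int[inside & (octant == k)].sum()
                  / total_int[inside & (octant == k)].sum()
                  for k in range(n_oct)]
    return ipf, max(per_octant), per_octant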
The IPF in the image in V is significantly smaller than those in cntHα in both epochs. The polarised signal in the first-epoch cntHα image can be affected by the non-ideal conditions under which the data were taken. Specifically, the Strehl ratio in that image is different from that in the reference PSF (ψ 2 Ori), which was taken under good sky conditions. This can cause more instrumental polarisation to be produced, affecting the IPF we measure. This is especially important in the regions where the polarisation degree is low (≲ 1%). The effect of the worse conditions can be seen in the images of the direction of the polarisation vectors (Fig. 4). The first-epoch image in cntHα shows the signal from circumstellar polarisation up to a smaller distance than the second-epoch image. This would not be expected if both images had been taken under equal conditions, because in the first epoch the exposure time was longer. Nonetheless, decreasing the radius of integration from 127 mas to 110 mas only reduces the value we obtain for the IPF by a factor of 1.01. Therefore, the effect of the worse sky conditions on the IPF is not significant. Moreover, the observations in Jan 2015 also show a high IPF in cntHα. This leads us to conclude that the higher IPF in cntHα when compared to that in V is a real feature of the inner envelope of R Dor. However, we were not able to reproduce this wavelength dependence of the IPF with our models, which always show higher IPF in V or at most similar values of the IPF between the two filters.
An important consideration is that the measured IPF can be suppressed by molecular absorption. This is because the scattered photons travel longer through the envelope than photons that do not interact and, hence, have a higher probability of being absorbed by molecules. This causes the IPF produced by scattering off dust grains to decrease if the molecular absorption opacity increases at a given wavelength. Given these considerations, it is more likely that polarised photons in the V band are absorbed by molecules than artificially created in cntHα because of instrumental polarisation. Therefore, we consider the values measured in cntHα as more representative of the IPF produced by the dust. This approach also guarantees that we do not underestimate the dust densities we derive. Hence, we fit our models to the uncertainty-weighted average IPF in cntHα from both epochs combined, of 2.35 ± 0.1%.
In order to calculate radial profiles, we need to define a central pixel, but the complex morphology of the source makes this determination difficult. For reference, the FWHM of R Dor in the total intensity images in V is roughly 20 pixels (72 mas, see Table 1). We have chosen the central pixel in the images in V as the one most nearly equidistant from the peak of polarised light at different azimuths. The central pixels in the other filters were chosen by establishing the best match when overlaying the polarised-light images with those in V. By following this approach, we find that the chosen central pixels differ from the centre of light by a distance of one pixel (3.6 mas) or two pixels (7.2 mas) for the images in V and in both epochs in cntHα. The pixels where the total intensity peaks differ from our chosen central pixel by a distance of between 3 pixels (10.8 mas) and 4 pixels (14.4 mas) for the image in V and the first-epoch image in cntHα. For the second-epoch image in cntHα, in which the stellar disc is more axisymmetric, the intensity peaks on our chosen central pixel. These considerations support our approach for determining the central pixel. The uncertainty caused by the choice of the central pixel on the derived radial profiles of the polarised intensity was taken as the maximum difference between considering the chosen central pixel or one of its eight neighbouring pixels. This approach overestimates the uncertainty on the slope of the radial profile of the polarised intensity. The effect of considering different central pixels on the IPF and MPF is negligible compared to the uncertainty of the measurements.
In Fig. 6 we compare the radial profiles for the eight octants in the images in V and in cntHα. The profiles were normalised using the integrated polarised intensity for each filter. The radial profiles obtained from the three images are very similar and agree well within the uncertainties. The fact that the profiles from cntHα in the first epoch deviate slightly from the other two for r ≳ 3 R can be attributed to the worse conditions when those observations were taken and to a higher instrumental polarisation.
Results and discussion
We were not able to determine the particle size from the wavelength dependence of the scattering. This is because of the expected contribution of molecular opacity in the wavelength range of the observations, which can disrupt the wavelength dependence imprinted by scattering. We infer that molecular opacity is important in the first epoch because the IPF in V is lower than that in cntHα, a behaviour we are not able to reproduce with our dust-only models.
We fit the IPF of 2.35 ± 0.1% and the radial profiles within 127 mas (5 R, see Section 3.2) measured in the first epoch in V and in the second epoch in cntHα. This means that our results are limited to a small region around the star. For reference, material moving outwards at the maximum gas expansion velocity, of 5.7 km s −1 , would take roughly five years to cross the region we probe. We also used the radial profile of the total intensity from the second-epoch observations in cntHα to set an upper limit on the dust densities allowed by the models. Finally, we investigated whether a high value of ρ • is enough to reproduce the MPF.
The integrated polarised fraction and the radial profile of the polarised intensity
Optical-depth effects become important when we consider grain size distributions with maximum radii, a max , between 0.2 µm and 0.5 µm. For these values of a max , our models require an average optical depth τ V ≳ 1 to reproduce the observed IPF. In the optically-thick regime our fit parameters, ρ • , R • , and n, cannot be determined independently. Typically, the IPF we obtain from our models is larger the larger the dust mass within the region of integration (127 mas). However, when τ V ≳ 2, increasing ρ • , and hence the dust mass, does not lead to a larger IPF because of the effect of multiple scattering. Also, for τ V ≳ 1.5, the peak of the radial profile of the polarised intensity becomes broader and shifts to larger radii, in comparison to optically-thin models. This causes models with τ V ≳ 1.5 to require smaller values of R • and larger values of n, when compared to optically-thin models. Our models with τ V ≲ 1.5 require R • = 1.45 ± 0.1 R and 3.5 ≲ n ≲ 5 (with an average best-fitting value of n ≈ 4.5) to fit the radial profile of the polarised intensity. A model with τ V ≈ 3.5 requires R • ≈ 1.2 R and n ≈ 5.5 instead.
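For the power-law envelopes used here the radial optical depth has a closed form, which makes the coupling between ρ •, R •, and n in the optically-thick regime explicit. In the sketch below, kappa_V is the V-band dust opacity per gram; its value depends on the adopted grain model and is not quoted in this section, so it is an assumed input:

def radial_tau_V(kappa_V, rho_0, R_0, n):
    """tau_V = integral from R_0 to infinity of kappa_V * rho_0 * (R_0/r)**n dr
             = kappa_V * rho_0 * R_0 / (n - 1), valid for n > 1."""
    if n <= 1:
        raise ValueError("the integral diverges for n <= 1")
    return kappa_V * rho_0 * R_0 / (n - 1.0)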
Constraints from the total intensity images
Although we do not attempt to fit the unpolarised surface brightness of R Dor, we use the radial profile of the total intensity to set an upper limit on the scattering optical depth of the envelope. The first-epoch images do not offer strong constraints because the stellar disc is seen to be very large in that epoch both in V and in cntHα. However, as shown in Fig. 7, the second-epoch images in cntHα do offer important constraints. Optically-thick models overpredict the total intensity for all distances from the star. Optically-thin models also overpredict the total intensity for r ≲ 30 mas, but that is because our limb-darkened stellar model is not accurate enough to reproduce the brightness distribution of the stellar disc. Since the polarised fraction in cntHα does not seem to decrease from the first to the second epoch, we conclude that this limit on τ V applies to both epochs. This reinforces the conclusion that the variability and asymmetries in the total intensity images in the first epoch are caused by molecular opacity. We conclude that models with scattering optical depths ≳ 1.2 do not provide acceptable fits to the data. Moreover, the value derived by Norris et al. (2012) for the inner radius of the dust envelope of R Dor, of R • = 1.6 R, is in better agreement with what we obtain for models with τ V ≲ 1.5. Since the scattering opacities in the wavelengths of the observations of Norris et al. are roughly one order of magnitude lower than those in the wavelength range of our observations, it is unlikely that their results are affected by high scattering optical depths in the envelope. This reinforces the conclusion that optically-thin models are preferred.

Table 3. The calculated reduced-χ2 values of the models shown in Fig. 8 fit to the radial profiles measured in the V filter for each octant. These were computed using the normalised profiles of models and observations between 18 mas and 127 mas to reflect the goodness of the fit to the shape of the observed radial profiles. The lowest reduced-χ2 value for each octant is highlighted with boldface.
Larger grain sizes and different dust species
We have tested the effect of increasing a max to values between 0.6 µm and 1.0 µm. We find that such models produce lower values of the IPF for a given optical depth, when compared to models with smaller values of a max . Therefore, the effects of high τ V are stronger the larger a max , for a max ≳ 0.1 µm.
We have also experimented using optical constants for amorphous Al 2 O 3 (Koike et al. 1995) and MgSiO 3 (Dorschner et al. 1995), with all other parameters kept constant. We find IPF values that differ only by a few percent from those obtained using Mg 2 SiO 4 . We conclude that the effect of the assumed grain species on our results is small.
The radial profile of the dust density
The best fits to the observed radial profile measured in V are shown in Fig. 8. In Table 3, we show reduced-χ2 values for these same models fit to the normalised profile of the observed polarised flux. As can be seen, models with dust grains confined to a thin halo are not able to reproduce the observations for r ≳ 2.5 R, while models with n ≲ 3 have dust density profiles that are too shallow and overpredict the observed polarised intensity for r ≳ 2.5 R. The dust density profiles we find are not sensitive to the size of the particles we consider, as long as the optical depth in the visual, τ V , does not vary.
The best-fitting values of τ V and ρ • for models with different minimum and maximum grain sizes are given in Table 4. The dust density profiles (n ≈ 4.5) we obtain are steeper than that of a wind expanding with constant speed (n = 2). This can be the result of acceleration, of destruction of the dust grains in the observed region, of a decreasing mass-loss rate on short time scales, or of the density structure imprinted by consecutive shock waves in the inner envelope, as predicted by 1-D wind-driving models (see, e.g., Höfner et al. 2003, for gas density profiles from models for carbon stars). Ireland et al. (2005) observed polarised light from two low mass-loss rate, oxygen-rich AGB stars, R Car and RR Sco, and found that the data were better reproduced by a model with a central star surrounded by a thin shell, instead of an outflow. If these sources also have steep density profiles in the inner wind, the structure they interpreted as a thin shell might actually be the edge of a dust envelope similar to that of R Dor.
Our models produce values of the scattered-light fractions in the near-infrared roughly a factor of 3.5 larger than those reported by Norris et al. (2012) for R Dor. It is not clear whether this is caused by variability of R Dor between the times of the two observations, by systematic errors in the data acquisition methods, or by differences in the models used by us and by Norris et al. to derive these quantities.
The maximum polarised fraction
We estimated the variation in ρ • required to reproduce the difference in the observed polarisation degree between the octants. We are not able to reproduce the MPF in cntHα (> 3%) with models that have a max ≳ 0.2 µm, because the model envelopes become too optically thick and do not reach such high polarised fractions. By considering a max = 0.1 µm we are able to fit the observed IPF with much lower maximum optical depths of ≈ 0.2 and we are then able to reproduce the MPF. Norris et al. (2012) report grains of 0.3 µm in the outflow of R Dor. This value is in the range of grain radii for which we cannot reproduce the MPF.
Tangential optical depths
Given the considerable optical depths in the wavelengths we observe, asymmetries can cause tangential optical depths in the envelope to be significantly different from those in our spherically symmetric model. The tangential optical depths are measured along the line-of-sight and in the direction of the observer, between the plane of scattering and the outer edge of the envelope. For lines-of-sight with lower tangential optical depths, more polarised photons can escape. In this case, the MPF could be reproduced by an envelope with grains with a max ≳ 0.2 µm and tangential optical depths varying with azimuth. In order for the polarised intensity from one octant to be a factor of 1.5 higher than that of a spherically symmetric model, the tangential optical depth would have to be ∼ 0.4 smaller. Given that the optical depths we find are of order unity, this would imply a decrease of the tangential optical depth of roughly 40% for the octants with maximum polarisation. Since the polarised fraction would depend on both the tangential optical depth and on ρ • for each octant for non-spherically-symmetric envelopes, we have not explored this possibility further.

Table 4. Best-fit models for the radial distribution of scattering grains using different minimum and maximum grain sizes. ρ • is the density needed at R • to produce the IPF for a model with n = 4.5. τ V is the radial V-band optical depth of the corresponding model. We also show the range in ρ • and τ V required to reproduce the observed variation of polarisation degree between the octants with maximum and minimum values. The question marks indicate that we were not able to reproduce the MPF because the models become too optically thick.
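Returning to the tangential-depth estimate above: if the escaping polarised flux scales as exp(−τ), a factor of 1.5 between octants corresponds to a depth difference of ln(1.5). A one-line check of the ∼0.4 figure (the factor 1.5 is the value quoted in the text):

import math
print(round(math.log(1.5), 2))   # 0.41, i.e. the ~0.4 decrease quoted above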
Comparison with ZIMPOL results for W Hya

Ohnaka et al. (2016) reported SPHERE/ZIMPOL observations of the O-rich AGB star W Hya that resolved the direct stellar emission and the region in the close circumstellar environment where polarised light is produced. The authors find large grains (… µm) and small optical depths, of 0.1, in visual wavelengths. These are very different from our preferred models for R Dor, with optical depths in visible wavelengths of ∼ 1. Since the polarisation degree reported by Ohnaka et al. for W Hya is comparable to what we find for R Dor and the mass-loss rate of W Hya (of ∼ 1.3 × 10 −7 M yr −1 , Khouri et al. 2014) is similar to that of R Dor, the differences between the results reported by Ohnaka et al. and ours are not expected. Ohnaka et al. used a non-spherically symmetric radiative transfer model to reproduce the observations. This might affect the derived optical depths because the tangential optical depths can be considerably different from a spherically symmetric model (see Section 4.2.1). Moreover, the authors constrained the dust mass also taking into account the scattered light fractions reported by Norris et al. (2012) for W Hya. Therefore, the differences in the derived optical depths might be related to the fact that we overpredict the scattered light fractions given by Norris et al. for R Dor (see Section 4.1.3). However, we are not able to determine the cause of these discrepancies and further investigation is needed.
We note that if our models underestimate the polarised flux for a given value of the optical depth of the envelope, the dust densities we find will be overestimated. Hence, the actual dust densities and the dust-to-gas ratio in the envelope will be smaller than the values we report.
Dust grains as wind drivers
We now investigate whether the grains we see around R Dor are sufficient for driving the wind. Wind-driving models indicate that the minimum dust-to-gas ratio (d/g) for a wind to develop is d/g ≈ 6 × 10 −4 (Höfner 2008) or even d/g ≈ 3.3 × 10 −4 (Bladh et al. 2015) 4 . These models consider grains of a single radius, which is typically 0.2 µm.
We estimate an empirical upper limit for the d/g at a radius r in the envelope of R Dor by considering a lower limit for the gas density, ρ gas (r) ≥ Ṁ gas / (4π r 2 υ ∞ ), and the dust density profiles we find. We use the parameters commonly obtained for the gaseous outflow of R Dor, of Ṁ gas = 9 × 10 −8 M yr −1 and υ ∞ = 5.7 km s −1 (see, e.g., Khouri 2014). This should give us a robust lower limit on the gas densities, since the gas expansion velocities are expected to be smaller than υ ∞ close to the star.
For the dust density profile, we use n = 4.5 and the average density from the model with only large grains, ρ • = 3 × 10 −18 g cm −3 , as this is the model most directly comparable to the single-grain-size models of Höfner and Bladh et al. This choice should not have a strong effect on our results because, at a given distance from the star, all our best-fitting models have densities of grains with a ≳ 0.15 µm that are always smaller than or very similar to that of the model with only large grains.
We find d/g ≲ 5 × 10 −3 at r = 1.5 R and d/g ≲ 2 × 10 −4 at r = 5.0 R. Although the upper limit for the d/g we derive at r = 1.5 R does not provide strong constraints, the low value of the upper limit at r = 5.0 R shows that at that radius the d/g in the outflow of R Dor is close to the limit of what wind-driving models require. We note that our results can underestimate the dust densities if the polarisation efficiency of the grains is overestimated by our model. However, this would imply higher scattering optical depths for the envelope, which our results disfavour. Given the low expansion velocity and mass-loss rate of R Dor, a low value of the d/g is not unexpected. However, if the steep radial profile of the dust density extends beyond 5 R, the d/g would soon become too low for the dust to provide the required opacity to drive the outflow. This would be a problem because the maximum expansion velocity of the outflow of R Dor only equals the escape velocity of a 1 M star at r ≈ 35 R, and hence the radiation pressure force would have to be larger than the gravitational pull up to that distance.
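The two upper limits quoted in this section can be reproduced from the numbers given in the text. Converting the 27.2 mas stellar radius to physical units requires a distance, which is not quoted here, so the ≈59 pc distance of R Dor is an assumption in the sketch below (as is taking ρ • to apply at 1.5 R rather than at the fitted R • ≈ 1.45 R):

import math

MSUN_G, YR_S, PC_CM = 1.989e33, 3.156e7, 3.086e18
mdot = 9e-8 * MSUN_G / YR_S                 # gas mass-loss rate [g/s]
v_inf = 5.7e5                               # expansion velocity [cm/s]
R_star = math.radians(27.2e-3 / 3600.0) * 59.0 * PC_CM   # 27.2 mas at ~59 pc
rho_dust_0, n = 3.0e-18, 4.5                # dust density [g/cm^3] near 1.5 R_star

def dust_to_gas_upper_limit(r_over_rstar, r0_over_rstar=1.5):
    r = r_over_rstar * R_star
    rho_gas_min = mdot / (4.0 * math.pi * r ** 2 * v_inf)   # lower limit on gas
    rho_dust = rho_dust_0 * (r0_over_rstar / r_over_rstar) ** n
    return rho_dust / rho_gas_min                           # upper limit on d/g

print(dust_to_gas_upper_limit(1.5), dust_to_gas_upper_limit(5.0))
# ~5e-3 at 1.5 R_star and ~2e-4 at 5 R_star, matching the limits quoted above.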
Summary and conclusions
We have observed the oxygen-rich AGB star R Dor using SPHERE/ZIMPOL on the VLT in three filters: V, cntHα, and cnt820. Observations in cntHα were acquired in two epochs 48 days apart. The stellar disc is resolved in all observations and we are able to study asymmetries and variability in the star. We find the total intensity distribution of R Dor to have a horseshoe-shaped morphology in the first epoch both in V and in cntHα. In the pseudo-continuum filter cnt820 the stellar disc is smaller and any departures from axisymmetry are much less pronounced. In the second epoch, taken 48 days later, the image in cntHα shows a source with a very different morphology. Moreover, the stellar disc is significantly smaller than in the first-epoch image in the same filter and is comparable in size to what we see in cnt820, also in the first epoch. We interpret these differences in size and morphology as being caused by variability in the excitation and/or density of TiO molecules in the extended atmosphere of the star.
We detect polarised light coming from a ring that encloses the central source and that spans a similar region in the three images we consider, in V and the two epochs in cntHα. However, the polarised intensity varies significantly with azimuth for the three images and the ratio of the polarised intensity between two images also varies somewhat with azimuth. We find the integrated polarised fraction in V to be smaller than those in cntHα in the two epochs and the integrated polarised fraction to increase in cntHα from the first to the second epoch. Our fits to the integrated polarised fraction and the radial profile of the polarised intensity show that we see outflowing dust grains. Considering models with dust density profiles of the type ρ(r) = ρ • r −n , we find that the dust density decreases much more steeply with radius than expected for a wind expanding at constant speed. This can be caused by the acceleration of the wind, but could also be explained by destruction of the dust grains or by a varying mass-loss rate on short time scales.
We use our best-fitting dust models and literature values for the gaseous outflow to calculate upper limits for the dust-to-gas ratio as a function of radius. We compare the limits we obtain to results from wind-driving models and we find that the upper limit we derive for the dust-to-gas ratio at 5 R is somewhat lower than the minimum values required by such models for a wind to develop. Given the approximations we use for the grain model and the envelope structure, the upper limit we find for the dust-to-gas ratio is consistent with the value found in winddriving models. However, if the steep dust density gradient we derive extends to larger radii, the dust-to-gas ratio would surely become too small for the outflow to be driven. Given the low expansion velocity of the outflow of R Dor, the wind only reaches the escape velocity of a one solar mass star about 35 R from the central star. Therefore, if the grains we see are the main source of opacity that drives the wind, we would expect a flattening in the power law of the dust density profile not much farther out than the maximum radius we probe.
Further investigation of the outflow of R Dor will help to better understand how its wind is accelerated. Particularly, deeper observations using ZIMPOL can probe the dust density farther out in the envelope to better constrain the role of the scattering grains in driving the wind. Similar studies for other AGB stars will show whether what we see for R Dor is representative in any way and how the distribution of the dust in the inner wind changes for different stars. This will help advance our knowledge of the driving of AGB outflows and of the AGB evolution in general.
Fig. 1. Top panels: total intensity normalised using the peak flux as observed using ZIMPOL on 10- and 11-Dec-2014 in the three filters, V, cntHα, and cnt820. Middle panels: the images of the PSF-reference star ψ 2 Ori normalised to the peak flux. Bottom panels: the corresponding deconvolved images, again normalised using the peak flux. The dashed red circles show the size of the stellar disc derived by Norris et al. (2012) from observations in the near-infrared (same as shown in Figs. 2, 3, 4, and 5). The full red circles show the FWHM of the PSF used as reference.

Fig. 2. Top panels: total intensity normalised to unity as observed using ZIMPOL in cntHα on 10-Dec-2014 and 28-Jan-2015. Bottom panels: the corresponding deconvolved images, again normalised to unity. The dashed red circles show the size of the stellar disc derived by Norris et al. (2012) from observations in the near-infrared (also shown in Figs. 1, 3, 4, and 5). The full red circles show the FWHM of the PSF used as reference.

Fig. 3. Polarised intensity seen in V, cntHα, and in cnt820 on 11-Dec-2014 and in cntHα on 28-Jan-2015, normalised to the peak value and shown with a square-root scaling. The dashed red circles show the size of the stellar disc derived by Norris et al. (2012) from observations in the near-infrared (also shown in Figs. 1, 2, 4, and 5). The white circles (not shown for cnt820, see Section 2.2.2) mark the region where we find the polarised light produced in the envelope to dominate over instrumental effects.

Fig. 4. Direction of the measured polarisation vector for the four images, given in degrees relative to the north direction. The red circles show the size of the stellar disc derived by Norris et al. (2012) from observations in the near-infrared (also shown in Figs. 1, 2, 3, and 5).

Fig. 5. Polarisation degree for the four images. The red dashed circles show the size of the stellar disc derived by Norris et al. (2012) from observations in the near-infrared (also shown in Figs. 1, 2, 3, and 4). The dashed blue circles enclose the regions where we find the polarised intensity produced in the envelope to be a factor of three larger than the expected instrumental polarisation (not shown for cnt820, see Section 2.2.2).

Fig. 6. Comparison between the radial profile of the polarised intensity for the different octants obtained from the observations in the first epoch in V (green line) and in cntHα (red line), and in the second epoch in cntHα (blue line). The profiles were normalised using the value of the integrated polarised intensity for each filter. The octants are identified in each panel (see Section 2.2.2). The filled region shows the three-σ errors from combining the uncertainty given by the ESO pipeline with that from choosing the central pixels.

Fig. 7. Observed azimuthally-averaged radial profile of the total intensity in the second-epoch cntHα images (full black line) compared to models with τ V = 0.65, R • = 1.5 R, and n = 4 (dashed red line), τ V = 1.3, R • = 1.45 R, and n = 4.5 (dotted green line), and τ V = 1.9, R • = 1.4 R, and n = 5 (dotted-dashed blue line). The values of R • and n for each of the dust models shown have been determined by fitting the radial profile of the polarised intensity. For reference, we also show the radial profile of our model star with no dust envelope (double-dashed purple line).

Fig. 8. Comparison between the radial profile of the polarised intensity obtained from models and from the observations in V (black line) for the different octants. The fraction of the IPF arising from each octant is given in percent. Octant one is limited by the north and north-east directions and the numeration follows clockwise. The grey-filled region shows the three-σ errors from combining the uncertainty given by the ESO pipeline with that from choosing the central pixel. The different octants are identified by number in each panel (see Section 2.2.2). We show the best-fit models for: a thin halo (pink line), n = 5 (cyan line), n = 4 (red line), n = 3 (green line), and n = 2 (blue line). The filled regions around the model lines show variation with direction introduced by the convolution with the PSF.
Table 1. Observation log.

Filter              Average JD   Exp time   DIT [s]    λ•     ∆λ     AM     θ      FWHM    Peak PD   MPF           IPF          Rem.
                    [2457000+]   [min]      x NDIT     [nm]   [nm]          ["]    [mas]   [%]       [%]           [%]
V - 11 Dec 14       2.563        42.7       4 x 10     554.0  80.6   1.40   1.0    70.6    4.8       1.85 ± 0.15   1.25 ± 0.1   -
cntHα - 10 Dec 14   1.767        48.0       10 x 4     644.9  4.1    1.40   1.35   72.1    3.0       3.15 ± 0.2    2.3 ± 0.1    -
cntHα - 28 Jan 15   50.534       11.5       1.2 x 36   644.9  4.1    1.25   0.93   59.4    9.0       5.45 ± 0.9    3.6 ± 0.5    -
cnt820 - 10 Dec 14  2.589        4.8        1.2 x 30   817.3  19.8   1.35   1.1    58.3    -         -             -            ND1
cnt820 - 10 Dec 14  2.584        14.4       1.2 x 30   817.3  19.8   1.35   1.2    -       -         -             -            sat.

Notes. The average Julian date and the total integration time of the observations are given in columns 2 and 3. DIT is the exposure time of the individual frames and NDIT is the number of frames per exposure. Each cycle (DIT x NDIT) was repeated four times to obtain +Q, −Q, +U, and −U frames and the whole cycle was repeated several times for each filter and epoch until the total exposure time was reached. AM and θ are the airmass and the visible seeing, respectively. λ• and ∆λ are the central wavelength and the full-width at half maximum of the filters used. FWHM is the azimuthally-averaged full-width at half maximum observed for R Dor. Peak PD, IPF, and MPF (defined in Section 3.2) stand, respectively, for the peak value of the polarisation degree, the integrated polarised fraction, and the maximum polarised fraction (not given for cnt820, see Section 2.2.2). In the last column, ND1 is the neutral density filter used and sat. indicates that the CCD is saturated.
Table 2. Fraction of the polarised flux arising from the different octants for the images in V and in cntHα.

Octant   V [%]   cntHα Dec [%]   cntHα Jan [%]
1        11      15.5            13
2        11      13.5            16
3        13      9               14.5
4        17.5    13.5            14.5
5        16      17.5            18.5
6        13.5    13.5            11.5
7        7.5     7.5             5
8        10      ...             ...
(1) Downloadable from ftp://ftp.eso.org/pub/dfs/pipelines/sphere/
(2) http://iraf.noao.edu
(3) See http://www.astro.uu.se/~bf/ for the model results.
(4) Both studies report silicon condensation fractions. We calculated the d/g by adopting solar silicon abundances (Asplund et al. 2009).
References

Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481
Bedding, T. R., Zijlstra, A. A., Jones, A., & Foster, G. 1998, MNRAS, 301, 1073
Beuzit, J.-L., Feldt, M., Dohlen, K., et al. 2008, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7014, Ground-based and Airborne Instrumentation for Astronomy II, 701418
Bladh, S. & Höfner, S. 2012, A&A, 546, A76
Bladh, S., Höfner, S., Aringer, B., & Eriksson, K. 2015, A&A, 575, A105
Dorschner, J., Begemann, B., Henning, T., Jaeger, C., & Mutschke, H. 1995, A&A, 300, 503
Freytag, B. 2013, Memorie della Societa Astronomica Italiana Supplem., 24, 26
Habing, H. J. & Olofsson, H., eds. 2003, Asymptotic Giant Branch Stars
Höfner, S. 2008, A&A, 491, L1
Höfner, S., Gautschy-Loidl, R., Aringer, B., & Jørgensen, U. G. 2003, A&A, 399, 589
Ireland, M. J., Tuthill, P. G., Bedding, T. R., Robertson, J. G., & Jacob, A. P. 2004, MNRAS, 350, 365
Ireland, M. J., Tuthill, P. G., Davis, J., & Tango, W. 2005, MNRAS, 361, 337
Jacob, A. P., Bedding, T. R., Robertson, J. G., et al. 2004, MNRAS, 349, 303
Jacob, A. P., Bedding, T. R., Robertson, J. G., et al. 1997, in IAU Symposium, Vol. 189, ed. T. R. Bedding, A. J. Booth, & J. Davis, 10
Jäger, C., Dorschner, J., Mutschke, H., Posch, T., & Henning, T. 2003, A&A, 408, 193
Khouri, T. 2014, PhD thesis, University of Amsterdam
Khouri, T., de Koter, A., Decin, L., et al. 2014, A&A, 561, A5
Koike, C., Kaito, C., Yamamoto, T., et al. 1995, Icarus, 114, 203
Lançon, A. & Wood, P. R. 2000, A&AS, 146, 217
Maercker, M., Schöier, F. L., Olofsson, H., Bergman, P., & Ramstedt, S. 2008, A&A, 479, 779
Mathis, J. S., Rumpl, W., & Nordsieck, K. H. 1977, ApJ, 217, 425
Min, M., Dullemond, C. P., Dominik, C., de Koter, A., & Hovenier, J. W. 2009, A&A, 497, 155
Min, M., Hovenier, J. W., & de Koter, A. 2003, A&A, 404, 35
Norris, B. R. M., Tuthill, P. G., Ireland, M. J., et al. 2012, Nature, 484, 220
Ohnaka, K., Weigelt, G., & Hofmann, K.-H. 2016, ArXiv e-prints
Olofsson, H., González Delgado, D., Kerschbaum, F., & Schöier, F. L. 2002, A&A, 391, 1053
Wittkowski, M., Boboltz, D. A., Driebe, T., et al. 2008, A&A, 479, L21
Woodruff, H. C., Ireland, M. J., Tuthill, P. G., et al. 2009, ApJ, 691, 1328
Understanding Global Galactic Star Formation (Scowen et al.)
We propose to the community a comprehensive UV/optical/NIR imaging survey of Galactic star formation regions to probe all aspects of the star formation process. The primary goal of such a study is to understand the evolution of circumstellar protoplanetary disks and other detailed aspects of star formation in a wide variety of different environments. This requires a comprehensive emission-line survey of nearby star-forming regions in the Milky Way, where a high spatial resolution telescope+camera will be capable of resolving circumstellar material and shock structures. In addition to resolving circumstellar disks themselves, such observations will study shocks in the jets and outflows from young stars, which are probes of accretion in the youngest protoplanetary disks still embedded in their surrounding molecular clouds. These data will allow the measurement of proper motions for a large sample of stars and jets/shocks in massive star-forming regions for the first time, opening a new window to study the dynamics of these environments. It will require better than 30 mas resolution and a stable PSF to conduct precision astrometry and photometry of stars and nebulae. Such data will allow production of precise color-color and color magnitude diagrams for millions of young stars to study their evolutionary states. One can also determine stellar rotation, multiplicity, and clustering statistics as functions of environment and location in the Galaxy. For the first time we can systematically map the detailed excitation structure of HII regions, stellar winds, supernova remnants, and supershells/superbubbles. This survey will provide the basic data required to understand star formation as a fundamental astrophysical process that controls the evolution of the baryonic contents of the Universe.
Introduction & Scientific Context
Stars are the fundamental building blocks of the Universe and influence its evolution on all scales, from the cosmological to the planetary. The formation of stars locks away baryons for a Hubble time, they produce the energy that establishes the state of matter in the interstellar medium (ISM), they control the fate of self-gravitating masses, and they produce the light that renders distant galaxies visible. It is because of stars that elements heavier than helium are created. Without stars, there would be no planets, no carbon, and no free energy to drive the evolution of life. Star formation is the fundamental process underpinning the evolution of the Universe and life within it. Progress toward understanding the cosmic history of normal matter, the formation and evolution of galaxies, the birth and fate of planetary systems, and our own origins requires a comprehensive understanding of star formation as a large-scale, coherent, systematic process.
There has been remarkable progress in our understanding of star formation during recent decades. Molecular clouds form from the ISM. Their densest cores suffer gravitational collapse to form protostars which are 10 7 times smaller and 21 orders of magnitude denser. Spin and pressure gradients channel accretion from the envelope onto a spinning disk. Magnetic fields grow, extract angular momentum, and drive accretion from the disk onto the star. Dynamo-generated stellar magnetic fields regulate stellar rotation, accrete matter onto the star at high latitudes, and expel supersonic jets and bipolar outflows. Particles in the disk grow, sediment, and eventually form planets around the young star.
Observations, theory and numerical simulations have led to major paradigm shifts in this simple description of star and planet formation. First, the birth of isolated stars from a quiescent dark cloud is rare. Observations have shown that most stars form in turbulent giant molecular clouds with supersonic motions having Mach numbers of 10 to 100. Second, most stars form in dense clusters in close proximity to tens, hundreds, or even thousands of other stars. Some siblings are massive stars with powerful stellar winds, intense UV radiation fields, and violent and explosive deaths that dramatically affect the surrounding ISM. The vast majority of normal stars, probably including our own Sun, formed in such OB associations. Feedback of light, energy, and matter drives and regulates cloud formation, gravitational collapse, and the properties of the individual stars, multiple systems, and the clusters that form. These stochastic turbulent processes appear to be fundamental to understanding the origin and distribution of stellar masses and other stellar properties.
Compelling Science Themes Based on Recent Advances
We believe that to understand and address star formation as a global system, we need to design and engage in a systematic program of imaging that covers a large number and variety of Galactic star forming regions. To understand star birth in the early Universe, to understand galaxy formation and evolution, to understand the origin of the stellar mass spectrum, to understand the formation of planets, and to understand feedback, we must treat star birth as an integrated systemic process. We must observe star forming complexes in their entirety: we must trace the interactions between gas and stars, between stars and stars, and between disks and their environments. To make progress, we must spatially resolve disks, multiple stars, and star clusters. We must measure stellar motions, and perform relative photometry with sufficient precision to age-date young stars. All these top-level goals make specific requirements of any instrumentation designed to execute this program -requirements that we will detail in subsequent sections. At the heart of this program is the goal of providing critical advances in our knowledge of star and planet birth. The goals of our Galactic star forming imaging program are to make major advances in the following topics:
Young stellar objects (YSOs): Masses, mass-spectra, rotation rates, variability, ages, multiplicity, clustering statistics, motions, brown dwarfs, free-floating proto-planets.
We need to be able to trace individual star, multiple star, and cluster properties to assay the range of star formation products and the manner in which they are assembled - a goal that requires the combination of a wide field of view, high angular resolution, and photometric and astrometric stability, potentially enabling sub-milliarcsecond relative astrometry and milli-magnitude relative photometry. Measurement of the orbital motions of stars is necessary to gain insight into the dynamics of stars once they are produced, how cluster dispersion varies, and the possible detection of high velocity stars, as well as mapping large-scale nebular motions. Such measurements all require kilometer-per-second proper motion sensitivity for both stars and compact nebulae. Measurement of the stellar rotation rates for most stars is necessary to understand the resulting dynamics of each star formation episode and is achieved by recording star-spot modulation using precise relative photometry. Of particular interest is the search for transiting proto-planets in a subset of edge-on disks, and with a large-area imaging survey we will capture extremely rare types of events such as proto-planet collisions in 1 to 100 Myr old debris disks in associations. Precise cluster and association ages will be determined by fitting of HR diagram turn-on and turn-off loci, requiring accurate relative photometry. Extending this same photometry to binaries will enable the best calibrations of pre-main sequence evolutionary tracks. Addressing the questions of clustering, young cluster evolution, and cluster dissipation will require stellar positions and motions to be probed. With such datasets we will identify many young brown dwarfs and free-floating protoplanets.
Disks: Sizes, masses, structure, mass-loss rates, photoevaporation, density distributions, survival times.
A primary goal is to identify thousands of protoplanetary disks seen in silhouette, and embedded within evaporating proplyd envelopes in dozens of nearby HII regions, out to a distance of about 2 kpc. The widefield survey images taken toward regions such as Orion or Carina will extend the surveyed areas by one to two orders of magnitude over the most ambitious HST surveys undertaken so far. It will be possible to sample disks with ages ranging from 0.1 Myr to over 100 Myr when a variety of selected lines of sight are observed toward the Perseus, Orion, and Carina regions. It will be possible to look for spiral structure, gaps, and other evidence for disk perturbations from both internal and external influences. The nearest disks are 50 pc from the Sun toward TW Hya, Sco-Cen, and Perseus. We believe we will need to approach an angular resolution of nearly 1 AU at the shortest wavelengths toward these systems (20 mas at λ ≈ 0.2 µm). Hα and other key spectral line-diagnostics will be used to estimate photo-ionization induced mass-loss rates in irradiated proplyds, giving critical clues to their typical lifetimes.
Outflows: Microjets, jets, wide-angle flows, winds, motions, momenta, mass-loss rates, turbulence, shocks.
HST has demonstrated that sub-arcsecond imaging is needed to begin to resolve the structure of shocks, and to distinguish shock fronts from post-shock cooling layers. Furthermore, only space-based UV/optical observations can measure proper motions on a time-scale short compared to the cooling time. The survey observations will measure the proper motions of hundreds of outflows, enabling the first direct measure of the momentum and energy injected into the ISM by protostellar outflows for a wide range of stellar masses and star forming environments. Jet orientation changes will trace the history of stellar encounters in clusters. We will also measure the angular momentum of jets to determine their launch points.
While jets and shocks are interesting in their own right, as they emerge from a molecular cloud they also provide a signpost of the youngest protoplanetary accretion disks that are still deeply embedded. The spacing of major ejecta within a single outflow system traces the accretion history of the source YSO. In this way, jet structure provides a fossil record of the accretion and mass-loss histories of the source stars.
Nebulae: Excitation, motion, ionization fronts, triggered star formation.
This imaging program is deliberately designed to investigate the formation of HII regions and expanding bubble systems. How do ionization fronts disrupt surrounding clouds? Under what conditions do they trigger star formation in the medium? High spatial-resolution images with multiple narrowband filters are essential to resolve and correctly model the complex stratified structure of an ionization front. Each HII region / OB association provides a snapshot in time of a range of evolutionary stages. The portions of each region closest to the massive stars are likely to be the most evolved, oldest, and most processed parts of each region. As one moves away from the center, the gas, stars, and disks are likely to be in a younger evolutionary state.
Massive stars: Motions, variations, winds, interactions with siblings, HII regions
The program will also investigate stellar wind bubbles in HII regions and the interactions of stellar winds with cometary clouds, proplyds, naked young stars and their winds and jets, and the surrounding ISM. Another goal will be to investigate the properties of C-symmetric jets and outflows, wind-jet interactions, supernova-protostellar jet interactions in Orion, Carina, Rosette, NGC 3576, and other regions.
Recycling: Supernova remnants and planetary nebulae, bulk motions, excitation, shocks.
The late stages of stellar evolution -especially in massive stars -are an integral piece of the star and planet formation puzzle, because outflows from the deaths of massive stars drive the chemical evolution and energetics of the ISM. In particular, supernova ejecta enrich the ISM with the elements needed for life to exist, while supernova shocks and stellar winds may compress the surrounding ISM to trigger new star formation. Outflows from the deaths of intermediate mass stars (planetary nebulae) also enrich the ISM with dust, which is vital to the formation of molecular clouds.
By studying the structure and proper motions of a representative sample of nearby supernova remnants (the Crab, IC443, Cas A, Vela, the Cygnus Loop, etc), WR star bubbles (NGC6888, NGC2359), and planetary nebulae (the Helix, M27, etc), the fine details of the shocks and ionization fronts can be spatially resolved. The supernova remnants IC443 and Vela are particularly interesting, as they are directly interacting with molecular clouds. Also, this survey will probe unique regions such as Carina and NGC3603, where the stars are so massive and their lifetimes so short that their imminent death (Eta Carinae and Sher 25) is directly affecting the birth of stars in the same region.
Altogether, these data will probe the disruption of clouds, the recycling of stellar ejecta, the compression of the ISM into a new generation of clouds, and the triggering and propagation of star formation.
Superbubbles: Destruction of clouds, OB associations, T associations, global structure and evolution of star forming regions.
The energy input from the combined influence of UV radiation, stellar winds, and supernovae from massive stars makes "swiss cheese" out of the ISM. In the most massive star forming regions, where dozens of OB stars live fast and die young before moving very far from their birth sites, the combined effect of this feedback can blow giant shells or "superbubbles" that may eventually break out of the galactic plane, driving a galactic fountain that is vital to the recycling of the ISM. In a few regions such as Carina, NGC3603, NGC3576, W1, and W4, we have the opportunity to study the formation of superbubbles in exquisite detail, where we can actually resolve the structure of the expanding bubbles and model their physical properties.
The Galactic Ecology: Impact of spiral arms, formation of clouds, Galactic gradients in YSO and cluster properties, the Galactic Center
We believe an investigation of the "galactic ecology" is vital to understanding the global nature of the star formation process -the formation of giant molecular clouds from the ISM. How do HII regions and superbubble ionization fronts compress the surrounding ISM? Does ram-pressure trigger cloud formation? How do spiral arms trigger cloud formation? How do clouds and cloud cores collapse into clusters, and multiple stars?
Key Advances in Observation Needed
To achieve the science goals of this program, a variety of capabilities need to be implemented. The majority of the tracers and the various phases of the ISM and stellar populations being targeted require the angular resolution and wavelength agility of a medium to large aperture (1.5-4m) UV/optical space telescope combined with a widefield imaging camera that can provide diffraction-limited images into the UV-blue to capture the UV-bright stellar populations that HST has been unable to reach. Such a telescope needs to be located in an orbit that is both dynamically and thermally stable (such as L2) to produce the photometric stability required by many aspects of the science goals. A broad complement of both broad- and narrow-band filters will be necessary to isolate and measure not only the unique tracers of specific atomic species but also the trends in stellar color across entire swaths of our local Galactic neighborhood.
Over the next decade the specific technological capabilities that need to be developed include the ability to construct large focal plane arrays that are flight rated for space in a reliable and straightforward fashion that simultaneously mitigates the risk, maximizes the yield rate and keeps the costs down. This is a major challenge that affects not only this project but many others, and requires real investment on the part of the community to allow such systems to be built routinely. In addition, the design of next generation coatings and dichroic optical elements will allow for the design and construction of truly advanced telescope/camera systems that can yield remarkable advances in imaging efficiency for a minimal investment.
Four Central Questions to be Addressed
1. What is the formation and survival rate of Solar System class objects in massive star forming regions? There is a growing body of evidence that many stars form in these environments, and that our own Sun was one such system, based on meteoritic evidence concerning 60Fe.

2. What is the role of triggering and feedback in star formation propagation? A wide range of predictions from numerical simulations describe the role of triggering and feedback as being anything from dominant to negligible. What is the correlation between environment and the nature of the stellar population that forms in secondary and even tertiary star formation events?

3. How is the distribution of star formation across a galactic disk managed? We see evidence that an increase in the efficiency or intensity of star formation occurs almost simultaneously across large distances - what is the source of these global modes, and what environmental changes are necessary to initiate and support star formation at these levels?

4. When considering global star formation, what are the determining factors that cause stars to form in one place as opposed to another? At the microphysics level, how does elevated or starburst star formation compare to the more common modes? What dictates the intrinsic efficiency of the star formation process?

These latter questions will require comparison with observations from other nearby galaxies such as the LMC, but the database of observations from this program will be necessary to lay the groundwork to answer them.
Area of Unusual Discovery Potential for the Next Decade
While the science program in this paper has defined a loose set of specifications (see Table 1), it should be recognized that the opportunity for truly unique discovery is made possible by the combination of both a wide angular field of view (tens of arcminutes on a side) with the diffraction limit of a medium to large aperture in the UV/optical (resolution elements below 10-20 mas). HST and JWST have provided and will provide exquisite resolution but over very small fields of view. Many problems, such as those discussed here, and others such as the nature of the Universe around the time of Reionization, require not only large collecting area and high resolution, but large fields of view to locate and measure very rare objects, or suites of objects whose location cannot be known a priori. The potential discovery rate from such a combined capability should not be underestimated, and should be very seriously considered by the Decadal Survey.
Parameter | Specification | Justification
Field of View | At least 200 sq. arcmin | To allow a statistically complete survey of as many targets and environments as possible in a reasonable period of time
Resolution | Diffraction limited to 300 nm | To provide access to UV-blue stellar populations; to resolve structure in YSO jets, protoplanetary disks, ionization fronts, etc.
Aperture | 1.5-4m | Driven by the limiting surface brightnesses and magnitudes needed, traded against the necessary exposure times to achieve them - the larger the better
Stability | A small percentage of a pixel | To allow the stable photometry and astrometric measurements necessary to achieve the science goals
Photometric Stability | Combination of gain, A/D conversion and QE stable to better than 10^-5 | Again to provide the photometric stability to achieve the science goals of the project
Filters | - | Dictated by both the broad-band colors needed to survey stellar populations and the narrow-band diagnostics necessary to probe the resolved gas structure and dynamics
Optical Design | Efficient design offering a wide, well-corrected field of view to be populated by a large focal plane | The science program can only be achieved by an efficient design that offers parallel observing in the red and blue, with little field distortion, and as large an objective as possible
Detectors | High yield, efficient detectors, customized in their response to the passbands needed | Tiling the large focal plane will be challenging - we need an efficient manufacture and testing process, combined with the ability to match response to the optical channels
Figure 1 - the HST mosaic of the Carina Nebula (Smith et al. 2008). This is the kind of dataset that, when replicated across all massive star forming regions within 2-2.5 kpc of the Sun, will yield a dataset capable of unlocking the secrets of star formation as a global process.
Table 1: Science Driven General Specifications
| []
|
[
"Private GANs, Revisited *",
"Private GANs, Revisited *",
"Private GANs, Revisited *",
"Private GANs, Revisited *"
]
| [
"Alex Bie \nUniversity of Waterloo\nUniversity of Waterloo\n\n",
"Gautam Kamath \nUniversity of Waterloo\nUniversity of Waterloo\n\n",
"Guojun Zhang [email protected] \nUniversity of Waterloo\nUniversity of Waterloo\n\n",
"Alex Bie \nUniversity of Waterloo\nUniversity of Waterloo\n\n",
"Gautam Kamath \nUniversity of Waterloo\nUniversity of Waterloo\n\n",
"Guojun Zhang [email protected] \nUniversity of Waterloo\nUniversity of Waterloo\n\n"
]
| [
"University of Waterloo\nUniversity of Waterloo\n",
"University of Waterloo\nUniversity of Waterloo\n",
"University of Waterloo\nUniversity of Waterloo\n",
"University of Waterloo\nUniversity of Waterloo\n",
"University of Waterloo\nUniversity of Waterloo\n",
"University of Waterloo\nUniversity of Waterloo\n"
]
| []
| We show that the canonical approach for training differentially private GANs -updating the discriminator with differentially private stochastic gradient descent (DPSGD) -can yield significantly improved results after modifications to training. Existing instantiations of this approach neglect to consider how adding noise only to discriminator updates disrupts the careful balance between the generator and discriminator necessary for successful GAN training. We show that a simple fix -taking more discriminator steps between generator steps -restores parity and improves results. Additionally, with the goal of restoring parity between the generator and discriminator, we experiment with other modifications to improve discriminator training and see further improvements in generation quality. Our results demonstrate that on standard benchmarks, DPSGD outperforms all alternative GAN privatization schemes. * Authors GK and GZ are listed in alphabetical order. † | 10.48550/arxiv.2302.02936 | [
"https://export.arxiv.org/pdf/2302.02936v1.pdf"
]
| 256,615,755 | 2302.02936 | 5a83797cdc118fb9e2c982d3cd67386931701d5b |
Private GANs, Revisited *
Alex Bie
University of Waterloo
University of Waterloo
Gautam Kamath
University of Waterloo
University of Waterloo
Guojun Zhang [email protected]
University of Waterloo
University of Waterloo
Private GANs, Revisited *
Huawei Noah's Ark Lab
We show that the canonical approach for training differentially private GANs -updating the discriminator with differentially private stochastic gradient descent (DPSGD) -can yield significantly improved results after modifications to training. Existing instantiations of this approach neglect to consider how adding noise only to discriminator updates disrupts the careful balance between the generator and discriminator necessary for successful GAN training. We show that a simple fix -taking more discriminator steps between generator steps -restores parity and improves results. Additionally, with the goal of restoring parity between the generator and discriminator, we experiment with other modifications to improve discriminator training and see further improvements in generation quality. Our results demonstrate that on standard benchmarks, DPSGD outperforms all alternative GAN privatization schemes. * Authors GK and GZ are listed in alphabetical order. †
Introduction
Differential privacy (DP) (Dwork et al., 2006b) has emerged as a compelling approach for training machine learning models on sensitive data. However, incorporating DP requires significant changes to the training process. Notably, it prevents the modeller from working directly with sensitive data, complicating debugging and exploration. Furthermore, upon exhausting their allocated privacy budget, the modeller is restricted from interacting with sensitive data. One approach to alleviate these issues is by producing differentially private synthetic data, which can be plugged directly into existing machine learning pipelines, without further concern for privacy.
Towards generating high-dimensional, complex data (such as images), a line of work has examined privatizing generative adversarial networks (GANs) (Goodfellow et al., 2014) to produce DP synthetic data. Initial efforts proposed to use differentially private stochastic gradient descent (DPSGD) (Abadi et al., 2016) as a drop-in replacement for SGD to update the GAN discriminator -an approach referred to as DPGAN (Xie et al., 2018;Beaulieu-Jones et al., 2019;Torkzadehmahani et al., 2019). Follow-up work (Jordon et al., 2019;Long et al., 2021;Chen et al., 2020;Wang et al., 2021) departs from this approach, proposing alternative privatization schemes for GANs, and reports significant improvements over the DPGAN baseline.
However, even the best of these GAN-based schemes leave much to be desired, as they are associated with significant drops in utility (Table 1). Other methods for generating DP synthetic data diverge from GAN-based architectures, yielding improvements to utility in most cases (Table 2). This raises the question of whether GANs are suitable for DP training, or if bespoke architectures are required for DP synthetic data generation.
Our contributions. We show that DPGANs give far better utility than previously demonstrated, and compete with or outperform almost all other methods for DP synthetic data. 1 Previously demonstrated deficiencies of DPGANs should not be attributed to inherent limitations of the framework, but rather to training issues. Specifically, we propose that the asymmetric noise addition in DPGANs (adding noise to discriminator updates only) weakens the discriminator relative to the generator, disrupting the careful balance necessary for successful GAN training. We propose that taking more discriminator steps between generator updates addresses the imbalance introduced by noise. With this change, DPGANs improve significantly (see Figure 1 and Table 1). Furthermore, we show this perspective on DPGAN training ("restoring parity to a discriminator weakened by DP noise") can be applied to improve training further. We make other modifications to discriminator training - larger batch sizes and an adaptive discriminator step frequency - to improve discriminator training and further improve upon the aforementioned results. In summary, we make the following contributions:
• We find that taking more discriminator steps between generator steps significantly improves DPGANs. Contrary to the previous results in the literature, DPGANs do compete with alternative GAN privatization schemes.
• We present empirical findings towards understanding why more frequent discriminator steps help. We propose an explanation based on asymmetric noise addition for why vanilla DPGANs do not perform well, and why taking more frequent discriminator steps helps.
• We employ our explanation as a principle for designing better private GAN training recipes, and indeed are able to improve over the aforementioned results.
Related work
Private GANs. The baseline DPGAN that employs a DPSGD-trained discriminator was introduced in Xie et al. (2018), and studied in follow-up work of Torkzadehmahani et al. (2019); Beaulieu-Jones et al. (2019). Despite significant interest in the approach (≈ 300 citations at time of writing), we were unable to find studies that explore the modifications we perform or uncover similar principles for improving training. As a consequence, subsequent work has departed from this approach, examining alternative privatization schemes for GANs (Jordon et al., 2019;Long et al., 2021;Chen et al., 2020;Wang et al., 2021). Broadly speaking, these approaches employ subsample-and-aggregate (Nissim et al., 2007) via the PATE approach (Papernot et al., 2017), dividing the data into ≥ 1K disjoint partitions and training teacher discriminators separately on each one. Our work shows that these privatization schemes are outperformed by DPSGD.
Other DP generative models. Other generative modelling frameworks have been applied to generate DP synthetic data: VAEs (Chen et al., 2018), maximum mean discrepancy (Harder et al., 2021;Vinaroz et al., 2022;Harder et al., 2022), Sinkhorn divergences (Cao et al., 2021), normalizing flows (Waites and Cummings, 2021), and diffusion models (Dockhorn et al., 2022). In a different vein, Chen et al. (2022) avoids learning a generative model, and instead generates a coreset of examples (≈ 20 per class) for the purpose of training a classifier. These approaches fall into two camps: applications of DPSGD to existing, highly-performant generative models; or custom approaches designed specifically for privacy which fall short of GANs when evaluated at their non-private limits (ε → ∞).
Concurrent work on DP diffusion models. Simultaneous and independent work 2 by Dockhorn et al. (2022) is the first to investigate DP training of diffusion models. They achieve impressive state-of-the-art results for generating DP synthetic data in a variety of settings, in particular, outperforming our results for DPGANs reported in this paper. We consider our results to still be of significant interest to the community, as we challenge the conventional wisdom regarding deficiencies of DPGANs, showing that they give much better utility than previously thought. Indeed, GANs are still one of the most popular and well-studied generative models, and consequently, there are many cases where one would prefer a GAN over an alternative approach. By revisiting several of the design choices in DPGANs, we give guidance on how to seamlessly introduce differential privacy into such pipelines. Furthermore, both our work and the work of Dockhorn et al. (2022) are aligned in supporting a broader message: training conventional machine learning architectures with DPSGD frequently achieves state-of-the-art results under differential privacy. Indeed, both our results and theirs outperform almost all custom methods designed for DP synthetic data. This reaffirms a similar message recently demonstrated in other private ML settings, including image classification (De et al., 2022) and NLP (Li et al., 2022;Yu et al., 2022).
DP tabular data synthesis. Our investigation focuses on image datasets, while many important applications of private data generation involve tabular data. In these settings, marginal-based approaches (Hardt et al., 2012;Zhang et al., 2017;McKenna et al., 2019) perform the best. While Tao et al. (2021) find that private GAN-based approaches fail to preserve even basic statistics in these settings, we believe that our techniques may yield similar improvements.
Preliminaries
Our goal is to train a generative model on sensitive data that is safe to release, i.e., it does not leak the secrets of individuals in the training dataset. We do this by ensuring the training algorithm A -which takes as input the sensitive dataset D ∈ U and returns the parameters of a trained (generative) model θ ∈ Θsatisfies differential privacy.
Definition 1 (Differential Privacy, Dwork et al. 2006b). A randomized algorithm A : U → Θ is (ε, δ)-differentially private if for every pair of neighbouring datasets D, D′ ∈ U, we have
P{A(D) ∈ S} ≤ exp(ε) · P{A(D′) ∈ S} + δ for all S ⊆ Θ.
Algorithm 1 TrainDPGAN(D; ·)
1: Input: Labelled dataset D = {(x_j, y_j)}_{j=1}^n. Discriminator D and generator G initializations φ_0 and θ_0. Optimizers OptD, OptG. Privacy parameter δ. Hyperparameters: n_D (D steps per G step), T (total number of D steps), B (expected batch size), C (clipping norm), σ (noise multiplier).
2: q ← B/|D| and t, k ← 0   ▷ Calculate sampling rate q, initialize counters.
3: while t < T do   ▷ Update D with DPSGD.
4:   S_t ∼ PoissonSample(D, q)   ▷ Sample a real batch S_t by including each (x, y) ∈ D w.p. q.
5:   S̃_t ∼ G(·; θ_k)^B   ▷ Sample a fake batch S̃_t of B examples.
6:   g_t^φ ← Σ_{(x,y)∈S_t} clip(∇_{φ_t}(−log D(x, y; φ_t)); C) + Σ_{(x̃,ỹ)∈S̃_t} clip(∇_{φ_t}(−log(1 − D(x̃, ỹ; φ_t))); C)   ▷ Clip per-example gradients.
7:   g̃_t^φ ← (1/2B)(g_t^φ + z_t), where z_t ∼ N(0, C²σ²I)   ▷ Add Gaussian noise.
8:   φ_{t+1} ← OptD(φ_t, g̃_t^φ)
9:   t ← t + 1
10:  if n_D divides t then   ▷ Perform a G update every n_D steps.
11:    S̃_t ∼ G(·; θ_k)^B
12:    g_k^θ ← (1/B) Σ_{(x̃,ỹ)∈S̃_t} ∇_{θ_k}(−log D(x̃, ỹ; φ_t))
13:    θ_{k+1} ← OptG(θ_k, g_k^θ)
14:    k ← k + 1
15:  end if
16: end while
17: ε ← PrivacyAccountant(T, σ, q, δ)   ▷ Compute privacy budget spent.
18: Output: Final G parameters θ_k and the (ε, δ)-DP guarantee.
In this work, we adopt the add/remove definition of DP, and say two datasets D and D′ are neighbouring if they differ in at most one entry, that is, D = D′ ∪ {x} or D′ = D ∪ {x}.
We highlight one convenient property of DP, known as closure under post-processing. This says that interacting with a privatized model (e.g., using it to compute gradients on non-sensitive data, generate samples) does not lead to any further privacy violation.
Proposition 2 (Post-processing). Let A : U → Θ be a randomized algorithm that is (ε, δ)-DP, and let f : Θ → Y be an arbitrary randomized mapping. Then f ∘ A : U → Y is (ε, δ)-DP.
DPSGD.
A gradient-based learning algorithm can be privatized by employing differentially private stochastic gradient descent (DPSGD) (Song et al., 2013;Bassily et al., 2014;Abadi et al., 2016) as a drop-in replacement for SGD. DPSGD involves clipping per-example gradients and adding Gaussian noise to their sum, which effectively bounds and masks the contribution of any individual point to the final model parameters. Privacy analysis of DPSGD follows from several classic tools in the DP toolbox: Gaussian mechanism, privacy amplification by subsampling, and composition (Dwork et al., 2006a;Dwork and Roth, 2014;Abadi et al., 2016;Wang et al., 2019). In our work, we use two different privacy accounting methods for DPSGD: (a) the classical approach of Mironov et al. (2019), implemented in Opacus (Yousefpour et al., 2021), and (b) the recent exact privacy accounting of Gopi et al. (2021). By default, we use the former technique for a closer direct comparison with prior works (though we note that some prior works use even looser accounting techniques). However, the latter technique gives tighter bounds on the true privacy loss, and for all practical purposes, is the preferred method of privacy accounting. We use Gopi et al. (2021) accounting only where indicated in Tables 1, 2, and 3.
DPGANs. Algorithm 1 details the training algorithm for DPGANs, which is effectively an instantiation of DPSGD. Note that only gradients for the discriminator D must be privatized (via clipping and noise), and not those for the generator G. This is a consequence of post-processing (Proposition 2) -the generator only interacts with the sensitive dataset indirectly via discriminator parameters, and therefore does not need further privatization.
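To make the structure of Algorithm 1 concrete, here is a minimal PyTorch sketch of one noisy discriminator update and the surrounding loop that takes n_D discriminator steps per generator step. This is not the authors' implementation (which uses Opacus for vectorized per-example gradients); the conditional interfaces G(z, y) and D(x, y) returning logits, the attribute G.latent_dim, the num_classes argument, and the slow microbatch loop are illustrative assumptions.

import torch
import torch.nn.functional as F

def noisy_discriminator_step(D, G, opt_D, real_x, real_y, B, C, sigma, num_classes, device):
    """One DPSGD discriminator update (lines 4-8 of Algorithm 1): clip per-example
    gradients to norm C, sum them, add N(0, C^2 sigma^2 I) noise, and average over 2B."""
    z = torch.randn(B, G.latent_dim, device=device)
    fake_y = torch.randint(0, num_classes, (B,), device=device)
    fake_x = G(z, fake_y).detach()

    params = [p for p in D.parameters() if p.requires_grad]
    grad_sum = [torch.zeros_like(p) for p in params]

    def accumulate(x, y, target):
        # Clipped gradient of the loss on a single example (a slow microbatch loop;
        # Opacus computes the same quantity with vectorized per-sample gradients).
        logit = D(x.unsqueeze(0), y.unsqueeze(0))
        loss = F.binary_cross_entropy_with_logits(logit, torch.full_like(logit, target))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (C / (norm + 1e-12)).clamp(max=1.0)
        for acc, g in zip(grad_sum, grads):
            acc.add_(g * scale)

    for x, y in zip(real_x, real_y):    # real examples, target 1
        accumulate(x, y, 1.0)
    for x, y in zip(fake_x, fake_y):    # fake examples, target 0 (also clipped, as in Algorithm 1)
        accumulate(x, y, 0.0)

    opt_D.zero_grad()
    for p, acc in zip(params, grad_sum):
        noise = torch.randn_like(acc) * (sigma * C)   # noise scales with the sensitivity C
        p.grad = (acc + noise) / (2 * B)
    opt_D.step()

def generator_step(D, G, opt_G, B, num_classes, device):
    """Non-private generator update (lines 11-13): the generator only sees the
    discriminator's parameters, so by post-processing it needs no extra noise."""
    z = torch.randn(B, G.latent_dim, device=device)
    y = torch.randint(0, num_classes, (B,), device=device)
    logit = D(G(z, y), y)
    loss = F.binary_cross_entropy_with_logits(logit, torch.ones_like(logit))
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()

def train_dpgan(D, G, opt_D, opt_G, batches, n_D, T, B, C, sigma, num_classes, device):
    """Outer loop: n_D noisy discriminator steps per generator step. `batches` should
    yield enough (x, y) real batches (Poisson-sampled in the paper's setup)."""
    for t, (real_x, real_y) in enumerate(batches, start=1):
        if t > T:
            break
        noisy_discriminator_step(D, G, opt_D, real_x.to(device), real_y.to(device),
                                 B, C, sigma, num_classes, device)
        if t % n_D == 0:
            generator_step(D, G, opt_G, B, num_classes, device)

Note that, with everything else fixed, changing n_D only changes how often generator_step runs; the number of noisy gradient queries against real data, and hence the privacy cost, is unchanged.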
Frequent discriminator steps improves private GANs
In this section, we discuss our main finding: the number of discriminator steps taken between each generator step (n D from Algorithm 1) plays a significant role in the success of private GAN training. For a fixed setting of DPSGD hyperparameters, there is an optimal range of values for n D that maximizes generation quality, in terms of both visual quality and utility for downstream classifier training. This value can be quite large (n D ≈ 100 in some cases).
Experimental details
Setup. We focus on labelled generation of MNIST (LeCun et al., 1998) and FashionMNIST , both of which are comprised of 60K 28 × 28 grayscale images divided into 10 classes. To build a strong baseline, we begin from an open source PyTorch (Paszke et al., 2019) implementation 3 of DCGAN (Radford et al., 2016) that performs well non-privately, and copy their training recipe. We then adapt their architecture to our purposes: removing BatchNorm layers (which are not compatible with DPSGD) and adding label embedding layers to enable labelled generation. Training this configuration non-privately yields labelled generation that achieves FID scores of 3.2 on MNIST and 15.9 on FashionMNIST. Finally, we note that these models are not small: D and G have 1.72M and 2.27M trainable parameters respectively. For further details, please see Appendix B.1.
Privacy implementation.
To privatize training, we use Opacus (Yousefpour et al., 2021) which implements per-example gradient computation. As discussed before, we use the Rényi differential privacy (RDP) accounting of Mironov et al. (2019) (except in a few noted instances, where we instead use the tighter Gopi et al. (2021) accounting). For our baseline setting, we use the following DPSGD hyperparameters: we keep the non-private (expected) batch size B = 128, and use a noise scale σ = 1 and clipping norm C = 1. Under these settings, we have the budget for T = 450K discriminator steps when targeting (10, 10 −5 )-DP.
Evaluation. We evaluate our generative models by examining the visual quality and utility for downstream tasks of generated images. Following prior work, we measure visual quality by computing the Fréchet Inception Distance (FID) (Heusel et al., 2017) between 60K generated images and the entire test set. 4 To measure downstream task utility, we again follow prior work, and train a CNN classifier on 60K generated image-label pairs and report its accuracy on the real test set.
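As a rough illustration of the downstream-utility metric, the sketch below trains a classifier on generated image-label pairs only and scores it on the real test set. The classifier architecture, optimizer, and epoch count are placeholders rather than the exact choices used in the paper, and the generator interface follows the same assumptions as the sketch above (both models already on the target device).

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

@torch.no_grad()
def make_synthetic_dataset(G, n_samples, num_classes, latent_dim, device, batch=1000):
    # Sampling from the trained generator is post-processing: no additional privacy cost.
    xs, ys = [], []
    for _ in range(n_samples // batch):
        y = torch.randint(0, num_classes, (batch,), device=device)
        z = torch.randn(batch, latent_dim, device=device)
        xs.append(G(z, y).cpu())
        ys.append(y.cpu())
    return TensorDataset(torch.cat(xs), torch.cat(ys))

def downstream_accuracy(G, classifier, real_test_loader, device, epochs=10):
    # Train the classifier on 60K generated image-label pairs, then score it on real test data.
    synth = make_synthetic_dataset(G, 60_000, 10, G.latent_dim, device)
    loader = DataLoader(synth, batch_size=128, shuffle=True)
    opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    classifier.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(classifier(x.to(device)), y.to(device)).backward()
            opt.step()
    classifier.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in real_test_loader:
            pred = classifier(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.numel()
    return correct / total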
Results
More frequent discriminator steps improves generation. We plot in Figures 1a and 2 the evolution of FID and downstream accuracy during DPGAN training for both MNIST and FashionMNIST, under varying discriminator update frequencies n D . The effect of this parameter has outsized impact on the final results. For MNIST, n D = 50 yields the best results; on FashionMNIST, the best FID is obtained at n D = 200 and the best accuracy at n D = 100.
We emphasize that increasing the frequency of discriminator steps, relative to generator steps, does not affect the privacy cost of Algorithm 1. For any setting of n D , we perform the same number of noisy gradient queries on real data -what changes is the total number of generator steps taken over the course of training, which is reduced by a factor of n D .
Private GANs are on a path to mode collapse. For the MNIST results in Figures 1a and 2a, we observe that at low discriminator update frequencies (n D = 10), the best FID and accuracy scores occur early in training, well before the privacy budget we are targeting is exhausted. 5 In fact, at 50K discriminator steps (ε ≈ 2.85), n D = 10 has better FID (30.6) and accuracy (83.3%) than other settings of n D . However, these results deteriorate with continued training. In Figure 3, we plot the evolution of generated images for this n D = 10 run over the course of training and observe qualitative evidence of mode collapse, co-occurring with the deterioration in FID and accuracy observed in the first 4 data points of the n D = 10 run in Figures 1a and 2a.

Figure 3: Evolution of samples drawn during training with nD = 10 (panels at t = 50K, 100K, 150K, and 200K), when targeting (10, 10^-5)-DP. This setting reports its best FID and downstream accuracy at t = 50K iterations (ε ≈ 2.85). As training progresses beyond this point, we observe mode collapse for several classes (e.g., the 6's and 7's, particularly at t = 150K), co-occurring with the deterioration in evaluation metrics (these samples correspond to the first 4 data points in the nD = 10 line in Figures 1a and 2a).
An optimal discriminator update frequency. These results suggest that, fixing the other DPSGD hyperparameters, there is an optimal setting for the discriminator step frequency n D that strikes a balance between: (a) being too low, causing the generation quality to peak early in training and then undergo mode collapse, so that all subsequent training consumes additional privacy budget without improving the model; and (b) being too high, preventing the generator from taking enough steps to converge before the privacy budget is exhausted (an example of this is the n D = 200 run in Figure 2a). Striking this balance results in the most effective utilization of privacy budget towards improving the generator.
5 Why does taking more steps help?
In this section, we present empirical findings towards understanding why more frequent discriminator steps improve DPGAN training. We propose an explanation that is consistent with our findings.
How does DP affect GAN training? Figure 4 compares the accuracy of the GAN discriminator on held-out real and fake examples immediately before each generator step, between private and non-private training with different settings of n D . We observe that non-privately at n D = 1, discriminator accuracy stabilizes at around 60%. Naively introducing DP (n D = 1) leads to a qualitative difference: DP causes discriminator accuracy to drop to 50% (i.e., comparable accuracy to randomly guessing) immediately at the start of training, never to recover. 6 For other settings of n D , we make the following observations: (1) larger n D corresponds to higher discriminator accuracy in early training; (2) in a training run, discriminator accuracy decreases throughout as the generator improves; (3) after discriminator accuracy falls below a certain threshold, the generator degrades or sees limited improvement. 7 Based on these observations, we propose the following explanation for why more frequent discriminator steps help:
• Generator improvement occurs when the discriminator is effective at distinguishing between real and fake data.
• The asymmetric noise addition introduced by DP to the discriminator makes such a task difficult, resulting in limited generator improvement.
• Allowing the discriminator to train longer on a fixed generator improves its accuracy, recovering the non-private case where the generator and discriminator are balanced.
Checkpoint restarting experiment. We perform a checkpoint restarting experiment to examine this explanation in a more controlled setting. We train a non-private GAN for 3K generator steps, and save checkpoints of D and G (and their respective optimizers) at 1K, 2K, and 3K steps. We restart training from each of these checkpoints for 1K steps under different n D and privacy settings. We plot the progression of discriminator accuracy, FID, and downstream classification accuracy. Results are pictured in Figure 5. Broadly, our results corroborate the observations that discriminator accuracy improves with larger n D and decreases with better generators, and that the generator improvement occurs when the discriminator has sufficiently high accuracy.
Does reducing noise accomplish the same thing? In light of the above explanation, we ask if reducing the noise level σ can offer the same improvement as taking more steps, as reducing σ should also improve discriminator accuracy before a generator step. To test this: starting from our setting in Section 4, fixing n D = 1, and targeting MNIST at ε = 10, we search over a grid of noise levels σ (the lowest of which, σ = 0.4, admits a budget of only T = 360 discriminator steps). Results are pictured in Figure 6. We obtain a best FID of 127.1 and best accuracy of 57.5% at noise level σ = 0.45. Hence we can conclude that in this experimental setting, incorporating discriminator update frequency in our design space allows for more effective use of privacy budget for improving generation quality.
Does taking more discriminator steps always help? As we discuss in more detail in Section 6.1, when we are able to find other means to improve the discriminator beyond taking more steps, tuning discriminator update frequency may not yield improvements. To illustrate with an extreme case, consider eliminating the privacy constraint. In non-private GAN training, taking more steps is known to be unnecessary. We corroborate this result: we run our non-private baseline from Section 4 with the same number of generator steps, but opt to take 10 discriminator steps between each generator step instead of 1. FID worsens from 3.2 → 8.3, and accuracy worsens from 96.8% → 91.3%.
Better generators via better discriminators
Our proposed explanation in Section 5 provides a concrete suggestion for improving GAN training: effectively use our privacy budget to maximize the number of generator steps taken when the discriminator has sufficiently high accuracy. We experiment with modifications to the private GAN training recipe towards these ends, which translate to improved generation.
Larger batch sizes
Several recent works have demonstrated that for classification tasks, DPSGD achieves higher accuracy with larger batch sizes, after tuning the noise scale σ accordingly (Tramèr and Boneh, 2021; Anil et al., 2022; De et al., 2022). GAN training is typically conducted with small batch sizes (for example, DCGAN uses B = 128, which we adopt; StyleGAN uses B = 32). Therefore it is interesting to see if large batch sizes indeed improve private GAN training. We corroborate that larger batch sizes do not significantly improve our non-private MNIST baseline from Section 4: when we go up to B = 2048 from B = 128, FID stays at 3.2 and accuracy improves from 96.8% → 97.5%.
Results. We scale up batch sizes, considering B ∈ {64, 128, 512, 2048}, and search for the optimal noise scale σ and n D (details in Appendix B.2). We target both ε = 1 and ε = 10. We report the best results from our hyperparameter search in Table 2. We find that larger batch sizes lead to improvements: for both ε = 1 and ε = 10, the best results are achieved at B = 512 and B = 2048. We also note that for large batch sizes, the optimal number of generator steps can be quite small. For B = 2048, σ = 4.0, targeting MNIST at ε = 10, n D = 5 is the optimal discriminator update frequency, and improves over our best B = 128 setting employing n D = 50.
Adaptive discriminator step frequency
Our observations from Sections 4 and 5 motivate us to consider adaptive discriminator step frequencies.
As pictured in Figure 4, discriminator accuracy drops during training as the generator improves. In this scenario, we want to take more steps to improve the discriminator, in order to further improve the generator. However, using a large discriminator update frequency right from the beginning of training is wasteful - as evidenced by the fact that low n D achieves the best FID and accuracy early in training. Hence we propose to start at a low discriminator update frequency (n D = 1), and ramp up when our discriminator is performing poorly. Using accuracy on real data as the signal would require releasing it with DP; while this is feasible, it introduces the additional problem of having to find the right split of privacy budget for the best performance. We observe, however, that overall discriminator accuracy is correlated with the discriminator's accuracy on fake samples alone (which is free to evaluate, by post-processing). Hence we use the latter as a proxy to assess discriminator performance.
We propose an adaptive step frequency, parameterized by β and d. β is the decay parameter used to compute the exponential moving average (EMA) of discriminator accuracy on fake batches before each generator update. d is the accuracy floor upon reaching which we move to the next update frequency n D ∈ {1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, ...}. Additionally, we promise a grace period of 2/(1 − β) generator steps before moving on to the next update frequency - motivated by the fact that a β-EMA's value is primarily determined by its last 2/(1 − β) observations. We use β = 0.99 in all settings, and try d = 0.6 and d = 0.7, finding that 0.7 works better for large batches. An additional benefit of the adaptive step frequency is that we do not have to search for the optimal update frequency. Although the adaptive step frequency introduces the extra hyperparameter of the threshold d, we found that these two settings (d = 0.6 and d = 0.7) were sufficient to improve over the results of a much more extensive hyperparameter search over n D (whose optimal value varied significantly based on the noise level σ and expected batch size B).
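The following is a minimal sketch of such a scheduler; β, d, the frequency ladder, and the grace period follow the values just described, while the initial EMA value and the class interface are our own illustrative choices.

class AdaptiveStepFrequency:
    """EMA-based schedule for the discriminator step frequency n_D."""

    LADDER = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000]

    def __init__(self, beta=0.99, d=0.7):
        self.beta = beta            # EMA decay
        self.d = d                  # accuracy floor that triggers a promotion
        self.ema = 1.0              # initial value (our choice; not specified in the text)
        self.idx = 0                # start at n_D = 1
        self.grace = 0              # generator steps left before another promotion is allowed

    @property
    def n_D(self):
        return self.LADDER[self.idx]

    def update(self, fake_acc):
        """Call once per generator step with the discriminator's accuracy on the fake
        batch (free to compute under DP, by post-processing); returns the new n_D."""
        self.ema = self.beta * self.ema + (1 - self.beta) * fake_acc
        if self.grace > 0:
            self.grace -= 1
        elif self.ema < self.d and self.idx + 1 < len(self.LADDER):
            self.idx += 1                               # discriminator is struggling: more D steps
            self.grace = round(2 / (1 - self.beta))     # grace period of 2/(1-beta) generator steps
        return self.n_D

In training, one would call update(fake_acc) once per generator step and then take n_D discriminator steps before the next generator step.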
Comparison with previous results in the literature

MNIST and FashionMNIST
Table 2 summarizes our best experimental settings for MNIST and FashionMNIST, and situates them in the context of previously reported results for the task. We also present a visual comparison in Figure 7, and provide examples of generated images in Figures 9 and 10 for ε = 10, and Figures 11 and 12 for ε = 1.
Plain DPSGD beats all alternative GAN privatization schemes. Our baseline DPGAN from Section 4, with the appropriate choice of n D (and without the modifications described in this section yet), outperforms all other GAN-based approaches proposed in the literature (GS-WGAN, PATE-GAN, G-PATE, and DataLens) uniformly across both metrics, both datasets, and both privacy levels.
8 Since PSG produces a coreset of only 200 examples (20 per class), the covariance of its InceptionNet-extracted features is singular, and therefore it is not possible to compute an FID score. 9 We group per-class unconditional GANs together with conditional GANs under the DPGAN umbrella. 10 Results from Vinaroz et al. (2022) are presented graphically in the paper. Exact numbers can be found in their code.
Large batch sizes and adaptive discriminator step frequency improve GAN training. Broadly speaking, across both privacy levels and both datasets, we see an improvement from taking larger batch sizes, and then another with an adaptive step frequency.
Comparison with state-of-the-art. With the exception of DPDM, our best DPGANs are competitive with state-of-the-art approaches for DP synthetic data, especially in terms of FID scores.
CelebA-Gender
We also report results on generating 32 × 32 CelebA, conditioned on gender at (10, 10 −6 )-DP. For these experiments, we used slightly larger models (2.64M and 3.16M parameters for D and G respectively), and employed large batches (B = 2048) and adaptive discriminator step frequency with threshold d = 0.6. Results are summarized in Table 3 and visualized in Figure 8. For more example generations, see Figure 13.
Conclusion
We revisit differentially private GANs and show that, with appropriate tuning of the training procedure, they can perform dramatically better than previously thought. Some crucial modifications include increasing the number of discriminator steps, increasing the batch size, and introducing adaptive discriminator step frequency. We explore the hypothesis that the previous deficiencies of DPGANs were due to poor classification accuracy of the discriminator. More broadly, our work supports the recurring finding that carefully-tuned DPSGD on conventional architectures can yield strong results for differentially private machine learning.
A Generated samples
We provide a few non-cherrypicked samples for MNIST and FashionMNIST at ε = 10 and ε = 1, as well as 32 × 32 CelebA-Gender at ε = 10.

B Implementation details

B.1 MNIST and FashionMNIST training recipe

For MNIST and FashionMNIST, we begin from an open source PyTorch implementation of DCGAN (Radford et al., 2016) (available at this link) that performs well non-privately, and copy their training recipe. This includes: batch size B = 128, the Adam optimizer (Kingma and Ba, 2015) with parameters (α = 0.0002, β1 = 0.5, β2 = 0.999) for both G and D, the non-saturating GAN loss (Goodfellow et al., 2014), and a 5-layer fully convolutional architecture with width parameter d = 128.
To adapt it to our purposes, we make three architectural modifications: in both G and D we (1) remove all BatchNorm layers (which are not compatible with DPSGD); (2) add label embedding layers to enable labelled generation; and (3) adjust convolutional/transpose convolutional stride lengths and kernel sizes as well as remove the last layer, in order to process 1 × 28 × 28 images without having to resize. Finally, we remove their custom weight initialization, opting for PyTorch defaults.
Our baseline non-private GANs are trained for 45K steps. We train our non-private GANs with Poisson sampling as well: for each step of discriminator training, we sample real examples by including each element of our dataset independently with probability B/n, where n is the size of our dataset. We then add B fake examples sampled from G to form our fake/real combined batch.
Clipping fake sample gradients. When training the discriminator privately with DPSGD, we draw B fake examples and compute clipped per-example gradients on the entire combined batch of real and fake examples (see Algorithm 1). This is the approach taken in the prior work of Torkzadehmahani et al. (2019). We remark that this is purely a design choice -it is not necessary to clip the gradients of the fake samples, nor to process them together in the same batch. So long as we preserve the sensitivity of gradient queries with respect to the real data, the same amount of noise will suffice for privacy.
B.2 Large batch size hyperparameter search
We scale up batch sizes, considering B ∈ {64, 128, 512, 2048}, and search for the optimal noise scale σ and n_D. For B = 128 targeting ε = 10, we search over three noise scales, Σ^{ε=10}_{B=128} = {0.6, 1.0, 1.4}. We choose candidate noise scales for other batch sizes as follows: when considering a batch size 128n, we search over Σ^{ε=10}_{B=128n} := {√n · σ : σ ∈ Σ^{ε=10}_{B=128}}. We also target the high privacy (ε = 1) regime. For ε = 1, we multiply all noise scales by 5: Σ^{ε=1}_{B} = {5σ : σ ∈ Σ^{ε=10}_{B}}. For each setting of (B, σ), we search over a grid of n_D ∈ {1, 2, 5, 10, 20, 50, 100, 200, 500}. Due to compute limitations, we omit some values that we are confident will fail (e.g., trying n_D = 1 when mode collapse occurs for n_D = 5).
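For concreteness, the snippet below enumerates the (B, σ) grids described above; the rounding and dictionary layout are incidental choices.

import math

base_sigmas = [0.6, 1.0, 1.4]                    # noise scales for B = 128 at epsilon = 10
batch_sizes = [64, 128, 512, 2048]
nd_grid = [1, 2, 5, 10, 20, 50, 100, 200, 500]

sigma_grid = {}
for B in batch_sizes:
    scale = math.sqrt(B / 128)                   # batch size 128n -> multiply sigma by sqrt(n)
    sigma_grid[(B, 10)] = [round(scale * s, 3) for s in base_sigmas]      # target epsilon = 10
    sigma_grid[(B, 1)] = [round(5 * scale * s, 3) for s in base_sigmas]   # epsilon = 1: multiply by 5

# Each (B, sigma) pair is then swept over nd_grid, skipping settings expected to fail.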
Figure 1: DPGAN results on MNIST synthesis at (10, 10^-5)-DP. (a) We find that increasing nD, the number of discriminator steps taken between generator steps, significantly improves image synthesis. Increasing nD = 1 to nD = 50 improves FID from 205.9 → 19.4. (b) Corresponding synthesized images (each trained with the same privacy budget). We observe that large nD improves visual quality, and low nD leads to mode collapse.
Figure 2: DPGAN results over training runs using different discriminator update frequencies nD, targeting (10, 10^-5)-DP. Each plotted line indicates the utility of a model over a single training run, as the privacy budget is expended. (a) As a measure of synthetic data utility, we plot the test set accuracy of a CNN trained on generated data only. Accuracy mirrors the FID scores from Figure 1a. Going from nD = 1 to nD = 50 improves accuracy from 33.7% → 92.9%. Further nD increases hurt accuracy. (b) and (c) We obtain similar results for FashionMNIST. Note that the optimal nD is higher (around nD ≈ 100). At nD = 100, we obtain an FID of 91.5 and accuracy of 71.1%.
Figure 4: Exponential moving average of GAN discriminator accuracy on mini-batches immediately before each generator step. While non-privately the discriminator maintains a 60% accuracy, the private discriminator with nD = 1 is effectively a random guess. Increasing the number of discriminator steps recovers the discriminator's advantage early on, leading to generator improvement. As the generator improves, the discriminator's task is made more difficult, driving down accuracy.
Figure 5: We restart training under various privacy and nD settings at 3 checkpoints taken at 1K, 2K, and 3K generator steps into non-private training. We plot the progression of discriminator accuracy, FID, and downstream classification accuracy. The black dots correspond to the initial values of a checkpoint. We observe that low nD settings do not achieve comparable discriminator accuracy to non-private training (a), and result in degradation of utility ((b) and (c)). Discriminator accuracy for nD = 50 tracks non-private training, and we observe utility improvement throughout training as in the non-private setting.
Figure 6: On MNIST, we fix nD = 1 and report results for various settings of the DPSGD noise scale σ, where the number of iterations T is chosen for each σ to target (10, 10^-5)-DP. The gap between the dashed lines represents the advancement of the utility frontier by incorporating the choice of nD into our design space.
Figure 7: MNIST and FashionMNIST results at (10, 10^-5)-DP for different methods. Images of other methods from (Cao et al., 2021).
Figure 8: 32 × 32 CelebA-Gender at (10, 10^-6)-DP. From top to bottom: DPDM (unconditional generation), DP-Sinkhorn, and our DPGAN.

Figure 9: Some non-cherrypicked MNIST samples from our method, ε = 10.

Figure 10: Some non-cherrypicked FashionMNIST samples from our method, ε = 10.

Figure 11: Some non-cherrypicked MNIST samples from our method, ε = 1.

Figure 12: Some non-cherrypicked FashionMNIST samples from our method, ε = 1.
Figure 13: Some non-cherrypicked CelebA samples from our method, ε = 10.
Table 1: A summary of our results, compared to results reported in previous work on private GANs. Acc. (%) refers to downstream classification accuracy of CNN models trained with generated data. The middle two rows are a composite of the best results reported in the literature for DPGANs and other GAN privatization schemes (see Tables 2 and 3 for correspondences). Here we use Gopi et al. (2021) privacy accounting for our results. We find significant improvement over all previous GAN-based methods for DP synthetic data.
Table 2: We gather previously reported results in the literature on the performance of various methods for labelled generation of MNIST and FashionMNIST, compared with our results. Note that Reported In refers to the source of the numerical result, not the originator of the approach. For downstream accuracy, we report the best accuracy among classifiers they use, and compare against our CNN classifier accuracy. (*) For our results, we target ε = 10/ε = 1 with Opacus accounting and additionally report ε using the improved privacy accounting of Gopi et al. (2021).
Table 3: Top half of the table: Comparison to state-of-the-art results on 32 × 32 CelebA-Gender, targeting (ε, 10^-6)-DP (except for the results of Long et al. (2021), which target (ε, 10^-5)-DP). (*) For our results, we target ε = 10 with Opacus accounting and additionally report ε using the improved privacy accounting of Gopi et al. (2021). DPDM reports a much better FID score than our DPGAN (which, itself, is an improvement over previous results). Our DPGAN achieves the best reported accuracy score. Bottom half of the table: Results for GAN-based approaches reported in Long et al. (2021) and Wang et al. (2021), which are not directly comparable because they target (10, 10^-5)-DP and use 64 × 64 CelebA-Gender.
2 The initial versions of their work and ours appeared online simultaneously (Anonymous, 2023a,b).
3 Courtesy of Hyeonwoo Kang (https://github.com/znxlwm). Code available at this link. 4 We use an open source PyTorch implementation to compute FID: https://github.com/mseitzer/pytorch-fid. 5 This observation has been reported in Neunhoeffer et al. (2021), serving as motivation for their remedy of taking a mixture of intermediate models encountered in training. We are not aware of any mentions of this aspect of DPGAN training in papers reporting DPGAN baselines for labelled image synthesis.
6 Our plot only shows the first 15K generator steps, but we remark that this persists until the end of training (450K steps). 7 For n D = 10, accuracy falls below 50% after 5K G steps (= 50K D steps), which corresponds to the first point in the n D = 10 line in Figures 1a and 2a. For n D = 50, accuracy falls below 50% after 5K G steps (= 250K D steps), which corresponds to the 5th point in the n D = 50 line in Figures 1a and 2a.
Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In CCS'16: 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016.
Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, and Pasin Manurangsi. Large-scale differentially private BERT. In Findings of the Association for Computational Linguistics: EMNLP 2022, 2022.
Anonymous. Differentially private diffusion models. In Submitted to the 11th International Conference on Learning Representations (ICLR'23), 2023a. URL https://openreview.net/forum?id=pX21pH4CsNB. Under review.
Anonymous. Private GANs, revisited. In Submitted to the 11th International Conference on Learning Representations (ICLR'23), 2023b. URL https://openreview.net/forum?id=QEmn_Hvh7j8. Under review.
Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science (FOCS'14), 2014.
Brett K. Beaulieu-Jones, Zhiwei Steven Wu, Chris Williams, Ran Lee, Sanjeev P. Bhavnani, James Brian Byrd, and Casey S. Greene. Privacy-preserving generative deep neural networks support clinical data sharing. Circulation: Cardiovascular Quality and Outcomes, 12(7), 2019.
Tianshi Cao, Alex Bie, Arash Vahdat, Sanja Fidler, and Karsten Kreis. Don't generate me: Training differentially private generative models with Sinkhorn divergence. In Advances in Neural Information Processing Systems 34 (NeurIPS'21), 2021.
Dingfan Chen, Tribhuvanesh Orekondy, and Mario Fritz. GS-WGAN: A gradient-sanitized approach for learning differentially private generators. In Advances in Neural Information Processing Systems 33 (NeurIPS'20), 2020.
Dingfan Chen, Raouf Kerkouche, and Mario Fritz. Private set generation with discriminative information. In Advances in Neural Information Processing Systems 35 (NeurIPS'22), 2022.
Qingrong Chen, Chong Xiang, Minhui Xue, Bo Li, Nikita Borisov, Dali Kaafar, and Haojin Zhu. Differentially private data generative models. CoRR, abs/1812.02274, 2018.
Soumith Chintala, Emily Denton, Martin Arjovsky, and Michael Mathieu. How to train a GAN? Tips and tricks to make GANs work. GitHub, 2016.
Soham De, Leonard Berrada, Jamie Hayes, Samuel L. Smith, and Borja Balle. Unlocking high-accuracy differentially private image classification through scale. CoRR, abs/2204.13650, 2022.
Tim Dockhorn, Tianshi Cao, Arash Vahdat, and Karsten Kreis. Differentially private diffusion models. CoRR, abs/2210.09929, 2022.
Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211-407, 2014.
Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In 25th Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT'06), 2006a.
Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography, 2006b.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27 (NIPS'14), 2014.
Sivakanth Gopi, Yin Tat Lee, and Lukas Wutschitz. Numerical composition of differential privacy. In Advances in Neural Information Processing Systems 34 (NeurIPS'21), 2021.
Frederik Harder, Kamil Adamczewski, and Mijung Park. DP-MERF: Differentially private mean embeddings with random features for practical privacy-preserving data generation. In 24th International Conference on Artificial Intelligence and Statistics (AISTATS'21), 2021.
Frederik Harder, Milad Jalali Asadabadi, Danica J. Sutherland, and Mijung Park. Differentially private data generation needs better features. CoRR, abs/2205.12900, 2022.
Moritz Hardt, Katrina Ligett, and Frank McSherry. A simple and practical algorithm for differentially private data release. In Advances in Neural Information Processing Systems 25 (NIPS'12), 2012.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems 30 (NIPS'17), 2017.
James Jordon, Jinsung Yoon, and Mihaela van der Schaar. PATE-GAN: Generating synthetic data with differential privacy guarantees. In 7th International Conference on Learning Representations (ICLR'19), 2019.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations (ICLR'15), 2015.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86(11):2278-2324, 1998.
Xuechen Li, Florian Tramèr, Percy Liang, and Tatsunori Hashimoto. Large language models can be strong differentially private learners. In 10th International Conference on Learning Representations (ICLR'22), 2022.
Yunhui Long, Boxin Wang, Zhuolin Yang, Bhavya Kailkhura, Aston Zhang, Carl Gunter, and Bo Li. G-PATE: Scalable differentially private data generator via private aggregation of teacher discriminators. In Advances in Neural Information Processing Systems 34 (NeurIPS'21), 2021.
Ryan McKenna, Daniel Sheldon, and Gerome Miklau. Graphical-model based estimation and inference for differential privacy. In Proceedings of the 36th International Conference on Machine Learning (ICML'19), 2019.
Ilya Mironov, Kunal Talwar, and Li Zhang. Rényi differential privacy of the sampled Gaussian mechanism. CoRR, abs/1908.10530, 2019.
Marcel Neunhoeffer, Steven Wu, and Cynthia Dwork. Private post-GAN boosting. In 9th International Conference on Learning Representations (ICLR'21), 2021.
Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. Smooth sensitivity and sampling in private data analysis. In Proceedings of the 39th Annual ACM Symposium on the Theory of Computing (STOC'07), pages 75-84, New York, NY, USA, 2007. ACM.
Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian J. Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. In 5th International Conference on Learning Representations (ICLR'17), 2017.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32 (NeurIPS'19), 2019.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In 4th International Conference on Learning Representations (ICLR'16), 2016.
Shuang Song, Kamalika Chaudhuri, and Anand D. Sarwate. Stochastic gradient descent with differentially private updates. In 2013 IEEE Global Conference on Signal and Information Processing, 2013.
Yuchao Tao, Ryan McKenna, Michael Hay, Ashwin Machanavajjhala, and Gerome Miklau. Benchmarking differentially private synthetic data generation algorithms. CoRR, abs/2112.09238, 2021.
Reihaneh Torkzadehmahani, Peter Kairouz, and Benedict Paten. DP-CGAN: Differentially private synthetic data and label generation. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops'19), 2019.
Florian Tramèr and Dan Boneh. Differentially private learning needs better features (or much more data). In 9th International Conference on Learning Representations (ICLR'21), 2021.
Margarita Vinaroz, Mohammad-Amin Charusaie, Frederik Harder, Kamil Adamczewski, and Mi Jung Park. Hermite polynomial features for private data generation. In Proceedings of the 39th International Conference on Machine Learning (ICML'22), 2022.
Chris Waites and Rachel Cummings. Differentially private normalizing flows for privacy-preserving density estimation. CoRR, abs/2103.14068, 2021.
Boxin Wang, Fan Wu, Yunhui Long, Luka Rimanic, Ce Zhang, and Bo Li. DataLens: Scalable privacy preserving training via gradient compression and aggregation. In CCS'21: 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021.
Yu-Xiang Wang, Borja Balle, and Shiva Prasad Kasiviswanathan. Subsampled Rényi differential privacy and analytical moments accountant. In 22nd International Conference on Artificial Intelligence and Statistics (AISTATS'19), 2019.
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. CoRR, abs/1708.07747, 2017.
Liyang Xie, Kaixiang Lin, Shu Wang, Fei Wang, and Jiayu Zhou. Differentially private generative adversarial network. CoRR, abs/1802.06739, 2018.
Ashkan Yousefpour, Igor Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, Mani Malek, John Nguyen, Sayan Gosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, and Ilya Mironov. Opacus: User-friendly differential privacy library in PyTorch. CoRR, abs/2109.12298, 2021.
Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A. Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, and Huishuai Zhang. Differentially private fine-tuning of language models. In 10th International Conference on Learning Representations (ICLR'22), 2022.
Jun Zhang, Graham Cormode, Cecilia M. Procopiuc, Divesh Srivastava, and Xiaokui Xiao. PrivBayes: Private data release via Bayesian networks. ACM Trans. Database Syst., 42(4), 2017.
C Additional discussion
GANhacks. Guidance in the non-private setting (tip 14 of Chintala et al. (2016)) prescribes to train the discriminator for more steps in the presence of noise (a regularization approach used in non-private GANs). This is the case for DP, and is our core strategy that yields the most significant gains in utility. We were not aware of this tip when we discovered this phenomenon, but it serves as validation of our finding. While Chintala et al. (2016) provides little elaboration, looking at further explorations of this principle in the non-private setting may offer guidance for improving DPGANs.
| [
"https://github.com/znxlwm).",
"https://github.com/mseitzer/pytorch-fid.5"
]
|
[
"Quasi-uniform structures and functors",
"Quasi-uniform structures and functors"
]
| [
"Minani Iragi \nDepartment of Mathematical Sciences\nUniversity of South Africa\nP.O. Box 392003UnisaSouth Africa\n",
"David Holgate [email protected] \nDepartment of Mathematics and Applied Mathematics\nInstitute of Mathematics\nFaculty of Mechanical Engineering\nUniversity of the Western Cape\n7535BellvilleSouth Africa\n\nBrno University of Technology\nTechnická 2616 69BrnoCzech Republic\n"
]
| [
"Department of Mathematical Sciences\nUniversity of South Africa\nP.O. Box 392003UnisaSouth Africa",
"Department of Mathematics and Applied Mathematics\nInstitute of Mathematics\nFaculty of Mechanical Engineering\nUniversity of the Western Cape\n7535BellvilleSouth Africa",
"Brno University of Technology\nTechnická 2616 69BrnoCzech Republic"
]
| [
"AMS Subject Classification"
]
| We study a number of categorical quasi-uniform structures induced by functors. We depart from a category C with a proper (E, M)-factorization system, then define the continuity of a C-morphism with respect to two syntopogenous structures (in particular with respect to two quasi-uniformities) on C and use it to describe the quasi-uniformities induced by pointed and copointed endofunctors of C. In particular, we demonstrate that every quasi-uniformity on a reflective subcategory of C can be lifted to a coarsest quasi-uniformity on C for which every reflection morphism is continuous.Thinking of categories supplied with quasi-uniformities as large "spaces", we generalize the continuity of C-morphisms (with respect to a quasi-uniformity) to functors. We prove that for an M-fibration or a functor that has a right adjoint, we can obtain a concrete construction of the coarsest quasi-uniformity for which the functor is continuous. The results proved are shown to yield those obtained for categorical closure operators. Various examples considered at the end of the paper illustrate our results. | null | [
"https://export.arxiv.org/pdf/2302.02757v1.pdf"
]
| 256,616,096 | 2302.02757 | c7887f685d905762efd2a3785a460955c03b87bb |
Quasi-uniform structures and functors
2020
Minani Iragi
Department of Mathematical Sciences
University of South Africa
P.O. Box 392003UnisaSouth Africa
David Holgate [email protected]
Department of Mathematics and Applied Mathematics
Institute of Mathematics
Faculty of Mechanical Engineering
University of the Western Cape
7535BellvilleSouth Africa
Brno University of Technology
Technická 2616 69BrnoCzech Republic
Quasi-uniform structures and functors
AMS Subject Classification
2020. *Corresponding author. Keywords: Closure operator, Syntopogenous structure, Quasi-uniform structure, (co)pointed endofunctor and Adjoint functor
We study a number of categorical quasi-uniform structures induced by functors. We depart from a category C with a proper (E, M)-factorization system, then define the continuity of a C-morphism with respect to two syntopogenous structures (in particular with respect to two quasi-uniformities) on C and use it to describe the quasi-uniformities induced by pointed and copointed endofunctors of C. In particular, we demonstrate that every quasi-uniformity on a reflective subcategory of C can be lifted to a coarsest quasi-uniformity on C for which every reflection morphism is continuous.Thinking of categories supplied with quasi-uniformities as large "spaces", we generalize the continuity of C-morphisms (with respect to a quasi-uniformity) to functors. We prove that for an M-fibration or a functor that has a right adjoint, we can obtain a concrete construction of the coarsest quasi-uniformity for which the functor is continuous. The results proved are shown to yield those obtained for categorical closure operators. Various examples considered at the end of the paper illustrate our results.
Introduction
The introduction of categorical closure operators ([6]) by Dikranjan and Giuli was the point of departure for the study of topological structures on categories. This approach eventually motivated the introduction of categorical interior ([20]) and neighbourhood ([15]) operators. While the categorical interior operators were shown to be pleasantly related to neighbourhood operators, a nice relationship between closure and neighbourhood operators in a category was lacking until the categorical topogenous structures ([14, 16]) were recently introduced. Indeed, the conglomerate of categorical topogenous structures is order isomorphic to the conglomerate of all neighbourhood operators and contains both the conglomerates of all interior and all closure operators as reflective subcategories.
A natural generalization of the definition of a categorical topogenous structure leads to the concept of a categorical syntopogenous structure, which provides a convenient setting to investigate a quasi-uniform structure on a category. This is the point of departure in [13, 17], where a categorical quasi-uniform structure is introduced and studied. Moreover, the use of syntopogenous structures allows the description of a quasi-uniformity as a family of categorical closure operators (see e.g. [12]). A recent account of this relationship between quasi-uniformity and closure operators can be found in [13, 18].
The present paper aims to further study a categorical quasi-uniform structure. Considering a category C with a proper (E, M)-factorization system, we show that for a syntopogenous structure S on C and an E-pointed endofunctor (F, η) of C, there is a coarsest syntopogenous structure S F,η on C for which every η X : X −→ F X is (S F,η , S)-continuous. Since a categorical quasi-uniformity is equivalent to a co-perfect syntopogenous structure and simple co-perfect syntopogenous structures are equivalent to idempotent closure operators (see e.g [12]), S F,η allows us to construct the quasi-uniform structure and the closure operator induced by a pointed endofunctor. In particular, we demonstrate that every quasi-uniformity U on a reflective subcategory of C can be lifted to a coarsest quasi-uniformity U F,η on C for which every reflection morphism is (U F,η , U)-continuous. When applied to spaces, U F,η turns out to describe initial structures induced by reflection maps. Dual results shall be obtained in the case of a copointed endofunctor. For a functor F : A −→ C and quasi-uniformities U and V on A and C respectively, we introduce the (U, V)-continuity of F . It is shown that if F is an M-fibration or has a right adjoint, then one can concretely describe the coarsest quasi-uniformity V F on A for which F is (V F , V)-continuous. We then use the categorical co-perfect syntopogenous structures, to obtain a concrete description of the largest closure operator making F continuous.
In Section 4, we describe categorical quasi-uniform structures induced by (co)pointed endofunctors, which we construct using syntopogenous structures (Proposition 4.4, Theorems 4.4 and 4.9). It is interesting to note that particular cases of these quasi-uniform structures correspond to the closure operators obtained by Dikranjan and Tholen in [4] (Chapter 5, Theorems 5.12 and 5.12*). The study of the continuity of functors with respect to two quasi-uniform structures, and its use to describe the initial quasi-uniform structures induced by an M-fibration or a functor having a right adjoint (Propositions 5.4, 5.7 and 5.9, Theorems 5.5 and 5.8), is carried out in Section 5. Finally, in Section 6, we present a number of examples to illustrate the results obtained.
Preliminaries
Our blanket reference for categorical concepts is [1]. The basic facts on categorical closure operators used here can be found in [4] or [6]. For the categorical topogenous, quasi-uniform and syntopogenous structures, we use [17] and [14]. Throughout the paper, we consider a category C supplied with a proper (E, M)-factorization system for morphisms. The category C is assumed to be M-complete so that pullbacks of M-morphisms along C-morphisms and arbitrary M-intersections of M-morphisms exist and are again in M. For any X ∈ C, subX = {m ∈ M | cod(m) = X}. It is ordered as follows: n ≤ m if and only if there exists j such that m • j = n. If m ≤ n and n ≤ m then they are isomorphic. We shall simply write m = n in this case. SubX is a (possibly large) complete lattice with greatest element 1 X : X −→ X and the least element 0 X :
O X −→ X. Any C-morphism, f : X −→ Y induces an image/pre-image adjunction f (m) ≤ n if and only if m ≤ f −1 (n) for all n ∈ subY , m ∈ subX with f (m) the M-component of the (E, M)-factorization of f • m while f −1 (n) is the pullback of n along f . We have from the image/pre-image adjunction that f (f −1 (n)) ≤ n (with f (f −1 (n)) = n if f ∈ E and E is pullback stable along M-morphisms) and m ≤ f −1 (f (m)) (with m = f −1 (f (m)) if f ∈ M)
for any n ∈ subY and m ∈ subX.
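For orientation, in the basic example C = Set with its (surjective, injective)-factorization structure, subobjects of X are subsets, f (A) is the direct image and f −1 (B) the preimage; the adjunction reads f (A) ⊆ B ⇔ A ⊆ f −1 (B), with f (f −1 (B)) = B ∩ f (X) ⊆ B and A ⊆ f −1 (f (A)), the latter being an equality when f is injective.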
Applying adjointness repeatedly we obtain the lemma below.
Lemma 1. Consider a commutative square of C-morphisms p • f ′ = f • p ′ , where f : X −→ Y , f ′ : X ′ −→ Y ′ , p ′ : X ′ −→ X and p : Y ′ −→ Y . Then for any subobject n ∈ subY ′ , p ′ (f ′−1 (n)) ≤ f −1 (p(n)).
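In outline: since f • p ′ = p • f ′ and images are monotone, f (p ′ (f ′−1 (n))) = p(f ′ (f ′−1 (n))) ≤ p(n), which by the image/pre-image adjunction is equivalent to p ′ (f ′−1 (n)) ≤ f −1 (p(n)).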
Definition 1.
A pointed endofunctor of C is a pair (F, η) consisting of a functor F : C −→ C and a natural transformation η : 1 C −→ F . For any C-morphism f : X −→ Y , naturality of η gives the commutative square F f • η X = η Y • f . If each η X belongs to a class F of C-morphisms, then (F, η) is F -pointed.
A copointed endofunctor of C is defined dually.
Definition 2.
A closure operator c on C with respect to M is given by a family of maps {c X : subX −→ subX | X ∈ C} such that:
(C1) m ≤ c X (m) for all m ∈ subX;
(C2) m ≤ n ⇒ c X (m) ≤ c X (n) for all m, n ∈ subX;
(C3) every morphism f : X −→ Y is c-continuous, that is: f (c X (m)) ≤ c Y (f (m)) for all m ∈ subX.
We denote by CL(C, M) the conglomerate of all closure operators on C with respect to M ordered as follows:
c ≤ c ′ if c X (m) ≤ c ′ X (m)
for all m ∈ subX and X ∈ C.
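A guiding example: in the category Top of topological spaces with its (surjective, embedding)-factorization structure, the Kuratowski closure k X (A) (the topological closure of a subspace A of X) satisfies (C1)-(C3); condition (C3) is the familiar fact that f (k X (A)) ≤ k Y (f (A)) for every continuous map f : X −→ Y .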
Definition 3. A closure operator c on C is idempotent if c X (c X (m)) = c X (m)
for all m ∈ subX and X ∈ C.
ICL(C, M) will denote the conglomerate of all idempotent closure operators on C.
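The Kuratowski closure of the previous example is idempotent. A standard non-idempotent closure operator on Top is the sequential closure σ, where σ X (A) consists of all limits of sequences in A: continuity of maps gives f (σ X (A)) ≤ σ Y (f (A)), but σ X (σ X (A)) may be strictly larger than σ X (A).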
Definition 4. [14]
A topogenous order ❁ on C is a family {❁ X | X ∈ C} of relations, each ❁ X on subX, such that:
(T 1) m ❁ X n ⇒ m ≤ n for every m, n ∈ subX, (T 2) m ≤ n ❁ X p ≤ q ⇒ m ❁ X q for every m, n, p, q ∈ subX, and
(T 3) every morphism f : X −→ Y in C is ❁-continuous, m ❁ Y n ⇒ f −1 (m) ❁ X f −1 (n) for every m, n ∈ subY .
Given two topogenous orders ❁ and ❁ ′ on C, ❁ ⊆ ❁ ′ if and only if m ❁ X n ⇒ m ❁ ′ X n for all m, n ∈ subX. The resulting ordered conglomerate of all topogenous orders on C is denoted by TORD(C, M). A topogenous order ❁ is said to be
(1) ∧-preserving if (∀i ∈ I : m ❁ X n i ) ⇒ m ❁ X ⋀ i∈I n i , and
(2) interpolative if m ❁ X n ⇒ (∃ p) m ❁ X p ❁ X n, for all X ∈ C.
The ordered conglomerate of all ∧-preserving and interpolative topogenous orders is denoted by ∧-TORD(C, M) and INTORD(C, M), respectively. ∧-INTORD(C, M) will denote the conglomerate of all interpolative ∧-preserving topogenous orders.
Proposition 1. [14] ∧-TORD(C, M) is order isomorphic to CL(C, M). The inverse assignments of each other are given by c ❁ X (m) = ⋀{p | m ❁ X p} and m ❁ c X n ⇔ c X (m) ≤ n for all X ∈ C.
Corollary 1. ∧-INTORD(C, M) is order isomorphic to ICL(C, M).
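For example, on Top the relation A ❁ X B ⇔ k X (A) ≤ B (the closure of A is contained in B) is a ∧-preserving and interpolative topogenous order; under the isomorphism of Proposition 1 it corresponds to the Kuratowski closure operator.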
The quasi-uniform structures
It is well known (see e.g [8]) that an (entourage) quasi-uniformity on a set X can be equivalently expressed as an appropriate family of maps U : X −→ P(X). Since these maps can easily be extended to endomaps on P(X), it is possible to think of a quasi-uniformity on C as a suitable family of endomaps on subX for each X ∈ C. This is the point expressed in Definition 5. Let us denote by F (subX) the endofunctor category on subX for each X ∈ C.
It is clear that for all U, V ∈ F (subX), U ≤ V if U(m) ≤ V (m) for all m ∈ subX.
Definition 5. [17]
A quasi-uniformity on C with respect to M is a family U = {U X | X ∈ C} with U X a full subcategory of F (subX) for each X such that:
(U1) For any U ∈ U X , 1 X ≤ U,
(U2) For any U ∈ U X , there is U ′ ∈ U X such that U ′ • U ′ ≤ U,
(U3) For any U ∈ U X and U ≤ U ′ , U ′ ∈ U X ,
(U4) For any U, U ′ ∈ U X , U ∧ U ′ ∈ U X ,
(U5) For any C-morphism f : X −→ Y and U ∈ U Y , there is U ′ ∈ U X such that f (U ′ (m)) ≤ U(f (m)) for any m ∈ subX.
We shall denote by QUnif(C, M) the conglomerate of all quasi-uniform structures on C. It is ordered as follows: U ≤ V if for all X ∈ C and U ∈ U X , there is V ∈ V X such that V ≤ U. In most cases we describe a quasi-uniformity by defining a base for it. A base for a quasi-uniformity U on C is a family B = {B X | X ∈ C} with each B X a full subcategory of F (subX) for all X ∈ C satisfying all the axioms in Definition 5 except (U3). If B X for any X ∈ C is a base element with a single member V , we shall write V X . A base for a quasi-uniformity on C is transitive if for all X ∈ C and U ∈ B X , U • U = U. A quasi-uniformity with a transitive base is called a transitive quasi-uniformity. The ordered conglomerate of all transitive quasi-uniformities on C will be denoted by TQUnif(C, M).
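For orientation, this abstracts the classical situation: for a quasi-uniform space (X, U) in the usual entourage sense, each U ∈ U induces the map A ↦ U[A] = {y ∈ X | (x, y) ∈ U for some x ∈ A} on subsets of X; conditions (U1)-(U4) mirror the usual entourage axioms, while (U5) plays the role of quasi-uniform continuity (compare the syntopogenous structure on QUnif used in the examples at the end of the paper).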
Definition 6. [17]
A syntopogenous structure on C with respect to M is a family S = {S X | X ∈ C} such that each S X is a set of relations on subX satisfying:
(S1) Each ❁ X ∈ S X is a relation on subX satisfying (T 1) and (T 2),
(S2) S X is a directed set with respect to inclusion,
(S3) ❁ X = ⋃ S X is an interpolative topogenous order.
The ordering of topogenous orders can be extended to syntopogenous structures in the following way:
S ≤ S ′ if for all X ∈ C and ❁ X ∈ S X , there is ❁ ′ X ∈ S ′ X such that ❁ X ⊆ ❁ ′ X . The resulting conglomerate will be denoted by SYnt(C, M). S ∈ SYnt(C, M) is co-perfect if each ❁ X ∈ S X is ∧-preserving for all X ∈ C. It is interpolative if every ❁ X ∈ S X interpolates.
The ordered conglomerate of all interpolative co-perfect syntopogenous structures will be denoted by INTCSYnt(C, M). The ordered conglomerate of all co-perfect syntopogenous structures will be denoted by CSYnt
(C, M). S ∈ SYnt(C, M) is simple if S X = {❁ X } where
❁ X is an interpolative topogenous order for any X ∈ C.
Theorem 1. [17] QUnif(C, M) is order isomorphic to CSYnt(C, M). The inverse assignments of each other, U −→ S U and S −→ U S , are given by S B X = {❁ U X | U ∈ B X }, where m ❁ U X n ⇔ U(m) ≤ n, and B S X = {U ❁ | ❁ X ∈ S X }, where U ❁ (m) = ⋀{n | m ❁ X n}, for all X ∈ C and m, n ∈ subX.
Since S X ⊆ ∧-TORD(C, M) for each S ∈ CSYnt(C, M), it follows from the above theorem and Proposition 1 that a quasi-uniformity on C is a collection of families of closure operators.
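To illustrate Theorem 1 in the simple case: if S X = {❁ X } is simple and co-perfect, the associated base element U ❁ (m) = ⋀{n | m ❁ X n} is precisely the closure operator c ❁ of Proposition 1, and conversely m ❁ U X n ⇔ U(m) ≤ n recovers ❁ X .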
By Corollary 1 (see also [18], Corollary 4.2.3), ∧-INTORD(C, M) is isomorphic to the conglomerate of idempotent closure operators and, from Theorem 1, CSYnt(C, M) ∼ = QUnif(C, M). Thus every idempotent closure operator on C is a quasi-uniformity.
Quasi-uniform structures induced by (co)pointed endofunctors
Throughout this section, the class E will be assumed to be stable under pullbacks along M-morphisms.
Already the axiom (S3) of Definition 6 includes the fact that every morphism in C must be continuous with respect to the syntopogenous structure. In the next definition, we introduce the continuity of a C-morphism with respect to two syntopogenous structures on C. Our aim is to use this definition to construct new syntopogenous structures from old; in particular, new quasi-uniformities and new closure operators from old. These are particularly important as they turn out to describe initial structures induced by certain maps in spaces.
Definition 7. Let S, S ′ ∈ SYnt(C, M). A morphism f : X −→ Y is (S, S ′ )-continuous if for all ❁ ′ Y ∈ S ′ Y , there is ❁ X ∈ S X such that f (m) ❁ ′ Y n ⇒ m ❁ X f −1 (n) for all m ∈ subX and n ∈ subY , equivalently m ❁ ′ Y n ⇒ f −1 (m) ❁ X f −1 (n) for all m, n ∈ subY . Since every C-morphism f is (S, S)-continuous and (S ′ , S ′ )-continuous, f is (S, S ′ )-continuous if S ′ ≤ S. Because S is simple if each S X = {❁ X }
where ❁ X is an interpolative topogenous order, we obtain the following proposition.
Proposition 2. Let S and S ′ be simple syntopogenous structures, i.e. S X = {❁ X }, S ′ X = {❁ ′ X } ⊆ INTORD(C, M). Then f is (S, S ′ )-continuous if and only if f (m) ❁ ′ Y n ⇒ m ❁ X f −1 (n) for all m ∈ subX and n ∈ subY .
The next proposition is obtained from Theorem 1.
Proposition 3. If S, S ′ ∈ SYnt(C, M), then f is (S, S ′ )-continuous if and only if for any V ∈ B S ′ Y there is U ∈ B S X such that f (U(m)) ≤ V (f (m)) for all m ∈ subX.
Proof. Assume that f : X −→ Y is (S, S ′ )-continuous and S, S ′ ∈ SYnt(C, M). Then for
any V ∈ B S ′ Y , there is ❁ ′ Y ∈ S ′ Y which determines V and there is ❁ X ∈ S X such that f (m) ❁ ′ Y n ⇒ m ❁ X f −1 (n). Now U(m) = U ❁ X (m) = ⋀{p | m ❁ X p} ≤ ⋀{f −1 (n) | f (m) ❁ ′ Y n} = f −1 (V (f (m))), so that U(m) ≤ f −1 (V (f (m))) ⇔ f (U(m)) ≤ V (f (m)). Conversely, assume that for any V ∈ B S ′ Y there is U ∈ B S X such that f (U(m)) ≤ V (f (m)). Now, for any ❁ ′ Y ∈ S ′ Y , there is, by Theorem 1, V ∈ B S ′ Y such that ❁ ′ Y = ❁ V . Thus f (m) ❁ ′ Y n ⇔ V (f (m)) ≤ n ⇒ f (U(m)) ≤ n ⇔ U(m) ≤ f −1 (n) ⇔ m ❁ U X f −1 (n) ⇔ m ❁ X f −1 (n).
The proposition above provides us with the next definition.
Definition 8. Let U, U ′ ∈ QUnif(C, M) and f : X −→ Y a C-morphism. f is (U, U ′ )-continuous if for any U ′ ∈ U ′ Y , there is U ∈ U X such that f (U(m)) ≤ U ′ (f (m)) for all m ∈ subX.
Proposition 2 and Corollary 1 allow us to prove the following.
Proposition 4. Let S and S ′ be simple and co-perfect syntopogenous structures, i.e. S X = {❁ X }, S ′ X = {❁ ′ X } ⊆ ∧-INTORD(C, M). Then f is (S, S ′ )-continuous if and only if f (c ❁ X (m)) ≤ c ❁ ′ Y (f (m)) for all m ∈ subX.
Definition 9. [4] Let c, c ′ ∈ CL(C, M) and f : X −→ Y a C-morphism. f is (c, c ′ )-continuous if f (c X (m)) ≤ c ′ Y (f (m)) for all m ∈ subX.
For a syntopogenous structure S on C and a class F of C-morphisms, we ask if there is a coarsest syntopogenous structure S ′ on C for which every morphism in F is (S ′ , S)-continuous. In the next theorem, we provide an answer to this question in the case F = {η X : X ∈ C}, for an E-pointed endofunctor (F, η) of C. Later on we shall deal with a dual case. Let us also note that a similar question has been asked in the case of a closure operator (see [4], Chapter 5). We prove that the results obtained in [4] can be deduced from those we prove here.
Theorem 2. Let (F, η) be an E-pointed endofunctor of C and S a syntopogenous structure on C with respect to M. Then S F,η X = {❁ F,η X | ❁ F X ∈ S F X } with m ❁ F,η X n ⇔ η X (m) ❁ F X p and η −1 X (p) ≤ n for some p ∈ subF X is the coarsest syntopogenous structure on C with respect to M for which every η X : X −→ F X is (S F,η , S)-continuous. If S is interpolative (co-perfect), then S F,η is interpolative (co-perfect, respectively).
Proof. S F,η is clearly a syntopogenous structure and η X is (S F,η , S)-continuous, since for all
❁ F X ∈ S F X , η X (m) ❁ F X n ⇒ η X (m) ❁ F X η X (η −1 X (n)) ⇔ m ❁ F,η X η −1 X (n).
If S ′ is another syntopogenous structure on C such that η X is (S ′ , S)-continuous, then for any ❁ F,η X ∈ S F,η X , m ❁ F,η X n ⇔ η X (m) ❁ F X p and η −1 X (p) ≤ n. This implies that there is
❁ ′ X ∈ S ′ X such that m ❁ ′ X η −1 X (p) ≤ n ⇒ m ❁ ′ X n. Thus S F,η ≤ S ′ . If S is interpolative and m ❁ F,η X n, then η X (m) ❁ F X p and η −1 X (p) ≤ n for some p ∈ subF X. This implies that there is l ∈ subF X such that η X (m) ❁ F X l ❁ F X p. Thus η X (m) ❁ F X η X (η −1 X (l)) ❁ F X p, that is m ❁ F,η X η −1 X (l) ❁ F,η X n.
It is also not hard to see that S F,η is co-perfect if S has the same property.
Viewing a reflector as an endofunctor of C, one obtains the corollary below.
Corollary 2. Let
A be an E-reflective subcategory of C and S a syntopogenous structure on A with respect to M. Then S A X = {❁ A X | ❁ F X ∈ S F X } with m ❁ A X n ⇔ η X (m) ❁ F X p and η −1 X (p) ≤ n for some p ∈ subF X is the coarsest syntopogenous structure on C with respect to M for which every reflection morphism η X : X −→ F X is (S A , S)-continuous. If S is interpolative (co-perfect), then S A is interpolative (co-perfect, respectively).
Since S F,η is co-perfect provided S is co-perfect, Theorem 1 gives us the next proposition.
Proposition 5. Let (F, η) be a pointed endofunctor of C and S ∈ CSYnt(C, M). Then B S F,η X = {U ❁ F,η | U ❁ ∈ B S F X } with U ❁ F,η (m) = η −1 X (U ❁ (η X (m))) is a base for the coarsest quasi-uniformity on C with respect to M for which every η X : X −→ F X is (U S F,η , U S )-continuous. B S F,η is a transitive base provided that S is interpolative.
Proof. (U1), (U2) and (U4) are clear.
(U5) Let f : X −→ Y be a C-morphism and U ❁ F,η ∈ B S F,η Y for ❁ F Y ∈ S F Y . Then there is ❁ F X ∈ S F X such that (F f )(V ❁ F X (p)) ≤ U ❁ F Y ((F f )(p)) for all p ∈ subF X. Thus f (V ❁ F,η (m)) = f (η −1 X (V ❁ F X (η X (m)))) ≤ η −1 Y ((F f )(V ❁ F X (η X (m)))) (Lemma 1) ≤ η −1 Y (U ❁ F Y ((F f )(η X (m)))) = η −1 Y (U ❁ F Y (η Y (f (m)))) (Definition 1) = U ❁ F,η (f (m)).
Since, for any ❁ F X ∈ S F X , U ❁ F,η (m) = η −1 X (U ❁ (η X (m))) implies η X (U ❁ F,η (m)) ≤ U ❁ (η X (m)), η X is (U S F,η , U S )-continuous for all X ∈ C. If S is interpolative, then U ❁ F,η (U ❁ F,η (m)) = η −1 X (U ❁ (η X (η −1 X (U ❁ (η X (m)))))) ≤ η −1 X (U ❁ (U ❁ (η X (m)))) = η −1 X (U ❁ (η X (m))) = U ❁ F,η (m).
Let B ′ be a base for another quasi-uniformity U ′ on C such that η X is (U ′ , U S )-continuous, then for any
U ❁ ∈ B S F X , there is U ′ ∈ B ′ X such that η X (U ′ (m)) ≤ U ❁ (η X (m)) ⇔ U ′ (m) ≤ η −1 X (U ❁ (η X (m))) = U ❁ F,η (m). Thus B S F,η ≤ B ′ .
One sees from the proof of the above proposition that the condition of (F, η) being E-pointed is not needed when the syntopogenous structure is co-perfect.
Proposition 6. Let (F, η) be a pointed endofunctor of C and S be a simple and co-perfect syntopogenous structure, i.e. S X = {❁ X } ∈ ∧-INTORD(C, M). Then c ❁ F,η (m) = η −1 X (c ❁ F X (η X (m)))
is an idempotent closure operator. It is the largest closure operator on C for which every η X :
X −→ F X is (c ❁ F,η , c ❁ )-continuous.
The above closure operator was first introduced on the category of topological spaces and continuous maps by L. Stramaccia ([19]), then on topological categories by D. Dikranjan ([5]) and later on an arbitrary category by Dikranjan and Tholen ([4]). It is a special case of the pullback closure studied by D. Holgate in [11,10].
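For instance, taking C = Top, (F, η) the T o -reflection and c ❁ the Kuratowski closure, the formula of Proposition 6 yields c ❁ F,η (A) = η −1 X (k F X (η X (A))) for A ⊆ X, i.e. the preimage along the reflection map of the closure computed in the T o -reflection F X.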
Corollary 3. Let A be a reflective subcategory of C and S a co-perfect syntopogenous structure on A with respect to M. Then
B A X = {U ❁ A | U ❁ ∈ B S F X } with U ❁ A (m) = η −1 X (U ❁ (η X (m)))
is a base for the coarsest quasi-uniformity on C with respect to M for which every reflection morphism η X : X −→ F X is (U S A , U S )-continuous. B S A is a transitive base provided that S F,η is interpolative.
Corollary 3 allows us to obtain the quasi-uniform structure induced by any reflective subcategory of QUnif and to conclude that it is the initial quasi-uniformity for which the reflection map is quasi-uniformly continuous (see Example 6.1).
Theorem 3. Let (G, ε) be an M-copointed endofunctor of C and S a syntopogenous structure on C. Then S G,ε X = {❁ G,ε X | ❁ GX ∈ S GX } with m ❁ G,ε X n ⇔ ε −1 X (m) ❁ GX ε −1 X (n),
for all m ∈ subX and n ≥ m, is the finest syntopogenous structure on C for which every ε X : GX −→ X is (S, S G,ε )-continuous.
Proof. A routine check shows that S G,ε is a syntopogenous structure on C. For all X ∈ C, ε X : GX −→ X is (S, S G,ε )-continuous, since for any ❁ G,ε X ∈ S G,ε X and m, n ∈ subX with m ≤ n, m ❁ G,ε X n ⇒ ε −1 X (m) ❁ GX ε −1 X (n).
If S ′ is another syntopogenous structure on C such that ε X is (S, S ′ )-continuous, then for any ❁ ′ X ∈ S ′ X , m ❁ ′ X n ⇒ ε X (ε −1 X (m)) ❁ ′ X n ⇒ ∃ ❁ GX ∈ S GX such that ε −1 X (m) ❁ GX ε −1 X (n) ⇔ m ❁ G,ε X n.
Corollary 4. Let A be an M-coreflective subcategory of C and S a syntopogenous structure on A. Then S A X = {❁ A X | ❁ GX ∈ S GX } with m ❁ A X n ⇔ ε −1 X (m) ❁ GX ε −1 X (n), for all m ∈ subX and n ≥ m, is the finest syntopogenous structure on C for which every coreflection ε X : GX −→ X is (S, S A )-continuous.
Proposition 7. Assume that f −1 commutes with the join of subobjects for any f ∈ C. Let (G, ε) be an M-copointed endofunctor of C and S ∈CSYnt(C, M). Then
B S G,ε X = {V ❁ G,ε | V ❁ ∈ B S GX } with V ❁ G,ε (m) = m ∨ ε X (V ❁ (ε −1 X (m))) is a base for the finest quasi-uniformity on C which makes every ε X : GX −→ X (V, V G,ε )-continuous.
Proof. It is not hard to check that B S G,ε X is a base for a quasi-uniformity on C. Since
ε X (V ❁ (ε −1 X (m))) ≤ V ❁ G,ε (m) ⇔ V ❁ (ε −1 X (m)) ≤ ε −1 X (V ❁ G,ε (m)), ε X is (V, V S G,ε )-continuous. Let B ′ be a base for another quasi-uniformity V ′ on C such that ε X is (V, V ′ )-continuous. Then for all V ′ ∈ V ′ X , there is V ∈ V GX such that V (ε −1 X (m)) ≤ ε −1 X (V ′ (m)) ⇔ ε X (V (ε −1 X (m))) ≤ V ′ (m) ⇒ m ∨ ε X (V (ε −1 X (m))) ≤ V ′ (m) ⇔ V ❁ G,ε (m) ≤ V ′ (m).
Thus B ′ ≤ B G,ε .
Proposition 8. Let (G, ε) be a copointed endofunctor of C and S be a simple and co-perfect syntopogenous structure, i.e. S X = {❁ X } ∈ ∧-INTORD(C, M). Then, for all m ∈ subX,
c ❁ G,ε (m) = m ∨ ε X (c ❁ GX (ε −1 X (m)))
is an idempotent closure operator on C. It is the least closure operator for which every ε X : GX −→ X is (c, c G,ε )-continuous.
Corollary 5. Assume that f −1 commutes with the join of subobjects for any f ∈ C. Let A be an M-coreflective subcategory of C and S a syntopogenous structure on A. Then B A X = {V ❁ A | V ❁ ∈ B S GX } with V ❁ A (m) = m ∨ ε X (V ❁ (ε −1 X (m))) is a base for the finest quasi-uniformity on C which makes every coreflection morphism ε X (V, V A )-continuous.
The continuity of functors with respect to quasi-uniform structures
Let A be a category endowed with an (E ′ , M ′ )-factorization system for morphisms and A be M ′ -complete.
Definition 10. [4]
A functor F : A −→ C is said to preserve subobjects provided that F m is an M-subobject for every M ′ -subobject m. It preserves inverse images (resp. images) of subobjects if F f −1 (n) = (F f ) −1 (F n) (resp. (F f )(F m) = F (f (m))) for any A-morphism f : X −→ Y and subobjects n ∈ subY , m ∈ subX.
Definition 11. Let F : A −→ C be a functor that preserves subobjects, U ∈ QUnif(A, M ′ ) and V ∈ QUnif(C, M).
Then F is (U, V)-continuous if for all V ∈ V F X , there is U ∈ U X such that F U(m) ≤ V (F m) for all m ∈ subX, X ∈ A.
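In particular, when A = C, M ′ = M and F is the identity functor, Definition 11 reduces to the requirement that every identity morphism 1 X be (U, V)-continuous in the sense of Definition 8.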
It can be easily seen that our definition for (U, V)-continuity of F is a generalization of U-continuity of morphisms to functors. Using Theorem 1, we can formulate an equivalent definition of the (U, V)-continuity of F in terms of co-perfect syntopogenous structures so that F is (S, S ′ )-continuous will mean that F is continuous with respect to the quasi-uniform structures associated with S and S ′ .
Proposition 9. Let F : A −→ C be a functor that preserves subobjects, S ∈ CSYnt(A, M ′ ) and S ′ ∈ CSYnt(C, M). Then F is (S, S ′ )-continuous if for all ❁ ′ F X ∈ S ′ F X , there is ❁ X ∈ S X such that F U ❁ (m) ≤ U ❁ ′ (F m) for all m ∈ subX, X ∈ A.
Continuity of a functor between categories supplied with fixed closure operators has been studied in [4]. We next use the above proposition together with Corollary 1 and the fact that ∧-INTORD(C, M) is equivalent to the simple co-perfect syntopogenous structures to produce the (U, V)-continuity of F in terms of idempotent closure operators; in particular, we obtain the following.
Proposition 10. Let F : A −→ C be a functor that preserves subobjects, S ∈ CSYnt(A, M ′ ) and S ′ ∈ CSYnt(C, M) with S and S ′ simple, i.e. S X = {❁ X } and S ′ F X = {❁ ′ F X }. Then F is (S, S ′ )-continuous if and only if F c ❁ X (m) ≤ c ❁ ′ F X (F m) for all m ∈ subX, X ∈ A.
Definition 12. [4] Let F : A −→ C be a faithful functor. F is called a fibration if every g : A −→ F Y has an F -initial (F -cartesian) lifting. If we require the existence of an F -cartesian lifting of g : A −→ F Y only for g ∈ M, then F is called an M-fibration. Let us denote by IniF the class of all F -initial morphisms in A. Then for an M-fibration F : A −→ C, (E F , M F ), where E F = F −1 E = {e ∈ A | F e ∈ E} and M F = F −1 M ∩ IniF , is a factorization system in A, and M-subobject properties in C are inherited by M F -subobjects in A.
Lemma 2. [4] Let F : A −→ C be a faithful M-fibration. Then A has M F -pullbacks if C has M-pullbacks, and A is M F -complete if C is M-complete. For any X ∈ A, subX and subF X are order equivalent, with the inverse assignments γ X : subX −→ subF X and δ X : subF X −→ subX given by γ X (m) = F m and δ X (n) = p, where F p = n and p ∈ IniF . For any f : X −→ Y in A and suitable subobjects n, m, n ′ and m ′ , one has γ Y (f (m)) = (F f )(γ X (m)), f (δ X (n)) = δ Y ((F f )(n)) and f −1 (δ Y (m ′ )) = δ X ((F f ) −1 (m ′ )). Moreover, the M F -images and M F -inverse images are obtained by initially lifting M-images and M-inverse images; consequently F preserves images and inverse images of subobjects.
Proposition 11. Let F : A −→ C be a faithful M-fibration and S be a syntopogenous structure on C with respect to M. Then
S F X = {❁ F X | ❁ F X ∈ S F X } where m ❁ F X n ⇔ F m ❁ F X γ X (n)
is a syntopogenous structure on A with respect to M F which is interpolative, co-perfect provided S has the same properties. Moreover, an A-morphism f is S F -initial provided F f is S-initial.
Theorem 4. Let F : A −→ C be a faithful M-fibration and B be a base for a quasi-uniform structure on C with respect to M. Then B F X = {U F | U ∈ B F X } where U F (m) = δ X (U(F m)) is a base for a quasi-uniformity on A with respect to M F . It is the coarsest quasi-uniformity for which F is (U F , U)-continuous. B F is transitive provided that B is a transitive base. Moreover, an A-morphism f is U F -initial provided F f is U-initial.
Proof. It is clear that B F is a base for a quasi-uniformity on A which is transitive if B is transitive. F is (U F , U)-continuous, since for any U ∈ B F X , U F (m) = δ X (U(F m)) ⇔ γ X (U F (m)) = U(F m) ⇔ F (U F (m)) = U(F m). If B ′ is a base for another quasi-uniformity U ′ on A such that F is (U ′ , U)-continuous, then for all
U F ∈ B F X , there is U ′ ∈ B ′ such that F U ′ (m) ≤ U(F m) = F U F (m). Thus U ′ (m) = δ X (F U ′ (m)) ≤ δ X (F U F (m)) = U F (m), that is B F ≤ B ′ . If F f is U-initial and U F ∈ U F X , there is U ′ ∈ U F Y such that (F f ) −1 (U ′ (F f )(p)) ≤ U(p) for all p ∈ subF X. Now f −1 (U ′F (f (m))) = f −1 (δ Y (U ′ (F f (m)))) = δ X ((F f ) −1 (U ′ (F f (m)))) = δ X ((F f ) −1 (U ′ ((F f )(F m)))) ≤ δ X (U(F m)) = U F (m) for all m ∈ subX.
Corollary 6. Under the assumptions of Theorem 4, if F is moreover essentially surjective on objects, then B is the base of the finest quasi-uniformity on C for which F is (U F , U)-continuous.
Proof. By essential surjectivity of F on objects, we have that for all Y ∈ C, Y ∼ = F X for some X ∈ A. Thus if B ′ is another quasi-uniformity on C such that F is (U F , U ′ )-continuous, then for all Y ∈ C and U ′ ∈ U ′ Y , there is X ∈ A and U F ∈ B F such that Y ∼ = F X and
F U F (m) ≤ U ′ (F m) ⇔ U(F m) = F δ X (U(F m)) ≤ U ′ (F m) = U ′ (F m). Thus B ′ ≤ B.
Proposition 12. Let F : A −→ C be a faithful M-fibration and S be a simple co-perfect syntopogenous structure on C with respect to M, i.e. S = {❁ X } ∈ ∧-INTORD(C, M). Then
c ❁ F (m) = δ X (c ❁ (F m)) is an idempotent closure operator on A with respect to M F . It is the largest closure operator on A for which F is (c ❁ F , c ❁ )-continuous.
Proof. It is easily seen that c ❁ F is a closure operator for any simple co-perfect syntopogenous structure S. Now, c ❁ F (c ❁ F (m)) = c ❁ F (δ X (c ❁ F X (F m))) = δ X (c ❁ F X (F δ X (c ❁ (F m)))) = δ X (c ❁ F X (c ❁ F X (F m))) = δ X (c ❁ F X (F m)) = c ❁ F (m), thus c ❁ F is idempotent. F is (c ❁ F , c ❁ )continuous since, γ X (c ❁ F (m)) = c ❁ (F m) ⇔ F c ❁ F (m) = c ❁ (F m). If c ′ is another closure operator on A such that F is (c ′ , c ❁ )-continuous, then F c ′ X (m) ≤ c ❁ (F m). Thus c ′ X (m) = δ X (F (c X (m)) ≤ δ X (c ❁ F X (F m)) = c ❁ F X (m).
The closure operator in Proposition 12 was already obtained in [4] without use of the methods of syntopogenous structures. The interested reader will, in this book, find a number of examples for such closure.
Theorem 5. Let F ⊣ G : C −→ A be adjoint functors and B be a base for a quasi-uniformity U ∈ QUnif(C, M). Assume that G and F preserve subobjects. Then B η X = {U η | U ∈ B F X } with U η (m) = η −1 X (GU(F m)) for any X ∈ A is a base for a quasi-uniformity on A. B η is a base for the coarsest quasi-uniformity for which F is (U η , U)-continuous.
Proof. Let f : X −→ Y be an A-morphism and U η ∈ B η Y determined by U ∈ B F Y . Since F f is U-continuous, there is V ∈ B F X such that (F f )(V (p)) ≤ U((F f )(p)) for all p ∈ subF X.
Thus f (V η (m)) = f (η −1 X (GV (F m))) ≤ η −1 Y ((GF f )(GV (F m))) (Lemma 1) ≤ η −1 Y (G((F f )(V (F m)))) ≤ η −1 Y (GU((F f )(F m))) (U-continuity of F f ) = η −1 Y (GU(F f (m))) = U η (f (m)).
F is (U η , U)-continuous, since for any U ∈ U F X , F U η (m) ≤ U(F m) for any X ∈ A. Let B ′ be a base for another quasi-uniformity U ′ on A such that F is (U ′ , U)-continuous. Then for any U η ∈ B η X , there is U ′ ∈ B ′ X such that F U ′ (m) ≤ U(F m). Thus η X (U ′ (m)) ≤ GF U ′ (m) ≤ GU(F m), so that η X (U ′ (m)) ≤ GU(F m) ⇔ U ′ (m) ≤ η −1 X (GU(F m)) = U η (m), that is U η ≤ U ′ .
If A is a reflective subcategory of C, then B A and B η are equivalent.
Proposition 13. Let F ⊣ G : C −→ A be adjoint functors and S ∈ CSYnt(C, M). Assume that G and F preserve subobjects.
Then S η = {❁ η X | ❁ F X ∈ S F X } with m ❁ η X n ⇔ η −1 X (GU ❁ (F m)) ≤ n is a co-perfect syntopogenous structure on A. It is the coarsest syntopogenous structure for which F is (S η , S)-continuous.
Proposition 14. Under the assumptions of Proposition 13, if S ∈ CSYnt(C, M) is simple, i.e. S = {❁ X } ∈ ∧-INTORD(C, M) ∼ = ICL(C, M), then c ❁ η X (m) = η −1 X (Gc ❁ F X (F m)) is an idempotent closure operator on A. It is the largest closure operator for which F is (c ❁ η , c ❁ )-continuous.
Examples
Let QUnif o be the category of T o quasi-uniform spaces and quasi-uniformly continuous maps with the (surjective, embedding)-factorization system. It is known that bQUnif o (see e.g. [3]), the category of bicomplete quasi-uniform spaces and quasi-uniformly continuous maps, is an epi-reflective subcategory of QUnif o . Let (F, η) be the bicompletion reflector into QUnif o . For any (X, U) ∈ QUnif o , η X : (X, U) −→ (X̂, Û) takes each x ∈ X to its neighbourhood filter in the topology induced by the join of U and its inverse. It is known that η X is a quasi-uniform embedding. Details about this can be found in
Now, B F,η = {U F,η | U ∈ Û} where U F,η = {(x, y) ∈ X × X | (η X (x), η X (y)) ∈ U} is a base for the quasi-uniform structure U F,η on X. Since η X is a quasi-uniform embedding, U X is the initial quasi-uniformity for which η X is quasi-uniformly continuous. Thus U F,η X = U X .
The category Unif of uniform spaces and quasi-uniformly continuous maps is coreflective in QUnif. Let (G, ε) be the coreflector into Unif. For any (X, U) ∈ QUnif, ε X : (X, U ∨ U −1 ) −→ (X, U) is an identity map. Since U ∨ U −1 is the finest quasi-uniformity on X for which ε X is quasi-uniformly continuous, U G,ε X = U ∨ U −1 .
Consider TopGrp 2 , the category of Hausdorff topological groups and continuous group homomorphisms with the (surjective, injective)-factorization structure. We know from [2] that the category cTopGrp 2 of complete Hausdorff topological groups (those topological groups which are complete with respect to the two-sided uniformity) is reflective in TopGrp 2 . Let (F, η) be the completion reflector and, for any (X, ·) ∈ TopGrp 2 , let β(e) be the neighbourhood filter of the identity element e. For all U ∈ β(e), put U c = {(x, y) ∈ X × X : y ∈ xU ∩ U x}, so that B c X = {U c | U ∈ β(e)} is a base for the two-sided uniformity U c on (X, ·, T ). Since η X is again an embedding of (X, ·, T ) ∈ TopGrp 2 into its completion (X̂, ·, T̂ ), we have that U F,η = U c .
The forgetful functor F : TopGrp −→ Grp is a mono-fibration. Thus, by Proposition 11, every syntopogenous structure on Grp can be initially lifted to a syntopogenous structure on TopGrp.
Consider the functors G : QUnif −→ Top, which sends every quasi-uniform space (X, U) to the topological space (X, G(U)), with G(U), the topology induced by U, obtained by taking as a base of neighbourhoods at a point x the filter {U[x] | U ∈ U} where U[x] = {y ∈ X : (x, y) ∈ U}, and F : Top −→ QUnif, which sends every topological space (X, T ) to the finest quasi-uniformity U on X with G(U) = T . It is known (see e.g. [7]) that F is left adjoint to G; for any (X, T ) ∈ Top, the unit η X : (X, T ) −→ GF (X, T ) is carried by the identity of X, GF (X, T ) being X with the topology induced by the finest quasi-uniformity (X, F (T )). Now S (X,U) = {❁ U X | U ∈ U}, where A ❁ U B ⇔ U(A) ⊆ B for any A, B ⊆ X, is a co-perfect syntopogenous structure on QUnif for any (X, U) ∈ QUnif. Let (X, T ) ∈ Top. Then A ❁ η X B ⇔ η −1 X (GU(F A)) ⊆ B, and η −1 X (GU(A)) is a neighbourhood of A in T . Thus S X = {❁ η X | X ∈ Top} with A ❁ η X B ⇔ V ⊆ B where V is a neighbourhood of A in T , so that A ❁ η X B ⇔ A ⊆ O ⊆ B for some O ∈ T .
Let Top be the category of topological spaces and continuous maps with its (surjections, embeddings)-factorization structure. It is well known that Top_o, the category of T_o-topological spaces and continuous maps, is an epireflective subcategory of Top. Define S^X = {❁_{X_o} | X_o ∈ Top_o} by A ❁_{X_o} B ⇔ A ⊆ B for any X_o ∈ Top_o and A, B ⊆ X_o. Let (F, η) be the reflector into Top_o. For any X ∈ Top, η_X : X −→ X/∼ takes each x ∈ X to its equivalence class [x] = {y ∈ X | cl{x} = cl{y}}. Thus S^X = {❁^{F,η}_X | X ∈ Top} with A ❁^{F,η}_X B ⇔ η_X^{-1}(η_X(A)) ⊆ B, for A, B ⊆ X.
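As a small worked instance of this last example (added here purely for illustration, not part of the original text; the relation ❁ is rendered as \sqsubset below), take the indiscrete two-point space, for which the T_o-reflection collapses the space to a point:

```latex
% Illustrative computation (ours): X = {a,b} with the indiscrete topology T = {∅, X}.
% The only closed sets are ∅ and X, so cl{a} = cl{b} = X, hence a ~ b and X/~ is a single point.
\[
\eta_X^{-1}\big(\eta_X(A)\big)=
\begin{cases}
\emptyset, & A=\emptyset,\\
X, & \emptyset\neq A\subseteq X,
\end{cases}
\qquad\text{so that}\qquad
A \sqsubset^{F,\eta}_X B \iff A=\emptyset \ \text{or}\ B=X .
\]
```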
J. Adámek, H. Herrlich, and G. E. Strecker. Abstract and concrete categories: the joy of cats. Repr. Theory Appl. Categ., (17):1-507, 2006. Reprint of the 1990 original [Wiley, New York].
N. Bourbaki. General Topology: Chapters 1-4, volume 18. Springer Science & Business Media, 1998.
G. C. L. Brümmer. Categorical aspects of the theory of quasi-uniform spaces. In Proceedings of the "I Spanish-Italian Congress on General Topology and its Applications" (Spanish) (Gandia, 1997), volume 30, pages 45-74, 1999.
D. Dikranjan and W. Tholen. Categorical Structure of Closure Operators with Applications to Topology, Algebra and Discrete Mathematics. Volume 346 of Mathematics and its Applications. Kluwer Academic Publishers Group, Dordrecht, 1995.
D. Dikranjan. Semiregular closure operators and epimorphisms in topological categories. In V International Meeting on Topology in Italy (Italian) (Lecce, 1990/Otranto, 1990), Rend. Circ. Mat. Palermo (2) Suppl., volume 29, pages 105-160, 1992.
D. Dikranjan and E. Giuli. Closure operators I. Topology and its Applications, 27(2):129-143, 1987.
D. Dikranjan and H.-P. Künzi. Separation and epimorphisms in quasi-uniform spaces. Volume 8, pages 175-207, 2000. Papers in honour of Bernhard Banaschewski (Cape Town, 1996).
C. Dowker. Mappings of proximity structures. General Topology and its Relations to Modern Analysis and Algebra, pages 139-141, 1962.
P. Fletcher and W. F. Lindgren. Quasi-uniform spaces. Lecture Notes in Pure and Applied Mathematics 77, Dekker, New York, 1982.
D. Holgate. The pullback closure, perfect morphisms and completions. PhD Thesis, University of Cape Town, 1995.
D. Holgate. The pullback closure operator and generalisations of perfectness. Applied Categorical Structures, 4(1):107-120, 1996.
D. Holgate and M. Iragi. Quasi-uniform and syntopogenous structures on categories. Topology and its Applications, 263:16-25, 2019.
D. Holgate and M. Iragi. Quasi-uniform structures determined by closure operators. Topology and its Applications, 295: Paper No. 107669, 2021.
D. Holgate, M. Iragi, and A. Razafindrakoto. Topogenous and nearness structures on categories. Applied Categorical Structures, (24):447-455, 2016.
D. Holgate and J. Šlapal. Categorical neighborhood operators. Topology and its Applications, 158(17):2356-2365, 2011.
M. Iragi. Topogenous structures on categories. MSc Thesis, University of the Western Cape, 2016.
M. Iragi. Quasi-uniform and syntopogenous structures on categories. PhD Thesis, University of the Western Cape, 2019.
M. Iragi and J. Šlapal. Transitive quasi-uniform structures depending on a parameter. Aequationes Mathematicae, 2023. https://doi.org/10.1007/s00010-022-00937-8.
L. Stramaccia. Classes of spaces defined by an epireflector. In Third National Conference on Topology (Italian) (Trieste, 1986), number 18, pages 423-432, 1988.
S. J. R. Vorster. Interior operators in general categories. Quaestiones Mathematicae, 23(4):405-416, 2000.
| []
|
[
"Indirect search of Heavy Neutral Leptons using the DUNE Near Detector",
"Indirect search of Heavy Neutral Leptons using the DUNE Near Detector"
]
| [
"S Carbajal \nDepartamento de Ciencias\nSección Física\nPontificia Universidad Católica del Perú\nApartado 1761LimaPerú\n",
"A M Gago \nDepartamento de Ciencias\nSección Física\nPontificia Universidad Católica del Perú\nApartado 1761LimaPerú\n"
]
| [
"Departamento de Ciencias\nSección Física\nPontificia Universidad Católica del Perú\nApartado 1761LimaPerú",
"Departamento de Ciencias\nSección Física\nPontificia Universidad Católica del Perú\nApartado 1761LimaPerú"
]
| []
| We evaluate the potential of the DUNE Near Detector (DUNEND) for establishing bounds for heavy neutral leptons in the region of masses below 500 MeV. These bounds are obtained from the deficits of muon and electron charged current events expected at the LArTPC of DUNEND. Each deficit is due to two sources: the active neutrinos, decay products of the heavy ones, that do not hit the DUNEND and the disappearance oscillation of the active neutrinos, decay products of the parent mesons. We get limits of |Uµ4| 2 < 6.5 × 10 −2 (2.5 × 10 −5 ) and |Ue4| 4 < 2 × 10 −2 (3 × 10 −5 ) for masses below 10 MeV, five years per each mode (neutrino/antineutrino) and for a 20%(0%) overall normalization uncertainty in neutrino charged current event rates prediction. These limits, within the region of masses below 2(10) MeV, are better than those that can be achieved by DUNE direct searches for the case of a 20%(0%) uncertainty. We can also impose limits on |Uµ4| 2 in a mass region free of constraints (40 eV -1 MeV). For masses below 1 MeV, we improve the current experimental constraints by up to 2 orders of magnitude when no uncertainties are considered. However, when a 20% uncertainty is present, our limits can only improve current constraints on |Ue4| 2 by up to a factor of 3 in a small region around 5 eV. | null | [
"https://export.arxiv.org/pdf/2202.09217v3.pdf"
]
| 246,996,826 | 2202.09217 | 01457bac3bef665dd20c142839fbcf365a8f92cd |
Indirect search of Heavy Neutral Leptons using the DUNE Near Detector
S Carbajal
Departamento de Ciencias
Sección Física
Pontificia Universidad Católica del Perú
Apartado 1761LimaPerú
A M Gago
Departamento de Ciencias
Sección Física
Pontificia Universidad Católica del Perú
Apartado 1761LimaPerú
Indirect search of Heavy Neutral Leptons using the DUNE Near Detector
We evaluate the potential of the DUNE Near Detector (DUNEND) for establishing bounds for heavy neutral leptons in the region of masses below 500 MeV. These bounds are obtained from the deficits of muon and electron charged current events expected at the LArTPC of DUNEND. Each deficit is due to two sources: the active neutrinos, decay products of the heavy ones, that do not hit the DUNEND and the disappearance oscillation of the active neutrinos, decay products of the parent mesons. We get limits of |Uµ4| 2 < 6.5 × 10 −2 (2.5 × 10 −5 ) and |Ue4| 4 < 2 × 10 −2 (3 × 10 −5 ) for masses below 10 MeV, five years per each mode (neutrino/antineutrino) and for a 20%(0%) overall normalization uncertainty in neutrino charged current event rates prediction. These limits, within the region of masses below 2(10) MeV, are better than those that can be achieved by DUNE direct searches for the case of a 20%(0%) uncertainty. We can also impose limits on |Uµ4| 2 in a mass region free of constraints (40 eV -1 MeV). For masses below 1 MeV, we improve the current experimental constraints by up to 2 orders of magnitude when no uncertainties are considered. However, when a 20% uncertainty is present, our limits can only improve current constraints on |Ue4| 2 by up to a factor of 3 in a small region around 5 eV.
I. INTRODUCTION
Heavy neutral leptons (HNLs) are singlet (right-handed) fermion states introduced to explain the non-zero neutrino masses; they interact, via a Yukawa coupling, with the Higgs boson and the leptonic doublet (a Dirac mass term) and also appear in a Majorana mass term. The nearly sterile states that arise after the diagonalization of these mass terms interact with matter via suppressed mixing with the active neutrinos of the Standard Model (SM) [1,2].
The HNLs are candidates to solve important particle physics and cosmology issues [1]. They can help explain the smallness of the active neutrino masses via the Seesaw mechanism [3], act as possible dark matter candidates [4] and also explain the baryon asymmetry of the universe through their role in leptogenesis (see [1,5] and references therein). On the other hand, neutrino oscillations involving light sterile states have been proposed to explain the excess of electron antineutrino and neutrino events at LSND and MiniBoone, respectively, as well as the deficit of electron antineutrino events at reactor experiments [6]. The HNL masses required for solving the before mentioned problems fall within a mass range that spans from keV to TeV. As a consequence of their relevance, there have been several HNL searches in this wide mass range, placing limits on the possible values of the HNL mass m N and its mixing to the SM neutrinos |U α4 | 2 [2,7].
In particular, searches of HNLs in the range of masses of 1-400 MeV have been conducted in accelerator-based experiments through searches for low-energy peaks in the energy spectrum of the muons resulting from pion (π ± → µ ± ν H ) and kaon decays (K ± → µ ± ν H ) [8][9][10][11]. With no positive results found so far, they obtain upper bounds for |U µ4 | 2 such as 10 −6 for m N ∼ 10 MeV and 10 −9 for m N ∼ 300 MeV.
This work aims to assess the sensitivity of the DUNE experiment in setting upper limits for |U_µ4|^2 and |U_e4|^2 for masses below 500 MeV. We achieve this by measuring a deficit of neutrino charged current (ν CC) events at the DUNE Near Detector (DUNEND) [12]. This deficit of CC events comes from those active neutrinos that are born from HNLs and are emitted outside the detector's angular coverage. We consider the decrease in CC events as an indirect signal of HNLs and use it to set limits on the mixing parameters. Additionally, we present an analysis of the possibility of finding confidence regions for the values of (m_N, |U_α4|^2) if a deficit of CC events is found at DUNE [13]. This paper goes as follows: in the second section we discuss the theoretical framework of HNL production and decay. Then, in the third one, we describe the experimental setup. In the fourth section, the details of our simulation are given, while in the fifth one our results are presented. We draw our conclusions in the final section.
II. THEORETICAL FRAMEWORK
As we already mentioned, the nearly sterile mass eigenstates couple to the active flavor states via an extended version of the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix [14], which can be expressed as follows:
ν_α = Σ_{i=1,2,3} U_{αi} ν_i + U_{α4} N,   (1)
where N represents the HNL field. It is also helpful to write the new active neutrino flavor states in terms of the flavor states of the SM ν SM α , which represent the neutrino flavor states when the values of the 3 × 3 PMNS mixing matrix are assumed. This can be done by the approximation [15]
ν_α ≈ ν_α^SM (1 − |U_{α4}|^2/2) + U_{α4} N.   (2)
Due to the connection above, the HNLs can be produced in any weak decay involving active neutrinos. The production rate of HNLs depends kinematically on its mass m N , the strength of its mixing to active neutrinos |U α4 | 2 and the nature of the decaying particle that produces it, which, from now on, we will refer to as its parent. In this work, we are interested in HNLs with masses below the kaon mass (m K ). The production of HNLs from kaon and pion decays, followed by the muon decays, dominate at the typical energies of beam dump experiments such as DUNE. Their production from heavier particles, such as D mesons or τ leptons, is also possible, but it is rare since the production of the latter is heavily suppressed in comparison to the light mesons. Table I shows the dominant HNL production channels from light leptons and mesons, along with the maximum kinematically allowed values of the masses for the HNLs. A rough estimation of these values is obtained by subtracting the total rest mass of the particles produced, other than the HNLs, from the corresponding mass of their parent particle. We calculated the branching ratios for HNL production by using the formulas from Ref. [16]. For instance, Fig. 1 shows the branching ratios of the dominant HNL production channels below the kaon mass for |U µ4 | 2 = 1. We can note that almost all the branching ratios decrease with m N , with the only exception being the leptonic decays of charged kaons, K ± → N µ ± . Above 34 MeV, the production from pions is kinematically forbidden; this is important since this means that all heavy neutral leptons above this mass will be produced only from kaon decays. As the value of m N increases, the branching ratio of K ± → N µ ± keeps increasing as well, surpassing the branching ratios of K ± → N π 0 µ ± at around 80 MeV and of K 0 L → N π 0 µ ± at around 160 MeV. Finally, the branching ratio of K ± → N µ ± reaches its maximum at around 260 MeV and then decreases until it is kinematically forbidden. The endpoint of each branching ratio corresponds to the maximum m N given in Table I.
The production of HNLs via semileptonic decays involves hadronic currents that cannot be calculated from first principles due to the non-perturbative nature of QCD at low energies. Therefore, the dynamics of these decays are modeled by form factors that represent the momentum distribution of the quarks inside the mesons and parametrize the momentum transfer between the hadronic current and the lepton pair [17]. For all the semileptonic decays in Table I, we used the form factors presented in [16].
After their production, all the HNLs propagate and then decay on flight via mixing with active neutrinos. Table II shows all the decay channels for the HNLs considered in this work. We included all the kinematically allowed decays to final states involving pseudoscalar mesons as well as pure leptonic decays for m N < m K . A more complete table can be found in [18]. The partial width of a HNL decay channel involving a final lepton l α or light neutrino ν α is directly proportional to the mixing parameter squared |U α4 | 2 . Therefore, the total width and lifetime of the HNLs also depend on the relevant mixing parameters. The lifetime dependence on the values of |U α4 | 2 can have a huge impact on the position of the decay vertex of the HNL and hence on its possible signal at a detector. Setting small values for the |U α4 | 2 means that the HNLs are being produced at a lower rate, but, at the same time, that these HNLs have a greater lifetime and therefore decay further away from the detector.
When we determine the individual partial widths of each channel, there is a factor of two that differentiates between the decays of Dirac and Majorana HNLs [18]. For instance, a Dirac HNL can decay to charged pions only via N → e − π + , while a Majorana one can also decay through N → e + π − . This evidently has an effect on the rates of π + /π − production from HNL decays but does not affect the partial decay widths. This means that CC mediated channels have the same partial widths for Dirac and Majorana neutrinos:
Γ(N_M → l^- X^+) = Γ(N_D → l^- X^+),   Γ(N_M → l^+ X^-) = Γ(N_D → l^+ X^-).   (3)
On the other hand, NC mediated channels do distinguish between Dirac and Majorana HNLs. This is because the contractions of the NC operator add an additional contribution to the differential decay width of the Majorana HNLs [18,19],
dΓ(N_M → ν X) = dΓ(N_D → ν X) + dΓ(N_D → ν̄ X).   (4)
Therefore, a factor of two appears when comparing the partial widths of NC mediated decays,
Γ(N_M → ν X) = 2 Γ(N_D → ν X).   (5)
Equations (3) and (5) imply that the total widths (Γ T ) of Majorana and Dirac HNLs are related by
Γ_T(N_M) = 2 Γ_T(N_D),   (6)
which translates into a difference between their lifetimes,
τ(N_M) = (1/2) τ(N_D).   (7)
For very low masses (m_N ≪ m_e), the factor of two in Eq. (5) disappears [20], making the total widths and lifetimes of Dirac and Majorana HNLs indistinguishable. Part of the mass range that we will explore in this work falls in the region of very low masses.
At the end of this section, we will describe how the active neutrino flux is affected by the production of HNLs. For this purpose, we will show how the SM parent mesons' branching ratios are modified when HNL production occurs. Let us start by defining the SM total decay rate of the pion (Γ_π^SM):
Γ_π^SM = Γ^SM(π → e ν_e) + Γ^SM(π → µ ν_µ),   (8)
and the decay rate with heavy neutral leptons (Γ BSM π ):
Γ_π^BSM = Γ^BSM(π → e ν_e) + Γ^BSM(π → µ ν_µ) + Γ(π → N X)
        ≈ Γ^SM(π → e ν_e)(1 − |U_{e4}|^2/2) + Γ^SM(π → µ ν_µ)(1 − |U_{µ4}|^2/2) + Γ(π → N X).   (9)
The branching ratio of ν µ production from pion decays in the presence of HNLs can then be written as
BR^BSM(π → µ ν_µ) = Γ^BSM(π → µ ν_µ) / Γ_π^BSM
                 ≈ [Γ^SM(π → µ ν_µ)(1 − |U_{µ4}|^2/2) / Γ_π^SM] · (Γ_π^SM / Γ_π^BSM)
                 ≈ [Γ^SM(π → µ ν_µ) / Γ_π^SM] · (Γ_π^SM / Γ_π^BSM)(1 − |U_{µ4}|^2/2)
                 ≈ BR^SM(π → µ ν_µ) · (Γ_π^SM / Γ_π^BSM)(1 − |U_{µ4}|^2/2).   (10)
A similar relation can be found for the branching ratio of ν e production from pion decays:
BR^BSM(π → e ν_e) ≈ BR^SM(π → e ν_e) · (Γ_π^SM / Γ_π^BSM)(1 − |U_{e4}|^2/2),   (11)
where BR SM (π → µ(e)ν µ(e) ) represents the branching ratio of ν µ (ν e ) production from pion decays in the SM. We can see that the introduction of HNLs causes the production of either muon or electron neutrinos from pions to be suppressed by the factor
K_π^α(m_N, |U_{α4}|^2) = (Γ_π^SM / Γ_π^BSM)(1 − |U_{α4}|^2/2),   (12)
with α = e, µ. Fig. 2 illustrates the dependence on m N of the factor K µ for several parents assuming |U µ4 | 2 = 10 −4 . For each meson, the suppression factor acts only up to a maximum HNL mass due to kinematical constraints, which are the same constraints shown in Table I and Fig. 1. Although the effect is small, the high luminosity of DUNE makes it possible to use this effect to set limits on the heavy neutral leptons parameters. Thus, each particle capable of producing active neutrinos can produce HNLs, leading to a suppression of the former. The latter happens even when only one mixing |U α4 | 2 is turned on. In fact, we can see from Eqs. (10) and (11) that, if we turn off either one of the mixings |U α4 | 2 , the production of the active neutrinos of flavor α is still suppressed by the factor Γ SM π /Γ BSM π . As we will show further ahead, the reduction in the active neutrino flux would imply the possibility that they do not reach the DUNEND, decreasing the number of expected CC events at this facility.
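The bookkeeping of Eqs. (9)-(12) is easy to evaluate numerically once the partial widths are known. The sketch below is our own illustration, not code from the analysis: the function and variable names are ours, and the widths passed in are placeholders that would have to come from the formulas of Ref. [16].

```python
def suppression_factor(sm_widths, mixing_sq, gamma_hnl, alpha):
    """Evaluate K_alpha of Eq. (12) for one parent particle.

    sm_widths : dict {flavor: SM partial width} of the parent, e.g. {"e": ..., "mu": ...}
    mixing_sq : dict {flavor: |U_f4|^2}; flavors that are switched off map to 0
    gamma_hnl : HNL production width Gamma(parent -> N X) for the chosen mixing(s)
    alpha     : flavor whose active-neutrino production suppression is wanted
    """
    gamma_sm = sum(sm_widths.values())
    # Eq. (9): every SM channel is rescaled by (1 - |U_f4|^2/2) and the HNL channel is added.
    gamma_bsm = sum(w * (1.0 - mixing_sq.get(f, 0.0) / 2.0)
                    for f, w in sm_widths.items()) + gamma_hnl
    # Eq. (12): K_alpha = (Gamma_SM / Gamma_BSM) * (1 - |U_alpha4|^2 / 2).
    return (gamma_sm / gamma_bsm) * (1.0 - mixing_sq.get(alpha, 0.0) / 2.0)


# Purely illustrative call with made-up widths (arbitrary units):
pion_widths = {"mu": 1.0, "e": 1.23e-4}
print(suppression_factor(pion_widths, {"mu": 1e-4}, gamma_hnl=5e-5, alpha="mu"))
```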
III. EXPERIMENTAL SETUP
In order to simulate how the presence of HNLs affects the number of ν CC events at DUNE, we base our experimental setup on the DUNE Near Detector described in Ref. [13].
FIG. 3. Experimental setup for the LArTPC in neutrino mode (not to scale). Charged particles are deflected by the magnetic horns.
We assume that the LBNF-DUNE beam collides protons with 120 GeV of energy into a graphite target, producing 1.47 × 10 21 POTs per year. At each collision, several mesons are produced, including mostly pions, kaons and charmed mesons.
The muons and long-lived charged mesons (π ± and K ± ) produced are deflected by focusing magnetic horns located right after the target; as a consequence, their trajectories end up preferably oriented along the beam axis, as shown schematically in Fig. 3. On the other hand, the trajectories of neutral mesons (D 0 , K 0 L and π 0 ), tau leptons and short-lived charged heavy mesons (D ± and D ± s ) are not affected by the focusing horns. Most particles decay in flight inside the decay pipe, a cylinder with a length of 230 m and a diameter of 2 m; however, a small number of long-lived particles reach the end of the decay pipe and decay at rest at the decay pipe's surface.
The Near Detector Liquid Argon Time Projection Chamber (LArTPC) is located at 574 m from the target. It has the shape of a parallelepiped with width and height (both transverse to the beam direction) of 7 m and 3 m, respectively, and a length of 5 m in the beam direction. The LArTPC is filled with a fiducial mass of 50 tons of liquid Argon. There is also the Multi-Purpose Detector (MPD), a magnetic spectrometer containing a one-ton high-pressure cylindrical gaseous argon time projection chamber, designed to study particles exiting the LArTPC. Since we are interested in the effects of HNLs on the ν CC events at the DUNE Near Detector, we do not take the MPD into account in our simulation setup because its impact on our results is negligible.
We also take into account the possibility of moving the detectors to several off-axis positions along the x-axis, a setup known as DUNE-PRISM [21].
IV. SIMULATION ROUTE FOR HNLS
A. Parents Production
For the simulation of the production of HNLs from light mesons, we used the data provided by the DUNE Beam Interface Working Group (BIWG) [22], which makes use of GEANT4 [23,24] and FLUKA [25,26]. This data includes information about the decay positions and momenta of pions, kaons and muons after they exit the focusing horns. The most abundant light parent in DUNE is the pion, followed by kaons and finally muons, as can be seen in Fig. 4. In this work we will consider that the neutrino CC event rates might have an overall normalization uncertainty of up to 20% due to uncertainties in the modeling of production of mesons and leptons at the DUNE target and neutrino cross sections. We encapsulate this uncertainty by a parameter σ a that varies from 0 to 0.2. Setting σ a = 0 is equivalent to assume that there are no uncertainties in the DUNE neutrino CC event rates and, for a systematic uncertainty of 20%, σ a takes the value of 0.2. The production of HNLs from heavier particles such as D mesons and τ leptons is also possible, but it is expected to have a negligible effect on the active neutrino flux, which is totally dominated by production from lighter mesons. In order to test the relevance of HNL production from these heavy particles, we used PYTHIA8 [27] to estimate the neutrino flux generated by D 0 ,D 0 , D ± , D ± s and τ ± at DUNE. We observed that these heavy parents do not contribute significantly to the DUNE neutrino flux and hence the production of HNLs coming from them will have a negligible effect on the number of CC events. Consequently, our analysis is restricted only to the production of HNLs from light mesons and muons.
B. Production of HNLs
The production and decay chain of a HNL will depend on its mass, the mass of its parent, the nature of its parent (lepton, scalar meson or vector meson), the parent decay channel, the HNL nature (Dirac or Majorana), the HNL decay channel and the value of the mixing parameter involved. In principle, we could turn on, simultaneously, the three mixing parameters |U α4 | 2 , α = e, µ and τ ; however, in our analysis we will consider only one non-zero mixing parameter at a time.
Given a HNL mass and nature, we gave PYTHIA8 the kinematic information of the parents and let it handle the kinematics of all the HNL production and decay chain, up to final active neutrinos. As expected, the HNL production and decay channels are weighted with their corresponding branching ratios.
In Fig. 5 we show the number of HNLs produced at DUNE from mesons decays in one year and in neutrino mode for |U µ4 | 2 = 10 −4 . Production from pion decays dominates at low masses, followed by charged and neutral kaons. The spectrum endpoint for pions and kaons corresponds to the maximum allowed m N displayed in Table I when they decay into muons. For completeness, we also present the production from charmed mesons, which, as expected, is comparatively smaller and completely overshadowed for masses below 387.81 MeV. Above this threshold, HNL production from pions and kaons is kinematically forbidden, and the contribution from charmed meson decays dominates. This contribution is several orders of magnitude smaller than the one from light mesons, as we already claimed.
C. Decay of HNL -Active Neutrinos
We focus on the active neutrinos produced from the HNL decays. We are interested in differentiating the number of these neutrinos that fall within the detector's geometrical acceptance from those outside of it. With this aim, we parametrize the probability that an active neutrino hits the detector by two distances along the HNL propagation axis. These distances represent two different decay vertices of the HNL and are calculated considering the geometrical coverage of the detector and the kinematical information provided by PYTHIA8, which depends on its lifetime, production vertex, velocity and the direction of propagation of the active neutrino. The aforementioned probability is given by:
w(d_1, d_2) = exp(−d_1/(vγτ_0)) − exp(−d_2/(vγτ_0)),   (13)
where v is the HNL's velocity, γ its Lorentz factor and τ 0 its proper lifetime.
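The weight of Eq. (13) is straightforward to evaluate once the two boundary distances have been determined from the detector geometry. The following sketch is our own illustration (not code from the analysis); the distances d1 and d2 are assumed to be pre-computed from the geometry and the HNL direction.

```python
import math

def acceptance_weight(d1_m, d2_m, beta, gamma, tau0_s):
    """Probability of Eq. (13) that the HNL decays between distances
    d1_m <= d2_m (metres) along its line of flight, so that the daughter
    active neutrino can hit the detector.

    beta, gamma : HNL velocity (in units of c) and Lorentz factor
    tau0_s      : HNL proper lifetime in seconds
    """
    c = 2.998e8  # m/s
    lam = beta * c * gamma * tau0_s  # mean lab-frame decay length in metres
    return math.exp(-d1_m / lam) - math.exp(-d2_m / lam)


# Illustrative numbers only (not taken from the paper):
print(acceptance_weight(d1_m=500.0, d2_m=579.0, beta=0.999, gamma=20.0, tau0_s=1e-7))
```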
For illustrative purposes, we present in Fig. 6 the scheme of the explained above, for the case when the HNL moves along the beam axis. It is clear that our analysis is general and takes into account the tridimensional shape of the LArTPC and all the possible ways in which an active neutrino might enter the detector, including cases where the HNL is outside the detector coverage.
FIG. 6. An HNL N propagates and decays into an active neutrino ν_µ. If the HNL decays between positions 1 and 2, the active neutrino ν_µ hits the LArTPC.
It is important to mention that when we deactivate the HNL production, we reproduce the (pure SM) active neutrino fluxes arriving at the LArTPC predicted by the DUNE Collaboration [28].
In Fig. 7 we display the average HNLs' decay positions measured from the target and projected along the Z-axis for |U µ4 | 2 = 10 −4 and |U µ4 | 2 = 10 −1 and for Dirac and Majorana HNLs. The dotted line represents the position of the LArTPC, which is located at z = 574 m. Given that the lifetime of the HNL is inversely proportional to |U µ4 | 2 , we can see that, as long as the mixing decreases, the average decay positions at Z increase. In the mass range we studied, for |U µ4 | 2 = 10 −4 , on average, all the HNLs decay behind the LArTPC; hence, one active neutrino is lost in the DUNE flux at the LArTPC per each HNL produced. On the other hand, for |U µ4 | 2 = 10 −1 , the average HNL decay position coincides with the LArTPC location at m N ≈ 255 MeV, which implies that, above this mass, the HNLs decay mainly before the detector.
We also note that in both cases there is a small increase in the average decay positions around 30 MeV. This happens because the production of HNLs from pion decays becomes kinematically forbidden around this mass and decays from kaons start to dominate. This makes the average HNL more energetic, and therefore it can travel larger distances before decaying.
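The trend in Fig. 7 follows from the lifetime scaling stated above, τ_0 ∝ 1/|U_µ4|^2 for a single non-zero mixing, since the total width grows linearly with the mixing. The snippet below is a rough illustration of that scaling written by us; the reference lifetime, reference mixing and kinematic values are hypothetical and not numbers from the paper.

```python
def mean_decay_z(z_production_m, beta, gamma, tau_ref_s, u2_ref, u2):
    """Rough mean lab-frame decay position along the beam axis, assuming the
    proper lifetime scales as tau0 = tau_ref * (u2_ref / u2), i.e. the total
    width is proportional to |U|^2 for a single non-zero mixing."""
    c = 2.998e8  # m/s
    tau0 = tau_ref_s * (u2_ref / u2)
    return z_production_m + beta * c * gamma * tau0


# Hypothetical reference values, only to show the 1/|U|^2 dependence:
for u2 in (1e-1, 1e-4):
    print(u2, mean_decay_z(z_production_m=50.0, beta=0.99, gamma=10.0,
                           tau_ref_s=1e-7, u2_ref=1e-2, u2=u2))
```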
D. Oscillation effects in Active Neutrinos from meson decays
The existence of HNLs forces us to modify the neutrino oscillation probabilities. Therefore, the effects of neutrino oscillations have to be taken into account in our simulations. Particularly, the place where neutrino oscillations can affect our results is in the disappearance of active neutrinos produced in meson decays. The survival probability of these active neutrinos is given by
P_{ν_α→ν_α} = −4 (1 − |U_{α4}|^2) |U_{α4}|^2 sin^2(1.27 m_N^2 L/E) e^{−Γ_4 L/2}
            + 2 (1 − |U_{α4}|^2) |U_{α4}|^2 e^{−Γ_4 L/2} + (1 − |U_{α4}|^2)^2 + |U_{α4}|^4 e^{−Γ_4 L},   (14)
where E represents the energy of the active neutrino, L the distance that it travels before reaching the DUNEND, Γ 4 the decay rate of the HNL and we have considered that the mass of the active neutrino is negligible when compared to the HNL mass m N . This survival probability will effectively decrease the number of active neutrinos that reach the DUNE ND and the number of neutrino CC events at the Near Detector Complex. We incorporated Eq. (14) in our simulations as an extra weight for each active neutrino.
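A direct transcription of Eq. (14) is given below. This is our own sketch, not the analysis code, and the unit conventions spelled out in the comments are assumptions on our part, since the text does not state them explicitly.

```python
import math

def survival_probability(u4_sq, m_n_ev, L_km, E_gev, gamma4_per_km):
    """Active-neutrino survival probability of Eq. (14).

    Assumed conventions: m_n_ev in eV so that 1.27*m_N^2*L/E follows the usual
    eV^2 * km / GeV phase convention, and gamma4_per_km is the HNL decay rate
    expressed as an inverse lab-frame decay length (1/km), so that Gamma_4*L
    is dimensionless.
    """
    osc = math.sin(1.27 * m_n_ev ** 2 * L_km / E_gev) ** 2
    damp_half = math.exp(-gamma4_per_km * L_km / 2.0)
    damp_full = math.exp(-gamma4_per_km * L_km)
    return (-4.0 * (1.0 - u4_sq) * u4_sq * osc * damp_half
            + 2.0 * (1.0 - u4_sq) * u4_sq * damp_half
            + (1.0 - u4_sq) ** 2
            + u4_sq ** 2 * damp_full)


# Illustrative call (numbers are placeholders, not from the paper):
print(survival_probability(u4_sq=1e-2, m_n_ev=1.0, L_km=0.574, E_gev=2.5,
                           gamma4_per_km=0.0))
```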
There is also the possibility of oscillation of HNLs into active neutrinos. However, since the HNL flux is very small when compared to the active neutrino flux, the effects of these oscillations in the neutrino CC event rates are negligible and were not considered in this work.
V. RESULTS
A. Impact on CC events at DUNEND
As we can infer from what we have shown before, the DUNE neutrino flux fired at the DUNEND will be affected by the production of HNLs. Each HNL produced from the decay of its parent meson (or muon) replaces one active neutrino in the SM DUNE neutrino flux. In principle, there is a possibility to recover this active neutrino since the HNL can decay into one or more active ones, which, depending on their direction, may or may not reach the DUNEND. However, as is demonstrated in Fig. 7, it is unlikely that a relevant portion of these spurious active neutrinos would be created before or inside the LArTPC of the DUNEND for the mass range used in this work. This decrease in active neutrinos translates into a decrease in the CC event rates at the LArTPC. Our strategy is to use this deficit of CC events as an indirect signal of the existence (production) of HNLs in the DUNE neutrino flux. Hence, in that sense, we are conducting an indirect search for HNLs. This indirect method for searching for HNLs is complementary to the direct searches [7], which look for HNL decays inside one of the DUNE's detectors. As we will show in the following sections, our method can work comparatively better than direct searches for masses below 10 MeV and is sensitive to masses below 1 MeV, a region primarily inaccessible through direct searches.
The deficit in the total CC event rates depends on the mass of the HNL, the value of |U_α4|^2 and the off-axis position of the detector. In order to have a first estimate of the maximum significance of this deficit allowed by current limits on the mixing parameters, we calculated the active neutrino flux in the presence of HNLs using the maximum values of |U_α4|^2 allowed by accelerator experiments at 90% confidence level [29] and then convoluted these fluxes with GENIE 2.8.4 [30] CC inclusive cross sections. As a point of reference, in Fig. 8 we show the ν_µ CC event rates at the LArTPC for m_N = 1 MeV and |U_µ4|^2 = 10^−2 assuming Majorana neutrinos, on-axis position, 10 years of operation (5 in neutrino and 5 in antineutrino mode) and σ_a = 0. The significance of the change in the number of CC events in each bin is estimated by

N_σ = |N_BSM − N_SM| / √N_SM = |ΔN| / σ,   (15)

where N_SM represents the expected number of CC events assuming only SM interactions and N_BSM the number of CC events when HNLs are produced. As a first approximation, we are also ignoring all normalization uncertainties in the CC event rates, so that σ = √N_SM is the uncertainty in each bin (each bin has a width of 0.25 GeV). Due to the high luminosity of the DUNE experiment, under this setup, the production of HNLs causes a decrease in the total number of CC events on the order of 10^6 events near 2.5 GeV. This implies a deviation from the SM prediction of approximately 100σ around this energy. This indicates that DUNE's sensitivity to |U_µ4|^2 might be beyond the current experimental limits for this particular HNL mass. As the HNL mass increases, its production is suppressed and, consequently, its presence in the active neutrino flux is reduced. As an example of the latter, we display in Fig. 9 the event rates for m_N = 3 MeV and the maximum value allowed for |U_α4|^2 by experiments at 90% confidence level for this mass. In this case, there is a (small) deviation from the SM prediction of less than 1σ. This happens because of the tighter constraint on the mixing parameter.
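Eq. (15) is simple enough to check by hand; the snippet below is ours, with hypothetical bin contents chosen only to illustrate the order of magnitude discussed above.

```python
import math

def bin_significance(n_sm, n_bsm):
    """Per-bin significance of Eq. (15), with purely statistical
    uncertainty sigma = sqrt(N_SM)."""
    return abs(n_bsm - n_sm) / math.sqrt(n_sm)


# Hypothetical bin contents: a deficit of ~1e6 events on top of ~1e8 expected
# events corresponds to ~100 sigma.
print(bin_significance(n_sm=1.0e8, n_bsm=1.0e8 - 1.0e6))  # -> 100.0
```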
In Fig. 10 we can see the behavior of the significance N σ of the total deficit of ν µ CC events at the LArTPC for a wide range of m N , considering 10 years of operation and on-axis position and the ideal case of no systematic uncertainties. For each value of m N , we fix the |U µ4 | 2 at its corresponding maximum allowed value at 90% confidence level. The mass dependence seen in the plot is originated by the dependence of the maximum values of |U µ4 | 2 on m N . This mass dependence is absent in the flat region between 40 eV and 1 MeV, where the mixing parameters are set to 1 since there are no accelerator-based constraints. This figure establishes that, for masses below 1 MeV, the production of HNLs would be high enough for getting a deficit in the total number of CC events compatible with significances of at least 100σ. Above 10 MeV, current limits on the mixing parameters heavily suppress the possible effects of HNLs on the number of CC events at DUNE; hence, the significance drops below 1σ. Thus, DUNE will have good significance for indirect hints of the existence of low mass HNLs such as reductions in the CC events (in comparison to expected ones).
B. Sensitivity
We estimate the sensitivity of DUNE to (m N , |U α4 | 2 ) through the following χ 2 [31]:
χ^2 = a^2/σ_a^2 + [(1 + a) N_tot^BSM − N_tot^SM]^2 / N_tot^SM,   (16)
where N BSM tot represents the total neutrino CC events at the LArTPC when HNLs are produced and N SM tot the DUNE prediction of the pure SM CC events at the LArTPC. The small parameter a encompasses the overall normalization uncertainty σ a due to flux and cross section uncertainties. This parameter is profiled in the calculation of the χ 2 . Our results are presented in Fig. 11. The left panel of this figure shows the estimated DUNE sensitivity to |U µ4 | 2 at 90% confidence level on the LArTPC assuming Majorana neutrinos, ten years of operation (five in neutrino and five in antineutrino mode) and on-axis position. In our analysis, the CC event rates from all neutrino flavors are considered (read the discussion at the end of section II). For masses close to 1 eV, the limits decrease because, for the typical energies and flight distances of active neutrinos at DUNEND, the probability of neutrino oscillations into HNLs tends to zero as the value of m N approaches 1 eV. Right above 1 MeV, the limits start to oscillate since the survival probability of the active neutrinos is sensitive to m N . For masses between 10 eV and 10 MeV, the limits are independent of m N . The latter is because of three factors. The first one is the averaging out of the neutrino oscillations into HNLs for large values of m N . The second one is that, for these very low masses, the total number of HNLs produced is practically independent of m N (see Fig. 5). The other factor is that the HNL lifetime for lower masses is enormous (see Fig.7), decaying all of them far away from the detector without the possibility of leaving a trace on it. As we already know, above m = 33.91 MeV, the production channel π + → µ + N is kinematically forbidden, and there is a sudden loss in the sensitivity. As the mass increases, production from charged kaons starts to dominate. Since charged kaons decay mostly into muon neutrinos, this translates into a small increase in sensitivity up to the end of the curve, which is at 387.81 MeV. In the ideal case of no systematic uncertainties, these limits are competitive with experimental constraints below 5 MeV and with direct searches below 10 MeV. In particular, for σ a = 0, below 10 MeV the limits are equal to |U µ4 | 2 < 2.5×10 −5 . These limits weaken when the value of σ a increases. For instance, for σ a = 0.01 and σ a = 0.2, the sensitivity of DUNE below 10 MeV is around |U µ4 | 2 < 3 × 10 −3 and |U µ4 | 2 < 6.5 × 10 −2 , respectively. We point out that even in the conservative case of σ a = 0.2 our limits are with direct searches below 1.5 MeV.
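For completeness, a minimal sketch of how the χ^2 of Eq. (16) can be profiled over a is given below. This is our own illustration, not the analysis code; the closed-form minimum simply follows from setting the derivative with respect to a to zero, and the example totals are hypothetical.

```python
def chi2_profiled(n_bsm_tot, n_sm_tot, sigma_a):
    """Chi^2 of Eq. (16) with the normalization nuisance parameter a profiled.

    For sigma_a = 0 the penalty term is absent and a is fixed to 0.
    """
    if sigma_a == 0.0:
        a = 0.0
        penalty = 0.0
    else:
        # Stationary point of a^2/sigma_a^2 + ((1+a)*N_B - N_S)^2 / N_S in a.
        a = (n_bsm_tot * (n_sm_tot - n_bsm_tot) / n_sm_tot) \
            / (1.0 / sigma_a ** 2 + n_bsm_tot ** 2 / n_sm_tot)
        penalty = a ** 2 / sigma_a ** 2
    return penalty + ((1.0 + a) * n_bsm_tot - n_sm_tot) ** 2 / n_sm_tot


# Hypothetical totals, only to show how a larger sigma_a relaxes the constraint:
for s in (0.0, 0.01, 0.2):
    print(s, chi2_profiled(n_bsm_tot=0.99e8, n_sm_tot=1.0e8, sigma_a=s))
```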
The right panel of Fig. 11 shows the expected DUNE sensitivity when we turn on |U e4 | 2 being the other ones zero. The rest of the characteristics are the same as for the left panel. In general, the sensitivity pattern is similar to the one observed for the left panel. The limits oscillate close to 1 eV and for higher masses they become mass independent since most HNLs decay behind the LArTPC. Above 10 MeV, the pion decay channel π ± → e ± N starts to dominate because, in contrast to π ± → e ± ( − ) ν e , it is less suppressed by helicity due to the larger size of the HNL mass. This effect decreases the number of both ν e and ν µ CC events according to the suppression factor in Eq. (12), therefore increasing the sensitivity. At around 139 MeV, HNL production from pion decays becomes kinematically forbidden, which translates into a decrease in the sensitivity. Finally, the curve ends when production from kaons is kinematically forbidden at 493.17 MeV. In the ideal case of no systematic uncertainties, these limits are competitive with experimental constraints below 3.5 MeV and with direct searches below 10 MeV. In particular, for σ a = 0, below 10 MeV the limits are equal to |U e4 | 2 < 3 × 10 −5 . For σ a = 0.01 and σ a = 0.2, the sensitivity of DUNE below 10 MeV is around |U e4 | 2 < 2×10 −3 and |U e4 | 2 < 4 × 10 −2 , respectively. Even in the conservative case of σ a = 0.2 our limits are with direct searches below 1.5 MeV.
We must point out that our results are blind to the Dirac or Majorana nature of the HNL. The distinction between Dirac and Majorana HNLs is usually performed in direct searches by analyzing the distributions of charged mesons and leptons produced when the HNL decays inside the detector. We are not looking into the direct search mode since it has already been discussed in [7]. Besides their decay products, Dirac and Majorana HNLs can also be differentiated by their lifetimes due to the factor of two present in Eq. (5). However, this effect is not relevant for us because, for the mass range we studied and small mixings, almost all the HNL decays occur behind the LArTPC, as shown in Fig. 7. Furthermore, as we have discussed in section II, for very low m_N the Dirac and Majorana neutrinos are indistinguishable. Thus, we can conclude that nearly all the active neutrinos produced from the HNL decays are lost independently of the nature of neutrinos. In this way, the critical magnitude in our analysis is the production rate of HNLs, which is independent of the nature of neutrinos, so the deficit of the CC event rates is independent too. Therefore, it would not be possible to distinguish between Dirac or Majorana neutrinos through the approach presented here.
C. Off-axis sensitivity
The DUNE experiment also considers the possibility of moving the DUNE near detectors horizontally, a setup known as DUNE PRISM. We study the impact on our results of moving the LArTPC up to 30 m horizontally. We see in Fig. 12 that, for ten years of operation and σ_a = 0, the sensitivity to |U_µ4|^2 decreases at 30 m off-axis from |U_µ4|^2 < 2.5 × 10^−5 to |U_µ4|^2 < 1.2 × 10^−4. This increase in the limits by a factor of 4.8 is almost the same for all masses. We also show the limits for different values of σ_a. In particular, for σ_a = 0.2 the sensitivity becomes |U_µ4|^2 < 5 × 10^−2, which implies that even for a systematic error of 20%, this off-axis case is still competitive with direct searches below 1.5 MeV and covers a region free of constraints between 40 eV and 1 MeV.
D. Allowed regions for (mN , |Uα4| 2 )
We also explore the potential to constraint the (m N , |U α4 | 2 ) parameter space region in the context of this indirect search. So, assuming that the disappearance CC events are originated by the presence of HNLs within the neutrino beam, we perform a χ 2 analysis fixing our simulation in certain values of (m N , |U α4 | 2 ). The allowed regions for m N = 0.1 MeV and |U µ4 | 2 = 10 −2 are presented in Fig. 13 for σ a = 0.01 (purple) and σ a = 0.025 (orange), were we include the 95% (solid) and 90% (dashed) confidence regions. These regions are bounded to the right, but extend to the left up to m N = 1 eV, a mass degeneracy that reflects the fact that our approach is not sensitive to m N for low masses. For the case σ a = 0.01 (purple) the 95% confidence region is sufficiently small that it is possible to constraint |U µ4 | 2 within an uncertainty of 40%. However, when we include larger systematic uncertainties (orange), we found that even for σ a = 0.025 there is a huge degeneracy both in |U µ4 | 2 and m N , a result that was expected since a 2.5% normalization uncertainty makes it very difficult to differentiate the effects of different points of the parameter space.
VI. CONCLUSIONS
The cornerstone of this work is to assume that the parent mesons, whose decay products compose a neutrino beam, can decay into Heavy Neutral Leptons, resulting in the disappearance of ν_µ and ν_e CC events in the LArTPC of the DUNEND.
For five years per mode (neutrino/antineutrino), on-axis configuration and the ideal scenario of no systematic uncertainties, we obtain limits of |U_µ4|^2 < 2.5 × 10^−5 and |U_e4|^2 < 3 × 10^−5 for masses below 10 MeV. In a more realistic case of a 10% overall normalization uncertainty we get limits of |U_µ4|^2 < 3.2 × 10^−2 and |U_e4|^2 < 2 × 10^−2 below 1.5 MeV. We also included a more pessimistic scenario of a 20% systematic uncertainty and were still able to set bounds of |U_µ4|^2 < 6.5 × 10^−2 and |U_e4|^2 < 4 × 10^−2 below 1.4 MeV. These limits are better than the ones predicted by DUNE direct searches or are even placed in mass regions inaccessible to them. These bounds are still competitive for the off-axis configuration. Besides, we explore the capacity of determining the allowed parameter space region (m_N, |U_α4|^2) for some specific pair values, obtaining uncertainties on the order of 40% for m_N = 0.1 MeV and |U_µ4|^2 = 10^−2 for a 1% systematic uncertainty; however, for a systematic uncertainty of 2.5%, we found that the degeneracy is too large to reliably constrain the values of (m_N, |U_µ4|^2). Finally, it is worth noting that the disappearance of CC events as an HNL signature is complementary to the direct observation of HNL decays, showing an attractive potential to be used in neutrino Near Detectors with high ν CC event rates.
FIG. 1. Branching ratios of the dominant HNL production channels for |U_µ4|^2 = 1.
FIG. 2. Suppression factor K^µ(m_N, |U_µ4|^2 = 10^−4) of muon neutrino production as a function of m_N.
FIG. 4. Spectra of light particles capable of producing HNLs in the DUNE beam. Different bin widths have been used for different particles.
FIG. 5. Heavy neutral leptons produced from mesons in one year in neutrino mode for |U_µ4|^2 = 10^−4.
FIG. 7. Average HNL decay positions projected along the Z axis for |U_µ4|^2 = 10^−4 and |U_µ4|^2 = 10^−1. The dotted line represents the position of the LArTPC.
FIG. 8. ν_µ CC event rates for m_N = 1 MeV assuming the maximum value allowed for |U_µ4|^2 at 90% confidence level, on-axis position, 10 years of operation and σ_a = 0. The error bars are amplified by 100.
FIG. 9. ν_µ CC event rates for m_N = 3 MeV assuming the maximum value allowed for |U_µ4|^2 at 90% confidence level, on-axis position, 10 years of operation and σ_a = 0.
FIG. 10. Significance of the maximum possible deficit of ν_µ CC events at the LArTPC due to HNLs for σ_a = 0. Maximum values of |U_α4|^2 allowed by experiments at 90% confidence level, 10 years of operation and on-axis position were assumed.
FIG. 11. Estimated limits of DUNE to |Uµ4| 2 (left, red) and |Ue4| 2 (right, blue) at 90% confidence level by CC events disappearance at the LArTPC of the DUNEND, for 10 years of operation (5 in neutrino and 5 in antineutrino mode) and on-axis position. The regions of experimental constraints (gray) were taken from [29, 32, 33]. The estimated sensitivity of DUNE obtained in [7] by direct searches of HNL decays is shown for comparison.
FIG. 12. Comparison between on-axis and 30 m off-axis estimated sensitivities of DUNE to |Uµ4| 2 at 90% confidence by neutrino CC events disappearance for 10 years of operation (5 in neutrino and 5 in antineutrino mode). The regions of experimental constraints were taken from [29, 32, 33] and the estimated sensitivity of DUNE by direct searches from [7].
FIG. 13. χ^2 regions for 90% (dashed) and 95% (solid) confidence levels for m_N = 0.1 MeV, |U_µ4|^2 = 10^−2, 10 years of operation (5 in neutrino and 5 in antineutrino mode) and on-axis position. The purple curves represent the regions for σ_a = 0.01 and the orange ones for σ_a = 0.025.
TABLE I. Channels considered for the production of HNLs. The maximum possible value of m_N is shown for each channel. Charged conjugate channels were also considered.

Channel                m_N (MeV)    Channel                m_N (MeV)
µ+ → e+ ν_e ν̄_µ        105.14       K+ → µ+ ν_µ            387.81
π+ → µ+ ν_µ             33.91       K+ → π0 e+ ν_e         358.19
π+ → e+ ν_e            139.06       K+ → π0 µ+ ν_µ         253.04
K0_L → π± e∓ ν_e       357.12       K+ → e+ ν_e            493.17
K0_L → π± µ∓ ν_µ       252.38
TABLE II. HNL decay channels considered in this work. The minimum required value of m_N is shown for each channel.

Channel           Threshold [MeV]    Channel           Threshold [MeV]
ν ν ν             10^−9              e∓ π±             140.08
ν e+ e−           1.02               ν µ+ µ−           211.32
ν e± µ∓           106.17             µ∓ π±             245.23
ν π0              134.98             e∓ K±             494.19
M. Drewes, The Phenomenology of Right Handed Neutrinos, Int. J. Mod. Phys. E 22, 1330019 (2013).
A. Atre, T. Han, S. Pascoli, and B. Zhang, The Search for Heavy Majorana Neutrinos, JHEP 05, 030 (2009).
S. F. King, Neutrino mass models, Rept. Prog. Phys. 67, 107 (2004).
M. Drewes et al., A White Paper on keV Sterile Neutrino Dark Matter, JCAP 01, 025 (2017).
S. Davidson, E. Nardi, and Y. Nir, Leptogenesis, Phys. Rept. 466, 105 (2008).
K. N. Abazajian et al., Light Sterile Neutrinos: A White Paper, arXiv:1204.5379 [hep-ph] (2012).
J. M. Berryman, A. de Gouvea, P. J. Fox, B. J. Kayser, K. J. Kelly, and J. L. Raaf, Searches for Decays of New Particles in the DUNE Multi-Purpose Near Detector, JHEP 02, 174 (2020).
R. E. Shrock, New Tests For, and Bounds On, Neutrino Masses and Lepton Mixing, Phys. Lett. B 96, 159 (1980).
R. E. Shrock, General Theory of Weak Leptonic and Semileptonic Decays. 1. Leptonic Pseudoscalar Meson Decays, with Associated Tests For, and Bounds on, Neutrino Masses and Lepton Mixing, Phys. Rev. D 24, 1232 (1981).
A. Aguilar-Arevalo et al. (PIENU), Search for heavy neutrinos in π → µν decay, Phys. Lett. B 798, 134980 (2019).
Artamonov et al. (E949 Collaboration), Search for heavy neutrinos in K+ → µ+ νH decays, Phys. Rev. D 91, 052001 (2015).
B. Abi et al. (DUNE), Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume I: Introduction to DUNE, JINST 15 (08), T08008 (2020).
B. Abi et al. (DUNE), Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume II: DUNE Physics, arXiv:2002.03005 [hep-ex] (2020).
C. Giganti, S. Lavignac, and M. Zito, Neutrino oscillations: The rise of the PMNS paradigm, Prog. Part. Nucl. Phys. 98, 1 (2018).
M. Gronau, C. N. Leung, and J. L. Rosner, Extending Limits on Neutral Heavy Leptons, Phys. Rev. D 29, 2539 (1984).
K. Bondarenko, A. Boyarsky, D. Gorbunov, and O. Ruchayskiy, Phenomenology of GeV-scale Heavy Neutral Leptons, JHEP 11, 032 (2018).
J. D. Richman and P. R. Burchat, Leptonic and semileptonic decays of charm and bottom hadrons, Rev. Mod. Phys. 67, 893 (1995).
P. Ballett, T. Boschi, and S. Pascoli, Heavy Neutral Leptons from low-scale seesaws at the DUNE Near Detector, JHEP 03, 111 (2020).
A. Abada, D. Bečirević, O. Sumensari, C. Weiland, and R. Z. Funchal, Sterile neutrinos facing kaon physics experiments, Phys. Rev. D 95, 075023 (2017).
B. Kayser and R. E. Shrock, Distinguishing Between Dirac and Majorana Neutrinos in Neutral Current Reactions, Phys. Lett. B 112, 137 (1982).
A. Abed Abud et al. (DUNE), Deep Underground Neutrino Experiment (DUNE) Near Detector Conceptual Design Report, Instruments 5, 31 (2021).
S. Agostinelli et al. (GEANT4), GEANT4-a simulation toolkit, Nucl. Instrum. Meth. A 506, 250 (2003).
J. Allison et al., Recent developments in Geant4, Nucl. Instrum. Meth. A 835, 186 (2016).
A. Ferrari, P. R. Sala, A. Fasso, and J. Ranft, FLUKA: A multi-particle transport code (Program version 2005), doi:10.2172/877507 (2005).
T. T. Böhlen, F. Cerutti, M. P. W. Chin, A. Fassò, A. Ferrari, P. G. Ortega, A. Mairani, P. R. Sala, G. Smirnov, and V. Vlachoudis, The FLUKA Code: Developments and Challenges for High Energy and Medical Applications, Nucl. Data Sheets 120, 211 (2014).
T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen, and P. Z. Skands, An introduction to PYTHIA 8.2, Comput. Phys. Commun. 191, 159 (2015).
B. Abi et al. (DUNE), Experiment Simulation Configurations Approximating DUNE TDR, arXiv:2103.04797 [hep-ex] (2021).
P. D. Bolton, F. F. Deppisch, and P. S. Bhupal Dev, Neutrinoless double beta decay versus other probes of heavy sterile neutrinos, JHEP 03, 170 (2020).
C. Andreopoulos et al., The GENIE Neutrino Monte Carlo Generator, Nucl. Instrum. Meth. A 614, 87 (2010).
I. Bischer and W. Rodejohann, General Neutrino Interactions at the DUNE Near Detector, Phys. Rev. D 99, 036006 (2019).
D. A. Bryman and R. Shrock, Improved Constraints on Sterile Neutrinos in the MeV to GeV Mass Range, Phys. Rev. D 100, 053006 (2019).
C. A. Argüelles, N. Foppiani, and M. Hostert, Heavy neutral leptons below the kaon mass at hodoscopic neutrino detectors, Phys. Rev. D 105, 095006 (2022).
| []
|
[
"Detecting Human-Object Interactions with Action Co-occurrence Priors",
"Detecting Human-Object Interactions with Action Co-occurrence Priors"
]
| [
"Dong-Jin Kim \nKAIST\nSouth Korea\n",
"Xiao Sun \nMicrosoft Research\n\n",
"Jinsoo Choi \nKAIST\nSouth Korea\n",
"Stephen Lin \nMicrosoft Research\n\n",
"In So Kweon [email protected] \nKAIST\nSouth Korea\n",
"/ Dong-Jinkim ",
"/ Actioncooccurrencepriors "
]
| [
"KAIST\nSouth Korea",
"Microsoft Research\n",
"KAIST\nSouth Korea",
"Microsoft Research\n",
"KAIST\nSouth Korea"
]
| []
| A common problem in human-object interaction (HOI) detection task is that numerous HOI classes have only a small number of labeled examples, resulting in training sets with a long-tailed distribution. The lack of positive labels can lead to low classification accuracy for these classes. Towards addressing this issue, we observe that there exist natural correlations and anti-correlations among human-object interactions. In this paper, we model the correlations as action co-occurrence matrices and present techniques to learn these priors and leverage them for more effective training, especially on rare classes. The utility of our approach is demonstrated experimentally, where the performance of our approach exceeds the state-of-the-art methods on both of the two leading HOI detection benchmark datasets, HICO-Det and V-COCO. | 10.1007/978-3-030-58589-1_43 | [
"https://arxiv.org/pdf/2007.08728v2.pdf"
]
| 220,633,430 | 2007.08728 | 47e2407ee34d1a9da09ff4a3f8ce8de3a5ba7fb1 |
Detecting Human-Object Interactions with Action Co-occurrence Priors
Dong-Jin Kim
KAIST
South Korea
Xiao Sun
Microsoft Research
Jinsoo Choi
KAIST
South Korea
Stephen Lin
Microsoft Research
In So Kweon [email protected]
KAIST
South Korea
Detecting Human-Object Interactions with Action Co-occurrence Priors
A common problem in human-object interaction (HOI) detection task is that numerous HOI classes have only a small number of labeled examples, resulting in training sets with a long-tailed distribution. The lack of positive labels can lead to low classification accuracy for these classes. Towards addressing this issue, we observe that there exist natural correlations and anti-correlations among human-object interactions. In this paper, we model the correlations as action co-occurrence matrices and present techniques to learn these priors and leverage them for more effective training, especially on rare classes. The utility of our approach is demonstrated experimentally, where the performance of our approach exceeds the state-of-the-art methods on both of the two leading HOI detection benchmark datasets, HICO-Det and V-COCO.
Introduction
Human-object interaction (HOI) detection aims to localize humans and objects in an image and infer the relationships between them. An HOI is typically represented as a human-action-object triplet with the corresponding bounding boxes and classes. Detecting these interactions is a fundamental challenge in visual recognition that requires both an understanding of object information and highlevel knowledge of interactions.
A major issue that exists in HOI detection is that its datasets suffer from long-tailed distributions, in which many HOI triplets have few labeled instances. Similar to datasets for general visual relationship detection (VRD) [37], a reason for this is missing labels, where the annotation covers only a subset of the interactions present in an image. For the widely-used HICO-Det dataset [3], 462 out of the 600 HOI classes have fewer than 10 training samples. For such classes, the lack of positive labels can lead to inadequate training and low classification performance. How to alleviate the performance degradation on rare classes is thus a key issue in HOI detection.
To address the problem of long-tailed distributions, we propose to take advantage of natural co-occurrences in human actions. For example, the HOI of 'operate-hair dryer' is rarely labeled and consequently hard to detect in the left image of Fig. 1. However, 'operate-hair dryer' often occurs when the more commonly labeled HOI of 'hold-hair dryer' is present. As a result, detection of 'operate-hair dryer' can be facilitated by detection of 'hold-hair dryer' in an image. On the other hand, the detection of an HOI may preclude other incompatible HOIs, such as for 'cut-cake' and 'blow-cake' in the right image of Fig. 1.
Fig. 1. Examples of action co-occurrence. The marginal/conditional probability values are computed from the distribution of the training labels. Intuitively, detection of rarely labeled HOIs (operate-hair dryer) can be facilitated by detection of commonly co-occurring HOIs (hold-hair dryer). Also, non-detection of rare HOIs (blow-cake) can be aided by detection of incompatible HOIs (cut-cake). We leverage this intuition as a prior to learn an HOI detector effective on long-tailed datasets.
In this paper, we introduce the new concept of utilizing co-occurring actions as prior knowledge, termed as action co-occurrence priors (ACPs), to train an HOI detector. In contrast to the language-based prior knowledge which requires external data sources [27,37,61], the co-occurrence priors can be easily obtained from the statistics of the target dataset. We also propose two novel ways to exploit them. First, we design a neural network with hierarchical structure where the classification is initially performed with respect to action groups. Each action group is defined by one anchor action, where the anchor actions are mutually exclusive according to the co-occurrence prior. Then, our model predicts the fine-grained HOI class within the action group. Second, we present a technique that employs knowledge distillation [20] to expand HOI labels so they can have more positive labels for potentially co-occurring actions. During training, the predictions are regularized by the refined objectives to improve robustness, especially for classes in the data distribution tail. To the best of our knowledge, we are the first to leverage the label co-occurrence in HOI detection to alleviate long-tailed distribution.
The main contributions of this work can be summarized as: (1) The novel concept of explicitly leveraging correlations among HOI labels to address the problem of long-tailed distributions in HOI detection; (2) Two orthogonal ways to leverage action co-occurrence priors, namely through a proposed hierarchical architecture and HOI label expansion via knowledge distillation. The resulting model is shown to be consistently advantageous in relation to state-of-the-art techniques on both the HICO-Det [3] and V-COCO [16] benchmark datasets.
Related Work
Human-Object Interaction Human-Object Interaction was originally studied in the context of recognizing the function or 'affordance' of objects [6,12,14,15,46].
Early works focus on learning more discriminative features combined with variants of SVM classifiers [7,8,57], and leverage the relationship with human poses for better representation [7,8,59] or mutual context modeling [58].
Recently, a completely data-driven approach based on convolutional neural networks (CNNs) has brought dramatic progress to HOI. Many of the pioneering works created large scale image datasets [3,4,16,68] to set new benchmarks in this field. Henceforth, significant progress have been seen in using CNNs for this problem [1,3,11,13,18,25,32,33,35,43,45,49,50,52,53,54]. Most of these works follow a two-step scheme of CNN feature extraction and multi-information fusion, where the multiple information may include human and object appearance [3,11,13,43]; box relation (either box configuration or spatial map) [1,3,13,18,54]; object category [1,18,41]; human pose [18,32,33]; and particularly, linguistic prior knowledge [25,41]. More recent works tend to combine these various cues [18,33,50,53]. These works differ from one another mainly in their techniques for exploiting external knowledge priors. Kato et al . [25] incorporate information from Word-Net [39] using a Graph Convolutional Network (GCN) [29] and learn to compose new HOIs. Xu et al . [54] also use a GCN to model the general dependencies among actions and object categories by leveraging a VRD dataset [37]. Li et al . [33] utilize interactiveness knowledge learned across multiple HOI datasets. Peyre et al . [41] transfer knowledge from triplets seen at training to new unseen triplets at test time by analogy reasoning.
Different from the previous works, our approach is to reformulate the target action label space and corresponding loss function in a manner that leverages co-occurrence relationships among action classes for HOI detection. In principle, the proposed technique is complementary to all of the previous works and can be combined with any of them. For our experiments, we implemented our approach on a baseline presented in [18], with details given in Sec. 3.3.
Visual Relationship Detection The closest problem to HOI detection is Visual Relationship Detection (VRD) [5,31,42,56,60,61,62,64,65,67], which deals with general visual relationships between two arbitrary objects. In the VRD datasets [30,37], the types of visual relationships that are modeled include verb (action), preposition, spatial and comparative phrase. The two tasks share common challenges such as long-tail distributions or even zero-shot problems [37]. Our work focuses on HOI detection, as co-occurrences of human-object interactions are often strong, but the proposed technique could be extended to model the general co-occurrences that exist in visual relationships.
Label Hierarchy in Multi-label Learning The hierarchical structure of label categories has long been exploited for multi-label learning in various vision tasks, e.g., image/object classification [9,55], detection [23,38], and human pose estimation [47,48]. In contrast, label hierarchy has rarely been considered in HOI detection. Inspired by previous work [9] that uses Hierarchy and Exclusion (HEX) graphs to encode flexible relations between object labels, we present the first method to take advantage of an action label hierarchy for HOI recognition. While label hierarchies have commonly been used, our method is different in that it is defined by co-occurrences (rather than semantics or a taxonomy [9]). This co-occurrence based hierarchy can be determined statistically, without direct human supervision.
Fig. 2. Examples of co-occurrence matrices constructed for several objects (bicycle, boat, dog). Along the Y-axis is the given action, and the X-axis enumerates conditional actions. Each element represents the conditional probability that an action occurs when another action is happening.
Proposed Method
Our method for utilizing the co-occurrence information of HOI labels consists of three key components: (1) establishing action co-occurrence priors (Sec. 3.1), (2) hierarchical learning including anchor action selection (Sec. 3.2) and devising the hierarchical architecture (Sec. 3.3), and (3) ACP projection for knowledge distillation (Sec. 3.4).
Action Co-occurrence Priors
Here, we formalize the action co-occurrence priors. The priors for the actions are modeled by a co-occurrence matrix C ∈ R^(N×N), where an entry c_ij in C represents the conditional probability that action j occurs when action i is happening:
c_ij = p(j|i),  i, j ∈ [0, N),   (1)
where N denotes the total number of action classes and i, j are indices of two actions. C is constructed from the target HOI detection dataset by counting the image-level statistics of its training labels. Examples of co-occurrence matrices constructed for single objects are visualized in Fig. 2. Meanwhile, we also consider the complementary event of action i (i.e., where the i-th action does not occur) and denote it as i', such that p(i') + p(i) = 1. The complementary action co-occurrence matrix C' ∈ R^(N×N) can thus be defined by entries c'_ij in C' that represent the conditional probability that an action j occurs when another action i does not occur:
c'_ij = p(j|i'),  i, j ∈ [0, N).   (2)
It can be seen from Fig. 2 that different types of relationships can exist between actions. These relationships can be divided into three types. The first type is the prerequisite relationship, where the given action is highly likely to co-occur with the conditional action. For example, the HOI 'sit on-bicycle' is a prerequisite of the HOI 'ride-bicycle'. In this case, p(sit on-bicycle|ride-bicycle) is close to 1. Next is exclusion, where the given action is highly unlikely to co-occur with the conditional action. An example is that the HOI 'wash-bicycle' and the HOI 'ride-bicycle' are unlikely to happen together. As a result, p(wash-bicycle|ride-bicycle) is close to 0. Finally, there is overlapping, where the given action and conditional action may possibly co-occur, for example the HOI 'hold-bicycle' and the HOI 'inspect-bicycle', such that p(hold-bicycle|inspect-bicycle) is in between 0 and 1.
The strong relationships that may exist between action labels can provide strong priors on the presence or absence of an HOI in an image. In contrast to previous works where models may implicitly learn label co-occurrence via relational architectures [2,63], we explicitly exploit these relationships between action labels as priors, to effectively train our model especially for rare HOIs.
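To make the construction of these priors concrete, the following sketch shows one way to estimate C and C' from image-level multi-hot action labels, following Eqs. (1) and (2). It is an illustration rather than the authors' released implementation; the names (build_cooccurrence_priors, image_labels) and the epsilon guard are our own, and in practice the same counting would be repeated per object class to obtain the object-specific matrices used later.

import numpy as np

def build_cooccurrence_priors(image_labels, eps=1e-8):
    # image_labels: (num_images, N) multi-hot matrix of image-level action labels
    Y = np.asarray(image_labels, dtype=np.float64)
    n_img, n_act = Y.shape
    count_i = Y.sum(axis=0)                      # number of images where action i occurs
    count_ij = Y.T @ Y                           # number of images where i and j co-occur
    count_noti_j = count_i[None, :] - count_ij   # j occurs while i does not
    C = count_ij / (count_i[:, None] + eps)                     # Eq. (1): c_ij = p(j | i)
    C_prime = count_noti_j / (n_img - count_i[:, None] + eps)   # Eq. (2): c'_ij = p(j | i')
    return C, C_prime

# toy usage: 5 images, 3 action classes
labels = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 0], [0, 0, 1]])
C, C_prime = build_cooccurrence_priors(labels)
print(C[0, 1])   # p(action 1 | action 0) = 2/3 in this toy example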
Anchor Action Selection via Non-Exclusive Action Suppression
From a co-occurrence matrix for an object, it can be seen that some actions are close in semantics or commonly co-occur while others are not. Intuitively, closely related actions (e.g., 'sit on-bicycle' and 'straddle-bicycle') tend to be harder to distinguish from each other. If the positive labels for these actions are rare, then they become even more difficult to distinguish. Such cases require finegrained recognition [10] and demand more dedicated classifiers. This motivates us to learn HOI classes in a coarse-to-fine manner. Specifically, we first identify a set of mutually exclusive action classes, called anchor actions, which tend to be distinguishable from one another. The anchor actions will be used to partition the entire action label space into fine-grained sub-spaces. The other action classes will be attributed to one or more sub-spaces and recognized in the context of a specific anchor action. In summary, unlike previous HOI detection works which predict action probabilities independently of one another, we divide the whole action label set into two sets, one for anchor actions and one for regular actions, which are modeled in different ways as explained in detail in Sec. 3.3.
In selecting anchor actions, we seek a set of action classes that are exclusive of one another. Toward this end, we define the exclusiveness of an action class i as the number of actions that will never occur if action i is happening, e_i = Σ_j 1[c_ij = 0]. e_i will have a high value if few other actions can occur when i does. Based on exclusiveness values, the anchor action label set D is generated through non-exclusive suppression (NES) as described in Alg. 1. It iteratively finds the most exclusive action class as an anchor action and removes remaining action classes that are not exclusive to the selected anchor actions. The anchors in the list are action classes that never occur together in the training labels. For example, if an action such as 'toast' is on the anchor list, then actions like 'stand' and 'sit' cannot be on the list because they may co-occur with 'toast', while actions such as 'hunt' or 'hop on' can potentially be on the list. While there may exist other ways the anchor action selection could be done, we empirically found this approach to be simple, effective (detection accuracy), and efficient (less than 0.01 second).
The anchor action label set acts as a finite partition of the action label space (a set of pairwise disjoint events whose union is the entire action label space). To form a complete action label space, we add an 'other' anchor action, denoted as O, for when an action class does not belong to D. Finally, we get |D| + 1 anchor actions including D and the 'other' action class O. There are several benefits to having this anchor action label set. First, only one anchor action can happen at one time between a given human and object. Thus, we can use the relative (one hot) probability representation with softmax activation, whose features were shown to compare well against distance metric learningbased features [21]. Second, anchor actions tend to be easier to distinguish from one another since they generally have prominent differences in an image. Third, it decomposes the action label space into several sub-spaces, which facilitates a coarse-to-fine solution. Each sub-task will have a much smaller solution space, which can improve learning. Finally, each sub-task will use a standalone sub-network which focuses on image features specific to the sub-task, which is an effective strategy for fine-grained recognition [10].
Algorithm 1: Non-Exclusive Suppression (NES) algorithm for mutually exclusive anchor action selection.
Input: E = {e_i, i ∈ [0, N)}, C = {c_ij, i, j ∈ [0, N)};  Output: D
begin
  D ← {}
  while E is not empty do
    # Find the most exclusive action class
    m ← argmax E;  D ← D ∪ {m}
    for e_k ∈ E do
      # Remove the action classes correlated (not exclusive) to m
      if c_mk > 0 then
        E ← E − e_k;  C ← C − {c_ij, i or j = k}
end
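For reference, Alg. 1 transcribes directly into a few lines of Python. The sketch below assumes C is the dense N×N matrix of Eq. (1) with c_ii = 1, so the selected anchor m is removed from the candidate set together with every action that can co-occur with it; ties in the argmax are broken arbitrarily.

import numpy as np

def non_exclusive_suppression(C):
    # exclusiveness e_i: number of actions that never co-occur with action i
    C = np.asarray(C, dtype=np.float64)
    n = C.shape[0]
    exclusiveness = {i: int(np.sum(C[i] == 0)) for i in range(n)}
    anchors = []
    while exclusiveness:
        # find the most exclusive remaining action class and make it an anchor
        m = max(exclusiveness, key=exclusiveness.get)
        anchors.append(m)
        # remove the action classes correlated (not exclusive) to m;
        # since c_mm = 1 > 0, m itself is removed from the candidates as well
        for k in list(exclusiveness.keys()):
            if C[m, k] > 0:
                del exclusiveness[k]
    return anchors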
After selecting anchor actions, the entire action label set A is divided into the anchor action label set D and the remaining set of 'regular' action classes R, so that A = {D, R}. Each of the regular action classes is then associated with one or more anchor actions to form |D| + 1 action groups G = {G i ; i ∈ D ∪ O}, one for each anchor action. A regular action class j ∈ R will be assigned to the group of anchor action i (G i ) if action j is able to co-occur with the anchor action i,
j ∈ G_i, if c_ij > 0  (i ∈ D ∪ O, j ∈ R).   (3)
Note that the anchor actions themselves are not included in the action groups and a regular action j can be assigned to multiple action groups since it may co-occur with multiple anchor actions.
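The group assignment of Eq. (3) then amounts to a membership test against the anchors. The sketch below is a simplified reading of the text: regular actions that co-occur with no selected anchor are placed in the extra 'other' group, which is one plausible way to realize the O group described above; the function and its names are our own.

def build_action_groups(C, anchors):
    # anchors: list D returned by non_exclusive_suppression; C: co-occurrence matrix
    n = len(C)
    regular = [j for j in range(n) if j not in anchors]
    groups = {i: [] for i in anchors}
    groups['other'] = []
    for j in regular:
        assigned = False
        for i in anchors:
            if C[i][j] > 0:              # Eq. (3): j can co-occur with anchor i
                groups[i].append(j)
                assigned = True
        if not assigned:
            groups['other'].append(j)    # j never co-occurs with any anchor
    return groups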
Hierarchical Architecture
We implemented our hierarchical approach upon the 'No-Frills' (NFs) baseline presented in [18] on account of its simplicity, effectiveness, and code availability [17]. Here, we give a brief review of the NFs architecture.
Baseline Network NFs follows the common scheme of CNN feature extraction followed by multi-information fusion. It uses the off-the-shelf Faster R-CNN [44] object detector with ResNet152 [19] backbone network to detect human and object bounding boxes. As illustrated in Fig. 3, the multiple information used in [18] (denoted as X) are fed through four separate network streams to generate fixed dimension features. Then, all the features are added together and sent through a sigmoid activation to get the action probability prediction Â:
Â = sigmoid(F(X)) ∈ R^N,   (4)
where Â(a) = p(a|X) represents the probability prediction for action class a.
To eliminate training-inference mismatch, NFs directly optimizes the HOI class probabilities instead of separating the detection and interaction losses as done in [11,13]. The final HOI prediction is a joint probability distribution over M number of HOI classes computed from the probabilities Ĥ, Ô, and Â for human, object, and action, respectively:
Ŷ = joint(Ĥ, Ô, Â) ∈ R^M.   (5)
Specifically, for a HOI class (h, o, a),
Ŷ(h, o, a) = Ĥ(h) * Ô(o) * Â(a) = p(h|I) * p(o|I) * p(a|X),   (6)
where Ĥ(h) = p(h|I) and Ô(o) = p(o|I) are the probability of a candidate box pair being a human h and object o, provided by the object detector [44]. Finally, the binary cross-entropy loss L(Ŷ, Y^gt) is directly computed from the HOI prediction. This No-Frills baseline network is referred to as Baseline.
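As a concrete illustration of Eqs. (5) and (6), the snippet below composes the HOI score for one candidate human-object pair from the detector confidences and the action probabilities, and applies the binary cross-entropy loss used by the baseline. It is a schematic re-implementation with assumed tensor shapes and names, not the released No-Frills code.

import torch
import torch.nn.functional as F

def hoi_scores(h_prob, o_prob, a_prob):
    # Eq. (6): Y_hat(h, o, a) = H_hat(h) * O_hat(o) * A_hat(a) for one candidate pair;
    # h_prob and o_prob are scalar detector confidences, a_prob is the (N,) action vector
    return h_prob * o_prob * a_prob

h_prob = torch.tensor(0.9)                  # P(candidate box is a person)
o_prob = torch.tensor(0.8)                  # P(candidate box is the object class)
a_prob = torch.sigmoid(torch.randn(4))      # action probabilities as in Eq. (4)
y_hat = hoi_scores(h_prob, o_prob, a_prob)  # (4,) vector of HOI scores for this pair

y_gt = torch.tensor([1., 0., 0., 1.])       # ground-truth HOI labels for this pair
loss = F.binary_cross_entropy(y_hat, y_gt)  # loss L(Y_hat, Y_gt) optimized by the baseline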
Modified Baseline Network For a stronger baseline comparison, we make two simple but very effective modifications on the baseline network. (1) Replace the one-hot representation with the Glove word2vec [40] representation for the object category.
(2) Instead of directly adding up the multiple information, we average them and forward this through another action prediction module to obtain the final action probability prediction. For a naive approach (the Modified Baseline), we simply use a sub-network f_sub of a few FC layers as the action prediction module. Then Eq. (4) is modified to
Â = sigmoid(f_sub(F(X))).   (7)
Our hierarchical architecture further modifies the action prediction module by explicitly exploiting ACP information, which is described in the next paragraph.
Fig. 3. Illustration of our overall network architecture. Our work differs from the baseline [18] by the addition of a hierarchical action prediction module. For our hierarchical architecture, anchor action probability is directly generated by a softmax sub-network. Regular action probability is generated by a matrix multiplication of the anchor probability and the output from a few sigmoid based conditional sub-networks.
Proposed Hierarchical Architecture Now we introduce the action prediction module for our hierarchical architecture (illustrated in Fig. 3) that better exploits the inherent co-occurrence among actions. While the baseline network predicts all the action probabilities directly from F(·) with a single feed-forward sub-network f_sub, we instead use |D| + 2 sub-networks, where one (f_anchor(·)) is first applied to predict the anchor action set and then one of the |D| + 1 other sub-networks (f_{G_i}(·)), which corresponds to the predicted anchor action, is used to estimate the specific action within the action group. Because of the mutually exclusive property of anchor actions, we use the softmax activation for anchor action predictions, while employing the sigmoid activation for regular action predictions conditional to the action groups:
Â_anchor = softmax(f_anchor(F(X))) ∈ R^(|D|+1),   (8)
Â_{G_i} = sigmoid(f_{G_i}(F(X))) ∈ R^(N−|D|), where i ∈ D ∪ O,   (9)
where Â_anchor(i) is directly used as the final probability prediction for the anchor actions (p(i|X) = Â_anchor(i), i ∈ D). We let Â_{G_i}(j) represent the learned conditional probability that action j occurs when action i is happening (p(j|i, X) = Â_{G_i}(j)). Since the anchor action set is a finite partition of the entire action label space, the probability of a regular action j can be predicted according to the law of total probability:
Â_regular(j) = p(j|X) = Σ_{i∈D∪O} p(i|X) * p(j|i, X) = Σ_{i∈D∪O} Â_anchor(i) * Â_{G_i}(j),   (10)
where j ∈ R. Thus, instead of Eq. (7), we obtain the final action probability predictions for our hierarchical architecture, Â(a) = p(a|X), as
Â(a) = Â_anchor(a) if a ∈ D, and Â(a) = Σ_{i∈D∪O} Â_anchor(i) * Â_{G_i}(a) otherwise.   (11)
We use the same method as in Eq. (6) and the cross-entropy loss to compute the final HOI probability prediction Ŷ and the corresponding loss L.
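To make Eqs. (8)-(11) concrete, here is a minimal PyTorch sketch of the hierarchical action prediction module. It assumes a fused feature F(X), a list of anchor indices D, and a binary mask per action group; single linear layers stand in for the sub-networks, and masking the conditional outputs to each group's members is our reading of the group restriction, so this should be taken as a sketch rather than the exact released architecture.

import torch
import torch.nn as nn

class HierarchicalActionHead(nn.Module):
    def __init__(self, feat_dim, num_actions, anchor_idx, group_masks):
        # anchor_idx: the |D| anchor action indices; group_masks: (|D|+1, N-|D|) binary
        # matrix whose row i selects the regular actions in group G_i ('other' is last)
        super().__init__()
        self.num_actions = num_actions
        self.anchor_idx = list(anchor_idx)
        self.regular_idx = [a for a in range(num_actions) if a not in self.anchor_idx]
        self.register_buffer('group_masks', group_masks.float())
        num_groups = group_masks.shape[0]                    # |D| + 1 groups
        self.f_anchor = nn.Linear(feat_dim, num_groups)      # softmax head, Eq. (8)
        self.f_groups = nn.ModuleList(
            [nn.Linear(feat_dim, len(self.regular_idx)) for _ in range(num_groups)])  # Eq. (9)

    def forward(self, feat):
        a_anchor = torch.softmax(self.f_anchor(feat), dim=-1)             # (B, |D|+1)
        a_groups = torch.stack(
            [torch.sigmoid(g(feat)) * self.group_masks[i]
             for i, g in enumerate(self.f_groups)], dim=1)                # (B, |D|+1, N-|D|)
        a_regular = torch.einsum('bg,bgr->br', a_anchor, a_groups)        # Eq. (10)
        # Eq. (11): anchors keep their softmax probability, regular actions get the marginal
        a_full = feat.new_zeros(feat.shape[0], self.num_actions)
        a_full[:, self.anchor_idx] = a_anchor[:, :len(self.anchor_idx)]
        a_full[:, self.regular_idx] = a_regular
        return a_full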
To demonstrate the effectiveness of the hierarchical learning, we introduce another two baselines, MultiTask and TwoStream, that lie between the Modified Baseline and our hierarchical learning. MultiTask only uses the anchor action classification as an additional multi-task element to the Modified Baseline. TwoStream separately predicts the anchor and the regular actions but without using the hierarchical modeling between anchor and regular actions.
ACP Projection for Knowledge Distillation
Knowledge distillation [20] was originally proposed to transfer knowledge from a large network to a smaller one. Recently, knowledge distillation has been utilized for various purposes such as life-long learning [34] or multi-task learning [28]. Hu et al. [22] extended this concept to distill prior knowledge in the form of logic rules into a deep neural network. Specifically, they propose a teacher-student framework to project the network prediction (student) to a rule-regularized subspace (teacher), where the process is termed distillation. The network is then updated to balance between emulating the teacher's output and predicting the true labels.
Our work fits this setting because the ACPs can act as a prior to distill. We first introduce an ACP Projection to map the action distribution to the ACP constraints. Then, we use the teacher-student framework [22] to distill knowledge from ACPs.
ACP Projection In ACP Projection, an arbitrary action distribution A = {p(i), i ∈ [0, N)} ∈ R^N is projected into the ACP-constrained probability space:
A* = project(A, C, C') ∈ R^N,   (12)
where A* is the projected action prediction. The projected probability for the j-th action, A*(j) = p(j*), is generated using the law of total probability:
p(j*) = (1/N) Σ_{i=1}^{N} ( p(i)·p(j|i) + p(i')·p(j|i') ) = (1/N) ( Σ_{i=1}^{N} p(i)·c_ij + Σ_{i=1}^{N} (1 − p(i))·c'_ij ).   (13)
In matrix form, the ACP projection is expressed as
project(A, C, C') = ( A C + (1 − A) C' ) / N.   (14)
In practice, we use the object-based action co-occurrence matrices C_o ∈ R^(N×N) and C'_o ∈ R^(N×N), which only count actions related to a specific object o. Fig. 2 shows examples of C_o with respect to object classes. Also, we give different weights α and β as hyper-parameters to the action co-occurrence matrix C_o and its complementary matrix C'_o, with the weights subject to α + β = 2, α > β. The projection function is then modified as
project(A, C_o, C'_o) = ( α A C_o + β (1 − A) C'_o ) / N.   (15)
This is done because we empirically found the co-occurrence relationships in C_o to generally be much stronger than the complementary actions in C'_o.
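The weighted projection of Eq. (15) reduces to two matrix products. The NumPy sketch below assumes C_o and C'_o were built per object as in Sec. 3.1; the alpha/beta values are placeholders satisfying α + β = 2, α > β, not the paper's tuned setting, and the final clip simply keeps the output a valid probability. The same function can serve both the PostProcess step at test time and the teacher targets of Eqs. (17)-(18) during training.

import numpy as np

def acp_project(A, C_o, C_o_prime, alpha=1.2, beta=0.8):
    # Eq. (15): project an action distribution A (shape (N,) or (B, N)) onto the
    # ACP-constrained space of the object class that C_o / C_o_prime belong to
    N = C_o.shape[0]
    A = np.atleast_2d(np.asarray(A, dtype=np.float64))
    projected = (alpha * A @ C_o + beta * (1.0 - A) @ C_o_prime) / N
    return np.clip(projected, 0.0, 1.0).squeeze()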
Teacher-Student Framework Now we can distill knowledge from the ACPs using ACP Projection in both the training and inference phases. There are three ways ACP Projection can be used: (1) Directly project the action prediction into the ACP-constrained probability space at the testing phase to obtain the final action output (denoted as PostProcess).
(2) Project the action prediction Â in the training phase and use the projected action as an additional learning target [22,61]. (3) Project the ground truth labels H^gt, O^gt, and A^gt to the ACP space in the training phase and use the projected action project(A^gt, C_{O^gt}, C'_{O^gt}) as an additional learning target. (The triplet ground truth labels H^gt, O^gt, and A^gt are straightforward to determine from the HOI ground truth label Y^gt.) The second and third items are incorporated into the teacher-student framework as terms in a new objective function (denoted as Distillation):
L_total = λ1 L(Ŷ, Y^gt) + λ2 L(Ŷ, Ŷ_projO) + λ3 L(Ŷ, Y^gt_projO),   (16)
where
Ŷ_projO = joint(Ĥ, Ô, project(Â, C_Ô, C'_Ô)) ∈ R^M,   (17)
Y^gt_projO = joint(H^gt, O^gt, project(A^gt, C_{O^gt}, C'_{O^gt})) ∈ R^M.   (18)
λ1, λ2, λ3 are balancing weights among the ground truth HOI term and the teacher objectives. The object type can be easily determined from the object probability predictions Ô or the ground truth label O^gt.
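Putting Eqs. (16)-(18) together, the training objective is a weighted sum of three binary cross-entropy terms over the HOI predictions. The sketch below assumes the two teacher targets have already been composed with joint(·) as in Eq. (6); the lambda values are placeholders rather than the paper's tuned weights, and detaching the teacher terms (so gradients flow only through the student prediction) is our design choice for the sketch.

import torch.nn.functional as F

def distillation_loss(y_hat, y_gt, y_proj_pred, y_proj_gt, lambdas=(1.0, 0.5, 0.5)):
    # Eq. (16): L_total = l1*L(Y_hat, Y_gt) + l2*L(Y_hat, Y_projO) + l3*L(Y_hat, Y_gt_projO)
    l1, l2, l3 = lambdas
    loss_gt = F.binary_cross_entropy(y_hat, y_gt)
    loss_teacher = F.binary_cross_entropy(y_hat, y_proj_pred.detach().clamp(0.0, 1.0))
    loss_teacher_gt = F.binary_cross_entropy(y_hat, y_proj_gt.detach().clamp(0.0, 1.0))
    return l1 * loss_gt + l2 * loss_teacher + l3 * loss_teacher_gt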
Experiments
The goal of the experiments is to show the effectiveness and generalizability of our method. In particular, we show that our method can consistently alleviate the long-tailed distribution problem in various setups by improving performance especially for rare HOI classes. In this section, we describe the experimental setups, competing methods and provide performance evaluations of HOI detection.
Datasets and Metrics
We evaluate the performance of our model on the two popular HOI detection benchmark datasets, HICO-Det [3] and V-COCO [16].
HICO-Det [3] For evaluation, HICO-Det uses the mean average precision (mAP) metric. Here, an HOI detection is counted as a true positive if the minimum of the human overlap IOU and object overlap IOU with the ground truth is greater than 0.5. Following [3], HOI detection performance is reported for three different HOI category sets: (1) all 600 HOI categories (Full), (2) 138 categories with fewer than 10 training samples (Rare), and (3) the remaining 462 categories with more than 10 training samples (Non-rare).
V-COCO (Verbs in COCO) is a subset of MS-COCO [36], which consists of 10,346 images (2,533, 2,867, 4,946 for training, validation and test, respectively) and 16,199 human instances. Each person is annotated with binary labels for 26 action classes. For the evaluation metric, same as for evaluation on HICO-Det, we use the AP score.
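Restating the matching rule in code: a detected human-object pair is a true positive only when the smaller of its two IoUs with a ground-truth pair exceeds 0.5. A small self-contained sketch, assuming boxes in (x1, y1, x2, y2) format; the function names are our own.

def iou(box_a, box_b):
    # intersection-over-union of two (x1, y1, x2, y2) boxes
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def is_hoi_true_positive(det_human, det_object, gt_human, gt_object, thresh=0.5):
    # an HOI detection matches a ground-truth pair when min(IoU_human, IoU_object) > thresh
    return min(iou(det_human, gt_human), iou(det_object, gt_object)) > thresh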
Quantitative Results
Ablation study In the ablations, the 'No-Frills' baseline network [18] we used is denoted as the Baseline. We first evaluate the effectiveness of the core design components in the proposed method, including (1) our simple modification to Baseline in Sec. 3.3, denoted as Modified Baseline; (2) the hierarchical learning technique introduced in Sec. 3.3, denoted as Hierarchical; and (3) the knowledge distillation technique presented in Eq. (16) of Sec. 3.4, denoted as Distillation. Table 1 gives a comprehensive evaluation for each component. We draw conclusions from it one-by-one.
First, our baseline network is strong. Our Modified Baseline achieves 19.09 mAP and surpasses the 'No-Frills' Baseline by 1.51 mAP (a relative 8.7% improvement), which is already competitive to the state-of-the-art result [41] and serves as a strong baseline.
Second, both hierarchical learning and knowledge distillation are effective. This is concluded by adding Hierarchical and Distillation to the Modified Baseline, respectively. Specifically, +Hierarchical improves the modified baseline by 0.94 mAP (a relative 4.9% improvement), and +Distillation (training with Eq. (16)) improves the modified baseline by 0.89 mAP (a relative 4.7% improvement). Including both obtains 1.16 mAP improvement (relatively better by 6.1%).
Third, the proposed ACP method achieves a new state-of-the-art. Our final result is generated by further using the PostProcess step (introduced in Sec. 3.4) that projects the final action prediction into the ACP-constrained space. Our method achieves 20.59 mAP (relative 7.9% improvement) for Full HOI categories, 15.92 mAP (relative 21.6% improvement) for Rare HOI categories, and 21.98 mAP (relative 5.2% improvement) for Non-rare HOI categories. Note that our method made especially significant improvements for Rare classes, which supports the claim that the proposed method can alleviate the long-tailed distribution problem of HOI detection datasets. This result sets the new state-of-the-art on both the HICO-Det and V-COCO datasets, as shown in Table 3 and Table 4.
In addition, the MultiTask, TwoStream, and our Hierarchical architecture are compared in Table 2. From MultiTask, it can be seen that the softmax-based anchor action classification already brings benefits to the Modified Baseline when used only in a multi-task learning manner. From TwoStream, separately modeling the anchor and the regular classes leads to slightly more improvement compared to MultiTask. Moreover, our Hierarchical architecture improves upon TwoStream by explicitly modeling the hierarchy between anchor and regular action predictions.
Comparison with the state-of-the-art We compare our method with the previous state-of-the-art techniques in Table 3. Among the methods included in this comparison are the benchmark model of the HICO-Det dataset [3], the baseline model that we modified from [18], and the current published state-of-the-art method [41]. As shown in Table 3, our final model (Ours) shows significant improvements over our baseline model on all metrics, and shows favorable performance against the current state-of-the-art model in terms of all the metrics. In particular, our model surpasses the current state-of-the-art model [41] by 1.19 mAP.
Results on V-COCO dataset To show the generalizability of our method, we also evaluate our method on the V-COCO dataset. Note that the exact same method is directly applied to both HICO-Det and V-COCO, including the co-occurrence matrix, anchor action selection, and the architecture design. We also constructed a co-occurrence matrix from V-COCO, but the matrix was sparse. Thus, to better take advantage of our idea, we instead use the co-occurrence matrix collected from the HICO-Det dataset to train on V-COCO. Table 4 shows the performance of our model (Ours, HICO-Det) compared to the recent state-of-the-art HOI detectors on the V-COCO dataset. In addition, we show results of our model with the co-occurrence matrix constructed from the V-COCO dataset (Ours, V-COCO). Both of these models show favorable performance on the V-COCO dataset against the previous state-of-the-art model [50].
Table 4. Results on the V-COCO dataset. For our method, we show results both for constructing the ACP from V-COCO and for using the ACP constructed from HICO-Det instead. Both of these models show favorable performance against the current state-of-the-art models.
Method                            AP_role
Gupta et al. [16] impl. by [13]   31.8
InteractNet [13]                  40.0
GPNN [43]                         44.0
iCAN [11]                         45.3
iHOI [53]                         45.79
with Knowledge [54]               45.9
Interactiveness Prior [33]        48.7
Contextual Attention [51]         47.3
RPNN [66]                         47.53
PMFNet [50]                       52.0
Our baseline                      48.91
ACP (Ours, V-COCO)                52.98
ACP (Ours, HICO-Det)              53.23
Results of the zero-shot setup on the HICO-Det dataset The zero-shot setting on the HICO-Det dataset is defined by Peyre et al . [41]. Specifically, we select a set of 25 HOI classes that we treat as unseen classes and exclude them and their labels in the training phase. However, we still let the model predict those 25 unseen classes in the test phase, which is known as the zero-shot problem. These HOI classes are randomly selected among the set of non-rare HOI classes. Since Peyre et al . did not provide which specific HOI classes they selected, we select the unseen HOI classes such that the performance (mAP) for these classes in our Modified Baseline model (introduced in Sec. 3.3) is similar to the corresponding Supervised baseline in [41]. In Table 5, we show results of our final model (ACP) and our modified baseline model compared to the corresponding setting reported in [41]. Our final model shows better performance (35.0 mAP) than Peyre et al . (28.6 mAP) by a large margin (relative 22.4% improvement). This result is remarkable in that our ACP model under the zero-shot setting even outperforms the supervised setting of our baseline model, indicating the power of the proposed ACP method to effectively leverage prior knowledge on action co-occurrences. Furthermore, the analogy transfer method proposed by Peyre et al . (denoted as aggregation) requires large-scale linguistic knowledge to train a word representation, whereas our model only requires the co-occurrence information of the labels in the dataset, which is much easier to obtain. We conclude that the proposed method is effective for the zero-shot problem while being easy to implement.
Additional Analysis
Score Improvement after ACP Projection We also show the HOI probability change from before to after applying the projection function project(·) on our model's HOI prediction (i.e., the effect of PostProcess introduced in Sec. 3.4) in Fig. 4. Leveraging the co-occurrence matrix C can not only increase the score for true classes (top) but also reduce the score for false classes (bottom). Note that this change can be achieved without any optimization process.
Fig. 4. The HOI probability before and after applying the projection function project(·) on our model's HOI prediction (PostProcess). Note that PostProcess can be done without any optimization. Examples shown: hold-potted_plant (49.98 → 90.07), watch-bird (62.41 → 97.90), hold-horse (75.16 → 42.95), walk-dog (60.04 → 24.53).
Performance on various sets with different number of training samples Finally, in Fig. 5, we show the relative mAP score improvements of our model compared to the baseline model by computing mAP on various sets of HOI classes that have different numbers of training samples. Our method shows positive performance improvements for all numbers of training samples. Also, there is a trend that HOI classes with a small number of training samples mostly show larger performance improvements. In particular, for HOI classes with the number of training samples between 0 and 9, our model achieves a 38.24% improvement compared to the baseline model. These results indicate that the proposed method is able to improve the performance of an HOI detector, especially for classes with few training samples.
Fig. 5. The relative mAP score improvements for various HOI sets with different numbers of training samples. Our method is able to improve the performance especially when the number of training samples is small (38.24% improvement for 0-9 samples).
Conclusion
We introduced a novel method to effectively train an HOI detector by leveraging prior knowledge on action co-occurrences in two different ways, via the architecture and via the loss function. Our proposed method consistently achieves favorable performance compared to the current state-of-the-art methods in various setups. Co-occurrence information not only is helpful for alleviating the long-tailed distribution problem but also can be easily obtained. A direction for future work is to construct and utilize co-occurrence priors for other relationshipbased vision tasks [24,26,37].
Table 1. Ablation study on the HICO-Det dataset. Our final model that includes both hierarchical architecture and distillation followed by post processing (Ours, ACP) shows the best performance among the baselines.
Method                                        Full   Rare   Non-rare
Baseline                                      17.56  13.23  18.85
Modified Baseline                             19.09  13.09  20.89
+Hierarchical only                            20.03  14.52  21.67
+Distillation only                            19.98  13.67  21.86
+Hierarchical+Distillation                    20.25  15.33  21.72
+Hierarchical+Distillation+Post (Ours, ACP)   20.59  15.92  21.98
Table 2. Performance of our models with different architectures for the action prediction module. Our model ((D) Hierarchical) shows the best performance among the design choices.
Method                  Full   Rare   Non-rare
(A) Modified Baseline   19.09  13.09  20.89
(B) MultiTask           19.54  13.93  21.22
(C) TwoStream           19.63  13.67  21.41
(D) Hierarchical        20.03  14.52  21.67
Table 3. Results on the HICO-Det dataset compared to the previous state-of-the-art methods. Our model shows favorable performance against the current state-of-the-art models on all the metrics.
Method                            Full   Rare   Non-rare
Shen et al. [45]                  6.46   4.24   7.12
HO-RCNN [3]                       7.81   5.37   8.54
Gupta et al. [16] impl. by [13]   9.09   7.02   9.71
InteractNet [13]                  9.94   7.16   10.77
GPNN [43]                         13.11  9.34   14.23
iCAN [11]                         14.84  10.45  16.15
iHOI [53]                         13.39  9.51   14.55
with Knowledge [54]               14.70  13.26  15.13
Interactiveness Prior [33]        17.22  13.51  18.32
Contextual Attention [51]         16.24  11.16  17.75
No-Frills [18]                    17.18  12.17  18.68
RPNN [66]                         17.35  12.78  18.71
PMFNet [50]                       17.46  15.65  18.00
Peyre et al. [41]                 19.40  15.40  20.75
Our baseline                      17.56  13.23  18.85
ACP (Ours)                        20.59  15.92  21.98
Table 5. Results on the zero-shot triplets of the HICO-Det dataset. Our final model shows better performance than Peyre et al. by a large margin. Note that our ACP model under the zero-shot setting even outperforms the supervised setting of our baseline.
Method                                         mAP
Peyre et al. [41] Supervised                   33.7
Peyre et al. [41] Zero-Shot                    24.1
Peyre et al. [41] Zero-Shot with Aggregation   28.6
Ours Supervised (Modified Baseline)            33.27
Ours Zero-Shot (Modified Baseline)             20.34
Ours Zero-Shot (ACP)                           34.95
Acknowledgements. This work was supported by the Institute for Information & Communications Technology Promotion (2017-0-01772) grant funded by the Korea government.
References
1. Bansal, A., Rambhatla, S.S., Shrivastava, A., Chellappa, R.: Detecting human-object interactions via functional generalization. In: AAAI Conference on Artificial Intelligence (AAAI) (2020)
2. Baradel, F., Neverova, N., Wolf, C., Mille, J., Mori, G.: Object level visual reasoning in videos. In: European Conference on Computer Vision (ECCV) (2018)
3. Chao, Y.W., Liu, Y., Liu, X., Zeng, H., Deng, J.: Learning to detect human-object interactions. In: IEEE Winter Conference on Applications of Computer Vision (WACV) (2018)
4. Chao, Y.W., Wang, Z., He, Y., Wang, J., Deng, J.: HICO: A benchmark for recognizing human-object interactions in images. In: IEEE International Conference on Computer Vision (ICCV) (2015)
5. Dai, B., Zhang, Y., Lin, D.: Detecting visual relationships with deep relational networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
6. Delaitre, V., Fouhey, D.F., Laptev, I., Sivic, J., Gupta, A., Efros, A.A.: Scene semantics from long-term observation of people. In: European Conference on Computer Vision (ECCV) (2012)
7. Delaitre, V., Laptev, I., Sivic, J.: Recognizing human actions in still images: a study of bag-of-features and part-based representations. In: British Machine Vision Conference (BMVC) (2010)
8. Delaitre, V., Sivic, J., Laptev, I.: Learning person-object interactions for action recognition in still images. In: Advances in Neural Information Processing Systems (NIPS) (2011)
9. Deng, J., Ding, N., Jia, Y., Frome, A., Murphy, K., Bengio, S., Li, Y., Neven, H., Adam, H.: Large-scale object classification using label relation graphs. In: European Conference on Computer Vision (ECCV) (2014)
10. Fu, J., Zheng, H., Mei, T.: Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
11. Gao, C., Zou, Y., Huang, J.B.: iCAN: Instance-centric attention network for human-object interaction detection. In: British Machine Vision Conference (BMVC) (2018)
12. Gibson, J.J.: The ecological approach to visual perception: classic edition. Psychology Press (2014)
13. Gkioxari, G., Girshick, R., Dollár, P., He, K.: Detecting and recognizing human-object interactions. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
14. Grabner, H., Gall, J., Van Gool, L.: What makes a chair a chair? In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2011)
15. Gupta, A., Davis, L.S.: Objects in action: An approach for combining action understanding and object perception. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2007)
16. Gupta, S., Malik, J.: Visual semantic role labeling. arXiv preprint arXiv:1505.04474 (2015)
17. Gupta, T., Schwing, A., Hoiem, D.: No-Frills Pytorch Github. https://github.com/BigRedT/no_frills_hoi_det
18. Gupta, T., Schwing, A., Hoiem, D.: No-frills human-object interaction detection: Factorization, layout encodings, and training techniques. In: IEEE International Conference on Computer Vision (ICCV) (2019)
19. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
20. Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015)
21. Horiguchi, S., Ikami, D., Aizawa, K.: Significance of softmax-based features in comparison to distance metric learning-based features. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) (2019)
22. Hu, Z., Ma, X., Liu, Z., Hovy, E., Xing, E.: Harnessing deep neural networks with logic rules. In: Annual Meeting of the Association for Computational Linguistics (ACL) (2016)
23. Hwang, S.J., Sha, F., Grauman, K.: Sharing features between objects and their attributes. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2011)
24. Johnson, J., Hariharan, B., van der Maaten, L., Fei-Fei, L., Lawrence Zitnick, C., Girshick, R.: CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
25. Kato, K., Li, Y., Gupta, A.: Compositional learning for human object interaction. In: European Conference on Computer Vision (ECCV) (2018)
26. Kim, D.J., Choi, J., Oh, T.H., Kweon, I.S.: Dense relational captioning: Triple-stream networks for relationship-based captioning. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
27. Kim, D.J., Choi, J., Oh, T.H., Kweon, I.S.: Image captioning with very scarce supervised data: Adversarial semi-supervised learning approach. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (2019)
28. Kim, D.J., Choi, J., Oh, T.H., Yoon, Y., Kweon, I.S.: Disjoint multi-task learning between heterogeneous human-centric tasks. In: IEEE Winter Conference on Applications of Computer Vision (WACV) (2018)
29. Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. In: International Conference on Learning Representations (ICLR) (2017)
30. Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.J., Shamma, D.A., et al.: Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision (IJCV) 123(1), 32-73 (2017)
31. Li, Y., Ouyang, W., Wang, X.: VIP-CNN: A visual phrase reasoning convolutional neural network for visual relationship detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
32. Li, Y.L., Liu, X., Lu, H., Wang, S., Liu, J., Li, J., Lu, C.: Detailed 2D-3D joint representation for human-object interaction. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2020)
33. Li, Y.L., Zhou, S., Huang, X., Xu, L., Ma, Z., Fang, H.S., Wang, Y., Lu, C.: Transferable interactiveness knowledge for human-object interaction detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
34. Li, Z., Hoiem, D.: Learning without forgetting. In: European Conference on Computer Vision (ECCV) (2016)
35. Liao, Y., Liu, S., Wang, F., Chen, Y., Feng, J.: PPDM: Parallel point detection and matching for real-time human-object interaction detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2020)
36. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: European Conference on Computer Vision (ECCV) (2014)
37. Lu, C., Krishna, R., Bernstein, M., Fei-Fei, L.: Visual relationship detection with language priors. In: European Conference on Computer Vision (ECCV) (2016)
38. Marszalek, M., Schmid, C.: Semantic hierarchies for visual object recognition. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2007)
39. Miller, G.A.: WordNet: a lexical database for English. Communications of the ACM 38(11), 39-41 (1995)
40. Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Conference on Empirical Methods in Natural Language Processing (EMNLP) (2014)
41. Peyre, J., Laptev, I., Schmid, C., Sivic, J.: Detecting unseen visual relations using analogies. In: IEEE International Conference on Computer Vision (ICCV) (2019)
42. Plummer, B.A., Mallya, A., Cervantes, C.M., Hockenmaier, J., Lazebnik, S.: Phrase localization and visual relationship detection with comprehensive linguistic cues. In: IEEE International Conference on Computer Vision (ICCV) (2017)
43. Qi, S., Wang, W., Jia, B., Shen, J., Zhu, S.C.: Learning human-object interactions by graph parsing neural networks. In: European Conference on Computer Vision (ECCV) (2018)
44. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: Towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems (NIPS) (2015)
45. Shen, L., Yeung, S., Hoffman, J., Mori, G., Fei-Fei, L.: Scaling human-object interaction recognition through zero-shot learning. In: IEEE Winter Conference on Applications of Computer Vision (WACV) (2018)
46. Stark, L., Bowyer, K.: Achieving generalized object recognition through reasoning about association of function to structure. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 13(10), 1097-1104 (1991)
47. Sun, X., Li, C., Lin, S.: Explicit spatiotemporal joint relation learning for tracking human pose. In: Proceedings of the IEEE International Conference on Computer Vision Workshops (2019)
48. Sun, X., Wei, Y., Liang, S., Tang, X., Sun, J.: Cascaded hand pose regression. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015)
49. Ulutan, O., Iftekhar, A., Manjunath, B.: VSGNet: Spatial attention network for detecting human object interactions using graph convolutions. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2020)
50. Wan, B., Zhou, D., Liu, Y., Li, R., He, X.: Pose-aware multi-level feature network for human object interaction detection. In: IEEE International Conference on Computer Vision (ICCV) (2019)
51. Wang, T., Anwer, R.M., Khan, M.H., Khan, F.S., Pang, Y., Shao, L., Laaksonen, J.: Deep contextual attention for human-object interaction detection. In: IEEE International Conference on Computer Vision (ICCV) (2019)
52. Wang, T., Yang, T., Danelljan, M., Khan, F.S., Zhang, X., Sun, J.: Learning human-object interaction detection using interaction points. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2020)
53. Xu, B., Li, J., Wong, Y., Zhao, Q., Kankanhalli, M.S.: Interact as you intend: Intention-driven human-object interaction detection. IEEE Transactions on Multimedia (2019)
54. Xu, B., Wong, Y., Li, J., Zhao, Q., Kankanhalli, M.S.: Learning to detect human-object interactions with knowledge. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
55. Yan, Z., Zhang, H., Piramuthu, R., Jagadeesh, V., DeCoste, D., Di, W., Yu, Y.: HD-CNN: Hierarchical deep convolutional neural networks for large scale visual recognition. In: IEEE International Conference on Computer Vision (ICCV) (2015)
Shuffle-then-assemble: learning object-agnostic visual relationship features. X Yang, H Zhang, J Cai, European Conference on Computer Vision (ECCV). Yang, X., Zhang, H., Cai, J.: Shuffle-then-assemble: learning object-agnostic visual relationship features. In: European Conference on Computer Vision (ECCV) (2018)
Grouplet: A structured image representation for recognizing human and object interactions. B Yao, L Fei-Fei, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Yao, B., Fei-Fei, L.: Grouplet: A structured image representation for recognizing human and object interactions. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2010)
Modeling mutual context of object and human pose in humanobject interaction activities. B Yao, L Fei-Fei, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Yao, B., Fei-Fei, L.: Modeling mutual context of object and human pose in human- object interaction activities. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2010)
Human action recognition by learning bases of action attributes and parts. B Yao, X Jiang, A Khosla, A L Lin, L Guibas, L Fei-Fei, IEEE International Conference on Computer Vision (ICCV). Yao, B., Jiang, X., Khosla, A., Lin, A.L., Guibas, L., Fei-Fei, L.: Human action recognition by learning bases of action attributes and parts. In: IEEE International Conference on Computer Vision (ICCV) (2011)
Zoom-net: Mining deep feature interactions for visual relationship recognition. G Yin, L Sheng, B Liu, N Yu, X Wang, J Shao, C Loy, European Conference on Computer Vision (ECCV). Yin, G., Sheng, L., Liu, B., Yu, N., Wang, X., Shao, J., Change Loy, C.: Zoom-net: Mining deep feature interactions for visual relationship recognition. In: European Conference on Computer Vision (ECCV) (2018)
Visual relationship detection with internal and external linguistic knowledge distillation. R Yu, A Li, V I Morariu, L S Davis, IEEE International Conference on Computer Vision (ICCV. Yu, R., Li, A., Morariu, V.I., Davis, L.S.: Visual relationship detection with internal and external linguistic knowledge distillation. In: IEEE International Conference on Computer Vision (ICCV) (2017)
On exploring undetermined relationships for visual relationship detection. Y Zhan, J Yu, T Yu, D Tao, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Zhan, Y., Yu, J., Yu, T., Tao, D.: On exploring undetermined relationships for visual relationship detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
Co-occurrent features in semantic segmentation. H Zhang, H Zhang, C Wang, J Xie, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Zhang, H., Zhang, H., Wang, C., Xie, J.: Co-occurrent features in semantic seg- mentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
Visual translation embedding network for visual relation detection. H Zhang, Z Kyaw, S F Chang, T S Chua, IEEE Conference on Computer Vision and Pattern Recognition (CVPR. Zhang, H., Kyaw, Z., Chang, S.F., Chua, T.S.: Visual translation embedding net- work for visual relation detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
Relationship proposal networks. J Zhang, M Elhoseiny, S Cohen, W Chang, A Elgammal, IEEE Conference on Computer Vision and Pattern Recognition (CVPR. Zhang, J., Elhoseiny, M., Cohen, S., Chang, W., Elgammal, A.: Relationship pro- posal networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
Relation parsing neural network for human-object interaction detection. P Zhou, M Chi, IEEE International Conference on Computer Vision (ICCV. Zhou, P., Chi, M.: Relation parsing neural network for human-object interaction detection. In: IEEE International Conference on Computer Vision (ICCV) (2019)
Towards context-aware interaction recognition for visual relationship detection. B Zhuang, L Liu, C Shen, I Reid, IEEE Conference on Computer Vision and Pattern Recognition (CVPR. Zhuang, B., Liu, L., Shen, C., Reid, I.: Towards context-aware interaction recog- nition for visual relationship detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
Hcvrd: a benchmark for large-scale human-centered visual relationship detection. B Zhuang, Q Wu, C Shen, I Reid, A Van Den Hengel, AAAI Conference on Artificial Intelligence (AAAI). Zhuang, B., Wu, Q., Shen, C., Reid, I., van den Hengel, A.: Hcvrd: a benchmark for large-scale human-centered visual relationship detection. In: AAAI Conference on Artificial Intelligence (AAAI) (2018)
| []
|
[
"Dynamics of a mesoscopic nuclear spin ensemble interacting with an optically driven electron spin",
"Dynamics of a mesoscopic nuclear spin ensemble interacting with an optically driven electron spin"
]
| [
"M J Stanley \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n",
"C Matthiesen \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n",
"J Hansom \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n",
"C Le Gall \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n",
"C H H Schulte \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n",
"E Clarke \nEPSRC National Centre for III-V Technologies\nUniversity of Sheffield\nS1 3JDSheffieldUK\n",
"M Atatüre \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n"
]
| [
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"EPSRC National Centre for III-V Technologies\nUniversity of Sheffield\nS1 3JDSheffieldUK",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom"
]
| []
| The ability to discriminate between simultaneously occurring noise sources in the local environment of semiconductor InGaAs quantum dots, such as electric and magnetic field fluctuations, is key to understanding their respective dynamics and their effect on quantum dot coherence properties. We present a discriminatory approach to all-optical sensing based on two-color resonance fluorescence of a quantum dot charged with a single electron. Our measurements show that local magnetic field fluctuations due to nuclear spins in the absence of an external magnetic field are described by two correlation times, both in the microsecond regime. The nuclear spin bath dynamics show a strong dependence on the strength of resonant probing, with correlation times decreasing by a factor of four as the optical transition is saturated. We interpret the behavior as motional averaging of both the Knight field of the resident electron spin and the hyperfine-mediated nuclear spin-spin interaction due to optically-induced electron spin flips. | 10.1103/physrevb.90.195305 | [
"https://arxiv.org/pdf/1408.6437v2.pdf"
]
| 62,816,499 | 1408.6437 | e364268803b5f99bd51e90b8fb24e1a1519320ed |
Dynamics of a mesoscopic nuclear spin ensemble interacting with an optically driven electron spin
(Dated: August 28, 2014)
M J Stanley
Cavendish Laboratory
University of Cambridge
JJ Thomson AvenueCB3 0HECambridgeUnited Kingdom
C Matthiesen
Cavendish Laboratory
University of Cambridge
JJ Thomson AvenueCB3 0HECambridgeUnited Kingdom
J Hansom
Cavendish Laboratory
University of Cambridge
JJ Thomson AvenueCB3 0HECambridgeUnited Kingdom
C Le Gall
Cavendish Laboratory
University of Cambridge
JJ Thomson AvenueCB3 0HECambridgeUnited Kingdom
C H H Schulte
Cavendish Laboratory
University of Cambridge
JJ Thomson AvenueCB3 0HECambridgeUnited Kingdom
E Clarke
EPSRC National Centre for III-V Technologies
University of Sheffield
S1 3JDSheffieldUK
M Atatüre
Cavendish Laboratory
University of Cambridge
JJ Thomson AvenueCB3 0HECambridgeUnited Kingdom
The ability to discriminate between simultaneously occurring noise sources in the local environment of semiconductor InGaAs quantum dots, such as electric and magnetic field fluctuations, is key to understanding their respective dynamics and their effect on quantum dot coherence properties. We present a discriminatory approach to all-optical sensing based on two-color resonance fluorescence of a quantum dot charged with a single electron. Our measurements show that local magnetic field fluctuations due to nuclear spins in the absence of an external magnetic field are described by two correlation times, both in the microsecond regime. The nuclear spin bath dynamics show a strong dependence on the strength of resonant probing, with correlation times decreasing by a factor of four as the optical transition is saturated. We interpret the behavior as motional averaging of both the Knight field of the resident electron spin and the hyperfine-mediated nuclear spin-spin interaction due to optically-induced electron spin flips.
I. INTRODUCTION
Semiconductor quantum dots (QDs) allow deterministic trapping and manipulation of single charge and spin carriers in a solid-state system 1 . Carrier wavefunctions are spread over the 10 4 -10 5 atoms that define the QD, giving rise to a large oscillator strength and the permanent dipole moment of the excited state 2 . Interactions of the QD ground and optically excited states with electric and magnetic fields are manifest in the Zeeman splitting of spin states and DC Stark shifts of transition energies [2][3][4] . While the sensitivity to ambient fields can be exploited for metrology applications, for instance electrometry 5 , or optomechanical coupling 6-8 , QDs constantly sense fields arising from the interaction with uncontrolled charges of the environment and the QD's bath of nuclear spins. The resulting inhomogeneous dephasing of a confined spin and the reduction of photon quality are particularly detrimental to application in emergent quantum technologies, where QD spins and photons have shown promise as qubit candidates 9,10 . Hence, with regards to applications there is great interest in identifying and characterizing environmental fluctuation processes. Recent work has focused on both the optical signatures and the microscopic origins of electric field fluctuations which have been observed on timescales ranging from nanoseconds to seconds [11][12][13][14] . While quantifying its effect is useful in assessing the QD device quality, the presence of charge noise depends sensitively upon material growth and device fabrication conditions rather than being an inherent property of QDs.
The interaction of a single resident electron spin with the bath of N ∼ 10 4 -10 5 nuclear spins, however, exposes a multitude of interesting effects [15][16][17] that are inherent to the photophysics of QDs. The contact hyperfine interaction can be described as an effective magnetic field acting on the electron spin where the fluctuation mag-nitude scales as 1/ √ N . This instance of the 'central spin problem' has been widely studied theoretically [18][19][20][21][22] and the resulting electron spin relaxation is expected to comprise three components with distinct dynamics: electron spin precession in the effective magnetic field of the nuclei (Overhauser field), nuclear spin precession in the effective magnetic field of the electron spin (Knight field), and nuclear spin dipolar interactions. The inhomogeneous electron spin dephasing occurring over a few nanoseconds as a consequence of precession in the slowly changing nuclear Overhauser field is well understood and measured [23][24][25] . Surprisingly, first experimental data on the timescales of the nuclear spin bath dynamics in QDs have only recently emerged, reporting in one case correlation times of 100 µs (5.5 µs) for a resonantly driven negatively charged (neutral) QD 26 , and in the other case nuclear coherence times of a few milliseconds for a neutral QD 27 . The dynamics, assigned to nuclear dipolar coupling in both reports, were obtained in the absence of an external magnetic field in the former and at fields of a few Tesla in the latter case. Studying nuclear spin bath dynamics for a driven QD is complicated by the simultaneously occurring electric field fluctuations which mask the optical signatures of the nuclear bath evolution. Recently, the advantage of resonant excitation for sensitive measurements was demonstrated by Kuhlmann et al., who identified two features in the power spectrum of QD resonance fluorescence attributed to electric and magnetic field noise 26 . However, a reliable method to isolate the effects of nuclear spin fluctuations, which would allow a direct study of their dynamics, is still missing.
In this work we find fingerprints of the environmental fluctuations in the noise of the QD resonance fluorescence intensity, measured via its autocorrelation function. The paper is organized as follows: In section II we discuss how electric and magnetic field fluctuations affect the QD's optical properties, focusing on the inherent fluctuations of the solid-state environment and their distribution functions. The experimental method is introduced in section III where we use the intensity autocorrelation function to characterize the fluctuations for a single QD. Taking advantage of the excitonic transition's linear response to electric fields we use two-color excitation to isolate noise in the resonance fluorescence of a negatively charged QD solely due to magnetic field fluctuations. Consequently, we unambiguously identify two timescales associated with nuclear spin dynamics. Both are shorter than the ∼ 100 µs expected for nuclear spin bath relaxation via a dipolar interaction in bulk material, but longer than ∼ 100 ns, which is predicted for electron spin dephasing as a consequence of the nuclei precessing in the Knight field 18 . In section IV we find a strong dependence of these timescales on the optical driving strength. We discuss the relevance of optically induced electron spin flips to nuclear spin dynamics, providing a tentative explanation for our observations. Finally, we extract the time-averaged magnitudes of both electric and magnetic field fluctuations for several QDs in section V. We show that the time-averaged fluctuations are consistent with Gaussian electric and nuclear field distribution functions. The standard deviation of those distributions, together with the timescales, fully quantifies the QD's local environment.
II. ENVIRONMENT NOISE SOURCES
Figure 1 illustrates the effect of a fluctuating environment on the intensity of the QD's fluorescence. The X 1− transition serves as a fluctuation sensor for both electric (left column) and magnetic (right column) fields. In its ground state the QD contains a single electron and an additional electron-hole pair (exciton) is added in the excited state. In Figure 1(a) we consider local charge traps and impurities with fluctuating occupancy as sources of a noisy electric field. The large permanent dipole of the QD exciton renders the transition frequency sensitive to the component in the QD growth-direction of this field [E z (t)], leading to a time-dependent linear Stark shift. The local electric field strength is reflected in the instantaneous resonance frequency of the QD transition. In the limit of many contributing electric field sources observed over a long time period a Gaussian distribution is a good description for the electric field probability distribution 14 . Figure 1(b) depicts the resonance fluorescence intensity 'jitter' in the QD absorption lineshape for such a distribution function. A measurement of the absorption lineshape that is slow compared to the timescale of fluctuations would yield a Voigt profile in this case. The amplitude of resonance fluorescence fluctuations due to electric field noise corresponds to the variance of the fluorescence in the jitter plot and we highlight its detuning dependence here (see red arrows). In the bottom panel [Fig. 1(c)] the ratio of fluorescence variance to the squared fluorescence mean is plotted as a function of detuning, which represents the normalized fluorescence fluctuation amplitude.
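To make the electric-field jitter picture concrete, the short Python sketch below (all parameter values are illustrative assumptions, not numbers taken from the measurements) draws Gaussian-distributed Stark shifts, evaluates a power-broadened Lorentzian absorption at a fixed laser detuning, and prints the normalized fluctuation amplitude, i.e. the variance of the instantaneous absorption divided by its squared mean, illustrating the detuning dependence sketched in Fig. 1(c).

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the experiment)
gamma = 270e6            # natural linewidth (Hz)
s = 0.2                  # saturation parameter
sigma_E = 80e6           # std. dev. of the Stark-shift jitter, expressed in Hz

rng = np.random.default_rng(0)
stark_shifts = rng.normal(0.0, sigma_E, size=20000)   # frozen electric-field samples

def absorption(detuning):
    """Steady-state excited-state population of a driven two-level system."""
    return 0.5 * s / (1.0 + s + (2.0 * detuning / gamma) ** 2)

for laser_detuning in np.array([0.0, 0.5, 1.0, 2.0]) * gamma:
    inst = absorption(laser_detuning - stark_shifts)   # instantaneous absorption values
    noise = inst.var() / inst.mean() ** 2              # normalized fluctuation amplitude
    print(f"detuning = {laser_detuning / gamma:3.1f} Gamma -> noise amplitude {noise:.3f}")
```

As expected, the amplitude is suppressed on resonance, peaks at intermediate detuning, and falls off again at large detuning.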
The effect of interaction with the nuclear spin bath is described in Fig. 1(d): the nuclear spins of Indium (In), Gallium (Ga) and Arsenic (As) interact primarily with the electron spin through the contact hyperfine interaction. The cumulative effect of the hyperfine interaction with the nuclear spin at each lattice site can be described by a single magnetic field, the Overhauser field 16 . Dynamics of this magnetic field modify the electronic energy levels [cf. four-level system in Fig. 1(d)] and, accordingly, fluorescence rates under resonant excitation at fixed frequency. We calculate the absorption lineshape for a par-ticular Overhauser field magnitude and orientation from the optical Bloch equations for the four-level system (see Appendix A). The absorption jitter plot for the fluctuating nuclear spin bath [ Fig. 1 (e)] is obtained by sampling over an isotropic Gaussian Overhauser field distribution function 16,28 . We note that the model predicts a time-averaged Lorentzian absorption lineshape, where the linewidth at saturation power is a factor of 1.5 larger than the power-broadened linewidth of an ideal two-level system. An Overhauser field of σ B = 25 mT standard deviation and an excited state lifetime of T 1 = 700 ps is assumed in this example calculation (see Appendix A).
The calculated fluorescence fluctuation amplitudes displayed in Figs. 1 (c), (f) can be recovered directly in experiments as bunching amplitudes in the autocorrelation of the resonance fluorescence. Measurement techniques and results for resonance fluorescence fluctuation spectroscopy (RFFS) will be introduced in the following section. We note that under electric field variation, the fluorescence fluctuation amplitude is reduced on resonance in comparison to excitation at an intermediate detuning, where the amplitude peaks and then decays as detuning is increased. In contrast, variations in the Overhauser field produce the largest fluorescence fluctuations at zero detuning and the sensitivity is clearly reduced at finite detuning. We employ the contrasting detuning dependence, pointed out in Ref. 26 before, in the following section for a qualitative interpretation of fluctuation amplitudes and again in section V to obtain numerical values for electric and magnetic field noise.
III. RESONANCE FLUORESCENCE FLUCTUATION SPECTROSCOPY
InGaAs QDs in a Schottky diode device are located in a liquid Helium bath cryostat at 4 K and at 0 T external magnetic field. We use frequency and power-stabilized lasers to resonantly excite single QDs in continuous-wave mode and linear polarization. QD resonance fluorescence is collected by means of a confocal microscope in a dark-field configuration 29 and detected by a single photon counting avalanche photodiode (APD). Photon arrival times are registered by a time-to-digital converter with a timing resolution of 81 ps and rebinned in postprocessing. We present results for two QDs, labeled A and B, in the main text. Additional data for a third QD (QD C) is presented in Appendix B. Figure 2 displays a set of RFFS measurements. Three example photon detection time traces from QD A are displayed in Fig. 2(a) for excitation on resonance and detunings of ∆ = 310 MHz and ∆ = 720 MHz, where the natural linewidth of the transition, Γ, is 270 MHz in linear frequency. The excitation power corresponds to a fifth of the saturation power, that is, s = 0.2, where s = 2(Ω/Γ)^2 and Ω is the Rabi frequency. The standard deviation expected due to Poissonian shot noise is indicated by the thickness of white semi-transparent stripes. To extract fluorescence dynamics over a wide range of timescales we use the intensity autocorrelation function g^(2)(τ), where the variable τ specifies the time delay between photodetections. Obtaining and analyzing the autocorrelation of a fluorescence signal is a well-known spectroscopy technique 30,31 , for example used to quantify molecular diffusion dynamics 32 . Here, we apply this technique to single QD resonance fluorescence, where fluctuations are instead due to the solid-state environment. In the autocorrelation function the shot noise limit corresponds to g^(2)(τ) = 1, while super-Poissonian correlation between photons will result in bunching, that is, g^(2)(τ) > 1. Figure 2(b) displays the autocorrelations corresponding to the time traces of panel 2(a) acquired for ∼10^7 time-tagged detection events for each time trace, or ∼ 100 − 200 s, depending on the laser detuning. Systematic errors, mainly due to APD afterpulsing, were accounted for by taking reference measurements of laser photon streams at comparable count rates, and subtraction of the corresponding autocorrelation from the QD resonance fluorescence autocorrelation (see Appendix B).
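As a sketch of how such an autocorrelation can be obtained from a binned photon-count record (this is not the authors' analysis code; the bin contents and the synthetic example below are assumptions), one can write:

```python
import numpy as np

def g2_from_counts(counts, max_lag):
    """Normalized intensity autocorrelation g2(lag) of a binned photon-count record."""
    counts = np.asarray(counts, dtype=float)
    g2 = np.empty(max_lag + 1)
    for lag in range(max_lag + 1):
        a = counts[: len(counts) - lag]
        b = counts[lag:]
        g2[lag] = (a * b).mean() / (a.mean() * b.mean())
    return g2

# Synthetic example: Poissonian shot noise on top of a slowly fluctuating rate
# produces bunching, g2 > 1, at short lags.
rng = np.random.default_rng(1)
slow_rate = 50.0 * (1.0 + 0.2 * np.repeat(rng.standard_normal(2000), 50))
counts = rng.poisson(np.clip(slow_rate, 0, None))
print(g2_from_counts(counts, 5))
```

For measured data, the afterpulsing-affected laser reference would be processed with the same routine and subtracted, as described in Appendix B.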
Fits of the experimental autocorrelations to a sum of exponential decays, shown as red lines in Fig. 2(b), reveal a set of distinct correlation times. In the case of telegraph noise a single exponential decay is expected 33 and a set of correlation times indicates several fluctuation processes are present: for QD A we resolve six timescales ranging from about 10 µs to 1 s in the fit. Detailed data on timescales and amplitudes of the individual correlation decays are presented in Fig. 2(c). Amplitudes (left column) corresponding to correlation times (right column) of ∼ 1 ms and longer are clearly reduced on resonance. In contrast, the shortest correlation time amplitudes are maximal on resonance. We compare this detuning dependence with the discussion of noise amplitudes around Fig. 1, and discern that electric field fluctuations make the dominant contribution to noise on timescales of 1 ms and longer. We label these timescales τ 3 − τ 6 . In contrast, the detuning dependence of the τ 1 process points to magnetic field fluctuations as source of noise. However, the large number of noise sources present for this QD can give rise to dependencies between fit parameters and make a direct identification challenging. The correlation amplitudes corresponding to τ 2 (∼ 100 µs) highlight the ambiguity in this approach: the detuning dependence does not fit into a single category, suggesting contributions from both noise sources. Similarly, we cannot exclude the presence of electric field noise in the fastest decay, at 10 µs, from this measurement while nuclear spin bath fluctuations could also be contributing to longer correlation decays.
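The extraction of correlation times can be reproduced, for example, with a least-squares fit of the bunching part of the autocorrelation to a sum of exponential decays; the snippet below is a minimal sketch using scipy, with synthetic data and only two decay components (the number of components and the starting values are assumptions that would be adapted to the measured curve).

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_exp(tau, *params):
    """Sum of exponential decays; params = (A1, tau1, A2, tau2, ...)."""
    amps, taus = params[0::2], params[1::2]
    return sum(A * np.exp(-tau / t) for A, t in zip(amps, taus))

# tau_data (s) and bunching_data = g2(tau) - 1 would come from the measurement.
tau_data = np.logspace(-6, 0, 200)
bunching_data = multi_exp(tau_data, 0.05, 1e-5, 0.03, 1e-2) \
                + 0.002 * np.random.default_rng(2).standard_normal(tau_data.size)

p0 = [0.1, 1e-5, 0.1, 1e-2]   # initial (amplitude, correlation time) guesses
popt, _ = curve_fit(multi_exp, tau_data, bunching_data, p0=p0)
print("amplitudes:", popt[0::2])
print("correlation times (s):", popt[1::2])
```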
In order to discriminate the noise sources unambiguously we isolate magnetic field noise in the QD fluorescence using two-color excitation. The concept is illustrated in Fig. 3 (a) where the effects of magnetic (top) and electric (bottom) field changes on fluorescence intensity are considered separately. Two lasers of equal power drive the QD transition at equal and opposite detuning from resonance. Linear Stark shifts due to changes in the ambient electric field cause opposite changes in intensity of resonance fluorescence at each frequency. Magnetic field noise, however, changes the splitting of the resonance and affects absorption equally at both laser frequencies (cf. white arrows). Figure 3 (b) presents a resonance fluorescence time trace for excitation with a single laser (top) at a detuning ∆ ∼ 250 MHz, which yields half the fluorescence intensity compared to excitation on resonance. The bottom time trace corresponds to excitation with two lasers at detunings ±∆. The total laser power incident on the sample is identical in both cases and corresponds to s ≈ 0.1. The autocorrelation [cf. Fig. 3 (c)] for the two-laser excitation demonstrates a reduction of slow (τ > 1 ms) decay processes by up to two orders of magnitude in amplitude, while noise with short correlation times remains. The suppression of electric field-related noise in the fluorescence allows us to probe nuclear field fluctuations with greater clarity, revealing two distinct decays of τ N1 = 6 µs and τ N2 = 40 µs with similar amplitudes, where the subscript N specifies the origin as nuclear spin noise. The next fastest correlation decay happens on a 1.5 ms timescale and is reduced by a factor of 50 in comparison to single laser excitation, consistent with residual electric field fluctuations. The correlation times measured here can be compared to the established model of nuclear spin dynamics in bulk GaAs and their effect on electron spin dephasing 18 : precession of nuclear spins in the Knight field of the electron is expected to cause electron spin relaxation on a timescale of a few hundred nanoseconds, while dipolar interactions between nuclear spins change the Overhauser field on a 100 µs timescale. However, strain in InGaAs QDs strongly modifies the dipolar interactions 27,34,35 and experimental considerations such as the details of the sample structure and the impact of optical excitation must be taken into account. Here we pursue the latter consideration, where we associate the fastest timescale, τ N1 , with an effective Knight field precession time, and the second timescale τ N2 with an effective nuclear spin-spin interaction time.
IV. NUCLEAR SPIN CORRELATION TIMES FOR A DRIVEN QUANTUM DOT
Having established two timescales for magnetic field noise in the QD fluorescence we examine their dependence on external parameters. To obtain access to the detuning dependence we use single-laser excitation. The sensitivity to nuclear spin fluctuations is increased by selecting a different QD (QD B) on the same sample that has a smaller Stark coefficient, reducing the effect of electric field noise. It is also important to consider the excited state lifetime, as a short lifetime translates to a broad natural linewidth Γ = (2πT 1 ) −1 and consequently a smaller sensitivity to noise in general. For QD A we measure T 1 = (584 ± 10) ps, however for QD B we measure T 1 = (693 ± 5) ps, yielding a greater overall sensitivity to noise. Figure 4 (a) displays four autocorrelations for excitation of QD B close to resonance. The excitation power is varied from a tenth to twice the saturation power. The bunching amplitude of the autocorrelation function decreases markedly as a consequence of power broadening. This effect is analogous to the dependence of noise sensitivity on the natural linewidth: As the excitation power is increased the inherent broadening of the absorption reduces sensitivity to all fluctuations and consequently noise amplitudes. More surprisingly, however, the dynamics at short time delays, which we identified to be due to nuclear spin fluctuations, slow down with increasing power. Figure 4 (b) summarizes the power dependence of the fast timescales for QD B (dark filled circles). For reference we provide an additional set of data for QD C (light filled squares). Taking QD B data in particular, the correlation times increase from τ N1 = (2.5±0.5) µs at s = 0.09 to τ N1 = (11±1) µs at s = 1.8. Similarly, τ N2 increases from (13±2) µs to (47±8) µs in the same range. In fact, the ratio of correlation times is approximately constant in our measurements, giving τ N2 /τ N1 ∼ 4.5 in this case. QD C shows qualitatively the same behavior. We provide a tentative explanation here by considering electron spin flips due to optical excitation and their effect on the Knight field, and the hyperfine-mediated nuclear spin-spin interaction. First, we note that the Knight field is present while the QD is in the ground state. The field is negligible in the excited state as the electrons form a spin singlet and the heavy hole has a much weaker hyperfine interaction. Consequently, the electron's de-phasing rate γ N1 = 1/τ N1 should scale with the ground state population, and decrease in line with the optical saturation to half its maximum value at high probing power. Furthermore, the Knight field is affected by the electron spin lifetime. Electron spin flip rates γ sp comparable to, or faster than, the nuclear precession rate in the electron's Knight field result in a motional averaging and suppress the effect of the Knight field. Considering contributions to the electron spin flip rate in our experiments, we must include spin-flip co-tunneling processes, measured to be ∼ (100 µs) −1 for this QD device, and optically induced spin flips. Such spin-flip Raman transitions in the 4-level system are allowed for Overhauser field configurations with a component in the plane perpendicular to the growth axis. Spin pumping via this channel occurs on average after three optical cycles in the absence of an external field 28 :
$\gamma_{\mathrm{sp}} = \dfrac{s}{2(1+s)T_1} \times \dfrac{1}{3}$. (1)
This corresponds to spin flip times of tens of nanoseconds at low excitation power and below ten nanoseconds above saturation. Consequently, even for an excitation power corresponding to a tenth of the saturation power we expect a significant motional averaging effect, effectively prolonging electron spin dephasing due to nuclei precessing in the Knight field into the microsecond range. We may capture these dynamics in a phenomenological rate equation model which is plotted in Fig. 4 (b) as a dashed line:
$\gamma_{N1} \approx \dfrac{\gamma_K^2}{\gamma_K + \gamma_{\mathrm{sp}}}\,\rho_g$, (2)
where $\gamma_K$ is the electron spin dephasing rate arising from nuclear precession in the Knight field and $\rho_g$ is the QD ground state population. For a QD of N nuclear spins this electron spin dephasing time scales as $T_K \sim \sqrt{N}\,T_{2,e}$, where $T_{2,e} \sim 1$ ns is the electron spin dephasing time in the Overhauser field 18 . For a QD of average size, $N \sim 5\times 10^4$, we obtain $T_K = 1/\gamma_K \sim 200$ ns which reproduces the power dependence we observe in the data.
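A quick numerical evaluation of Eqs. (1) and (2) makes the motional-averaging argument explicit; the sketch below assumes T_1 = 700 ps, adds the measured ∼(100 µs)^-1 co-tunneling rate to the optical spin-flip rate, takes T_K = 200 ns as estimated above, and approximates the ground-state population by that of a resonantly driven two-level system.

```python
T1 = 700e-12                     # radiative lifetime (s)
gamma_K = 1.0 / 200e-9           # Knight-field dephasing rate, T_K ~ 200 ns (assumed)
gamma_cotunnel = 1.0 / 100e-6    # spin-flip co-tunneling rate (s^-1)

for s in (0.09, 0.5, 1.0, 1.8):
    rho_excited = 0.5 * s / (1.0 + s)          # on-resonance excited-state population (two-level estimate)
    rho_ground = 1.0 - rho_excited             # ground-state population
    gamma_sp = s / (2.0 * (1.0 + s) * T1) / 3.0 + gamma_cotunnel   # Eq. (1) plus co-tunneling
    gamma_N1 = gamma_K**2 / (gamma_K + gamma_sp) * rho_ground      # Eq. (2)
    print(f"s = {s:4.2f}:  tau_sp = {1/gamma_sp*1e9:6.1f} ns,"
          f"  tau_N1 = {1/gamma_N1*1e6:5.1f} us")
```

With these assumed numbers the predicted τ_N1 slows from roughly a microsecond below saturation to about 10 µs at s ≈ 2, i.e. the same trend and magnitude as the measured values.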
Concerning the origin of the correlation time τ N2 , we may exclude direct dipolar coupling of nuclear spins as the sole contributor because it is a local interaction that depends only weakly on dynamics of the electron spin state, or the QD ground state population. Instead, hyperfine-mediated indirect coupling of nuclear spins, which was shown to be an efficient mechanism for relaxation of dynamic nuclear spin polarization 36,37 and electron spin dephasing 21,38 is likely to be at the origin of the τ N2 correlation. The interaction strength of this second-order process is at least equal to the dipolar interaction in bulk material and dominates dynamics in strained QD systems, where quadrupolar effects (and the Knight field) suppress dipolar coupling. Hyperfinemediated nuclear spin interaction is dependent upon the electron spin state and as such will also be susceptible to motional averaging under electron spin flips. It remains an open question at this stage whether other (excitationpower dependent) interactions 39 take part in nuclear spin dynamics at these timescales. The data indicate a correlation time τ N2 ∼ 10 µs in the absence of optical excitation. We find quantitatively similar behavior for different QDs on the same sample. We note that, in contrast to the nuclear spin dynamics, correlation times associated with electric field fluctuations display a speedup of about a factor two for the same increase of excitation power.
Our results clearly demonstrate a dependence of the dynamics of the nuclear spin bath on the strength of resonant optical excitation. Motional averaging due to spin flips of the resident electron provide qualitative agreement with our observations. The precise dynamics are expected to be highly sensitive to variations in sample structure. Of particular importance is the size of the tunnel barrier separating the QD layer from the doped back contact which determines the electron spinflip co-tunneling rate. Whereas the spin-flip co-tunneling timescale for our sample (35-nm barrier) is about 100 µs in the center of the one-electron stability plateau, a 25nm barrier (compared to Ref. 26) can result in timescales in the nanosecond regime. As a consequence of the fast spin recycling for narrow tunnel barriers, we expect the Knight field to be entirely absent, and electron-mediated nuclear spin interaction to be weak. In the limit of fast spin flips we expect to recover a nuclear bath fluctuation time governed by direct dipolar coupling.
V. QUANTIFYING ELECTRIC AND MAGNETIC FIELD FLUCTUATIONS
Magnetic and electric field correlation times for QDs in our device are well separated (up to 50 µs for nuclear spin bath fluctuations, beyond 1 ms for electric fields) so that electric field fluctuations can be considered frozen on the timescale of Overhauser and Knight field evolution. Here we employ this separation to quantify noise magnitudes using the model discussed in Fig. 1. We first calculate the time-averaged effect of a nuclear spin bath with isotropic distribution function on the excited state populations. Here, the sub-linewidth ground state splitting results in a broadened absorption lineshape (see Appendix A). Electric field fluctuations are then included as a Gaussian distribution of transition resonance frequencies. The electric field contribution to noise in the fluorescence is found directly as the ratio of the resulting variance to the square of the mean excited state population. Our experimental data contain several processes on different timescales associated with electric field fluctuations, however we are able to characterize the combined noise averaged over full measurement times with a single field distribution function 14 . In this case it is the sum of noise amplitudes that we are concerned with and therefore a non-Markovian model which treats dynamics on multiple timescales independently is not required.
Taking the value of the measured autocorrelation function at a time delay where the dominant contributions due to the nuclear field fluctuations have decayed (τ ∼ 200 µs), we find the noise amplitude due to the electric field happening on all (longer) timescales. In Fig. 5(a) this fluctuation amplitude for QD A (data as circles) is fit using the time-averaged model (curve) where free parameters are E FWHM representing the full width at half maximum of the electric field distribution function and the standard deviation of the Overhauser field distribution σ B . The simulation is in agreement with the data for an Overhauser field with standard deviation σ B = (22±2) mT and a broadening of the optical transition by a Gaussian distribution with a FWHM of (205±7) MHz. Taking into account the measured Stark shift for this QD we arrive at an electric field fluctuation distribution with a FWHM of (3.2±0.1)×10^3 V/m. For QD B we extract σ B = (25±2) mT and E FWHM = (3.5±0.2)×10^3 V/m, which corresponds to a transition frequency broadening due to electric field of (168±11) MHz [ Fig. 5(b)]. We note that whilst this model does not include the fluctuation processes of nuclear spins explicitly, it is possible to obtain a characteristic Overhauser field distribution through its necessary impact on the underlying absorption lineshape. This model is applicable for low to moderate QD excitation. In this regime we find the extracted Overhauser field distributions to be unaffected by excitation power.
The standard deviations of the Overhauser field distributions we extract from sets of autocorrelations are consistent between QDs and agree with values reported in the literature inferred through other techniques 40,41 , as well as theoretical predictions 18,19 . We note that, in addition to a number of distinct exponential decays in the autocorrelations, a 1/f component is also present at low frequencies. Exponential decays are associated with single charge trapping processes or alternatively multiple interacting charge traps 42 . The description in the long-time limit via a Gaussian distribution function of electric fields suggests a large number of noise-contributing processes over the range of timescales. However, only a few charges are required to produce a large number of electric field values when acting in combination. Comparing results from multiple QDs we find the amplitudes of electric field fluctuations varying widely, in contrast to nuclear spin fluctuations. However, the timescales of distinct noise processes agree very well between QDs; see Appendix B for additional data on QD B. The consistency of the timescales between QDs suggests the electric field fluctuations are due to distinct classes of charge traps present throughout the sample, i.e. the noise dynamics are a global sample property. The noise amplitude for a particular QD, however, is a local property, depending on the specific relative geometry of QD and noise sources.
VI. SUMMARY AND CONCLUSIONS
In summary, we have investigated the contributions of nuclear spin bath fluctuations and dynamic electric field sources to the environmental noise of a QD in the presence of optical excitation. RFFS provides powerful tools to quantify these processes, for instance through bunching amplitudes of the intensity autocorrelation. Twocolor excitation allows a clear distinction of the noise origins and permits unambiguous identification of nuclear bath correlation times. Two distinct correlation times associated with nuclear spin fluctuations are interpreted as arising from a partially shielded Knight field and hyperfine-mediated nuclear spin interaction. A separation of nuclear (<50 µs) and electric field noise (>1 ms) timescales makes a comparison to a Markovian model of time-averaged noise possible and allows us to quantify the environmental fluctuations. In the present sample, the dominant noise due to electric fields is described by a Gaussian distribution leading to spectral diffusion of 100-300 MHz while the Overhauser field magnitude corresponds to 22-25 mT at low excitation power. Our approach permits the direct quantitative comparison of individual QDs and different samples.
RFFS allows access to the rich physics of the central spin problem in the context of a confined system that is highly sensitive to both the inherent strain and interaction with a nearby Fermi sea. Exploring this parameter space in greater detail is the focus of future investigations. In addition, dynamics of the nuclear spin bath may be studied in the absence of an interacting electron 27,36 . Here, the two-color excitation scheme which allows exclusive access to magnetic field fluctuations can be extended to neutral QDs, where the two transitions split by the fine structure are driven simultaneously. An extension of this work in a different direction could be studying the influence of feedback on the environment.
FIG. 6: (a) Level structure and transitions for a negatively charged QD. Excited states decay radiatively, indicated by black wavy arrows, with rate Γ rad and the ground state spin relaxes at Γ GS . In the absence of an external applied field the Zeeman splitting induced by the Overhauser field introduces a sub-linewidth splitting and consequently modifies the selection rules. (b) The instantaneous Overhauser field is decomposed into cylindrical components in our model.
The intensity of resonance fluorescence is directly proportional to the excited state population, so we employ the optical Bloch equations to calculate this in the case of a negatively charged QD (X 1− transition). The energy levels are indicated in Fig. 6. In the absence of an external applied magnetic field the degeneracy of the spin states is lifted due to the interaction with the ensemble of 10^4-10^5 nuclear spins in the QD. The hyperfine interaction is composed of direct dipolar interactions between nuclear and electron/hole spins, and the dominant Fermi contact interaction term 15 . For the heavy-hole wavefunctions in a QD, which are derived from underlying p-type orbitals, the interaction with the nuclear spins is of dipolar form and an order of magnitude smaller than the Fermi contact interaction with the electron spin 43,44 ; it is thus neglected. The Fermi contact hyperfine interaction is treated as an effective magnetic field (the Overhauser field) which provides the electron spin ground state quantization axis. The unpaired hole spin in the excited state has a quantization axis defined by the excitation light. For comparison, in Faraday geometry where an external field is aligned with the growth axis, the ground state electron spin is quantized along this axis, where we represent these states of m s = ±1/2 as |↑⟩ = |1⟩ and |↓⟩ = |2⟩. In this situation diagonal transitions are forbidden. Due to changes in the Overhauser field vector the electron ground state spin quantization axis shifts over time and hence the selection rules are not fixed. In general, the instantaneous eigenstates will be superpositions of the spin states |↑⟩ and |↓⟩, allowing diagonal transitions for most Overhauser field configurations. The term of the Hamiltonian that describes the ground state coupling to the Overhauser field B N can be expressed as
$\hat{H}_{\mathrm{HF}} = \tfrac{1}{2}\mu_B g_e\,\vec{B}_N\cdot\vec{\sigma} = \tfrac{1}{2}\mu_B g_e\left[B_z\,(\sigma_{11}-\sigma_{22}) + B_{\perp}\left(e^{i\theta}\sigma_{21}+e^{-i\theta}\sigma_{12}\right)\right]$ (A1)
where we define the projection operators $\sigma_{ij} = |i\rangle\langle j|$, with $i, j = 1,\ldots,4$ corresponding to one of the four levels of X 1− . The angle θ is indicated in Fig. 6. The additional relevant physical parameters in our model are:
1. The spontaneous emission decay rate Γ rad (see lifetime measurements in appendix B).
2. The QD-excitation field coupling strength given by the Rabi frequency. For convenience we use the parameter $s = 2(\Omega/\Gamma_{\mathrm{rad}})^2$.
3. The ground state spin relaxation rate Γ GS .
4. The laser detuning from the transition frequency, δ = ω QD − ω laser .
In general, pure dephasing (decay of coherences for reasons other than population decay) must also be considered. However, previous experiments have demonstrated slow pure dephasing rates 29 , where measurements of the excited state coherence time suggested T 2 ∼ 2T 1 and so it is neglected in the discussion of the model that follows. The Hamiltonian describing the system takes into account the sum of the excited state energy, the electric dipole term representing interaction with the laser and the hyperfine term:
$\hat{H}_{\mathrm{system}} = \hat{H}_{\mathrm{HF}} + \hat{H}_{\mathrm{dipole}} + \hat{H}_{\mathrm{excited\ state}}$. (A2)
The dipole interaction term can be written in terms of the projection operators in the frame rotating at ω laser with respect to the laboratory frame:
$\hat{H}_{\mathrm{dipole}} = \dfrac{\hbar\Omega}{2}\left[e^{-i\delta t}(\sigma_{13}+\sigma_{24}) + e^{i\delta t}(\sigma_{31}+\sigma_{42})\right]$ (A3)
The time dependence of the resulting density matrix follows the Liouville von Neumann equation,
$\dfrac{d\rho}{dt} = -\dfrac{i}{\hbar}[H,\rho] + \sum_m \mathcal{L}(\rho, L_m)$ (A4)
where m = 1,2,3,4 and
$\mathcal{L}(\rho, L_m) = L_m\rho L_m^{\dagger} - \tfrac{1}{2}\{L_m^{\dagger}L_m,\rho\}$. (A5)
Relaxation of the ground state spin and spontaneous emission processes are included in the Lindblad dephasing operators. In the absence of pure dephasing we have
$L_1 = (\Gamma_{GS})^{1/2}\,\sigma_{12}$, (A6a)
$L_2 = (\Gamma_{GS})^{1/2}\,\sigma_{21}$, (A6b)
$L_3 = (\Gamma_{\mathrm{rad}})^{1/2}\,\sigma_{13}$, (A6c)
$L_4 = (\Gamma_{\mathrm{rad}})^{1/2}\,\sigma_{24}$. (A6d)
We measure resonance fluorescence intensity on timescales longer than the radiative lifetime. Therefore we are interested in the excited state population $\rho_{33}+\rho_{44}$ in the stationary limit $d\rho/dt = 0$. Next, we illustrate the effect of hyperfine coupling on the optical properties of the QD by considering the resulting absorption lineshape. Figure 7 compares the absorption lineshapes expected for an ideal two-level system (blue curve) and the four-level QD system (black curve) given an Overhauser field distribution with finite variance. Typical values are chosen for the parameters discussed above:
1. Spontaneous emission rate $\Gamma_{\mathrm{rad}} = (2\pi T_1)^{-1}$, with $T_1$ = 700 ps.
2. Saturation parameter s = 1.
3. Ground state relaxation rate $\Gamma_{GS} = 2\times 10^{-4}\,\mathrm{s}^{-1}$.
4. Overhauser field standard deviation σ B = 25 mT.
For the two-level system the curve represents the expected Lorentzian power-broadened lineshape. Interestingly, in the case of the X 1− level structure, we obtain a Lorentzian lineshape again, albeit broadened. The amplitude of both curves, corresponding to the intensity of resonance fluorescence, has been scaled to unity here, while in actual fact, the intensity is reduced in the four-level case, mainly due to spin pumping.
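The steady-state excited-state population of the four-level scheme can be computed numerically from Eqs. (A1)-(A6); the following sketch uses QuTiP in the frame rotating at the laser frequency (the choice of solver, the g-factor magnitude, the ground-state relaxation rate, and the particular Overhauser-field realization are assumptions of this example rather than values fixed by the text).

```python
import numpy as np
import qutip as qt

# Levels: 0,1 = electron spin ground states |1>,|2>; 2,3 = trion states |3>,|4>.
def sig(i, j):
    return qt.basis(4, i) * qt.basis(4, j).dag()

T1 = 700e-12                           # radiative lifetime (s)
Gamma_rad = 1.0 / T1                   # spontaneous emission rate (s^-1)
Gamma_GS = 1.0e4                       # ground-state spin relaxation (s^-1, assumed ~co-tunneling scale)
s = 1.0                                # saturation parameter
Omega = Gamma_rad * np.sqrt(s / 2.0)   # Rabi frequency from s = 2(Omega/Gamma)^2 (sketch convention)
delta = 0.0                            # laser detuning (rad/s)

mu_B, hbar, g_e = 9.274e-24, 1.0546e-34, 0.5        # g_e magnitude is an assumed value
B_par, B_perp, theta = 15e-3, 20e-3, 0.3            # one Overhauser-field realization (T, T, rad)
w_par = 0.5 * g_e * mu_B * B_par / hbar             # hyperfine terms of Eq. (A1) in rad/s
w_perp = 0.5 * g_e * mu_B * B_perp / hbar

H = (delta * (sig(2, 2) + sig(3, 3))                                  # detuning (rotating frame)
     + w_par * (sig(0, 0) - sig(1, 1))                                # Overhauser field, z component
     + w_perp * (np.exp(1j * theta) * sig(1, 0)
                 + np.exp(-1j * theta) * sig(0, 1))                   # Overhauser field, in-plane part
     + 0.5 * Omega * (sig(0, 2) + sig(1, 3) + sig(2, 0) + sig(3, 1)))  # dipole coupling, cf. Eq. (A3)

c_ops = [np.sqrt(Gamma_GS) * sig(0, 1), np.sqrt(Gamma_GS) * sig(1, 0),    # Eq. (A6a,b)
         np.sqrt(Gamma_rad) * sig(0, 2), np.sqrt(Gamma_rad) * sig(1, 3)]  # Eq. (A6c,d)

rho_ss = qt.steadystate(H, c_ops)
print("steady-state excited population:", qt.expect(sig(2, 2) + sig(3, 3), rho_ss))
```

Averaging such a calculation over many Overhauser-field realizations and laser detunings yields lineshapes of the kind compared in Fig. 7.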
Autocorrelation bunching amplitudes
The intensity autocorrelation of a time-binned signal written as $\{x_1, x_2, \ldots, x_N\}$ with mean $\langle I(t)\rangle = \bar{x}$ has a zero time delay amplitude given by:
$g^{(2)}(0) = \dfrac{\frac{1}{N}\sum_i x_i^2}{\bar{x}^2}$. (A7)
This can be written directly in terms of the variance σ 2 and the mean as
$g^{(2)}(0) - 1 = \dfrac{\sigma^2}{\bar{x}^2}$. (A8)
We therefore may relate the variance of our entire signal time trace to the full amplitude of the autocorrelation. In the following we will be considering the autocorrelation amplitude of electric field noise in particular. Given the clear division of nuclear spin and electric field-related timescales found experimentally we can consider a cutoff time in the autocorrelation that separates the two. The autocorrelation amplitude at this cut-off point then captures all fluctuations due to electric field noise.
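The identity between Eqs. (A7) and (A8) is easy to verify numerically on any binned record; a minimal check on synthetic counts (assumed values) reads:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.poisson(40.0 * (1.0 + 0.1 * np.repeat(rng.standard_normal(500), 20)))

g2_zero = np.mean(x.astype(float) ** 2) / np.mean(x) ** 2   # Eq. (A7)
print(g2_zero - 1.0, np.var(x) / np.mean(x) ** 2)           # Eq. (A8): the two numbers coincide
```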
Model of electric and Overhauser field distributions
We assume that during the measurement time the full range of possible electric field values is explored. The transition frequency distribution P (∆δ) is represented by a Gaussian distribution about a central resonant frequency:
$P(\Delta\delta) = \dfrac{1}{\sqrt{2\pi\sigma_E^2}}\exp\left[-\dfrac{1}{2}\left(\dfrac{\Delta\delta}{\sigma_E}\right)^2\right]$. (A9)
Here ∆δ is the detuning with respect to the central frequency arising from the electric-field induced Stark shifts. The distribution has a standard deviation, $\sigma_E$, corresponding to a full-width at half-maximum $\Delta_{\mathrm{FWHM}} = \sqrt{8\ln 2}\,\sigma_E$.
$W(\mathbf{B}_N) = \dfrac{1}{(2\pi\sigma_B^2)^{3/2}}\exp\left[-\dfrac{1}{2}\left(\dfrac{B_N}{\sigma_B}\right)^2\right]$, (A10)
where B N is the instantaneous Overhauser field vector and σ B is the standard deviation of the field 16,18 . When considering the combined effects of the two noise sources we take advantage of the separation of timescales and calculate the time-averaged effect of a fluctuating Overhauser field, that is, an absorption lineshape such as found in Fig. 7. Resonance fluorescence noise amplitudes due to electric field fluctuations are then obtained by allowing the central frequency of the QD transition to vary according to the probability distribution in Eq. (A9).
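The time-averaging procedure can be sketched as a Monte-Carlo sampling of Eqs. (A9) and (A10); in the example below the four-level solver is replaced by a crude stand-in (two Lorentzians split by the electron Zeeman shift in the Overhauser field), and the linewidth, saturation parameter, g-factor, and Stark-shift width are assumed illustrative values, so the resulting numbers are qualitative only.

```python
import numpy as np

h, mu_B, g_e = 6.626e-34, 9.274e-24, 0.5     # g_e magnitude assumed
gamma, s = 270e6, 0.2                        # linewidth (Hz) and saturation parameter (assumed)
sigma_B, sigma_E = 25e-3, 85e6               # field std. devs. (T, Hz); sigma_E is an assumption
rng = np.random.default_rng(4)

B_samples = np.linalg.norm(rng.normal(0.0, sigma_B, size=(4000, 3)), axis=1)  # Eq. (A10), isotropic
stark_samples = rng.normal(0.0, sigma_E, size=400)                            # Eq. (A9)

def excited_population(B, detuning):
    """Crude stand-in for the four-level solver: average of two Lorentzians
    split by the electron Zeeman shift in the Overhauser field."""
    split = 0.5 * g_e * mu_B * B / h
    lor = lambda d: 0.5 * s / (1.0 + s + (2.0 * d / gamma) ** 2)
    return 0.5 * (lor(detuning - split) + lor(detuning + split))

def electric_noise_amplitude(laser_detuning):
    # Average over the fast nuclear field for each frozen Stark shift, then take
    # variance / mean^2 of the remaining slow (electric-field) fluctuations, cf. Eq. (A8).
    avg = np.array([excited_population(B_samples, laser_detuning - dE).mean()
                    for dE in stark_samples])
    return avg.var() / avg.mean() ** 2

for d in (0.0, 0.5 * gamma, 1.0 * gamma):
    print(f"detuning {d / gamma:.1f} Gamma: noise amplitude {electric_noise_amplitude(d):.4f}")
```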
Parameters relevant to the model
The sensitivity to electric and Overhauser field fluctuations is determined by the underlying Lorentzian absorption spectrum of a QD transition. The radiative lifetime $T_1$ gives directly the natural linewidth of the transition, where power broadening produces the linewidth under excitation, $\Gamma_{\mathrm{FWHM}} = (2\pi T_1)^{-1}\sqrt{1+s}$. Consequently, the sensitivity to both electric and magnetic field noise drops rapidly as the QD transition is saturated. To model the bunching amplitudes for a particular QD it is necessary to measure both the radiative lifetime and saturation behavior of every QD.
FIG. 8: Fit of electric field noise amplitudes to the model for QD C, where the QD is driven with saturation parameter s = 0.86. The extracted standard deviation for the underlying Overhauser field is 24 mT, and the Gaussian distribution describing the shifts in resonant frequency due to the electric field has a full width at half maximum of (147±5) MHz.
Additional data fit to the electric field amplitude model
Here we present an example of the model applied to QD C. The sum of slow noise amplitudes and a fit is shown in Fig. 8. We extract an Overhauser field distribution with a standard deviation of (24 ± 2) mT. The error in the fit presented in Fig. 8 is minimized for electric field noise with an E FWHM of (2.3 ± 0.1)×10^3 V/m or, equivalently, a transition broadening of (147 ± 5) MHz.
Fit to autocorrelation bunching decays
Data is fit to a sum of multiple exponential decays in order to extract rates of noise processes. Extracting individual amplitudes is useful to identify the origins of each component of noise. In appendix B we present further correlation timescales and amplitudes found for QD B.
A single exponential decay with correlation time τ c is indicative of a single relaxation process, where the corresponding power spectrum (directly related via the Wiener-Khinchin theorem) is a single Lorentzian peak with a width that is proportional to 1/τ c 33 . In our data we are able to consistently extract between 4 and 6 exponential decays, which suggests this is the number of distinct processes contributing to noise. We model electric field fluctuations by a Gaussian distribution of resonant frequencies which is a good description for the effect of noise upon photon counting statistics 14 . However, a Gaussian distribution is consistent with a large number of electric field values, not initially in keeping with a small number of charge traps. One picture is that a relatively small number of independently fluctuating charge traps, N, which can be occupied or unoccupied, leads to 2^N possible electric field values at the position of the dot. In addition, single decay timescales in the autocorrelation may be associated with many similar charge traps rather than single locations, potentially increasing N. There is also the possibility that the charge traps interact; in this case a large number of traps with a range of associated timescales again result in Lorentzian noise spectra and thus exponential decays in autocorrelations 42 . Figure 9 shows the treatment of measured autocorrelation data for two examples. The autocorrelation function for the detection of laser emission at comparable count rate is subtracted from the autocorrelation measured for QD fluorescence. This background data is taken with the QD transition detuned from the resonant laser, where the polarization suppression is relaxed to gain the same photon count rates. While APD afterpulsing has a pronounced effect at time delays up to about 1 µs, small corrections resulting from the subtractions are visible for time delays as large as 100 µs, rendering it necessary to take into account background for all data. Further, the autocorrelation function depends sensitively on experimental settings, such as APD count rate or laser power stabilization, and changes when equipment is exchanged. For this reason the reference measurement of the laser autocorrelation has to replicate experimental conditions as closely as possible.
Lifetime measurements of QD A, B, C
The excited state lifetime T_1 is measured under pulsed resonant excitation, using an electro-optic modulator with 10 GHz bandwidth driven by voltage pulses with sub-50 ps rise and fall times. QD resonance fluorescence detection times are recorded in bins of 162 ps width with respect to a trigger signal derived from the pulsed voltage source. Data for the three QDs used in this paper are plotted in Fig. 10, together with single exponential fit functions. The error is the standard error in the mean for independent fits to decay curves under repeated measurement (four for QDs A and C and eight for QD B). The timing resolution of the measurement system amounts to ∼ 350 ps.

FIG. 11: Autocorrelation decay amplitudes and timescales for QD B, s = 0.92. The lower two panels are identified as noise due to nuclear spin fluctuations, whilst the upper four panels show a detuning dependence consistent with underlying electric field fluctuations, as discussed in the main text. All nuclear spin fluctuation timescales are again below 100 µs, whilst electric field fluctuations persist from 1 ms up to seconds.
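Complementing the single-exponential fits shown in Fig. 10, here is a minimal sketch of the T_1 extraction, assuming a simple single-exponential model with a flat background; the bin width matches the 162 ps quoted above, while the decay histogram itself is synthetic.

import numpy as np
from scipy.optimize import curve_fit

BIN = 162e-12                 # detection-time bin width (162 ps)
t = np.arange(0, 5e-9, BIN)   # time axis relative to the excitation trigger

# Placeholder decay histogram; real data would be the measured counts per bin.
T1_TRUE = 0.8e-9
counts = 1000.0 * np.exp(-t / T1_TRUE) + np.random.default_rng(1).poisson(5, t.size)

def decay(t, amplitude, T1, background):
    return amplitude * np.exp(-t / T1) + background

popt, pcov = curve_fit(decay, t, counts, p0=[counts.max(), 1e-9, 1.0])
T1, dT1 = popt[1], np.sqrt(pcov[1][1])
print(f"T1 = {T1*1e9:.2f} ns +/- {dT1*1e9:.2f} ns (single-curve statistical error)")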
Detailed autocorrelation amplitudes and timescales for QD B

Figure 11 displays amplitudes and timescales as extracted from exponential fits to the measured autocorrelation functions of QD B. As described in the main text, the bottom two sets of panels represent nuclear field fluctuations while the top panels describe resonance fluorescence fluctuations due to electric field noise. We calculate correlation times up to ∼ 1 s from the resonance fluorescence time traces as before, but note that noise processes (due to electric field fluctuations) with considerably longer correlation times take place in our samples as well. These slow dynamics can be accessed in measurements with long acquisition times, but are unlikely to differ qualitatively from the electric-field noise we observe on faster timescales.
In comparison to data for QD A, cf. Fig. 2, decay amplitudes related to electric field fluctuations are reduced by about a factor for QD B. However, we note the timescales are very similar in the two cases and are consistent with the values measured for other QDs of the same sample, as expected when the noise arises from sample-dependent defects.
Sample structure
Our sample structure is illustrated in Fig. 12. Self-assembled InGaAs QDs are incorporated into a Schottky diode structure with a 35-nm tunnel barrier between the QD layer and an n-doped layer. The diode heterostructure is grown above a distributed Bragg reflector to maximize photon outcoupling efficiency. Further enhancement of photon collection is obtained by the presence of a super-hemispherical solid immersion lens placed directly on the semi-transparent Titanium Schottky contact on the surface of the sample. For the current sample we estimate a photon outcoupling efficiency of up to 15 % for QDs with emission wavelengths around 970-980 nm.
FIG. 1: (a) A QD excited state energy level undergoes a Stark shift proportional to a change of the electric field component aligned with the dipole. Time-varying electric fields from local defects broaden the resonance of the optical transition. (b) QD absorption jitter for a set of electric field values E_z following a Gaussian probability distribution. The variance of the fluorescence intensity, indicated by dotted red lines, depends on the resonant laser detuning, here shown in units of the natural linewidth Γ. (c)
FIG. 2: (a) Segments of resonance fluorescence time traces from QD A X^{1-} driven below saturation, s = 0.28. APD counts are binned to 200 µs resolution here. White bars display the standard deviations expected from Poisson statistics (bar thickness) about the mean count rate. The respective excitation detuning is indicated in the legend. (b) Intensity autocorrelations calculated from the time traces presented in (a); data as circles, fit as line. Bunching amplitudes vary significantly with detuning. (c) Fitting autocorrelations with multiple exponential decays reveals distinct decay timescales (right). Left: amplitudes of autocorrelation decays are strongly dependent on laser detuning.
FIG. 3: (a) Sketch of the noise balancing concept. Two-laser excitation at equal and opposite detuning renders the resonance fluorescence intensity insensitive to small linear shifts in QD resonance frequency caused by electric field fluctuations (bottom). Resonance fluorescence intensity noise due to Overhauser field changes is enhanced (top). (b) Upper panel: resonance fluorescence time trace segment for QD A X^{1-} under excitation with a laser detuned by half a linewidth. Lower panel: resonance fluorescence from the same QD at identical total excitation power with two equally detuned lasers, enabling direct comparison. White bars indicate shot noise from the mean count rates; in the case of two lasers the fluctuations about this are much reduced in comparison to single laser excitation. (c) Autocorrelations of data from (b). Bunching amplitudes for time delays > 100 µs are strongly suppressed for two-laser excitation. Bunching with characteristic decay times of 6 µs and 40 µs remains. The long timescale amplitude is reduced by about two orders of magnitude whilst noise on shorter timescales remains, consistent with a magnetic field origin.
FIG. 4: Autocorrelations for QD B X^{1-} for close to resonant (∆ ∼ 0 MHz) driving. The saturation parameter (indicated in the box on the bottom right-hand side) is varied between s = 0.09 and s = 1.8. Inset: Correlation times of the nuclear spin bath, extracted from exponential fits to the data (not shown), show a strong dependence on the driving power.
FIG. 5: Example simulations of electric field noise, where measured amplitudes are displayed as blue circles. Nuclear field noise amplitudes are displayed as squares. (a) Electric field fluctuation amplitudes for QD A. Here fast noise is masked by the large electric field noise contribution. (b) Comparison to simulation for QD B. The short timescales due to nuclear field fluctuations make a larger contribution in this case.
FIG. 7: Absorption lineshape of an ideal two-level system (blue) and the X^{1-} 4-level system with a Gaussian Overhauser field of 25 mT standard deviation (black). The ground (4-level system only) and excited state lifetimes correspond to typical measured values.
FIG. 9: Data treatment for calculated autocorrelation functions. Red curves show the raw autocorrelations for both QD RF and laser output at a comparable count rate. The blue curve is the difference between the QD RF and laser autocorrelations, and accounts for intensity fluctuations inherent to the measurement apparatus, such as detector afterpulsing. (a) and (b) represent two examples recorded for different parameters of the measurement system such as detector count rate, laser frequency and power incident upon the experimental set-up.
FIG. 10: Measurements of the radiative lifetime for QDs A, B, C using time-correlated single photon counting under pulsed resonant excitation.

Appendix B: Supporting data

1. Background correction of data
Acknowledgments

We thank H. Ribeiro and R. Stockill for useful discussions. C.M. gratefully acknowledges Clare College, Cambridge for financial support through a Junior Research Fellowship. We gratefully acknowledge financial support by the University of Cambridge, the European Research Council ERC Consolidator Grant agreement no. 617985 and the EU-FP7 Marie Curie Initial Training Network S^3NANO.
* Electronic address: [email protected]
D. Gammon and D. G. Steel, Physics Today 55, 36 (2002).
R. J. Warburton, C. Schulhauser, D. Haft, C. Schäflein, K. Karrai, J. M. Garcia, W. Schoenfeld, and P. M. Petroff, Physical Review B 65, 113303 (2002).
M. Bayer, G. Ortner, O. Stern, A. Kuther, A. Gorbunov, A. Forchel, P. Hawrylak, S. Fafard, K. Hinzer, T. Reinecke, et al., Physical Review B 65, 195315 (2002).
J. Finley, D. Mowbray, M. Skolnick, A. Ashmore, C. Baker, A. Monte, and M. Hopkinson, Physical Review B 66, 153316 (2002).
A. Vamivakas, Y. Zhao, S. Fält, A. Badolato, J. Taylor, and M. Atatüre, Physical Review Letters 107, 166802 (2011).
I. Wilson-Rae, P. Zoller, and A. Imamoglu, Physical Review Letters 92, 075507 (2004).
I. Yeo, P.-L. de Assis, A. Gloppe, E. Dupont-Ferrier, P. Verlot, N. S. Malik, E. Dupuy, J. Claudon, J.-M. Gérard, A. Auffèves, et al., Nature Nanotechnology (2013).
M. Montinaro, G. Wüst, M. Munsch, Y. Fontana, E. Russo-Averchi, M. Heiss, A. Fontcuberta i Morral, R. J. Warburton, and M. Poggio, arXiv preprint arXiv:1405.2821v1 (2014).
R. J. Warburton, Nature Materials 12, 483 (2013).
FIG. 12: Left: the sample structure, indicating all MBE-grown layers. Right: post-growth Ohmic and Schottky contacts are applied to the diode structure and a SIL is placed on the sample surface.
P. Lodahl, S. Mahmoodian, and S. Stobbe, arXiv preprint arXiv:1312.1079 (2013).
J. Houel, A. Kuhlmann, L. Greuter, F. Xue, M. Poggio, B. Gerardot, P. Dalgarno, A. Badolato, P. Petroff, A. Ludwig, et al., Physical Review Letters 108, 107401 (2012).
H. S. Nguyen, G. Sallen, M. Abbarchi, R. Ferreira, C. Voisin, P. Roussignol, G. Cassabois, and C. Diederichs, Physical Review B 87, 115305 (2013).
M. Davanço, C. S. Hellberg, S. Ates, A. Badolato, and K. Srinivasan, Physical Review B 89, 161303 (2014).
C. Matthiesen, M. J. Stanley, M. Hugues, E. Clarke, and M. Atatüre, Scientific Reports 4 (2014).
A. Abragam, Oxford: University Press 119, 120 (1998).
B. Urbaszek, X. Marie, T. Amand, O. Krebs, P. Voisin, P. Maletinsky, A. Högele, and A. Imamoglu, Reviews of Modern Physics 85, 79 (2013).
E. Chekhovich, M. Makhonin, A. Tartakovskii, A. Yacoby, H. Bluhm, K. Nowack, and L. Vandersypen, Nature Materials 12, 494 (2013).
I. Merkulov, A. L. Efros, and M. Rosen, Physical Review B 65, 205309 (2002).
A. V. Khaetskii, D. Loss, and L. Glazman, Physical Review Letters 88, 186802 (2002).
W. Coish and D. Loss, Physical Review B 70, 195340 (2004).
L. Cywiński, W. M. Witzel, and S. D. Sarma, Physical Review Letters 102, 057601 (2009).
N. Sinitsyn, Y. Li, S. Crooker, A. Saxena, and D. Smith, Physical Review Letters 109, 166605 (2012).
A. Johnson, J. Petta, J. Taylor, A. Yacoby, M. Lukin, C. Marcus, M. Hanson, and A. Gossard, Nature 435, 925 (2005).
X. Xu, B. Sun, P. R. Berman, D. G. Steel, A. S. Bracker, D. Gammon, and L. Sham, Nature Physics 4, 692 (2008).
D. Press, K. De Greve, P. L. McMahon, T. D. Ladd, B. Friess, C. Schneider, M. Kamp, S. Höfling, A. Forchel, and Y. Yamamoto, Nature Photonics 4, 367 (2010).
A. V. Kuhlmann, J. Houel, A. Ludwig, L. Greuter, D. Reuter, A. D. Wieck, M. Poggio, and R. J. Warburton, Nature Physics 9, 570 (2013).
E. Chekhovich, M. Hopkinson, M. Skolnick, and A. Tartakovskii, arXiv preprint arXiv:1403.1510 (2014).
J. Hansom, C. H. H. Schulte, C. Le Gall, E. Clarke, M. Hugues, J. M. Taylor, and M. Atatüre, arXiv preprint arXiv:1408.1272 (2014).
C. Matthiesen, A. N. Vamivakas, and M. Atatüre, Physical Review Letters 108, 093602 (2012).
D. Magde, E. Elson, and W. W. Webb, Physical Review Letters 29, 705 (1972).
O. Krichevsky and G. Bonnet, Reports on Progress in Physics 65, 251 (2002).
M. Lippitz, F. Kulzer, and M. Orrit, ChemPhysChem 6, 770 (2005).
S. Machlup, Journal of Applied Physics 25 (1954).
C. Lai, P. Maletinsky, A. Badolato, and A. Imamoglu, Physical Review Letters 96, 167403 (2006).
E. Welander, E. Chekhovich, A. Tarttakovskii, and G. Burkard, arXiv preprint arXiv:1405.1329 (2014).
P. Maletinsky, A. Badolato, and A. Imamoglu, Physical Review Letters 99, 056804 (2007).
C. Latta, A. Srivastava, and A. Imamoglu, Physical Review Letters 107, 167401 (2011).
C. Deng and X. Hu, Physical Review B 78, 245301 (2008).
D. Paget, T. Amand, and J.-P. Korb, Physical Review B 77, 245201 (2008).
P.-F. Braun, X. Marie, L. Lombez, B. Urbaszek, T. Amand, P. Renucci, V. K. Kalevich, K. V. Kavokin, O. Krebs, P. Voisin, et al., Phys. Rev. Lett. 94, 116601 (2005).
J. Dreiser, M. Atatüre, C. Galland, T. Müller, A. Badolato, and A. Imamoglu, Physical Review B 77, 075317 (2008).
F. Hooge and P. Bobbert, Physica B: Condensed Matter 239, 223 (1997).
P. Fallahi, S. Yılmaz, and A. Imamoglu, Physical Review Letters 105, 257402 (2010).
E. Chekhovich, A. Krysa, M. Skolnick, and A. Tartakovskii, Physical Review Letters 106, 027402 (2011).
| []
|
[
"Strangeness in the Meson Cloud Model",
"Strangeness in the Meson Cloud Model"
]
| [
"A I Signal \nInstitute of Fundamental Sciences PN461\nMassey University\n4442Palmerston North, New Zealand\n"
]
| [
"Institute of Fundamental Sciences PN461\nMassey University\n4442Palmerston North, New Zealand"
]
| []
| I review progress in calculating strange quark and antiquark distributions of the nucleon using the meson cloud model. This progress parallels that of the meson cloud model, which is now a useful theoretical basis for understanding symmetry breaking in nucleon parton distribution functions. I examine the breaking of symmetries involving strange quarks and antiquarks, including quarkantiquark symmetry in the sea, SU(3) flavour symmetry and SU(6) spin-flavour symmetry. | 10.1063/1.3479364 | [
"https://arxiv.org/pdf/1004.2813v1.pdf"
]
| 118,671,996 | 1004.2813 | 4c0957fa72b80f986c4c404952592efdd3d578ca |
Strangeness in the Meson Cloud Model
16 Apr 2010
A I Signal
Institute of Fundamental Sciences PN461
Massey University
4442Palmerston North, New Zealand
Strangeness in the Meson Cloud Model
16 Apr 2010
I review progress in calculating strange quark and antiquark distributions of the nucleon using the meson cloud model. This progress parallels that of the meson cloud model, which is now a useful theoretical basis for understanding symmetry breaking in nucleon parton distribution functions. I examine the breaking of symmetries involving strange quarks and antiquarks, including quarkantiquark symmetry in the sea, SU(3) flavour symmetry and SU(6) spin-flavour symmetry.
BEGINNINGS -QUARK-ANTIQUARK ASYMMETRY
Tony Thomas and I met when I came to Adelaide as a new PhD student in early 1985. We agreed that I would work in the area of deep inelastic scattering, and in the first year Tony gave me a number of projects to work on. One of these was to try to extend his work from 1983 on the role of the non-perturbative pion cloud of the nucleon in DIS [1] to include kaons. We soon realized that the strangeness-carrying components of the cloud would have different characteristics to the non-strange components. This is because all the s̄ antiquarks in the cloud come from the kaon, whereas all the s quarks come from the hyperons. So immediately we saw the possibility that quark and antiquark could have different momentum distributions in the cloud. This was one of the first calculations to take into account the contributions to nucleon quark distribution functions coming from baryons in the cloud via the Sullivan process [2], see fig. 1.
We were able to show that the contributions to the quark and antiquark distributions are given by convolutions between the distribution functions of quarks or antiquarks in the hyperon or kaon with the momentum distribution, or fluctuation function, of these hadrons in the cloud:
$$x\,\delta\bar{s}(x) = \int_x^1 dy\, f_K(y)\,\frac{x}{y}\,\bar{s}_K\!\left(\frac{x}{y}\right), \qquad (1)$$
$$x\,\delta s(x) = \int_x^1 dy\, f_H(y)\,\frac{x}{y}\,s_H\!\left(\frac{x}{y}\right). \qquad (2)$$
Using covariant perturbation theory we found [3] that the meson cloud contribution to the antistrange distribution is softer than the contribution to the strange distribution. This arises mainly because we used a s̄_K distribution in the kaon that was fairly soft, and the fluctuation function
$$f_K(y) = f_H(1-y) \qquad (3)$$
is also softer for the kaon than the hyperons. As this was the first attempt to calculate strange contributions from the meson cloud, there were a number of shortcomings with this paper. The first was the use of the covariant formulation of perturbation theory in the calculation. Unfortunately, using this formulation required us to make ansatze for the structure functions of the struck, off-shell, hadrons. We chose these to be the same as on-shell structure functions, which is not correct [4]. A better formulation to use for the meson cloud model is time-ordered perturbation theory in the infinite momentum frame, as shown by Wally Melnitchouk and Tony Thomas in an important paper for the development of the model [5]. A similar approach was also used by Zoller [6]. Using the time-ordered approach has the advantages that the struck hadrons remain on-mass-shell, so avoiding any ambiguities and allowing us to use experimental input to the structure functions. Also the momentum distributions in the cloud can be shown to satisfy the relation (3) exactly, rather than this being imposed by fiat. In the infinite momentum frame, diagrams where the struck hadron is moving backwards in time are suppressed by powers of the longitudinal momentum, and do not contribute as the limit p_L → ∞ is taken.
We also had to make educated guesses for the strange and antistrange distributions in hyperons and kaons respectively. For the kaon we used an experimental determination of the pion structure function [7], which is fairly soft, whereas for the hyperons we used a simple valence distribution of the form s_H(x) = N_s x^{-1/2}(1 − x)^3. There was also no Q^2 dependence of our input or output distributions.
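For illustration, the convolutions (1)-(2) and the relation (3) can be evaluated numerically as sketched below. The fluctuation function and the kaon distribution are invented stand-ins (only the hyperon form N_s x^{-1/2}(1 − x)^3 follows the text), so the numbers produced are not the model results.

import numpy as np

def s_hyperon(x):
    # illustrative hyperon valence distribution, s_H(x) ~ x^{-1/2} (1-x)^3
    return x ** -0.5 * (1.0 - x) ** 3

def sbar_kaon(x):
    # illustrative soft antistrange distribution in the kaon (assumption)
    return (1.0 - x) ** 1.2 / x ** 0.4

def f_kaon(y):
    # illustrative N -> K Lambda fluctuation function (assumption, unnormalised)
    return y * (1.0 - y) ** 5

def f_hyperon(y):
    # relation (3)
    return f_kaon(1.0 - y)

def x_delta(x, f, q, n=400):
    # x * delta q(x) = int_x^1 dy f(y) (x/y) q(x/y), eqs. (1)-(2)
    y = np.linspace(x, 1.0, n, endpoint=False)[1:]
    g = f(y) * (x / y) * q(x / y)
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(y))

for x in (0.05, 0.1, 0.2, 0.4):
    print(x, x_delta(x, f_kaon, sbar_kaon), x_delta(x, f_hyperon, s_hyperon))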
The question of a possible quark-antiquark asymmetry in the strange sea received new interest in the early 2000's as a result of the interesting experimental result from the NuTeV collaboration [8]. NuTeV measured NC to CC ratios in deep-inelastic ν(ν̄)-nucleon scattering. This enabled them to determine the effective couplings to left and right-handed quarks (g_L and g_R) and, via the Paschos-Wolfenstein (PW) ratio,
$$R^{PW} = \frac{\sigma^{\nu}_{NC} - \sigma^{\bar\nu}_{NC}}{\sigma^{\nu}_{CC} - \sigma^{\bar\nu}_{CC}} = g_L^2 - g_R^2 = \frac{1}{2} - \sin^2\theta_W, \qquad (4)$$
the value of the weak mixing angle
$$\sin^2\theta_W = 0.2277 \pm 0.0013\,(\mathrm{stat}) \pm 0.0009\,(\mathrm{syst}), \qquad (5)$$
which is 2% smaller than the world average value, or a 3σ discrepancy. However, the PW ratio receives corrections from both charge symmetry breaking in the nucleon parton distributions (which Tim Londergan and Tony Thomas have investigated in detail [9]), and quark -antiquark symmetry breaking in the sea:
$$R^{PW} = \frac{1}{2} - \sin^2\theta_W + \frac{3b_1 + b_2}{\langle x(u_V + d_V)\rangle/2}\left[\,-\langle x(s-\bar{s})\rangle + \frac{1}{2}\big(\langle x\,\delta u_V\rangle - \langle x\,\delta d_V\rangle\big)\right] \qquad (6)$$
where
$$\delta u_V = u^p_V - d^n_V\,; \qquad \delta d_V = d^p_V - u^n_V \qquad (7)$$
are the charge symmetry breaking valence distributions and
$$b_1 = \Delta^2_u = g^2_{Lu} - g^2_{Ru}\,; \qquad b_2 = \Delta^2_d = g^2_{Ld} - g^2_{Rd}. \qquad (8)$$
At the NuTeV scale (Q^2 = 16 GeV^2) the coefficient in front of the square brackets of eqn. (6) is about 1.3, so a symmetry breaking term inside the square brackets of −0.0038 would explain the discrepancy between the NuTeV value and the accepted value of sin^2 θ_W. We note that the CTEQ group has analyzed the uncertainties around the experimental results for strange and antistrange distributions in some detail [10]. They place bounds on the second moment of the quark-antiquark asymmetry: −0.001 < ⟨x(s − s̄)⟩ < 0.005.
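As a quick check of these numbers (my arithmetic, not taken from the paper):
$$1.3 \times 0.0038 \approx 5\times 10^{-3} \approx 0.02 \times 0.2277,$$
i.e. with a coefficient of roughly 1.3 in front of the square brackets, a bracket of magnitude 0.0038 shifts the extracted sin^2 θ_W by about 0.005, which is the quoted 2% discrepancy.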
This provided impetus to revisit our calculation of the asymmetry. Now we do the calculation using time-ordered perturbation theory, with on-shell structure functions. For the strange distribution in the hyperons we now use a bag model calculation [11], evolved using next-to-leading order QCD evolution to Q^2 = 16 GeV^2. The valence s̄(x) distribution in the kaon is taken from the parameterization of the Dortmund group [12]. We also note that the form factors cutting off the NKH vertex are fairly soft (Λ_c ∼ 1 GeV). One further point of difference from our original calculation is the inclusion of K* meson Fock states. This can have a significant effect on the calculations, as the coupling constants for K*NH are fairly large [13]. Also the fluctuation functions for N → K*H peak close to y = 0.5, meaning that the final convolutions to obtain the contributions to s and s̄ reflect the underlying hardness or softness of the valence quark distribution in the hadron. However, we realize that we are pushing the bounds of the cloud model here, as it is not clear that K*H final states would have a clear rapidity gap.
We find that the fluctuation functions for kaons are softer than for hyperons, whereas the s quark distributions in Λ and Σ hyperons are now softer than the s̄ distribution in the K and K*. This means that once the quark distributions are convoluted with the fluctuation functions, there is only a small s − s̄ difference, see fig. 2. The second moment of the asymmetry has a magnitude around 10^-4, and positive (negative) sign without (with) K* states included. As this is significantly smaller than the size of effect needed to move the NuTeV result into agreement with the world data, we conclude that the strange sea asymmetry is probably not responsible for the NuTeV anomaly.
POLARISED STRANGE SEA
The calculational techniques outlined in the above section can be generalized to the polarized quark distributions ∆s(x) and ∆s̄(x) without too much difficulty. Polarized quark distributions have been of interest for over 20 years, since the EMC collaboration measured a very small fraction of the nucleon spin being carried by quarks [14]. This is usually interpreted in the context of SU(3) flavour and implies that the strange sea is strongly polarized opposite to the proton, ∆S ≈ −0.15 [15]. It has been pointed out that a natural consequence of the meson cloud model is that the cloud is capable of carrying a significant proportion of the proton's angular momentum [16].
The HERMES collaboration have carried out an extensive programme of flavour analysis of their polarized DIS data [17,18], which shows that the polarized sea quark distributions are fairly small. Our calculations in the MCM, which include the contributions from K * states, are consistent with HERMES data, see fig. 3.
SU(3) FLAVOUR SYMMETRY BREAKING
The unpolarized strange sea is less well constrained by experimental data than the light (ū, d̄) sea. For instance the CTEQ6.5 pdf set [10] has a very large variance in the parameters describing the s and s̄ distributions (± 50% in some instances). The recent HERMES data [18] on the strange sea highlights this problem, as it does not agree well with the NuTeV determination [19] - though we note that the HERMES analysis of their data is only to leading order in QCD, whereas the NuTeV analysis goes to next-to-leading order.
In the MCM, we can estimate the strange sea via the SU(3) flavour breaking asymmetry
$$\Delta(x) = \bar{u}(x) + \bar{d}(x) - s(x) - \bar{s}(x) \qquad (10)$$
which has leading contributions in the cloud coming from the differences between, e.g., |Nπ⟩ and |ΛK⟩ Fock states. On the other hand, there are no leading contributions to ∆(x) in perturbative QCD (and next-to-leading contributions can also be expected to be small). Having calculated ∆(x) in the MCM, we can subtract the light sea distributions, which are experimentally well constrained, and estimate the total strange sea. Our results are shown in fig. 4, and are generally consistent with the HERMES data. We have again included K* states in the calculation of ∆(x), but they do not dominate the final results, and removing them has about a 10% effect on our total s(x) + s̄(x). We note that our calculation becomes negative at x ≈ 0.25, which is unphysical. This could be due to either the MCM calculation overestimating ∆(x) or the CTEQ6.6 pdf set [20] underestimating the light sea [ū(x) + d̄(x)], or both.
In conclusion, the meson cloud model remains an excellent non-perturbative laboratory for exploring and understanding symmetry breaking among the nucleon parton distribution functions. There are still important questions around the polarized and unpolarized strange sea distributions, and the model can help to provide solutions to these.
FIGURE 1. Non-perturbative contributions to the strange sea of the nucleon. (a) The incoming photon is absorbed by a virtual kaon. (b) The incoming photon is absorbed by a virtual hyperon.
FIGURE 2. The strange sea asymmetry calculated in the meson cloud model. The solid and dashed curves are the results without and with the K* contributions respectively.
FIGURE 3. Comparison of MCM calculations for x(∆s + ∆s̄) with the HERMES data at Q^2 = 2.5 GeV^2.
FIGURE 4. The sum of the strange and antistrange quark distributions from the MCM calculations (the thick solid curve), the HERMES measurements (the data points) and the global fit results from CTEQ6.6M (the thick dashed curve), MSTW2008 (the dash curve) and CTEQ6.5 (the shaded area), and the next-to-leading order analysis of NuTeV dimuon data (the thin solid curve).
ACKNOWLEDGMENTS

I am happy to acknowledge the contributions to my understanding of the meson cloud model that have come from many colleagues and friends. Firstly to Tony Thomas, who introduced me to this problem, guided me through my PhD studies, and has been incredibly generous with his time and support over 25 years. Also I have learnt a great deal from other Adelaide students especially Andreas Schreiber, Wally Melnitchouk and Fernanda Steffans. My colleagues at Massey University, Fu-Guang Cao and Francois Bissey, have provided many insights, and I am grateful to them for many years of enjoyable collaboration.
A. W. Thomas, Phys. Lett. B 126, 97 (1983);
M. Ericson and A. W. Thomas, Phys. Lett. B 128, 122 (1983).
J. D. Sullivan, Phys. Rev. D 5, 1732 (1972).
A. I. Signal and A. W. Thomas, Phys. Lett. B 191, 205 (1987).
W. Melnitchouk, A. W. Schreiber and A. W. Thomas, Phys. Rev. D 49, 1183 (1994).
W. Melnitchouk and A. W. Thomas, Phys. Rev. D 47, 3794 (1993).
V. Zoller, Z. Phys. C 53, 443 (1992).
J. Badier et al., Z. Phys. C 18, 291 (1983).
G. P. Zeller et al. (NuTeV collaboration), Phys. Rev. Lett. 88, 091802 (2002).
J. T. Londergan and A. W. Thomas, Phys. Rev. D 67, 111901(R) (2003).
H. L. Lai et al. (CTEQ collaboration), J. High Energy Phys. 0704, 089 (2007).
C. Boros and A. W. Thomas, Phys. Rev. D 60, 074017 (1999);
F. G. Cao and A. I. Signal, Phys. Lett. B 474, 138 (2000);
F. G. Cao and A. I. Signal, Phys. Lett. B 559, 229 (2003).
M. Glück, E. Reya and I. Schienbein, Eur. Phys. J. C 10, 313 (1999).
H. Holtmann, A. Szczurek and J. Speth, Nucl. Phys. A 569, 631 (1996).
J. Ashman et al. (EMC collaboration), Phys. Lett. B 206, 364 (1988).
S. J. Brodsky, J. Ellis and M. Karliner, Phys. Lett. B 206, 309 (1988);
S. D. Bass, The Spin Structure of the Proton, World Scientific, Singapore, 2007.
J. Speth and A. W. Thomas, Adv. Nucl. Phys. 24, 83 (1997);
F. Bissey, F. G. Cao and A. I. Signal, Phys. Rev. D 73, 094008 (2006).
A. Airapetian et al. (HERMES collaboration), Phys. Rev. Lett. 92, 012005 (2004).
A. Airapetian et al. (HERMES collaboration), Phys. Lett. B 666, 446 (2008).
D. Mason et al. (NuTeV collaboration), Phys. Rev. Lett. 99, 192001 (2007).
P. M. Nadolsky et al. (CTEQ collaboration), Phys. Rev. D 78, 013004 (2008).
| []
|
[
"Insensitizing controls for a quasi-linear parabolic equation with diffusion depending on gradient of the state",
"Insensitizing controls for a quasi-linear parabolic equation with diffusion depending on gradient of the state"
]
| [
"Nina Dany ",
"Huaman ",
"† ",
"Miguel R Nuñez-Chávez "
]
| []
| []
| In this paper, a quasi-linear parabolic equation with a diffusion term dependent on the gradient to the state with Dirichlet boundary conditions is considered. The goal of this paper is to prove the existence of control that insensitizes the system under study which is the case that Xu Liu left open in 2012. It is well known that the insensitizing control problem is equivalent to a null controllability result for a cascade system, which is obtained by duality arguments, Carleman estimates, and the Right Inverse mapping theorem. Also, some possible extensions and open problems concerning other quasi-linear systems are presented.Mathematical Subject Classification: 35B37; 93C20; 93B05. | null | [
"https://export.arxiv.org/pdf/2304.04316v1.pdf"
]
| 258,048,767 | 2304.04316 | 2492058cb21eb5c35c6a19b69310197855b00b2d |
Insensitizing controls for a quasi-linear parabolic equation with diffusion depending on gradient of the state
Nina Dany Huaman† and Miguel R. Nuñez-Chávez
Insensitizing controls for a quasi-linear parabolic equation with diffusion depending on gradient of the state
arXiv:2304.04316v1 [math.AP] 9 Apr 2023

Key words and phrases: Quasi-linear equation, Null controllability, Carleman inequality, Insensitizing control.
In this paper, a quasi-linear parabolic equation with a diffusion term dependent on the gradient to the state with Dirichlet boundary conditions is considered. The goal of this paper is to prove the existence of control that insensitizes the system under study which is the case that Xu Liu left open in 2012. It is well known that the insensitizing control problem is equivalent to a null controllability result for a cascade system, which is obtained by duality arguments, Carleman estimates, and the Right Inverse mapping theorem. Also, some possible extensions and open problems concerning other quasi-linear systems are presented.Mathematical Subject Classification: 35B37; 93C20; 93B05.
Introduction
The problem of insensitizing controls was originally addressed by J. L. Lions in [20, 21], leading to numerous papers on this topic for both hyperbolic and parabolic equations.
Concerning the semi-linear heat equation, the first result was obtained in [3] for a distributed control; more precisely,
$$\left\{\begin{array}{ll} y_t - \Delta y + f(y) = \xi + u\chi_\omega & \text{in } \Omega\times(0,T),\\ y = 0 & \text{on } \partial\Omega\times(0,T),\\ y(0) = y_0 + \tau\hat{y}_0 & \text{in } \Omega, \end{array}\right. \qquad (1.1)$$
where Ω is an open set in R^N; χ_ω denotes the characteristic function of the open set ω ⊂ Ω; ξ and ŷ_0 are given in X_1 and X_0 (certain spaces). The data of the state equation (1.1) is incomplete in the following sense:
• ŷ_0 ∈ X_0 is unknown and ||ŷ_0||_{X_0} = 1.
• τ ∈ R is unknown and small enough.

Introducing the functional
$$\Phi(y) = \frac{1}{2}\iint_{\mathcal{O}\times(0,T)} |y(x,t;\tau,u)|^2\, dx\, dt, \qquad (1.2)$$
the problem of insensitizing controls can be stated, roughly, as follows:
• We say that the control u insensitizes Φ if
$$\frac{\partial\Phi}{\partial\tau}\big(y(\cdot,\cdot;\tau,u)\big)\Big|_{\tau=0} = 0; \qquad (1.3)$$
when (1.3) holds, the functional Φ is locally insensitive to the perturbation τŷ_0.
• Given ε > 0, the control u is said to ε-insensitize Φ if
$$\left|\frac{\partial\Phi}{\partial\tau}\big(y(\cdot,\cdot;\tau,u)\big)\Big|_{\tau=0}\right| \le \varepsilon. \qquad (1.4)$$

In [3], the authors introduced and studied the notion of approximate insensitizing controls (ε-insensitizing controls) of (1.1). In order to get rid of the condition y_0 = 0, the authors prove that the problem of ε-insensitizing controls is equivalent to an approximate controllability result for a cascade system, which is established therein.
In [8], the condition y_0 = 0 was removed for the linear heat equation; instead, the following condition was imposed: O ⊂ ω or O = Ω. Furthermore, the authors proved that if the imposed condition is not satisfied, some negative results occur. In [7], the author proved the existence of insensitizing controls for the same semilinear heat system and proved the existence of an initial datum in L^2 that cannot be insensitized. This last result is extended in [4] to super-linear nonlinearities.

For parabolic systems arising from fluid dynamics, the first attempt to treat the insensitizing problem is [10], for a large scale ocean circulation model (linear). In [14], as we have already mentioned, the author treats both the case of a sentinel given by the L^2-norm of the state and that of the L^2-norm of the curl of the state of a linear Stokes system. As long as insensitizing controls have been considered, the condition ω ∩ O ≠ ∅ has always been imposed. But, from [23] and [18], we see that this is not a necessary condition for ε-insensitizing controls. For instance, the authors have proved in [23] that there exist ε-insensitizing controls of Φ for linear heat equations with non-intersecting observation and control regions in one space dimension, using spectral theory. Furthermore, the insensitizing problem, as we have seen in this special case, is directly related to control problems for coupled systems. In particular, one could ask whether it is possible to control both states of a coupled system just by acting on one equation. In [14] and [15], as well as some insensitizing problems, the author studied this problem respectively for Stokes and heat systems in a more general framework.
Continuing, in [28] the authors studied the insensitizing control problem with constraints on the control for a nonlinear heat equation and for more general cost functionals, by means of Kakutani's fixed point theorem combined with an adapted Carleman inequality. Some controllability results for equations of quasi-linear parabolic type have been studied in [24, 25, 26], while the insensitizing controllability of quasi-linear parabolic equations has been studied by Xu Liu in [22] (2012). The author worked with Hölder spaces for the state y and the control u; more precisely, the following system was studied:
$$\left\{\begin{array}{ll} y_t - \displaystyle\sum_{i,j=1}^{N} a_{ij}(y)\, y_{x_i x_j} + f(y) = \xi + u\chi_\omega & \text{in } \Omega\times(0,T),\\ y = 0 & \text{on } \partial\Omega\times(0,T),\\ y(0) = y_0 + \tau\hat{y}_0 & \text{in } \Omega, \end{array}\right. \qquad (1.5)$$
with the energy functional (1.2). The case in which the diffusion coefficients depend on the gradient of the state (a_{ij}(∇y)) was left open by Xu Liu in [22]. The main novelty of this paper is that we prove the existence of insensitizing controls for the system (1.5) with the diffusion coefficient depending on the gradient of the state.
Statements of the Main Results
Let Ω ⊂ R^N (N = 1, 2 or 3) be a bounded domain whose boundary Γ is regular enough. Let T > 0 be given and let us consider the cylinder Q = Ω × (0, T), with lateral boundary Σ = Γ × (0, T). Assume ω and O to be two given non-empty open subsets of Ω. In the sequel, we will denote by χ_ω the characteristic function of the subset ω, and by (·, ·) and ‖·‖ respectively the L^2 scalar product and the norm in Ω. The symbol C is used to denote a generic positive constant.
We will consider the following system
$$\left\{\begin{array}{ll} y_t - \nabla\cdot(a(\nabla y)\nabla y) + f(y) = \xi + u\chi_\omega & \text{in } Q,\\ y = 0 & \text{on } \Sigma,\\ y(0) = y_0 + \tau\hat{y}_0 & \text{in } \Omega. \end{array}\right. \qquad (2.1)$$
In system (2.1), y and u are respectively the state variable and the control variable, ξ and y_0 are two known functions, τ is an unknown small real number, ŷ_0 is an unknown function, and f(·) and a(·) are given functions in C^2(R; R) and C^3(R^N; R) respectively, satisfying
$$\left\{\begin{array}{ll} 0 < a_0 \le a(x) \le a_1, \quad 0 \le D_i a(x)\,x^{tr}, & \forall\, x\in\mathbb{R}^N,\\ \|D_i a(x)\|_{\mathbb{R}^N} + \|D^2_{ij}a(x)\|_{\mathbb{R}^{N^2}} + \|D^3_{ijk}a(x)\|_{\mathbb{R}^{N^3}} \le M, & \forall\, x\in\mathbb{R}^N,\\ f(0) = 0, \quad |f(r)| + |f'(r)| + |f''(r)| \le M, & \forall\, r\in\mathbb{R}, \end{array}\right. \qquad (2.2)$$
where D_i a, D^2_{ij} a and D^3_{ijk} a denote the first, second and third order derivatives (respectively) of the function a(·).
Also, we will consider the energy functional Φ(·) for the system (2.1) as in (1.2). Let us define the following spaces
$$X_0 = \{w :\ w\in L^2(0,T;H^2(\Omega)),\ w_t\in L^2(0,T;L^2(\Omega)),\ w|_\Sigma = 0\},$$
and
$$X_1 = \{w :\ w\in L^2(0,T;H^4(\Omega)),\ w_t\in L^2(0,T;H^2(\Omega)),\ w_{tt}\in L^2(0,T;L^2(\Omega)),\ w|_\Sigma = 0\}.$$
Then, if f = f(·) and a = a(·) satisfy (2.2), ξ, uχ_ω ∈ X_0, y_0, ŷ_0 ∈ H^3(Ω) ∩ H^1_0(Ω) with ∆y_0, ∆ŷ_0 ∈ H^1_0(Ω), and the functions ξ, uχ_ω, y_0, τ are sufficiently small in their respective spaces, then the equation (2.1) admits a unique solution y(·, ·; τ, u) ∈ X_1 satisfying
$$\|y\|_{X_1} \le C\left(\|\xi\|_{X_0} + \|u\chi_\omega\|_{X_0} + \|y_0 + \tau\hat{y}_0\|_{H^3(\Omega)}\right), \qquad (2.3)$$
where C is a positive constant depending only on N, Ω, T, M, a_0 and a_1. A result concerning the well-posedness of system (2.1) can be found in the Appendix. Now, we introduce the following notion:
Definition 2.1. For a given function ξ ∈ X_0 and y_0 ∈ H^3(Ω) ∩ H^1_0(Ω), a control function u ∈ X_0 with supp u ⊆ ω × [0, T] is said to insensitize the functional Φ defined in (1.2) if u satisfies
$$\frac{\partial\Phi}{\partial\tau}\big(y(\cdot,\cdot;\tau,u)\big)\Big|_{\tau=0} = 0, \qquad \forall\,\hat{y}_0\in H^3(\Omega)\cap H^1_0(\Omega)\ \text{with}\ \|\hat{y}_0\|_{H^3(\Omega)} = 1. \qquad (2.4)$$
Our main result is stated in the following Theorem:
Theorem 2.1. Assume that y_0 = 0. Then there exists δ > 0 such that, for any ξ ∈ X_0 satisfying
$$\big\| e^{M/t}\,\xi \big\|_{X_0} \le \delta, \qquad (2.5)$$
one can find a control function u ∈ X_0 with supp u ⊂ ω × [0, T], which insensitizes the functional Φ in the sense of Definition 2.1.
The rest of the paper is organized as follows.
In Section 3, we will prove some technical results: firstly, we will reformulate the insensitizing problem as a null controllability problem for an optimality quasi-linear system; secondly, we will study the null controllability of the linearized system (associated with the optimality quasi-linear system) by applying Carleman estimates; and finally we will obtain some additional estimates for the state.

In Section 4, we will give the proof of Theorem 2.1; that is, we will prove the null controllability of the optimality system by applying Liusternik's method.
Section 5 will deal with some additional comments and results.
In the Appendix, we will give a proof of the well-posedness of the quasi-linear system (2.1).
Preliminary results
Reformulation of the insensitizing problem:
The special form of Φ allows us to reformulate our insensitizing problem as a controllability problem for a cascade system (for more details, see [22], for instance).

Proposition 3.1. Assume that ξ ∈ X_0 satisfies (2.5) and y_0 = 0. Then a control function u ∈ X_0 satisfies the condition (2.4) if and only if the corresponding solution (y, h) ∈ X_1 × X_0 of the following nonlinear system
$$\left\{\begin{array}{ll} y_t - \nabla\cdot(a(\nabla y)\nabla y) + f(y) = \xi + u\chi_\omega & \text{in } Q,\\ -h_t - \nabla\cdot\big\{(D_i a(\nabla y)\,\nabla y^{tr})\,\nabla h + a(\nabla y)\nabla h\big\} + f'(y)h = y\chi_{\mathcal{O}} & \text{in } Q,\\ y = 0,\quad h = 0 & \text{on } \Sigma,\\ y(x,0) = 0,\quad h(x,T) = 0 & \text{in } \Omega, \end{array}\right. \qquad (3.1)$$
satisfies h(x, 0) = 0 in Ω.
Proof. The proof is similar to that of Proposition 2.1 in [22]; here we use standard arguments that can be found in [7, 4].
Remark 3.1. Note that the second hypothesis on a(·) (that is, D_i a(x) x^{tr} ≥ 0) is important in the system (3.1), because we need this system to be uniformly parabolic.
In order to prove that the system (3.1) is null controllable at time t = 0, we consider the linearized system for (3.1)
$$\left\{\begin{array}{ll} y_t - a(0)\Delta y + Ay = u\chi_\omega + g_1 & \text{in } Q,\\ -h_t - a(0)\Delta h + Ah = y\chi_{\mathcal{O}} + g_2 & \text{in } Q,\\ y = 0,\quad h = 0 & \text{on } \Sigma,\\ y(x,0) = 0,\quad h(x,T) = 0 & \text{in } \Omega, \end{array}\right. \qquad (3.2)$$
where A = f'(0), and the adjoint system of (3.2):
$$\left\{\begin{array}{ll} -\varphi_t - a(0)\Delta\varphi + A\varphi = \psi\chi_{\mathcal{O}} + G_1 & \text{in } Q,\\ \psi_t - a(0)\Delta\psi + A\psi = G_2 & \text{in } Q,\\ \varphi = 0,\quad \psi = 0 & \text{on } \Sigma,\\ \varphi(x,T) = 0,\quad \psi(x,0) = \psi_0(x) & \text{in } \Omega. \end{array}\right. \qquad (3.3)$$
Carleman estimates for (3.3)
In the context of the null controllability analysis of parabolic systems, Carleman estimates are a very powerful tool (see [12,17]). In order to state our Carleman estimates we need to define some weight functions. Let ω 0 be a non-empty open subset of ω ∩ O, and set
$$\sigma(x,t) = \frac{e^{4\lambda\|\eta^0\|_\infty} - e^{\lambda(2\|\eta^0\|_\infty + \eta^0(x))}}{t(T-t)}, \qquad \xi(x,t) = \frac{e^{\lambda(2\|\eta^0\|_\infty + \eta^0(x))}}{t(T-t)}, \qquad (3.4)$$
for some parameter λ > 0. Here, η^0 ∈ C^2(Ω) stands for a function that satisfies
$$|\nabla\eta^0| \ge k_0 > 0 \ \text{ in } \Omega\setminus\omega_0, \qquad \eta^0 > 0 \ \text{ in } \Omega, \qquad \eta^0 = 0 \ \text{ on } \partial\Omega. \qquad (3.5)$$
The proof of the existence of such a function η^0 can be found in [12]. This kind of weight function was also used in [24, 6]. We introduce the following notation:
$$I(s,\lambda;\phi) := \iint_Q e^{-2s\sigma}\left[(s\xi)^{-1}\big(|\phi_t|^2 + |\Delta\phi|^2\big) + \lambda^2 s\xi\,|\nabla\phi|^2 + \lambda^4(s\xi)^3|\phi|^2\right] dx\,dt.$$
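A small numerical illustration of these weights may help fix ideas (purely illustrative: the choices of η^0, λ, s and the domain below are assumptions, not the constants required by the propositions that follow). It shows how the factor e^{-2sσ} forces any quantity in the estimate to vanish as t → 0^+ and t → T^-.

import numpy as np

# Illustrative 1-D example on Omega = (0, 1): eta0 vanishes on the boundary
# and is positive inside, mimicking (3.5); lambda, s and T are arbitrary here.
T, lam, s = 1.0, 2.0, 1.0

def eta0(x):
    return x * (1.0 - x)

ETA_MAX = 0.25  # max of eta0 on [0, 1]

def sigma(x, t):
    # weight sigma(x, t) of (3.4)
    return (np.exp(4.0 * lam * ETA_MAX)
            - np.exp(lam * (2.0 * ETA_MAX + eta0(x)))) / (t * (T - t))

def xi(x, t):
    # weight xi(x, t) of (3.4)
    return np.exp(lam * (2.0 * ETA_MAX + eta0(x))) / (t * (T - t))

for t in (1e-3, 0.1, 0.5, 0.9, 1.0 - 1e-3):
    w = np.exp(-2.0 * s * sigma(0.5, t))
    print(f"t = {t:6.3f}   sigma = {sigma(0.5, t):12.2f}   exp(-2 s sigma) = {w:.3e}")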
We will deduce a Carleman inequality for the solutions to systems of the kind (3.3).
Proposition 3.2. Let us assume that G_1, G_2 ∈ L^2(Q). There exist positive constants λ_1, s_1 and C_1 such that, for any s ≥ s_1 and λ ≥ λ_1, and any ψ_0 ∈ L^2(Ω), the corresponding solution to (3.3) satisfies
$$I(s,\lambda;\varphi) + I(s,\lambda;\psi) \le C_1\left(\iint_Q e^{-2s\sigma}\big(\lambda^4(s\xi)^3|G_1|^2 + |G_2|^2\big)\,dx\,dt + \iint_{\omega\times(0,T)} e^{-2s\sigma}\lambda^8(s\xi)^7|\varphi|^2\,dx\,dt\right). \qquad (3.6)$$

Proof. Obviously, it will be sufficient to show that there exist λ_1 and s_1 such that, for any small ε > 0, any s ≥ s_1 and any λ ≥ λ_1, one has
$$I(s,\lambda;\varphi) + I(s,\lambda;\psi) \le \epsilon\, I(s,\lambda;\psi) + C_\epsilon\left(\iint_Q e^{-2s\sigma}\big[\lambda^4(s\xi)^3|G_1|^2 + |G_2|^2\big]\,dx\,dt + \iint_{\omega\times(0,T)} e^{-2s\sigma}\lambda^8(s\xi)^7|\varphi|^2\,dx\,dt\right). \qquad (3.7)$$
From the usual Carleman inequalities (see [13, 6]) applied to (3.3), for s ≥ σ_1(T + T^2) and λ ≥ λ_0, we have
$$I(s,\lambda;\varphi) + I(s,\lambda;\psi) \le C\left(\iint_Q e^{-2s\sigma}\big(|G_1|^2 + |G_2|^2\big)\,dx\,dt + \iint_{\omega_0\times(0,T)} e^{-2s\sigma}\lambda^4(s\xi)^3\big(|\varphi|^2 + |\psi|^2\big)\,dx\,dt\right), \qquad (3.8)$$
and from the last inequality we will obtain the inequality (3.7). To this purpose, we first introduce a function η ∈ C_0^∞(ω) satisfying 0 ≤ η ≤ 1 and η = 1 in ω_0. Then
$$\begin{aligned} \iint_{\omega_0\times(0,T)} e^{-2s\sigma}\lambda^4(s\xi)^3|\psi|^2\,dx\,dt &\le \iint_{\omega\times(0,T)} e^{-2s\sigma}\lambda^4(s\xi)^3\,\eta|\psi|^2\,dx\,dt\\ &= \iint_{\omega\times(0,T)} e^{-2s\sigma}\lambda^4(s\xi)^3\,\eta\psi\big(-\varphi_t - a(0)\Delta\varphi + A\varphi - G_1\big)\,dx\,dt\\ &= M_1 + M_2 + M_3 + M_4. \end{aligned}$$
Here, using the properties of the weight functions σ and ξ, we get that
$$M_i \le \epsilon\, I(s,\lambda;\psi) + C_\epsilon \iint_{\omega\times(0,T)} e^{-2s\sigma}\lambda^8(s\xi)^7|\varphi|^2\,dx\,dt.$$
By these estimates and (3.8), we obtain the inequality (3.7), which ends the proof.

Now, we will prove a Carleman estimate with weight functions blowing up only at t = 0, which allows us to prove an observability inequality for the system (3.3).
Define the new weight functions
$$\sigma(x,t) = \frac{e^{4\lambda\|\eta^0\|_\infty} - e^{\lambda(2\|\eta^0\|_\infty + \eta^0(x))}}{\ell(t)}, \qquad \xi(x,t) = \frac{e^{\lambda(2\|\eta^0\|_\infty + \eta^0(x))}}{\ell(t)},$$
$$\sigma^*(t) = \max_{x\in\Omega}\sigma(x,t), \quad \xi^*(t) = \min_{x\in\Omega}\xi(x,t), \quad \hat{\sigma}(t) = \min_{x\in\Omega}\sigma(x,t), \quad \hat{\xi}(t) = \max_{x\in\Omega}\xi(x,t),$$
where the function ℓ is given by
$$\ell(t) = \left\{\begin{array}{ll} t(T-t), & 0 \le t \le T/2,\\ T^2/4, & T/2 < t \le T. \end{array}\right.$$
Notice that these new weight functions coincide with σ and ξ of (3.4) on (0, T/2), and let us introduce the notation
$$I(s,\lambda;\phi) := \iint_Q e^{-2s\sigma}\left[(s\xi)^{-1}\big(|\phi_t|^2 + |\Delta\phi|^2\big) + \lambda^2 s\xi\,|\nabla\phi|^2 + \lambda^4(s\xi)^3|\phi|^2\right] dx\,dt,$$
where σ and ξ now denote the new weight functions.
One has the following:
Proposition 3.3. Let us assume that G 1 , G 2 ∈ L 2 (Q).
There exist positive constants λ 2 , s 2 such that, for any s ≥ s 2 and λ ≥ λ 2 , there exists a constant C 2 = C 2 (s, λ) > 0 with the following property: for any ψ 0 ∈ L 2 (Ω), the associated solution to (3.3) satisfies
$$I(s,\lambda;\varphi) + I(s,\lambda;\psi) \le C_2\left(\iint_Q e^{-2s\sigma}\big(\lambda^4(s\xi)^3|G_1|^2 + |G_2|^2\big)\,dx\,dt + \iint_{\omega\times(0,T)} e^{-2s\sigma}\lambda^8(s\xi)^7|\varphi|^2\,dx\,dt\right). \qquad (3.9)$$
Furthermore, λ 2 and s 2 only depend on Ω, ω, T, a(0), and |A|.
Proof. We can decompose all the integral in I(s, λ; ϕ) in the form
Q = Ω×(0,T /2) + Ω×(T /2,T ) .
Let us gather together all the integrals in Ω × (0, T /2) (resp. Ω × (T /2, T )) in I 1 (s, λ; ϕ) (resp. I 2 (s, λ; ϕ)). Then I(s, λ; ϕ) = I 1 (s, λ; ϕ) + I 2 (s, λ; ϕ) and a similar decomposition holds for I(s, λ; ψ).
From Carleman inequality in the Proposition 3.2, with s ≥ s 1 and λ ≥ λ 1 , we have
I 1 (s, λ; ϕ) + I 1 (s, λ; ψ) ≤ C(RS),(3.10)
where (RS) is the right side in (3.9). In order to prove that I 2 (s, λ; ϕ) + I 2 (s, λ; ψ) ≤ C(RS), let us define the function η ∈ C 2 ([0, T ]) such that
η(t) = 0, if t ∈ [0, T /4], 1, if t ∈ [T /2, T ]. Then, if (ϕ, ψ) is solution of (3.3), it is not difficult to see that (ηϕ, ηψ) satisfies the system −(ηϕ) t − a(0)∆(ηϕ) + A(ηϕ) = (ηψ)χ O + (ηG 1 ) − η t ϕ in Q, (ηψ) t − a(0)∆(ηψ) + A(ηψ) = (ηG 2 ) + η t ψ in Q, ηϕ = 0, ηψ = 0 on Σ, ηϕ(T ) = 0, ηψ(0) = 0 in Ω. (3.11)
The classical energy estimates for the equation (3.11) 2 , we have
Ω |ηψ| 2 dx + Q |η∇ψ| 2 dxdt ≤ C Q |ηG 2 | 2 dxdt + Q |η t ψ| 2 dxdt . (3.12)
Multiplying by ηϕ in the first equation in (3.11), integrating in Q
Q |ηϕ| 2 dxdt + Q |η∇ϕ| 2 dxdt ≤ C Q |ηG 1 | 2 dxdt + Q |η t ϕ| 2 dxdt + ǫ Q |ηψ| 2 dxdt.
For ǫ > 0 sufficiently small and from this last inequality join with (3.12), we have
Q |η| 2 |ϕ| 2 + |ψ| 2 + |∇ϕ| 2 + |∇ψ| 2 dxdt ≤ C Q |η| 2 |G 1 | 2 + |G 2 | 2 dxdt + Q |η t | 2 [|ϕ| 2 + |ψ| 2 ] dxdt .
(3.13)
Proceeding similarly, we have
Q |ηϕ t | 2 + |ηψ t | 2 + |η∆ϕ| 2 + |η∆ψ| 2 dxdt ≤ C Q |η| 2 |G 1 | 2 + |G 2 | 2 dxdt + Q |η t | 2 |∆ϕ| 2 + |∆ψ| 2 + |ϕ t | 2 + |ψ t | 2 dxdt .
(3.14)
The equations (3.12)-(3.14) and the properties of η, we deduce that
I 2 (s, λ; ϕ) + I 2 (s, λ; ψ) ≤ C Ω×(T /4,T /2) |∆ϕ| 2 + |∆ψ| 2 + |ϕ t | 2 + |ψ t | 2 dxdt + Ω×(T /2,T ) |G 1 | 2 + |G 2 | 2 dxdt .
Consequently,
I 2 (s, λ; ϕ) + I 2 (s, λ; ψ) ≤ C(RS).
This, together with (3.10), allows us to obtain the inequality (3.9), which ends the proof.
Remark 3.2. If λ > 1 ||η 0 || ∞
is sufficiently large and denoting by
β(x, t) = 2 5 σ(x, t), β * (t) = max x∈Ω β(x, t) andβ(t) = min x∈Ω β(x, t), we haveβ (t) ≤ β(x, t) ≤ 5 4β (t) and 4 5 β * (t) ≤ β(x, t) ≤ β * (t).
Then, due to Proposition 3.3 we have
I * (s, λ; ϕ) + I * (s, λ; ψ) ≤ C Q e −4sβ * λ 4 (sξ) 3 |G 1 | 2 + |G 2 | 2 dxdt + ω×(0,T ) e −4sβ * λ 8 (sξ) 7 |ϕ| 2 dxdt ,(3.
15)
where
I * (s, λ; ϕ) = Q e −5sβ * (sξ) −1 (|φ t | 2 + |∆φ| 2 ) + λ 2 sξ * |∇φ| 2 + λ 4 (sξ * ) 3 |φ| 2 dxdt.
Remark 3.3.
The new weight functionsσ(x, t) andξ(x, t) will not be really necessary, because due to the exponential blow-up of the weight functions σ(x, t) and ξ(x, t) we will prove that h(x, 0) = y(x, T ) = 0, furthermore, h(x, T ) = y(x, 0) = 0, but this are the initial condition (in this case there is not a contradiction respect to the initial values), then it is possible to assume the usual weight functions σ(x, t) and ξ(x, t).
Remark 3.4. It will not be really necessary employ the weight functions σ * (t),σ(t), ξ * (t) andξ(t), we can apply the usual weight functions σ(x, t) and ξ(x, t) to conclude the desire result, but in this case, the estimates will be tedious and bigger, so, for simplicity we will take the maximum and minimum of this weight functions with respect to space variable.
Null controllability for the linear system (3.2)
Assume that the notations and hypothesis of Remark 3.2 holds. Then, in order to simplify the notation, we will denote by
ρ 0 := e 2sβ * ξ * −3/2 , ρ 1 := e 2sβ * ξ * −7/2 , ρ := e 5sβ * /2 , ρ 0 := e 3sβ * /2 ξ * −9/2 ,ρ k := e 3sβ * /2 ξ * (−15−2k)/2 , k ∈ N.
(3.16)
Thanks to Proposition 3.3, we will be able to prove the null controllability of (3.2) for right-hand f and g that decay sufficiently fast to zero as t → 0 + . Indeed, one has the following:
Proposition 3.4.
Assume the hypothesis of Proposition 3.3 and let g 1 , g 2 satisfy ρ 0 g 1 ρg 2 ∈ L 2 (Q). Then, the system
(3.2) is null-controllable at time t = 0. More precisely, there exists u ∈ L 2 (ω × (0, T )) with sup u ⊂ ω × [0, T ] such that, if (y, h) is the solution of (3.2), one has: a) Q ρ 2 0 |y| 2 + ρ 2 0 |h| 2 dxdt < +∞, ω×(0,T )ρ 2 1 |u| 2 dxdt < +∞, ω×(0,T )ρ 2 1 |u t | 2 dxdt < +∞, ω×(0,T )ρ 2 1 |∆u| 2 dxdt < +∞, (3.17) b) sup t∈[0,T ] ρ 2 0 (t) Ω |y(t)| 2 dx + sup t∈[0,T ] ρ 2 0 (t) Ω |h(t)| 2 dx + Qρ 2 0 (|∇y| 2 + |∇h| 2 ) dxdt ≤ C Q ρ 2 |y| 2 + ρ 2 0 |h| 2 dx dt + Q ρ 2 |g 1 | 2 + ρ 2 0 |g 2 | 2 dx dt + ω×(0,T ) ρ 2 1 |u| 2 dx dt , (3.18) c) sup t∈[0,T ] ρ 2 1 (t) Ω |∇y(t)| 2 dx + sup t∈[0,T ] ρ 2 1 (t) Ω |∇h(t)| 2 dx + Qρ 2 1 (|∆y| 2 + |∆h| 2 ) dxdt + Qρ 2 1 (|y t | 2 + |h t | 2 ) dx dt ≤ C Q ρ 2 |y| 2 + ρ 2 0 |h| 2 dx dt + ω×(0,T ) ρ 2 1 |u| 2 dx dt + Q ρ 2 |g 1 | 2 + ρ 2 0 |g 2 | 2 dx dt . (3.19)
Proof. Let us introduce the following constrained extremal problem:
inf 1 2 Q ρ 2 0 (|y| 2 + |h| 2 ) dxdt + ω×(0,T ) ρ 2 1 |u| 2 dxdt , subject to u ∈ L 2 (Q), supp u ⊂ ω × [0, T ] and y t − a(0)∆y + Ay = uχ ω + g 1 in Q, −h t − a(0)∆h + Ah = yχ O + g 2 in Q, y = 0, h = 0 on Σ, y(x, 0) = 0, h(x, T ) = 0 in Ω.
(3.20)
Assume that this problem admits a unique solution (ŷ,ĥ). Then, in virtue of the Lagrange's principle there exist dual variables (y, h) such that:
ŷ = ρ −2 0 ȳ t − a(0)∆ȳ + Aȳ −hχ O in Q, h = ρ −2 0 −h t − a(0)∆h + Ah in Q, u = −ρ −2 1ȳ in ω × (0, T ), y =ĥ = 0 on Σ. (3.21) Let χ ∈ C ∞ 0 (ω), 0 ≤ χ ≤ 1, with χ| ω0 = 1, we set P 0 = {(y, h) ∈ C ∞ (Q) 2 ; y = 0, h = 0 on Σ, y(0) = 0, h(T ) = 0}, and a (ȳ,h); (y, h) = Q ρ −2 0 (L * ȳ −hχ O )(L * y − hχ O ) dxdt + Q ρ −2 0LhL h dxdt + ω×(0,T ) χ 2 ρ −2 1ȳ y dxdt, ∀ (y, h) ∈ P 0 ,(3.22)
where L * y := −y t − a(0)∆y + Ay andLh := h t − a(0)∆h + Ah.
With the definition (3.22), one can see that, if the functionsŷ,ĥ solves (3.20), we must have
a (ȳ,h); (y, h) = G(y, h), ∀ (y, h) ∈ P 0 , (3.23) where G(y, h) = Q g 1 y dxdt + Q g 2 h dxdt. (3.24)
The main idea is to prove that there exists exactly one (ȳ,h) satisfying (3.23). Then we will define (ŷ,ĥ) using (3.22) and we will check that it fulfills the desired properties. Indeed, observe that the inequality (3.15) holds for (y, h) ∈ P 0 ,
Q ρ −2 0 |y| 2 dxdt + Q ρ −2 0 |h| 2 dxdt ≤ Ca ((y, h); (y, h)) ∀ (y, h) ∈ P 0 . (3.25)
In the linear space P 0 we consider the bilinear form a(·, ·) given by (3.22); from the unique continuation given in (3.9) we deduce that a(·, ·) is a scalar product in P 0 . Let us now consider the space P, given by the completion of P 0 for the norm associated to a(·, ·). This is a Hilbert space and a(·, ·) is a continuous and coercive bilinear form on P.
We turn to the linear operator G, given by (3.24) for all (y, h) ∈ P, a simple computation leads to
G(y, h) ≤ ||ρ 0 g 1 || L 2 (Q) ||ρ −1 0 y|| L 2 (Q) + ||ρ 0 g 2 || L 2 (Q) ||ρ −1 h|| L 2 (Q) .
Then, using (3.25) and the density of P in P 0 , we have
G(y, h) ≤ C ||ρ 0 g 1 || L 2 (Q) + ||ρ 0 g 2 || L 2 (Q) ||(y, h)|| P , ∀ (y, h) ∈ P.
Consequently G is a bounded linear operator on P. Then, in view of Lax-Milgram's Lemma, there exists one and only one (ȳ,h) satisfying
a (ȳ,h); (y, h) = G(y, h), ∀ (y, h) ∈ P, (ȳ,h) ∈ P. (3.26)
We finally get the existence of (ŷ,ĥ), just settinĝ
y = ρ −2 0 L * ȳ −hχ O ,ĥ = ρ −2 0Lh andû = χρ −2 1ȳ .
We see that (ŷ,ĥ) solves (3.2) and since that (ŷ,ĥ) ∈ P we have
Q ρ 2 0 |ŷ| 2 dxdt + Q ρ 2 0 |ĥ| 2 dxdt + ω×(0,T ) ρ 2 1 |û| 2 dxdt = a (ȳ,h); (ȳ,h) < ∞. (3.27)
Now, let us prove the items a), b) and c).
Proof of a):
First, let us denote the functions y * , h * , F * 1 and F * 2 the following way
y * =ρ 1 ρ −2 1ȳ , h * =ρ 1 ρ −2 1h , F * 1 = −ρρ −2 1 (L * ȳ −hχ O ) − (ρ 1 ρ −2 1 ) tȳ and F * 2 =ρ 1 ρ −2 1Lh + (ρ 1 ρ −2 1 ) th ,
and by the equation (3.3), we have that (y * , h * ) solves the following system
−y * t − a(0)∆y * + Ay * = h * χ O + F * 1 in Q, h * t − a(0)∆h * + Ah * = F * 2 in Q, y * = 0, h * = 0 on Σ, y * (x, T ) = 0, h * (x, 0) = 0 in Ω, (3.28) thanks to (3.27) we have ||F * 1 || 2 L 2 (Q) + ||F * 2 || 2 L 2 (Q) ≤ a (ȳ,h); (ȳ,h) .
Then, y * ∈ L 2 (0, T ; H 2 (Ω)) and y * t ∈ L 2 (0, T ; L 2 (Ω)).
From definition ofû, we haveρ 1û ∈ L 2 (0, T ; H 2 (Ω)),ρ 1ût ∈ L 2 (Q),
with ω×(0,T )ρ 2 1 |û t | 2 + |∆û| 2 dxdt ≤ C ω×(0,T ) ρ 2 1 |û| 2 dxdt.
Proof of b):
To this purpose, let multiplying the first PDE in (3.2) by byρ 2 y and let us integrate in Ω. We obtain:
Ωρ 2 0 y (y t − a(0)∆y + Ay) dx = Ωρ 2 0 (uχ ω + g 1 ) dx.
Notice that
1 2 Ωρ 2 0 |y| 2 dx + a(0) 2 Ωρ 2 0 |∇y| 2 dx ≤ C ωρ 2 0 |y| 2 +ρ 2 0 |g 1 | 2 dx + ωρ 2 1 |u| 2 dx , consequently, sup t∈[0,T ] ρ 2 0 (t) Ω |y(t)| 2 dx + Qρ 2 0 |∇y| 2 dxdt ≤ C Qρ 2 0 |y| 2 +ρ 2 0 |g 1 | 2 dxdt + ω×(0,T )ρ 2 1 |u| 2 dxdt .ρ 2 0 (t) Ω |h(t)| 2 dx + Qρ 2 0 |∇h| 2 dxdt ≤ C Qρ 2 0 |y| 2 + ρ 2 0 |h| 2 dxdt + Qρ 2 0 |g 2 | 2 dxdt .ρ 2 1 (t) Ω |∇y(t)| 2 dx + sup t∈[0,T ] ρ 2 1 (t) Ω |∇h(t)| 2 dx + Qρ 2 1 |y t | 2 + |h t | 2 dxdt ≤ C(RS),(3.
31) where (RS) represent the term in the right side of (3.19).
Continuing, let us multiplying the equations in (3.2) byρ 2 1 ∆y andρ 2 1 ∆h, after let us integrate in Ω and due to estimate (3.31) we get the following inequality
Some additional estimates of the state
The next results provide additional properties of the states found in Proposition 3.4. They will be needed below, in Section 4.
(|y tt | 2 + |∆y t | 2 + |∇y t | 2 ) dx dt + sup t∈[0,T ] ρ 2 2 (t) Ω |∆y(t)| 2 dx + sup t∈[0,T ] ρ 2 3 (t) Ω |∇y t (t)| 2 dx ≤ C Q (ρ 2 |g 1 | 2 + ρ 2 0 |g 2 | 2 +ρ 2 1 |g 1,t | 2 ) dx dt + ω×(0,T ) ρ 2 1 |u| 2 + ρ 2 1 |u t | 2 dx dt + Q ρ 2 0 |y| 2 + ρ 2 0 |h| 2 dx dt , (3.33) b) Ifρ 1 g 1,t ∈ L 2 (Q) andρ 4 g 1 ∈ L 2 (0, T ; H 2 (Ω) ∩ H 1 0 (Ω)), we have Qρ 2 4 (|∆ 2 y| 2 + |∇∆y| 2 ) dxdt + sup t∈[0,T ] ρ 2 5 (t) Ω |∇∆y(t)| 2 dx ≤ C Q (ρ 2 |g 1 | 2 + ρ 2 0 |g 2 | 2 ) dx dt + ω×(0,T ) ρ 2 1 |u| 2 + ρ 2 1 |u t | 2 dx dt + Qρ 2 1 |g 1,t | 2 +ρ 2 4 |∆g 1 | 2 dx dt + Q ρ 2 0 |y| 2 + ρ 2 0 |h| 2 dx dt . (3.34)
Proof. Due to Proposition 3.4, we have that there exists an state-control (y, h, u) of (3.2) satisfying (3.17), (3.18) and (3.19). So, now on we are going to prove that this solution y, h, u also satisfy the estimations a) and b).
Proof of a):
In order to prove the estimate (3.33), let us derivative the first equation in (3.2) with respect the time variable.
y tt − a(0)∆y t + Ay t = u t χ ω + g 1,t in Q. Therefore from the previous estimates (3.36)-(3.39), we conclude that (3.18) holds.
Proof of b):
To prove this item, let us apply the Laplacian operator ∆ in the system (3.2) and get the following new system: where (RS) 2 is the right side of (3.34).
ŷ t − a(0)∆ŷ + Aŷ = ∆(uχ ω ) + ∆g 1 in Q, y = 0 on Σ, y(x, 0) = 0 in Ω,
• Multiplying by −ρ 2 4 ∆ŷ, integrating in Q and using the previous estimates of a). We have In this subsection we prove the null controllability for the optimal system using Liusternik's theorem.
Theorem 4.1 (Liusternik's Theorem). Let E and F be Banach spaces and let A : B_r(0) ⊂ E → F be a C^1 mapping. Let us assume that the derivative A'(0) : E → F is onto, and set ξ_0 = A(0). Then, there exist ε > 0, a mapping W : B_ε(ξ_0) ⊂ F → E and a constant K > 0 satisfying
$$W(z) \in B_r(0) \ \text{ and } \ A(W(z)) = z, \qquad \forall\, z\in B_\epsilon(\xi_0),$$
$$\|W(z)\|_E \le K\,\|z - \xi_0\|_F, \qquad \forall\, z\in B_\epsilon(\xi_0).$$
The proof of this theorem can be found in [1]. Now, let us introduce the space E = { (y, h, u) : ρ_0 y, ρ_0 h, y_{x_i}, y_{x_i x_j} ∈ L²(Q); ρu, ρ̄_1 u_t, ρ̄_3 ∆u ∈ L²(ω × (0, T)),
ρ(L_1 y − uχ_ω), ρ̄_1(L_1 y_t − u_tχ_ω), ρ̄_3(L_1 ∆y − ∆(uχ_ω)), ρ(L_2 h − yχ_O) ∈ L²(Q), y|_Σ = 0, h|_Σ = 0, h(T) = 0 },
where L 1 and L 2 denote the following expressions:
L_1 y = y_t − a(0)∆y + Ay,  L_2 h = −h_t − a(0)∆h + Ah, and the norm in E is ||(y, h, u)||²_E = ||ρ_0 y||²_{L²(Q)} + ||ρ_0 h||²_{L²(Q)} + ||ρu||²_{L²(ω×(0,T))} + ||ρ(L_1 y − uχ_ω)||²_{L²(Q)} + ||ρ̄_1(L_1 y_t − u_tχ_ω)||²_{L²(Q)} + ||ρ̄_3(L_1 ∆y − ∆(uχ_ω))||²_{L²(Q)} + ||ρ(L_2 h − yχ_O)||²_{L²(Q)}.
It is clear that E is a Banach space for the norm · E .
Remark 4.1. Notice that, if (y, h, u) ∈ E, in view of Proposition 3.4 and 3.5, one has
||ρ 4 ∆y|| 2 L ∞ (0,T ;H 1 0 (Ω)) + ||ρ 1 h|| 2 L ∞ (0,T ;H 1 0 (Ω)) + ||ρ 4 y|| 2 X1 + ||ρ 1 h t || 2 X0 ≤ C||(y, h, u)|| 2 E .
Also, let us define the following space F 1 = {g : ρg,ρ 1 g t ∈ L 2 (Q);ρ 3 g ∈ L 2 (0, T ; H 2 (Ω) ∩ H 1 0 (Ω))}, F 2 = {g : ρg ∈ L 2 (Q)}, and denote by F = F 1 × F 2 , with the norm
||(g 1 , g 2 )|| 2 F := Q |ρg 1 | 2 + |ρ 1 g 1,t | 2 + |ρ 3 ∆g 1 | 2 dxdt + Q |ρg 2 | 2 dxdt.
It is clear that F is a Banach space with this norm. Now, let us define the mapping A : E → F, given by A = (A_1, A_2), where
A 1 (y, h, u) = y t − ∇ · (a(∇y)∇y) + f (y) − uχ ω , A 2 (y, h, u) = −h t − ∇ · [(D i a(∇y)∇y tr + a(∇y)) ∇h] + f ′ (y)h − yχ O . (4.1)
We will prove that there exists ǫ > 0 such that, if (g 1 , g 2 ) ∈ F and ||(g 1 , g 2 )|| F ≤ ǫ, then the equation
A(y, h, u) = (g 1 , g 2 ), (y, h, u) ∈ E, (4.2)
possesses at least one solution.
In particular, this shows that (3.1) is locally null controllable at time t = 0 and, furthermore, that the state-control triplets can be chosen in E. We will apply Theorem 4.1.
In order to show that Theorem 4.1 can be applied in this setting, we will use several lemmas. Proof (of Lemma 4.1). In order to prove that A is well defined, we will show that
i) A 1 (y, h, u) ∈ F 1 , ∀ (y, h, u) ∈ E. Let (y, h, u) ∈ E, we have Q ρ 2 |A 1 (y, h, u)| 2 dxdt = Q ρ 2 |y t − ∇ · (a(∇y)∇y) + f (y) − uχ ω | 2 dxdt ≤ C Q ρ 2 |L 1 y − uχ ω | 2 dxdt + C Q ρ 2 |y| 2 dxdt + C Q ρ 2 |∇ · [(a(∇y) − a(0))∇y] | 2 dxdt = I 1 + I 2 + I 3 .
From the Proposition 3.5 and Remark 4.1, we see that
I 1 ≤ C||(y, h, u)|| 2 E and I 2 ≤ C||(y, h, u)|| 2 E .
On the other hand, since a ∈ C 3 and is (globally) Lipschitz continuous, one has
I 3 ≤ C Q ρ 2 |a(∇y) − a(0)| 2 |∆y| 2 dxdt + C Q ρ 2 |D i a(∇y)| 2 |∇y| 2 |∆y| 2 dxdt ≤ C Q ρ 2 |∇y| 2 |∆y| 2 dxdt.
From the definition of ρ, one has ρ ≤ Cρ iρj and || · || L ∞ (Ω) ≤ C|| · || H 2 (Ω) . Then,
I 3 ≤ C Qρ 2 4 |∇∆y| 2 dxdt sup t∈[0,T ] (ρ 2 3 (t) Ω |∆y(t)| 2 dx) ≤ C||(y, h, u)|| 4 E .
For all these estimates, we have
Q ρ 2 |A 1 (y, h, u)| 2 dxdt ≤ C ||(y, h, u)|| 2 E + ||(y, h, u)|| 4 E . (4.3)
Continuing, let (y, h, u) ∈ E, we get
Qρ 2 1 |A 1t (y, h, u)| 2 dxdt = Qρ 2 1 |y tt − ∇ · (a(∇y)∇y) t + f ′ (y)y t − u t χ ω | 2 dxdt ≤ C Qρ 2 1 |L 1 y t − u t χ ω | 2 dxdt + C Qρ 2 1 |y t | 2 dxdt + C Qρ 2 1 |∇ · [(a(∇y)∇y) t − a(0)∇y t ] | 2 dxdt = J 1 + J 2 + J 3 .
From Proposition 3.5 and Remark 4.1, we have
J 1 ≤ C||(y, h, u)|| 2 E and J 2 ≤ C||(y, h, u)|| 2 E , ∀(y, h, u) ∈ E.
On the other hand
J 3 ≤ C Qρ 2 1 |a(∇y) − a(0)| 2 |∆y t | 2 dxdt + C Qρ 2 1 |D 2 ij a(∇y)| 2 |∇y t | 2 |∇y| 2 |∆y| 2 dxdt + C Qρ 2 1 |D i a(∇y)| 2 |∆y| 2 |∇y t | 2 dxdt + C Qρ 2 1 |D i a(∇y)| 2 |∆y t | 2 |∇y| 2 dxdt = J 1 3 + J 2 3 + J 3 3 + J 4 3 .
Now, let us bound each term J^i_3:
J 1 3 ≤ C Qρ 2 1 |∇y| 2 dxdt ≤ C sup t∈[0,T ] (ρ 2 1 (t) Ω |∇y(t)| 2 dx) Qρ 2 5 |∆y t | 2 dxdt ≤ C||(y, h, u)|| 4 E , J 2 3 ≤ C sup t∈[0,T ] (ρ 2 5 (t) Ω |∇∆y(t)| 2 dx) Qρ 2 3 |∇y t | 2 dxdt ≤ C||(y, h, u)|| 4 E , J 3 3 ≤ C Qρ 2 4 |∆ 2 y| 2 dxdt sup t∈[0,T ] (ρ 2 3 (t) Ω |∇y t (t)| 2 dx) ≤ C||(y, h, u)|| 4 E , J 4 3 ≤ C Qρ 2 3 |∆y t | 2 dxdt sup t∈[0,T ] (ρ 2 5 (t) Ω |∇∆y(t)| 2 dx)
≤ C||(y, h, u)|| 2 E . Combining the four estimates of J i 3 , we have
J 3 ≤ C||(y, h, u)|| 4 E , ∀(y, h, u) ∈ E.
and this concludes that
Qρ 2 1 |A 1,t (y, h, u)| 2 dxdt ≤ C ||(y, h, u)|| 2 E + ||(y, h, u)|| 4 E .
Now, let us prove that ρ̄_3 ∆A_1(y, h, u) ∈ L²(Q), ∀ (y, h, u) ∈ E. Indeed, we have
= K 1 + K 2 + K 3 .
From the definition of Banach space E and Proposition 3.5, we have
K 1 ≤ C||(y, h, u)|| 2 E and K 2 ≤ C||(y, h, u)|| 2 E .
Now, let us bound the term K 3 ,
K 3 ≤ C Qρ 2 3 | [(a(∇y) − a(0)) ∆y] xixj | 2 dxdt + C Qρ 2 3 | D i a(∇y)∇y tr ∆y xixj | 2 dxdt = K 1 3 + K 2 3 .
Let us denote by K̃¹_3 and K̃²_3 the expressions inside the absolute values in K¹_3 and K²_3 above, and let us expand each of them:
K̃¹_3 = (D_i a(∇y)∇y^{tr})_{x_j} ∆y_{x_i} + (a(∇y) − a(0)) ∆y_{x_i x_j} + D²_{ij}a(∇y)∆y ∇y^{tr}_{x_i} ∇y^{tr}_{x_j} + D_i a(∇y)∆y_{x_j} ∇y^{tr}_{x_i} + D_i a(∇y)∆y ∇y^{tr}_{x_i x_j},
and
K̃²_3 = D²_{ij}a(∇y)∇y^{tr}_{x_i} ∇y^{tr}_{x_j} ∆y + D²_{ij}a(∇y)∇y^{tr}_{x_i x_j} ∇y^{tr} ∆y + 2D²_{ij}a(∇y)∇y^{tr}_{x_i} ∇y^{tr}_{x_j} ∆y + D²_{ij}a(∇y)∇y^{tr}_{x_i} ∇y^{tr} ∆y_{x_j} + D_i a(∇y)∇y^{tr}_{x_i x_j} + D_i a(∇y)∇y^{tr}_{x_i} ∆y_{x_j} + D²_{ij}a(∇y)∇y^{tr} ∇y^{tr}_{x_j} ∆y_{x_i} + D_i a(∇y)∇y^{tr}_{x_j} ∆y_{x_i} + D_i a(∇y)∇y^{tr} ∆y_{x_i x_j}.
From this, we have that
K 1 3 = Qρ 2 3 |K 1 3 | 2 dxdt ≤ K 1 1,3 + · · · + K 1 5,3 , and K 2 3 = Qρ 2 3 |K 2 3 | 2 dxdt ≤ K 2 1,3 + · · · + K 2 9,3 .
Using the Proposition 3.5 and Remark 4.1, we have the following estimates
5 i=1 K 1 i,3 + 9 j=1 K 2 j,3 ≤ C ||(y, h, u)|| 2 E + ||(y, h, u)|| 4 E + ||(y, h, u)|| 6 E ,
and therefore,
K 3 ≤ C ||(y, h, u)|| 2 E + ||(y, h, u)|| 4 E + ||(y, h, u)|| 6 E .
From all this, we conclude that ρ̄_3 ∆A_1(y, h, u) ∈ L²(Q).
ii) A_2(y, h, u) ∈ F_2, ∀ (y, h, u) ∈ E.
To prove this, we use the same arguments as above:
Q ρ 2 |A 2 (y, h, u)| 2 dxdt ≤ Q ρ 2 |L 2 h − yχ ω | 2 dxdt + C Q ρ 2 |h| 2 dxdt + C Q ρ 2 |∇ · D i a(∇y)∇y tr + a(∇y) ∇h − a(0)∇h | 2 dxdt = L 1 + L 2 + L 3 .
As in the estimates of item i), using Proposition 3.5 and Remark 4.1 we get L_j ≤ C ( ||(y, h, u)||²_E + ||(y, h, u)||⁴_E + ||(y, h, u)||⁶_E ), j = 1, 2, 3.
Hence A_2(y, h, u) ∈ F_2 for all (y, h, u) ∈ E.
Therefore, A is well defined. Furthermore, using similar arguments, it is easy to check that A is continuous.
Lemma 4.2. The mapping
A : E → F is continuously differentiable.
Proof. Let us first prove that A is Gâteaux-differentiable at any (y, h, u) ∈ E and compute the G-derivative A′(y, h, u).
For this purpose let us introduce the linear mapping
DA : E → F, with DA(ỹ,h,ũ) = DA 1 (ỹ,h,ũ), DA 2 (ỹ,h,ũ) , DA 1 (ỹ,h,ũ) =ỹ t − ∇ · [D i a(∇y)∇ỹ tr ∇y − a(∇y)∇ỹ] −ũχ ω + f ′ (y)ỹ, DA 2 (ỹ,h,ũ) =h t − ∇ · D 2
ij a(∇y)∇ỹ tr ∇y tr + D i a(∇y)∇ỹ tr + D i a(∇y)∇ỹ tr ∇h + (D i a(∇y)∇y tr + a(∇y)) ∇h
+ f ′′ (y)ỹh + f ′ (y)h −ỹχ O .
To prove that DA is the G-derivative of A in (y, h, u), we will prove that
1 λ A i (y, h, u) + λ(ỹ,h,ũ) − A i (y, h, u) → DA i (ỹ,h,ũ),(4.4)
strongly in F i for i = 1, 2 as λ → 0. Indeed, we have
1 λ A 1 (y, h, u) + λ(ỹ,h,ũ) − A 1 (y, h, u) − DA 1 (ỹ,h,ũ)
= −∇ · a(∇y + λ∇ỹ) − a(∇y) λ − D i a(∇y)∇ỹ tr ∇y − ∇ · [(a(∇y + λ∇ỹ) − a(∇y)) ∇ỹ]
=J 1 +J 2 .
As in the proof of Lemma 4.1, we have
||J 1 || F1 → 0 as λ → 0 and ||J 2 || F1 → 0 as λ → 0.
Then (4.4) holds for i = 1.
Now, let us calculate the following term
1 λ A 2 (y, h, u) + λ(ỹ,h,ũ) − A 2 (y, h, u) − DA 2 (ỹ,h,ũ) = ∇ · D i a(∇y + λ∇ỹ) − D i a(∇y) λ − D 2
ij a(∇y)∇ỹ tr ∇y tr ∇h + ∇ · a(∇y + λ∇ỹ) − a(∇y) λ − D i a(∇y)∇ỹ tr ∇h + ∇ · (D i a(∇y + λ∇y) − D i a(∇y)) ∇ỹ tr ∇h + ∇ · (D i a(∇ + λ∇ỹ) − D i a(∇y)) ∇y tr ∇h + ∇ · (a(∇y + λ∇ỹ) − a(∇y)) ∇h + λ∇ · D i a(∇y + λ∇ỹ)∇ỹ tr ∇h
+ (f ′ (y + λỹ) − f ′ (y))h + f ′ (y + λỹ) − f ′ (y) λ − f ′′ (y)ỹ h =Ñ 1 + · · · +Ñ 8 .
Using the Proposition 3.5 and Remark 4.1, we have that
||Ñ j || F2 → 0 as λ → 0, for j = 1, ..., 8.
Then (4.4) holds for i = 2. Therefore, A is Gâteaux-differentiable. Now, let us check that A ∈ C¹(E; F) with A′(y, h, u) = D_G A(y, h, u), i.e. A′(y, h, u)(ỹ, h̃, ũ) = D_G A(y, h, u)(ỹ, h̃, ũ).
But this last equality is equivalent to prove that there exists ǫ n (y, h, u) such that
|| (D G A(y n , h n , u n ) − D G A(y, h, u)) (ỹ,h,ũ)|| 2 F ≤ ǫ n ||(ỹ,h,ũ)|| 2 E ,(4.5)
for all (ỹ, h̃, ũ) ∈ E with lim_{n→∞} ǫ_n = 0. Let us prove (4.5):
D_G A_1(y_n, h_n, u_n)(ỹ, h̃, ũ) − D_G A_1(y, h, u)(ỹ, h̃, ũ) = −∇ · [ D_i a(∇y)∇ỹ^{tr} (∇y_n − ∇y) ] − ∇ · [ (D_i a(∇y_n) − D_i a(∇y)) ∇ỹ^{tr} ∇y ] + ∇ · [ (a(∇y_n) − a(∇y)) ∇ỹ ] + (f′(y_n) − f′(y))ỹ = Õ_{1,n} + Õ_{2,n} + Õ_{3,n} + Õ_{4,n}.
By properties of the functions a(·), f (·), Proposition 3.5 and Remark 4.1, we have ||Õ j,n || F1 ≤ ǫ 1 j,n ||(ỹ,h,ũ)|| E , with lim n→∞ ǫ 1 j,n = 0, j = 1, .., 4.
All this implies that
|| (D G A 1 (y n , h n , u n ) − D G A 1 (y, h, u)) (ỹ,h,ũ)|| F1 ≤ 4 j=1 ǫ 1 j,n ||(ỹ,h,ũ)|| E . (4.6)
Continuing, let us calculate a similar expression for A 2 ,
(D G A 2 (y n , h n , u n ) − D G A 2 (y, h, u)) (ỹ,h,ũ) = −∇ · D 2 ij a(∇y n ) − D 2
ij a(∇y) ∇ỹ tr (∇y n ) tr ∇h n − ∇ · D 2 ij a(∇y)∇ỹ tr (∇y n − ∇y) tr ∇h n − ∇ · [2 (D i a(∇y n ) − D i a(∇y)) ∇ỹ tr ∇h n ] − ∇ · D 2 ij a(∇y)∇ỹ tr ∇y tr + 2D i a(∇y)∇ỹ tr (∇h n − ∇h) − ∇ · (D i a(∇y n ) − D i a(∇y)) (∇y n ) tr ∇h − ∇ · D i a(∇y) (∇y n − ∇y) tr ∇h − ∇ · (a(∇y n ) − a(∇y)) ∇h
+ (f ′′ (y n ) − f ′′ (y))ỹh n + f ′′ (y)ỹ (h n − h) + (f ′ (y n ) − f ′ (y))h =P 1,n + · · · +P 11,n .
Thanks to Proposition 3.5 and Remark 4.1, we have ||P j,n || F2 ≤ ǫ 2 j,n ||(ỹ,h,ũ)|| E , j = 1, ..., 11 with lim n→∞ ǫ 2 j,n = 0.
Then
|| (D G A 2 (y n , h n , u n ) − D G A 2 (y, h, u)) (ỹ,h,ũ)|| F2 ≤ 11 j=1 ǫ 2 j,n ||(ỹ,h,ũ)|| E . (4.7)
From (4.6)–(4.7), we see that (4.5) holds. This ends the proof.
Proof (of Lemma 4.3). Let us fix (g_1, g_2) ∈ F. From Proposition 3.4, we know that there exists (y, h, u) satisfying (3.2), (3.17), (3.18) and (3.19). On the other hand, from Proposition 3.5, y satisfies (3.33) and (3.34). Therefore the triplet (y, h, u) belongs to the space E and, consequently, A′(0, 0, 0)(y, h, u) = (g_1, g_2).
This ends the proof.
In accordance with Lemmas 4.1, 4.2 and 4.3, we can apply Liusternik's Theorem (Theorem 4.1) and deduce that there exist ǫ > 0 and a mapping W : B_ǫ(0) ⊂ F → E such that W(g_1, g_2) ∈ B_r(0) and A(W(g_1, g_2)) = (g_1, g_2), ∀(g_1, g_2) ∈ B_ǫ(0).
Taking (ξ, 0) ∈ B_ǫ(0) and (y, h, u) = W(ξ, 0) ∈ E, we have A(y, h, u) = (ξ, 0); thus we prove that (2.1) is locally null controllable at time t = 0.
Proof of Theorem 2.1
From the local null controllability of system (3.1), there exist δ > 0 and M̄ such that, if ||e^{M̄t}ξ||_{X_0} < δ, then one can find a control u with supp u ⊂ ω × [0, T] such that the associated state (y, h) satisfies h(x, 0) = 0 in Ω. Thanks to Proposition 3.1, this implies that the control u insensitizes the functional Φ in the sense of Definition 2.1.
This ends the proof.
Some Additional Comments and Questions
Insensitizing controls for a quasi-linear parabolic equation with diffusion depending on gradient of the state in one dimension
When N = 1, we have that the system (2.1) can be rewritten the following way
y t − (a(y x )y x ) x + f (y) = ξ + uχ ω in I × (0, T ), y(0, t) = 0, y(1, t) = 0 on (0, T ), y(x, 0) = y 0 (x) + τŷ 0 (x) in I,(5.1)
here, I = (0, 1), a ∈ C 2 (R) and f ∈ C 2 (R) with f (0) = 0 satisfying
a 0 ≤ a(r) ≤ a 1 , |a ′ (r)| + |a ′′ (r)| + |f ′ (r)| + f ′′ (r)| ≤ M, ∀ r ∈ R. (5.2)
We have the following result: ≤ δ, one can find a control function u ∈ H 1 (0, T ; L 2 (ω)), which insensitizes the functional Φ.
The proof can be argued as in Section 3. Indeed, the insensitizing problem is equivalent to null controllability for the following system
y t − (a(y x )y x ) x + f (y) = ξ + uχ ω in I × (0, T ), −h t − [(a ′ (y x )y x ) h x + a(y x )h x ] x + f ′ (y)h = yχ O in I × (0, T ), y(0, t) = y(1, t) = 0, h(0, t) = h(1, t) = 0 on (0, T ),
y(x, 0) = 0, h(x, T ) = 0 in I, (5.3) and to guarantee that the system (5.3) be null controllable (for instance, see [26]), we consider the linearized system
y t − a(0)y xx + Ay = uχ ω + F 1 in I × (0, T ), −h t − a(0)h xx + Ah = yχ O + F 2 in I × (0, T ), y(0, t) = y(1, t) = 0, h(0, t) = h(1, t) = 0 on (0, T ),
y(x, 0) = 0, h(x, T ) = 0 in I.
(5.4)
Arguing as in Section 4, we can establish the null controllability of (5.4) with a state-control (y, h, u) satisfying (3.18) and (3.33). Then, we can introduce the Banach space Ẽ = { (y, h, u) : ρy, ρh, y_x, y_xx ∈ L²(Q); ρu, ρ̄_1 u_t ∈ L²(ω × (0, T)),
ρ(L_1 y − uχ_ω), ρ̄_1(L_1 y_t − u_tχ_ω), ρ(L_2 h − yχ_O) ∈ L²(Q),
y|_Σ = 0, h|_Σ = 0, h(T) = 0 }, and the norm in Ẽ is
||(y, h, u)|| 2 E = ||ρy|| 2 L 2 (Q) + ||ρh|| 2 L 2 (Q) + ||ρu|| 2 L 2 (Q) + ||ρ (L 1 y − uχ ω ) || 2 L 2 (Q) + ||ρ (L 1 y t − u t χ ω ) || 2 L 2 (Q) + ||ρ (L 2 h − yχ O ) || 2 L 2 (Q) .
Also, let us define the following Banach spaces
F̃_1 = {w : ρw, ρ̄_1 w_t ∈ L²(Q)}, F̃_2 = {w : ρw ∈ L²(Q)}, F̃ = F̃_1 × F̃_2, with the norm ||(f, g)||²_{F̃} = ||ρf||²_{L²(Q)} + ||ρ̄_1 f_t||²_{L²(Q)} + ||ρg||²_{L²(Q)},
and the nonlinear mapping Ã : Ẽ → F̃, with Ã(y, h, u) = (Ã_1(y, h, u), Ã_2(y, h, u)), where
Ã_1(y, h, u) = y_t − (a(y_x)y_x)_x + f(y) − uχ_ω,  Ã_2(y, h, u) = −h_t − [(a′(y_x)y_x + a(y_x))h_x]_x + f′(y)h − yχ_O. (5.5)
Numerical Results
The strategy we have followed in the proof of Theorem 2.1 opens the possibility of solving numerically the insensitizing control problem of Φ for (2.1). Indeed, it is completely natural to introduce algorithms of quasi-Newton type to compute (an approximation in time and space of) a solution to the null controllability problem for (3.1), and thereby to solve the insensitizing control problem of Φ for (2.1).
These ideas have already been applied in [25] and [5] (among other references) in the context of the controllability of other equations and systems.
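As a rough illustration of this numerical strategy, the following is a minimal sketch of our own (it does not reproduce the constructive controls of Sections 3–4 nor the schemes of [25] or [5]): the one-dimensional coupled system (5.3) is discretized by explicit finite differences, with the illustrative choices a(r) = 1 + 0.25 tanh²r and f(y) = sin y (which satisfy (5.2)), and the target h(·, 0) = 0 is enforced only approximately through a penalized least-squares functional minimized with SciPy's quasi-Newton routine L-BFGS-B. The grid sizes, the regions ω and O (chosen so that ω ∩ O ≠ ∅) and the penalty α are arbitrary assumptions.

import numpy as np
from scipy.optimize import minimize

# coarse grid; the explicit scheme requires dt <= dx^2 / (2 * max a)
N, M, T = 11, 50, 0.15
x = np.linspace(0.0, 1.0, N); dx = x[1] - x[0]
dt = T / M
a  = lambda r: 1.0 + 0.25 * np.tanh(r)**2          # a0 = 1 <= a <= a1 = 1.25, Lipschitz
da = lambda r: 0.5 * np.tanh(r) / np.cosh(r)**2    # a'(r)
f  = lambda y: np.sin(y)                           # f(0) = 0, f', f'' bounded
fp = lambda y: np.cos(y)
omega = (x > 0.15) & (x < 0.55)                    # control region (intersects O)
obs   = ((x > 0.45) & (x < 0.85)).astype(float)    # observation region O
xi = 1e-2 * np.ones(N)                             # small source xi
nw = int(omega.sum())

def forward(u):
    """Explicit Euler for y_t - (a(y_x) y_x)_x + f(y) = xi + u*1_omega, y(0,t)=y(1,t)=0, y(.,0)=0."""
    Y = np.zeros((M + 1, N))
    for n in range(M):
        y = Y[n]
        yx = np.diff(y) / dx                       # gradients at half points
        flux = a(yx) * yx
        div = np.zeros(N); div[1:-1] = np.diff(flux) / dx
        src = np.zeros(N); src[omega] = u[n]
        Y[n + 1] = y + dt * (div - f(y) + xi + src)
        Y[n + 1, 0] = Y[n + 1, -1] = 0.0
    return Y

def backward(Y):
    """Backward-in-time Euler for -h_t - ((a'(y_x)y_x + a(y_x)) h_x)_x + f'(y) h = y*1_O, h(.,T)=0."""
    H = np.zeros((M + 1, N))
    for n in range(M, 0, -1):
        h, y = H[n], Y[n]
        yx = np.diff(y) / dx
        b = da(yx) * yx + a(yx)                    # coefficient a'(y_x)y_x + a(y_x)
        hx = np.diff(h) / dx
        div = np.zeros(N); div[1:-1] = np.diff(b * hx) / dx
        H[n - 1] = h + dt * (div - fp(y) * h + y * obs)
        H[n - 1, 0] = H[n - 1, -1] = 0.0
    return H

alpha = 1e-4
def J(uvec):
    """Penalized objective: 0.5*||h(.,0)||^2 + 0.5*alpha*||u||^2 (a relaxation of h(.,0)=0)."""
    u = uvec.reshape(M, nw)
    H = backward(forward(u))
    return 0.5 * dx * np.sum(H[0]**2) + 0.5 * alpha * dt * dx * np.sum(u**2)

res = minimize(J, np.zeros(M * nw), method="L-BFGS-B", options={"maxiter": 25})
print("J before:", J(np.zeros(M * nw)), " after quasi-Newton iterations:", res.fun)

A genuine implementation would replace the finite-difference gradient used here by an adjoint computation and drive the penalty to zero, but the sketch already displays the quasi-Newton outer loop suggested above.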
6 Appendix: Well-posedness for the system (2.1)
Consider the following system:
y t − ∇ · (a(∇y)∇y) + f (y) = g in Q, y(x, t) = 0 on Σ,
y(x, 0) = y 0 in Ω.
(6.1)
In order to prove the existence of a solution for the system (2.1), we will first study the well-posedness of system (6.1). We have the following.
Lemma 6.1. There exists r > 0 such that, for each y_0 ∈ H³(Ω) ∩ H¹_0(Ω) with ∆y_0 ∈ H¹_0(Ω) and g ∈ X_0 satisfying
||y 0 || H 3 (Ω) + ||g|| X0 ≤ r,
the problem (6.1) has a unique solution y in X 1 satisfying
||y|| X1 ≤ C ||y 0 || H 3 (Ω) + ||g|| X0 , (6.2)
where C is a constant only depending on Ω, T, M, a 0 and a 1 .
Proof. We employ the Faedo-Galerkin method with the Hilbert basis of H¹_0(Ω) given by the eigenvectors (w_j) of the spectral problem ((w_j, v)) = λ_j(w_j, v) for all v ∈ V = H²(Ω) ∩ H¹_0(Ω), j = 1, 2, 3, ... We denote by V_m the subspace of V generated by {w_1, w_2, ..., w_m}, and propose the following approximate problem:
(y′_m, v) + (a(∇y_m)∇y_m, ∇v) + (f(y_m), v) = (g, v) ∀ v ∈ V_m,  y_m(0) = y_{0m} → y_0 in H³(Ω) ∩ H¹_0(Ω). (6.3)
The existence and uniqueness of a (local in time) solution to (6.3) is ensured by classical ODE theory. The following estimates show that, in fact, the solutions are defined for all t. We can get uniform estimates on y_m in the usual way. Estimate I: Taking v = y_m(t) in (6.3), we deduce that
1 2 d dt ||y m || 2 + a 0 2 ||∇y m || 2 ≤C 1 ||y m || 2 + ||g|| 2 ,(6.4)
and ||y m || 2 + ||∇y m || 2 L 2 (Q) ≤C 2 ||y 0 || 2 + ||g|| 2 L 2 (Q) . (6.5)
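For completeness, the passage from (6.4) to (6.5) is the usual Gronwall argument; a sketch of the step, with the constants as assumed above, reads

\[
\frac{d}{dt}\|y_m\|^2 \le 2\widetilde C_1\bigl(\|y_m\|^2+\|g\|^2\bigr)
\;\Longrightarrow\;
\|y_m(t)\|^2 \le e^{2\widetilde C_1 T}\Bigl(\|y_{0m}\|^2+2\widetilde C_1\|g\|^2_{L^2(Q)}\Bigr),\qquad t\in[0,T],
\]

by Gronwall's lemma, after dropping the nonnegative gradient term in (6.4); integrating (6.4) over (0, T) then bounds a_0 ||∇y_m||²_{L²(Q)} and gives (6.5).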
In the sequel, C̃_k (k = 1, ..., 17) denotes a constant that only depends on a_0, a_1, M and Ω.
Notice that the right-hand side of (6.5) does not depend on m; thanks to this, we can extend the solution (y_m) to the whole interval [0, T].
Estimate II:
Taking v = −∆y_m(t) in (6.3), we obtain (6.6) below. Notice that, due to the properties of a, we have ∆y_m ∈ H¹_0(Ω). Therefore, we can work with the following equation: (∆y′_m, v) + (a(∇y_m)∇∆y_m + ∆(a(∇y_m))∇y_m, ∇v) + (2∇(a(∇y_m))∆y_m, ∇v) + (∆(f(y_m)), v) = (∆g, v), ∀ v ∈ V, ∆y_m(0) = ∆y_{0m} → ∆y_0 in H¹_0(Ω).
(6.9)
Continuing with the appropriate estimates for y_m. Estimate V: Taking v = ∆y_m(t) in (6.9), we have (6.10), where the constants appearing below are C̃_9 = a_0/(16C̃_6), C̃_10 = a_0/(16C̃_7), C̃_11 = a_0/(16C̃_8), C̃_12 = 2(3C̃_4C̃_5 + C̃_3)²/a_0, C̃_13 = 2(C̃_5 + a_0/16)²/a_0,
C̃_14 = C̃_1 + C̃_3 + C̃_4 + C̃_5 + 3C̃_4C̃_5 C(Ω) + a_0/8, C̃_15 = C̃_1 + C̃_3, C̃_16 = a_0/16 + C̃_5.
Also, from (6.3) taking v = −∆y m (t). We have ||∇y ′ m (0)|| 2 ≤C 17 ||∇∆y 0 || 2 + ||∇g(0)|| 2 .
There exists ǫ_0 > 0 such that, for ||y_0||_{H³(Ω)} + ||g||_{X_0} < ǫ_0,
we have ||∇∆y_0||⁴ < a_0 / [8(C̃_6 + C̃_7 + C̃_12 + C̃_13)], (6.14)
and
(C̃_9 + 3C̃_5 a_1)||∆y_0||² + C̃_10||∇∆y_0||² + (1 + a_1 C̃_11 C̃_16)( ||∇∆y_0||² + ||∇g(0)||² ) + ||y_0||² + ||∇y_0||² + 2C̃_15 T( ||y_0||² + ||g||² ) + 2C̃_13||∆g||² + 2C̃_14||g′||²_{L²(Q)} < C̃_10 a_0^{1/2} / [4(C̃_6 + C̃_7 + C̃_12 + C̃_13)^{1/2}]. (6.15)
Thanks to these inequalities we can prove, by a contradiction argument, that the following estimate holds: ||∇∆y_m(t)||⁴ < a_0 / [8(C̃_6 + C̃_7 + C̃_12 + C̃_13)], ∀ t ≥ 0. Now, from the estimate (6.13) we obtain:
(y_m) is bounded in L^∞(0, T; H³(Ω)) ∩ L²(0, T; H⁴(Ω)), (y′_m) is bounded in L^∞(0, T; H¹_0(Ω)) ∩ L²(0, T; H²(Ω)), and (y′′_m) is bounded in L²(0, T; L²(Ω)). All these uniform bounds allow us to take limits in (6.3) (at least for a subsequence) as m → ∞. Indeed, the only delicate point is the a.e. convergence of a(∇y_m). But this is a consequence of the fact that the sequence (y_m) is pre-compact in L²(0, T; H²(Ω)) and a ∈ C³(R^N).
The uniqueness of the strong solution to (6.1) can be proved by standard arguments (see [27]).
A consequence of Lemma 6.1 is the following result, which guarantees that system (2.1) is well-posed: Lemma 6.2. There exists r > 0 such that, for each y_0, ŷ_0 ∈ H³(Ω) ∩ H¹_0(Ω) with ∆y_0, ∆ŷ_0 ∈ H¹_0(Ω) and ξ, u ∈ X_0 with supp u ⊂ ω × (0, T) satisfying ||y_0 + τŷ_0||_{H³(Ω)} + ||ξ||_{X_0} + ||u||_{X_0} ≤ r, the problem (2.1) has a unique solution y in X_1 satisfying ||y||_{X_1} ≤ C ( ||y_0 + τŷ_0||_{H³(Ω)} + ||ξ||_{X_0} + ||u||_{X_0} ).
(6.16)
Proof. Applying Lemma 6.1 (with right-hand side ξ + uχ_ω) and using the estimate (6.2), we see that (6.16) holds.
* Departamento de Matemática, Universidad Nacional Agraria La Molina, Lima, Perú. † IME, Federal University of Rio de Janeiro, RJ, Brazil, [email protected]. ‡ ICET, Depart. of Math., Federal University of Mato Grosso, MT, Brazil, [email protected].
Theorem 2.1. Assume that ω ∩ O ≠ ∅ and y_0 = 0. Then, there exist two positive constants M̄ and δ, depending only on N, Ω, T, M, a_0 and a_1, such that for any ξ ∈ X_0 satisfying
h(x, 0) = 0 in Ω, then u insensitizes the functional Φ (defined by (1.
Furthermore, C_1 and λ_1 only depend on Ω and ω, and s_1 can be chosen of the form s_1 = C(T + T²), where C only depends on Ω, ω, a(0), and |A|.
after multiplying the second equation in (3.2) by ρ̄_0²h and integrating in Q; this yields (3.30). From (3.29)–(3.30) we deduce that (3.18) holds.
Proof of c): Proceeding as before, we multiply the first and the second equations in (3.2) by ρ̄_1² y_t and ρ̄_1² h_t respectively, integrate in Ω and use the properties of the weights ρ̄_1, ρ̄_{1,t} as above; this gives (3.31) and, next,
∫∫_Q ρ̄_1² ( |∆y|² + |∆h|² ) dx dt ≤ C(RS). (3.32)
From the estimates (3.29)–(3.32), the inequality (3.19) holds.
Proposition 3.5. Let the hypotheses of Proposition 3.4 be satisfied and let the state-control (y, h, u) of (3.2) satisfy (3.17). Then: a) If ρ̄_1 g_{1,t} ∈ L²(Q),
In the following, let us prove estimates similar to those of Proposition 3.4.
• Multiplying (3.35) by ρ² y_t, integrating in Q and using Proposition 3.4, we get ∫∫_Q ρ²|y_t|² dx dt ≤ C(RS)_1, (3.36) where (RS)_1 is the right-hand side of (3.33).
• Multiplying the first equation in (3.2) by ρ̄_2² ∆y and integrating in Q, we get (3.37).
• Multiplying (3.35) by ρ̄_3² y_tt and integrating in Q, we get ∫∫_Q ρ̄_3²|y_tt|² dx dt ≤ C(RS)_1. (3.38)
• Multiplying (3.35) by −ρ̄_3² ∆y_t and integrating in Q, we get ∫∫_Q ρ̄_3²|∆y_t|² dx dt ≤ C(RS)_1. (3.39)
For part b), we apply ∆ to the first equation of (3.2) and proceed, in a way similar to part a), as follows:
• Multiplying by ρ̄_4² ŷ, integrating in Q and using the previous estimates of a), we have ∫∫_Q ρ̄_4²|∇ŷ|² dx dt ≤ C(RS)_2. (3.41)
• Multiplying by −ρ̄_4² ∆ŷ and integrating in Q, we have ∫∫_Q ρ̄_4²|∆ŷ|² dx dt ≤ C(RS)_2. (3.42)
• Multiplying by ρ̄_5² ŷ_t, integrating in Q and using the previous estimates of a) together with (3.41)–(3.42), we have sup_{t∈[0,T]} ρ̄_5²(t) ∫_Ω |∇ŷ(t)|² dx ≤ C(RS)_2. (3.43)
Therefore, by the estimates (3.41)–(3.43), item b) holds. Thus, we conclude the proof.
4 Proof of the main result
4.1 Locally null controllability of optimal system (3.1)
Lemma 4.1. Let A : E → F be the mapping defined by (4.1). Then A is well defined and continuous.
∫∫_Q ρ̄_3²|∆A_1(y, h, u)|² dx dt = ∫∫_Q ρ̄_3²|∆y_t − ∆∇·(a(∇y)∇y) + ∆(f(y)) − ∆(uχ_ω)|² dx dt ≤ C ∫∫_Q ρ̄_3²|L_1∆y − ∆(uχ_ω)|² dx dt + C ∫∫_Q ρ̄_3²|∆(f(y))|² dx dt + C ∫∫_Q ρ̄_3²|∆[∇·(a(∇y)∇y) − a(0)∆y]|² dx dt,
with K̃¹_3 = [(a(∇y) − a(0))∆y]_{x_i x_j} and K̃²_3 = [D_i a(∇y)∇y^{tr} ∆y]_{x_i x_j}.
Lemma 4.3. Let A be the mapping defined by (4.1). Then A′(0, 0, 0) is onto.
Theorem 5.1. Assume ω ∩ O ≠ ∅ and y_0 = 0. Then, there exist two positive constants M̄ and δ, depending only on I, T, M, a_0 and a_1, such that for any ξ ∈ H¹(0, T; L²(I)) satisfying ||e^{M̄t}ξ||_{H¹(0,T;L²(I))}
Then we can show that Lemmas 4.1, 4.2 and 4.3 hold; this follows from arguments similar to those of Section 4, thanks to the embedding H¹(I) ֒→ L^∞(I). Therefore, Ã is C¹ with Ã′(0, 0, 0) onto. In particular, the equation Ã(y, h, u) = (ξ, 0), (y, h, u) ∈ Ẽ, is solvable and (5.3) is locally null controllable.
Remark 5.1. In the one-dimensional case, we can prove the same result as in Theorem 2.1 assuming less regularity on the initial datum ξ. This is true because, when N = 1, we have H¹(I) ֒→ L^∞(I), which makes it easy to obtain the inequality ||[(a(y_x) − a(0))y_x]_x||_{F̃_1} ≤ C||(y, h, u)||_{Ẽ}, (5.6) where C is a positive constant, and Ã satisfies the hypotheses of Liusternik's Theorem. The estimate (5.6) is very important in the proof of Theorem 5.1.
5.2 Insensitizing controls for the system (2.1) when N ≥ 4
When N ≥ 4, applying the same techniques we would need to establish the estimate ||∇·[(a(∇y) − a(0))∇y]||_{F_1} ≤ C||(y, h, u)||_E, (5.7) in order for the mapping A to be well defined and to obtain the null controllability of system (2.1); but this is very difficult, because in the proof of Theorem 2.1 we use the embedding H²(Ω) ֒→ L^∞(Ω), which is only valid when N = 1, 2 or 3. So this question is open.
(1/2)(d/dt)||∇y_m||² + (a_0/2)||∆y_m||² ≤ C̃_3( ||y_m||² + ||∆y_m||²||∇∆y_m||² + ||g||² ). (6.6)
Estimate III: Taking v = −∆y′_m(t) in (6.3), we have
(1/2)||∇y′_m||² + (1/2)(d/dt)∫_Ω a(∇y_m)|∆y_m|² dx ≤ C̃_4( ||∆y_m||²||∇∆y_m||² + ||g||² ) + ǫ( ||∆y_m||² + ||∆y′_m||² ) + C̃_4||y_m||². (6.7)
Estimate IV: Differentiating equation (6.3)_1 with respect to time t and taking v = −∆y′_m(t), we have
(1/2)(d/dt)||∇y′_m||² + (a_0/2)||∆y′_m||² ≤ C̃_5( ||∆y′_m||²||∇∆y_m||² + ||∇y′_m||² + ||g′||² ). (6.8)
Estimate V (taking v = ∆y_m(t) in (6.9)):
(1/2)(d/dt)||∆y_m||² + (a_0/2)||∇∆y_m||² ≤ C̃_6( ||∇y_m||² + ||∆g||² + ||∆y_m||⁴||∇∆y_m||² ). (6.10)
Estimate VI: Taking v = ∆²y_m(t) in (6.9), we have
(1/2)(d/dt)||∇∆y_m||² + (a_0/2)||∆²y_m||² ≤ C̃_7( ||∆y_m||² + ||∆g||² + ||∆y_m||⁴||∆²y_m||² ). (6.11)
Estimate VII: Differentiating (6.9) with respect to t and taking v = y′′_m(t), we have
||y′′_m||² + (1/2)(d/dt)∫_Ω a(∇y_m)|∇y′_m|² dx ≤ C̃_8( ||∆y′_m||² + ||g′||² + ||∆y′_m||²||∇∆y_m||² ). (6.12)
From all these estimates, taking ǫ = a_0/(16C̃_5) in (6.7), we obtain the differential inequality (6.13): the time derivative of
(1/2)[ C̃_9||∆y_m||² + C̃_10||∇∆y_m||² + C̃_11∫_Ω a(∇y_m)|∇y′_m|² dx + 3C̃_5∫_Ω a(∇y_m)|∆y_m|² dx + ||y_m||² + ||∇y_m||² + ||∇y′_m||² ],
plus nonnegative terms of the form ( a_0/8 − C̃_j||∇∆y_m||⁴ ) multiplying ||∇∆y_m||², ||∆²y_m||², ||∆y_m||² and ||∆y′_m||², plus C̃_11||y′′_m||², is bounded by
C̃_13||∆g||² + C̃_14||g′||² + C̃_15( ||y_0||² + ||g||²_{L²(Q)} ). (6.13)
. V M Alekseev, V M Tikhomorov, S V Formin, Optimal Control. Consultants BureauV. M. Alekseev, V. M. Tikhomorov, S. V. Formin, Optimal Control, Consultants Bureau, New York, 1987.
Local exact controllability of the diffusion equation in one dimensional. M Beceanu, Abstract and Applied Analysis. 14M. Beceanu, Local exact controllability of the diffusion equation in one dimensional, Abstract and Applied Analysis, vol 2003 (14), p. 793-811, 2003.
Controls insensitizing the norm of the solution of a semilinear heat-equation. O Bodart, C Fabre, J. Math. Anal. Appl. 1953O. Bodart, C. Fabre, Controls insensitizing the norm of the solution of a semilinear heat-equation, J. Math. Anal. Appl., vol 195 (3), p. 658-683, 1995.
Existence of insensitizing controls for a semilinear heat equation with a superlinear nonlinearity. O Bodart, M González-Burgos, R Pérez-Garcia, Commun. Partial. Differ. Equ., vol. 29O. Bodart, M. González-Burgos, R. Pérez-Garcia, Existence of insensitizing controls for a semilinear heat equation with a superlinear nonlinearity, Commun. Partial. Differ. Equ., vol 29 (7-9), p. 1017-1050, 2004.
Insensitizing controls for a semilinear parabolic equation: a numerical approach. F Boyer, V Hernández-Santamaría, L De Teresa, Math. Control Relat. Fields. 91F. Boyer, V. Hernández-Santamaría, L. de Teresa, Insensitizing controls for a semilinear parabolic equation: a numerical approach, Math. Control Relat. Fields, 9, no. 1, 117-158, 2019.
. H R Clark, E Fernández-Cara, J Límaco, L A Medeiros, Applied Mathematics and Computation. 223H. R. Clark, E. Fernández-Cara, J. Límaco, L. A. Medeiros, Theoretical and numerical local null controllability for a parabolic system with local and nonlocal nonlinearities, Applied Mathematics and Computation, vol 223, p. 483-505, 2013.
Insensitizing controls for a semilinear heat equation. L. De Teresa, Commun. Partial Differ. Equ. 251-2L. De Teresa, Insensitizing controls for a semilinear heat equation, Commun. Partial Differ. Equ., vol 25 (1-2), p. 39-72, 2000.
Identification of the class of initial data for the insensitizing control of the heat equation. L De Teresa, E Zuazua, Commun. Pure Appl. Anal. 81L. De Teresa, E. Zuazua, Identification of the class of initial data for the insensitizing control of the heat equation, Commun. Pure Appl. Anal., vol 8 (1), p. 457-471, 2009.
Approximate controllability of the semilinear heat equation. C Fabre, J.-P Puel, E Zuazua, Proc. Soc. Edinb. Sect. 125C. Fabre, J.-P. Puel, E. Zuazua, Approximate controllability of the semilinear heat equation, Proc. Soc. Edinb. Sect., vol 125A, p. 31-65, 1995.
Insensitizing controls for a large scale Ocean circulation model. E Fernández-Cara, C Galina, A Osses, C. R. Math. Acad. Sci. 3374E Fernández-Cara, C. Galina, A. Osses, Insensitizing controls for a large scale Ocean circulation model, C. R. Math. Acad. Sci., vol 337 (4), p. 265-270, 2003.
On the Theoretical and Numerical Control of a One-Dimensional Nonlinear Parabolic Partial Differential Equation. E Fernández-Cara, D Nina-Huaman, M R Nuñez-Chávez, F B Vieira, Journal of Optimization Theory and Applications. 1753E. Fernández-Cara, D. Nina-Huaman, M. R. Nuñez-Chávez, F. B. Vieira, On the Theoretical and Numerical Control of a One-Dimensional Nonlinear Parabolic Partial Differential Equation, Journal of Optimization Theory and Applications, 175(3), p. 652-682, 2017.
A V Fursikov, O Y Imanuvilov, Controllability of Evolution Equations. Research Institute of Mathematics. Seoul National UniversityA. V. Fursikov, O. Y. Imanuvilov, Controllability of Evolution Equations, Lecture Note Series. Research Institute of Mathematics. Seoul National University, 1996.
Controllability results for cascade systems of m coupled parabolic PDEs by one control force. M González-Burgos, L De Teresa, Portugal. Math. 67M. González-Burgos, L. De Teresa, Controllability results for cascade systems of m coupled parabolic PDEs by one control force, Portugal. Math. 67, vol 1, p. 91-113, 2010.
Controllability of systems of Stokes equations with one control force: existence of insensitizing controls. S Guerrero, Ann. Inst. H. Poincaré Anal. Non Linéaire. 246S. Guerrero, Controllability of systems of Stokes equations with one control force: existence of insensitizing controls, Ann. Inst. H. Poincaré Anal. Non Linéaire, vol 24 (6), p. 1029-1054, 2007.
Null controllability of some systems of two parabolic equations with one control force. S Guerrero, SIAM J. Control Optim. 462S. Guerrero, Null controllability of some systems of two parabolic equations with one control force. SIAM J. Control Optim., vol 46 (2), p. 379-394, 2007.
Insensitizing controls for the Navier-Stokes equations. M Gueye, Annales de l'I. H. P. Non Linear Analysis. 30M. Gueye, Insensitizing controls for the Navier-Stokes equations, Annales de l'I. H. P. Non Linear Analysis, vol 30, p. 825-844, 2013.
Carleman inequalities for parabolic equations in Sobolev spaces of negative order and exact controllability for semilinear parabolic equations. O Y Imanuvilov, M Yamamoto, Publ. RIMS, Kyoto Univ. 39O. Y. Imanuvilov, M. Yamamoto, Carleman inequalities for parabolic equations in Sobolev spaces of negative order and exact controllability for semilinear parabolic equations, Publ. RIMS, Kyoto Univ., vol 39, p. 227-274, 2003.
Unique continuation principle for systems of parabolic equations. O Kavian, L De Teresa, ESAIM Control Optim. Calc. Var. 162O. Kavian, L. De Teresa. Unique continuation principle for systems of parabolic equations, ESAIM Control Optim. Calc. Var. 16, no. 2, 247-274, 2010.
Lions, Hierarchical control. J L , Proc. Indian Academic Science Mathematical Science. 1041J. L. Lions, Hierarchical control, Proc. Indian Academic Science Mathematical Science, vol 104 (1), p. 295-304, 1994.
J L Lions, Quelques notions dans l'analyse et le contrôle de systèmesà donnes es incompletes. MalagaProceeding of the 11th Congress on Applied Mathematics (Spanish)J. L. Lions, Quelques notions dans l'analyse et le contrôle de systèmesà donnes es incompletes, In: Proceeding of the 11th Congress on Applied Mathematics (Spanish) (Malaga, 1989), p. 43-54, 1990.
J L Lions, Sentinelle pour les systèmes distribuésà donnèes incomplétes. ParisMasson21J. L. Lions, Sentinelle pour les systèmes distribuésà donnèes incomplétes, Masson, Paris, vol 21, 1992.
Insensitizing controls for a class of quasilinear parabolic equations. X Liu, J. Diff. Eq. 253X. Liu, Insensitizing controls for a class of quasilinear parabolic equations, J. Diff. Eq, vol 253, p. 1287-1316, 2012.
An example of ǫ-insensitizing controls for the heat equation with no intersecting observation and control regions. S Micu, J H Ortega, L De Teresa, Appl. Math. Lett. 178S. Micu, J. H. Ortega, L. De Teresa, An example of ǫ-insensitizing controls for the heat equation with no intersecting observation and control regions, Appl. Math. Lett., vol 17 (8), p. 927-932, 2004.
Stackelberg-Nash Controllability for a quasi-linear parabolic equation in dimensions 1D, 2D or 3D. D Nina-Huaman, Journal of Dynamical and Control Systems. 27D. Nina-Huaman, Stackelberg-Nash Controllability for a quasi-linear parabolic equation in dimensions 1D, 2D or 3D,Journal of Dynamical and Control Systems, v. 27, p. 1-27, 2021.
Theoretical and numerical local null controllability of a quasi-linear parabolic equation in dimensions 2 and 3. E Fernández-Cara, J Límaco, I Marín-Gayte, Journal of the Franklin Institute. 358E. Fernández-Cara, J. Límaco, I. Marín-Gayte, Theoretical and numerical local null controllability of a quasi-linear parabolic equation in dimensions 2 and 3, Journal of the Franklin Institute, 358, p. 2846-2871, 2021.
Hierarchical Controllability for a nonlinear parabolic equation in one dimension. M R Nuñez-Chávez, J Límaco, Journal of Optimization Theory and Applications. M. R. Nuñez-Chávez, J. Límaco, Hierarchical Controllability for a nonlinear parabolic equation in one dimension, Journal of Optimization Theory and Applications.
A Nonlinear Heat Equation with Temperature-Dependent Parameters. M A Rincon, J Límaco, I-Shih Liu, Mathematical Physics Electronic Journal. 12M. A. Rincon, J. Límaco, I-Shih. Liu, A Nonlinear Heat Equation with Temperature-Dependent Parameters, Mathemati- cal Physics Electronic Journal, vol 12, 2006.
Insensitizing controls with constraints on the control of the semilinear heat equation for a more general cost functional. Y Simporé, O Traoré, J. N. E. E. A. 1Y. Simporé, O. Traoré, Insensitizing controls with constraints on the control of the semilinear heat equation for a more general cost functional, J. N. E. E. A, vol 1, p. 1-12, 2017.
| []
|
[
"Scheme Dependence of the Wilsonian Effective Action and Sharp Cutoff Limit of the Flow Equation",
"Scheme Dependence of the Wilsonian Effective Action and Sharp Cutoff Limit of the Flow Equation"
]
| [
"Jun-Ichi Sumi ",
"Wataru Souma \nInstitute for theoretical Physics\nKanazawa University\nKakuma-machi920-1192KanazawaJapan\n",
"Ken-Ichi Aoki \nInstitute for theoretical Physics\nKanazawa University\nKakuma-machi920-1192KanazawaJapan\n",
"Haruhiko Terao \nInstitute for theoretical Physics\nKanazawa University\nKakuma-machi920-1192KanazawaJapan\n",
"Keiichi Morikawa \nInstitute for theoretical Physics\nKanazawa University\nKakuma-machi920-1192KanazawaJapan\n",
"\nDepartment of Fundamental Sciences\nFaculty of Integrated Human Studies\nKyoto University\n606-8501KyotoJapan\n",
"\nResearch Center for Nanodevices and Systems\nHiroshima University\n1-4-2 Kagamiyama, Higashi-Hiroshima739-8527Japan\n"
]
| [
"Institute for theoretical Physics\nKanazawa University\nKakuma-machi920-1192KanazawaJapan",
"Institute for theoretical Physics\nKanazawa University\nKakuma-machi920-1192KanazawaJapan",
"Institute for theoretical Physics\nKanazawa University\nKakuma-machi920-1192KanazawaJapan",
"Institute for theoretical Physics\nKanazawa University\nKakuma-machi920-1192KanazawaJapan",
"Department of Fundamental Sciences\nFaculty of Integrated Human Studies\nKyoto University\n606-8501KyotoJapan",
"Research Center for Nanodevices and Systems\nHiroshima University\n1-4-2 Kagamiyama, Higashi-Hiroshima739-8527Japan"
]
| []
| The cutoff scheme dependence in the several formulations of the Exact Renormalization Group (ERG) is investigated. It is shown that the cutoff scheme dependence of the Wilsonian effective action is regarded as a certain coordinate transformation on the theory space. From this observation the Wilsonian effective actions are found to suffer from strong dependence on the schemes even in the infra-red asymptotic region for massive theories. However there is no such scheme dependence in the one particle irreducible parts of them, which is called the effective average actions. We also derive the explicit form of the Polchinski RG equation in the sharp cutoff limit. Finally this equation is shown to be identical with the Wegner-Houghton RG equation. | null | [
"https://export.arxiv.org/pdf/hep-th/0002231v1.pdf"
]
| 18,575,495 | hep-th/0002231 | 061632c3440aad3658c68fdb85f54b87fe7df9de |
Scheme Dependence of the Wilsonian Effective Action and Sharp Cutoff Limit of the Flow Equation
28 Feb 2000
Jun-Ichi Sumi
Wataru Souma
Institute for theoretical Physics
Kanazawa University
Kakuma-machi920-1192KanazawaJapan
Ken-Ichi Aoki
Institute for theoretical Physics
Kanazawa University
Kakuma-machi920-1192KanazawaJapan
Haruhiko Terao
Institute for theoretical Physics
Kanazawa University
Kakuma-machi920-1192KanazawaJapan
Keiichi Morikawa
Institute for theoretical Physics
Kanazawa University
Kakuma-machi920-1192KanazawaJapan
Department of Fundamental Sciences
Faculty of Integrated Human Studies
Kyoto University
606-8501KyotoJapan
Research Center for Nanodevices and Systems
Hiroshima University
1-4-2 Kagamiyama, Higashi-Hiroshima739-8527Japan
Scheme Dependence of the Wilsonian Effective Action and Sharp Cutoff Limit of the Flow Equation
28 Feb 2000
The cutoff scheme dependence in the several formulations of the Exact Renormalization Group (ERG) is investigated. It is shown that the cutoff scheme dependence of the Wilsonian effective action is regarded as a certain coordinate transformation on the theory space. From this observation the Wilsonian effective actions are found to suffer from strong dependence on the schemes even in the infra-red asymptotic region for massive theories. However there is no such scheme dependence in the one particle irreducible parts of them, which is called the effective average actions. We also derive the explicit form of the Polchinski RG equation in the sharp cutoff limit. Finally this equation is shown to be identical with the Wegner-Houghton RG equation.
Introduction
The Exact renormalization group (ERG) [1] has been one of the analytical tools to investigate non-perturbative phenomena of field theories, (e.g. the chiral symmetry breaking [2] etc.). The ERG flow equations are the functional differential equations for the Wilsonian effective actions S Λ [φ], where Λ is an ultra-violet (infra-red) momentum cutoff of the low energy modes φ(q) (the high energy modes which are already integrated out). The responce of the effective action under variation of the cutoff is exactly represented as
∂ ∂Λ S Λ [φ] = F [S Λ ],(1)
where F[S_Λ] is a finite functional of the field φ. The explicit forms of F[S_Λ] are shown later. By solving the ERG flow equations toward Λ = 0 with certain bare actions as the initial conditions, we can obtain the generating functionals of the connected Green's functions. Another type of ERG equation is also known, written for the cutoff Legendre effective action (or the effective average action) Γ_Λ[φ]. In this case the solutions of the ERG lead to the ordinary effective actions, or the generating functionals of the 1PI Green's functions. Usually, we write down the ERG equations for the dimensionless parameters in the effective actions by scaling the parameters with the infra-red cutoff Λ, since the energy unit used to represent the theories does not have any physical significance. The functional space of the dimensionless effective action is called the theory space. Through this manipulation the beta-functional F becomes free from the scale Λ. Among the RG flows of these dimensionless quantities, the fixed points, the critical surfaces and the renormalized trajectories are especially indispensable in investigating the (statistical) continuum limit of field theories.
It is important for practical analyses that the ERG admits non-perturbative as well as systematic approximations, e.g. the derivative expansion [3] and the momentum scale expansion [4,5]. Although we used the word 'expansion' here, these approximation schemes are not series expansions with respect to some explicit small parameter. This is an essential distinction from the ordinary expansion schemes, the ǫ-expansion, the 1/N-expansion and perturbation theory, which lead to asymptotic series. The solutions of the ERG equations are expected to converge smoothly as the approximation is improved. Furthermore, we may obtain fairly good non-perturbative results already within simple approximation schemes [6].
The ERG flow equation depends on the cutoff schemes. Here the cutoff scheme means the profile of the cutoff function in the propagator. It is convenient to perform the cutoff of the infra-red region p 2 < Λ 2 by adding a momentum dependent mass,
∆S[φ] ≡ d d x Λ 2 2 φ · C −1 (−∂ 2 /Λ 2 ) · φ,(2)
where C is a proper cutoff function satisfiying that C(x) → 0 as x → 0 and d is the space-time dimensions. Then the propagator is multipied by the cutoff function θ(x) = xC(x)/(1 + xC(x)). In Fig. 1 the examples of the cutoff functions for various C(x) are shown. The sharp cutoff scheme corresponds to the step function; θ(p 2 ) = 0 for p < Λ and θ(p 2 ) = 1 for p > Λ. (1 + xC(x)) for various C(x). C −1 (x) is the mass function introduced in Eq. (2).
The effective actions treated by the ERG equations are themselves cutoff scheme dependent even after the cutoff is removed, Λ → 0, while the physical quantities obtained from their continuum limit, or the renormalized trajectories, should not be affected by the regularization scheme. Therefore it is important to understand the cutoff scheme dependence of the RG flows, especially of the renormalized trajectories, and to identify the scheme-independent quantities at Λ = 0, which are the physical quantities obtained in the ERG approach. In this paper we discuss the basic structure of the cutoff scheme dependence of the ERG equations and of their solutions, the effective actions. In particular we look into the scheme dependence of the low energy effective actions, or the renormalized trajectories, in the asymptotic region for massive theories. In this region the Wilsonian effective actions are found to suffer from strong scheme dependence, while the Legendre effective actions are free from such problems. Related to this we also examine the sharp cutoff limit of the so-called Polchinski equation by taking care of the singular limit of the cutoff profiles. Moreover it will be shown that the Polchinski equations turn out to be equivalent to the Wegner-Houghton equation in the sharp cutoff limit. This equivalence between these two formulations of the ERG had not been proved before.
The sharp cutoff limit of the Polchinski equations and their equivalence to the Wegner-Houghton formulation will be clarified in Sec. 4 and in Sec. 5. Section 6 is devoted to the conclusions and some remarks. Throughout this paper, we restrict ourselves to the single scalar theories. This restriction does not loose the generality of the discussions.
ERG Equations
To fix our conventions and to make this paper self-contained, we briefly overview the derivations of the ERG flow equations. Let us start from the generator of the connected Green's functions W [J] given by
exp (W [J]) = Dφ exp (−∆S U.V. − S Λ 0 + J · φ) ,(3)
where ∆S U.V. is the ultra-violet cutoff term regularizing the path-integral (3). We sometimes use the shorthand: J · φ = d d xJ(x)φ(x) etc., where d is the (euclidean) space-time dimensions. Now, we introduce the intermediate scale Λ < Λ 0 and formally integrate out the high energy modes φ > (q) (Λ < q ≤ Λ 0 ).
Then we get the effective action for the low energy modes φ(q) (q < Λ),
exp (−S Λ [φ, J]) = Dφ > exp −∆S Λ 0 Λ [φ > ] − S Λ 0 [φ + φ > ] + J · (φ + φ > ) ,(4)
where the cutoff action ∆S Λ 0 Λ [φ > ] is given by,
∆S Λ 0 Λ [φ] ≡ 1 2 φ · P Λ 0 Λ −1 · φ.(5)
The support of P Λ 0 Λ (q) is effectively restricted in the region Λ < q < Λ 0 by means of a certain smooth cutoff function. Furthermore, we set, ∆S U.V. = ∆S Λ 0 0 and P Λ 0 0 (q) = P Λ 0 (q) + P Λ 0 Λ (q). This can be achieved by multiplying the partition of unity θ Λ 0 Λ (q) to the propagator 1/q 2 , i.e.
P Λ 0 Λ (q) = θ Λ 0 Λ (q)/q 2 . θ Λ 0 Λ (q) approximately vanishes when q ≫ Λ 0 or q ≪ Λ, and θ Λ 0 Λ (q) ≈ 1 for Λ < q < Λ 0 .
In the RG flow equation, we can safely forget the ultra-violet cutoff Λ 0 by taking the limit Λ 0 → ∞, since ∂P Λ 0 =∞ Λ (q)/∂Λ decays in both q → 0 and q → ∞ sufficiently fast. Now, Eq. (3) can be rewritten in terms of S Λ in Eq. (4), (See ref. [4])
exp (W [J]) = Dφ < exp −∆S Λ 0 − S Λ [φ, J] .(6)
Putting J = 0, Eq. (6) is nothing but the definition of the Wilsonian effective action
∆S Λ 0 + S Λ [φ, 0]
. Evidently, if we put φ < = 0 in Eq. (4) then S Λ becomes the generator of the connected Green's functions with the infra-red cutoff W Λ [J] = −S Λ [0, J] [4]. By shifting φ > → φ > − φ < and setting J = 0 in Eq. (4), we also find [4],
W Λ [P −1 Λ · φ < ] = 1 2 φ < · P −1 Λ · φ < − S Λ [φ < , 0].(7)
Hereafter, we write S Λ [φ] = S Λ [φ, 0]. The above cutoff θ is called the 'multiplicative cutoff', because we multiplied it to the kinetic term of φ. ∆S is called 'additive', since the inverse cutoff propagator is given by C −1 (q 2 ) in Eq. (2) plus the ordinary kinetic term q 2 . θ(x) ≡ θ Λ 0 Λ (x) above is written in terms of C(x) as θ(x) = xC(x)/(1 + xC(x)). The relation between these two cutoff schemes: the multiplicative cutoff and the additive cutoff is given as follows. The bare actions of both schemes obviously satisfy S add
Λ 0 = 1 2 d d x(∂φ) 2 +S multi Λ 0 , where S add Λ 0 and S multi Λ 0
are the bare actions with the additive cutoff and with the multiplicative cutoff respectively. By the definition, the generator of the connected Green's functions W Λ [J] is common for the both schemes. Thus, it immediately follows,
S multi Λ [P Λ · J] − 1 2 J · P Λ · J = S add Λ [C · J] − 1 2 J · C · J.(8)
We will employ the multiplicative cutoff scheme, since it is convenient to investigate the sharp cutoff limit of the RG flow equations.
Flow equations
Setting φ < = 0 and differentiating the boths sides of Eq. (4) with respect to Λ, we get the RG flow equation for W Λ [J],
∂ ∂Λ W Λ = − 1 2 W ′ Λ · ∂ ∂Λ P −1 Λ · W ′ Λ − 1 2 tr ∂ ∂Λ P −1 Λ · W ′′ Λ ,(9)
where the prime denotes the derivative with respect to the source J. By using Eq. (7), we also get,
∂ ∂Λ S Λ = − 1 2 S ′ Λ · ∂ ∂Λ P Λ · S ′ Λ + 1 2 tr ∂ ∂Λ P Λ · S ′′ Λ .(10)
This is the famous 'Polchinski equation' [1]. This equation may be represented diagrammatically as in Fig. 2, whose caption reads: diagrammatic representation of Eq. (10); the crossed circles and the filled circles correspond to ∂P_Λ/∂Λ and the vertices of S_Λ respectively.
Next, we scale all the dimensionful quantities in terms of the infra-red cutoff Λ, i.e. φ = Λ d φφ , p = Λp and L Λ = Λ dL t , where d φ = (d − 2)/2 is the canonical dimension of the field, t = ln Λ 0 /Λ is the cutoff scale factor andL t is the Lagrangian density. We also write the dimensionless Wilsonian effective action asŜ t [φ] = d dxL t (φ). Then, we get
Λ ∂ ∂Λ S Λ = −Λ d ∂ ∂t + d φ ∆ φ + ∆ ∂ − d Ŝ t ,(11)
where ∆ φ and ∆ ∂ count the degree of the fields and that of the derivatives ∂ µ respectively. The initial boundary condition of Eq. (10) is given by the bare action, S Λ 0 . We can derive the RG flow equation for the Legendre effective action with the infra-red cutoff Γ Λ [φ] given by the Legendre transform of W Λ [J].
W Λ [J] = J · φ − Γ Λ [φ] + 1 2 φ · P −1 Λ − P −1 Λ=0 · φ,(12)
where φ is given by φ = δW Λ /δJ. After the Legendre transformation, the RG flow equation for Γ Λ [φ] can be read,
∂ ∂Λ Γ Λ [φ] = 1 2 tr ∂ ∂Λ P −1 Λ · P −1 Λ − P −1 Λ=0 + Γ ′′ −1 .(13)
The initial condition of Eq. (13) is given by Γ_{Λ_0} = S_{Λ_0}, because all quantum corrections vanish at Λ = Λ_0. Since Γ_Λ[φ] is composed of the one particle irreducible diagrams, the diagrams corresponding to Eq. (13) contain no tree diagrams, as is shown in Fig. 3
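As a concrete illustration of how a flow of the type (13) is solved in practice, the following sketch integrates the local potential approximation of such a flow. This is not taken from the present paper: the optimized (Litim-type) regulator assumed here, the choice d = 3 and all numerical parameters are our own illustrative assumptions. With that regulator the potential obeys k dU_k/dk = A_d k^{d+2} / (k² + U_k''(φ)), with A_d = S_d / (d (2π)^d) and S_d the surface of the unit sphere.

import numpy as np
import math

d = 3
S_d = 2.0 * math.pi**(d / 2) / math.gamma(d / 2)     # surface of the unit sphere
A_d = S_d / (d * (2.0 * math.pi)**d)                 # = 1/(6 pi^2) for d = 3

phi = np.linspace(-2.0, 2.0, 101)
dphi = phi[1] - phi[0]
U = 0.25 * phi**2 + phi**4 / 24.0                    # bare potential at k = Lambda0 (symmetric phase)

Lambda0, kmin, nsteps = 10.0, 1e-2, 6000
ts = np.linspace(math.log(Lambda0), math.log(kmin), nsteps)
dt = ts[1] - ts[0]                                   # negative step: flowing towards the infra-red
for t in ts[:-1]:
    k = math.exp(t)
    Upp = np.gradient(np.gradient(U, dphi), dphi)    # U''(phi) by finite differences
    U = U + dt * A_d * k**(d + 2) / (k**2 + Upp)     # Euler step of the LPA flow in t = ln k
Upp_final = np.gradient(np.gradient(U, dphi), dphi)
print("U''(0) at k -> 0 (renormalized mass squared):", Upp_final[len(phi) // 2])

The coarse Euler stepping and finite-difference second derivative are only meant to show the structure of the computation, not to produce precision results.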
Wegner-Houghton equation
We start from the following partition function,
Z = p≤Λ Dφ exp − 1 2 φ · P −1 · φ − S Λ [φ] ,(14)
where the support of φ is restricted to p ≤ Λ. We shift the quadratic part, the P −1 term, from the Wilsonian effective action, since it is subtracted also in S Λ given by Eq. (4). Let us integrate out the modes with momenta Λ − δΛ < p ≤ Λ, we call these modes the 'shell modes' and write as φ s . Expanding the action S Λ [φ + φ s ] in the shell modes φ s , we have
S Λ [φ + φ s ] = S Λ [φ] + φ s · S(1)Λ [φ] + 1 2 φ 2 s · S(2)Λ [φ] + · · · ,(15)
where the superscript (n) denotes the n-th functional derivative with respect to the shell mode. We can regard the Taylor coefficients S (n) Λ [φ] to the field (φ) dependent vertices. The quantum fluctuations of the shell modes can be incorporated by perturbative expansion.
For infinitesimal δΛ, the leading corrections, i.e. of order δΛ, come from less than or equal to one loop diagrams. The higher (n ≥ 2) loops diagrams do not contribute to the leading order in δΛ, since every loop integral introduces the factor δΛ. Furthermore, each articulation line also brings the factor δΛ. Hence, the leading corrections are found to be the Feynman diagrams with only one propagator which is either an articulation line or a loop one. Taking into account this constraint, the higher vertices S (n≥3) Λ
[φ] cannot appear and can be dropped in Eq. (15). After performing the Gaussian integration of the shell modes, we get a coarse-grained action S Λ−δΛ [φ] of the low energy modes φ(p) :
p ≤ Λ − δΛ, S Λ−δΛ [φ] = S Λ [φ] − 1 2 δΛS ′ Λ · P −1 + S ′′ Λ −1 · S ′ Λ + 1 2 δΛtr ln P −1 + S ′′ Λ ,(16)
where prime denotes the functional derivative with respect to φ s . By letting δΛ → 0, we find
∂ ∂Λ S Λ = 1 2 S ′ Λ · P −1 + S ′′ Λ −1 · S ′ Λ − 1 2 tr ln P −1 + S ′′ Λ .(17)
This is called the 'Wegner-Houghton equation' [1]. The diagrams corresponding to Eq. (17) are shown in Fig. 4.
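As an illustration of how Eq. (17) is used in practice (this reduction is standard, and is included here only for orientation), consider a constant field configuration: the S′_Λ term drops, since a constant field carries only zero momentum while the shell modes have |q| = Λ, and the trace runs over the momentum shell, giving the local potential flow

\[
\Lambda\frac{\partial V_\Lambda(\phi)}{\partial\Lambda}
= -\frac{1}{2}\,\frac{S_d\,\Lambda^{d}}{(2\pi)^{d}}\,
\ln\!\bigl[\Lambda^{2}+V_\Lambda''(\phi)\bigr]+\text{const},
\qquad S_d=\frac{2\pi^{d/2}}{\Gamma(d/2)},
\]

where the field-independent constant does not affect the flow of the couplings.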
General Aspects of the Cutoff Scheme Dependence
In this section, we discuss some general aspects of the cutoff scheme dependence of the ERG. In the first two subsections we show that the formal relation among the Wilsonian effective actions with different cutoff schemes can be regarded as a coordinate transformation on the theory space. Therefore it immediately follows that the critical exponents, which are the macroscopic physical quantities of the phase transition, do not suffer from the cutoff scheme dependence. In the remaining subsections the scheme dependence of the renormalized trajectories in the infra-red asymptotic region is examined. It is shown that the scheme dependence disappears from the cutoff Legendre effective action in this region, while not from the Wilsonian effective action.
Coordinate Transformation on the Theory Space
The Polchinski RG equation [1] for a single scalar theory can be rewritten,
∂ ∂t + d φ φ · δ δφ + d d q (2π) d φ (q) q µ ∂ ∂q µδ δφ (q) exp (−S t [φ]) = d d q (2π) dδ δφ (q) ∂ ∂q 2 θ q 2 δ δφ (−q) exp (−S t [φ]),(18)
where, for the convenience, we write the Fourier transform of the functional derivative with respect to φ(x) byδ
δφ (q) ≡ d d xe iq·x δ δφ (x) = (2π) d δ δφ (q) .(19)
In Eq. (18), θ (q 2 ) denotes the cutoff function, which is given in Fig. 1 for example. Note that, the momentum derivative operating to the effective action S t [φ] in the first line of Eq. (18) does not operate to the delta function δ(Σq i ) representing momentum conservation. Hence it operates to the effective action as
d d q (2π) d φ (q) q µ ∂ ∂q µδ δφ (q) S t [φ] = (∆ ∂ − d) S t [φ] ,(20)
where ∆ ∂ counts the degree of derivatives. Let us consider the coordinate transformation of the theory space:
S t [φ] → S t [φ]
, given by the following transformation:
exp − S t [φ] = exp 1 2 δD exp (−S t [φ]),(21)
where δD is given by
δD = d d q (2π) dδ δφ (q) · 1 q 2 δθ q 2 ·δ δφ (−q) .(22)
Since δD is independent of the cutoff scale t, this transformation is in fact a coordinate transformation on the theory space 1 . Therefore the critical exponents obtained by the RG technique are invariant under the transformation (21). (See Ref. [14].) Indeed, by this transformation the cutoff function θ(q 2 ) is changed to θ(q 2 ) + δθ(q 2 ) in the Polchinski RG equation. Operating exp(δD/2) to both sides of Eq. (18) and using
Figure 5: The transformation exp(δD/2) maps the fixed point, the critical surface and the renormalized trajectory of scheme A onto those of scheme B.
the commutation relation,
d φ φ · δ δφ + d d q (2π) d φ (q) q µ ∂ ∂q µδ δφ (q) , δD 2 = d d q (2π) dδ δφ (q) ∂δθ (q 2 ) ∂q 2 δ δφ (−q) ,(23)
one can realize that the effective action S̃_t[φ] just satisfies the Polchinski RGE with the cutoff scheme θ(q²) + δθ(q²). Therefore, if S_t[φ] is a solution of Eq. (18) with cutoff scheme θ(q²), then S̃_t[φ], defined by Eq. (21), is also a solution for the other cutoff scheme θ(q²) + δθ(q²). The transformation (21) maps the fixed point, the critical surface and the renormalized trajectories to those of the other scheme. (See Fig. 5.) However the critical exponents at the fixed points are scheme independent.
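The step behind this statement can be made explicit as follows (the bookkeeping below is ours, filling in the passage from (21) and (23)). Writing G_dil for the dilatation part appearing on the left-hand side of (18),

\[
e^{\delta D/2}\,G_{\rm dil}\,e^{-S_t[\phi]}
=\Bigl(G_{\rm dil}+\bigl[\tfrac{\delta D}{2},\,G_{\rm dil}\bigr]\Bigr)e^{-\widetilde S_t[\phi]},
\qquad
\bigl[\tfrac{\delta D}{2},\,G_{\rm dil}\bigr]
=-\int\!\frac{d^dq}{(2\pi)^d}\,
\frac{\tilde\delta}{\tilde\delta\phi(q)}\,
\frac{\partial\,\delta\theta(q^2)}{\partial q^2}\,
\frac{\tilde\delta}{\tilde\delta\phi(-q)},
\]

because this commutator, built of functional derivatives only, itself commutes with δD, so the conjugation series e^{A}Be^{-A} = B + [A,B] + (1/2)[A,[A,B]] + ... terminates after the first commutator. Since ∂_t and the right-hand side of (18) commute with δD, moving the extra term to the right-hand side simply combines ∂θ/∂q² with ∂δθ/∂q², i.e. it converts the scheme θ into θ + δθ.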
Wave-function Renormalization and Additive cutoff
In order to extract the anomalous dimension it is more convenient to employ the additive cutoff instead of the multiplicative one, for the following two reasons. 1) In the multiplicative case, part of the kinetic term of the Wilsonian effective action is absorbed into the inverse cutoff propagator q²(θ(q²))⁻¹. 2) We can eliminate the wave-function renormalization factor Z_φ in the RG equation by rescaling the cutoff function C(q²) to Z_φ⁻¹C(q²) in the additive case, and the anomalous dimension (η) of the field may then be explicitly extracted.
The additive cutoff is introduced by Eq. (2) and the Wilsonian effective actions with two cutoff schemes are related by the relation (8)
, i.e. S multi [φ] = S add [(1 − ∂ 2 C)φ] + 1 2 d d xφ∂ 2 (1 − ∂ 2 C)φ,(24)
where S multi [φ] is the effective action with the multiplicative cutoff. In the multiplicative cutoff case, we drop the part of the kinetic term of (the interaction part of) the effective action. It is rather convenient to include the kinetic term in the effective action completely in extracting the anomalous dimension. The additive cutoff C(x) in Eq. (2) and the multiplicative cutoff θ(x) are related by θ(x) = xC(x)/(1 + xC(x)).
Let us rescale the field φ toφ = Z 1 2 φ φ, where Z φ is the wave-function renormalization factor. If we also rescale the cutoff function as,
C −1 q 2 /Λ 2 −→ Z φ C −1 q 2 /Λ 2 ,(25)
then the explicit Z φ dependence of the RG flow equation can be eliminated. The RG flow equation depends on Z φ only through the anomalous dimension η defined by the consistency condition, i.e. the kinetic term should be unity at each scale. Consequently, η becomes the function of the coupling constants. It means that the beta-function of Z φ is given by,
∂ ∂t Z φ = η(g i )Z φ ,(26)
where {g i } is a coordinate system on the theory space. In this coordinate system, the RG beta-functions have the following structure:
Ω ij (g) = ∂β i ∂g j (g) = 0 for g j = Z φ ,(27)
because the beta-functions have no Z φ dependence except for β Z . Such a parametrization is called the 'perfect coordinate' in Ref. [6]. For the dimensionless Wilsonian effective action, the RG flow equation becomes,
∂ ∂t + d φ φ · δ δφ + d d q (2π) d φ (q) q µ ∂ ∂q µδ δφ (q) exp (−S t [φ])(28)= − d d q (2π) dδ δφ (q) q 2 ∂ ∂q 2 C + 1 2 (η − 2)C δ δφ (−q) exp (−S t [φ]),
where d φ and η =Ż φ /Z φ are the physical scaling dimension of the field; d φ = (d + η − 2)/2 and the anomalous dimension of φ respectively. One can easily extend the scheme dependence relations given by Eq. (23) etc. to this type of RGE. However one may wonder whether the coordinate transformation induced by Eq. (25) is well-defined or not. Suppose that C(x) is a polynomial i.e. C(x) = x k . Since the cutoff Λ appears only though the cutoff function,
e −S Λ [φ] ≡ D φ > exp − Z φ 2 φ > Λ 2 C −1 (−∂ 2 /Λ 2 )φ > − S[φ > + φ] ,(29)
we can eliminate Z φ by shifting the cutoff;
Z φ Λ 2 C −1 ( p 2 Λ 2 ) = Λ ′ 2 C −1 ( p 2 Λ ′2 ) where Λ = Z 1/(2k−2) φ Λ ′ . Therefore the Wilsonian effective action S Λ [φ] with a cutoff scheme C −1 can be written in terms of S Λ [φ], S Λ ′ [φ] = S Λ [φ] = S Λ ′ [φ] + Λ Λ ′ dΛ ∂ ∂Λ SΛ[φ], = S Λ ′ [φ] + δf (Z φ )Λ ′ ∂ ∂Λ ′ S Λ ′ [φ] + 1 2! (δf (Z φ )Λ ′ ) 2 ∂ 2 ∂Λ ′ 2 S Λ ′ [φ] + · · · ,(30)
where
δf (Z φ ) is given by Λ − Λ ′ = δf (Z φ )Λ ′ . In our case, δf (Z φ ) = Z 1/(2k−2) φ − 1.
The derivative with respect to Λ ′ in Eq. (30) will be eliminated by the RG flow equation. Thus one can find the coordinate transformation between S Λ [φ] and S Λ [φ]. Since all the loop momentum integrals are regularized in the both regions of infra-red and ultra-violet, Eq. (30) gives the well-defined coordinate transformation at all orders in the Taylor expansion.
In the more general case, we cannot eliminate Z φ by shifting the cutoff. However, since the change of θ(x) = xC(x)/(1 + xC(x)) induced by the change of the cutoff function C −1 (x) is concentrated in the finite region of the momentum x = p 2 , it also gives the well-defined coordinate transformation equivalent to Eq. (21).
Consequently, the critical exponents given by Eq. (18) and by Eq. (28) are completely the same, since these formulations can be understood as the difference of the coordinate systems on the theory space.
Asymptotic Region of the Renormalized Trajectory
In this and the next subsections we discuss the cutoff scheme dependence of the renormalized trajectories in the 'asymptotic region' Λ ≪ M R , where M R is the renormalized mass of φ. We note that Eq. (21) may be rewritten as follows. Let S (n) t be the vertices of the effective action S t [φ], and W t be sum of the connected diagrams composed of the propagator (δP −1 + S (2) t ) −1 and the vertices S (n) t ( n > 2 ), where δP (q 2 ) is the cutoff propagator i.e. δP (q 2 ) ≡ 1 q 2 δθ(q 2 ). Then it is easily found that (See appendix),
exp 1 2 δD[δ/δφ] exp (−S t [φ]) = exp − 1 2 φ · δP −1 · φ + W t [δP −1 · φ] .(31)
The cutoff scheme dependence of the renormalized trajectory is given by Eq. (31). (See Fig. 5.) If the RG flows of the dimensionful quantities freeze when t → ∞, then every dimensionless coupling g_i(t) should behave as g_i(t) ∼ g_i^R e^{d_i t} as t → ∞ with some finite dimensionful coupling g_i^R, where d_i is the canonical dimension of g_i. Here, we take M_R to be the unit of mass scale: t = ln M_R/Λ. Hereafter we call such a region 'the freezing region', if it exists. As is seen below, the asymptotic region is not always the freezing region.
In the asymptotic region, it can be realized that the loop diagrams in Eq. (31) are found to be suppressed compared with the tree diagrams. It is seen by the following arguments. Let us consider the Feynman diagram with N I internal lines, N E external legs and N V vertices. By comparing the canonical dimension of each operator in the both sides of Eq. (31), we find the following factor,
exp{∆t} ≡ exp{[d N_V − d_φ(N_E + 2N_I) − N_D^{(1)} − 2N_I − (d − d_φ N_E) + N_D^{(2)}] t},   (32)
where N_D^{(1)} is the total degree of the external momenta (or the derivatives) of the N_V vertices in the Feynman diagram, and N_D^{(2)} is that of the subset of the N_D^{(1)} derivatives which operate on the field φ(x). The first three terms of ∆ come from the canonical dimensions of the vertices, the next one comes from the propagators, and the last two terms are the dimension of the N_E-point vertex with N_D^{(2)} derivatives, since the number of derivatives only decreases under the loop integration². We do not expand the propagators with respect to the external momenta, since we are interested only in the scaling behavior of the Feynman diagrams, in which each propagator gives a negative power of the external momenta and brings the factor exp(−2t)³. The loop integration does not change the factor ∆, because the support of δθ(p²) is concentrated in the small region around p² ∼ 1. The factor ∆ describes the response to the shrinkage of the loop momentum integral region. The diagrams with ∆ < 0 are dropped in Eq. (31). Nota bene: the massive field decouples in the asymptotic region not due to its large (dimensionless) mass but due to the shrinkage of the loop momentum integral region.
One can rewrite ∆ by using the number of the loops L,
∆[L] = −dL − (N_D^{(1)} − N_D^{(2)}) ≤ −dL,   (33)
where we used the relation,
N I − N V + 1 = L.(34)
Hence all loop corrections, i.e. 'quantum' corrections, are relatively suppressed by the factor exp(−dLt) compared with the tree diagrams. Since the number of derivatives is decreased only by the loop integration, N_D^{(2)} = N_D^{(1)} in the tree diagrams. Therefore all the tree diagrams survive. Now we can conclude that in the asymptotic region W_t in Eq. (31) consists of tree diagrams only.
In Fig. 6 we show examples of Feynman diagrams and their suppression factors. In these examples, the factor ∆ of the first (tree) diagram is ∆ = 2(d − 4d_φ) − 2 − (d − 6d_φ) = 0, and that of the second (one-loop) diagram is ∆ = 2d − (4 + 6)d_φ − 4 − (d − 6d_φ) = −d < 0. The latter diagram is suppressed in comparison with the six-point vertex itself.
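As a sanity check, the reduction of Eq. (32) to Eq. (33) and the two values just quoted for the Fig. 6 diagrams can be verified symbolically; the short sketch below (not from the paper) uses the canonical dimension d_φ = (d − 2)/2 and the diagram data N_V, N_E, N_I, N_D^{(1)}, N_D^{(2)} as stated above.

# Symbolic check (sketch, not from the paper) of the reduction of Eq. (32) to
# Eq. (33) for d_phi = (d - 2)/2, and of the two Fig. 6 examples.
import sympy as sp

d, NV, NE, NI, ND1, ND2 = sp.symbols('d N_V N_E N_I N_D1 N_D2')
dphi = (d - 2) / 2                                    # canonical dimension of phi

Delta = d*NV - dphi*(NE + 2*NI) - ND1 - 2*NI - (d - dphi*NE) + ND2   # Eq. (32)
L = NI - NV + 1                                                       # Eq. (34)
print(sp.simplify(Delta - (-d*L - (ND1 - ND2))))      # 0, i.e. Eq. (32) = Eq. (33)

# Tree diagram of Fig. 6: two 4-point vertices, one internal line, six legs.
print(sp.simplify(Delta.subs({NV: 2, NE: 6, NI: 1, ND1: 0, ND2: 0})))  # 0
# One-loop diagram of Fig. 6: a 4-point and a 6-point vertex, two internal lines.
print(sp.simplify(Delta.subs({NV: 2, NE: 6, NI: 2, ND1: 0, ND2: 0})))  # -d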
The observation that loop corrections are suppressed also holds for the RG flows, in which δP is induced by lowering the cutoff Λ. Hence the RG flows of the dimensionful couplings of the 1PI building blocks of the Wilsonian effective action freeze in the asymptotic region; however, the Wilsonian effective action itself does not. The latter conflicts with our earlier assumption, i.e. g_i(t) ∼ g_i^R e^{d_i t} with fixed g_i^R. Thus there is no freezing region on the RG flow diagram of the Polchinski equation.
The coordinate transformation ∆ δP [S t ] in the asymptotic region is written by,
∆ δP [S t [φ]] = 1 2 φ · δP −1 · φ − W tree [δP −1 φ],(35)
where W tree is the tree part of the connected diagrams. It can be easily realized that W tree is given by the Legendre transform of the effective action S t , since the 1PI part of W tree is nothing but the 'Legendre effective action' Γ t . ( Γ t should not be confused with Γ Λ given in Eq. (12), which is equal to the effective action S t in the asymptotic region 4 . ) Therefore we find,
W tree [J] =φ · J − S t [φ] − 1 2φ · δP −1 ·φ.(36)
We add the last term of r.h.s. of Eq. (36) to the effective action, because W t in Eq. (31) consists of the connected diagrams with bare action S t [φ] + 1 2 φ · δP −1 · φ. Here J andφ satisfy the relations,
J − δP −1φ = δ δφ S t ,φ = δ δJ W tree .(37)
One can find the coordinate transformation ∆ δP as
∆ δP [S t [φ]] = 1 2 S ′ t [φ] · δP · S ′ t [φ] + S t [φ],(38)
where the prime denotes the functional derivative with respect toφ ≡ φ − δP S ′ t [φ] . One can also easily check the following relations,
∆ δP 1 [∆ δP 2 [S t ]] = ∆ δP 1 +δP 2 [S t ],(39)∆ δP [∆ −δP [S t ]] = S t .(40)
The scheme dependence of the Wilsonian effective action may be understood as follows. As discussed in Ref. [4], the Wilsonian effective action consists of two different elements. In the high energy region, the vertices of the effective action give the connected Green's functions. In the low energy region, they coincide with those of the 1PI effective action, since all the articulation lines carry the infra-red cutoff [4]. At the boundary of these regions, the two quantities are connected to each other by the cutoff function. Therefore, the Wilsonian effective action is scheme dependent even after removing the cutoff. This scheme dependence turns out to be an obstacle in approximate analyses. (See Sec. 6.)
Scheme independence of the cutoff Legendre effective action
For the infinitesimal transformation δP ≪ 1, the scheme dependence of the generator of the connected Green's functions W t [J] becomes simpler. Now W t [J] is written in terms of the Wilsonian effective action S t [φ] by
W t [J] = 1 2 J · P Λ(t) · J − S t [P Λ(t) J].(41)
By using Eq. (38) we find
δW t = 1 2 W ′ t · δP Λ(t) P 2 Λ(t) · W ′ t .(42)
Since all the loop corrections are suppressed, the 1PI parts of the cutoff connected Green's functions in the asymptotic region are completely scheme independent. Indeed, by the Legendre transformation
W t [J] = J · φ − Γ t [φ] − 1 2 φ · P −1 Λ(t) − P −1 Λ(t)=0 · φ,(43)
and Eq. (42), one can find δΓ_t[φ] = 0. Namely, the renormalized trajectories of the 1PI vertices defined in different cutoff schemes approach each other in the asymptotic region, as is schematically shown in Fig. 7. The coordinate transformation on the functional space of the 1PI effective action Γ_t[φ] induced by Eq. (21) maps the fixed points, the renormalized trajectories and the critical surfaces to those of another scheme. Here we write this coordinate transformation as ∆_{(A→B)}. Once the effective action of scheme A (Γ_t^A[φ]) and that of scheme B (Γ_t^B[φ]) satisfy the relation,
Γ B t 1 [φ] = ∆ (A→B) [Γ A t 1 ],(44)
at a certain scale t_1, then it also holds at each scale t_2. (See Fig. 7.) Solving the RG flow equation, the cutoff Legendre effective actions finally arrive at the asymptotic region while maintaining the same relation (44). As discussed in the last subsection, ∆_{(A→B)} reduces to the identity mapping in the asymptotic region. Note that the RG flow of the dimensionful cutoff Legendre effective action is frozen in the asymptotic region. The continuum limits of field theories are found by tuning the initial boundary condition of the RG equation close to the critical surface or the fixed point, and they are described by the renormalized trajectories. As is seen above, the renormalized trajectories are cutoff scheme independent in the freezing region. This converging property of the RG flows of the cutoff Legendre effective action ensures that the solutions of the RG flow equations become cutoff scheme independent in the continuum limit. Needless to say, each theory must be specified by imposing renormalization conditions on the renormalized couplings. Then the other couplings are determined scheme independently. This structure should be compared with the scheme dependence of the Wilsonian effective action (or the Polchinski RGE). It is an advantageous feature of the Legendre flow equations that physically meaningful results can be obtained directly.
Sharp cutoff limit of Polchinski equation
In this and the next section, we confirm the equivalence between the sharp cutoff limit of the Polchinski equation and the Wegner-Houghton equation. It seems that the sharp cutoff RG equation, the Wegner-Houghton RG [1], is quite different from the smooth cutoff one, the Polchinski RG. We first clarify the sharp cutoff limit of the Polchinski RGE in this section, and then confirm the equivalence to the Wegner-Houghton equation in the next section.
Since we would like to consider the sharp cutoff limit, it is more convenient to write the cutoff propagator P Λ in terms of the cutoff function θ ε (q, Λ). Here θ ε is a smooth function with respect to the momentum q, the cutoff Λ and a smoothness parameter ε. In the limit of ε → 0, θ ε (q, Λ) becomes a step function θ(q − Λ), therefore,
P Λ (q) = 1 q 2 θ ε (q, Λ) ε→0 −→ 1 q 2 θ(q − Λ).(45)
We also introduce δ_ε(q, Λ), denoting the derivative of θ_ε with respect to the cutoff Λ, which satisfies
− ∂ ∂Λ θ ε (q, Λ) = δ ε (q, Λ) ε→0 −→ δ(q − Λ).(46)
It is pointed out in Ref. [4] that one has to be careful for the behavior of θ ε and δ ε in the sharp cutoff limit. The non-trivial and universal behavior of θ ε and δ ε is
δ ε (q, Λ)f (θ ε (q, Λ), q, Λ) ǫ→0 −→ δ(Λ − q) 1 0 dt f (t, q, Λ).(47)
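The limit (47) can be illustrated numerically. The sketch below assumes one particular smooth cutoff, θ_ε(q, Λ) = σ((q − Λ)/ε) with σ a logistic function (an arbitrary choice, not necessarily the one used in the paper), and checks that the q-integral of δ_ε f(θ_ε) reproduces ∫₀¹ f(t) dt rather than the value obtained by naively replacing θ_ε by its sharp limit inside f.

# Numerical illustration (sketch) of the nontrivial sharp-cutoff limit (47),
# assuming theta_eps(q, L) = sigma((q - L)/eps) with sigma a logistic function
# and delta_eps = -d theta_eps / dL.  The q-integral of
# delta_eps(q, L) * f(theta_eps(q, L)) reproduces int_0^1 f(t) dt.
import numpy as np
from scipy.integrate import quad

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def smeared_integral(f, Lam=1.0, eps=0.1):
    theta = lambda q: sigma((q - Lam) / eps)
    delta = lambda q: sigma((q - Lam) / eps) * (1.0 - sigma((q - Lam) / eps)) / eps
    val, _ = quad(lambda q: delta(q) * f(theta(q)), 0.0, 10.0, points=[Lam])
    return val

f = lambda t: t ** 2                               # any smooth function of the cutoff value
for eps in (0.2, 0.05, 0.01):
    print(eps, smeared_integral(f, eps=eps))       # -> 1/3 = int_0^1 t^2 dt
print("naive substitution would give", f(0.0))     # evaluating f at the sharp value is wrong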
For the derivation of formula (47), see Ref. [4]. Note that θ_ε with another momentum q′ ≠ q does not behave as in Eq. (47). Let S_Λ^{ε=0} be the effective action for the sharp cutoff case and S_Λ^ε be the one for the smooth cutoff case with θ_ε. These two effective actions should be related to each other by the formula (21),
exp (−S ε Λ [φ]) = exp 1 2 δD exp −S ε=0 Λ [φ] ,(48)
where
δD = d d q (2π) dδ δφ (q) · 1 q 2 θ ε (q 2 ) − θ(q − Λ) ·δ δφ (−q) .(49)
Here, S_Λ^ε[φ] depends on θ_ε, so that we have to take Eq. (46) into account. For the sake of simplicity, we write θ_ε(q_i²) − θ(q_i − Λ) ≡ ∆_i. The cutoff functions θ_ε contributing to the non-trivial limit (47) should have a common argument q with that of δ_ε in the RG flow equation. They lie only on the external legs. Diagrammatically, they can be found in Fig. 8. In this diagram, all the momenta of the θ_ε(q) are the same as that of δ_ε. Here, the filled circles in Fig. 8 correspond to the self energy Σ_Λ^ε(q). They are summed up and construct the 'full propagator' times the inverse cutoff propagator, i.e. (q²/∆)/(q²/∆ + Σ_Λ^ε(q)). The pre-factor q²/∆, the inverse cutoff propagator, is required, since the argument of W_Λ in Eq. (31) is δP^{-1}φ.
In general, one can imagine other diagrams, like Fig. 9, into which the momentum p flows. In this figure, the filled circle denotes the multi-point 1PI vertex. Since the cutoff function θ_ε(p + q) behaves differently at p = 0 in the sharp cutoff limit, a discontinuous momentum dependence appears in the beta-functional. If the field φ(p) is a smooth function, there are no finite contributions, since the measure of the point p = 0 is zero. However, since φ(p) may contain a distribution like a VEV ϕ(2π)^d δ^d(p), we separate such distributions explicitly. Furthermore, since we would like to claim the equivalence between the Wegner-Houghton equation and the Polchinski equation in the sharp cutoff limit including these discontinuous momentum dependences, we also introduce the singularities like δ^d(q − q_i),
φ(q) → (2π)^d δ^d(q − q_i) ϕ_i + φ(q).   (50)
Now the effective action acquires a ϕ_i dependence, i.e. Ŝ_Λ^ε = Ŝ_Λ^ε[φ, ϕ_i], and it also satisfies the same formula (48). In this case, Feynman diagrams like Fig. 9 contribute to the sharp cutoff flow equation via the combinations ϕ_i^{n_i} with Σ_i n_i p_i = 0, n_i ∈ N, that is, at the point p = 0 of the diagram in Fig. 9. The corrections from the ϕ_i^{n_i} terms can be absorbed by a redefinition of the self energy Σ_Λ^ε(q). Consequently we regard the self energy Σ_Λ^ε as a ϕ_i dependent function. One may wonder whether the self energy Σ_Λ^ε(q) has a discontinuous momentum dependence, since there are θ_ε(q) with momenta in common with the δ_ε's in the loop integrals giving Σ^ε. However, since the corresponding regions of the loop momentum integration have vanishing measure, this discontinuous momentum dependence does not contribute to Eq. (47) at all. Similarly, all the 1PI building blocks smoothly approach those for the sharp cutoff. Hence, what we must check are only the articulation lines. Furthermore, the internal lines with momenta q + p vanish, since ∆(q + p) goes to zero in the limit ε → 0. Therefore we need to care about the external legs only.
Finally, one can conclude that the relevant scheme dependence of n(> 2)-point functions comes only from the external legs. Diagrammatically, they can be illustrated as in Fig. 10. The scheme dependence of the two-point function S (2) Λ is given by q 2 /∆ minus the 'Full propagator' times (q 2 /∆) 2 and is different from those of other vertices. Therefore we separate the two point function in the effective action and write as
S Λ [φ] = 1 2 φ · S (2) Λ · φ +Ŝ Λ [φ],(51)
where Ŝ_Λ[φ] is the part composed of the n(> 2)-point functions.
Taking account of Eq. (31), we can extract the relevant scheme (∆) dependence as follows. For the two-point function S_Λ^{(2)},
δ 2 S ε Λ δφ(p)δφ(q) φ=0 = q 2 ∆ 2 ∆ q 2 − 1 q 2 /∆ + Σ ε Λ (q) · (2π) d δ d (p + q),(52)
and for n(> 2)-point functionsŜ Λ ,
S ε Λ [φ] = n =2 1 n! n i=1 d d q i (2π) d q 2 i /∆ i · φ(q i ) q 2 i /∆ i + Σ ε Λ S ε=0 Λ (q 1 , · · · , q n ) + · · · ,(53)
where dots '· · ·' have no significant dependence on ∆ and vanish in the sharp cutoff limit. As mentioned above Σ ε Λ and S ε Λ depend on ϕ i , e.g. Σ ε Λ (q) = Σ(q, ϕ n i i ) andŜ Λ [φ, ϕ n i i ]. Let us first see the sharp cutoff limit of Eqs. (52) and (53). In this limit, ∆ vanishes and Σ ε Λ (q) can be replaced by Σ ε=0 Λ (q) safely. For n > 2 we easily find
S ε=0 Λ [φ] = n =2 1 n! n i=1 d d q i (2π) d φ(q i )S ε=0 Λ (q 1 , · · · , q n ).(54)
For the two-point function, we can rewrite Eq. (52) as,
Σ ε=0 Λ (q) = q 2 Σ ε Λ / q 2 + ∆ · Σ ε Λ ,(55)
where Σ_Λ^{ε=0} corresponds to the two-point function of the sharp cutoff effective action S_Λ^{ε=0}[φ],
δ²S_Λ^{ε=0}/δφ(p)δφ(q) |_{φ=0} = Σ_Λ^{ε=0}(q, ϕ_i^{n_i}) (2π)^d δ^d(p + q).   (56)
We must fix the θ(0) ambiguity before letting ε → 0 in the RG flow equation, since ∆ in Eqs. (52) and (53) is given by ∆(q) = θ ε (q/Λ) − θ(q − Λ) and satisfy
δ ε (q/Λ)f (∆(q)) ǫ→0 −→ δ(q − Λ) 1 0 dtf (t − θ(0)).(57)
We simply set θ(0) = 0 here. It is different from the ordinary convention; θ(0) = 1/2. This is because, we implicitly used θ(0) = 0 to derive the Wegner-Houghton equation. The 'shell modes' φ s integrated out by the RG transformation have momenta Λ − δΛ < q ≤ Λ, lower than the scale Λ of the effective action S Λ . In the limit δΛ → 0, the shell momentum q reaches to Λ from below. To make this limit well-defined, we should employ the left semi-open interval Λ − δΛ < q ≤ Λ. Hence we can say that the fluctuations with q > Λ are incorporated in the Wilsonian effective action S Λ [φ], while that with q = Λ are not. It means that our infra-red cutoff θ(q − Λ) satisfies θ(0) = 0! Since the 'delta'-function δ ε (q, Λ) lies on the Λ derivative of the cutoff propagator, what we must check are the following two terms. One is a field φ(q) dependent term,
−δ S Λ δφ(q) · ∂ ∂Λ P Λ ·δ S Λ δφ(−q) + ∂ ∂Λ P Λ ·δ 2Ŝ Λ δφ(q)δφ(−q) ,(58)
where the first term corresponds to the 'dumbbell' diagram and the second term corresponds to the 'ring' diagram in Fig. 11. One can easily realize that the above equation is proportional to 1 q 2 δ ε (q, Λ)
q 2 /∆ q 2 /∆ + Σ ε Λ (q) 2 ε→0 −→ δ(q − Λ) q 2 + Σ ε=0 Λ (q) ,(59)
where we were taking account of the following relations,
δS ε Λ δφ(q) ∼ q 2 q 2 + ∆ · Σ ε=0 Λ ·δ S ε=0 Λ δφ(q) + no significant terms,(60)andδ 2Ŝε Λ δφ(q)δφ(−q) ∼ q 2 q 2 + ∆ · Σ ε=0 Λ (q) 2 ·δ 2Ŝε=0 Λ δφ(q)δφ(−q) + no significant terms,(61)
for the functional derivative with respect to the field φ(q).
Another is the φ(q) independent term which corresponds to the part evaluated in the Local Potential Approximation (LPA),
− ∂ ∂Λ P Λ · S(2)Λ (q) = 1 q 2 δ ε (q, Λ) · q 2 Σ ε Λ q 2 + ∆ · Σ ε Λ · (2π) d δ d (p + q).(62)
By taking ε → 0, we find the sharp cutoff limit of this as
δ(q − Λ) ln 1 + Σ ε=0 Λ /q 2 · (2π) d δ d (p + q).(63)
They correspond to the diagrams given in Fig. 11. Using these results, the sharp cutoff limit of the Polchinski RG equation becomes,
∂ ∂Λ S ε=0 Λ = 1 2 d d q (2π) d δ(q − Λ) q 2 + Σ ε=0 Λ (q, ϕ n i i ) δ S ε=0 Λ δφ(q) ·δ S ε=0 Λ δφ(−q) −δ 2Ŝε=0 Λ δφ(q)δφ(−q) − 1 2 (2π) d δ d (0) d d q (2π) d δ(q − Λ) ln q 2 + Σ ε=0 Λ (q, ϕ n i i ) .(64)
The canonical scaling of the momentum p µ ∂/∂p µ does not affect these results, since ∂∆/∂p µ → 0 as ε → 0.
Comparison with the Wegner-Houghton equation
To confirm the equivalence between Eq. (64) and the Wegner-Houghton RG equation, we substitute Eq. (50) to the Wegner-Houghton equation. Let us start from the following formula which gives the effective action S Λ−δΛ up to O(δΛ 2 ).
S Λ−δΛ = S Λ − 1 2δ S Λ δφ s · P −1 Λ +δ 2 S Λ δφ sδ φ s −1 ·δ S Λ δφ s + 1 2 Tr ln P −1 Λ +δ 2 S Λ δφ sδ φ s ,(65)
where φ s denotes the 'shell mode' whose support is given by the condition p 2 = Λ 2 . Dot (·) denotes the integral Λ Λ−δΛ d d q/(2π) d . It can be realized that Eq. (65) involves the higher contribution of O(δΛ 2 ). First, we defineŜ Λ by,
δ 2 S Λ δφ s (p)δφ s (q) φs=0 ≡ Σ Λ (q, ϕ n i i )(2π) d δ d (p + q) +δ 2Ŝ Λ δφ s (p)δφ s (q) φs=0 .(66)
Here, Σ Λ is the same as Σ ε=0 Λ given by Eq. (56) before. The second term of the r.h.s. of Eq. (66) is regular at q = −p.
Let us rewrite Eq. (65) in matrix notation. We define a matrix M by,
M p,q ≡δ 2Ŝ Λ δφ s (p)δφ s (q) φs=0 .(67)
The matrix M p,q may have off-diagonal singularities, e.g. δ(p + q + k) due to ϕ i . The first derivative of the effective action with respect to the field corresponds to a 'vector' v;
v q ≡δ S Λ δφ s (q) φs=0 .(68)
Using these notations, the r.h.s. of Eq. (65) can be expressed by the following equation,
1 2 v T · (P −1 + Σ)1 + M −1 · v − 1 2 Tr ln (P −1 + Σ)1 + M ,(69)
where the unit matrix 1 corresponds to (2π)^d δ^d(p + q). We expand this with respect to the matrix M. Since each momentum integral ∫_{Λ−δΛ}^{Λ} d^dq brings a factor δΛ, only the first few terms can contribute to the RG flow equation, therefore
1 2 v T · (P −1 + Σ)1 −1 · v − 1 2 Tr ln (P −1 + Σ)1 − 1 2 Tr (P −1 + Σ)1 −2 · M + · · · ,(70)
where the dots · · · are of higher order in δΛ. One of the higher order contributions is written as follows,
(δS_Λ/δφ_s) · P_s · (δ²Ŝ_Λ/δφ_s δφ_s) · P_s · (δS_Λ/δφ_s),   (71)
where P s (q) is the propagator of the shell mode φ s whose support is restricted to the region Λ − δΛ < q ≤ Λ. Eq. (71) corresponds to Fig. 12. Since the cross section of the integral region of p and that of q is O(δΛ 2 ), the contribution from the diagram given in Fig. 12 becomes O(δΛ 2 ). If k vanishes, the volume of the integral region above becomes the first order of δΛ, because two 'spheres' completely coincide with each other. Hence the value of the RG beta-function jumps at the point k = 0. However since the field φ is a smooth function of the momentum, not the distribution, we can drop it. It contributes through the distribution given in Eq. (50) by the combinations ϕ n i i with n i p i = 0. They are already taken in the beta-function by the ϕ n i i dependence in the self energy Σ Λ . Consequently, the Wegner-Houghton equation can be found as,
S Λ − S Λ−δΛ = δΛ 2 d d q (2π) d δ(q − Λ) q 2 + Σ Λ (q, ϕ n i i ) δ S Λ δφ(q) ·δ S Λ δφ(−q) −δ 2Ŝ Λ δφ(q)δφ(−q) − δΛ 2 (2π) d δ d (0) d d q (2π) d δ(q − Λ) ln q 2 + Σ Λ (q, ϕ n i i ) + O(δΛ 2 ). (72)
This flow equation is nothing but the sharp cutoff limit of the Polchinski equation (64).
Conclusion and remarks
In this article, we investigated the cutoff scheme dependence of the Wilsonian effective action. It can be reinterpreted as a coordinate transformation on the theory space, written formally as Eq. (21). We have studied it in two limiting cases. One is the asymptotic region, i.e. t → ∞, and the other is the sharp cutoff limit, i.e. ε → 0. In both cases, we could write down the cutoff scheme dependence in a form simple enough to explore the RG flows.
As we have shown in Sec. 3, the scheme dependence of the renormalized trajectories in the asymptotic region t → ∞ remains for the Wilsonian effective action. Besides, the RG flow of the Wilsonian effective action does not freeze in t → ∞. The origin is as follows. The vertices of the Wilsonian effective action consist of two different quantities; the connected Green's function at high energy region (p > Λ) and the 1PI vertices at the low energy region (p < Λ). The boundary between these regions are connected scheme dependently. (See also Ref. [4].) Therefore the Wilsonian effective action itself is not a physical quantity.
Moreover, we have also shown the scheme independence of the Legendre effective action, [7,4] or equivalently the 1PI building blocks of the Wilsonian effective action, on the renormalized trajectories. Recalling the statements in Sec. 3, we can say that if the RG flow of the dimensionful Legendre effective action Γ Λ [φ] freezes on the renormalized trajectory in the asymptotic region, i.e. in the statistical continuum limit Λ 0 → ∞, then our Γ Λ [φ] should be scheme independent.
In perturbation theory, this can be easily realized. Indeed, all the cutoff scheme dependent contributions, i.e. the coefficients of the divergences, are completely absorbed into certain counterterms order by order, and the remaining finite terms are scheme independent in the limit Λ_0 → ∞. Of course, needless to say, we should insist on a common renormalization condition (or equivalently the subtraction rule). In the non-perturbative case, however, the problem is more complicated, because the ordinary renormalization procedure will not work in general, e.g. for a theory around a non-Gaussian fixed point. Hence, the cancellation of divergences, and therefore of the cutoff scheme dependent constants, can be confirmed only case by case, if possible.
Turning to the Exact Renormalization Group, we can recapture it from another point of view. The scheme dependence of the counterterms corresponds to that of the fixed point and/or of the critical surface, and the scheme independence of the total solution can be appreciated by that of the renormalized trajectory in the asymptotic region. All these are described by a coordinate transformation (21). For our purpose, it is sufficient to investigate the asymptotic region of Eq. (21) without using the explicit solutions, since we have expected the asymptotic behavior g i (t) ∼ g R i e d i t and do not need the explicit value g R i . Once we assume existence of the asymptotic region, then the scheme independence of Γ Λ [φ] as Λ → 0 is confirmed. (Recall the discussion in Sec. 3.) For massive theories, the scheme dependence of Γ Λ [φ] decays like exp(−dt) ∼ (Λ/M R ) d as t → ∞. Instead, for the massless case, one may start from a massive case and then letting M R → 0.
We also confirm the equivalence between the Wegner-Houghton equation and the Polchinski equation in the sharp cutoff limit. It seems that these equations are much different from each other even though they are expected to be equivalent. We can prove equivalence of these two ERGs by help of Eq. (21) which describes the scheme dependence of the Wilsonian effective action. The superficial difference occurs by the strong scheme dependence of the Wilsonian effective action. As we showed, the crucial cutoff scheme dependence of the Wilsonian effective action lies in the external legs.
Finally, we would like to comment on the cutoff scheme dependence of the approximate solutions. The ERG flow equations are approximated by projecting them onto smaller dimensional subspaces of the original theory space. In the derivative expansion, for example, we may employ these subspaces as the space of a finite number of the coefficient functions {V 0 , V 2 , · · · , V i k } defined by the following equation,
S_Λ[φ] = ∫ d^dx [ V_0(φ) + ½ (∂_µφ)² V_2(φ) + ½ (□φ)² V_4^1(φ) + · · · ],   (73)
The subscript k of the coefficient function denotes the degrees of the derivatives and the superscript i of it labels the independent k-th derivative vertices. Then, the ERG flow equations are reduced to the coupled partial differential equations for the coefficient functions V i k (φ). One can easily improve the approximation systematically by enlarging the subspace {V 0 , V 2 , · · · , V i k } step by step. Especially, the approximation with k = 0 is called the 'local potential approximation' (LPA). This procedure preserves the nonperturbative nature of the ERG flow equations.
The scheme dependence given by Eq. (38) is infinitely enhanced in the derivative expansion. By dimensional analysis, the Taylor expansion of δP(q), whose value changes rapidly near the infra-red cutoff q ∼ Λ, is an expansion with respect to the combination q²/Λ² ≫ 1. Therefore the scheme dependence of the coefficient functions V_k^i(φ) in Eq. (73) becomes stronger and stronger as the infra-red cutoff Λ decreases. The scheme dependence of V_k^i(φ) behaves as 1/Λ^{2k}, and it eventually diverges in the limit Λ → 0. It means that the derivative expansion and the limit Λ → 0 do not commute. Hence the cutoff scheme dependence of V_k^i(φ) is strong enough to prevent physical predictions. The physical information is completely washed out, except for the potential part V_0(φ), which is the 1PI effective potential.
The RG beta-functionals of the coefficient functions of Γ_Λ[φ] like Eq. (73) are also cutoff scheme dependent in the region t → ∞, since by dimensional analysis the expansion parameter there is ∂/ε ∼ ∂/Λ, where ε stands for the smoothness parameter given in Sec. 4. The higher derivative contributions finally overcome the suppression factor (Λ/M_R)^d. Hence the RG beta-functionals of the higher derivative operators blow up to infinity. In the limit t → ∞, the cutoff scheme approaches the sharp one, since ε ∼ Λ → 0. It is known that these diverging series can be summed up and lead to non-analytic momentum dependence of the vertices, e.g.
√ p µ p µ . This spurious scheme dependence can be avoided if we work on the sharp cutoff Legendre flow equation [4,5].
∝ exp (−S t [δ/δJ]) Dφ ′ exp − 1 2 φ ′ · δP −1 + · φ ′ + J · (φ + φ ′ ) J=0 = Dφ ′ exp − 1 2 φ ′ · δP −1 + · φ ′ − S t [φ + φ ′ ] = exp − 1 2 φ · δP −1 + · φ Dφ ′ exp − 1 2 φ ′ · δP −1 + · φ ′ − S t [φ ′ ] + φ ′ · δP −1 + · φ = exp − 1 2 φ · δP −1 + · φ + W t [J = δP −1 + · φ] ,(74)
where δD + [J] is given by,
δD + [J] = d d q (2π) d J(q)δP + (q)J(−q).(75)
The negative part of the deviation δP − (q) needs the special care, since the path-integral in the fourth line does not converge. However our final result can be hold also for the negative part δP − (q), since Eq. (31) is the identity of δP . In other words, Eq. (31) means the graph by graph correspondence of the Feynman diagrams. It does not restrict our observations in Secs. 3-5, since we need the diagramatical representation of Eq. (74).
Figure 1: The examples of the infra-red cutoff functions θ(x) = xC(x)/(1 + xC(x)).
Figure 2: The diagrams of the Polchinski equation.
Figure 3: The diagrams of Eq. (13). The filled circles here correspond to the vertices of Γ_Λ. The dots denote the higher terms in the vertex of Γ_Λ.
Figure 4: The diagrams of Eq. (17). The filled circles correspond to the vertices of S_Λ.
Figure 5: The coordinate transformation (21) changes the cutoff scheme (A) to another cutoff scheme (B) in the Polchinski equation.
Figure 6: Examples of the factor ∆. In this figure, d_φ is the canonical dimension of the field, i.e. d_φ = (d − 2)/2. The dots denote the higher order corrections.
Figure 7: The scheme dependence of the renormalized trajectories of the 1PI vertices (the solid lines). The dotted arrows are the coordinate transformation (21) and the shaded region is the asymptotic (freezing) region. The dimensionful cutoff Legendre effective action is frozen in this region and the renormalized trajectories with the different cutoff schemes come close to each other.
Figure 8: This kind of diagram is crucial for the sharp cutoff limit of the Polchinski equation. δ_ε corresponds to ∂θ(q²)/∂q² in the l.h.s. of Eq. (18).
Figure 9: The beta-function and also the effective action have the discontinuous momentum dependence coming from these diagrams.
Figure 10: These kinds of diagrams behave as Eq. (46). S_Λ^{ε=0} is the vertex with the sharp cutoff and the 'full propagator' is the propagator with the smooth one. Other contributions, shown as the 'trivial contribution', vanish in the limit ε → 0.
Figure 11: These diagrams correspond to Eqs. (59) and (63). The third graph has no dependence on the field φ(q) but on the VEV ϕ. It contributes to the LPA flow equation, and should be compared with the LPA Wegner-Houghton RGE [1].
Figure 12: A diagram into which the external momentum k flows.
Figure 13: The integral region in the momentum space for the diagram given in Fig. 12. The volume of the cross section of the two integral regions is of order δΛ².
By the definition, Γ_Λ[φ] coincides with the ordinary Legendre effective action at Λ = 0, i.e. Γ_{Λ=0}[φ] = Γ[φ]. One can find the RG flow equation for the dimensionless effective action Γ_t[φ] in the same manner as we did for the Polchinski one.
In this paper, we naively assume that the coordinate transformation given by Eq. (21) is well-defined. Since both the infra-red and ultra-violet regions are regularized, the perturbative expansion of Eq. (21) is finite at all orders.
² Some of the derivatives will be replaced by the loop momenta q ∼ 1, which do not contribute to ∆.
³ Namely, we do not perform the derivative expansion here. The asymptotic behavior in the derivative expansion will be discussed in § 6.
⁴ All the loop (quantum) corrections for the 1PI vertices are dropped in the asymptotic region. It means Γ_t = S_t.
Appendix
Equation (31) in Sec. 3 is derived as follows. First, let us consider the positive definite deviation of the cutoff propagator δP(q) = δP_+(q) > 0, since we call for the Gaussian integral with positive δP(q). Then one can find
References
K.G. Wilson and I.G. Kogut, Phys. Rep. 12, 75 (1974).
F.J. Wegner and A. Houghton, Phys. Rev. A8, 401 (1973); J. Polchinski, Nucl. Phys. B231, 269 (1984).
G. Keller, C. Kopper and M. Salmhofer, Helv. Phys. Acta 65, 32 (1992).
U. Ellwanger and C. Wetterich, Nucl. Phys. B423, 137 (1994).
D.U. Jungnickel and C. Wetterich, Lectures given at Workshop on the Exact Renormalization Group, Faro, Portugal, 10-12 Sept. 1998, hep-th/9902316; Phys. Rev. D53, 5142 (1996); Eur. Phys. J. C1, 669 (1998); C2, 557 (1998); Phys. Lett. B389, 600 (1996); Heidelberg preprints HD-THEP-96-40, hep-ph/9610336.
J. Berges, D.U. Jungnickel, and C. Wetterich, Phys. Rev. D59, 34010 (1999).
M. Reuter and C. Wetterich, Phys. Rev. D56, 7893 (1997).
K-I. Aoki, K. Morikawa, J-I. Sumi, H. Terao and M. Tomoyose, Prog. Theor. Phys. 97, 479 (1997); Prog. Theor. Phys. 102, 1151 (1999); hep-th/9908043, to be published in Phys. Rev. D61.
H. Kodama and J-I. Sumi, hep-th/9912215, to appear in Prog. Theor. Phys.; K. Kubota and H. Terao, Prog. Theor. Phys. 102, 1163 (1999).
K-I. Aoki, K. Takagi, H. Terao and M. Tomoyose, hep-th/0002038.
C. Wetterich, Z. Phys. C57, 451 (1993).
N. Tetradis and C. Wetterich, Nucl. Phys. B422, 541 (1994).
T.R. Morris, Phys. Lett. B329, 241 (1994).
T.R. Morris, Int. J. Mod. Phys. A9, 2411 (1994).
T.R. Morris, Nucl. Phys. B495 [FS], 477 (1997).
K-I. Aoki, K. Morikawa, W. Souma, J-I. Sumi and H. Terao, Prog. Theor. Phys. 95, 409 (1996); Prog. Theor. Phys. 99, 451 (1998).
C. Wetterich, Phys. Lett. B301, 90 (1993).
M. Bonini, M. D'Attanasio, and G. Marchesini, Nucl. Phys. B409, 441 (1993).
J.F. Nicol, T.S. Chang and H.E. Stanley, Phys. Rev. Lett. 33, 540 (1974).
T.R. Morris, Phys. Lett. B334, 355 (1994).
T.R. Morris, Phys. Lett. B334, 355 (1994).
A. Hazenfratz and P. Hazenfratz, Nucl. Phys. B270 [FS16], 269 (1986).
M. Alford, Phys. Lett. B336, 237 (1994).
N. Tetradis and C. Wetterich, Nucl. Phys. B422, 541 (1994).
K-I. Aoki, K. Morikawa, W. Souma, J-I. Sumi and H. Terao, Prog. Theor. Phys. 99, 451 (1998).
R.D. Ball, P.E. Haagensen, J.I. Latorre and E. Moreno, Phys. Lett. B347, 80 (1995).
S. Weinberg, "Ultraviolet divergences in quantum theories of gravitation" (Cambridge Univ. Press, Cambridge, Eng., 1979), p. 790.
STEM AND TOPOLOGICAL ENTROPY ON CAYLEY TREES

Jung-Chao Ban, Chih-Hung Chang, Yu-Liang Wu, and Yu-Ying Wu

Abstract. We consider the existence of the topological entropy of shift spaces on a finitely generated semigroup whose Cayley graph is a tree. The considered semigroups include free groups. On the other hand, the notion of stem entropy is introduced. For shift spaces on a strict free semigroup, the stem entropy coincides with the topological entropy. We reveal a sufficient condition for the existence of the stem entropy of shift spaces on a semigroup. Furthermore, we demonstrate that the topological entropy exists in many cases and is identical to the stem entropy.

DOI: 10.1007/s11040-021-09411-4; arXiv:2110.08960
Introduction
Many simplified mathematical models were proposed to understand phase transition; one of the most famous ones refers to the Ising model. Consider a ferromagnetic metal piece, which consists of a massive number of atoms in the thermal equilibrium. Suppose, ideally, that these atoms are located at the sites of a crystal lattice Z d . Each atom shows a magnetic moment resulted from the angular moments, which is, for a simplified model, only capable of two orientations. The set of configurations is X = A Z d , where A = {1, −1} represents the orientations of spins "up" and "down". The Ising model is defined by specifying a Hamiltonian (or potential) describing the interaction between spins and then studying the corresponding Gibbs states. Remarkably, there is a unique Gibbs state for d = 1, whereas for d ≥ 3, there are infinitely many Gibbs states [17].
The notion of a Gibbs state (or a Gibbs measure) dates back to R.L. Dobrushin (1968-1969) and O.E. Lanford and D. Ruelle (1969), who proposed it as a mathematical description of an equilibrium state of a physical system which consists of a vast number of interacting components [12,13,14,15,19]. Gibbs and equilibrium states play a crucial role in the theory of thermodynamic formalism for dynamical systems. A classic example is the investigation of uniformly hyperbolic differentiable systems such as Anosov and Axiom A diffeomorphisms. The orbits of these systems can be encoded as infinite sequences of finitely many symbols; the collection of these symbolic sequences forms a superior symbolic dynamical system called a shift of finite type (also known as a topological Markov chain). After constructing the invariant measures, the study of their properties is yielded by the construction of equilibrium states in the sense of statistical mechanics, which turn out to be Gibbs states [7,11,18,28]. A remarkable fact is that a Gibbs state for a given type of interaction may not be unique; this, in physical systems, means a phase transition. On the other hand, equilibrium states are defined by a variational principle. More specifically, an equilibrium state maximizes the system's entropy under the constraint of fixed mean energy. While a Gibbs state is always an equilibrium state, the reverse fails in general. However, equilibrium states are also Gibbs states provided the given potential function is regular enough [17].
Investigation of Gibbs states for physical models on a Cayley tree has been received considerable attention recently. One of many motivations is that, in the study of the Ising model on a Cayley tree, a new type of phase transition was revealed [23,21,22]. Additionally, Zachary showed that, for ferromagnetic or antiferromagnetic systems on a Cayley tree, either the set of Gibbs states contains a single point or contains infinitely many points [29,30]. A classical construction of Gibbs states on a Cayley tree is the method of Markov random field theory and recurrent equations of this theory; new tools such as group theory, information flows on trees, and node-weighted random walks have been implemented in the modern theory of Gibbs states [27].
Aside from the physical significance of systems on a Cayley tree, there are fruitful phenomena observed in these chaotic systems from the mathematical aspect. For instance, the topological conjugacy between two superior symbolic systems (shifts of finite type, to be precise) is decidable; irreducible shifts of finite type are chaotic in Devaney's sense; a stronger type of irreducibility is decidable. See [1,2,3,4,5,9,16] and the references therein for more details.
As a Gibbs measure maximizes the system's entropy on a crystal lattice Z d , it is of interest whether this remains true for the system on a Cayley tree. The variational principle for a system T is described, when one considers trivial potential, as h(T ) = h(µ), where µ is a Gibbs measure, and h(T ) and h(µ) stand for the topological and measure-theoretic entropies, respectively. In the theory of symbolic dynamical systems on a crystal lattice, the topological entropy is defined as the limit of orbits' contribution in the ball of finite volumes. Such a definition is well-defined since Z d is an amenable group [8]. However, the absence of Følner nets in a Cayley tree makes the definition of topological entropy a controversy; Petersen and Salama extended the study of topological entropy to systems on a rooted Cayley tree via the distribution of orbits on balls [24,25]. Notably, this is how a Gibbs measure for a ferromagnetic Ising model on a Cayley tree is constructed (cf. [27]).
It is natural to generalize the definition of topological entropy to systems on a Cayley graph, which corresponds to a group. Under the circumstance of the existence of topological entropy, the problem of whether a Gibbs state is an equilibrium state then follows. In other words, the existence of the topological entropy is essential for the study of the variational principle of shift spaces on a Cayley graph. This motivates the primary concern of the present investigation.
Problem. What kinds of groups ensure the existence of the topological entropy of systems on them?
This paper focuses on the existence of topological entropy of systems on a Cayley graph corresponding to a class of finitely generated infinite semigroups. Let G be a finitely generated infinite semigroup and let S ⊂ G be a finite generating subset of G. Suppose a binary matrix K indexed by S describes the relations between any two generators. To be more specific, for s, s′ ∈ S, K(s, s′) = 0 if and only if ss′ = 1_G, the identity element of G. Observe that the Cayley graphs of these groups include the rooted Cayley trees and the Bethe lattices (i.e., regular Cayley trees). In [6], the authors demonstrated that the topological entropy exists for Markov shifts (or topological Markov chains) on a Fibonacci-Cayley tree, which is a Cayley graph corresponding to a semigroup whose growth rate is the golden mean.
The investigation starts with introducing the notion of the stem entropy of a shift space on G, which, roughly speaking, represents the contribution of complexity on each branch. The existence of the stem entropy follows from the condition that the matrix K is primitive (Theorem 3.1) or is irreducible (Proposition 3.7). Beyond that, we demonstrate the coincidence of the stem entropy and the topological entropy provided K is a full matrix, which is equivalent to the condition that G is a free semigroup. Section 4 applies the existence of the stem entropy to demonstrate whether the topological entropy of a shift space on G exists. Firstly, Theorem 4.1 reveals that the topological entropy of Markov shifts on G exists provided K has a full row, which is a generalization of [6]. Another main result in this section addresses that, suppose the summation of each row of K is identical, the topological entropy of a hom Markov shift on G exists (Theorem 4.3); hom shifts, initiated from physical systems, are symmetric and isotropic Markov shifts (cf. [10]). It is remarkable that the topological entropy, once it exists, coincides with the stem entropy (Theorems 4.1 and 4.3 and Proposition 4.6). On the other hand, the regular Cayley tree satisfies the structure required in Theorem 4.3. An elegant examination of the hard square shift (or the golden mean shift) on a binary free group goes to Piantadosi [26].
Section 5 is devoted to graph representation of Markov shifts on G. Under the restriction of Markov shifts on a Cayley graph, the previous sections' results are unified as one. Theorem 5.4 manifests that, generally speaking, if the graph representation of the system is strongly connected and has a pivot, then the topological entropy exists. Most importantly, the strong connectedness of the graph is equivalent to the irreducibility of a one-dimensional Markov shift; a strongly connected graph representation has a pivot if and only if its associated one-dimensional Markov shift is mixing. It is seen that these conditions are both decidable (Proposition 5.6). Some numerical experiments are carried out in the Appendix for the existence of topological entropy of general systems on G. Further elucidation is under preparation.
Preliminaries
Let G be a finitely generated semigroup and S k = {s 1 , s 2 , · · · , s k } ⊂ G a finite generating subset of G. Suppose the relations between the generators of G = S k |R are represented by a binary matrix K indexed by S k as R = {s i s j : K(s i , s j ) = 0}. In other words,
s i s j = 1 G if and only if K(s i , s j ) = 0,
where 1 G is the identity element of G. The (right) Cayley graph of G with respect to S k is the directed graph T such that the vertex set is G and the edge set is E = {(g, gs) : g ∈ G, s ∈ S k }. It follows that T is an infinite tree. On the other hand, every g ∈ G has a unique minimal representation g = g 1 g 2 · · · g n (with respect to S k ). The length of the g, written as |g|, is defined by |g| = min{n : g = g 1 g 2 · · · g n , g i ∈ S k }.
Obviously, 1 G is the only element of length 0, and g = g 1 g 2 · · · g n , g i ∈ S k , is the unique minimal representation if |g| = n and K(g i , g i+1 ) = 1 for 1 ≤ i ≤ n − 1. Such a minimal representation is assumed for every element throughout the paper unless mentioned otherwise.
Example 2.1. (a) Let S 2 = {a, b} and K = E 2 the full 2 × 2 matrix. Then G is a free semigroup whose Cayley graph T is a binary rooted tree.
(b) Let S_4 = {s_1, s_2, s_3, s_4} and K ∈ {0, 1}^{4×4} be given by K(s_i, s_j) = 0 if and only if i + j is even and i ≠ j. Then G = F_2 is a free group of rank 2. Figure 2 presents part of the Cayley graph of F_2.
(c) Suppose S k = {s 1 , s 2 , . . . , s k }, k ≥ 2, and K is the k × k binary matrix given by K(s i , s j ) = 0 if and only if i = j. It is seen that G is a free product of k cyclic groups of the second order, and its Cayley graph T = Γ k−1 is the Bethe lattice (regular Cayley tree) of order k.
(d) Let S 2 = {a, b} and K = 1 1 1 0 the golden mean matrix. The Cayley graph of G is called the Fibonacci-Cayley tree in [6] since the growth rate of {g ∈ G : |g| ≤ n} is the golden mean.
It is noteworthy that this class of semigroups covers many interesting examples with the exceptions of some simple ones nevertheless, such as N 2 or cyclic groups.
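For concreteness, the growth of the balls ∆_n determined by a relation matrix K can be computed directly, since the K-admissible words g_1 · · · g_l are exactly the minimal representations. The following sketch (with hypothetical helper names, not from the paper) reproduces the golden-mean growth of Example 2.1 (d).

# Sketch: count the elements of the balls Delta_n of the Cayley graph
# determined by a relation matrix K.  Words g_1 ... g_l with
# K(g_i, g_{i+1}) = 1 are exactly the minimal representations, so the number
# of elements of length l is the sum of the entries of K^(l-1).
import numpy as np

def ball_sizes(K, N):
    K = np.asarray(K, dtype=object)                     # exact integer arithmetic
    sizes, M = [1], np.eye(K.shape[0], dtype=object)    # M runs over K^(l-1)
    for _ in range(1, N + 1):
        sizes.append(sizes[-1] + int(M.sum()))
        M = M.dot(K)
    return sizes

# Example 2.1 (d): the golden-mean matrix.  The growth rate of {g : |g| <= n}
# approaches the golden mean (1 + sqrt(5))/2 ~ 1.618.
K_fib = [[1, 1], [1, 0]]
sizes = ball_sizes(K_fib, 25)
print(sizes[:8])                 # 1, 3, 6, 11, 19, 32, 53, 87
print(sizes[-1] / sizes[-2])     # ~ 1.618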
2.1. Shift spaces on a Cayley tree. Let A be a finite alphabet. A labeled tree (or configuration) is a function t : G → A for which t_g := t(g) is the label attached to g ∈ G, and the set A^G consisting of all labeled trees is called the full tree shift or full shift on G. A pattern is a function u : H → A for some finite set H ⊂ G, where s(u) := H is the support of u. We say that a pattern u is accepted by t ∈ A^G if there exists g ∈ G such that t_{gh} = u_h for every h ∈ s(u); otherwise, t rejects u. A subset X ⊆ A^G is a tree shift (or shift space on G) if there exists a set of patterns F such that t rejects every u ∈ F for all t ∈ X. We write X = X_F and call F a forbidden set for X. A tree shift X is a tree shift of finite type (TSFT) if X = X_F for some finite forbidden set F. Let A = (A_1, A_2, . . . , A_k) be a k-tuple of binary matrices indexed by A. A Markov tree shift X_A ⊂ A^G is defined as
X A := {t ∈ A G : A i (t g , t gsi ) = 1 for all g ∈ G, |gs i | = |g| + 1}.
It follows from the definition that a Markov tree shift is a TSFT; conversely, every TSFT is topological conjugate with a Markov tree shift [3].
2.2.
Topological entropy and stem entropy. Let G be a finitely generated semigroup and A a finite alphabet. Suppose that X ⊂ A G is a tree shift. We introduce the following notions which are fundamental units in the present elaboration.
For g ∈ G and n ≥ 0, denote by ∆^{(g)}_n the n-ball centered at g, ∆^{(g)}_n = {gh : h ∈ G, |h| ≤ n}, with ∆^{(1_G)}_n simply denoted by ∆_n. On the other hand, we define the n-semiball centered at g as ∆̂^{(g)}_n = {gh : h ∈ G, |h| ≤ n, and |gh| = |g| + |h|}. Observe that ∆̂^{(g)}_n is the initial n-subtree rooted at g. Furthermore, let ∆^{(s_i)+}_n = {s_i h : h ∈ G, |h| ≤ n, and |s_i h| = 1 + |h|} ∪ {1_G} denote the ith branch of the Cayley graph with the root 1_G.
Notation. Suppose g ∈ G, a ∈ A, and n is a nonnegative integer.
Definition 2.2. Suppose G is a finitely generated semigroup and A is a finite alphabet. Let X ⊂ A^G be a tree shift and g ∈ G.
(a) The ith-stem entropy of X is defined as
(1)   h^{(s_i)} = h^{(s_i)}(X) := lim sup_{n→∞} (log p^{(s_i)}_n) / |∆^{(s_i)}_n|.
The stem entropy, denoted by h^{(s)}, of X exists if h^{(s_i)} = h^{(s_j)} for all i, j.
(b) The topological entropy of X is defined as
(2)   h = h(X) := lim_{n→∞} (log p_n) / |∆_n|,
provided the limit exists.
Remark 2.3. Suppose that G is a strict semigroup; that is, no element in G has an inverse element. A straightforward examination indicates that h^{(s)} = h provided h^{(s)} exists. Indeed, B^{(g)}_n = C^{(g)}_n since ∆^{(g)}_n = ∆̂^{(g)}_n for g ∈ G and n ≥ 0. Later, Theorem 3.1 yields a sufficient condition for the existence of the stem entropy for a class of semigroups.
Suppose that G = ⟨S_k |⟩ is a strict free semigroup of rank k; that is, K = E_k is the full k × k matrix. The Cayley graph of G is an infinite rooted tree such that every node has k children. Petersen and Salama [24,25] demonstrated that the topological entropy (2) of a tree shift (i.e., a shift space on an infinite rooted tree) exists and that
h = inf_n (log p_n) / |∆_n|.
Figure 3. The support of patterns in C^{(s_1)+}_{3;a}.
In [26], Piantadosi considered the golden mean shift X_{A,A^t} on F_2, where A = (A, A) with A = [1 1; 1 0] and A^t = (A^t, A^t), A^t being the transpose of A. Recall that F_2 = ⟨S_4 | R⟩ such that R is determined by K(s_i, s_j) = 0 if and only if i + j is even and i ≠ j. Piantadosi demonstrated the existence of the topological entropy of X_{A,A^t} via estimating the growth rate of q^{(s_i)}_n. The present paper generalizes Piantadosi's result to a class of Markov tree shifts on F_l for l ≥ 2. More precisely, we show that the limit of the stem entropy exists (Theorems 3.1 and 3.6). Additionally, the topological entropy coincides with the stem entropy (Proposition 4.5).
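The quantities entering these entropies can be computed by a simple recursion over the initial subtrees. The sketch below assumes that p^{(s_i)}_n counts the admissible patterns of a Markov tree shift X_A on the initial n-subtree rooted at s_i, and evaluates the quotient log p^{(s_i)}_n / |∆^{(s_i)}_n| for the hard-square (golden mean) Markov tree shift on the binary rooted tree of Example 2.1 (a), a simpler relative of Piantadosi's example rather than the F_2 case itself.

# Sketch (assumed reading: p^(s_i)_n counts the patterns of a Markov tree
# shift X_A on the initial n-subtree rooted at s_i).  c[i][a] is the number
# of admissible labelings of that subtree whose root carries symbol a; the
# children of the root are the s_i s_j with K(s_i, s_j) = 1, and the edge to
# the j-th child is constrained by the matrix A_j.
import numpy as np

def stem_counts(K, A, N):
    K = np.asarray(K)
    A = [np.asarray(M, dtype=object) for M in A]
    k, m = K.shape[0], A[0].shape[0]          # k generators, m symbols
    c = np.ones((k, m), dtype=object)         # depth 0: a single labeled node
    counts = [c.copy()]
    for _ in range(N):
        new = np.ones((k, m), dtype=object)
        for i in range(k):
            for j in range(k):
                if K[i, j]:
                    new[i] = new[i] * A[j].dot(c[j])   # choices for child j's subtree
        c = new
        counts.append(c.copy())
    return counts

def semiball_size(K, i, n):                   # Lemma 3.4 (i) below
    K = np.asarray(K, dtype=object)
    row, total = K[i].copy(), 1
    for _ in range(n):
        total += int(row.sum())
        row = row.dot(K)
    return total

# Hard-square (golden mean) Markov tree shift on the binary rooted tree.
K = [[1, 1], [1, 1]]                          # free semigroup of rank 2
G = [[1, 1], [1, 0]]
for n in range(1, 8):
    p = int(stem_counts(K, (G, G), n)[n][0].sum())
    print(n, np.log(float(p)) / semiball_size(K, 0, n))   # approaches the entropy of Eq. (2)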
Existence of Stem Entropy
This section aims at the exposition of the existence of the stem entropy. A straightforward examination derives that the stem entropy, which does exist, is nothing more than the topological entropy provided G is a strict semigroup. Therefore, the notion of the stem entropy can be seen as an extended discussion of the topological entropy whenever G is not a strict semigroup. On the other hand, it is of interest to the interaction between the stem and topological entropies.
Let G = S k |R be a finitely generated semigroup with generating subset S k = {s 1 , s 2 , . . . , s k }. Suppose that the relation set R is represented by a k × k binary matrix K as follows:
s i s j ∈ R if and only if K(s i , s j ) = 0.
Abusing the notation, we write G = ⟨S_k | K⟩ to specify the equivalence of R and K. Let A be a finite alphabet and X ⊂ A^G a tree shift. The main result of this section, the existence of the stem entropy, is split into two theorems. The following theorem reveals a class of semigroups such that the stem entropy of a shift space on them exists. Additionally, Theorem 3.6 demonstrates that the limit in (1) also exists. For the sake of simplification, the notations S_k, K, G, and A satisfy the conditions above in the remainder of this paper unless otherwise specified.
Theorem 3.1. Suppose that G = S k |K is a finitely generated semigroup, and X ⊂ A G is a shift space on G. If K is primitive, then the stem entropy of X exists. In other words, for 1 ≤ i, j ≤ k,
(A1) lim sup m→∞ log p (si) m |∆ (si) m | = lim sup m→∞ log p (sj ) m |∆ (sj ) m | .
The following series of lemmas are prerequisite for proving the theorem. We start with a property possessed by a primitive matrix. Recall that a nonnegative matrix is primitive if it is eventually positive.
Lemma 3.2. Let N be a k × k primitive binary matrix and let µ be its largest eigenvalue. Then, for 1 ≤ i, j ≤ k, there exists c = c(i, j) > 0 such that lim_{n→∞} N^n(i, j)/(cµ^n) = 1. Furthermore, if µ′ is an eigenvalue of N such that the eigenspace corresponding to µ′ contains a positive vector, then µ′ > 1.
Proof. The lemma is a consequence of the Perron-Frobenius theorem. The proof of the asymptotic behavior of N^n can be found in [20, Theorem 4.5.12]. As for µ′ > 1, [20, Theorem 4.2.3] assures that µ′ = µ, and N being primitive implies that N^n(i, j) tends to infinity. These together with [20, Theorem 4.5.12] lead to µ > 1.
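A small numerical illustration of Lemma 3.2 (a sketch, with an arbitrarily chosen primitive matrix) shows the ratio N^n(i, j)/µ^n settling to a positive constant.

# Numerical illustration (sketch) of Lemma 3.2: N^n(i, j)/mu^n settles to a
# positive constant c(i, j) when N is primitive.
import numpy as np

N = np.array([[1, 1], [1, 0]])                # primitive (golden mean) matrix
mu = max(np.linalg.eigvals(N).real)           # largest eigenvalue, ~1.618
for n in (10, 20, 30, 40):
    print(n, np.linalg.matrix_power(N, n)[0, 1] / mu ** n)   # -> 1/sqrt(5) ~ 0.447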
Lemma 3.3. Let {a_n}_{n=1}^∞, {c_n}_{n=1}^∞ be real sequences and {b_n}_{n=1}^∞, {d_n}_{n=1}^∞ be positive real sequences. Suppose
lim_{n→∞} a_n/b_n = lim_{n→∞} c_n/d_n = L.
Then
lim_{n→∞} (a_n + c_n)/(b_n + d_n) = L.
Suppose, furthermore, that lim_{n→∞} Σ_{j=1}^{n} b_j = +∞. Then
lim_{n→∞} (Σ_{j=1}^{n} a_j)/(Σ_{j=1}^{n} b_j) = L.
Proof. The equality lim_{n→∞} (a_n + c_n)/(b_n + d_n) = L is immediate and thus the proof is omitted. For the second part, to emphasize the importance of lim_{n→∞} (Σ_{j=1}^n a_j)/(Σ_{j=1}^n b_j), we provide the following detailed discussion.
We prove that for all real numbers M > L and m < L, lim sup_{n→∞} (a_1 + · · · + a_n)/(b_1 + · · · + b_n) ≤ M and lim inf_{n→∞} (a_1 + · · · + a_n)/(b_1 + · · · + b_n) ≥ m, and thus the lemma is proved. By the definition of limit superior, there is a positive integer N_1 such that a_n/b_n < M for all n ≥ N_1. Then for n > N_1, a_n < M b_n and
(a_1 + · · · + a_n)/(b_1 + · · · + b_n) = (a_1 + · · · + a_{N_1})/(b_1 + · · · + b_n) + (a_{N_1+1} + · · · + a_n)/(b_1 + · · · + b_n)
< (a_1 + · · · + a_{N_1})/(b_1 + · · · + b_n) + M (b_{N_1+1} + · · · + b_n)/(b_1 + · · · + b_n)
< (a_1 + · · · + a_{N_1})/(b_1 + · · · + b_n) + M.
Since N_1 is fixed and lim_{n→∞} Σ_{j=1}^n b_j = +∞, we have
lim sup_{n→∞} (a_1 + · · · + a_n)/(b_1 + · · · + b_n) ≤ lim_{n→∞} ((a_1 + · · · + a_{N_1})/(b_1 + · · · + b_n) + M) = M.
As for the limit inferior part, there exists a positive integer N_2 such that a_n/b_n > m for all n ≥ N_2. Therefore,
(a_1 + · · · + a_n)/(b_1 + · · · + b_n) = (a_1 + · · · + a_{N_2})/(b_1 + · · · + b_n) + (a_{N_2+1} + · · · + a_n)/(b_1 + · · · + b_n)
> (a_1 + · · · + a_{N_2})/(b_1 + · · · + b_n) + m (b_{N_2+1} + · · · + b_n)/(b_1 + · · · + b_n).
By applying lim_{n→∞} Σ_{j=1}^n b_j = +∞ again, we have that lim_{n→∞} (b_{N_2+1} + · · · + b_n)/(b_1 + · · · + b_n) = 1 and that
lim inf_{n→∞} (a_1 + · · · + a_n)/(b_1 + · · · + b_n) ≥ lim_{n→∞} ((a_1 + · · · + a_{N_2})/(b_1 + · · · + b_n) + m (b_{N_2+1} + · · · + b_n)/(b_1 + · · · + b_n)) = m.
The proof is thus complete.
The following lemma gives an explicit expression for the number of nodes in the initial n-subtree.
Lemma 3.4. Suppose that G = S k |K is finitely generated. For 1 ≤ i ≤ k, m ≥ 0, n ≥ 0 and q ≥ 1, the following statements are true:
(i)
|∆ (si) n | = 1 + n l=1 k j=1 K l (s i , s j ). (ii) |∆ (si) n+q(m+1) | = |∆ (si) n | + k l=1 q−1 j=0 K n+j(m+1)+1 (s i , s l )|∆ (s l ) m |. (iii) p (si) n+q(m+1) ≤ p (si) n k l=1 (p (s l ) m ) q−1 j=0 K n+j(m+1)+1 (si,s l ) .
Proof. (i) The length of each element in ∆^{(s_i)}_n is at most n. There is one element of length 0, and for 1 ≤ l ≤ n, there exist Σ_{j=1}^{k} K^l(s_i, s_j) elements of length l in ∆^{(s_i)}_n. Hence there are 1 + Σ_{j=1}^{k} K(s_i, s_j) + · · · + Σ_{j=1}^{k} K^n(s_i, s_j) elements in total.
(ii) We prove it by induction on q. Since ∆^{(s_i)}_{n+m+1} can be decomposed into a disjoint union of 1 copy of ∆^{(s_i)}_n, K^{n+1}(s_i, s_1) copies of ∆^{(s_1)}_m, . . . , and K^{n+1}(s_i, s_k) copies of ∆^{(s_k)}_m, the result holds when q = 1. Suppose the statement is true for some q − 1 ∈ N. Applying the result in the induction step, we obtain
|∆ (si) n+(q−1)(m+1)+m+1 | = |∆ (si) n+(q−1)(m+1) | + k l=1 K n+(q−1)(m+1)+1 (s i , s l )|∆ (s l ) m |.
The induction hypothesis gives that
|∆ (si) n+(q−1)(m+1) | = |∆ (si) n | + k l=1 q−2 j=0 K n+j(m+1)+1 (s i , s l )|∆ (s l ) m |.
Therefore,
|∆ (si) n+q(m+1) | = |∆ (si) n+(q−1)(m+1)+m+1 | = |∆ (si) n+(q−1)(m+1) | + k l=1 K n+(q−1)(m+1)+1 (s i , s l )|∆ (s l ) m | = |∆ (si) n | + k l=1 q−2 j=0 K n+j(m+1)+1 (s i , s l )|∆ (s l ) m | + k l=1 K n+(q−1)(m+1)+1 (s i , s l )|∆ (s l ) m | = |∆ (si) n | + k l=1 q−1 j=0 K n+j(m+1)+1 (s i , s l )|∆ (s l ) m |.
The proof is complete.
(iii) We prove it by induction on q. Recall that ∆^{(s_i)}_{n+m+1} is a disjoint union of 1 copy of ∆^{(s_i)}_n, K^{n+1}(s_i, s_1) copies of ∆^{(s_1)}_m, K^{n+1}(s_i, s_2) copies of ∆^{(s_2)}_m, . . . , and K^{n+1}(s_i, s_k) copies of ∆^{(s_k)}_m, and that the number of patterns on ∆^{(s_j)}_m is p^{(s_j)}_m for 1 ≤ j ≤ k. Since the number p^{(s_i)}_{n+m+1} cannot exceed p^{(s_i)}_n Π_{l=1}^{k} (p^{(s_l)}_m)^{K^{n+1}(s_i, s_l)}
, the result is valid when q = 1. Now we assume the result holds for some q − 1 ∈ N. Then
p (si) q(m+1)+n = p (si) (q−1)(m+1)+n+m+1 ≤ p (si) (q−1)(m+1)+n k l=1 (p (s l ) m ) (K (q−1)(m+1)+n+1 )(si,s l ) ≤ p (si) n k l=1 (p (s l ) m ) q−2 j=0 (K n+j(m+1)+1 )(si,s l ) k l=1 (p (s l ) m ) (K (q−1)(m+1)+n+1 )(si,s l ) ≤ p (si) n k l=1 (p (s l ) m ) q−1 j=0 (K n+j(m+1)+1 )(si,s l ) ,
the proof is complete.
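The counting identities in Lemma 3.4 can be checked numerically. The following Python sketch (an illustration of ours, not part of the formal development; the matrix K below is merely an example) evaluates |Δ^{(s_i)}_n| via part (i) and verifies the decomposition in part (ii) for small parameters.

import numpy as np

def subtree_size(K, i, n):
    # |Delta_n^{(s_i)}| = 1 + sum_{l=1}^{n} sum_{j} K^l(s_i, s_j), Lemma 3.4 (i)
    total, P = 1, np.eye(len(K), dtype=np.int64)
    for _ in range(n):
        P = P @ K
        total += int(P[i].sum())
    return total

K = np.array([[1, 1], [1, 0]], dtype=np.int64)   # an example relation matrix
k, i, n, m, q = len(K), 0, 2, 1, 3
lhs = subtree_size(K, i, n + q * (m + 1))
rhs = subtree_size(K, i, n)
for j in range(q):
    P = np.linalg.matrix_power(K, n + j * (m + 1) + 1)
    rhs += sum(int(P[i, l]) * subtree_size(K, l, m) for l in range(k))
print(lhs, rhs)   # Lemma 3.4 (ii): both numbers coincide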
Aside from Lemmas 3.2-3.4, the following lemma, which plays a crucial role in the proof of Theorem 3.1, further describes the composition of every m-subtree in terms of n-subtrees when m ≥ n.
Lemma 3.5. Suppose that G = S k |K is finitely generated and K is primitive. For m ≥ 0 and 1 ≤ i, j ≤ k, the following statements are true:
(i) lim_{n→∞} |Δ^{(s_j)}_n| / |Δ^{(s_i)}_{n+m+1}| > 0 and Σ_{l=1}^k lim_{n→∞} K^{m+1}(s_i, s_l) |Δ^{(s_l)}_n| / |Δ^{(s_i)}_{n+m+1}| = 1.
(ii) There exists γ > 0 such that
lim_{q→∞} (Σ_{l=0}^{q−1} K^{r+l(m+1)+1}(s_i, s_j)) / |Δ^{(s_i)}_{q(m+1)+r}| = γ / (λ^{m+1} − 1) for all r ≥ 0.
(iii) For all r ≥ 0, Σ_{j=1}^k lim_{q→∞} (Σ_{l=0}^{q−1} K^{r+l(m+1)+1}(s_i, s_j) |Δ^{(s_j)}_m|) / |Δ^{(s_i)}_{q(m+1)+r}| = 1.
Proof. (i) Let m ≥ 0 and 1 ≤ i, j ≤ k be given. For 1 ≤ l ≤ k, from Lemma 3.2 we know there are positive numbers a_l and b_l such that lim_{n→∞} K^n(s_j, s_l)/(a_l λ^n) = 1 and lim_{n→∞} K^{n+m+1}(s_i, s_l)/(b_l λ^{n+m+1}) = 1.
Since K is a primitive {0, 1}-matrix, Lemma 3.2 ensures that the largest eigenvalue of K is greater than 1. Therefore we may apply Lemma 3.3 and obtain
(3) lim_{n→∞} |Δ^{(s_j)}_n| / (Σ_{s=1}^n Σ_{l=1}^k a_l λ^s) = lim_{n→∞} (1 + Σ_{s=1}^n Σ_{l=1}^k K^s(s_j, s_l)) / (Σ_{s=1}^n Σ_{l=1}^k a_l λ^s) = 1 and
(4) lim_{n→∞} |Δ^{(s_i)}_{n+m+1}| / (Σ_{s=1}^{n+m+1} Σ_{l=1}^k b_l λ^s) = lim_{n→∞} (1 + Σ_{s=1}^{n+m+1} Σ_{l=1}^k K^s(s_i, s_l)) / (Σ_{s=1}^{n+m+1} Σ_{l=1}^k b_l λ^s) = 1.
We also consider
(5) lim_{n→∞} (Σ_{s=1}^n Σ_{l=1}^k a_l λ^s) / (Σ_{s=1}^{n+m+1} Σ_{l=1}^k b_l λ^s) = lim_{n→∞} ((Σ_{l=1}^k a_l)/(Σ_{l=1}^k b_l)) · (λ(λ^n − 1)/(λ − 1)) / (λ(λ^{n+m+1} − 1)/(λ − 1)) = (Σ_{l=1}^k a_l) / (λ^{m+1} Σ_{l=1}^k b_l).
The existence of the limit of {|Δ^{(s_j)}_n| / |Δ^{(s_i)}_{n+m+1}|}_{n=1}^∞ follows from (3), (4) and (5). We also have
lim_{n→∞} |Δ^{(s_j)}_n| / |Δ^{(s_i)}_{n+m+1}| = lim_{n→∞} |Δ^{(s_j)}_n| / (Σ_{s=1}^n Σ_{l=1}^k a_l λ^s) · lim_{n→∞} (Σ_{s=1}^{n+m+1} Σ_{l=1}^k b_l λ^s) / |Δ^{(s_i)}_{n+m+1}| · lim_{n→∞} (Σ_{s=1}^n Σ_{l=1}^k a_l λ^s) / (Σ_{s=1}^{n+m+1} Σ_{l=1}^k b_l λ^s) = (Σ_{l=1}^k a_l) / (λ^{m+1} Σ_{l=1}^k b_l) > 0.
From Lemma 3.4 (ii) we see that |Δ^{(s_i)}_{n+m+1}| = |Δ^{(s_i)}_m| + Σ_{l=1}^k K^{m+1}(s_i, s_l) |Δ^{(s_l)}_n|. Hence we obtain
Σ_{l=1}^k lim_{n→∞} K^{m+1}(s_i, s_l) |Δ^{(s_l)}_n| / |Δ^{(s_i)}_{n+m+1}| = lim_{n→∞} Σ_{l=1}^k K^{m+1}(s_i, s_l) |Δ^{(s_l)}_n| / |Δ^{(s_i)}_{n+m+1}| = lim_{n→∞} (|Δ^{(s_i)}_{n+m+1}| − |Δ^{(s_i)}_m|) / |Δ^{(s_i)}_{n+m+1}| = 1.
(ii) Let r, m ≥ 0 and 1 ≤ i, j ≤ k be given. Let b_1, . . . , b_k be as in the proof of Lemma 3.5 (i). Then
lim_{q→∞} |Δ^{(s_i)}_{q(m+1)+r}| / ((Σ_{s=1}^k b_s)(Σ_{l=1}^{q(m+1)+r} λ^l)) = lim_{q→∞} (1 + Σ_{s=1}^{q(m+1)+r} Σ_{l=1}^k K^s(s_i, s_l)) / ((Σ_{s=1}^k b_s)(Σ_{l=1}^{q(m+1)+r} λ^l)) = 1.
Since
lim_{q→∞} (Σ_{l=0}^{q−1} K^{r+l(m+1)+1}(s_i, s_j)) / |Δ^{(s_i)}_{q(m+1)+r}| = lim_{q→∞} (b_j / Σ_{s=1}^k b_s) · (Σ_{l=0}^{q−1} λ^{r+l(m+1)+1}) / (Σ_{l=1}^{q(m+1)+r} λ^l) = lim_{q→∞} (b_j / Σ_{s=1}^k b_s) · (λ^{r+1}(λ^{q(m+1)} − 1)/(λ^{m+1} − 1)) · ((λ − 1)/(λ(λ^{q(m+1)+r} − 1))) = (b_j / Σ_{s=1}^k b_s) · (λ − 1)/(λ^{m+1} − 1)
and b_j(λ − 1)/Σ_{s=1}^k b_s does not depend on the choice of r, the proof of (ii) is complete.
(iii) Let r, m ≥ 0 and 1 ≤ i, j ≤ k be given. Using Lemma 3.4 (ii) we see that |Δ^{(s_i)}_{r+q(m+1)}| = |Δ^{(s_i)}_r| + Σ_{j=1}^k Σ_{l=0}^{q−1} K^{r+l(m+1)+1}(s_i, s_j) |Δ^{(s_j)}_m|. Therefore
Σ_{j=1}^k lim_{q→∞} (Σ_{l=0}^{q−1} K^{r+1+l(m+1)}(s_i, s_j) |Δ^{(s_j)}_m|) / |Δ^{(s_i)}_{q(m+1)+r}| = lim_{q→∞} Σ_{j=1}^k (Σ_{l=0}^{q−1} K^{r+1+l(m+1)}(s_i, s_j) |Δ^{(s_j)}_m|) / |Δ^{(s_i)}_{q(m+1)+r}| = lim_{q→∞} (|Δ^{(s_i)}_{r+q(m+1)}| − |Δ^{(s_i)}_r|) / |Δ^{(s_i)}_{q(m+1)+r}| = 1.
This derives the desired result.
With Lemma 3.5 in hand, we are now in a position to present the proof of Theorem 3.1.
Proof of Theorem 3.1. Since K is a primitive matrix, we may choose a positive integer n such that K^n is a positive matrix. By Lemma 3.4 (iii) we also have
log p^{(s_I)}_{m+n+1} / |Δ^{(s_I)}_{m+n+1}| ≤ log p^{(s_I)}_n / |Δ^{(s_I)}_{m+n+1}| + Σ_{l=1}^k (K^{n+1}(s_I, s_l) |Δ^{(s_l)}_m| / |Δ^{(s_I)}_{m+n+1}|) · log p^{(s_l)}_m / |Δ^{(s_l)}_m|.
Recall that Lemma 3.5 gives that each coefficient K^{n+1}(s_I, s_l) |Δ^{(s_l)}_m| / |Δ^{(s_I)}_{m+n+1}| has a positive limit as m → ∞ and that these limits sum to 1. This completes the proof.
Beyond demonstrating the existence of the stem entropy, the following theorem shows that the limit in the definition of the stem entropy does exist once h^{(s_i)} = h^{(s_j)} for 1 ≤ i, j ≤ k.
Theorem 3.6. Suppose that G = S k |K is finitely generated, and X ⊆ A G is a tree shift. If K is primitive, then the limit of the ith-stem entropy of X (1) exists, and
(A2) lim_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n| = inf_{n≥0} max_{1≤j≤k} log p^{(s_j)}_n / |Δ^{(s_j)}_n| for 1 ≤ i ≤ k.
Proof. Let 1 ≤ i ≤ k and ε > 0 be given. We choose an integer m > 0 such that
log p^{(s_i)}_m / |Δ^{(s_i)}_m| < lim inf_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n| + ε and log p^{(s_l)}_m / |Δ^{(s_l)}_m| < lim sup_{n→∞} log p^{(s_l)}_n / |Δ^{(s_l)}_n| + ε, for l ≠ i. For r ≥ 0, q ≥ 1, Lemma 3.4 (iii) gives p^{(s_i)}_{r+q(m+1)} ≤ p^{(s_i)}_r Π_{l=1}^k (p^{(s_l)}_m)^{Σ_{j=0}^{q−1} K^{r+j(m+1)+1}(s_i, s_l)},
which yields
(6) log p^{(s_i)}_{q(m+1)+r} / |Δ^{(s_i)}_{q(m+1)+r}| ≤ log p^{(s_i)}_r / |Δ^{(s_i)}_{q(m+1)+r}| + Σ_{l=1}^k (Σ_{j=0}^{q−1} K^{r+j(m+1)+1}(s_i, s_l) |Δ^{(s_l)}_m| / |Δ^{(s_i)}_{q(m+1)+r}|) · log p^{(s_l)}_m / |Δ^{(s_l)}_m|.
For l = 1, . . . , k, let L^{(l)} denote the limit of {Σ_{j=0}^{q−1} K^{r+j(m+1)+1}(s_i, s_l) |Δ^{(s_l)}_m| / |Δ^{(s_i)}_{q(m+1)+r}|}_{q=1}^∞. From Lemma 3.5 we know that each L^{(l)} is positive and that L^{(1)} + ··· + L^{(k)} = 1. Taking the limit superior on both sides of (6) we thus obtain
lim sup_{q→∞} log p^{(s_i)}_{q(m+1)+r} / |Δ^{(s_i)}_{q(m+1)+r}| ≤ Σ_{l=1}^k L^{(l)} log p^{(s_l)}_m / |Δ^{(s_l)}_m| < L^{(i)} (lim inf_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n| + ε) + Σ_{l≠i} L^{(l)} (lim sup_{n→∞} log p^{(s_l)}_n / |Δ^{(s_l)}_n| + ε) = L^{(i)} (lim inf_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n| + ε) + Σ_{l≠i} L^{(l)} (lim sup_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n| + ε).
Therefore
lim sup_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n| = max_{0≤r≤m} lim sup_{q→∞} log p^{(s_i)}_{q(m+1)+r} / |Δ^{(s_i)}_{q(m+1)+r}| < L^{(i)} (lim inf_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n| + ε) + Σ_{l≠i} L^{(l)} (lim sup_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n| + ε).
Since ε is arbitrary, the inequality above leads to lim sup_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n| ≤ lim inf_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n|, so the limit exists. It remains to show that the stem entropies equal the infimum. Note that for 1 ≤ l ≤ k the value of L^{(l)} does not depend on the choice of r. Observe that (6) holds for all m ≥ 0. Taking r = 0 in (6) and letting q tend to infinity, we obtain
lim_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n| = lim_{q→∞} log p^{(s_i)}_{q(m+1)} / |Δ^{(s_i)}_{q(m+1)}| ≤ Σ_{l=1}^k L^{(l)} log p^{(s_l)}_m / |Δ^{(s_l)}_m| ≤ Σ_{l=1}^k L^{(l)} max_{1≤j≤k} log p^{(s_j)}_m / |Δ^{(s_j)}_m| = max_{1≤j≤k} log p^{(s_j)}_m / |Δ^{(s_j)}_m|.
Hence lim_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n| = inf_{m≥0} max_{1≤j≤k} log p^{(s_j)}_m / |Δ^{(s_j)}_m|.
The proof is complete.
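For concreteness, the characterization (A2) can be probed numerically. The sketch below (an illustration of ours; it uses the block-count recursion recalled in Remark B.2 of the appendix, base-10 logarithms as in the tables, and arbitrary example matrices) prints max_{1≤j≤k} log p^{(s_j)}_n / |Δ^{(s_j)}_n| for increasing n, which decreases toward the stem entropy.

from math import log10, prod

def subtree_sizes(K, n_max):
    # sizes[n][j] = |Delta_n^{(s_j)}| = 1 + sum_l K(s_j, s_l) |Delta_{n-1}^{(s_l)}|
    k = len(K)
    sizes = [[1] * k]
    for n in range(1, n_max + 1):
        sizes.append([1 + sum(K[j][l] * sizes[n - 1][l] for l in range(k)) for j in range(k)])
    return sizes

def stem_entropy_profile(K, A_list, n_max):
    # counts[j][a] = number of admissible blocks on Delta_n^{(s_j)} whose root carries symbol a
    k, m = len(K), len(A_list[0])
    sizes = subtree_sizes(K, n_max)
    counts = [[1] * m for _ in range(k)]
    for n in range(1, n_max + 1):
        counts = [[prod(sum(A_list[l][a][b] * counts[l][b] for b in range(m))
                        for l in range(k) if K[j][l])
                   for a in range(m)] for j in range(k)]
        print(n, max(log10(sum(counts[j])) / sizes[n][j] for j in range(k)))

K = [[1, 1], [1, 0]]      # example relation matrix
A = [[1, 1], [1, 0]]      # example transition matrix (used in every direction)
stem_entropy_profile(K, [A, A], 6)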
Proposition 3.7.
The assumption on the matrix K can be relaxed from primitivity to irreducibility while (A1) and (A2) remain valid.
Proof. Let K be irreducible with period P. According to the cyclic structure of K discussed in [20, Section 4.5], after a suitable permutation of indices, K has the following form:
(7) K =
[ O    K_1   O    ···   O     O
  O    O     K_2  ···   O     O
  ⋮    ⋮     ⋮          ⋮     ⋮
  O    O     O    ···   O     K_{P−1}
  K_P  O     O    ···   O     O ].
Furthermore, by recursively defining K_n = K_{n−P} for every n > P, the matrix K̂_r := K_r K_{r+1} ··· K_{r+P−1} is a primitive matrix and the spectral radius satisfies ρ(K̂_r) = ρ(K)^P. This property yields an estimate of the number of elements in the n-th level of Δ^{(s_i)}_n. As for (A1), let n be a positive integer such that K^{n+1}(s_i, s_j) > 0. From
log p^{(s_i)}_{m+n+1} / |Δ^{(s_i)}_{m+n+1}| ≤ log p^{(s_i)}_n / |Δ^{(s_i)}_{m+n+1}| + Σ_{l=1}^k (K^{n+1}(s_i, s_l) |Δ^{(s_l)}_m| / |Δ^{(s_i)}_{m+n+1}|) · log p^{(s_l)}_m / |Δ^{(s_l)}_m|,
we derive
lim sup_{m→∞} log p^{(s_i)}_m / |Δ^{(s_i)}_m| ≤ lim inf_{m→∞} (K^{n+1}(s_i, s_j) |Δ^{(s_j)}_m| / |Δ^{(s_i)}_{m+n+1}|) · lim sup_{m→∞} log p^{(s_j)}_m / |Δ^{(s_j)}_m| + (1 − lim inf_{m→∞} K^{n+1}(s_i, s_j) |Δ^{(s_j)}_m| / |Δ^{(s_i)}_{m+n+1}|) · lim sup_{m→∞} log p^{(s_i)}_m / |Δ^{(s_i)}_m|.
Equation (A1) follows as a consequence of lim inf_{m→∞} K^{n+1}(s_i, s_j) |Δ^{(s_j)}_m| / |Δ^{(s_i)}_{m+n+1}| > 0, which follows from (7) and (9).
We divide the proof of (A2) into two parts. That is, lim sup n→∞ log p For every m ≥ 0, define r m = max{nP ≥ 0 : nP + m 0 + 1 ≤ m}, n 0 = min{n ≥ N : P |n + m 0 + 2}, P 0 = m 0 + n 0 + 2 and S m = {r m − nP 0 : n ∈ N}. Thus, for all sufficiently large m ≥ N ,
log p (si) m |∆ (si) m | ≤ |∆ (si) min Sm | |∆ (si) m | log p (si) min Sm |∆ (si) min Sm | + k l=1 n∈Sm∪{rm} K n (s i , s l )|∆ (s l ) m0 | |∆ (si) m | log p (s l ) m0 |∆ (s l ) m0 | + k l=1 n∈Sm K n+m0+1 (s i , s l )|∆ (s l ) n0 | |∆ (si) m | log p (s l ) n0 |∆ (s l ) n0 | + k l=1 K rm+m0+1 (s i , s l )|∆ (s l ) m−(rm+m0+1) | |∆ (si) m | log p (s l ) m−(rm+m0+1) |∆ (s l ) m−(rm+m0+1) | ≤ k l=1 n∈Sm∪{rm} K n (s i , s l )|∆ (s l ) m0 | |∆ (si) m | lim inf(si) m |∆ (si) m | ≤ lim inf m→∞ n∈Sm∪{rm} K n (s i , s i )|∆ (si) m0 | |∆ (si) m | lim inf m→∞ log p (si) m |∆ (si) m | + + 1 − lim inf m→∞ n∈Sm∪{rm} K n (s i , s i )|∆ (si) m0 | |∆ (si) m | lim sup m→∞ log p (si) m |∆ (si) m | + .
In fact, we can show that the coefficient of the convex combination admits the following lower bound:
lim inf_{m→∞} Σ_{n∈S_m∪{r_m}} K^n(s_i, s_i) |Δ^{(s_i)}_{m_0}| / |Δ^{(s_i)}_m| ≥ C > 0.
To show this, we first consider the case ρ(K) = 1, where |Δ^{(s_l)}_m| = m + 1 and thus
lim inf_{m→∞} Σ_{n∈S_m∪{r_m}} K^n(s_i, s_i) |Δ^{(s_i)}_{m_0}| / |Δ^{(s_i)}_m| = (m_0 + 1)/(m_0 + 1 + n_0 + 1) ≥ (m_0 + 1)/(m_0 + 1 + 2(m_0 + 1)) = 1/3 = C.
For the case ρ(K) > 1,
lim inf_{m→∞} Σ_{n∈S_m∪{r_m}} K^n(s_i, s_i) |Δ^{(s_i)}_{m_0}| / |Δ^{(s_i)}_m| ≥ lim inf_{m→∞} K^{r_m}(s_i, s_i) |Δ^{(s_i)}_{m_0}| / |Δ^{(s_i)}_m| ≥ lim_{m→∞} (K^{r_m}(s_i, s_i) / ρ(K)^{r_m}) · (|Δ^{(s_i)}_{m_0}| / ρ(K)^{m_0+P}) · lim inf_{m→∞} ρ(K)^{r_m+m_0+P} / |Δ^{(s_i)}_{r_m+m_0+P}| ≥ C.
Because ε > 0 is arbitrary, it follows that lim sup_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n| = lim inf_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n|.
As for the second part, the proof remains the same as that in Theorem 3.6.
Existence of Topological Entropy
Recall that the definitions of the topological and stem entropies collapse whenever G is a strict semigroup. Theorems 3.1 and 3.6 yield a class of finitely generated semigroups on which the stem entropy of each tree shift exists. Following these results, this section is devoted to the existence of the topological entropy and to the relationship between the topological entropy and the stem entropy. We demonstrate the existence of the topological entropy for a class of tree shifts on G, and show that the topological entropy is identical to the stem entropy. The considered class of semigroups contains, but is not limited to, the class of finitely generated free groups. For the rest of this article, G = S k |K is a finitely generated semigroup with primitive matrix K.
Let A = (A 1 , A 2 , . . . , A k ) be a k-tuple of binary matrices indexed by A. Recall that a Markov tree shift X A ⊆ A G is defined as
X A = {t ∈ A G : A i (t g , t gsi ) = 1 for all g ∈ G, |gs i | = |g| + 1}.
The following theorem indicates that the topological entropy of a Markov tree shift exists provided K has a full row. Moreover, the topological entropy is identical to the stem entropy.
Theorem 4.1. Suppose K ∈ {0, 1}^{k×k} satisfies Σ_{j=1}^k K(s_i, s_j) = k for some s_i ∈ S_k, and X is a Markov tree shift. Then the topological entropy of X exists and h = lim_{n→∞} log p_n / |Δ_n| = h^{(s)}.
Proof. Note that every n-block u ∈ B_n can be uniquely expressed as a (k + 1)-tuple (u_{1_G}, u|_{Δ^{(s_1)}_{n−1}}, u|_{Δ^{(s_2)}_{n−1}}, . . . , u|_{Δ^{(s_k)}_{n−1}}), and thus p_n ≤ |A| · Π_{j=1}^k p^{(s_j)}_{n−1}. As a consequence,
(10) lim sup_{n→∞} log p_n / |Δ_n| ≤ lim sup_{n→∞} (log|A| / |Δ_n| + Σ_{j=1}^k (log p^{(s_j)}_{n−1} / |Δ^{(s_j)}_{n−1}|) · (|Δ^{(s_j)}_{n−1}| / |Δ_n|)) = h^{(s)}
holds by applying Theorem 3.6. On the other hand, p^{(s_i)}_n ≤ p_n holds naturally, which further implies
(11) lim inf_{n→∞} log p_n / |Δ_n| ≥ lim inf_{n→∞} log p^{(s_i)}_n / |Δ_n| = lim inf_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n| = h^{(s_i)} = h^{(s)}.
The proof is finished by combining (10) and (11) above.
The theorem above asserts the existence of topological entropy of a Markov tree shift on a Fibonacci-Cayley tree, which was revealed in [6].
Corollary 4.2 (See [6]). Suppose G is generated by S 2 with K = 1 1 1 0 , and X is a Markov tree shift. Then the topological entropy of X exists and can be calculated via a system of recurrence equations.
A Markov tree shift X_A on G is called a hom Markov tree shift if A_i = A_j for all i, j. From the physical viewpoint, such a system is isotropic and homogeneous; in other words, two symbols are forbidden to sit next to each other in all coordinate directions once they are forbidden in some direction. The class of hom shift spaces plays an important role in the investigation of physical systems. Suppose that the matrix K has a constant row sum. The theorem below reveals that not only does the topological entropy of a hom Markov tree shift exist, but the stem entropy and the topological entropy also coincide.
Theorem 4.3. Suppose that Σ_{j=1}^k K(s_i, s_j) = Σ_{j=1}^k K(s_{i'}, s_j) for every 1 ≤ i, i' ≤ k and X = X_A is a hom Markov tree shift. Then the topological entropy exists and lim_{n→∞} log p_n / |Δ_n| = h^{(s)}.
Proof. Since m = Σ_{j=1}^k K(s_i, s_j) = Σ_{j=1}^k K(s_{i'}, s_j) for every 1 ≤ i, i' ≤ k and A_1 = A_2 = ··· = A_k = A, it follows immediately that q^{(s_i)}_{n;a} = q^{(s_j)}_{n;a} for every s_i, s_j ∈ S_k, which we simply denote by q_{n;a} in the rest of the proof. Note that since x ↦ x^{k/m} is convex, the following inequality holds for every s_i ∈ S_k:
(p^{(s_i)}_n)^{k/m} = (Σ_{a=1}^{|A|} (q_{n;a})^m)^{k/m} = (|A| Σ_{a=1}^{|A|} (1/|A|)(q_{n;a})^m)^{k/m} ≤ |A|^{(k−m)/m} Σ_{a=1}^{|A|} (q_{n;a})^k = |A|^{(k−m)/m} p_n.
On the other hand, it can be deduced by applying the Minkowski inequality that
(p^{(s_i)}_n)^{k/m} = (Σ_{a=1}^{|A|} (q_{n;a})^m)^{k/m} ≥ Σ_{a=1}^{|A|} (q_{n;a})^k = p_n.
By combining the inequalities above, it yields that (p^{(s_i)}_n)^{k/m} ≥ p_n ≥ |A|^{(m−k)/m} (p^{(s_i)}_n)^{k/m}, and thus
(log p^{(s_i)}_n / |Δ^{(s_i)}_n|) · (k/m)|Δ^{(s_i)}_n| / |Δ_n| + ((m − k)/m) log|A| / |Δ_n| = (log p^{(s_i)}_n / |Δ^{(s_i)}_n|) · (k/m)|Δ^{(s_i)}_n| / ((k/m)(|Δ^{(s_i)}_n| − 1) + 1) + ((m − k)/m) log|A| / |Δ_n| ≤ log p_n / |Δ_n| ≤ (log p^{(s_i)}_n / |Δ^{(s_i)}_n|) · (k/m)|Δ^{(s_i)}_n| / |Δ_n| = (log p^{(s_i)}_n / |Δ^{(s_i)}_n|) · (k/m)|Δ^{(s_i)}_n| / ((k/m)(|Δ^{(s_i)}_n| − 1) + 1).
Since lim_{n→∞} log p^{(s_i)}_n / |Δ^{(s_i)}_n|
is proved to be h (s) for all s i ∈ S k in Theorem 3.6, the proof is finished.
Example 4.4. A class of groups satisfying the assumption of Theorem 4.3 is the Bethe lattice, for which the matrices K's have each diagonal entry 0 and each nondiagonal entry 1. For instance, the Bethe lattice of order 3 is provided in Figure 1.
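As a minimal sketch (ours) of the example, the relation matrix of a Bethe lattice can be generated and checked against the constant-row-sum hypothesis of Theorem 4.3 and against primitivity:

import numpy as np

def bethe_K(k):
    # each diagonal entry 0, each nondiagonal entry 1, as in Example 4.4
    return np.ones((k, k), dtype=int) - np.eye(k, dtype=int)

K = bethe_K(3)
print(K.sum(axis=1))                                   # constant row sum
print(bool((np.linalg.matrix_power(K, 2) > 0).all()))  # K^2 is positive, hence K is primitive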
An immediate application of Theorem 4.3 is that the topological entropy of a hom Markov tree shift on a free group exists. Suppose that A = (A 1 , A 2 , . . . , A k ). We denote by A t = (A t 1 , A t 2 , . . . , A t k ) the k-tuple of transpose matrices of A. Theorem 4.3 is further generalized to the following proposition.
Proposition 4.5. Let G = F k be a free group of rank k. That is, G = S 2k |K with K(s i , s j ) = 0 if and only if |i − j| = k. Suppose X = X A,A t is a Markov shift space over F k with A 1 = A 2 = · · · = A k = A indexed by a finite alphabet A. Then the limit lim n→∞ log pn |∆n| exists and equals h (s) .
Proof. For simplicity, we write q + n;a = q . The inequality then holds by taking limit superior of both sides, and the same arguments apply to q − n;a . Now we claim that lim n→∞ pn |∆n| exists and equals h (s) . Since it follows from (10) that lim sup n→∞ log pn |∆n| ≤ h (s) , it is left to show that lim inf n→∞ log pn |∆n| ≥ h (s) . Since p (s1) n = a∈A (q + n;a ) k · (q − n;a ) k−1 , there exists a n ∈ A for each n such that (q + n;an ) k · (q − n;an ) k−1 ≥ p (s 1 ) n |A| . Hence, by applying Theorem 3.6 and the claim above, for every > 0 there exists N ∈ N such that q + n;an , q − n;an < e (h (s) + )|∆ + n | , and that (q + n;an ) k · (q − n;an ) k−1 ≥
1 |A| p (s1) n > e (h (s) − )|∆n| , whenever n ≥ N . This implies q − n;an = (q + n;an ) k · (q − n;an ) k−1 (q + n;an ) k · (q − n;an ) k−2 ≥ e (h (s) − )|∆n|−(h (s) + )|∆ + n |(2k−2) = e (h (s) − )[(2k−1)|∆ + n |−(2k−2)]−(h (s) + )|∆ + n |(2k−2) = e −(2k−2)(h (s) − ) e (h (s) −(4k−3) )|∆ + n | .
Hence, p n;an = (q + n;an ) k · (q − n;an ) k−1 · q − n;an
≥ e (h (s) − )|∆n| · e −(2k−2)(h (s) − ) e (h (s) −(4k−3) )|∆ + n | ≥ e (h (s) −(4k−3) )|∆n| · e −(2k−2)(h (s) − ) e (h (s) −(4k−3) )|∆ + n | = e (h (s) −(4k−3) )(|∆n|+|∆ + n |) · e −(2k−2)(h (s) − )
Hence, one obtains
lim inf n→∞ p n |∆ n | ≥ lim inf n→∞ p n;an |∆ n | ≥ h (s) .
This finishes the proof.
Using the same technique as above, one can also obtain a variation of Proposition 4.5 as follows.
Proposition 4.6. Suppose A is a finite alphabet with |A| ≤ 2k − 1. Let X A,A t be a Markov shift over F k with A = (A 1 , A 2 , . . . , A k ). Then the topological entropy of X exists and equals h (s) .
Proof. For simplicity, we write |∆ (si) | = |∆ (s −1 i ) | = |∆ n | and |∆ (si)+ | = |∆ (s −1 i )+ | = |∆ + n | in the rest of the proof.
By applying the argument in Proposition 4.5, one obtains that
(12) lim sup_{n→∞} log q^{(s_i)}_{n;a} / |Δ^{(s_i)}_n| ≤ h^{(s)}
for every s i ∈ S 2k . Now we claim that lim n→∞ pn |∆n| exists and equals h (s) . Since it follows from (10) that lim sup n→∞ log pn
|∆n| ≤ h (s) , it is left to show that lim inf n→∞ log pn |∆n| ≥ h (s) . Since p (z) n = a∈A w =z −1 q (w)
n;a for every z ∈ S 2k , there exists a n;z ∈ A for each n such that w =z −1 q (w) n;an;z ≥ p (z) n |A| . Hence, by applying Theorem 3.6 and (12), for every > 0 there exists N ∈ N such that q (w) n;an;z < e (h (s) + )|∆ + n | , and that
w =z −1 q (w) n;an;z ≥ 1 |A| p (z) n > e (h (s) − )|∆n| ,
for all z, w ∈ S 2k and all n ≥ N . At this moment, it is noteworthy that the restriction imposed on the dimension of A i leads to the coincidence of some a n;z1 = a n;z2 (z 1 = z 2 ) by the pigeonhole principle, and thus K(z 2 , z −1 1 ) = 1. These two properties together imply that if u, v are admissible patterns in X A,A t with u 1 G = a n;z1 = a n;
z2 = v 1 G , s(u) =∆ (z1) n , and s(v) =∆ (z2)
n , then u with support s(u) = ∆ n , defined as follows, is also a admissible pattern:
u g := v g , if g = z −1 2 g , |g| = |z −1 2 | + |g |; u g , otherwise.
As a consequence, p n;an;z 1 = q (z −1 1 ) n;an;z 2 · w =z −1 1 q (w) n;an;z 1 , and q (z −1 1 ) n;an;z 1 = w =z −1 2 q (w) n;an;z 1
w =z −1 1 ,w =z −1 2 q (w) n;an;z 1 ≥ e (h (s) − )|∆n|−(h (s) + )|∆ + n |(2k−2) = e (h (s) − )[(2k−1)|∆n|−(2k−2)]−(h (s) + )|∆ + n |(2k−2) = e −(2k−2)(h (s) − ) e (h (s) −(4k−3) )|∆ + n | .
Combining all the results above, it follows that This finishes the proof.
p n;an = q (z −1 1 ) n;an;z 1 · w =z −1 1 q (w) n;an;z 1 ≥ e (h (s) − )|∆n| · e −(2k−2)(h (s) − ) e (h (s) −(4k−3) )|∆ + n | ≥ e (h (s) −(4k−3) )|∆n| · e −(2k−2)(h (s) − ) e (h (s) −(4k−3) )|∆ + n | = e (h (s) −(4k−3) )(|∆n|+|∆ + n |) · e −(2k−2)(h (s) − ) (0, s1) (1, s1) (0, s2) (1, s2)
Generalization of Mixing Property
Aside from the straightforward estimation of topological entropy in the previous section, this section studies from a topological perspective the coincidence between stem entropy and topological entropy. In fact, the exposition in the following is inspired by [24, Proposition 3.1] and generalizes the idea of the mixing property on hom Markov tree shifts on a strict semigroup to that on a finitely generated semigroup expressed as G = S k |K . We begin with defining the following terms.
Definition 5.1. Let G = S k |K be a finitely generated semigroup. Suppose X = X_A ⊆ A^G is a Markov tree shift on G. A graph representation of X is a directed graph G = (V, E) with vertex set V = A × S_k and with edge set E = {((a, s_i), (b, s_j)) ∈ V × V : K(s_i, s_j) = 1, A_j(a, b) = 1}. (i) G is called strongly connected if for every (a, s_i), (b, s_j) ∈ V there is a walk of length N from (a, s_i) to (b, s_j) in G (denoted by (a, s_i) →→^N (b, s_j)) for some N depending on (a, s_i) and (b, s_j). (ii) A vertex (a, s_i) ∈ V is called a pivot if there exist s_j ∈ S_k and an integer N ∈ N such that every (b, s_j) ∈ V admits a walk (a, s_i) →→^N (b, s_j).
Example 5.2. Suppose G = S 2 |K is associated with the matrix K = [1, 1; 1, 0] and
A_1 = [1, 1; 1, 0], A_2 = [0, 1; 1, 1]
are the adjacency matrices for the shift space X A1,A2 ⊂ A G . Then, the graph representation of X A1,A2 is defined as in Figure 4.
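The graph representation of Example 5.2 is small enough to enumerate explicitly; the following sketch (ours) lists its vertices and edges according to Definition 5.1.

from itertools import product

K = [[1, 1], [1, 0]]
A = {1: [[1, 1], [1, 0]], 2: [[0, 1], [1, 1]]}   # A_1 and A_2 of Example 5.2

V = [(a, s) for a in (0, 1) for s in (1, 2)]
E = [((a, si), (b, sj))
     for (a, si), (b, sj) in product(V, V)
     if K[si - 1][sj - 1] == 1 and A[sj][a][b] == 1]
print(len(V), len(E))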
To see how the definitions above are related to the mixing property, we prove the following proposition.
Proposition 5.3. Suppose that X_A ⊆ A^G is a hom Markov tree shift, and G = (V, E) is a graph representation of X_A. Then,
(i) G is strongly connected if and only if A is irreducible.
(ii) G is strongly connected and contains a pivot if and only if A is primitive.
Proof. (i) It is not hard to see that A is irreducible if G is strongly connected, since for (a, s i ), (b, s i ) ∈ V, there exists a walk (a, s i )(a 1 , s i1 )(a 2 , s i2 ) · · · (a n−1 , s in−1 )(b, s i ) and thus aa 1 a 2 · · · a n b is a word admissible by A. We now show the converse, i.e., for (a, s i ), (b, s j ) ∈ V, there exists a walk (a, s i ) M − − → → (b, s j ). Since K is a primitive matrix, there exists N such that for every n ∈ N and s i , s j ∈ S k , there is an admissible word s i s i1 s i2 · · · s in−1 s j by K. On the other hand, since A is irreducible, for every a, b ∈ A there exists an integer M ≥ N and an M -word aa 1 a 2 · · · a N −1 b admissible by A. This results in a walk (a , s i )(a 1 , s i1 ) · · · (a M −1 ,
s i M −1 )(b, s j ) in G.
This completes the proof.
(ii) First of all, we show that A is primitive if the adjacency matrix A G of G is primitive. Indeed, since A G is primitive, there exists N such that for all (a, s i ), (b, s j ) and n ≥ N , there exists a admissible walk (a, s i ) n − → → (b, s j ) in G. This naturally yields a (n + 1)-word admissible by A which starts at a and terminates at b.
Secondly, we show that G is strongly connected and contains a pivot provided A is primitive. To this end, we show every (a, s i ) ∈ V is a pivot of G. Since K is a primitive matrix, there exists an integer N 1 such that for every s j ∈ S k and every n ≥ N 1 , there exists an (n + 1)-word admissible by K which starts from s i and terminates at s j . On the other hand, since A is primitive, there exists N 2 ≥ N 1 such that for every b ∈ A there is a admissible word aa 1 · · · a N2−1 b by A. This implies for all n ≥ N 2 there is a walk (a, s i )(a 1 , s i1 ) · · · (a n−1 , s in−1 )(a n , s in ) in G. This finishes the proof of our claim. Note since every (a, s i ) is a pivot, irreducibility follows immediately.
Finally, it remains to show that if G is strongly connected and contains a pivot, then A G is primitive. It is also equivalent to show that G is strongly connected and there exists (a, s i ) ∈ V and N ∈ N such that every n ≥ N admits a walk (a, s i ) n − → → (a, s i ). Since strong connectedness follows immediately, it is left to show the latter. Suppose (a, s i ) is a pivot such that there exist s j ∈ S k and walks (a, s i ) N −→ → (b k , s j ) for every b k ∈ A as follows:
(a, s i )(a 1,2 , s l1,2 ) · · · (a 1,N −1 , s l 1,N −1 )(a, s j ), (a, s i )(a 2,2 , s l2,2 ) · · · (a 2,N −1 , s l 2,N −1 )(b 2 , s j ), . . .
(a, s i )(a |A|,2 , s l |A|,2 ) · · · (a |A|,N −1 , s l |A|,N −1 )(b |A| , s j ).
Hence, the following are admissible words by A:
aa 1,2 · · · a 1,N −1 a, aa 2,2 · · · a 2,N −1 b 2 , . . . aa |A|,2 · · · a |A|,N −1 b |A| .
From these, we are able to construct a word of length n + 1 ≥ N + 1 with both starting and terminating symbol a. For instance, when n = N + 2, we may observe a 1,N −2 a 1,N −1 a = b k a 1,N −1 a for some 1 ≤ k ≤ |A| and thus aa k,2 · · · a k,N −1 b k a 1,N −1 a is an admissible word by A. This process can be done for N + 1 ≤ n ≤ 2N , and further extension process for n > 2N is done by a proper concatenation with the prefix aa 1,2 · · · a 1,N a. Now since K is a primitive matrix, we can also prove that for every s i ∈ S k and any sufficiently large n ∈ N there is an (n + 1)-word admissible by K which starts and terminates at s i simultaneously. Combining these two facts we are able to construct a walk (a, s i ) n − → → (a, s i ) for all sufficiently large n, and the proof is completed.
Next, we show that the mixing property in the sense of a Markov tree shift results in the coincidence between the stem entropy and the topological entropy.
Theorem 5.4. Let X_A ⊆ A^G be a Markov tree shift on G. Suppose G = (V, E) is a graph representation of X_A. Then the topological entropy h = lim_{n→∞} log p_n / |Δ_n| exists and h = h^{(s)} provided G admits a pivot and is strongly connected.
We show that h = h^{(s)}. To begin with, we give an order on G so that we are able to write {g_i}_{i=1}^M = {g ∈ T : |g| = N} in the lexicographical order and introduce the notation p^{(s_i)}_{N;a;b_1,···,b_M}. Finally, we show that the assumption in Theorem 5.4 is finitely checkable.
Proposition 5.6. Let X A be a Markov tree shift. Suppose G = (V, E) is a graph representation of X A . It is finitely checkable whether G admits a pivot and whether G is strongly connected.
Proof. Since G is strongly connected if and only if the adjacency matrix A G associated with G is irreducible, it is clearly finitely checkable. To see the admittance of pivot is also finitely checkable, we define the matrix A n for all n ∈ Z + as follows:
A n ((a, s i ), (b, s j )) = 1 if (A G ) n ((a, s i ), (b, s j )) = 1, 0 otherwise.
It is then clear that G admits a pivot if and only if there exist s i , s j ∈ S k , a ∈ A, and n ∈ Z + such that A n ((a, s i ), (b, s j )) = 1 for all b ∈ A. Since |{A n : n ≥ 0}| ≤ 2 |V| 2 and A n is eventually periodic, there exist 0 ≤ N 1 ≤ N 2 ≤ 2 |V| 2 such that A N1+n = A N2+n for all n ≥ 0. In other words, G admits a pivot if and only if there exist s i , s j ∈ S k , a ∈ A, and 1 ≤ n ≤ 2 |V| 2 such that A n ((a, s i ), (b, s j )) = 1 for all b ∈ A. This implies that admittance of a pivot is finitely checkable.
Appendix A. An Attempt toward the Existence of Topological Entropy
This section presents an attempt toward the existence of topological entropy by exploiting the composition of colors on the boundary of all n-blocks. Suppose X_A is given. We identify a vector v ∈ Z_+^{|A||S_k|} with the product Π_{(a,s_i)} (q^{(s_i)}_{n;a})^{v_{(a,s_i)}}. Note that
W := {Σ_{v ∈ Z_+^{|A||S_k|}} r_v · v : r_v ∈ Z, r_v ≠ 0 for finitely many v ∈ Z_+^{|A||S_k|}}
is a vector space with a basis Z_+^{|A||S_k|}. Define the linear transformation F : W → W as (F(v))_{(a,s_i)} = 1 if v_{(a,s_i)} > 0 and (F(v))_{(a,s_i)} = 0 if v_{(a,s_i)} = 0, and the simplified representation F*(v) of F(v) as F*(Σ_v r_v · v) = Σ_v r̄_v · F(v), where r̄_v = 1 if r_v > 0 and r̄_v = 0 if r_v = 0.
Define the shift transformation σ : W → W by 1 Function normalized tree entropy(A,iter, ) 2p 0 = (p 0;1 ,p 0;2 , · · · ,p 0;k ) t ← (1, 1, · · · , 1) t ;
σ(v) = σ (a,3 r 0 ← 1; 4 t 0 ← log r 0 ; 5 h 0 ← t 0 /|∆ 0 |; 6
for n ∈ {1, 2, · · · , iter − 1} do 7p n = (p n;1 ,p n;2 , · · · ,p n;k ) t ← (A 1pn−1 ) · · · (A dpn−1 ); 8 r n ← max apn;a ; 9p n ←p n /r n ; 10 t n ← d · t n−1 + log r n ; 11 h n ← t n /|∆ n |; 12 if |h n − h n−1 | < h n−1 · or h n < then rn , p 0 = (1/r 0 , 1/r 0 , · · · , 1/r 0 ) t It is noteworthy that the following equality holds:
p_n = f^n(p_0) = g^n(p_0) · r_0^{d^n} r_1^{d^{n−1}} ··· r_n^{d^0} = p̂_n · r_0^{d^n} r_1^{d^{n−1}} ··· r_n^{d^0}.
Since r_n is chosen to be the maximal element of p̂_n in Algorithm 1, the maximal element of p̂_n is 1 and thus
(14) t_n = log max_a p_{n;a} = d^n log r_0 + d^{n−1} log r_1 + ··· + d^0 log r_n, and h_n = (log max_a p_{n;a}) / |Δ_n|.
In fact, if r_n is defined as in the algorithm, then r_n is a rational number and h(X_A) = lim_{n→∞} (log max_a p_{n;a}) / ((d^{n+1} − 1)/(d − 1)) = Σ_{n=0}^∞ log r_n · (d − 1)/d^{n+1}.
In particular, if X_{(A,A,···,A)} is a hom Markov tree shift with A an essential matrix, i.e., for every b ∈ A there exists a ∈ A satisfying A(a, b) = 1, then |A|^d ≥ r_n ≥ 1 for all n ≥ 0 and
Σ_{n=0}^N log r_n · (d − 1)/d^{n+1} ≤ h(X_A) ≤ Σ_{n=0}^N log r_n · (d − 1)/d^{n+1} + Σ_{n=N+1}^∞ d log|A| · (d − 1)/d^{n+1}.
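A compact Python transcription of the normalized iteration in Algorithm 1 (our reading of the pseudocode for the hom case on the Cayley graph; plain floats replace the mpmath precision used in the experiments, and the logarithm is taken base 10 as in the tables).

import numpy as np

def normalized_tree_entropy(A_list, iters=60, eps=1e-12):
    # p-hat_n = (A_1 p-hat_{n-1}) * ... * (A_d p-hat_{n-1}) entrywise, renormalized by its maximum r_n;
    # t_n = d t_{n-1} + log r_n and h_n = t_n / |Delta_n| with |Delta_n| = d |Delta_{n-1}| + 1
    d = len(A_list)
    p = np.ones(len(A_list[0]))
    t, size, h = 0.0, 1, 0.0
    for _ in range(1, iters):
        q = np.ones_like(p)
        for A in A_list:
            q *= np.asarray(A, dtype=float) @ p
        r = q.max()
        p = q / r
        t = d * t + np.log10(r)
        size = d * size + 1
        h_new = t / size
        if abs(h_new - h) < h * eps or h_new < eps:
            return h_new
        h = h_new
    return h

A = [[1, 1], [1, 0]]
print(normalized_tree_entropy([A, A]))   # hom golden-mean tree shift on the binary Cayley graph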
Input of Algorithm 2: K, a binary matrix of dimension k. Denote by f = (f_1, f_2, ···, f_d) and g = (g_1, g_2, ···, g_d) the maps p_{n−1} ↦ p_n and p̂_{n−1} ↦ p̂_n, respectively. Similar to the above, the experiments are done with the mpmath library of Python under the following configuration: the precision of floating-point numbers dps = 5000, the threshold ε = 1E−50, and the relation K = (1, 1, 0, 1; 1, 1, 1, 0; 0, 1, 1, 1; 1, 0, 1, 1).
Table 3. Numerical experiments on the stem entropy of X_{A_1,A_2} over the Fibonacci-Cayley tree generated by S_k, where K = [1, 1; 1, 0] (log is computed with base 10).
Figure 1. The Bethe lattice of order 3.
Figure 2. The support of patterns in C^{(s_1)}_{2;a} is a 2-semiball centered at s_1.
(8) p^{(g)}_n := |C^{(g)}_n|, p^{(g)}_{n;a} := |C^{(g)}_{n;a}|, p_n := |B_n|, p_{n;a} := |B_{n;a}|; (9) q^{(s_i)}_n := |C^{(s_i)+}_n|, q^{(s_i)}_{n;a} := |C^{(s_i)+}_{n;a}|.
Figure 4. Graph representation of X_{A_1,A_2}.
Corollary 5.5. If X_A is a hom Markov tree shift, then the topological entropy h exists and equals h^{(s)} if A is primitive.
Algorithm 1: Topological entropy of a hom Markov tree shift on the Cayley graph. Input: A = (A_1, A_2, ···, A_d), d binary matrices of dimension k; iter, the maximum number of iterations; ε, the threshold for convergence. Output: h, an approximation of the entropy, where h_n := (log max_a p_{n;a}) / |Δ_n|.
Algorithm 2: Stem entropy of a Markov tree shift. Input: K, a binary matrix of dimension k; A = (A_1, A_2, ···, A_d), d binary matrices; iter; ε. Output: h^{(s_1)}, ···, h^{(s_d)}, approximations of the stem entropies.
Remark B.2. As an analogue of Algorithm 1, the numbers of blocks satisfy the following recursive system:
p^{(s_j)}_n = (A_1 p^{(s_1)}_{n−1})^{K(s_j,s_1)} ⊙ ··· ⊙ (A_d p^{(s_d)}_{n−1})^{K(s_j,s_d)}, with p^{(s_j)}_0 = (1, 1, ···, 1)^t,
where ⊙ denotes the entrywise product of vectors. In the same manner, given any positive sequence {r^{(s_j)}_n : n ≥ 0, s_j ∈ G}, one may define the normalized system whose iterates converge to the stem entropy of X_A.
Table 1. Numerical experiments on the stem entropy of X_{A_1,A_2,A_1^t,A_2^t} over the free group (log is computed with base 10).
A_1 | A_2 | stem entropy | topological entropy | iterations
[0, 1; 1, 1] | [1, 1; 1, 0] | 0.1261881372008 | 0.1261881372008 | 37
[1, 1; 1, 0] | [1, 1; 1, 0] | 0.2332621211030 | 0.2332621211030 | 34
[0, 1, 0; 1, 0, 1; 0, 1, 0] | [0, 1, 1; 1, 0, 0; 0, 1, 1] | 0.1681464340595 | 0.1681464340595 | 36

Table 2. Numerical experiments on the stem entropy of X_{A_1,A_2,A_1^t,A_2^t} over the free group (log is computed with base 10).
A_1 | A_2 | stem entropy | topological entropy | iterations
[0, 1; 1, 1] | [1, 1; 1, 0] | 0.1261881372008 | 0.1261881372008 | 37
[1, 1; 1, 0] | [1, 1; 1, 0] | 0.2332621211030 | 0.2332621211030 | 34
[0, 1, 0; 1, 0, 1; 0, 1, 0] | [0, 1, 1; 1, 0, 0; 0, 1, 1] | 0.1681464340595 | 0.1681464340595 | 36
N ;a;b1,··· ,c,··· ,b M . On the other hand, it follows from the
AcknowledgmentWe are appreciated for the comments from the anonymous referees, which greatly improve the readability of the article.: u is accepted by t ∈ X A , u gi = b i , ∀1 ≤ i ≤ M }|.Since G is strongly connected, there exists a walk (a, s i ) n − → → (b, s j ) in G. As a consequence, there exists p n;b2 · · · p (sj )Note that lim n→∞ for every s i ∈ S k and a ∈ A. Suppose (a, s i ) is a pivot in G. Then, there exist N ∈ N and s j ∈ S k such that every (c, s j ) ∈ V appears in one of the boundary patterns b 1 , · · · , c, · · · , b M , and is thus counted in p (si) claim above that for every > 0, there exists N such that for every n ≥ N ,s l ∈ S k and c ∈ A,Hence,for every product in the first line is counted no more than |A| times in the second summation in the second line. From equation(13), one may further deriveThe inequality above yields that lim inf n→∞ log pThe proof is then finished.The corollary below follows immediately from Proposition 5.3 and Theorem 5.4. Suppose x, y ∈ W . We denote x y if every term v appearing in F * (x) admits a term w appearing in F * (y) satisfying v (a,si) ≥ w (a,si) for every a ∈ A and every s i ∈ S k .Proposition A.1. lim n→∞ log pn |∆n| exists and equals h (s) if there exist N 1 , N 2 ∈ N and s i ∈ S k such that σ N1 (pn ) and y = σ N2 (p n ). Since x y, every term v appearing in F * (x) admits a term φ(v) appearing in F * (y) satisfying v (a,si) ≥ φ(v) (a,si) for every a ∈ A and every s i ∈ S k . In this proof, we denote [n, v] for every v = (a,si) (q n;an;1 ) v (a n;1 ,s l 1 ) · · · · · (q (s l M ) n;a n;M ) v (a n;M ,s l M )appearing in x such that lim n→∞for all n ≥ N . The proof is thus finished.Appendix B. Computation of Stem EntropyIn this section, we provide the pseudo codes for 1. computation for topological entropy of Markov tree shift on the Cayley graph and 2. computation for stem entropy of Markov tree shift, shown in Algorithm 1 and Algorithm 2 respectively. In the following, we denote by the entrywise product of vectors.Remark B.1. The idea behind Algorithm 1 is given as follows. Suppose p n = f (p n−1 ) := (A 1 p n−1 ) · · · (A d p n−1 ) p 0 = (1, 1, · · · , 1) t .It is shown (see for example[4]) that the above system is exactly the vector of the number of blocks: p n = (p n;1 , p n;1 , · · · , p n;k ).
Tree-shifts of finite type. N Aubrun, M.-P Béal, Theor. Comput. Sci. 459N. Aubrun and M.-P. Béal, Tree-shifts of finite type, Theor. Comput. Sci. 459 (2012), 16-25.
Sofic tree-shifts. Theory Comput. Systems. 53, Sofic tree-shifts, Theory Comput. Systems 53 (2013), 621-644.
Tree-shifts: Irreducibility, mixing, and chaos of tree-shifts. J.-C Ban, C.-H Chang, Trans. Am. Math. Soc. 369J.-C. Ban and C.-H. Chang, Tree-shifts: Irreducibility, mixing, and chaos of tree-shifts, Trans. Am. Math. Soc. 369 (2017), 8389-8407.
Tree-shifts: The entropy of tree-shifts of finite type. Nonlinearity. 30, Tree-shifts: The entropy of tree-shifts of finite type, Nonlinearity 30 (2017), 2785-2804.
Decidability of irreducible tree shifts of finite type. J.-C Ban, C.-H Chang, N.-Z Huang, Y.-L Wu, J. Stat. Phys. 177J.-C. Ban, C.-H. Chang, N.-Z. Huang, and Y.-L. Wu, Decidability of irreducible tree shifts of finite type, J. Stat. Phys. 177 (2019), 1043-1062.
Complexity of shift spaces on semigroups. J.-C Ban, C.-H Chang, Y.-H Huang, J. Algebraic Combin. to appearJ.-C. Ban, C.-H. Chang, and Y.-H. Huang, Complexity of shift spaces on semi- groups, J. Algebraic Combin. (2020), to appear.
Equilibrium states and the ergodic theory of Anosov diffeomorphisms. R Bowen, Springer-VerlagBerlin-New YorkR. Bowen, Equilibrium states and the ergodic theory of Anosov diffeomor- phisms, Springer-Verlag, Berlin-New York, 1975.
T Ceccherini-Silberstein, M Coornaert, Cellular automata and groups. Berlin HeidelbergSpringer-VerlagT. Ceccherini-Silberstein and M. Coornaert, Cellular automata and groups, Springer-Verlag Berlin Heidelberg, 2010.
Cellular automata between sofic tree shifts. T Ceccherini-Silberstein, M Coornaert, F Fiorenzi, Z Sunić, Theoret. Comput. Sci. 506T. Ceccherini-Silberstein, M. Coornaert, F. Fiorenzi, and Z.Sunić, Cellular automata between sofic tree shifts, Theoret. Comput. Sci. 506 (2013), 79-101.
Mixing properties for hom-shifts and the distance between walks on associated graphs. N Chandgotia, B Marcus, Pacific J. Math. 294N. Chandgotia and B. Marcus, Mixing properties for hom-shifts and the dis- tance between walks on associated graphs, Pacific J. Math. 294 (2018), 41-69.
Invariant measures for hyperbolic dynamical systems. N Chernov, Handbook of Dynamical Systems. 1Elsevier ScienceN. Chernov, Invariant measures for hyperbolic dynamical systems, Handbook of Dynamical Systems, vol. 1, Elsevier Science, 2002, pp. 321-407.
The description of a random field by means of conditional probabilities and conditions of its regularity. R L Dobrushin, Theory Probab. Appl. 13R. L. Dobrushin, The description of a random field by means of conditional probabilities and conditions of its regularity, Theory Probab. Appl. 13 (1968), 197-224.
Gibbsian random fields for lattice systems with pairwise interactions. Functional Anal. Appl. 2, Gibbsian random fields for lattice systems with pairwise interactions, Functional Anal. Appl. 2 (1968), 292-301.
The problem of uniqueness of a Gibbsian random field and the problem of phase transitions. Functional Anal. Appl. 2, The problem of uniqueness of a Gibbsian random field and the problem of phase transitions, Functional Anal. Appl. 2 (1968), 302-312.
Gibbsian random fields. the general case. Functional Anal. Appl. 3, Gibbsian random fields. the general case, Functional Anal. Appl. 3 (1969), 22-28.
G Fici, F Fiorenzi, Topological properties of cellular automata on trees, AUTOMATA and JAC 2012. G. Fici and F. Fiorenzi, Topological properties of cellular automata on trees, AUTOMATA and JAC 2012, 2012, pp. 255-266.
H.-O Georgii, Gibbs measures and phase transitions. De Gruyter2 ed.H.-O. Georgii, Gibbs measures and phase transitions, 2 ed., De Gruyter, 2011.
G Keller, Equilibrium states in ergodic theory. Cambridge University PressG. Keller, Equilibrium states in ergodic theory, Cambridge University Press, 1998.
Observables at infinity and states with short range correlations in statistical mechanics. O E Lanford, D Ruelle, Commun. Math. Phys. 13O. E. Lanford and D. Ruelle, Observables at infinity and states with short range correlations in statistical mechanics, Commun. Math. Phys. 13 (1969), 194-215.
An introduction to symbolic dynamics and coding. D Lind, B Marcus, Cambridge University PressCambridgeD. Lind and B. Marcus, An introduction to symbolic dynamics and coding, Cambridge University Press, Cambridge, 1995.
Phase transitions of continuous order: Ising model on a Cayley tree. E Müller-Hartmann, Z. Physik B. 22E. Müller-Hartmann, Phase transitions of continuous order: Ising model on a Cayley tree, Z. Physik B 22 (1975), 59-67.
Theory of the Ising model on a Cayley tree. Z. Physik B. 27, Theory of the Ising model on a Cayley tree, Z. Physik B 27 (1977), 161-168.
New type of phase transition. E Müller-Hartmann, J Zittartz, Phys. Rev. Lett. 33E. Müller-Hartmann and J. Zittartz, New type of phase transition, Phys. Rev. Lett. 33 (1974), 893-897.
Tree shift topological entropy. K Petersen, I Salama, Theoret. Comput. Sci. 743K. Petersen and I. Salama, Tree shift topological entropy, Theoret. Comput. Sci. 743 (2018), 64-71.
Entropy on regular trees. Discrete Contin. Dyn. Syst. 40, Entropy on regular trees, Discrete Contin. Dyn. Syst. 40 (2020), 4453- 4477.
Symbolic dynamics on free groups. S T Piantadosi, Discrete Contin. Dyn. Syst. 20S. T. Piantadosi, Symbolic dynamics on free groups, Discrete Contin. Dyn. Syst. 20 (2008), 725-738.
Gibbs measures on cayley trees. U A Rozikov, World Scientific Publishing CompanyU. A. Rozikov, Gibbs measures on cayley trees, World Scientific Publishing Company, 2013.
Thermodynamic formalism: The mathematical structures of classical equilibrium statistical mechanics. D Ruelle, Cambridge University Press2 ed.D. Ruelle, Thermodynamic formalism: The mathematical structures of classi- cal equilibrium statistical mechanics, 2 ed., Cambridge University Press, 2004.
Countable state space Markov random fields and Markov chains on trees. S Zachary, Ann. Prob. 11S. Zachary, Countable state space Markov random fields and Markov chains on trees, Ann. Prob. 11 (1983), 894-903.
Bounded, attractive and repulsive Markov specifications on trees and on the one-dimensional lattice. Stochastic Process. Appl. 20, Bounded, attractive and repulsive Markov specifications on trees and on the one-dimensional lattice, Stochastic Process. Appl. 20 (1985), 247-256.
(Jung-Chao Ban) Department of Mathematical Sciences, National Chengchi University, Taipei 11605, Taiwan, ROC; Math Division, National Center for Theoretical Science, National Taiwan University, Taipei 10617, Taiwan, ROC. Email address: [email protected]
(Chih-Hung Chang) Department of Applied Mathematics, National University of Kaohsiung, Kaohsiung 81148, Taiwan, ROC. Email address: [email protected]
(Yu-Liang Wu) Department of Applied Mathematics, National Chiao Tung University, Hsinchu 30010, Taiwan, ROC. Email address: [email protected]
(Yu-Ying Wu) Department of Mathematics, National Central University, Taoyuan 32001, Taiwan, ROC. Email address: [email protected]
| []
|
[
"Asymmetric correlation matrices: an analysis of financial data",
"Asymmetric correlation matrices: an analysis of financial data"
]
| [
"Giacomo Livan \nAbdus Salam International Centre for Theoretical Physics\nStrada Costiera 1134151TriesteItaly\n",
"Luca Rebecchi \nDipartimento di Fisica Nucleare e Teorica\nUniversità degli Studi di Pavia\nVia Bassi 627100PaviaItaly\n"
]
| [
"Abdus Salam International Centre for Theoretical Physics\nStrada Costiera 1134151TriesteItaly",
"Dipartimento di Fisica Nucleare e Teorica\nUniversità degli Studi di Pavia\nVia Bassi 627100PaviaItaly"
]
| []
| We analyze the spectral properties of correlation matrices between distinct statistical systems. Such matrices are intrinsically non symmetric, and lend themselves to extend the spectral analyses usually performed on standard Pearson correlation matrices to the realm of complex eigenvalues. We employ some recent random matrix theory results on the average eigenvalue density of this type of matrices to distinguish between noise and non trivial correlation structures, and we focus on financial data as a case study. Namely, we employ daily prices of stocks belonging to the American and British stock exchanges, and look for the emergence of correlations between two such markets in the eigenvalue spectrum of their non symmetric correlation matrix. We find several non trivial results, also when considering time-lagged correlations over short lags, and we corroborate our findings by additionally studying the asymmetric correlation matrix of the principal components of our datasets. | 10.1140/epjb/e2012-30085-3 | [
"https://arxiv.org/pdf/1201.6535v2.pdf"
]
| 124,514,937 | 1201.6535 | 70c19e29b46d92ac0ac4df5c397c74cbf828bfdd |
Asymmetric correlation matrices: an analysis of financial data
27 Apr 2012
Giacomo Livan
Abdus Salam International Centre for Theoretical Physics
Strada Costiera 1134151TriesteItaly
Luca Rebecchi
Dipartimento di Fisica Nucleare e Teorica
Università degli Studi di Pavia
Via Bassi 627100PaviaItaly
Asymmetric correlation matrices: an analysis of financial data
27 Apr 2012Received: date / Revised version: datearXiv:1201.6535v2 [q-fin.ST] EPJ manuscript No. (will be inserted by the editor)
We analyze the spectral properties of correlation matrices between distinct statistical systems. Such matrices are intrinsically non symmetric, and lend themselves to extend the spectral analyses usually performed on standard Pearson correlation matrices to the realm of complex eigenvalues. We employ some recent random matrix theory results on the average eigenvalue density of this type of matrices to distinguish between noise and non trivial correlation structures, and we focus on financial data as a case study. Namely, we employ daily prices of stocks belonging to the American and British stock exchanges, and look for the emergence of correlations between two such markets in the eigenvalue spectrum of their non symmetric correlation matrix. We find several non trivial results, also when considering time-lagged correlations over short lags, and we corroborate our findings by additionally studying the asymmetric correlation matrix of the principal components of our datasets.
Introduction
A huge number of scientific disciplines, ranging from Physics to Economics, often need to deal with statistical systems described by a large number of degrees of freedom. Typically, it is very interesting, if not crucial, to analyze the correlations between the random variables describing such degrees of freedom. For this very reason, the development of both analytical and numerical tools to tackle the problem of correlation analysis is a fundamental topic in Multivariate Statistics. In most practical applications, one usually deals with a statistical system described in terms of N random variables R 1 , . . . , R N , and the most obvious thing to do in order to study such a system is to collect as many observations as possible of such R i s. Then, assuming the R i s to be described by a stationary joint probability distribution, the observations can be used to compute empirical time averages of quantities expressed in terms of those variables. So, suppose T equally spaced observations have been collected for each variable, and let us denote the time t (t = 1, . . . , T ) observation of the random variable R i (i = 1, . . . , N ) as R it . Quite straightforwardly, one can collect all such numbers in a N ×T matrix R whose generic entry reads [R] it = R it . The most general correlation structure between the random variables R i would read
⟨R_{it} R_{jt′}⟩ = E_{ij,tt′} , (1)
where ⟨· · ·⟩ denotes the expectation with respect to the joint probability density describing the R_i s. However, in most practical applications the rather involved structure
in equation (1) can be factorized into its "spatial" and temporal parts. Assuming that the random variables R_i s have zero mean and unit standard deviation, one could then write:
⟨R_{it} R_{jt′}⟩ = C_{ij} δ_{tt′} , (2)
and this will also be the case throughout the rest of this paper. In the previous expression, the matrix elements C ij (to be collected in a symmetric matrix C) account for the cross-correlations amongst all possible pairs of variables in the system. On the other hand, the Kronecker delta in (2) means that no auto-correlations are present in the system. Also, this means that each C ij in equation (2) can be estimated as the following time average (where the data are assumed to be standardized):
c ij = 1 T T t=1 R it R jt .(3)
This expression is the very well-known Pearson estimator, and all the c ij s can be collected in a N × N symmetric matrix
c = 1 T RR T ,(4)
which represents a "matrix estimator" for the true correlation matrix C introduced in equation (1). So, the problem of characterizing the correlation structure of a statistical system essentially boils down to the estimation of the N (N − 1)/2 independent entries of its correlation matrix from N T empirical observations. However, depending on the length T of the time series being used, the C ij estimates will inevitably be corrupted by a certain amount of measurement error, and this will eventually cause the whole correlation matrix c to be affected by the same problem. Several filtering recipes have been proposed in the statistical literature in order to partially clean correlation matrices from noise. On the other hand, a possible approach to attack the problem came from the Physics community, represented by the tools and methodologies developed in random matrix theory (RMT). Initially devised by Wigner [1] as a framework in which to model the spectral properties of Hamiltonians of complex physical systems interacting through unknown laws, RMT gradually underwent a more formal evolution, eventually becoming a mathematical theory of its own [2,3] and finding a plethora of applications in extremely different scientific areas [4]. The main RMT result which is commonly used in correlation data analysis is the well known Marčenko-Pastur distribution [5], i.e. the average eigenvalue density for the correlation matrix of a system of uncorrelated Gaussian random variables in the "thermodynamic limit" N, T → ∞, with q . = T /N fixed. Such a distribution intuitively represents a suitable candidate for a "null model" with no correlations. Thus, any deviation between the Marčenko-Pastur distribution and the empirically observed eigenvalue density of the data correlation matrix provides information about the correlation structure of the system under analysis. In the context of financial data analysis, this type of study was first carried out in the late nineties in [6,7], where the spectral properties of the correlation matrix of stocks belonging to the S&P500 Index were analyzed over different time scales. Quite surprisingly, in those works most of the eigenvalue spectrum was shown to be well fitted by a Marčenko-Pastur distribution, whereas only a few larger eigenvalues were shown to carry relevant information on the market correlation structure by "leaking out" of the Marčenko-Pastur region. Ever since such works, physicists kept on analyzing financial correlation matrices, constantly refining the general picture described in [6,7] with increasing levels of insight [8,9,10,11,12,13,14,15,16,17,18,19], and also generalizing the framework defined by equation (2) to also include the effects due to temporal correlations [20,21].
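As a toy illustration (ours, on purely synthetic data), the estimator in equations (3)-(4) amounts to a single matrix product once the series are standardized:

import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 1000
R = rng.standard_normal((N, T))                                         # N series of T observations
R = (R - R.mean(axis=1, keepdims=True)) / R.std(axis=1, keepdims=True)  # zero mean, unit variance
c = R @ R.T / T                                                         # Pearson estimator, eq. (4)
print(np.round(c, 2))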
A quite natural generalization of the above picture is represented by the extension of correlation analyses to two statistical systems S 1 and S 2 , both described in terms of N random variables. Then, one can straightforwardly write down the Pearson estimator (3) for the correlation coefficient between the ith variable in S 1 and the jth variable in S 2 :
k ij = 1 T T t=1 R (1) it R (2) jt .(5)
Even more generally, one could think of the random variables in S 1 as a set of input variables, whose output is in turn described by the variables in S 2 (or vice versa). Then, it would be of great interest to further generalize (5) to the case of time lagged correlations, i.e.
k ij (τ ) = 1 T − τ T −τ t=1 R (1) it R (2) j,t+τ ,(6)
so that equation (5) is recovered for τ = 0. Following the previously outlined framework, it is of course convenient to collect all the k_{ij}(τ) estimates in a N × N matrix k(τ). However, the most notable difference of such a matrix with respect to "ordinary" correlation matrices is that it is no longer symmetric, since k_{ij}(τ) ≠ k_{ji}(τ). Hence, its eigenvalues will in general be complex, and this feature, as we shall see later, will widely enrich the possible spectral analyses to be performed, and the subsequent considerations on the correlations between the two statistical systems to be studied. In a financial context, it is quite interesting to interpret S_1 and S_2 as two different financial markets, so that the matrix k(τ) will encode all of the relevant information on the possible correlations between them. In such a framework, we shall interpret R^{(M)}_{i,t} as the daily log-return R^{(M)}_{i,t} = log(S^{(M)}_{i,t}/S^{(M)}_{i,t−1}), where S^{(M)}_{i,t} denotes the time t spot price of asset i in market M.
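A sketch (ours) of the time-lagged estimator (6): for two standardized N × T return matrices it reduces to a single matrix product, and for τ = 0 it reproduces equation (5).

import numpy as np

def asymmetric_corr(R1, R2, tau=0):
    # k_ij(tau) = (1 / (T - tau)) * sum_t R1[i, t] * R2[j, t + tau]
    T = R1.shape[1]
    if tau < 0:
        return asymmetric_corr(R2, R1, -tau).T
    return R1[:, :T - tau] @ R2[:, tau:].T / (T - tau)

rng = np.random.default_rng(1)
R1 = rng.standard_normal((4, 500))
R2 = rng.standard_normal((4, 500))
eigs = np.linalg.eigvals(asymmetric_corr(R1, R2, 1))   # generally complex eigenvalues
print(eigs)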
The purpose of this paper is twofold. After briefly reviewing the most relevant spectral features of asymmetric correlation matrices as the one introduced in equation (6), our first goal will be to look for an empirical realization of this type of matrices, providing some possible methodological guidelines to unravel the genuine correlations between two distinct complex systems. As anticipated, we choose financial data as a case study. So, our second main goal will be the one of verifying whether asymmetric correlation matrices can prove to be a valuable tool for the description of relevant stylized facts observed in financial markets. Admittedly, in this respect the choice of working with matrices of the type (6) represents a limitation, since one needs the matrix k(τ ) to be square (so it has eigenvalues), and this forces one to consider an equal number N of stocks in the two markets. Working with singular values, as in [22], removes this constraint. However, we believe our first, more general, goal to justify such a limitation.
Before we start to detail our study, it is worth mentioning that an analysis of financial data based on asymmetric matrices was first attempted in [23]. However, the random matrix benchmark used in that work was represented by the Ginibre orthogonal ensemble (GinOE), i.e. the ensemble of random matrices with independent Gaussian real entries and no symmetry requirement. Despite producing complex eigenvalues, the spectral structure of the GinOE is completely different from the one produced by the random version of asymmetric correlation matrices as the one in equation (6). Thus, we believe the analyses to be presented in our paper to be based on more solid theoretical grounds.
The paper is organized as follows. In Section 2 the RMT results concerning the average eigenvalue density of random asymmetric correlation matrices will be overviewed.
Then, the case study on financial data will be detailed in Section 3, where the two subsystems S 1 and S 2 will be represented by the American and British stock exchanges, respectively. The empirical results discussed in Section 3 will be corroborated in Section 4 by investigating the spectral properties of the standard Pearson correlation matrix of the two datasets to be used. The paper will then be concluded with some final remarks in Section 5.
Random asymmetric correlation matrices
The asymmetric correlation matrix in equation (6) can be clearly written as a product of two matrices:
k(τ) = (1/(T − τ)) R^{(1)}_0 (R^{(2)}_τ)^T , (7)
where [R^{(1,2)}_l]_{it} = R^{(1,2)}_{i,t+l}. In the following, we shall consider the case in which both matrices in the right hand side of equation (7) are random (in a sense to be made rigorous in a moment). Not many results are known on the spectra of products of random matrices (see for example [24,25,26,27]) as the one in equation (7), and most of them only describe "microscopic" spectral properties. However, in [28] an equation for the average eigenvalue density for a product of an arbitrary number of large Gaussian random matrices was derived. Such equation was derived by means of a planar diagram expansion (see [29] for a step by step introduction to this technique) under the assumption of all matrix dimensions going to infinity with their ratios kept fixed. Also, quite importantly for our present discussion, the aforementioned equation can be solved exactly for the product of two matrices, as in equation (7). More precisely, assuming all matrix entries in both R^{(1)}_0 and R^{(2)}_τ to be independent and identically distributed Gaussian random numbers with zero mean and unit variance, the average eigenvalue density (in the complex plane) for the k matrix can be shown [28] to be:
ρ k (λ, λ * ) = q 2 π √ (1−q) 2 +4q 2 |λ| 2 for |λ| ≤ q −1/2 0 for |λ| > q −1/2 ,(8)
where again we have q = T /N and * denotes complex conjugation. Thus, in the thermodynamic limit N, T → ∞ with q held fixed, the average eigenvalue density ρ k displays circular symmetry within a circle of radius q −1/2 centered in the origin of the complex plane. However, for any finite matrix dimension N , the circular symmetry is broken, due to the fact that Tr[k (12) (τ )] is a real number, and this introduces a constraint on the eigenvalues. Thus, for any finite N an excess of eigenvalues lying on the real axis, which can be shown to decrease as √ N [30], can be observed (see Figure 1). When considering complex rather than real entries for k (12) , circular symmetry is recovered also for finite values of N . Since the leading order (in N ) results obtained for the eigenvalue densities with real and complex entries coincide, when taking the infinite matrix size limit one eventually ends up with the density in equation (8) in both cases.
Given the circular symmetry, one can safely work with the radial eigenvalue density derived from (8), which reads ρ rad k (x) = 2πxρ k (λ, λ * )| |λ|=x . Now, the thermodynamic limit density (8) reaches a finite value at the boundary of its domain (|λ| = q −1/2 ), and then abruptly becomes equal to zero. However, when working with finite sized matrices, this transition is smoothed according to the following damping (conjectured in [28], inspired by analogous finite size corrections that can be introduced rigorously for the Ginibre random matrix ensembles [4,31], and actually proved in [27]):
ρ eff k (x) = 1 2 ρ rad k (x) erfc(h(x − q −1/2 )),(9)
where the parameter h is phenomenological and needs to be adjusted by fitting. See Figure 2 for an example: as can be seen, the excess of eigenvalues on the real axis almost does not affect the overall shape of the radial density, even for relatively small matrix dimensions. Thus, in all of our following analyses we shall freely compare empirical data with the density in equation (9).
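The benchmark (8) and its finite-size smoothing (9) are straightforward to evaluate numerically; the snippet below (ours, mirroring the formulas as written above, with an arbitrary value of the phenomenological parameter h) returns the radial profile used in the comparisons.

import numpy as np
from scipy.special import erfc

def rho_radial(x, q):
    # 2*pi*x*rho_k(|lambda| = x), with rho_k as in eq. (8) before imposing the hard edge at q**-0.5
    return 2 * np.pi * x * q / (2 * np.pi * np.sqrt((1 - q) ** 2 + 4 * q ** 2 * x ** 2))

def rho_effective(x, q, h):
    # finite-size damping of the sharp edge, eq. (9); h has to be adjusted by fitting
    return 0.5 * rho_radial(x, q) * erfc(h * (x - q ** -0.5))

x = np.linspace(0.0, 1.0, 201)
print(rho_effective(x, q=8.0, h=25.0)[:5])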
Empirical analysis
In this section we shall look for an empirical realization of the asymmetric correlation matrix (6) in a financial context. Namely, as already anticipated, in the following we shall consider two different financial markets as the two statistical systems from which the data R^{(1)}_{it} and R^{(2)}_{j,t+τ} (see again equation (6)) are drawn. In particular, we shall focus on the American and British financial markets by employing prices of stocks belonging to the S&P500 Index and the FTSE350 Index. The dataset is made of daily prices of N = 200 stocks from each market (400 stocks overall) covering the years 2005-2011 (T = 1595 log-returns). It is important to remark that, in order to empirically recreate the correlation matrix (6) (especially for τ = 0, as in equation (5)), it is mandatory to work with data well defined on the same time steps t = 1, . . . , T. For this very reason, prices collected from the American market during British holidays (and vice versa) were removed from the datasets.
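For concreteness, the construction of the time-lagged matrix (6) from two aligned N × T arrays of standardized log-returns can be sketched as follows. This is an illustrative numpy snippet; the arrays R_sp and R_ftse are synthetic stand-ins for the actual S&P500 and FTSE350 log-returns, and the function name is ours.

```python
# Building the asymmetric correlation matrix k_ij(tau) of equation (6)
# from two aligned N x T arrays of standardized log-returns.
import numpy as np

def lagged_cross_correlation(R1, R2, tau):
    """k_ij(tau) = (1/(T-tau)) * sum_t R1[i, t] * R2[j, t+tau]."""
    N, T = R1.shape
    if tau >= 0:
        A, B = R1[:, : T - tau], R2[:, tau:]
    else:  # negative lags: market 2 leads market 1
        A, B = R1[:, -tau:], R2[:, : T + tau]
    return A @ B.T / A.shape[1]

# Synthetic stand-ins for the two markets (real data would be standardized
# daily log-returns on a common set of trading days).
rng = np.random.default_rng(1)
N, T = 200, 1595
R_sp = rng.standard_normal((N, T))
R_ftse = rng.standard_normal((N, T))

k0 = lagged_cross_correlation(R_sp, R_ftse, tau=0)
k1 = lagged_cross_correlation(R_sp, R_ftse, tau=1)
print(k0.shape, k1.shape)          # both (200, 200)
print("largest |eigenvalue| at tau=0:", np.abs(np.linalg.eigvals(k0)).max())
```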
When actually computing the eigenvalue spectrum of the generalized correlation matrix (6) for the aforementioned S&P500 and FTSE350 datasets, two main features can be clearly distinguished: a main eigenvalue bulk close to zero and one large (in modulus) eigenvalue. We shall separately discuss those two aspects.
The largest eigenvalue
In the following, the variables R^{(1)}_{it} in equation (6) denote the log-returns of stocks belonging to the S&P500 Index, whereas the variables R^{(2)}_{j,t+τ} represent log-returns of stocks belonging to the FTSE350 Index.
In Figure 3 the largest (in absolute value) eigenvalue |λ_MAX| is plotted as a function of τ (blue solid line). It is worth remarking that, except for a few cases, this eigenvalue is found to be real. Intuitively, this is because it accounts for most of the trace of the k(τ) matrix, which is a real number too. Now, as one can see from Figure 3, the largest values of |λ_MAX| are found for τ = 0, 1. More specifically, in both such cases λ_MAX is real and we have λ_MAX(τ = 0) = 36.4 and λ_MAX(τ = 1) = 23.3. Quite interestingly, one finds λ_MAX(τ = −1) = 3.1, much smaller than λ_MAX(τ = 1). This asymmetry highlights (also in the light of the interpretation of λ_MAX as an average correlation, to be discussed in the following) a strong influence of past American stock prices on the following day's British stock prices.
In order to verify the robustness of such evidence, we also computed the values of λ_MAX for τ = 0, ±1 over eight different portions of our datasets (all of them made of 1195 daily log-returns and starting at t = 1, 50, 100, . . . , 350). Over such eight samples we find, for τ = 0, an average value of λ̄_MAX(τ = 0) = 37.3 with a standard deviation σ(τ = 0) = 1.5, whereas for τ = ±1 we find λ̄_MAX(τ = 1) = 24.1 and λ̄_MAX(τ = −1) = 3.8 with standard deviations σ(τ = 1) = 0.1 and σ(τ = −1) = 0.4. In some cases these estimates appear to be close to, but not perfectly compatible with, the ones reported above for the whole dataset. However, this does not point to any inconsistency, since the averages and standard deviations are computed over (sometimes largely) overlapping time windows. They are therefore not meant for any serious statistical comparison, but only to show qualitatively how the estimates for λ_MAX fluctuate over time.
For values of τ other than 0 and ±1, |λ_MAX| seems to follow a random path, approximately lying between 0 and 10. The interesting point, however, is that λ_MAX is very often found to be much larger than the limiting radius predicted by RMT for the eigenvalue density of random asymmetric correlation matrices. As already detailed in the previous section, such a radius is equal to q^{-1/2} = √(N/T) (see equation (8)). With the values of N and T of our dataset we have q^{-1/2} ≈ 0.35, much smaller than most values of |λ_MAX|. At first, this might seem to suggest the existence of some non-trivial long-range correlation. On the contrary, such persistently high values can be shown to be a spurious effect by means of the following argument. Let k̄(τ) be the average estimated correlation between stocks in the two markets, i.e.
\bar{k}(\tau) = \frac{1}{N^2} \sum_{i,j=1}^{N} k_{ij}(\tau), \qquad (10)
with k_{ij}(τ) defined as in equation (6). Let us then approximate the whole matrix as k(τ) ≈ k̄(τ) E_N, where E_N is the N × N matrix whose entries are all equal to one: this amounts to approximating all correlations in k(τ) with their average. Now, it can easily be shown that the matrix E_N has one eigenvalue equal to N and N − 1 eigenvalues equal to zero. Under such a "mean field" approximation the eigenvalue spectrum of k(τ) would read
\det\left(k(\tau) - \lambda \mathbb{1}_N\right) \;\sim\; \det\left(\bar{k}(\tau)\, E_N - \lambda \mathbb{1}_N\right) \;=\; (-\lambda)^{N-1}\left(\bar{k}(\tau)\, N - \lambda\right), \qquad (11)
where 𝟙_N represents the N × N identity matrix. Equation (11) means that we would have N − 1 zero modes plus one eigenvalue equal to k̄(τ)N. Quite remarkably, this simple and apparently very rough approximation is actually enough to explain the persistence of a large eigenvalue over large time lags: the red dashed line in Figure 3 represents |k̄(τ)|N, and one can see how closely it follows the path of the largest eigenvalue |λ_MAX|. All in all, the latter merely reflects the average correlation for a given value of the time lag τ. Most importantly, this is also true for τ = 0, 1, i.e. when |λ_MAX| reaches its highest measured values, and this tells us, unsurprisingly, that the average correlations are much higher for those values of τ. For other values of τ, the average correlation approximately lies between −0.04 and 0.04 (i.e. very small values), but the enhancing factor N causes the corresponding large eigenvalue |k̄(τ)|N to lie between 0 and 10, as already stated. In the genuinely random matrix model for k(τ) outlined in Section 2, the average correlation k̄(τ) is very strongly suppressed, so that no large and isolated eigenvalues can appear.
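The mean-field estimate above is easy to check numerically. The sketch below is illustrative numpy code: it uses synthetic returns driven by a common factor (so that k̄ ≠ 0) rather than the actual market data, and simply compares |λ_MAX| with N k̄.

```python
# Checking the "mean field" estimate lambda_MAX ~ kbar * N of equations (10)-(11).
import numpy as np

rng = np.random.default_rng(2)
N, T = 200, 1595

# Two synthetic markets driven in part by a common factor, so kbar != 0.
common = rng.standard_normal(T)
R1 = 0.6 * common + 0.8 * rng.standard_normal((N, T))
R2 = 0.6 * common + 0.8 * rng.standard_normal((N, T))
R1 = (R1 - R1.mean(1, keepdims=True)) / R1.std(1, keepdims=True)
R2 = (R2 - R2.mean(1, keepdims=True)) / R2.std(1, keepdims=True)

k = R1 @ R2.T / T
kbar = k.mean()                       # average correlation, eq. (10)
lam_max = np.abs(np.linalg.eigvals(k)).max()
print(f"|lambda_MAX| = {lam_max:.2f},  N * kbar = {N * kbar:.2f}")
# The two numbers track each other closely, as in Figure 3 (red dashed line).
```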
Bulk of the spectrum
In Figure 4 the main part of the radial eigenvalue spectrum of the k(τ) matrix constructed with the aforementioned S&P and FTSE datasets is plotted (blue dots) for τ = 0 and τ = 100. In order to improve the statistics, a bootstrap approach is followed: namely, 200 iterations are performed and, for each of those, the k(τ) matrix is constructed by randomly selecting 190 stocks out of the 200 available ones in each of the two stock sets. This is done under the reasonable assumption that the eigenvalue spectrum will not be drastically affected, at least in its overall appearance, by the particular stock selection. Also, in both plots of Figure 4 the effective radial eigenvalue density (9) predicted by RMT for N = 190 and T = 1595 (T = 1495 for the case τ = 100) is shown. In both cases the h parameter was determined by fitting on Monte Carlo densities with very large statistics. As is immediately apparent, for both of the considered values of τ the empirical and theoretical densities bear no similarity at all. Also, trying to fit the effective density (9) with the q ratio treated as a free parameter (much in the same spirit of what was done in [6,7] with the Marčenko-Pastur density) does not provide acceptable results, essentially due to the much slower falloff of the empirical densities with respect to the exponential one of the RMT radial density (9). At first, one might naively interpret such discrepancies, especially the one for τ = 100, as a sign of some long-range time correlations between the markets under study. However, one should recall that the RMT densities in equations (8) and (9) are derived for the k(τ) matrix in (6) under the assumption that the two sub-systems have no mutual correlation and no correlation of their own. This is a crucial point: using a large time lag τ should suppress all correlations between the stocks in the two datasets, and this is actually confirmed by the previous analysis on the τ-dependence of the average correlation k̄. However, using a sliding time window does not suppress the self-correlations within each market: figuratively speaking, those are "dragged along" by the sliding window τ itself. Hence, one should try to disentangle the two different types of correlations, getting rid of the inner ones while retaining only those existing between the two sub-systems. Quite naturally, this task can be accomplished by mapping the original variables onto the corresponding sets of principal components.
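The bootstrap procedure used for Figure 4 amounts to repeatedly subsampling the stock universe and accumulating eigenvalue radii. A minimal sketch is given below (numpy; the subsample size and the number of iterations follow the values quoted in the text, while the data are synthetic placeholders).

```python
# Bootstrap over stock subsets: accumulate eigenvalue radii of k(tau)
# over many random selections of 190 out of 200 stocks per market.
import numpy as np

def bootstrap_radii(R1, R2, tau, n_iter=200, subset=190, seed=3):
    rng = np.random.default_rng(seed)
    N, T = R1.shape
    radii = []
    for _ in range(n_iter):
        i1 = rng.choice(N, size=subset, replace=False)
        i2 = rng.choice(N, size=subset, replace=False)
        A, B = R1[i1, : T - tau], R2[i2, tau:]
        k = A @ B.T / (T - tau)
        radii.append(np.abs(np.linalg.eigvals(k)))
    return np.concatenate(radii)

rng = np.random.default_rng(4)
R1 = rng.standard_normal((200, 1595))
R2 = rng.standard_normal((200, 1595))
radii = bootstrap_radii(R1, R2, tau=0)
print(radii.size, "eigenvalue radii collected")   # 200 * 190 values
```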
Mapping onto principal components
Starting from our two datasets, let us construct their standard Pearson correlation matrices in the usual way as
C^{(1)} = R^{(1)} (R^{(1)})^{T} / T \quad \text{and} \quad C^{(2)} = R^{(2)} (R^{(2)})^{T} / T.
Denoting their eigenvalues as λ_{1,i} and λ_{2,j} (for i, j = 1, . . . , N), and the corresponding eigenvectors as V^{(1,i)} and V^{(2,j)}, we can map the original variables onto the principal components
e^{(M)}_{it} = \frac{1}{\sqrt{\lambda_{M,i}}} \sum_{j=1}^{N} V^{(M,i)}_j\, R^{(M)}_{jt}, \qquad (12)
where M = 1, 2 and R^{(1)}_{jt} and R^{(2)}_{jt} are as in equation (6). Now, exploiting eigenvector orthogonality one can immediately verify that principal components are exactly uncorrelated:
\frac{1}{T} \sum_{t=1}^{T} e^{(M)}_{it}\, e^{(M)}_{jt} = \delta_{ij}. \qquad (13)
Moreover, inverting equation (12) one can expand any of the original variables in terms of principal components:
R^{(M)}_{it} = \sum_{j=1}^{N} \sqrt{\lambda_{M,j}}\; V^{(M,j)}_i\, e^{(M)}_{jt}. \qquad (14)
This relation is exact and shows that any of the random variables R^{(M)}_{it} can be decomposed over a set of uncorrelated variables, whose explanatory power (in terms of variance) of the original variables' dynamics can be ranked depending on the size of the corresponding eigenvalues.
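A compact way to carry out the mapping of equations (12)-(14) in practice is sketched below. This is an illustrative numpy implementation under the stated conventions (eigenvalues sorted in decreasing order, synthetic data); it also verifies the exact decorrelation property (13) and the reconstruction (14).

```python
# Principal components of one market: e_{it} = lambda_i^{-1/2} sum_j V^{(i)}_j R_{jt},
# so that (1/T) sum_t e_{it} e_{jt} = delta_ij and R = sum_j sqrt(lambda_j) V^{(j)} e_j.
import numpy as np

def principal_components(R):
    N, T = R.shape
    C = R @ R.T / T                                  # Pearson correlation matrix
    lam, V = np.linalg.eigh(C)                       # ascending eigenvalues
    lam, V = lam[::-1], V[:, ::-1]                   # sort descending
    e = (V.T @ R) / np.sqrt(lam)[:, None]            # principal components, eq. (12)
    return lam, V, e

rng = np.random.default_rng(5)
R = rng.standard_normal((50, 400))
R = (R - R.mean(1, keepdims=True)) / R.std(1, keepdims=True)

lam, V, e = principal_components(R)
T = R.shape[1]
print("decorrelation check:", np.allclose(e @ e.T / T, np.eye(len(lam)), atol=1e-8))
print("reconstruction check:", np.allclose((V * np.sqrt(lam)) @ e, R, atol=1e-8))
```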
Principal components therefore appear to be a quite appealing set of variables to use in the framework of asymmetric correlation matrices between two distinct financial markets. As already stated, the huge and persistent (over large time lags) deviations between empirical spectra and RMT predictions seem to be due to the inner correlations of the two markets, and switching to principal components circumvents this problem. Let us then introduce the asymmetric correlation matrix between the principal components of the two datasets in use. We shall write the correlation coefficients as
k^{(e)}_{ij}(\tau) = \frac{1}{T-\tau} \sum_{t=1}^{T-\tau} e^{(1)}_{it}\, e^{(2)}_{j,t+\tau}, \qquad (15)
and we shall collect them in a matrix k^{(e)}(τ). Since the principal components in each set are completely uncorrelated, any deviation of the eigenvalue spectrum of k^{(e)}(τ) from the pure-noise RMT prediction can only be imputed to correlations between the two sub-systems under study, encoded as correlations between their respective principal components. Even more interestingly, as is quite well known, the first few principal components, i.e. the dominant ones related to the largest eigenvalues, can be given a simple financial interpretation (see for example [11]): the first one arises as a consequence of collective market fluctuations (hence it is usually called the "market mode"), and the first few after that generally correspond to market sectors. Hence, before studying the whole spectrum of the k^{(e)}(τ) matrix, let us take a look at the correlations between such variables. In Figures 5 and 6 the τ-dependence of some matrix elements of k^{(e)}(τ) is shown. Namely, in Figure 5 the correlation coefficients k^{(e)}_{11} and k^{(e)}_{22} between the two main principal components (i.e. those related to the two largest eigenvalues) of the two datasets are plotted. Such principal components account for 46.5% of the overall data variance in the S&P dataset, and for 31.7% in the FTSE dataset. As one can see, quite strong correlations (either positive or negative) are again found for τ = 0, 1: k^{(e)}_{11}(τ = 0) = 0.54, k^{(e)}_{11}(τ = 1) = 0.34 and k^{(e)}_{22}(τ = 0) = −0.31, k^{(e)}_{22}(τ = 1) = −0.29. For different values of τ, much smaller values are found, similarly to the case of the largest eigenvalue (see Figure 3). On the contrary, correlations between the first and second principal components of the two datasets, encoded in the matrix elements k^{(e)}_{12} and k^{(e)}_{21}, are found to be quite small for all values of τ (see Figure 6). Similar facts, i.e. strong "diagonal" correlations for τ = 0, 1 and weak "off-diagonal" correlations, are observed also when considering the other most relevant principal components. This is quite interesting since, apart from the first components e_1, which represent the market modes, the other most relevant principal components do not necessarily represent one well-defined market sector or the same sector in the two markets. Nevertheless, their quite strong mutual correlations for τ = 0, 1 suggest that they encode relevant information about "orthogonal" (in the sense made rigorous by PCA) market portions, which remain "orthogonal" across different financial markets (as demonstrated by the small "off-diagonal" correlations k^{(e)}_{ij}(τ) for i ≠ j). As a concluding remark to this discussion, let us also clarify how the correlations between different principal components impact those between the "true", original variables (daily log-returns in our case). Starting from the correlation coefficient (6), and using equation (14), one finds
k_{ij}(\tau) = \frac{1}{T-\tau} \sum_{t=1}^{T-\tau} R^{(1)}_{it} R^{(2)}_{j,t+\tau}
= \frac{1}{T-\tau} \sum_{t=1}^{T-\tau} \sum_{l,s=1}^{N} \sqrt{\lambda_{1,l}\,\lambda_{2,s}}\; V^{(1,l)}_i V^{(2,s)}_j\, e^{(1)}_{lt}\, e^{(2)}_{s,t+\tau}
= \sum_{l,s=1}^{N} \sqrt{\lambda_{1,l}\,\lambda_{2,s}}\; V^{(1,l)}_i V^{(2,s)}_j\, k^{(e)}_{ls}(\tau), \qquad (16)
and from this relation one sees that, unsurprisingly, the largest eigenvalues and the largest correlations between principal components account for most of the correlations between the original variables. Defining two N × N matrices W^{(1)} and W^{(2)} with entries
W^{(M)}_{ij} = \sqrt{\lambda_{M,j}}\; V^{(M,j)}_i \qquad (17)
(where M = 1, 2) allows us to rewrite equation (16) in matrix form:
k(\tau) = W^{(1)}\, k^{(e)}(\tau)\, \left(W^{(2)}\right)^{T}. \qquad (18)
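Equation (18) can be verified numerically by building the W matrices of equation (17) from the two eigendecompositions and comparing both sides at τ = 0. The following self-contained numpy sketch (synthetic data, illustrative sizes) does exactly that.

```python
# Numerical check of k(tau) = W^(1) k^(e)(tau) (W^(2))^T, eq. (18), at tau = 0.
import numpy as np

rng = np.random.default_rng(6)
N, T = 40, 300
R1 = rng.standard_normal((N, T)); R1 = (R1 - R1.mean(1, keepdims=True)) / R1.std(1, keepdims=True)
R2 = rng.standard_normal((N, T)); R2 = (R2 - R2.mean(1, keepdims=True)) / R2.std(1, keepdims=True)

def decompose(R):
    lam, V = np.linalg.eigh(R @ R.T / R.shape[1])
    e = (V.T @ R) / np.sqrt(lam)[:, None]            # principal components, eq. (12)
    W = V * np.sqrt(lam)                              # W_ij = sqrt(lam_j) V^(j)_i, eq. (17)
    return e, W

e1, W1 = decompose(R1)
e2, W2 = decompose(R2)

k_direct = R1 @ R2.T / T                              # eq. (6) at tau = 0
k_e = e1 @ e2.T / T                                   # eq. (15) at tau = 0
print("eq. (18) holds:", np.allclose(W1 @ k_e @ W2.T, k_direct, atol=1e-10))
```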
We shall come back to this point later. Finally, let us look at the eigenvalue spectrum of the asymmetric correlation matrix k^{(e)}(τ) of the principal components. In Figure 7 the empirical radial eigenvalue spectra of the k^{(e)}(τ) matrix are plotted for τ = 0, 1, 30 (top-left, top-right and bottom, respectively). In all cases, the same bootstrap approach already adopted for the spectra in Figure 4 is used, i.e. 200 iterations are performed, each time randomly selecting N = 190 stocks out of the 200 available ones in each dataset. As can be seen, for all values of τ one now ends up with an eigenvalue spectrum which is much closer to the one predicted by RMT (solid line in all plots of Figure 7) than when using the original variables (Figure 4). For τ = 0, 1 significant correlations between the two markets under study exist, as pointed out in the previous analyses, and this is reflected in visible deviations between the empirical and the theoretically expected eigenvalue density. For larger values of τ (exemplified by the bottom plot of Figure 7) the overall agreement improves: the exponential falloff of the RMT density is quite well reproduced (whereas for τ = 0, 1 this is not the case), but an excess of eigenvalues around the peak region of the distribution can still be clearly seen. However, even though the agreement between data and theory is not excellent even after switching to principal components, the main point to be discussed is the following: all the theoretical densities in Figure 7 are fitted to the empirical histograms, allowing both h and q (see equation (9)) to be free parameters. Now, whereas the former parameter is phenomenological by definition, the latter should in principle be given by the ratio T/N. The values of N and T used in Figure 7 give q ≈ 8.4, while by fitting one obtains q = 5.59, q = 5.64 and q = 6.08 for τ = 0, 1, 30 respectively: in all cases the effective q parameter is very different from its expected value. Moreover, one can also check that by performing one and the same time reshuffling for all the e^{(1)}s and another one (different from the first) for all the e^{(2)}s, the expected value of q is essentially recovered (see Figure 8, where the radial density (9) is fitted giving q = 8.24 when τ = 0 and q = 8.15 when τ = 30, very close to the "natural" value q ≈ 8.4). So, how should this result be interpreted?
Performing one time reshuffling within one dataset and a different one within the other one has the following effects on the different types of correlations involved:
- Performing one and the same reshuffling for all the variables within one set keeps their mutual cross-correlations intact. Since the variables being dealt with here are principal components, this kind of reshuffling keeps them uncorrelated (see equation (13)).
- Since the two reshufflings performed on the two datasets are different, all correlations between variables belonging to different sub-systems are destroyed.
- Performing a time reshuffling on a time series reasonably destroys all possible autocorrelations in it.
As a matter of fact, the first two points in the above list empirically recreate the conditions under which the RMT density (9) is derived, i.e. no self-correlations within each system and no correlation between the two. However, such conditions are essentially met also when the correlations amongst principal components are computed for large enough values of τ (see Figures 5 and 6), and yet the example shown in Figure 7 shows that the RMT prediction is still not recovered in that case, since one ends up with an effective value of q which is quite far from the expected one. So, the last point in the above list appears to be the crucial one.
Finding smaller values of q than the "natural" ones, as in Figure 7, amounts to larger effective values of N or smaller effective values of T, and the latter seems the only possible option. As a matter of fact, principal component analysis guarantees that no cross-correlations exist between principal components (equation (13)). Nevertheless, nothing prevents such variables from displaying autocorrelations, in contrast to the original variables, i.e. the log-returns, which are known not to display any relevant autocorrelation (see for example [32]). In this respect, see Figure 9, where the autocorrelations (as a function of τ) of the first two principal components of each of our datasets are plotted; the 99.7% confidence interval for a purely random process of length T is shown in red. As can be clearly seen, the interval boundaries are crossed several times, thus illustrating that the main principal components do indeed feature autocorrelations (similar behaviors are found for all other principal components). On a qualitative level, the presence of autocorrelations reduces the number of degrees of freedom in the system, and justifies the need to adjust T accordingly to an effective dimensionality [33].
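The autocorrelation diagnostic of Figure 9 can be reproduced with a few lines of code. The sketch below is illustrative: it uses a synthetic AR(1)-like series in place of an actual principal component, and approximates the 99.7% band for a purely random process by ±3/√T.

```python
# Sample autocorrelation of a principal component with a 99.7% noise band (~ +/- 3/sqrt(T)).
import numpy as np

def autocorrelation(x, max_lag):
    x = (x - x.mean()) / x.std()
    T = len(x)
    return np.array([np.dot(x[: T - l], x[l:]) / T for l in range(1, max_lag + 1)])

rng = np.random.default_rng(7)
T = 1595
# AR(1)-like series as a stand-in for a principal component with weak memory.
pc = np.zeros(T)
for t in range(1, T):
    pc[t] = 0.1 * pc[t - 1] + rng.standard_normal()

acf = autocorrelation(pc, max_lag=50)
band = 3.0 / np.sqrt(T)
print("lags exceeding the 99.7% band:", np.where(np.abs(acf) > band)[0] + 1)
```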
Joint correlation matrix
Before concluding, let us complement our analyses on asymmetric correlation matrices by studying the standard correlation matrix of our whole dataset. Let us then consider the following 2N × T matrix:
R = \begin{pmatrix} R^{(1)} \\ R^{(2)} \end{pmatrix}, \qquad (19)
where R (1) and R (2) are two N × T matrices containing the time series of our S&P and FTSE datasets, respectively. From the matrix in equation (19), we can build the ordinary Pearson correlation matrix as in equation (4):
c = \frac{1}{T} R R^{T} = \frac{1}{T}
\begin{pmatrix}
R^{(1)} (R^{(1)})^{T} & R^{(1)} (R^{(2)})^{T} \\
R^{(2)} (R^{(1)})^{T} & R^{(2)} (R^{(2)})^{T}
\end{pmatrix}. \qquad (20)
So, the asymmetric correlation matrix (for τ = 0) k = R^{(1)}(R^{(2)})^{T}/T and its transpose are embedded as the off-diagonal blocks of a larger object, which we shall call the joint correlation matrix, and which has real eigenvalues.
Fig. 10. Eigenvalue spectrum of the joint correlation matrix in equation (20). For better visualization, the two largest eigenvalues, equal to 112.7 and 31.8, have not been plotted.
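Constructing the joint correlation matrix (20) is a matter of stacking the two return matrices; the numpy sketch below (synthetic data) builds it and confirms that its off-diagonal blocks coincide with the τ = 0 asymmetric matrix k and its transpose.

```python
# Joint correlation matrix of eq. (20): stack the two N x T return matrices
# into a 2N x T matrix and take the ordinary Pearson correlation.
import numpy as np

rng = np.random.default_rng(8)
N, T = 200, 1595
R1 = rng.standard_normal((N, T))
R2 = rng.standard_normal((N, T))

R = np.vstack([R1, R2])                 # eq. (19), shape (2N, T)
c = R @ R.T / T                         # eq. (20), shape (2N, 2N)

# Off-diagonal blocks are the tau = 0 asymmetric matrix k and its transpose.
k = R1 @ R2.T / T
print(np.allclose(c[:N, N:], k), np.allclose(c[N:, :N], k.T))
print("largest eigenvalues:", np.sort(np.linalg.eigvalsh(c))[-3:])
```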
The eigenvalue spectrum of the joint correlation matrix in equation (20) displays one main bulk (see Figure 10), plus a few eigenvalues "leaking out" of such bulk. Some of those can already be seen in Figure 10, but not the largest two, equal to λ_1 = 112.7 and λ_2 = 31.8, i.e. much larger than all the remaining ones. Very interestingly, some intuition on the meaning of such eigenvalues can be gained by means of principal component analysis. Let us denote the eigenvalues of the joint correlation matrix c as λ_1 > λ_2 > . . . > λ_{2N}, and the corresponding normalized eigenvectors as V^{(i)} = (V^{(i)}_1, . . . , V^{(i)}_{2N}), for i = 1, . . . , 2N. Denoting the principal components as e_i, equation (14) can be specialized to the present case by writing
R_{it} = \sum_{j=1}^{2N} \sqrt{\lambda_j}\; V^{(j)}_i\, e_{jt}. \qquad (21)
In the above equation, values of the index i from 1 to N refer to stocks belonging to the S&P Index, while values from N + 1 to 2N refer to stocks in the FTSE Index. Given the above considerations on the eigenvalue spectrum of the c matrix, it is certainly interesting to look at the components of the eigenvectors V^{(1)} and V^{(2)}, i.e. the eigenvectors corresponding to the two largest eigenvalues. In Figure 11 the components of V^{(1)} are reported, distinguishing those related to S&P stocks (solid line) from those related to FTSE stocks (dashed line). As one can see, both component groups are positive and they partially overlap. Thus, from equation (21) one can conclude that the first principal component of c impacts all stocks in approximately the same way. On the contrary, one can see in Figure 12 that the eigenvector components of V^{(2)} split into two well separated groups: components related to S&P stocks are positive, while components related to FTSE stocks are negative. Also, one can verify that the component distributions of all the remaining eigenvectors (from V^{(3)} to V^{(2N)}) almost exactly overlap. These facts suggest the following interpretation. The largest eigenvalue λ_1 is a "global market" eigenvalue, meaning that the corresponding principal component, accounting for 28.2% of the overall data variance, drives both markets in the same direction (all V^{(1)}_i positive), and roughly drives all of their stocks with the same intensity (partial overlap of the two distributions in Figure 11). On the other hand, Figure 12 makes it clear that the principal component related to the second largest eigenvalue, accounting for almost 8% of the overall data variance, is the main source of negative correlation between the two markets under study. These observations essentially match the results discussed in Section 3.3. In particular, in Figure 5 it was shown that the main principal components of the two markets are strongly correlated for τ = 0, whereas their second most relevant principal components are negatively correlated. So, both analyses point out two main sources of correlation, one positive and one negative, between the two markets. The remaining eigenvalues of c do not allow for similarly clear interpretations, and this is quite well portrayed by the almost overlapping eigenvector component distributions shown in Figure 13.
Fig. 11. Component distribution for the eigenvector V^{(1)} related to the largest eigenvalue λ_1 of the joint correlation matrix in equation (20). The distribution of components related to S&P stocks is plotted with the solid line, while the one of components related to FTSE stocks is plotted with the dashed line.
Conclusions
In very general terms, the main motivation for the study presented in this paper was to look for an empirical realization of a random asymmetric generalized correlation matrix of the type (6) and its eigenvalue density (equations (8) and (9)), attempting to perform a correlation analysis with complex eigenvalues. Financial data were chosen as a case study, but all of the analyses performed could be exactly replicated in any context where time series are involved.
As already stated, looking at eigenvalues might represent a limitation, since it forces one to work with square matrices. From the financial viewpoint, this limitation forced us to work with an equal number of stocks in the S&P and FTSE datasets. Drawing more significant conclusions on the possible correlations between the two whole indices (or markets) would require keeping the datasets at their actual dimensions, and consequently working with singular values, as in [22]. Still, whenever one is reasonably allowed to work with an approximately equal number of variables in the two sub-systems, the radial density (9) represents an effective tool to detect the presence of cross-correlations (as in Figure 4) or autocorrelations almost at first glance, or at least after a quick fitting procedure to determine the effective value of the q ratio.
As far as financial aspects are concerned, all the results we presented suggest that all macroscopically relevant correlations between the New York and London stock exchanges expire within a 24-hour time window. Switching to principal components and studying the spectral properties of the joint correlation matrix (20) allowed us both to corroborate such findings and to unravel some other non-trivial facts, such as the identification of the main sources of positive and negative correlation between the markets we considered, and the emergence of an effective system dimensionality due to autocorrelations in the principal components. Also, it would definitely be interesting to repeat all or some of the analyses detailed in this paper on high-frequency data, possibly comparing the results with those presented in the previously mentioned paper [23].
Lastly, from the viewpoint of RMT, equation (18) represents a very interesting starting point for possible future developments. More specifically: principal component analysis grants us that the variables which give rise to the k (e) (τ ) matrix are exactly uncorrelated within each sub-system. So, whenever those are reasonably well described by Gaussian statistics, we know that the average eigenvalue density of k (e) (τ ) is given by equations (8) and (9) (possibly for some effective value of q, as we discussed). Thus, equation (18) describes the transition from the eigenvalue density arising from two uncorrelated systems (encoded in k (e) (τ )) to the one of two systems having the correlation structure encoded in the W (1) and W (2) matrices (see equation (17)). This is an interesting property at least for the following reason. As far as theoretical advances in RMT are concerned, one could try to use recently developed tools about the multiplicative structure of random matrices [34] in order to derive analytical, or semi-analytical, results for the spectrum of the k(τ ) matrix seen as the outcome of the multiplicative action of two fixed known matrices (W (1) and W (2) ) on a known spectrum (the one given by k (e) (τ )). Also, generalizing the results in equations (8) and (9) to the eigenvalue spectra of asymmetric correlation matrices arising from random variables displaying both cross-correlations and autocorrelations would represent a major challenge to RMT developments. However, intuition based on similar generalizations for ordinary correlation matrices (see for example [20]) suggests that the presence of short lived, e.g. exponentially damped, autocorrelations would not modify the eigenvalue spectra in a dramatic fashion.
We thank Guido Montagna for helpful suggestions and for reading the preliminary version of our manuscript. G. L. also wishes to thank Oreste Nicrosini and Andrea Schirru for many stimulating discussions during the early stages of this work.
R^{(M)}_{it} denotes the time-t log-return of the ith stock (i = 1, . . . , N) in market M (M = 1, 2). Log-returns are the most commonly used variables in financial practice, and (at time t) they are defined as log S^{(M)}_i(t) − log S^{(M)}_i(t−1), where S^{(M)}_i(t) is the price of the stock at time t.
Fig. 1. Eigenvalues of 50 random asymmetric correlation matrices with N = 100 and T = 500.
Fig. 2. Radial density corresponding to the eigenvalues in Figure 1, fitted with the effective finite-size density of equation (9) (finding h = 27.9).
Fig. 3. Absolute value of the largest eigenvalue λ_MAX of the asymmetric correlation matrix k(τ) as a function of τ.
Fig. 4. Eigenvalue spectrum of the asymmetric correlation matrix k(τ) for stocks belonging to the S&P500 and FTSE350 indices (blue dots), with N = 190, T = 1595 and τ = 0 (left), and T = 1495 and τ = 100 (right). The spectrum statistics is enhanced by means of a bootstrap approach (see main text). The solid line represents the effective radial density predicted by RMT (equation (9)) for the same values of N and T. The h parameter was adjusted by fitting on a Monte Carlo density with large statistics, yielding h = 51.80 and h = 50.86 in the two cases.
Fig. 5. Correlation coefficients k (e) 11 (τ ) (solid line) and k (e) 22 (τ ) (dashed line).
Fig. 6. Correlation coefficients k^{(e)}_{12}(τ) and k^{(e)}_{21}(τ).
Fig. 7. Empirical eigenvalue spectra (enhanced by bootstrap) of the k^{(e)}(τ) correlation matrix (15) of principal components for τ = 0 (top-left), τ = 1 (top-right) and τ = 30 (bottom), fitted with the radial density (9) (solid line).
Fig. 8. Empirical eigenvalue density of the asymmetric correlation matrix of principal components (equation (15)) after performing one time reshuffling on all the stocks in the S&P dataset and a different one on all stocks in the FTSE dataset (see the main text for details). The left plot refers to the case τ = 0, while in the right plot τ = 30.
Fig. 9. Autocorrelation function of the first two principal components of the S&P (left plot) and FTSE (right plot) datasets. In both plots, solid lines refer to the first principal component, while dashed lines refer to the second one. The region delimited by the red horizontal lines represents the 99.7% confidence interval for the autocorrelation of a purely random process.
Fig. 12. Component distribution for the eigenvector V^{(2)} related to the second largest eigenvalue λ_2 of the joint correlation matrix in equation (20). The distribution of components related to S&P stocks is plotted with the solid line, while the one of components related to FTSE stocks is plotted with the dashed line.
Fig. 13. Component distribution of the eigenvectors V^{(i)} of the joint correlation matrix in equation (20) for i = 3, . . . , 2N. Blue dots refer to components related to S&P stocks, whereas red crosses refer to components related to FTSE stocks.
[1] E. P. Wigner, Ann. Math. 62, 548 (1955); Ann. Math. 67, 325 (1958)
[2] M. Mehta, Random Matrices (Elsevier, Amsterdam, 2004)
[3] G. W. Anderson, A. Guionnet, O. Zeitouni, An Introduction to Random Matrices (Cambridge University Press, 2009)
[4] The Oxford Handbook of Random Matrix Theory, edited by G. Akemann, J. Baik, P. Di Francesco (Oxford University Press, 2011)
[5] V. A. Marčenko, L. A. Pastur, Math. USSR-Sb 1, 457 (1967)
[6] L. Laloux, P. Cizeau, J.-P. Bouchaud, M. Potters, Phys. Rev. Lett. 83, 1467 (1999)
[7] V. Plerou, P. Gopikrishnan, B. Rosenow, L. A. Nunes Amaral, H. E. Stanley, Phys. Rev. Lett. 83, 1471 (1999)
[8] R. N. Mantegna, Eur. Phys. J. B 11, 193 (1999)
[9] G. Bonanno, R. Mantegna, N. Vandewalle, Phys. Rev. E 62, R7615 (2000)
[10] L. Laloux, P. Cizeau, J.-P. Bouchaud, M. Potters, Int. J. Theor. Appl. Finance 3, 391 (2000)
[11] V. Plerou, P. Gopikrishnan, B. Rosenow, L. A. Nunes Amaral, T. Guhr, H. E. Stanley, Phys. Rev. E 65, 066126 (2002)
[12] T. Guhr, B. Kälber, J. Phys. A 36, 3009 (2003)
[13] G. Bonanno, G. Caldarelli, F. Lillo, S. Micciché, N. Vandewalle, R. Mantegna, Eur. Phys. J. B 38, 363 (2004)
[14] Z. Burda, J. Jurkiewicz, Physica A 344, 67 (2004)
[15] G. Raffaelli, M. Marsili, J. Stat. Mech., L08001 (2006)
[16] S. Drożdż, A. Z. Gorski, J. Kwapień, Eur. Phys. J. B 58, 499 (2007)
[17] M. Marsili, G. Raffaelli, B. Ponsot, J. Econ. Dyn. Control 33, 1170 (2009)
[18] G. Akemann, J. Fischmann, P. Vivo, Physica A 389, 2566 (2010)
[19] G. Livan, S. Alfarano, E. Scalas, Phys. Rev. E 84, 016113 (2011)
[20] Z. Burda, J. Jurkiewicz, B. Wacław, Acta Phys. Pol. B 36, 2641 (2005)
[21] Z. Burda, A. Jarosz, M. A. Nowak, M. Snarska, New J. Phys. 12, 075036 (2010)
[22] J.-P. Bouchaud, L. Laloux, M. Miceli, M. Potters, Eur. Phys. J. B 55, 201 (2007)
[23] S. Drożdż, J. Kwapień, A. Z. Gorski, P. Oswiȩcimka, Acta Phys. Pol. B 37, 3039 (2006)
[24] J. C. Osborn, Phys. Rev. Lett. 93, 222001 (2004)
[25] G. Akemann, Nucl. Phys. B 730, 253 (2005)
[26] G. Akemann, M. J. Phillips, H.-J. Sommers, J. Phys. A 42, 012001 (2009)
[27] E. Kanzieper, N. Singh, J. Math. Phys. 51, 103510 (2010)
[28] Z. Burda, A. Jarosz, G. Livan, M. A. Nowak, A. Swiȩch, Phys. Rev. E 82, 061114 (2010)
[29] Z. Burda, A. Jarosz, G. Livan, M. A. Nowak, A. Swiȩch, Acta Phys. Pol. B 42, 939 (2011)
[30] G. Akemann, E. Kanzieper, J. Stat. Phys. 129, 1159 (2007)
[31] P. J. Forrester, G. Honner, J. Phys. A 32, 2961 (1999)
[32] J. Y. Campbell, A. W. Lo, A. C. MacKinlay, The Econometrics of Financial Markets (Princeton University Press, 1997)
[33] C. S. Bretherton, M. Widmann, V. P. Dymnikov, J. M. Wallace, I. Blade, J. Climate 12, 1990 (1998)
[34] Z. Burda, R. A. Janik, M. A. Nowak, Phys. Rev. E 84, 061125 (2011)
| []
|
[
"BASIS FOR INTENTIONS: EFFICIENT INVERSE REINFORCEMENT LEARNING USING PAST EXPERI- ENCE",
"BASIS FOR INTENTIONS: EFFICIENT INVERSE REINFORCEMENT LEARNING USING PAST EXPERI- ENCE"
]
| [
"Marwa Abdulhai [email protected] \nDepartment of Computer Science\nUniversity of California\nBerkeley\n",
"Natasha Jaques [email protected] \nGoogle Research\nBrain Team\n",
"Sergey Levine [email protected] \nDepartment of Computer Science\nUniversity of California\nBerkeley\n"
]
| [
"Department of Computer Science\nUniversity of California\nBerkeley",
"Google Research\nBrain Team",
"Department of Computer Science\nUniversity of California\nBerkeley"
]
| []
| This paper addresses the problem of inverse reinforcement learning (IRL) -inferring the reward function of an agent from observing its behavior. IRL can provide a generalizable and compact representation for apprenticeship learning, and enable accurately inferring the preferences of a human in order to assist them. However, effective IRL is challenging, because many reward functions can be compatible with an observed behavior. We focus on how prior reinforcement learning (RL) experience can be leveraged to make learning these preferences faster and more efficient. We propose the IRL algorithm BASIS (Behavior Acquisition through Successor-feature Intention inference from Samples), which leverages multi-task RL pre-training and successor features to allow an agent to build a strong basis for intentions that spans the space of possible goals in a given domain. When exposed to just a few expert demonstrations optimizing a novel goal, the agent uses its basis to quickly and effectively infer the reward function. Our experiments reveal that our method is highly effective at inferring and optimizing demonstrated reward functions, accurately inferring reward functions from less than 100 trajectories. | 10.48550/arxiv.2208.04919 | [
"https://export.arxiv.org/pdf/2208.04919v1.pdf"
]
| 251,442,616 | 2208.04919 | bb97c5d837f92c819680927e705fe8cfdedb2837 |
BASIS FOR INTENTIONS: EFFICIENT INVERSE REINFORCEMENT LEARNING USING PAST EXPERIENCE
Marwa Abdulhai [email protected]
Department of Computer Science
University of California
Berkeley
Natasha Jaques [email protected]
Google Research
Brain Team
Sergey Levine [email protected]
Department of Computer Science
University of California
Berkeley
BASIS FOR INTENTIONS: EFFICIENT INVERSE REINFORCEMENT LEARNING USING PAST EXPERIENCE
This paper addresses the problem of inverse reinforcement learning (IRL) -inferring the reward function of an agent from observing its behavior. IRL can provide a generalizable and compact representation for apprenticeship learning, and enable accurately inferring the preferences of a human in order to assist them. However, effective IRL is challenging, because many reward functions can be compatible with an observed behavior. We focus on how prior reinforcement learning (RL) experience can be leveraged to make learning these preferences faster and more efficient. We propose the IRL algorithm BASIS (Behavior Acquisition through Successor-feature Intention inference from Samples), which leverages multi-task RL pre-training and successor features to allow an agent to build a strong basis for intentions that spans the space of possible goals in a given domain. When exposed to just a few expert demonstrations optimizing a novel goal, the agent uses its basis to quickly and effectively infer the reward function. Our experiments reveal that our method is highly effective at inferring and optimizing demonstrated reward functions, accurately inferring reward functions from less than 100 trajectories.
INTRODUCTION
Inverse reinforcement learning (IRL) seeks to identify a reward function under which observed behavior of an expert is optimal. Once an agent has effectively inferred the reward function, it can then use standard (forward) RL to optimize it, and thus acquire not only useful skills by observing demonstrations, but also a reward function as an explanation for the demonstrator's behavior. By inferring the underlying goal being pursued by the demonstrator, the agent is more likely to be able to generalize to a new scenario in which it must optimize that goal, versus an agent which merely imitates the demonstrated actions. IRL has already proven useful in applications including autonomous driving, where learned models capture the behavior of nearby drivers and pedestrians (Huang et al., 2021;Kim & Pineau, 2016), and is a key component in enabling assistive technologies where a helper agent must infer the goals of the human it is assisting (Hadfield-Menell et al., 2016).
However, IRL becomes difficult when the model does not know which aspects of the environment are potentially relevant for obtaining reward and which are distractions from achieving its intended goal. Hence, effective IRL often depends heavily on good features (Abbeel & Ng, 2004;Ziebart et al., 2008). Inferring relevant features from raw, high-dimensional observations is extremely challenging, because there are many possible reward functions consistent with a set of demonstrations. For this reason, previous work has often focused on IRL with hand-crafted features that manually build in prior knowledge (Ziebart et al., 2008;Abbeel & Ng, 2004;Ratliff et al., 2006). When learning rewards from scratch, modern deep IRL algorithms often require a large number of demonstrations and trials (Garg et al., 2021).
In contrast, humans quickly and easily infer the intentions of other people. As shown by Qian et al. (2021), humans can infer rewards more effectively than our best IRL algorithms, as they bring to bear strong prior beliefs as to what might constitute a reasonable goal, e.g., that a person moving towards a wall is more likely to have the intention of turning off the light as opposed to moving to a random point. This skill comes from humans having access to a lot of previous experience successfully accomplishing prior goals or watching others pursue their preferences (Baker et al., 2007). We hypothesize that prior knowledge of the space of probable goals is important to effectively and efficiently infer intentions with IRL. As shown by Ng & Russell (2000); Abbeel & Ng (2004); Ratliff et al. (2006), IRL methods that utilize user-supplied features concisely capture the structure of the reward function. We hypothesize that the path towards building scalable IRL methods entails being able to instead learn those features from past experience. Thus, we propose an IRL algorithm, BASIS (Behavior Acquisition through Successor-feature Intention inference from Samples), that leverages multi-task RL pre-training and successor features to enable the agent to first learn a basis for intentions that spans the space of potential tasks in a domain. Using this basis, the agent can then perform more efficient IRL or inference of goals. Figure 1 shows an overview of our approach.
Figure 1: BASIS uses multi-task RL pre-training to learn a "basis for intentions". It encodes information about both the environment dynamics and, through modeling the rewards for multiple pre-training tasks, the space of possible goals that can be pursued in the environment. It captures this information in cumulants φ, successor representation ψ, and preference vectors w_{1:K}. The agent then leverages knowledge from these parameters to rapidly infer the demonstrator's goal shown through demonstrations (s_t, a_t, s_{t+1}), updating the parameters as needed.
We use successor features to enable learning a basis for intentions because they allow learning a representation that naturally decouples the dynamics of the environment from the rewards of the agent, which are represented with a low-dimensional preference vector (Dayan, 1993; Filos et al., 2021). Via multi-task pre-training, the agent learns a representation in which the same successor features are shared across multiple tasks, as in prior work. When the agent is tasked with inferring the rewards of a novel demonstrator via IRL, it initializes its model of the other agent with the learned successor features and a randomly initialized preference vector. Thus, the agent starts with a strong prior over the environment dynamics and the space of reasonable policies. It can then quickly infer the low-dimensional preference vector, while updating the successor features, in order to accurately describe the demonstrated behaviour.
We evaluate BASIS in three multi-task environment domains: a gridworld environment, a highway driving scenario, and a roundabout driving scenario. On these tasks, our approach is up to 10x more accurate at recovering the demonstrator's reward than state-of-the-art IRL methods involving pre-training with IRL, and achieves up to 15x more ground-truth reward than state-of-the-art imitation learning methods. In summary, the contributions of this paper are to show the effectiveness of multi-task RL pre-training for IRL, to propose a new technique for using successor features to learn a basis for behaviour that can be used to infer rewards, and empirical results demonstrating the effectiveness of our method over prior work.
RELATED WORK
Inverse RL: IRL methods learn the reward function of an agent through observing expert demonstrations. Depending on whether the goal is imitation, explanation, or transfer, downstream applications might use the recovered policy, reward function, or both. In environments with high-dimensional state spaces, there are many possible reward functions that are consistent with a set of demonstrations. Thus, early work on IRL relied on hand-engineered features that were known to be relevant to the reward function (Ng & Russell, 2000;Abbeel & Ng, 2004;Ratliff et al., 2006;Syed et al., 2008;Levine et al., 2010). Ziebart et al. (2008) propose maximum entropy IRL, which assumes that the probability of an action being seen in the demonstrations increases exponentially with its reward, an approach we also follow in this paper. A series of deep IRL methods have emerged that learn reward structures (Jin et al., 2017;Burchfiel et al., 2016;Shiarlis et al., 2016) or features (Wulfmeier et al., 2016;Fahad et al., 2018;Jara-Ettinger, 2019;Fu et al., 2018) from input. IQL (Kalweit et al., 2020) is a recent IRL method which uses inverse-action value iteration to recover the underlying reward. It was benchmarked against (Wulfmeier et al., 2016) and found to provide superior performance, so we compare to IQL in this paper. However, these prior methods do not leverage past multi-task RL experience as a way to overcome the underspecification issue to improve IRL.
Meta-inverse reinforcement learning methods, including those proposed by Yu et al. (2019); Xu et al. (2019); Gleave & Habryka (2018), have applied meta-learning techniques to IRL, by pre-training on past IRL problems, then using meta-learning to adapt to a new IRL problem at test time while leveraging this past experience. In contrast with meta-IRL methods, we pre-train using RL, in which the agent has a chance to explore the environment and learn to obtain high rewards on multiple tasks. Relying on RL rather than IRL pre-training provides an advantage, since collecting the demonstrations required for IRL can be expensive, but RL only requires access to a simulator. Further, the basis learned through RL enables our agent to rapidly infer rewards in demonstrated trajectories, even in complex and high-dimensional environments.
Imitation learning: IL methods attempt to replicate the policy that produced a set of demonstrations. The most straightforward IL method is behavioural cloning (Bain & Sammut, 1996), where the agent receives training data of encountered states and the resulting actions of the expert demonstrator, and uses supervised learning to imitate this data. This allows the agent observing the expert data to learn new behaviors without having to interact with the environment. Like these methods, our approach leverages learning from demonstrations where actions are available but rewards and task annotations are unknown. One among these methods is the non-adversarial method IQ-Learn (Garg et al., 2021), which we compare with our method. However, current IL methods that focus on skill transfer have the different goal of minimizing the supervised learning loss, and do not recover the reward function, which BASIS does.
Successor features: Generalized successor features, derived from successor representations (Dayan, 1993), make it possible to leverage reward-free demonstrations for learning cumulants, successor features, and corresponding preferences. These have been used for applications including planning (Zhu et al., 2017), zero-shot transfer (Lehnert et al., 2017; Borsa et al., 2018; Filos et al., 2021), exploration (Janz et al., 2019; Machado et al., 2020), skill discovery (Machado et al., 2018; Hansen et al., 2020), apprenticeship learning (Lee et al., 2019), and theory of mind (Rabinowitz et al., 2018). PsiPhi-Learning (Filos et al., 2021) illustrates that generalized value functions, such as successor features, are a very effective way to transfer knowledge about agency in multi-agent settings, and includes an experiment which uses successor features for IRL. Unlike PsiPhi, which learns a new set of successor features for each agent it models, we learn a shared set of successor features spanning all tasks seen in the multi-task learning phase, similar to prior work; this enables more effective transfer to new tasks that have not been encountered during training. However, that prior work does not address IRL or learning from demonstrations. Through this protocol, we demonstrate a notion of a basis for intentions learned from past experience that can be leveraged to quickly infer intentions.
BACKGROUND AND PROBLEM SETTING
Markov decision processes: An agent's interaction with the environment can be represented by a Markov decision process (MDP) (Puterman, 1994). Specifically, an MDP is defined as a tuple M = ⟨S, A, P, R, γ⟩; S is the state space, A is the action space, P : S × A → S is the state transition probability function, R is the reward function, and γ ∈ [0, 1) is the discount factor. An agent executes an action at each timestep t according to its stochastic policy a_t ∼ π(a_t|s_t), where s_t ∈ S. An action a_t yields a transition from the current state s_t to the next state s_{t+1} ∈ S with probability P(s_{t+1}|s_t, a_t). An agent then obtains a reward according to the reward function r_t ∼ R(s_t, a_t). The agent's goal is to maximize its expected discounted return \mathbb{E}\left[\sum_{t} \gamma^{t} r_t \mid s_0, a_0\right] = Q(s_0, a_0).
Successor features (SFs) decouple modeling the dynamics of the environment from modeling the rewards of the agent (Dayan, 1993). The value function is decomposed into features that represent the environment transition dynamics, and preference vectors (which are specific to a particular reward function). Thus, SFs enable quick adaptation to optimizing a new reward function in the same environment (with the same transition dynamics). Specifically, we can represent the one-step expected reward as:
r(s_t, a_t, s_{t+1}) = \phi(s_t, a_t, s_{t+1})^{\top} w, \qquad (1)
where φ(s t , a t , s t+1 ) ∈ R d are features (called cumulants) of (s t , a t , s t+1 ) and w ∈ R d are weights or preferences. The preferences w are a representation of a possible goal with a particular reward function, with each w giving rise to a separate task. The terms 'task', 'goal', and 'preferences' are used interchangeably when context makes it clear whether we are referring to w itself. The state-action value function for a particular policy π can now be decomposed with the following linear form :
Q^{\pi}(s, a) = \psi^{\pi}(s, a)^{\top} w, \qquad
\psi^{\pi}(s, a) = \mathbb{E}_{[s_t = s,\, a_t = a]} \left[ \sum_{i=t}^{\infty} \gamma^{\, i-t}\, \phi(s_i, a_i, s_{i+1}) \right], \qquad (2)
where γ ∈ [0, 1), and ψ π (s, a) are the successor features of π.
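The decomposition in Eqs. (1)-(2) can be illustrated with a toy computation: given cumulants along a rollout and a preference vector, the successor features are a discounted sum of the cumulants and the return is their dot product with w. The numpy sketch below is purely illustrative (random vectors stand in for a learned φ and w) and simply checks that the two ways of computing the discounted return agree.

```python
# Toy illustration of eq. (1)-(2): rewards and values as dot products with w.
import numpy as np

rng = np.random.default_rng(0)
d, horizon, gamma = 4, 50, 0.95

phi = rng.standard_normal((horizon, d))       # cumulants phi(s_i, a_i, s_{i+1}) along one rollout
w = rng.standard_normal(d)                    # task-specific preference vector

rewards = phi @ w                             # eq. (1): r_t = phi_t . w
psi = sum(gamma ** i * phi[i] for i in range(horizon))   # eq. (2): discounted sum of cumulants
q_from_sf = psi @ w                                        # Q = psi . w
q_monte_carlo = sum(gamma ** i * rewards[i] for i in range(horizon))
print(np.isclose(q_from_sf, q_monte_carlo))   # True: the two estimates coincide
```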
Maximum-entropy IRL: Given a set of demonstrations D = {(s_0, a_0), (s_1, a_1), . . .} provided by an expert, the IRL problem (Ng & Russell, 2000) is to uncover the expert's unknown reward function R that resulted in the expert's policy, which in turn led to the provided demonstrations. Ziebart et al. (2008) propose the maximum entropy IRL framework, under which highly rewarding actions are considered exponentially more probable in the demonstrations, an assumption which we follow in this work. MaxEnt IRL states that the expert's preference for any given trajectory between specified start and goal states is proportional to the exponential of the reward along the path: P(\tau|r) \propto \exp\{\sum_{(s,a)\in\tau} R(s, a)\}. Thus, the maximum entropy model proposed by Ziebart et al. (2008) is:
P(a|s_i) \propto \exp(Q(s_i, a)) \qquad (3)
The optimal parameters can be found by maximizing the log likelihood with respect to the parameters of the reward, often utilizing either a tabular method similar to value iteration, or approximate gradient estimators based on adversarially trained discriminators.
BASIS: BEHAVIOR ACQUISITION THROUGH SUCCESSOR-FEATURE INTENTION INFERENCE FROM SAMPLES
We now present our IRL algorithm, BASIS, which is illustrated in Figure 1. First, the agent learns a basis for intentions using successor features and multi-task RL pre-training. Then, the agent uses the successor representation learned from pre-training as an initialization for inferring the reward function of an expert from demonstrations with IRL. The successor features learned via pre-training act like a prior over intentions, enabling the agent to learn the features for linear MaxEnt IRL so that we get the simplicity and efficiency benefits of good features, without having to specify those features manually. These successor features can then be refined to infer a likely intention using IRL based on the provided demonstrations and additional online experience.
RL PRE-TRAINING: LEARNING A BASIS FOR INTENTIONS
In order to form a good basis for intentions, we use multi-task RL pre-training and successor features to learn a representation that enables the agent to solve a large variety of tasks. We assume that the tasks share the same state space and transition dynamics, but differ in their reward functions. These assumptions are relevant to the setting in which a human demonstrator may have one of several possible goals in the same environment.
Figure 2: Architecture diagram for learning global cumulants φ and successor features ψ, both of which we represent as function approximators, and task-specific preference vectors w. The input s_t is passed through shared convolution layers highlighted in blue, which extract information about the state from pixels. This intermediate representation is passed into networks for ψ and φ, respectively. Taking the dot product of the output of ψ with w, we obtain the action-value function Q. The dot product of φ and w gives us the predicted reward at s_t. Networks that are updated in both RL pre-training and during IRL are highlighted in purple (ψ, w), and those learned solely with RL are highlighted in red. Similar color conventions are used to represent the loss functions used for each network parameter, with losses only computed during IRL highlighted in green.
To learn a representation of the tasks, we learn a global cumulant function φ, successor features ψ, and per-task preference vectors w_{1:K}. As the agent operates under the same state space and transition dynamics across the different tasks, we can share φ across tasks, which enables learning a common basis for reward functions. We can think of φ as a task-agnostic set of state features that are relevant to predicting rewards across any training task. The agent's policy is captured by ψ, which estimates the future accumulation of these state features according to Eq. (2). We learn a separate preference vector for each of the K tasks, which enables representing the different reward functions of the tasks. For simplicity, we refer to the preference vector for a specific task as w.
We use a neural network to learn both ψ and φ, as shown in Figure 2. Initial features are extracted from raw, high-dimensional observations s_t via a shared trunk of convolution layers. We then use separate heads to output φ and ψ, which are parameterized by θ_φ and θ_ψ, respectively. The preference vectors w_{1:K} do not depend on the state, and are learned separately. As shown in Eq. (2), combining ψ with a particular preference vector w produces Q^{π,w} = ψ^{π}(s, a)^{\top} w, which can be used as a policy π. In order to ensure our representation is suitable for later IRL training, we fit the Q function using a modified version of the Bellman error with a softmax function, following the formulation of MaxEnt IRL given in Ziebart et al. (2008):
L_Q(\theta_\psi) = \mathbb{E}_{(s_t, a_t, s_{t+1}, r_t) \sim \mathcal{B}} \left[ \left\| Q^{\pi,w}(s_t, a_t; \theta_\psi) - r_t - \gamma\, \mathrm{softmax}_{a_{t+1}} Q^{\pi,w}(s_{t+1}, a_{t+1}; \bar{\theta}_\psi) \right\| \right], \qquad (4)
where B represents the replay buffer. Note that this formulation of Q-learning is equivalent to Soft Q-Learning (Haarnoja et al., 2017), which is a maximum entropy RL method that can improve exploration, and is thus a reasonable choice for a forward RL objective.
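To make the objective in Eq. (4) concrete, the sketch below computes the soft Bellman residual for a batch, reading the softmax over a_{t+1} as the soft maximum log Σ_a exp Q(s_{t+1}, a), as in soft Q-learning. This is an illustrative numpy computation with random tensors in place of network outputs, not the paper's implementation.

```python
# Illustrative computation of the soft Bellman residual used in Eq. (4),
# reading softmax_a Q(s', a) as the soft maximum log sum_a exp Q(s', a).
import numpy as np

rng = np.random.default_rng(1)
batch, n_actions, d, gamma = 32, 5, 16, 0.99

psi = rng.standard_normal((batch, n_actions, d))        # psi(s_t, a) for each action
psi_next = rng.standard_normal((batch, n_actions, d))   # psi(s_{t+1}, a) from target net
w = rng.standard_normal(d)
actions = rng.integers(n_actions, size=batch)
rewards = rng.standard_normal(batch)

q = psi @ w                                              # (batch, n_actions)
q_next = psi_next @ w
soft_v_next = np.log(np.exp(q_next).sum(axis=1))         # soft maximum over a_{t+1}
q_taken = q[np.arange(batch), actions]
td_error = q_taken - rewards - gamma * soft_v_next
loss_q = np.abs(td_error).mean()                         # L_Q, cf. Eq. (4)
print(loss_q)
```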
To ensure that environment features extracted by φ are sufficient to represent the space of possible reward functions, and that each w accurately represents its task-specific reward function, we train both using the following reward loss:
L_R(\theta_\phi, w) = \mathbb{E}_{(s_t, a_t, r_t) \sim \mathcal{D}} \left[ \left\| \phi(s_t, a_t; \theta_\phi)^{\top} w - r_t \right\| \right]. \qquad (5)
As shown in Eq. (2), the successor features ψ should represent the accumulation of the cumulants φ over time. To enforce this consistency, we train θ_ψ with the following inverse temporal difference (ITD) loss, which is similar to a Bellman consistency loss:
L_{TD}(\theta_\psi) = \mathbb{E}_{(s_t, a_t, s_{t+1}, a_{t+1}) \sim \mathcal{B}} \left[ \left\| \psi(s_t, a_t; \theta_\psi) - \phi(s_t, a_t; \theta_\phi) - \gamma\, \psi(s_{t+1}, a_{t+1}; \bar{\theta}_\psi) \right\| \right]. \qquad (6)
We do not train θ φ with this loss (the gradient is not passed through φ(s t , a t ; θ φ )). This is because we first force φ to represent the rewards through Eq. 5, then construct ψ out of the fixed φ, which leads to more stable training. Through this process of successor feature learning, our method learns a "basis for intentions" that can be used as an effective prior for IRL in the next phase.
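Given φ, ψ and w for a batch, the pre-training losses in Eqs. (5)-(6) reduce to simple vector operations. The following numpy sketch is illustrative only (random tensors replace the network outputs and the replay buffer).

```python
# Illustrative computation of the reward loss L_R (Eq. 5) and the ITD loss L_TD (Eq. 6).
import numpy as np

rng = np.random.default_rng(2)
batch, d, gamma = 32, 16, 0.99

phi = rng.standard_normal((batch, d))          # phi(s_t, a_t)
psi = rng.standard_normal((batch, d))          # psi(s_t, a_t)
psi_next = rng.standard_normal((batch, d))     # psi(s_{t+1}, a_{t+1}) from target net
w = rng.standard_normal(d)
rewards = rng.standard_normal(batch)

loss_reward = np.abs(phi @ w - rewards).mean()                      # L_R: fit phi^T w to r_t
itd_residual = psi - phi - gamma * psi_next                          # psi should accumulate phi
loss_itd = np.linalg.norm(itd_residual, axis=1).mean()               # L_TD
print(loss_reward, loss_itd)
# In training, L_R updates phi and w, while L_TD updates only psi (phi is held fixed).
```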
INFERRING INTENTIONS WITH IRL
Given an expert demonstrating its preferences, the goal of IRL is to determine the intentions of this demonstrator by not only recovering its policy π_e(a|s) but also accurately inferring its reward function. Our agent is given access to these demonstrations without rewards, denoted D = {τ_1, τ_2, . . . , τ_N}, where each trajectory τ = (s_0, a_0, s_1, . . . , s_T, a_T, s_{T+1}) is generated by the demonstrator. The demonstrator is optimizing an unknown, ground-truth reward function r_e(s_t, a_t) that was not part of the pre-training tasks.
We will now clarify how successor features can be related to the demonstrator's policy and reward function by drawing a parallel between successor features and MaxEnt IRL (Ziebart et al., 2008). This motivates our formulation of IRL with successor features.
MaxEnt IRL assumes that the demonstrator's actions are distributed in proportion to exponentiated Q-values, i.e., π(a|s) ∝ exp(Q(s, a)) (Eq. 3). We then learn the parameters of the expert's Q-function, θ_e, by solving the following optimization problem: θ*_e = arg max_{θ_e} Σ_{t=1}^{T} log P(a_t | π_e). Since we can represent the expert's Q-value with successor features ψ_e and preference vector w_e (Eq. 2), we can express the expert's policy as π_e(a|s) ∝ exp(ψ_e(s, a)^⊤ w_e). This leads to the following behavioral cloning (BC) loss to fit our task-specific preferences w_e and successor features ψ_e:
\mathcal{L}_{BC}(\theta_{\psi_e}, w_e) = -\,\mathbb{E}_{\tau\sim\mathcal{D}}\left[\log \frac{\exp\big(\psi_e(s, a)^\top w_e\big)}{\sum_{a'} \exp\big(\psi_e(s, a')^\top w_e\big)}\right]. \qquad (7)
Note however that this BC loss alone is insufficient to produce an effective IRL method, since we have no way to infer the reward function of the expert. To predict the expert's rewards, we need to make use of φ; i.e.:
\phi_e(s, a)^\top w_e = r_e(s, a). \qquad (8)
To ensure that ψ e and φ e are consistent, we also require an ITD loss:
\mathcal{L}_{TD}(\theta_{\psi_e}) = \mathbb{E}_{\mathcal{D}}\Big[\big\|\, \psi_e(s_t, a_t; \theta_{\psi_e}) - \phi_e(s_t, a_t; \theta_{\phi_e}) - \gamma\, \psi_e(s_{t+1}, a_{t+1}; \theta_{\psi_e}) \,\big\|\Big]. \qquad (9)
Now we can draw a direct connection to MaxEnt IRL (Ziebart et al., 2008), which proposes inferring the demonstrator's reward function using a linear transformation applied to a set of state features: R(f(s); θ) = θ^⊤ f(s), where f : S → [0, 1]^k maps states to feature vectors, θ are the model parameters, and R(f(s); θ) is the reward function. Our approach instead uses successor features to learn a set of continuous state features φ_e to replace f, and learning w_e is analogous to learning θ. Thus, according to the MaxEnt IRL model, if ψ_e(s, a) remains Bellman consistent with φ_e (by minimizing the ITD loss) and w_e and ψ_e(s, a) are optimized so as to maximize the probability of the observed demonstration actions (as in Eq. 7), we will have recovered the demonstrator's Q-function as ψ_e(s, a)^⊤ w_e, and the demonstrator's reward function as φ_e(s, a)^⊤ w_e.
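A minimal sketch of the IRL-phase quantities discussed here follows: the BC loss of Eq. (7) on demonstration batches and the recovered reward φ_e(s, a)^⊤ w_e. Shapes and names follow the earlier sketches and are assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def bc_loss_and_reward(psi_e, phi_e, w_e, states, actions):
    """BC loss of Eq. (7) under pi_e(a|s) proportional to exp(psi_e(s,a)^T w_e),
    and the inferred reward r_hat = phi_e(s,a)^T w_e for every action."""
    q = torch.einsum("bad,d->ba", psi_e(states), w_e)       # expert Q(s, .)
    log_pi = F.log_softmax(q, dim=1)                        # log pi_e(.|s)
    bc = F.nll_loss(log_pi, actions)                        # mean of -log pi_e(a_t|s_t)
    r_hat = torch.einsum("bad,d->ba", phi_e(states), w_e)   # recovered reward
    return bc, r_hat
```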
Benefitting from RL pre-training: To initialize ψ_e and φ_e, the agent uses the parameters θ_φ and θ_ψ that it learned during RL pre-training, i.e., θ_{ψ_e} ← θ_ψ and θ_{φ_e} ← θ_φ. It also initializes a new preference vector w_e as the average of all w vectors across the tasks learned during RL pre-training, in order to begin with an agnostic representation of the demonstrator's goals. To give some intuition for the method, the φ learned during RL training represents a shared feature space that can be used to explain all the pre-training tasks. When the agent initializes φ_e with φ, it is given a strong prior that can help explain the expert's behavior. Hence, even though at the beginning of the IRL stage the agent has never been directly trained on behavior for the new task, it can potentially extrapolate from the learned basis in order to quickly ascertain the demonstrator's preferences. To the extent that the pre-trained successor features ψ provide a good basis for the expert's policy, this learning process might primarily modify w_e, and only make minor changes to ψ_e to make it consistent with the new policy. In this case, the method can recover the reward and policy for the new task very quickly. However, the astute reader will notice that the basis for intentions learned during RL pre-training and encoded in θ_φ and θ_ψ does not necessarily constitute a policy that is optimal for the new task that is being demonstrated by the expert. Hence, the ITD loss (Eq. 6) is necessary to learn the correct ψ_e.
Algorithm 1 RL pre-training: learning a basis
1: Initialize θ_φ for cumulants, θ_ψ for successor features, and preference vectors w_{1:K}
   ...
11: Update θ_ψ, w with Bellman error in eq. (4)
12: Update θ_φ, w with reward loss in eq. (5)
13: Update θ_ψ, w with ITD loss in eq. (6)

Algorithm 2 IRL: Inferring Intentions
1: Input: expert demonstrations D
2: Initialize θ_{φ_e} ← θ_φ, θ_{ψ_e} ← θ_ψ, w_e ← (1/K) Σ_k w_k from multi-task RL pre-training
3: for each demonstration (s_t, a_t, s_{t+1}, a_{t+1}) do:
4:    Update ψ_e, w_e with BC loss in eq. (7)
5:    Update ψ_e, w_e with ITD loss in eq. (9)

Even if the demonstrator policy differs from the ψ learned during RL pre-training, ITD allows us to accurately infer the demonstrated policy.
Algorithm summary: We now summarize the procedure for both RL pre-training (learning a basis for intentions), shown in Algorithm 1, and IRL (inferring intentions), shown in Algorithm 2. Algorithm 1 uses multi-task RL training and computes the Bellman error, reward loss, and ITD loss at every gradient step to discover a good basis for intentions. In Algorithm 2 (IRL), we begin the learning process with parameters initialized from the RL pre-training. We iterate through batches of a fixed set of demonstrations from D, computing the BC loss and ITD loss to maintain consistency between ψ and φ, as described above. At test time, we use the inferred cumulants φ_e, preference vector w_e, and successor representation ψ_e to produce a policy that can be executed in the test environment, and measure how well it matches the demonstrator's reward (as in Figure 1).
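The initialization step of Algorithm 2 can be sketched as follows; the helper name, the deep copies, and the detach/clone handling of the per-task preference vectors are our own choices.

```python
import copy
import torch

def init_irl_from_pretraining(phi, psi, w_list):
    """Start IRL from the pre-trained basis: copy the cumulant and
    successor-feature networks and initialize w_e as the mean of the
    per-task preference vectors."""
    phi_e = copy.deepcopy(phi)                      # theta_{phi_e} <- theta_phi
    psi_e = copy.deepcopy(psi)                      # theta_{psi_e} <- theta_psi
    w_e = torch.stack([w.detach() for w in w_list]).mean(dim=0).clone().requires_grad_(True)
    return phi_e, psi_e, w_e
```

After this initialization, each batch of demonstrations is used to apply the BC loss (Eq. 7) and the ITD loss (Eq. 9) to ψ_e and w_e, as in the loop of Algorithm 2.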
EXPERIMENTS
Below we describe the research questions that our empirical experiments are designed to address.
Question 1: Will BASIS acquire the behaviors of an expert demonstrator more quickly and effectively than conventional IRL and imitation learning methods? A central goal of IRL is to be able to reproduce the demonstrated behavior of the expert in a generalizable way. We hypothesize that the RL pre-training and successor representation of BASIS will allow it to do this more accurately and with fewer demonstrations than existing techniques. To measure how well a method can acquire demonstrated behaviors, we evaluate performance using the expected value difference. This metric evaluates the sub-optimality of a policy trained to optimize the reward function inferred with IRL. It is computed as the difference between the return achieved by the expert policy and the policy inferred with IRL, both measured under the ground truth reward function (thus, a lower value difference is better). Intuitively, this metric captures how much worse the policy that IRL recovers is than the expert demonstrator's policy, and is the metric of choice for evaluating IRL methods in prior work (Levine et al., 2011; Wulfmeier et al., 2016; Xu et al., 2019). For each algorithm, we evaluate the performance with different numbers of demonstrations to study whether the use of prior knowledge from other tasks in BASIS allows it to perform IRL more efficiently (i.e., with fewer demonstrations).
Question 2: Can BASIS more accurately predict the true reward with fewer demonstrations? It is often easier to optimize for the correct behavior of a demonstrator agent than to accurately predict its reward function. Even with inaccurate reward values, the agent could still demonstrate the correct behavior as long as it estimated the relative value of different actions correctly. Hence, we perform further analysis to observe how well the agent is able to predict the expert's true reward function. We compute the mean squared error between the agent's prediction of the reward and the true environment reward that is observed: MSE = (φ_e(s_t, a_t)^⊤ w_e − r_t)^2. This allows us to understand how accurately the agent is able to infer intentions by leveraging its basis for intentions.
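Both evaluation metrics are simple to compute; the NumPy sketch below is illustrative (function names and the averaging over evaluation episodes are our own choices).

```python
import numpy as np

def value_difference(expert_returns, learner_returns):
    """Expected value difference: expert return minus the return of the policy
    recovered by IRL, both under the ground-truth reward (lower is better)."""
    return float(np.mean(expert_returns) - np.mean(learner_returns))

def reward_mse(pred_rewards, true_rewards):
    """Reward loss for Question 2: mean squared error between the inferred
    reward phi_e(s,a)^T w_e and the observed ground-truth reward."""
    pred, true = np.asarray(pred_rewards), np.asarray(true_rewards)
    return float(np.mean((pred - true) ** 2))
```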
Question 3: How closely does BASIS match the demonstrated policy and adhere to the demonstrator's preferences? In order to understand how well the agents are able to adhere to the demonstrator's preferences, we visualize the behaviors of the agents. For example, we measure the distribution of behaviors for each method vs. the distribution of the expert's policy.
Question 4: How does multi-task pre-training & successor features benefit IRL? We address this question through ablations that show how much multi-task RL pre-training vs. successor features contribute to learning an IRL task. We compare BASIS to two ablations. The first uses no multi-task pre-training, but does perform IRL via BC and successor features. Denoted as "No pre-training (BC + successor features)", it is used to assess the importance of RL pre-training. The second, "No successor features (pre-train with DQN)", is an algorithm which performs multi-task pre-training via DQN and IRL via BC, and assesses the importance of successor features.
DOMAINS
We evaluate BASIS on two domains: 1) a gridworld environment Fruit-Picking, which allows us to carefully analyze and visualize the performance of the method, and 2) high-dimensional autonomous driving environments Highway and Roundabout, which necessitate the use of deep IRL. We modified the domains to be able to create multiple tasks with differing reward functions, enabling us to test generalization to novel demonstrator tasks outside of the set of pre-training tasks. Further details are available in Appendix 8.1.
Fruit-Picking is a custom environment based on Chevalier-Boisvert et al. (2018), with different colored fruits the agent must gather. In each task, the number and type of fruits varies, along with the reward received for gathering a specific type of fruit. During RL pre-training, the agent learns to pick one type of fruit per task. For the IRL phase, the demonstrated task is different to the training tasks; namely, the agent shows a varying degree of preference to each of the fruits in the environment i.e. 80% preference for red fruits, 20% preference for orange fruits, and 0% preference for green fruits. This behavior was not seen during pre-training. The final reward of the expert in this task is 40.
Highway-Env & Roundabout-Env Leurent (2018) features a collection of autonomous driving and tactical decision-making environments. We chose to model driving behavior as it allows us to determine the ability of IRL algorithms to learn the hidden intentions of a driver. The ego-vehicle is driving on a road populated with other vehicles positioned at random. All vehicles can switch lanes and change their speed. We have modified the agent's reward objective to maintain a target speed, target distance from the front vehicle, and target lane, while avoiding collisions with neighbouring vehicles. As there are many continuous parameters that determine the reward function for the agent, it is not possible to sample all combinations of behaviors within the training tasks. Thus, it is straightforward to create a novel test task for the demonstrator by choosing a different combination of these behaviors.
BASELINES AND COMPARISONS
We compare BASIS to three baselines. For all baselines, we use the same hyperparameters as BASIS when applicable, and maintain default values from source code otherwise. IQ-learn (Garg et al., 2021) is a state-of-the-art, dynamics-aware, imitation learning (IL) method that is able to perform with very sparse training data, scaling to complex image-based environments. As it is able to implicitly learn rewards, this method can also be used in IRL. We use the authors' original implementation from: https://github.com/Div99/IQ-Learn. IQL (Inverse Q-learning) (Kalweit et al., 2020) is a state-of-the-art inverse RL method, which uses inverse action-value iteration to recover the underlying reward of an external agent, providing a strong comparison representative of recent inverse RL methods. Multi-task IRL pre-training is included to enable a fair comparison to methods which leverage multi-task pre-training. To our knowledge there is no prior method that uses RL pre-training to acquire a starting point for IRL. Closest in spirit is prior work on meta-IRL (Yu et al., 2019; Xu et al., 2019; Gleave & Habryka, 2018), which pre-trains on other IRL tasks, rather than pre-training on a set of standard RL problems as in our method. Since these works generally assume a tabular or small discrete-state MDP, they are not directly applicable to our setting. We thus create a multi-task IRL pre-training baseline by applying the same network architecture and BC and ITD losses as our own method (essentially performing our IRL phase (Algorithm 2) during pre-training as well). We note that in general, IRL pre-training has the significant disadvantage that it requires many more expert demonstrations during the pre-training phase, whereas our method does not.
RESULTS
We demonstrate BASIS's ability to leverage prior experience to help infer preferences quickly and efficiently on a diverse suite of domains. The code is available at https://github.com/abdulhaim/basis-irl and the videos showing performance are available at https://sites.google.com/view/basis-irl.
Question 1: Acquiring demonstrated behavior. Figure 4 shows the value difference when evaluating an agent after learning from a series of demonstrations for all three environments. In Fruit-Picking, we observe that BASIS is better able to optimize the demonstrated policy than IQ-learn, IQL, and multi-task IRL pre-training, achieving the lowest value difference of 15 at 10000 demonstrations, and surpassing the final performance of the baselines with less than 1/3 of the demonstrations. We see similar trends in Figure 4b, showing the value difference when inferring previously unseen driving preferences of desired driving distance, target speed, and lane preference in the Highway and Roundabout domains. The value difference for BASIS is seen to be significantly lower than all three baselines across all numbers of demonstrations, and it is once again able to surpass the best baseline with only 1/3 of the data requirements.
In the Fruit-Picking environment, we hypothesize that IQL as well as IQ-learn are unable to achieve a low value difference (fig. 4a) due to the number of trajectories provided, as well as a lack of ability to explore a range of rewards to discern the preferences of the agent. We find that multi-task IRL shows a higher value difference, as it was unable to reach maximum reward after an equivalent number of environment steps to multi-task RL in the pre-training phase. In addition, we note that conducting multi-task IRL pre-training requires collecting many expert trajectories, which may be prohibitively expensive, especially if these demonstrations are collected from humans. Instead, our method allows collecting experience inexpensively in simulation, making it much more sample efficient in terms of human demonstrations.
This experiment allows us to determine whether BASIS fulfills the first requirement of being an IRL algorithm as well as an IL algorithm: being able to reproduce the behavior demonstrated by the expert. Further, because the performance of BASIS after only a few demonstrations surpasses the performance of both baselines after three times the number of demonstrations, this provides evidence that building a strong prior over the space of reasonable goals helps BASIS infer the expert's behavior rapidly and efficiently.
Question 2: Predicting rewards. Figure 5 shows the reward loss (mean squared error in predicting the ground truth reward) obtained by each of the methods. Across all three environments, BASIS achieves the lowest error vs. any of the baseline techniques, often reaching the best performance in a fraction of the examples. In Fruit-Picking (fig. 5a), BASIS achieves the best performance after only 100 demonstrations, surpassing the performance of other techniques after 1000 demonstrations. Similarly, as shown in fig. 5b, we observe BASIS to converge to a lower reward MSE loss than any of the other techniques after only one demonstration. This rapid inference of the expert's reward suggests that the basis acquired during pre-training was sufficient to explain the expert's behavior, and the algorithm was able to adapt rapidly to the expert's task by simply updating w_e (as explained in Section 4.2). This is consistent with prior work using successor features for multi-agent learning (Filos et al., 2021), which also showed 1-shot adaptation to a novel test task.

Question 3: Adhering to the demonstrator's preferences. Figure 6 compares the distribution of behaviors learned by each method with the ground-truth behavior of the expert. In Figure 6a, IQL fails to capture the distribution of fruits the expert prefers. This could be due to a failure to scale to high-dimensional environments. Although IQ-learn and Multi-task IRL pre-training are better able to capture the correct distribution, BASIS shows a distribution closer to the ground truth than either of the baseline techniques. In Figure 6b, we perform a similar experiment in the Highway domain, visualizing how well the learned agent adheres to staying in the preferred left lane. BASIS shows a preference for the left lane 80% of the time, compared to IQ-learn which matches the expert's preference 60% of the time. Both IQL and Multi-task IRL pre-training show an almost uniform lane preference, with no clear preference for the left lane.

Question 4: How does multi-task pre-training & successor features benefit IRL? We now conduct ablation experiments to assess how multi-task RL pre-training and successor features help in learning an IRL task in all three domains. We compare BASIS (in red) to two ablations: 1) No successor features (which pre-trains a model with DQN), and 2) No pre-training (which uses successor features and the BC loss in Eq. 7 to do IRL). We observe the value difference in fig. 4 and the reward loss in fig. 5 to be larger without successor features and without multi-task pre-training, demonstrating the benefit of both components of our approach. We note that for both Fruit-Picking and Highway, the No pre-training baseline does best, actually outperforming or matching the performance of all three baseline techniques (IQ-Learn, IQL, and multi-task IRL pre-training). However, No pre-training performs poorly in the more complex Roundabout environment. In Roundabout, No successor features actually gives better performance than two of the baseline techniques (IQL and multi-task IRL pre-training), demonstrating a strong benefit of learning a prior with RL pre-training. This empirically shows how multi-task RL pre-training and successor features accelerate learning an IRL task.
CONCLUSION
A major challenge in inverse RL is that the problem is underconstrained. With many different reward functions consistent with the observed expert behaviour, it is difficult to infer a reward function for a new task. BASIS presents an approach to this problem, building a strong basis of intentions by combining multi-task RL pre-training with successor features. BASIS leverages past experience to infer the intentions of a demonstrator agent on an unseen task. We evaluate our method on domains with high-dimensional state spaces, and compare to state-of-the-art inverse RL and imitation learning baselines, as well as pre-training with multi-task IRL. Our results show that BASIS is able to achieve better performance than prior work in less than one third of the demonstrations. The limitation of this approach is that it requires the design of a set of tasks for RL pre-training that are relevant to the expert demonstrations. Nevertheless, this work shows the potential of building a generalizable basis of intentions for efficient IRL.
APPENDIX
ENVIRONMENT DETAILS
Fruit-Picking is built from a library of open-source grid-world domains in sparse reward settings with image observations (Chevalier-Boisvert et al., 2018). We designed this custom environment with different colored fruits spread across the grid that the agent must gather. The number of fruits and types of fruits are customizable, along with the reward received for gathering a specific type of fruit.
To learn a basis for intentions using RL pre-training, the agent learns to pick three different colored fruit (red, orange, green) depending on the task ID that is provided in the observation as a one-hot encoded vector. The agent picks as many fruits as it can until the horizon of the episode. Each fruit is replaced in a random location after being picked by the agent such that there are always 3 fruits of each color present in the grid. In Phase II, the demonstrated task is different to the training tasks; namely, the agent shows a varying degree of preference to each of the fruits in the environment i.e. 80% preference for red fruits, 20% preference for orange fruits, and 0% preference for green fruits. This behavior was not seen during pre-training. The final reward of the expert in this task is 40.
Highway-Env & Roundabout-Env Leurent (2018) features a collection of autonomous driving and tactical decision-making environments. We chose to model driving behavior as it allows us to determine the ability of IRL algorithms to learn the hidden intentions of a driver. In the highway env, the ego-vehicle is driving on a three-lane highway populated with other vehicles positioned at random. All vehicles can switch lanes and change their speed. We have modified the agent's reward objective to maintain a target speed while avoiding collisions with neighbouring vehicles and keeping to a preferred lane. Maintaining a target distance away from the front vehicle is also rewarded. As there are many continuous-spaced parameters that determine the reward function for the agent, it is not possible to sample all combinations of behaviors within the training tasks. Thus, to create a novel test task for the demonstrator, it is straightforward to choose a different combination of these behaviors. Specifically, we test on a task of the driving agent maintaining a desired distance of 10 from the vehicle in front of it while maintaining a speed of 28 m/s. We perform similar modifications to the Roundabout environment, where the agent must merge onto a roundabout while maintaining a specific speed and target distance away from other vehicles. We build off the implementation of the Highway domain here: https://github.com/eleurent/highway-env.
NETWORK ARCHITECTURE DETAILS
For the MiniGrid domain, the networks for φ, ψ, and the w policy share three convolution layers, with a ReLU after each layer and a max-pooling operation after the first ReLU activation. ψ and φ are represented by two separate linear layers with a Tanh activation function between them. Finally, w is represented as a single parameter. The network architecture is as follows:
• Shared feature extraction layer:
-Conv2d (3, 16) 2x2 filters, stride 1, padding 0
-ReLU
-MaxPool2d (2, 2)
-Conv2d (16, 32) 2x2 filters, stride 1, padding 0
-ReLU
-Conv2d (32, 64) 2x2 filters, stride 1, padding 0
-ReLU
• φ network layer:
-FC (256 + num tasks, 64)
-Tanh
-FC (64, num cumulants + num actions)
• ψ network layer:
-FC (256 + num tasks, 64)
-Tanh
-FC (64, num cumulants + num actions)
• w network layer:
-Parameter (num tasks, num cumulants)
In this domain, num cumulants=64, num actions=4, and num tasks=3. Note that the network architecture stays the same across RL pre-training and IRL; however, during IRL the num tasks hyper-parameter is not provided, and a dummy value (a vector of 0's of length equal to the number of tasks) is used.
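For concreteness, a PyTorch sketch of this Fruit-Picking network is given below. It follows the listing literally (shared convolutional trunk, one-hot task id concatenated before the φ and ψ heads, head outputs of size num cumulants + num actions, and a single parameter for w); the use of LazyLinear to infer the flattened trunk size is a convenience of the sketch, not part of the original architecture. The Highway variant described next differs only in replacing the convolutional trunk with the FC(5, 256) + ReLU trunk.

```python
import torch
import torch.nn as nn

class FruitPickingNet(nn.Module):
    """Sketch of the MiniGrid architecture listed above."""
    def __init__(self, n_tasks=3, n_cumulants=64, n_actions=4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 2), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(16, 32, 2), nn.ReLU(),
            nn.Conv2d(32, 64, 2), nn.ReLU(), nn.Flatten(),
        )
        def head():
            return nn.Sequential(nn.LazyLinear(64), nn.Tanh(),
                                 nn.Linear(64, n_cumulants + n_actions))
        self.phi_head, self.psi_head = head(), head()
        self.w = nn.Parameter(torch.zeros(n_tasks, n_cumulants))   # per-task preference vectors

    def forward(self, obs, task_onehot):
        z = torch.cat([self.trunk(obs), task_onehot], dim=1)       # trunk features + one-hot task id
        return self.phi_head(z), self.psi_head(z)
```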
We now present the architecture for the Highway Domain. Networks for φ, ψ and w policy share a linear layer with a ReLU activation after. ψ and φ are represented by two separate linear layers with a ReLU activation function between them. Finally, w is represented as a single parameter. The network architecture is as follows:
• Shared feature extraction layer:
-FC (5, 256)
-ReLU
• φ network layer:
-FC (1280 + num tasks, 256)
-ReLU
-FC (256, num cumulants + num actions)
• ψ network layer:
-FC (1280 + num tasks, 256)
-ReLU
-FC (256, num cumulants + num actions)
• w network layer:
-Parameter (num tasks, num cumulants)
In this domain, num cumulants=64, num actions=5, and num tasks=10. Similar to Fruit-picking, the network architecture stays the same between RL pre-training and IRL.
IMPLEMENTATION DETAILS
On the global feature φ. We would like to clarify the assumptions made in this paper regarding why φ transfers to the task during IRL (inferring intentions). While φ is indeed critical for the method's ability to recover effective rewards on downstream tasks, it is learned from the pre-training tasks, which are within the same distribution (noted in Section 4). Our training procedure ensures that the φ features are sufficient to represent the pre-training tasks (they are optimized directly in the objectives for ψ and φ). If these features generalize to other in-distribution tasks during pre-training, they should also be sufficient for downstream IRL tasks from the same distribution. Hence, we reuse φ during IRL and do not optimize it further. We have run an additional empirical analysis, shown in Figure 7, where we initialize φ from RL pre-training (learning a basis) and allow it to be optimized via the ITD loss during IRL as well. There is only a marginal change in the value difference between optimizing and not optimizing φ (ours) during IRL in the Highway domain, confirming our intuition.
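In code, this design choice amounts to freezing the pre-trained cumulant network during IRL, for example as in the short sketch below (the optimizer and learning rate are illustrative choices).

```python
import torch

def freeze_phi_and_build_optimizer(phi_e, psi_e, w_e, lr=1e-4):
    """Keep the reused cumulants fixed during IRL; only psi_e and w_e are updated."""
    for p in phi_e.parameters():
        p.requires_grad_(False)
    return torch.optim.Adam(list(psi_e.parameters()) + [w_e], lr=lr)
```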
MAXENTROPY DERIVATION
Expanding on our explanation in Section 4.2, if r = φ(s, a)^⊤ w is the representation for the reward and Q(s, a) = ψ(s, a)^⊤ w, then the MaxEnt IRL problem can be written as maximizing the likelihood of the demonstrated actions under π(a|s) ∝ exp(ψ(s, a)^⊤ w) (i.e., minimizing the BC loss of Eq. 7), subject to the Bellman-consistency constraint

\text{s.t.}\quad \psi(s, a) = \phi(s, a) + \gamma\,\mathbb{E}_{a'\sim\operatorname{softmax}(\psi(s', a')^\top w)}\big[\psi(s', a')\big].

This leads to our method, which relaxes the constraint into a soft constraint (the ITD loss).
Kostrikov et al. (2020); Jarrett et al. (2020); Chan & van der Schaar (2021) have proposed several other IL methods. Some IL methods allow the agent to interact with the environment in addition to receiving demonstrations, and include adversarial methods such as Ho & Ermon (2016); Kostrikov et al. (2018); Baram et al. (2016). Similar to Borsa et al. (2017); Torabi et al. (2018); Sermanet et al. (2018); Liu et al. (2018); Brown et al. (2019)
Figure 3: We evaluate on the Fruit-Picking, Highway, and Roundabout domains (shown left to right).
Figure 4: Value difference, i.e., the difference in the return obtained by each algorithm and the expert policy. Across the Fruit-Picking (a), Highway (b), and Roundabout (c) domains, BASIS shows the lowest value difference, indicating its behavior is closest to that of the optimal policy. It is able to surpass the performance of all baselines with less than one third of the data. The error bars show the standard deviation of 10 seeds.
Figure 5: Reward loss: error in predicting the expert's true reward. BASIS is able to converge to a lower reward MSE loss compared to baselines in the Fruit-Picking (a), Highway (b), and Roundabout (c) domains, which shows that it is most accurate in predicting the expert's reward. The error bars show the standard deviation of 10 seeds.
Figure 6: Comparison of the distribution of behaviors learned by BASIS and each of the baseline techniques with the ground-truth behavior of the expert demonstrator.
Figure 7: Optimizing φ during IRL causes no significant change.
Apprenticeship learning via inverse reinforcement learning. Pieter Abbeel, Andrew Y Ng, 10.1145/1015330.1015430Proceedings of the Twenty-First International Conference on Machine Learning, ICML '04. the Twenty-First International Conference on Machine Learning, ICML '04New York, NY, USAAssociation for Computing MachineryPieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning, ICML '04, pp. 1, New York, NY, USA, 2004. Association for Computing Machinery. ISBN 1581138385. doi: 10.1145/1015330.1015430. URL https://doi.org/10.1145/1015330.1015430.
A framework for behavioural cloning. Michael Bain, Claude Sammut, Machine Intelligence 15. Oxford University PressMichael Bain and Claude Sammut. A framework for behavioural cloning. In Machine Intelligence 15, pp. 103-129. Oxford University Press, 1996.
Goal inference as inverse planning. Chris Baker, Joshua Tenenbaum, Rebecca Saxe, Proceedings of the 29th Annual Conference of the Cognitive Science Society. the 29th Annual Conference of the Cognitive Science SocietyChris Baker, Joshua Tenenbaum, and Rebecca Saxe. Goal inference as inverse planning. Proceedings of the 29th Annual Conference of the Cognitive Science Society, 01 2007.
Model-based adversarial imitation learning. Nir Baram, Oron Anschel, Shie Mannor, Nir Baram, Oron Anschel, and Shie Mannor. Model-based adversarial imitation learning, 2016.
Fast reinforcement learning with generalized policy updates. André Barreto, Shaobo Hou, Diana Borsa, David Silver, Doina Precup, Proceedings of the National Academy of Sciences. 117André Barreto, Shaobo Hou, Diana Borsa, David Silver, and Doina Precup. Fast reinforcement learning with generalized policy updates. Proceedings of the National Academy of Sciences, 117: 30079 -30087, 2020.
Successor features for transfer in reinforcement learning. André Barreto, Will Dabney, Rémi Munos, Jonathan J Hunt, Tom Schaul, David Hado Van Hasselt, Silver, André Barreto, Will Dabney, Rémi Munos, Jonathan J. Hunt, Tom Schaul, Hado van Hasselt, and David Silver. Successor features for transfer in reinforcement learning, 2018.
Observational learning by reinforcement learning. Diana Borsa, Bilal Piot, Rémi Munos, Olivier Pietquin, Diana Borsa, Bilal Piot, Rémi Munos, and Olivier Pietquin. Observational learning by reinforcement learning, 2017.
Universal successor features approximators. Diana Borsa, André Barreto, John Quan, Daniel J Mankowitz, Rémi Munos, David Hado Van Hasselt, Tom Silver, Schaul, abs/1812.07626CoRRDiana Borsa, André Barreto, John Quan, Daniel J. Mankowitz, Rémi Munos, Hado van Hasselt, David Silver, and Tom Schaul. Universal successor features approximators. CoRR, abs/1812.07626, 2018. URL http://arxiv.org/abs/1812.07626.
Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations. Daniel S Brown, Wonjoon Goo, Prabhat Nagarajan, Scott Niekum, Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, and Scott Niekum. Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations, 2019.
Distance minimization for reward learning from scored trajectories. Benjamin Burchfiel, Carlo Tomasi, Ronald Parr, AAAI, AAAI'16. AAAI PressBenjamin Burchfiel, Carlo Tomasi, and Ronald Parr. Distance minimization for reward learning from scored trajectories. In AAAI, AAAI'16, pp. 3330-3336. AAAI Press, 2016.
Scalable bayesian inverse reinforcement learning. Alex J Chan, Mihaela Van Der Schaar, Alex J. Chan and Mihaela van der Schaar. Scalable bayesian inverse reinforcement learning, 2021.
Minimalistic gridworld environment for openai gym. Maxime Chevalier, - Boisvert, Lucas Willems, Suman Pal, Maxime Chevalier-Boisvert, Lucas Willems, and Suman Pal. Minimalistic gridworld environment for openai gym. https://github.com/maximecb/gym-minigrid, 2018.
Improving generalization for temporal difference learning: The successor representation. Peter Dayan, 10.1162/neco.1993.5.4.613Neural Computation. 54Peter Dayan. Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5(4):613-624, 1993. doi: 10.1162/neco.1993.5.4.613.
Learning how pedestrians navigate: A deep inverse reinforcement learning approach. Muhammad Fahad, Zhuo Chen, Yi Guo, 10.1109/IROS.2018.8593438IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Muhammad Fahad, Zhuo Chen, and Yi Guo. Learning how pedestrians navigate: A deep inverse reinforcement learning approach. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 819-826, 2018. doi: 10.1109/IROS.2018.8593438.
Psiphilearning: Reinforcement learning with demonstrations using successor features and inverse temporal difference learning. Angelos Filos, Clare Lyle, Yarin Gal, Sergey Levine, Natasha Jaques, Gregory Farquhar, Angelos Filos, Clare Lyle, Yarin Gal, Sergey Levine, Natasha Jaques, and Gregory Farquhar. Psiphi- learning: Reinforcement learning with demonstrations using successor features and inverse tempo- ral difference learning, 2021.
Learning robust rewards with adverserial inverse reinforcement learning. Justin Fu, Katie Luo, Sergey Levine, International Conference on Learning Representations. Justin Fu, Katie Luo, and Sergey Levine. Learning robust rewards with adverserial inverse re- inforcement learning. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rkHywl-A-.
IQ-learn: Inverse soft-q learning for imitation. Divyansh Garg, Shuvam Chakraborty, Chris Cundy, Jiaming Song, Stefano Ermon, Advances in Neural Information Processing Systems. A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman VaughanDivyansh Garg, Shuvam Chakraborty, Chris Cundy, Jiaming Song, and Stefano Ermon. IQ-learn: Inverse soft-q learning for imitation. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. URL https:// openreview.net/forum?id=Aeo-xqtb5p.
Multi-task maximum entropy inverse reinforcement learning. Adam Gleave, Oliver Habryka, Adam Gleave and Oliver Habryka. Multi-task maximum entropy inverse reinforcement learning, 2018.
Reinforcement learning with deep energy-based policies. Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, Sergey Levine, International conference on machine learning. PMLRTuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. In International conference on machine learning, pp. 1352-1361. PMLR, 2017.
Cooperative inverse reinforcement learning. Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, Stuart Russell, Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell. Cooperative inverse reinforcement learning, 2016.
Fast task inference with variational intrinsic successor features. Steven Hansen, Will Dabney, Andre Barreto, Tom Van De Wiele, David Warde-Farley, Volodymyr Mnih, Steven Hansen, Will Dabney, Andre Barreto, Tom Van de Wiele, David Warde-Farley, and Volodymyr Mnih. Fast task inference with variational intrinsic successor features, 2020.
Generative adversarial imitation learning. Jonathan Ho, Stefano Ermon, Advances in Neural Information Processing Systems. D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. GarnettCurran Associates, Inc29Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/ paper/2016/file/cc7e2b878868cbae992d1fb743995d8f-Paper.pdf.
Driving behavior modeling using naturalistic human driving data with inverse reinforcement learning. Zhiyu Huang, Jingda Wu, Chen Lv, Zhiyu Huang, Jingda Wu, and Chen Lv. Driving behavior modeling using naturalistic human driving data with inverse reinforcement learning, 2021.
Successor uncertainties: Exploration and uncertainty in temporal difference learning. David Janz, Jiri Hron, Katja Przemysł Aw Mazur, José Miguel Hofmann, Sebastian Hernández-Lobato, Tschiatschek, Advances in Neural Information Processing Systems. H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. GarnettCurran Associates, Inc32David Janz, Jiri Hron, Przemysł aw Mazur, Katja Hofmann, José Miguel Hernández-Lobato, and Sebastian Tschiatschek. Successor uncertainties: Exploration and uncertainty in temporal dif- ference learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran As- sociates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/ 1b113258af3968aaf3969ca67e744ff8-Paper.pdf.
Theory of mind as inverse reinforcement learning. Current Opinion in Behavioral Sciences. Julian Jara-Ettinger, 10.1016/j.cobeha.2019.04.010.URLhttps:/www.sciencedirect.com/science/article/pii/S2352154618302055.ArtificialIntelligence2352-154629Julian Jara-Ettinger. Theory of mind as inverse reinforcement learning. Current Opinion in Be- havioral Sciences, 29:105-110, 2019. ISSN 2352-1546. doi: https://doi.org/10.1016/j.cobeha. 2019.04.010. URL https://www.sciencedirect.com/science/article/pii/ S2352154618302055. Artificial Intelligence.
Strictly batch imitation learning by energybased distribution matching. Daniel Jarrett, Ioana Bica, Mihaela Van Der Schaar, Advances in Neural Information Processing Systems. H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. LinCurran Associates, Inc33Daniel Jarrett, Ioana Bica, and Mihaela van der Schaar. Strictly batch imitation learning by energy- based distribution matching. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 7354-7365. Curran As- sociates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/ 524f141e189d2a00968c3d48cadd4159-Paper.pdf.
Ming Jin, Andreas Damianou, Pieter Abbeel, and Costas Spanos. Inverse reinforcement learning via deep gaussian process. Ming Jin, Andreas Damianou, Pieter Abbeel, and Costas Spanos. Inverse reinforcement learning via deep gaussian process, 2017.
Deep inverse q-learning with constraints. Gabriel Kalweit, Maria Huegle, Moritz Werling, Joschka Boedecker, Gabriel Kalweit, Maria Huegle, Moritz Werling, and Joschka Boedecker. Deep inverse q-learning with constraints, 2020.
Socially adaptive path planning in human environments using inverse reinforcement learning. Beomjoon Kim, Joelle Pineau, 10.1007/s12369-015-0310-2International Journal of Social Robotics. 82016Beomjoon Kim and Joelle Pineau. Socially adaptive path planning in human environments using inverse reinforcement learning. International Journal of Social Robotics, 8:51-66, 01 2016. doi: 10.1007/s12369-015-0310-2.
Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning. Ilya Kostrikov, Krishna Kumar, Debidatta Agrawal, Sergey Dwibedi, Jonathan Levine, Tompson, Ilya Kostrikov, Kumar Krishna Agrawal, Debidatta Dwibedi, Sergey Levine, and Jonathan Tompson. Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning, 2018.
Imitation learning via off-policy distribution matching. Ilya Kostrikov, Ofir Nachum, Jonathan Tompson, International Conference on Learning Representations. Ilya Kostrikov, Ofir Nachum, and Jonathan Tompson. Imitation learning via off-policy distribution matching. In International Conference on Learning Representations, 2020. URL https:// openreview.net/forum?id=Hyg-JC4FDr.
Truly batch apprenticeship learning with deep successor features. Donghun Lee, Srivatsan Srinivasan, Finale Doshi-Velez, Donghun Lee, Srivatsan Srinivasan, and Finale Doshi-Velez. Truly batch apprenticeship learning with deep successor features, 2019. URL https://arxiv.org/abs/1903.10077.
Advantages and limitations of using successor features for transfer in reinforcement learning. Lucas Lehnert, Stefanie Tellex, Michael L Littman, Lucas Lehnert, Stefanie Tellex, and Michael L. Littman. Advantages and limitations of using successor features for transfer in reinforcement learning, 2017.
An environment for autonomous driving decision-making. Edouard Leurent, Edouard Leurent. An environment for autonomous driving decision-making. https://github. com/eleurent/highway-env, 2018.
Feature construction for inverse reinforcement learning. Sergey Levine, Zoran Popovic, Vladlen Koltun, Advances in Neural Information Processing Systems. J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel, and A. CulottaCurran Associates, Inc23Sergey Levine, Zoran Popovic, and Vladlen Koltun. Feature construction for inverse rein- forcement learning. In J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel, and A. Cu- lotta (eds.), Advances in Neural Information Processing Systems, volume 23. Curran Asso- ciates, Inc., 2010. URL https://proceedings.neurips.cc/paper/2010/file/ a8f15eda80c50adb0e71943adc8015cf-Paper.pdf.
Nonlinear inverse reinforcement learning with gaussian processes. Sergey Levine, Zoran Popovic, Vladlen Koltun, Advances in Neural Information Processing Systems. J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Q. WeinbergerCurran Associates, Inc24Sergey Levine, Zoran Popovic, and Vladlen Koltun. Nonlinear inverse reinforcement learning with gaussian processes. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Q. Wein- berger (eds.), Advances in Neural Information Processing Systems, volume 24. Curran Asso- ciates, Inc., 2011. URL https://proceedings.neurips.cc/paper/2011/file/ c51ce410c124a10e0db5e4b97fc2af39-Paper.pdf.
Imitation from observation: Learning to imitate behaviors from raw video via context translation. Yuxuan Liu, Abhishek Gupta, Pieter Abbeel, Sergey Levine, YuXuan Liu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Imitation from observation: Learning to imitate behaviors from raw video via context translation, 2018.
Eigenoption discovery through the deep successor representation. C Marlos, Clemens Machado, Xiaoxiao Rosenbaum, Miao Guo, Gerald Liu, Murray Tesauro, Campbell, Marlos C. Machado, Clemens Rosenbaum, Xiaoxiao Guo, Miao Liu, Gerald Tesauro, and Murray Campbell. Eigenoption discovery through the deep successor representation, 2018.
Count-based exploration with the successor representation. C Marlos, Marc G Machado, Michael Bellemare, Bowling, 10.1609/aaai.v34i04.5955Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Marlos C. Machado, Marc G. Bellemare, and Michael Bowling. Count-based exploration with the successor representation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04): 5125-5133, Apr. 2020. doi: 10.1609/aaai.v34i04.5955. URL https://ojs.aaai.org/ index.php/AAAI/article/view/5955.
Algorithms for inverse reinforcement learning. Y Andrew, Stuart Ng, Russell, Proc. 17th International Conf. on Machine Learning. 17th International Conf. on Machine LearningMorgan KaufmannAndrew Y. Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In in Proc. 17th International Conf. on Machine Learning, pp. 663-670. Morgan Kaufmann, 2000.
Markov decision processes. Ml Puterman, Jhon Wiley & SonsNew JerseyML Puterman. Markov decision processes. 1994. Jhon Wiley & Sons, New Jersey, 1994.
Modeling human intention inference in continuous 3d domains by inverse planning and body kinematics. Yingdong Qian, Marta Kryven, Tao Gao, Hanbyul Joo, Josh Tenenbaum, Yingdong Qian, Marta Kryven, Tao Gao, Hanbyul Joo, and Josh Tenenbaum. Modeling human intention inference in continuous 3d domains by inverse planning and body kinematics, 2021.
Machine theory of mind. Neil Rabinowitz, Frank Perbet, Francis Song, Chiyuan Zhang, S M Ali Eslami, Matthew Botvinick, PMLRProceedings of the 35th International Conference on Machine Learning. Jennifer Dy and Andreas Krausethe 35th International Conference on Machine Learning80Neil Rabinowitz, Frank Perbet, Francis Song, Chiyuan Zhang, S. M. Ali Eslami, and Matthew Botvinick. Machine theory of mind. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 4218-4227. PMLR, 10-15 Jul 2018. URL https://proceedings. mlr.press/v80/rabinowitz18a.html.
Maximum margin planning. Nathan D Ratliff, J Andrew Bagnell, Martin A Zinkevich, 10.1145/1143844.1143936Proceedings of the 23rd International Conference on Machine Learning, ICML '06. the 23rd International Conference on Machine Learning, ICML '06New York, NY, USAAssociation for Computing MachineryNathan D. Ratliff, J. Andrew Bagnell, and Martin A. Zinkevich. Maximum margin planning. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pp. 729-736, New York, NY, USA, 2006. Association for Computing Machinery. ISBN 1595933832. doi: 10.1145/1143844.1143936. URL https://doi.org/10.1145/1143844.1143936.
Efficient reductions for imitation learning. Stephane Ross, Drew Bagnell, Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. Yee Whye Teh and Mike Titteringtonthe Thirteenth International Conference on Artificial Intelligence and StatisticsSardinia, Italy9Chia Laguna ResortStephane Ross and Drew Bagnell. Efficient reductions for imitation learning. In Yee Whye Teh and Mike Titterington (eds.), Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pp. 661-668, Chia Laguna Resort, Sardinia, Italy, 13-15 May 2010. PMLR. URL https://proceedings. mlr.press/v9/ross10a.html.
No-regret reductions for imitation learning and structured prediction. CoRR, abs/1011.0686. Stéphane Ross, Geoffrey J Gordon, J Andrew Bagnell, Stéphane Ross, Geoffrey J. Gordon, and J. Andrew Bagnell. No-regret reductions for imitation learning and structured prediction. CoRR, abs/1011.0686, 2010. URL http://arxiv.org/ abs/1011.0686.
Time-contrastive networks: Self-supervised learning from video. Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine, Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, and Sergey Levine. Time-contrastive networks: Self-supervised learning from video, 2018.
International Foundation for Autonomous Agents and Multiagent Systems. Kyriacos Shiarlis, Joao Messias, Shimon Whiteson, Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, AAMAS '16. the 2016 International Conference on Autonomous Agents & Multiagent Systems, AAMAS '16Richland, SCInverse reinforcement learning from failure. ISBN 9781450342391Kyriacos Shiarlis, Joao Messias, and Shimon Whiteson. Inverse reinforcement learning from failure. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, AAMAS '16, pp. 1060-1068, Richland, SC, 2016. International Foundation for Autonomous Agents and Multiagent Systems. ISBN 9781450342391.
Apprenticeship learning using linear programming. Umar Syed, Michael Bowling, Robert E Schapire, Proceedings of the 25th international conference on Machine learning. the 25th international conference on Machine learningUmar Syed, Michael Bowling, and Robert E Schapire. Apprenticeship learning using linear program- ming. In Proceedings of the 25th international conference on Machine learning, pp. 1032-1039, 2008.
. Faraz Torabi, Garrett Warnell, Peter Stone, Behavioral cloning from observation. Faraz Torabi, Garrett Warnell, and Peter Stone. Behavioral cloning from observation, 2018.
Maximum entropy deep inverse reinforcement learning. Markus Wulfmeier, Peter Ondruska, Ingmar Posner, Markus Wulfmeier, Peter Ondruska, and Ingmar Posner. Maximum entropy deep inverse reinforce- ment learning, 2016.
Learning a prior over intent via meta-inverse reinforcement learning. Kelvin Xu, Ellis Ratner, Anca Dragan, Sergey Levine, Chelsea Finn, Kelvin Xu, Ellis Ratner, Anca Dragan, Sergey Levine, and Chelsea Finn. Learning a prior over intent via meta-inverse reinforcement learning, 2019.
Meta-inverse reinforcement learning with probabilistic context variables. Lantao Yu, Tianhe Yu, Chelsea Finn, Stefano Ermon, Lantao Yu, Tianhe Yu, Chelsea Finn, and Stefano Ermon. Meta-inverse reinforcement learning with probabilistic context variables, 2019.
Visual semantic planning using deep successor representations. Yuke Zhu, Daniel Gordon, Eric Kolve, Dieter Fox, Li Fei-Fei, Abhinav Gupta, Roozbeh Mottaghi, Ali Farhadi, Yuke Zhu, Daniel Gordon, Eric Kolve, Dieter Fox, Li Fei-Fei, Abhinav Gupta, Roozbeh Mottaghi, and Ali Farhadi. Visual semantic planning using deep successor representations, 2017.
Maximum entropy inverse reinforcement learning. Brian D Ziebart, Andrew Maas, J Andrew Bagnell, Anind K Dey, Proceedings of the 23rd National Conference on Artificial Intelligence. the 23rd National Conference on Artificial IntelligenceAAAI Press3ISBN 9781577353683Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse reinforcement learning. In Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 3, AAAI'08, pp. 1433-1438. AAAI Press, 2008. ISBN 9781577353683.
| [
"https://github.com/Div99/IQ-Learn.",
"https://github.com/eleurent/highway-env.",
"https://github.com/maximecb/gym-minigrid,"
]
|
[
"Statistical Mechanics of semi-classical colored Objects",
"Statistical Mechanics of semi-classical colored Objects",
"Statistical Mechanics of semi-classical colored Objects",
"Statistical Mechanics of semi-classical colored Objects"
]
| [
"M Hofmann ",
"M Bleicher ",
"S Scherer ",
"L Neise ",
"H Stöcker ",
"W Greiner ",
"\nInstitut für Theoretische Physik\nJ. W. Goethe-Universität\nD-60054Frankfurt am MainGermany\n",
"\nSun Microsystems GmbH\nLangenGermany\n",
"M Hofmann ",
"M Bleicher ",
"S Scherer ",
"L Neise ",
"H Stöcker ",
"W Greiner ",
"\nInstitut für Theoretische Physik\nJ. W. Goethe-Universität\nD-60054Frankfurt am MainGermany\n",
"\nSun Microsystems GmbH\nLangenGermany\n"
]
| [
"Institut für Theoretische Physik\nJ. W. Goethe-Universität\nD-60054Frankfurt am MainGermany",
"Sun Microsystems GmbH\nLangenGermany",
"Institut für Theoretische Physik\nJ. W. Goethe-Universität\nD-60054Frankfurt am MainGermany",
"Sun Microsystems GmbH\nLangenGermany"
]
| [
"Fellow of the Josef Buchmann-Foundation",
"Fellow of the Josef Buchmann-Foundation"
]
| A microscopic model of deconfined matter based on color interactions between semi-classical quarks is studied. A hadronization mechanism is imposed to examine the properties and the disassembly of a thermalized quark plasma and to investigate the possible existence of a phase transition from quark matter to hadron matter. § present address: | 10.1016/s0370-2693(00)00257-4 | [
"https://export.arxiv.org/pdf/nucl-th/9908030v1.pdf"
]
| 17,592,430 | nucl-th/9908030 | 9435134e8854400aec096cb302ebd12abdff3bc5 |
Statistical Mechanics of semi-classical colored Objects
Aug 1999
M Hofmann
M Bleicher
S Scherer
L Neise
H Stöcker
W Greiner
Institut für Theoretische Physik
J. W. Goethe-Universität
D-60054Frankfurt am MainGermany
Sun Microsystems GmbH
LangenGermany
Statistical Mechanics of semi-classical colored Objects
Fellow of the Josef Buchmann-Foundation
arXiv:nucl-th/9908030v1 7 Aug 1999
A microscopic model of deconfined matter based on color interactions between semi-classical quarks is studied. A hadronization mechanism is imposed to examine the properties and the disassembly of a thermalized quark plasma and to investigate the possible existence of a phase transition from quark matter to hadron matter. § present address:
The study of relativistic heavy-ion collisions is motivated to a considerable extent by the search for and the unambiguous observation of a phase transition from confined, hadronic matter to a deconfined state of QCD matter dubbed the quark-gluon plasma [1,2].
In the forthcoming experiments at RHIC (and later at LHC), the formation of a zone of quark-gluon plasma is generally expected. The primary stage of a collision at RHIC will be dominated by hard pQCD processes leading to the creation of a tremendous number of quarks and gluons, which are believed to form a zone of hot, dense, and therefore expectedly deconfined partonic matter. This part of a heavy-ion collision has been described microscopically by partonic cascade models such as VNI [3]. However, pQCD is, by definition, only applicable in reactions with large momentum transfer Q^2. At SPS these partonic processes are strongly suppressed as compared to hadronic interactions in the early stage. Here, the strong collective motion of the impinging heavy nuclei may drive the system to temperatures and densities beyond the hadronic level into a deconfined phase. However, in both pictures, partonic or hadronic, the major part of particle production takes place in primary collisions within the first few fm/c, when the system is strongly compressed and heated.
Most recently, a combination of partonic and hadronic cascades has been established by connecting the VNI model with the UrQMD model which finally copes with the hadronic secondary interactions [4].
Unfortunately, a possible quark-gluon plasma phase dominated by soft, non-perturbative QCD processes which mediates between parton and hadron mode and intrinsically performs the hadronization process is not dynamically treated. The non-perturbative properties of QCD, which are crucial for this transition, impede the applicability of all common approaches to a first-principle description of hadronization. Effective models have to be constructed which allow a numerical calculation of observables by simulating the essential features of non-perturbative QCD. In [5], a dynamical approach based on the Nambu-Jona-Lasinio model has been presented, in which quarks are propagated on classical trajectories while their effective masses are calculated self-consistently according to the NJL equations of motion. Hadron production is driven by qq and qh collisions. Unfortunately, this approach does not provide confinement and therefore is not suitable for the investigation of heavy-ion collisions. On the footing of the Friedberg-Lee Lagrangian, a similar study has been performed in the chromodielectric model [6] completely respecting confinement. Hadronization is performed by mapping quark-gluon states onto irreducible representations of color SU(3). However, this method is numerically extremely expensive. This prohibits the simulation of heavy-ion collisions.
In this paper we present a semi-classical model which mimics the properties of non-abelian QCD by means of a two-body color potential between quarks. In addition, a dynamical hadronization criterion is defined which allows for the consecutive migration from quark to hadronic degrees of freedom. The long-term objective of this investigation is the unification of the different species of microscopic models, partonic in the initial and hadronic in the final stage of the reaction, into one single model, which finally will allow the simulation of a complete heavy-ion collision including a QGP phase transition. In this paper we shall elaborate on the major thermodynamic properties of the so-defined system, which will justify the crude approximation by its phenomenological implications. In a subsequent publication we will investigate the dynamical evolution of the model and adapt it to more realistic initial conditions, which will then allow us to describe heavy-ion collisions.
The model Hamiltonian
The colored and flavored quarks are treated as semi-classical particles interacting via a Cornell potential with color matrices [7]. This interaction provides an effective description of the non-perturbative, soft gluonic part of QCD. The Hamiltonian reads
H = \sum_{i=1}^{N} \sqrt{p_i^2 + m_i^2} + \frac{1}{2}\sum_{i,j} C_{ij}\, V(|r_i - r_j|),
where N is the number of quarks. Four quark flavors (u, d, s, c) with current masses m_u = m_d = 10 MeV, m_s = 150 MeV, and m_c = 1.5 GeV are considered.
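For illustration, the energy of a given quark configuration can be evaluated directly from this Hamiltonian. The NumPy sketch below assumes the purely linear potential V(r) = κ r adopted later in the text; the units (positions in fm, momenta and masses in GeV, κ in GeV/fm) and the example value of κ are our own choices.

```python
import numpy as np

def hamiltonian(p, m, r, C, kappa=1.1):
    """Semi-classical energy H = sum_i sqrt(p_i^2 + m_i^2)
    + (1/2) sum_{i,j} C_ij V(|r_i - r_j|) with V(r) = kappa * r.
    p, r: (N, 3) momenta [GeV] and positions [fm]; m: (N,) masses [GeV];
    C: (N, N) color factors (see Table 1); kappa: string tension [GeV/fm]."""
    kinetic = np.sum(np.sqrt(np.sum(p**2, axis=1) + m**2))
    dist = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)  # pair distances, zero on the diagonal
    potential = 0.5 * np.sum(C * kappa * dist)                     # the i = j terms vanish since dist_ii = 0
    return kinetic + potential
```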
The confining properties of V(r) are ensured by a linear increase at large distances r. At short distances, the strong coupling constant α_s becomes small, yielding a Coulomb-type behavior as in QED. This color Coulomb potential plus the confining part is the well known Cornell potential [8],

V(r) = -\frac{3}{4} \frac{α_s}{r} + κ\, r ,

which has successfully been applied to meson spectroscopy. For infinite quark masses this inter-quark potential has also been found in lattice calculations over a wide range of quark distances [9]. For small quark masses, retardation and chromomagnetic effects should be included. This is neglected in the present work. However, the linear behavior at large distances seems to be supported by the success of the string model even for zero quark masses [10]. The color matrix elements C_ij regulate the sign and relative strength of the interaction between two quarks/antiquarks, respectively, depending on the color combination of the pair. The matrix C_ij in the short range color interaction potential between quarks, V_color = -C_ij (3/4)(α_s/r), can be calculated from the quark-gluon interaction part of the QCD Lagrangian

L_{int} = \frac{g}{2}\, \bar{Ψ} λ^a γ^µ Ψ\, G^a_µ .

Using the standard fundamental representation of SU(3)_color for the quarks and the adjoint representation for the gluons,

q_R = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} , \quad q_G = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} , \quad q_B = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} , \qquad T^a = \tfrac{1}{2} λ^a , \; a = 1, \ldots, 8 ,
where λ a are the Gell-Mann matrices, and separating the quark wave function in the color and Dirac parts,
Ψ_α = ψ\, q_α , the interaction amplitude

M_{αα'ββ'} ∼ \frac{g^2}{4}\, \bar{Ψ}_{α'} γ^µ λ^a Ψ_α\, D^{µν}_{ab}(q)\, \bar{Ψ}_{β'} γ^ν λ^b Ψ_β

separates into color and Dirac parts (D^{µν}_{ab}(q) = D^{µν}(q) δ_{ab} is the gluon propagator):

M_{αα'ββ'} ∼ \bar{ψ}_1 γ^µ ψ_1\, D_{µν}(q)\, \bar{ψ}_2 γ^ν ψ_2 \; \big(q^†_{α'} λ^a q_α\big)\, δ_{ab}\, \big(q^†_{β'} λ^b q_β\big) .

Table 1 (color matrix elements C^c_{αβ}):

            R      G      B      B̄      Ḡ      R̄
   R       -1    +1/2   +1/2   -1/2   -1/2    +1
   G      +1/2    -1    +1/2   -1/2    +1    -1/2
   B      +1/2   +1/2    -1     +1    -1/2   -1/2
   B̄      -1/2   -1/2    +1     -1    +1/2   +1/2
   Ḡ      -1/2    +1    -1/2   +1/2    -1    +1/2
   R̄       +1    -1/2   -1/2   +1/2   +1/2    -1
Here, α and β represent the color charges of the incoming quarks, α ′ and β ′ of the outgoing quarks. Collecting the color parts in a color factor
C^c_{αα'ββ'} = \frac{3}{4} \sum_{a=1}^{8} \big(q^†_{α'} λ^a q_α\big)\big(q^†_{β'} λ^a q_β\big) = \frac{3}{4} \sum_{a=1}^{8} (λ^a)_{αα'} (λ^a)_{ββ'} ,
one can calculate the net amplitude by summing over all possible combinations of in- and outgoing colors. As there is evidence from lattice calculations that there is no color transport over distances larger than λ ≈ 0.2 . . . 0.3 fm, only the commuting diagonal Gell-Mann matrices λ_3 and λ_8 from the Cartan subalgebra of SU(3)_color contribute over larger distances. In this Abelian approximation the total color matrix for quark-quark interactions is then given by
C^c_{αβ} = \frac{3}{4} \sum_{a=3,8} (λ^a)_{αα} (λ^a)_{ββ} = w^T_α w_β , \qquad \text{where} \quad w_α = \frac{\sqrt{3}}{2} \begin{pmatrix} (λ^3)_{αα} \\ (λ^8)_{αα} \end{pmatrix} , \quad α = 1, 2, 3 \; (R, G, B)
are the normalized weight vectors corresponding to the three quark colors in (λ_3, λ_8) space. Imposing a factor -1 at each antiquark vertex in color space yields the color matrix elements for the different color combinations as collected in table 1. They can easily be read off as the scalar products of the weight vectors corresponding to the three colors or anticolors, respectively. Positive values indicate attractive, negative repulsive interactions. Note that the relative strength of the color matrix elements is rigorously enforced by the requirement of color neutrality of widely separated qq̄ and qqq states.
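As an illustration of the weight-vector construction above, the following short Python sketch (not part of the original work; variable names and the antiquark convention shown here are ours) builds w_α from the diagonal Gell-Mann matrices and evaluates their scalar products. The entries of table 1 follow from these products once the sign convention described in the text (a factor -1 at each antiquark vertex, with positive values denoting attraction) is imposed.

```python
# Illustrative sketch: quark weight vectors from the diagonal Gell-Mann
# matrices lambda_3, lambda_8, and their scalar products (cf. Table 1).
import numpy as np

lam3 = np.diag([1.0, -1.0, 0.0])
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)

# w_alpha = sqrt(3)/2 * ( (lam3)_aa, (lam8)_aa ), alpha = R, G, B
w = {c: np.sqrt(3.0) / 2.0 * np.array([lam3[i, i], lam8[i, i]])
     for i, c in enumerate("RGB")}

# Antiquarks carry the opposite weight vector (assumed convention).
w.update({c + "bar": -w[c] for c in "RGB"})

# Color neutrality of widely separated qqbar and qqq states follows from
# w_R + w_G + w_B = 0.
print("w_R + w_G + w_B =", w["R"] + w["G"] + w["B"])
for a in ["R", "G", "B"]:
    for b in ["R", "G", "B", "Rbar", "Gbar", "Bbar"]:
        print(f"w_{a} . w_{b} = {np.dot(w[a], w[b]):+.2f}")
```

The printout shows the characteristic pattern of values ±1 and ±1/2 that, up to the overall attraction/repulsion sign convention, fills table 1.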
The properties of the interacting quark gas turn out to be independent of the choice of the shape of the potential at small distances, as long as the long-distance term is defined properly. Therefore, we shall extend the linear potential to small distances r instead of using the color Coulomb potential at small r, which brings us into accordance with widely used phenomenological models for hadrons [11,12].
Regge trajectories yield values of κ_0 ≈ 1.1 GeV/fm, while in the string model the string constant is found at κ_0 ≈ 0.9 GeV/fm. However, these values were fitted to the properties of isolated strings [13]. In a dense medium, quarks interact with all other color charges. This prohibits the confinement of the field lines into one single flux tube - deconfinement is the consequence. Thus, free string constants κ_0 are not appropriate to calculate the properties of quark matter at high temperatures and densities as expected in heavy-ion collisions [14]. In-medium effects, e.g. interacting color fields, yield an effective increase of the string tension (Casimir scaling) [15]. In the present model κ effectively describes these in-medium effects. It will be treated as a free parameter of the model and should not be identified with the zero temperature value of free strings.
Obviously, a sufficiently high density of color charge carriers will lead to the screening of the color interaction in the dense medium, and thus color deconfinement results, even in the simple semi-classical toy model presented here. We will discuss this below.
On the other hand, in a less dense and cooler system, all quarks will condense into clusters of two or three (anti-)particles with a total color charge in each cluster of zero. Note that higher quark numbers may also form totally color neutral states which appear to be bound. However, further propagation causes a separation into smaller likewise color neutral subclusters. Therefore, we ultimately obtain bound states which correspond to mesons or baryons.
Hadronization
The second requirement on the model is to define a criterion for how to map those bound quark states to hadrons. Such a mechanism is essential, as the Hamiltonian is not tuned to describe bound and truly confined hadron states. Attempts have been made [16] to do so in a Vlasov approach. Here, we use the straightforward requirement that the total color interaction of a pair (or a three particle state) of quarks with the remaining system vanishes. Then, these qq̄- and qqq-states no longer contribute to the color interaction of the quark gas (see figure 1). In the present model, this criterion of confinement - which in a numerical simulation of course would never be fulfilled exactly - has been softened by introducing a lower bound for the remaining interaction κ_min between the cluster and the residual quark matter beyond which the cluster is declared to be frozen out [17]. It is convenient to measure κ_min in units of the natural scale of the model, κ.
|\vec{F}_{cluster}| = \frac{1}{N_{cluster}} \Big| \sum_{i ∈ cluster} \vec{F}_i \Big| < κ_{min} = F_{cut} \cdot κ .
Here,
\vec{F}_i = \sum_j \vec{F}_{ij} = - \sum_j C_{ij}\, \vec{∇}_j V(|\vec{r}_i - \vec{r}_j|)
gives the total force of the system acting on particle i. If a bound quark state fulfills the hadronization criterion it will be mapped to an appropriate hadronic state with identical quantum numbers. Spin and isospin of the hadron are randomly chosen according to the probabilities given by the Clebsch-Gordan coefficients.
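The freeze-out test can be summarized in a few lines of code. The sketch below is an assumed, simplified implementation (linear potential only, illustrative parameter values, our own function names); for a color-neutral cluster the intra-cluster forces cancel pairwise, so summing the total forces of its members directly measures the residual interaction with the rest of the system.

```python
# Illustrative sketch of the hadronization (freeze-out) criterion.
import numpy as np

KAPPA = 1.0   # string tension in GeV/fm (illustrative value)
F_CUT = 0.01  # freeze-out parameter

def force_on(i, positions, C):
    """Total color force acting on particle i, F_i = -sum_j C_ij * kappa * rhat_ij."""
    f = np.zeros(3)
    for j in range(len(positions)):
        if j == i:
            continue
        d = positions[i] - positions[j]
        r = np.linalg.norm(d)
        if r > 0:
            f += -C[i, j] * KAPPA * d / r   # gradient of the linear potential
    return f

def is_frozen_out(cluster, positions, C):
    """True if the average force on the cluster is below F_cut * kappa.
    Since C is symmetric, intra-cluster forces cancel in the sum, so this
    measures only the interaction with the residual quark matter."""
    f = sum(force_on(i, positions, C) for i in cluster) / len(cluster)
    return np.linalg.norm(f) < F_CUT * KAPPA
```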
The mass of the produced hadron is determined by energy and momentum conservation. The total energy of the multi-quark state is given by the expression
E_H = \sum_{i ∈ cluster} \Big( E_i + \frac{1}{2} \sum_{j ∈ cluster,\, j ≠ i} C_{ij}\, V(|\vec{r}_i - \vec{r}_j|) \Big) + δE ,
where δE represents the residual energy which was set free due to the field cut-off in the hadronization process and is of the order δE/E ≲ 10^{-2}. The momentum of the hadron reads
\vec{P}_H = \sum_{i ∈ cluster} \vec{p}_i ,
which yields a hadron mass of
M_H = \sqrt{E_H^2 - \vec{P}_H^{\,2}} .
Usually the obtained hadron masses will hardly fit the tabulated pole masses of the known hadrons. Therefore, the quark clusters will preferably be mapped to resonances with a broad mass distribution instead of sharply peaked ground states. In case of multiple possible selections for given quantum numbers we pick one randomly according to mass distributions which are given by Breit-Wigner distributions
f(M) ∼ \frac{Γ^2}{(M - m_0)^2 + (Γ/2)^2} .
Here, m_0 and Γ denote the peak mass and the total decay width of the particle, respectively. Towards low masses, the distribution is cut off at a minimal mass to ensure hadronic decay according to the experimentally known branching ratios. In the current version the model distinguishes 29 mesonic and 36 baryonic states.
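For concreteness, a minimal way to realize this mass selection is rejection sampling from the truncated Breit-Wigner distribution. The following sketch is not taken from the original implementation; the resonance parameters and cutoff shown are illustrative assumptions (a ρ-like state cut at the two-pion threshold).

```python
# Minimal sketch: draw a resonance mass from the truncated Breit-Wigner
# distribution f(M) ~ Gamma^2 / ((M - m0)^2 + (Gamma/2)^2), M >= m_min.
import random

def sample_mass(m0, gamma, m_min, m_max):
    """Rejection-sample M in [m_min, m_max] from the truncated Breit-Wigner."""
    def f(m):
        return gamma**2 / ((m - m0)**2 + (gamma / 2.0)**2)
    # f is unimodal at m0, so the envelope constant is f at m0 or at the edge.
    f_max = f(m0) if m_min <= m0 <= m_max else max(f(m_min), f(m_max))
    while True:
        m = random.uniform(m_min, m_max)
        if random.uniform(0.0, f_max) < f(m):
            return m

# Example: rho-like state, cut off at the two-pion threshold (values in GeV).
print(sample_mass(m0=0.770, gamma=0.150, m_min=0.280, m_max=1.5))
```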
Thermodynamic properties of the interacting quark gas
In the present work, the properties of the interacting quark gas, i.e. of hot quark matter, are studied in complete thermal equilibrium. The system of interacting quarks is not an ideal gas, but rather a strongly coupled quark fluid. Therefore, the integration of the partition function cannot be carried out analytically. By adopting the Metropolis algorithm [18], an arbitrary number N_rep of N-particle phase space configurations can be generated. These configurations are obtained by iterated random displacements of the particles in phase space,
x^{(r)}_k → x^{(r+1)}_k = x^{(r)}_k + δx^{(r)}_k , \qquad p^{(r)}_k → p^{(r+1)}_k = p^{(r)}_k + δp^{(r)}_k .
In each iteration, each displacement (δx (r) k , δp (r) k ) in phase space will cause a change in total energy of the system
∆E = E (r+1) − E (r) .
According to the standard Metropolis algorithm, if ∆E < 0, the new configuration is energetically more favorable than the old one and will be accepted. If, on the other hand, ∆E is positive, the new configuration will be accepted with a probability exp(-∆E/T). This allows for a statistical increase of the energy of the system, driven by the "temperature" T. After a sufficient number of iterations the system will enter a stationary state, where further iteration will account for a thermal motion of the sample around the ground state. All configurations can then be identified as representations of the thermalized state. Now, the ensemble average of any thermodynamical variable O can be approximated by the sum over those representations
\langle O \rangle = \frac{1}{N_{rep}} \sum_{k=1}^{N_{rep}} O\big(x^{(k)}_i , p^{(k)}_i\big) , \qquad i = 1 \ldots N .
This enables us to calculate the energy density ε = \frac{1}{V}\langle H \rangle and - by using the virial theorem - the pressure of the interacting quark gas
P = \frac{1}{3V} \Big\langle \sum_i \vec{p}_i \cdot \vec{v}_i + \sum_i \vec{r}_i \cdot \vec{∇}_i V \Big\rangle .
Here v_i = p_i/E_i is the velocity of particle i. In addition to the description of the quark phase we have to cope with the hadronic sector. The produced hadrons are evaporated into the void. The hadron pressure and temperature are assumed to be equal to those of the quark phase.
We will now assume a system of N quarks in a finite sphere of volume V with all color charges adding up to zero. This system is thermalized at a temperature T according to the previously discussed Metropolis method. A spherical system with a radius of 4 fm contains about 400 quarks and antiquarks at a temperature of 150 MeV. If during the equilibration process quarks form clusters that fulfill the above hadronization criterion, they are converted to color neutral hadrons which no longer interact via color forces.
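The following self-contained Python sketch illustrates the procedure described above under strong simplifying assumptions (fixed particle number, purely linear potential, randomly chosen but symmetric color factors, no hadronization step, illustrative parameter values; all names are ours). It thermalizes a quark configuration with the Metropolis algorithm and estimates the energy density and the pressure; for the latter it uses the standard Clausius virial with forces F_i = -∇_i V.

```python
# Minimal Metropolis sketch for the interacting quark gas (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
KAPPA, T, R, N = 1.0, 0.15, 4.0, 50        # GeV/fm, GeV, fm, number of quarks
MASS = np.full(N, 0.01)                    # light current quark masses (GeV)
VOL = 4.0 / 3.0 * np.pi * R**3
C = rng.choice([-1.0, 0.5, 1.0], size=(N, N))
C = (C + C.T) / 2.0                        # symmetric stand-in for the C_ij
np.fill_diagonal(C, 0.0)

def pair_distances(x):
    d = x[:, None, :] - x[None, :, :]
    return d, np.linalg.norm(d, axis=-1)

def energy(x, p):
    kin = np.sum(np.sqrt(np.sum(p**2, axis=1) + MASS**2))
    _, r = pair_distances(x)
    return kin + 0.5 * np.sum(C * KAPPA * r)

def forces(x):
    d, r = pair_distances(x)
    np.fill_diagonal(r, 1.0)               # avoid 0/0 on the diagonal
    return -np.sum((C * KAPPA / r)[:, :, None] * d, axis=1)

x = rng.uniform(-R / 2, R / 2, (N, 3))
p = rng.normal(0.0, 0.3, (N, 3))
E, eps_samples, p_samples = energy(x, p), [], []
for step in range(40000):
    i = rng.integers(N)
    xn, pn = x.copy(), p.copy()
    xn[i] += rng.normal(0.0, 0.1, 3)
    pn[i] += rng.normal(0.0, 0.1, 3)
    En = energy(xn, pn)
    if En < E or rng.random() < np.exp(-(En - E) / T):   # Metropolis step
        x, p, E = xn, pn, En
    if step > 20000 and step % 200 == 0:                 # thermalized samples
        v = p / np.sqrt(np.sum(p**2, axis=1) + MASS**2)[:, None]
        eps_samples.append(E / VOL)
        p_samples.append((np.sum(p * v) + np.sum(x * forces(x))) / (3.0 * VOL))

print("energy density ~", np.mean(eps_samples), "GeV/fm^3")
print("pressure       ~", np.mean(p_samples), "GeV/fm^3")
```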
Mixed phase and the equation of state
The first important observable is the number of quarks which are hadronized from a given thermodynamic sample. For high temperatures this quantity should converge to zero, while at T → 0 all quarks should be hadronized due to confinement. The fraction ξ = N h /(N h + N q ) of hadrons compared to the total particle number in the system therefore should be 1 in this limit. Figure 2 depicts this hadron fraction ξ as a function of the temperature (µ = 0) measured in units of the "critical temperature" T C , where the latter is defined as the temperature of the steepest descent for each set of parameters (κ, F cut ). A rapid fall-off within 0.2 T C from a hadron to a quark dominated phase can be observed indicating the existence of a mixed phase during the transition. In case of a true first order phase transition in an infinite volume a sharp discontinuity of this quantity would be expected at T C [19].
A similar continuous transition can be observed in the temperature dependence of the energy density and pressure of the quark phase as plotted in figure 3. Here, the energy density ε and the pressure p are divided by T^4 and are given for various values of κ and F_cut as discussed above. The pressure has been multiplied by a factor of 3. Lattice calculations reveal a similar transition, slightly smoothed for energy density and pressure [20]. However, our microscopic finite size simulation exhibits an even broader crossover. It is worth noting that the absolute values of lattice calculations for very high temperatures may not be compared to our results as we neglect the contributions of hard gluons.
A functional form of the thermodynamic quantities similar to the one found here has been parameterized [21,22] in order to model the assumed smooth crossover transition and to study the physical consequences. In accordance with those investigations our microscopic model also reveals a minimum of the equation of state in the phase transition region (see figure 4). However, compared to the case of infinite matter this dip is less pronounced.
The plots in fig. 2 and 3 both reveal a perfect scaling behavior for F_cut ≲ 0.01. This imposes a natural range for the seemingly completely arbitrary parameter F_cut, which could a priori not be connected to any physical observable.
Despite the conformity in shape, the absolute scale of T_C is strongly affected by the particular choice of these parameters. A reduction of F_cut to zero will ultimately shift T_C → 0, since hadronization is completely suppressed in this limit. On the other hand, an increase of the string tension κ gives rise to an increasing critical temperature T_C, revealing a scaling dependence of the form T_C ∼ √(F_cut · κ). This can be directly understood from the hadronization mechanism: If any colorless quark cluster (qq̄, qqq or, in principle, any multiquark state) separates from the remaining quarks, we shall always obtain a finite remaining color field between the two quark samples whose strength κ is reduced (screened) compared to the vacuum value. The quark clusters are now assumed to separate sufficiently slowly so that the mediating color field lines can be considered to confine to an equilibrated flux tube which approximately fulfills the presumptions of a cylindrical MIT bag. In the bag model the field strength κ is connected to the bag constant B according to κ ∼ √B [13]. On the other hand, the bag pressure for an ideal quark-gluon gas rises as B ∼ T_C^4; therefore one immediately obtains T_C ∼ √κ. In our model, the separating cluster is declared a "hadron" if the remaining force drops below the cut-off F_cut · κ. Applying the flux tube picture as derived from the MIT bag model then yields the previously found dependency for the critical temperature, T_C ∼ √(F_cut · κ). While the lattice results predict a critical temperature of T_C ≈ 150 MeV [20], the natural choices κ = κ_0 ≈ 1 GeV/fm and F_cut = 0.01 give a much lower value T_C ≈ 90 MeV. As the value for F_cut is at the upper bound of the scaling domain, the critical temperature may only be enhanced by increasing the field strength κ > κ_0. The impact of a variation of κ on the thermodynamical properties is visualized in figure 5. In this plot, we further extend the investigations to finite µ and calculate the phase diagram for various κ. First attempts to apply lattice QCD to finite densities [23] seem to support our findings. It is obvious that for high κ the curvature of the lines appears smaller as compared to the curves usually extracted from the MIT bag model. To approach the lattice results of T_C ≈ 140 MeV for µ → 0, values κ ≳ 2κ_0 are required. For T → 0 the chemical potential then is about 350 MeV so that normal nuclear matter (µ ≈ 300 MeV) is safely within the hadronic region. This is in perfect agreement with the above discussion and reflects the finite density character of the quark phase.
The high value found for κ should not come as a surprise: As discussed before, because of in-medium effects an increased string constant (compared to the free value) should be expected.
However, this high value of κ contradicts the two-particle limit (one quark and one antiquark). The model then turns into the common string model which determines the color field strength to be κ = κ_0. In principle, this suggests the introduction of a density and temperature dependence of the string constant. A more qualitative view of the properties of the interacting quark gas will be provided below. However, it should be noted that the hadronization of the QGP is mainly based on quark rearrangements within the blob compared to string fragmentation on the surface of the plasma. This also supports the assumption of particle number conservation in the hadronization process. Hence, we fix the string constant to a medium value of κ = 2κ_0.
The hadronization of the thermalized quark system yields hadron ratios which can be compared at mid-rapidity to those measured in CERN-SPS experiments (see compilation in [24]). Fig. 6 shows the comparison to the outcome of a S+Au collision, assuming a thermal fireball as hadron source. We find a very good agreement in all MM, MB and BB ratios, while the antibaryons seem to be clearly under-predicted. Particle ratios, however, proved not to be a very sensitive observable to test the quality of theoretical models. Fits of a pure hadron gas [24] proved to describe the data with a precision comparable to other thermal or hydrodynamical approaches including a QGP phase transition, or to several microscopic simulations such as UrQMD [25]. However, the analysis of event-by-event fluctuations [26] and of the dynamical properties of the system may yield new insight.
Dissociation of a quark blob
All results from the last paragraph presume complete equilibration of a finite canonical ensemble which is defined by all possible microscopic representations at a time. One particular representation will always contain fluctuations which may cause the properties of the single representation to deviate strongly from the collective behavior [27]. This effect is emphasized in figure 7, where the average radial force ⟨F_rad(r)⟩ = ⟨F_i · r̂_i⟩ acting on a quark at a distance r from the origin of a spherical thermalized quark blob (R = 4 fm, T = 200 MeV, µ = 100 MeV) is plotted. In the ensemble average the quarks in the center of the quark matter do not feel any net interaction: color is screened. A net interaction within approximately 1 fm from the surface traps the color charges confined within the blob. Moreover, this result is independent of the particular shape of the interaction potential as long as it fulfills the symmetry requirements concerning the color charges as given in table 1. Then, within the center of the quark phase all contributions from the potential cancel exactly to zero if the spatial distribution is sufficiently homogeneous. However, this statement holds only for a large number of quark samples. In one single microscopic representation one can find large fluctuations of the net color force on each quark. This is pointed out in figure 8, where the distribution function of the radial color forces F_i · r̂_i acting on quarks in the center of the blob is plotted. We obtain an almost perfect Gaussian distribution with a standard deviation σ = 0.5κ, indicating huge fluctuations. Therefore strong inhomogeneities in one single event are to be expected. Hence, the microscopic system will not behave like an ideal quark gas. Due to the color interactions we do not expect that the quark system will expand hydrodynamically, smoothly reducing temperature and density. Instead, these results indicate that during the expansion the quark phase will rupture, and hadrons will condense both from its surface as well as from its interior.
Conclusion
We have presented a microscopic description of the deconfinement phase transition by means of a semiclassical interacting quark gas supplemented with a dynamical hadronization criterion. The color interaction potential has been motivated from phenomenological QCD in the abelian approximation. A smooth crossover was found, comparable to recent lattice results. The phase diagram for finite µ has been calculated. Particle ratios have been compared to experimental results yielding a reasonable agreement. An event-by-event analysis revealed strong fluctuations which initially drive the dissociation process.
Figure 1. Hadronization of white quark clusters.

Figure 2. Hadron fraction as a function of temperature in a finite quark blob for various sets of parameters κ and F_cut. The temperature is measured in units of T_C, which is defined for each set of parameters as the temperature of steepest descent with T. The distributions show perfect scaling behavior for F_cut < 0.01 (black symbols).

Figure 3. Energy density and pressure of the quark phase as a function of temperature for various sets of parameters κ and F_cut.

Figure 4. Equation of state for κ = κ_0 (dashed line) and κ = 2κ_0 (solid line). A softening of the EOS around ε = 1 GeV/fm^3 is revealed.

Figure 5. Phase diagram in the T-µ plane for κ/κ_0 = 1, 2. The lines are fits to the calculation.

Figure 6. Final state hadron ratios from thermal qMD calculations (open circles) compared to S+Au data at 200 AGeV (full circles, taken from [24]).

Figure 7. Averaged radial force |F_r| acting on a quark at a distance r within a quark blob of radius R = 4 fm. In the center the quarks are approximately free. Near the surface they are strongly pulled back into the sphere.

Figure 8. Fluctuations of the net radial forces F_rad = F · r̂ acting on a central quark (r < 1 fm).

Table 1. Color matrix elements of the 36 different elementary color combinations of the quarks. The matrix elements can be obtained from the scalar products of the corresponding weight vectors.
Acknowledgments

This work is supported by GSI, BMBF, DFG, Graduiertenkolleg Theoretische und Experimentelle Schwerionenphysik, and the Josef Buchmann Foundation.
[1] H. Stöcker and W. Greiner, Phys. Rep. 137, 277 (1986).
[2] L. McLerran, Rev. Mod. Phys. 58, 1021 (1986).
[3] K. Geiger and B. Müller, Nucl. Phys. B 369, 600 (1992).
[4] S. A. Bass, M. Hofmann, M. Bleicher, L. Bravina, E. Zabrodin, H. Stöcker, and W. Greiner, e-print nucl-th/9902055 (1999).
[5] P. Rehberg, L. Bot, and J. Aichelin, e-print hep-ph/9809565 (1998).
[6] C. T. Traxler, U. Mosel, and T. S. Biro, Phys. Rev. C 59, 1620-1636 (1999).
[7] N. Isgur and J. Paton, Phys. Rev. D 31, 2910 (1985).
[8] E. Eichten, K. Gottfried, T. Kinoshita, J. Kogut, K. D. Lane, and T.-M. Yan, Phys. Rev. Lett. 34, 369-372 (1975).
[9] K. D. Born, E. Laermann, R. Sommer, T. F. Walsh, and P. M. Zerwas, Phys. Lett. B 329, 325-331 (1994).
[10] B. Andersson, G. Gustafson, G. Ingelman, and T. Sjöstrand, Phys. Rep. 97, 31 (1983).
[11] T. Regge, Nuovo Cim. 14, 951 (1959).
[12] V. N. Gribov, Sov. Phys. JETP 26, 414-422 (1968).
[13] K. Sailer, T. Schönfeld, Z. Schram, A. Schäfer, and W. Greiner, J. Phys. G 17, 1005-1057 (1991).
[14] T. S. Biro, H. B. Nielsen, and J. Knoll, Nucl. Phys. B 245, 449 (1984).
[15] M. Faber, J. Greensite, and S. Olejnik, Phys. Rev. D 57, 2603-2609 (1998).
[16] A. Bonasera, e-print nucl-th/9905025 (1999).
[17] M. Crawford and D. N. Schramm, Nature 298, 538-540 (1982).
[18] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys. 21, 1087-1092 (1953).
[19] C. Spieles, H. Stöcker, and C. Greiner, Eur. Phys. J. C 2, 351 (1998).
[20] F. Karsch, Nucl. Phys. A 590, 367 (1995).
[21] D. H. Rischke and M. Gyulassy, Nucl. Phys. A 597, 701-726 (1996).
[22] M. Asakawa and T. Hatsuda, Phys. Rev. D 55, 4488-4491 (1997).
[23] J. Engels, O. Kaczmarek, F. Karsch, and E. Laermann, e-print hep-lat/9903030 (1999).
[24] P. Braun-Munzinger, J. Stachel, J. P. Wessels, and N. Xu, Phys. Lett. B 365, 1 (1996).
[25] S. A. Bass, M. Belkacem, M. Brandstetter, M. Bleicher, L. Gerland, J. Konopka, L. Neise, C. Spieles, S. Soff, H. Weber, H. Stöcker, and W. Greiner, Phys. Rev. Lett. 81, 4092 (1998).
[26] M. Bleicher, M. Belkacem, C. Ernst, H. Weber, L. Gerland, C. Spieles, S. A. Bass, H. Stöcker, and W. Greiner, Phys. Lett. B 435, 9 (1998).
[27] M. Bleicher, L. Gerland, S. Bass, M. Brandstetter, C. Ernst, S. Soff, H. Weber, H. Stöcker, and W. Greiner, Nucl. Phys. A 638, 391-394 (1998).
| []
|
[
"Generalized Toric Polygons, T-branes, and 5d SCFTs",
"Generalized Toric Polygons, T-branes, and 5d SCFTs"
]
| [
"Antoine Bourget \nInstitut de physique théorique\nUniversité Paris-Saclay\nCNRS\nCEA\n91191Gif-sur-YvetteFrance\n\nLaboratoire de Physique de l'École Normale Supérieure\nPSL University\n24 rue Lhomond75005ParisFrance\n",
"Andrés Collinucci \nService de Physique Théorique et Mathématique\nUniversité Libre de Bruxelles and International Solvay Institutes\nCampus Plaine C.P. 231B-1050BruxellesBelgium\n",
"Sakura Schäfer-Nameki \nMathematical Institute\nUniversity of Oxford\nWoodstock RoadOX2 6GGOxfordUnited Kingdom\n"
]
| [
"Institut de physique théorique\nUniversité Paris-Saclay\nCNRS\nCEA\n91191Gif-sur-YvetteFrance",
"Laboratoire de Physique de l'École Normale Supérieure\nPSL University\n24 rue Lhomond75005ParisFrance",
"Service de Physique Théorique et Mathématique\nUniversité Libre de Bruxelles and International Solvay Institutes\nCampus Plaine C.P. 231B-1050BruxellesBelgium",
"Mathematical Institute\nUniversity of Oxford\nWoodstock RoadOX2 6GGOxfordUnited Kingdom"
]
| []
| 5d Superconformal Field Theories (SCFTs) are intrinsically strongly-coupled UV fixed points, whose realization hinges on string theoretic methods: they can be constructed by compactifying M-theory on local Calabi-Yau threefold singularities or alternatively from the world-volume of 5-brane-webs in type IIB string theory. There is a correspondence between 5-brane-webs and toric Calabi-Yau threefolds, however this breaks down when multiple 5-branes are allowed to end on a single 7-brane. In this paper, we extend this connection and provide a geometric realization of brane configurations including 7-branes. A web with 7-branes defines a so-called generalized toric polygon (GTP), which corresponds to combinatorial data that is obtained by removing vertices along external edges of a toric polygon. We identify the geometries associated to GTPs as non-toric deformations of toric Calabi-Yau threefolds and provide a precise, algebraic description of the geometry, when 7-branes are introduced along a single edge. The key ingredients in our analysis are T-branes in a type IIA frame, which includes D6-branes. We show that performing Hanany-Witten moves for the 7-branes on the type IIB side corresponds to switching on semisimple vacuum expectation values on the worldvolume of D6-branes, which in turn uplifts to complex structure deformations of the Calabi-Yau geometries. We test the proposal by computing the crepant resolutions of the deformed geometries, thereby checking consistency with the expected properties of the SCFTs. | null | [
"https://export.arxiv.org/pdf/2301.05239v1.pdf"
]
| 255,825,596 | 2301.05239 | 661b7d11bc20c752dbbe9f8b57a5bee1fec5f972 |
Generalized Toric Polygons, T-branes, and 5d SCFTs
12 Jan 2023
Antoine Bourget
Institut de physique théorique
Université Paris-Saclay
CNRS
CEA
91191Gif-sur-YvetteFrance
Laboratoire de Physique de l'École Normale Supérieure
PSL University
24 rue Lhomond75005ParisFrance
Andrés Collinucci
Service de Physique Théorique et Mathématique
Université Libre de Bruxelles and International Solvay Institutes
Campus Plaine C.P. 231B-1050BruxellesBelgium
Sakura Schäfer-Nameki
Mathematical Institute
University of Oxford
Woodstock RoadOX2 6GGOxfordUnited Kingdom
Generalized Toric Polygons, T-branes, and 5d SCFTs
12 Jan 2023
5d Superconformal Field Theories (SCFTs) are intrinsically strongly-coupled UV fixed points, whose realization hinges on string theoretic methods: they can be constructed by compactifying M-theory on local Calabi-Yau threefold singularities or alternatively from the world-volume of 5-brane-webs in type IIB string theory. There is a correspondence between 5-brane-webs and toric Calabi-Yau threefolds, however this breaks down when multiple 5-branes are allowed to end on a single 7-brane. In this paper, we extend this connection and provide a geometric realization of brane configurations including 7-branes. A web with 7-branes defines a so-called generalized toric polygon (GTP), which corresponds to combinatorial data that is obtained by removing vertices along external edges of a toric polygon. We identify the geometries associated to GTPs as non-toric deformations of toric Calabi-Yau threefolds and provide a precise, algebraic description of the geometry, when 7-branes are introduced along a single edge. The key ingredients in our analysis are T-branes in a type IIA frame, which includes D6-branes. We show that performing Hanany-Witten moves for the 7-branes on the type IIB side corresponds to switching on semisimple vacuum expectation values on the worldvolume of D6-branes, which in turn uplifts to complex structure deformations of the Calabi-Yau geometries. We test the proposal by computing the crepant resolutions of the deformed geometries, thereby checking consistency with the expected properties of the SCFTs.
Contents
Introduction and Summary
The existence and characterization of interacting superconformal field theories in five spacetime dimensions (5d SCFTs) is a remarkable prediction of string theory [1]. Two approaches have emerged that allow the construction of 5d SCFTs within the framework of string theory: the low energy limit of M-theory on R^{1,4} times a local Calabi-Yau threefold [2], and the world-volume of a brane-web in type IIB string theory [3][4][5][6][7][8][9][10][11][12]. The properties of the theory are encoded in the geometry of the threefold in the first case, and in the charges of the external (p, q)-5-branes in the second case. A middle ground is the type IIA realization, which involves both geometry and branes [13]. When the CY is toric, there is a precise dictionary between the brane-web and M-theory realization: the charges of the external (p, q)-5-branes can be encoded into an integral polygon, which in turn can be seen as the intersection of a toric three-dimensional fan in R^3_{x,y,z} with the plane {z = 1}. The toric threefold X constructed from this fan is such that the 5d SCFT obtained from M-theory on X coincides with that on the world-volume of the brane-web [14]. An example is shown in figure 1.

Figure 1. Example of a toric polygon (left) and the dual brane-web (right), in which lines denote 5-branes and circles denote 7-branes. The 7-branes on which the stacks of 5-branes end are spaced to emphasize the boundary conditions; here exactly one 5-brane ends on each 7-brane. This geometry encodes a 5d SCFT of rank 3.
A systematic geometric exploration and classification of 5d SCFTs was started in . These studies reveal many detailed properties of the 5d SCFTs, such as their UV enhanced flavor symmetry, their Coulomb branch (modeled in terms of the crepant resolutions of the Calabi-Yau singularities), but also refined information such as their generalized symmetries [39][40][41][42][43][44]. What remains somewhat obscure in this framework is the derivation of the full quantum corrected Higgs branch -though some progress in the context of isolated hypersurface Calabi-Yau singularities can be made [32-34, 45, 46].
Not surprisingly, these explorations reveal that only a small class of 5d SCFTs have a realization in terms of toric Calabi-Yau threefolds. If such a toric realization exists, then the geometry of the moduli space of supersymmetric vacua, in particular the Higgs branch (but also Coulomb branch and mixed branches) can be computed exactly -irrespective of whether the singularity is isolated or not. The key tool is the connection between the toric geometries and brane-webs, where in the latter these moduli space questions have been determined in [47][48][49][50][51][52][53]. In this paper, we report progress in generalizing these methods to a larger class of 5d SCFTs, which have not necessarily a toric description.
In [16], a generalization of toric polygons was introduced: a toric Calabi-Yau threefold can be described in terms of a convex polygon in a square integral lattice embedded into a 2-plane. The polygon associated to a toric geometry has the property that all lattice points along the edges are part of the toric data (corresponding to vertices). We will refer to these as black dots. Generalizing this, [16] proposed to also allow some vertices along the edges of the polygon to be unoccupied, which we will refer to as white dots. In the dual description, allowing such white dots corresponds in the web to several 5-branes that end on the same 7-brane. For a stack of n 5-branes, the boundary condition is encoded in an integer partition λ of n. Figure 2 gives an example of a configuration that translates to the [3, 2, 2, 1] partition of 8. This combinatorial data will be referred to as Generalized Toric Polygons (GTP), and generalizes the standard toric description. In this framework, the standard toric case considered in the previous paragraph corresponds to a boundary condition where for each charge (p, q), there is an equal number of (p, q)-5-branes and (p, q)-7-branes, with one 5-brane ending on one 7-brane, i.e. the partition is λ = [1^n]. This generalization and its implications for characterizing the moduli space of supersymmetric vacua using magnetic quivers were explored in great detail in [32,45,[53][54][55][56][57][58][59][60][61][62][63]. Note that O5 orientifold planes can also be included in the webs [8,[64][65][66], but we do not consider this possibility here.

Figure 2. Correspondence between white dots on GTPs (left) and boundary conditions of (p, q) 5-branes on (p, q) 7-branes (middle). Here we have (p, q) = (1, 0) and the partition λ = [3, 2, 2, 1] of n = 8. When we draw brane-webs we usually ignore the detached 7-branes and separate the 7-branes on a stack of 5-branes to show the boundary conditions (right).
Another way of interpreting GTPs is as non-convex would-be toric polygons. These make sense when certain parameters, which map out the extended Coulomb branch (i.e. gauge couplings and masses of hypers), are turned on, but the non-convexity prevents one from considering the SCFT limit (i.e. from passing to the origin of the extended Coulomb branch). From the dual brane-web point of view, this is resolved using a combination of Hanany-Witten moves and 7-brane monodromies, and this plays a prominent role in the brane-web manipulations of [6,8,9,11,64,[67][68][69][70][71]. Importantly, not all non-convex polygons can be transformed into GTPs in this way, and it is in general a hard question to decide whether a given polygon can be transformed in this way or not. For this reason, it is simpler to take the GTPs as our starting point.
GTPs can be thought of as generalizations of toric geometries. However, unlike the precise dictionary between the combinatorial data of a toric polygon and the algebraic geometry of the corresponding Calabi-Yau, no such dictionary exists thus far for GTPs. The main purpose of this paper is to develop initial steps in order to close this gap. See figure 3. In particular, in the following, we will determine the algebraic geometric description of GTPs, which have white dots along a single edge.
Summary.
Figure 3. Summary of the main question addressed in this paper. On the left hand side, one starts from a convex polygon with integral vertices. It defines a 5d SCFT in two distinct ways: from M-theory on the associated toric CY, and from the dual brane-web. If on the contrary, as shown on the right hand side, the polygon is not convex, i.e. is a generalized toric polygon (GTP), the toric description is lost. However there still exists a dual brane-web, which has non-trivial boundary conditions on 7-branes, and thus a 5d SCFT. The central goal of this paper is to develop a map (the dashed line in the diagram) from GTPs to (non-toric) geometry.

We now give a schematic summary of the main ideas involved in our proposal. Consider a length n edge of a toric polygon, which we can assume to have vertical orientation. In the brane-web, this corresponds to n parallel semi-infinite D5-branes. In the associated toric threefold, there is an asymptotic region that approaches C^2/Z_n × C. Indeed after a transverse T-duality, the D5-branes become D6-branes, which uplift to n-centered Taub-NUT spaces. At strong string coupling g_s → ∞, this becomes C^2/Z_n. Denoting two longitudinal directions as the w-complex plane, we arrive at a local C^2/Z_n × C patch. M-theory on this geometry gives us N = 1 7d SYM with SU(n) gauge group, and we can represent it as the singular hypersurface in C^4 given by
uv = z n (1.1)
with the w-coordinate tagging along. The three adjoint scalars φ i=1,2,3 on the worldvolume of the D6-branes can be grouped into a complex scalar Φ = φ 1 +iφ 2 , and the remaining real one ϕ = φ 3 . In the M-theory uplift, Φ encodes algebraic deformations to the hypersurface, and ϕ encodes Kähler volumes of resolutions. This grouping is of course arbitrary, and correlates with the arbitrariness of choosing a complex structure on the noncompact K3 in M-theory.
Having seen this, we can recast the hypersurface as a spectral equation for the complexified adjoint Higgs field uv = det(1 n z − Φ(w)) .
(1.2)
Switching on constant vevs along the Cartan subalgebra of su(n) will deform the equation and unfold the singularity into a deformed K3 times the w-plane. However, switching on w-dependent vevs will turn this into a bona fide noncompact CY threefold. The geometry will be more or less desingularized, depending on the Casimir invariants of Φ that are switched on. The claim of this paper is that white dots correspond to nilpotent elements in Φ. Note that switching on a nilpotent Φ means that the spectral equation remains unchanged. In other words, the D6-branes do not actually move, and the uplifted geometry underlying the M-theory construction remains undeformed. This phenomenon is known as a T-brane [72][73][74]. It is a non-Abelian bound state of branes, whereby the worldvolume gauge group is (partially) Higgsed, but the geometry of the branes is unaltered. However, the physics is of course impacted by this T-brane, as we shall see momentarily.
Φ = 0 1 0 0 . . . . (1.3)
This binds the first two D6-branes, and partially Higgses
su(n) → s (u(1) ⊕ u(n − 2)) . (1.4)
More generally, we said above that a distribution of white dots on the edge is encoded in a partition λ of n. Our claim is that each such partition λ of n translates into a vev for Φ along an element in the nilpotent orbit O λ of su(n) that is uniquely characterized by λ.
The unbroken 7d gauge group on the D6-branes, which will correspond to a subgroup of the total 5d flavor group, is then broken to the commutant of this nilpotent element. So far, our discussion parallels the picture developed several years ago in [75,76], in the 6d SCFT context, whereby geometric data (about elliptic fibrations) was supplemented by nilpotent orbits, which would partially Higgs an original theory and trigger various RG flows. At this point, the reader might object that simply claiming that a white dot translates to a nilpotent vev is not very interesting or verifiable, since that data will be invisible to the geometry. While this is true, from the 5-brane-web perspective we know that a white dot opens up the possibility to perform Hanany-Witten type transitions that were not possible in the presence of black dots only. For instance, the following GTPs are related by such a transition where one of the three leftmost 7-branes is moved to the right:
↔ (1.5)
Such HW type transitions provide non-trivial tests of our proposal: HW moves correspond to changing the positions of branes, which in turn will impact the dual geometry. We identify the subset of complex structure deformations that are associated to these nilpotent vevs of Φ. They are realized in terms of vevs of Φ along a slice transverse to the nilpotent vev (inside the full Lie algebra, not the nilpotent cone), known as the Slodowy slice. By switching on such a vev, a subset of possible Casimir invariants will become non-zero, leading to a deformation of the geometry, which is given by the spectral equation of the the Higgs field. For instance, in the simple case of su(2), we take the initial nilpotent vev along the minimal orbit
Φ_0 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} .   (1.6)
The M-theory uplifted geometry corresponds to C 2 /Z 2 . The Slodowy slice is given by matrices of the form
Φ = \begin{pmatrix} 0 & 1 \\ a & 0 \end{pmatrix} , \qquad \text{with } a ∈ \mathbb{C} .   (1.7)
The characteristic polynomial of this Higgs field is now non-trivial, and the M-theory geometry deforms as follows
uv = z 2 −→ uv = z 2 + a . (1.8)
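This deformation can be verified symbolically. The short sympy sketch below (illustrative, with our own variable names) computes the spectral equation uv = det(z·1 - Φ) for the purely nilpotent Higgs field and for a generic point on its Slodowy slice, reproducing (1.8) up to the sign convention chosen for a.

```python
# Spectral equation of the su(2) Higgs field: nilpotent vs. Slodowy slice.
import sympy as sp

z, a = sp.symbols('z a')

Phi_nilpotent = sp.Matrix([[0, 1], [0, 0]])
Phi_slice     = sp.Matrix([[0, 1], [a, 0]])

for name, Phi in [("nilpotent", Phi_nilpotent), ("Slodowy slice", Phi_slice)]:
    spectral = sp.expand((z * sp.eye(2) - Phi).det())
    print(f"{name:14s}: uv = {spectral}")
# nilpotent     : uv = z**2
# Slodowy slice : uv = z**2 - a   (eq. (1.8) up to redefining a -> -a)
```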
The present paper elucidates this for all GTPs which admit a type IIA description, i.e. whose white dots lie along a single edge of the GTP. Dually, all 7-branes are parallel, i.e. mutually local. The toy-model where Slodowy slices appear is generalized in the following way. Higgs branches are symplectic singularities [77], to which one can associate a Hasse diagram of symplectic leaves [78][79][80]. In terms of this diagram, the T-brane data select a new, lowest leaf of the foliation, and the transverse slice to that leaf is the total space of a fibration over the complex structure moduli space of the deformed Calabi-Yau threefold. See for instance figure 5, where the deformations of the T_4 5d SCFT, realized on the threefold W_1 W_2 W_3 = Z^4, are displayed, along with the effect on the Higgs branches.
In future work we will aim to generalize this to arbitrary GTPs, with mutually nonlocal 7-branes. We conjecture that the above picture of transverse slices in the Higgs branch extends to this situation. Eventually we hope to develop a succinct description of the algebraic geometry of GTPs, as they exist for toric polygons: A precise map between the combinatorial data and the basic algebraic geometry, such as the set of divisors, curves, intersection numbers.
Plan. In the rest of the paper, we spell out the details of the construction. An essential tool is the T-brane, which is reviewed in section 2. The bulk of the construction is then carried out explicitly in a representative example - that of the T_n SCFTs - in section 3, before generalizing to any GTP with white dots on a single edge in section 4. As a first check, we reproduce there the transition (1.5). Finally, in section 5 we provide consistency checks, by computing the resolutions of the deformed threefold geometries. This shows agreement of the UV flavor symmetry of the SCFT with the one expected from the brane-web (and resulting Higgs branch).

Figure 5. Three GTPs are shown on the first line, and below the algebraic equations characterizing the associated Calabi-Yau threefold geometry: W_1 W_2 W_3 = Z^4, W_1 W_2 W_3 = Z^2 (Z^2 + αW_1), and W_1 W_2 W_3 = (Z^2 + αW_1)(Z^2 + βW_1). The model on the left is a toric threefold. The other two, non-toric GTPs, are characterized in terms of deformations. Each of these geometries defines a 5d SCFT. The Hasse diagrams of symplectic singularities for the Higgs branch of these 5d SCFTs are shown below. The vertices represent symplectic leaves. For transverse slices we use a standard notation where the closure of the minimal nilpotent orbit of a simple Lie algebra is denoted using the lowercase form of the name of the algebra, e.g. e_7 for algebra E_7. In red are drawn the effects of the deformations.
T-branes and Kraft-Procesi Transitions
T-brane Basics
Consider n parallel D7-branes in type IIB string theory on flat space. The transverse space is the complex plane with coordinate z, which has coordinate ring R = C[z]. We call z_1, . . . , z_n the positions of the n branes. The stack of branes can be described as a D9/anti-D9-brane tachyon condensate, which is defined mathematically as the cokernel of the tachyon map

T : R^{\oplus n} → R^{\oplus n} ,   (2.1)
where T = Diag(z - z_1, . . . , z - z_n). This means that the D7-branes correspond to the sheaf S in the short exact sequence
0 → R^{\oplus n} \xrightarrow{\;T\;} R^{\oplus n} → S → 0 .   (2.2)
The matter on the system of D7-branes is described by fluctuations of the tachyon, δT, which are defined up to linearized gauge transformations. (We will use the formulation of branes modulo tachyon condensation in terms of the derived category of coherent sheaves throughout this paper; some introduction to this topic can be found in [81][82][83].) This corresponds to a self-Ext^1 computation for the complex (2.1), i.e. morphisms between the complex and the shifted version of that same complex,
\begin{array}{ccc}
R^{\oplus n} & \xrightarrow{\;T\;} & R^{\oplus n} \\
\downarrow{\scriptstyle α_L} \;\; \searrow{\scriptstyle δT} & & \downarrow{\scriptstyle α_R} \\
R^{\oplus n} & \xrightarrow{\;T\;} & R^{\oplus n}
\end{array}   (2.3)

up to homotopies,

δT ∼ δT - T \cdot α_L + α_R \cdot T .   (2.4)
Concretely, this means δT is valued in the quotient ring of n × n matrices Mat n (R) modulo the two matrix ideals in R generated by left and right multiplication by T ,
δT ∈ Mat n (R)/(T ·, ·T ) . (2.5)
Note also that the tachyon map T can be expressed as a matrix given a choice of basis for the D9 gauge bundle and the anti-D9 gauge bundle. These choices are independent, which means algebraically that only the equivalence class of T under the equivalence relation

T ∼ G_L \cdot T \cdot G_R^{-1} , \qquad G_L , G_R ∈ GL(n, \mathbb{C}[z])

matters. A canonical representative of this class is given by the Smith Normal Form (SNF), a diagonal matrix diag(p_1, . . . , p_r), where the p_i are monic polynomials in z such that p_1 | p_2 | . . . | p_r.
Example.
To illustrate the discussion of the previous paragraph, consider the case n = 2. The tachyon matrix is
T = \begin{pmatrix} z - z_1 & 0 \\ 0 & z - z_2 \end{pmatrix}   (2.8)
and the fluctuations belong to
δT ∈ \begin{pmatrix} R/(z-z_1) & R/(z-z_1, z-z_2) \\ R/(z-z_1, z-z_2) & R/(z-z_2) \end{pmatrix} =
\begin{cases}
\begin{pmatrix} \mathbb{C} & 0 \\ 0 & \mathbb{C} \end{pmatrix} & z_1 ≠ z_2 \\[4pt]
\begin{pmatrix} \mathbb{C} & \mathbb{C} \\ \mathbb{C} & \mathbb{C} \end{pmatrix} & z_1 = z_2
\end{cases}   (2.9)
For any value of z 1 , z 2 there is U(1) adjoint matter on each D7-brane, and for z 1 = z 2 the U(1) 2 gauge symmetry enhances to U(2), and one can have fluctuations in the adjoint of U(2). From the U(1) 2 perspective this is simply bifundamental matter.
Consider the case z 1 = z 2 = 0. We can then activate an off-diagonal term:
T_{[1,1]} = \begin{pmatrix} z & 0 \\ 0 & z \end{pmatrix} \;\longrightarrow\; T_{[2]} = \begin{pmatrix} z & 1 \\ 0 & z \end{pmatrix} .   (2.10)
After this activation, the fluctuations are reduced to
δT_{[2]} ∈ \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \mathbb{C} \;\oplus\; \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \mathbb{C} .   (2.11)
The SNF reveals the same structure in a slightly different guise. Indeed
\mathrm{SNF}\begin{pmatrix} z - z_1 & 0 \\ 0 & z - z_2 \end{pmatrix} =
\begin{cases}
\begin{pmatrix} 1 & 0 \\ 0 & (z - z_1)(z - z_2) \end{pmatrix} & z_1 ≠ z_2 \\[4pt]
\begin{pmatrix} z - z_1 & 0 \\ 0 & z - z_1 \end{pmatrix} & z_1 = z_2
\end{cases}   (2.12)

so the fluctuations are valued in

δT ∈ \begin{cases}
\begin{pmatrix} 0 & 0 \\ 0 & \mathbb{C} \oplus z\,\mathbb{C} \end{pmatrix} & z_1 ≠ z_2 \\[4pt]
\begin{pmatrix} \mathbb{C} & \mathbb{C} \\ \mathbb{C} & \mathbb{C} \end{pmatrix} & z_1 = z_2 .
\end{cases}   (2.13)
Activating the off-diagonal term translates using the SNF into
\mathrm{SNF}\begin{pmatrix} z - z_1 & 1 \\ 0 & z - z_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & (z - z_1)(z - z_2) \end{pmatrix} ,   (2.14)
valid for z_1 ≠ z_2 and z_1 = z_2 alike. In particular, putting both branes at the origin,
\mathrm{SNF}\begin{pmatrix} z & 1 \\ 0 & z \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & z^2 \end{pmatrix} ,   (2.15)
where we see the appearance of an infrared trivial complex
\big( R \xrightarrow{\;1\;} R \big) \;\cong\; 0   (2.16)
and a so-called 'thick brane'

R \xrightarrow{\;z^2\;} R .   (2.17)
Multivariable polynomials. The ring of polynomials in one variable C[z] has the property that every ideal is principal. In particular, it is a Bézout ring, which means by definition that any ideal generated by finitely many generators is principal. The ring C[x_1, . . . , x_n] for n > 1 on the other hand is not a Bézout ring: it has non-principal finitely generated ideals (for example, the ideal generated by x_1 and x_2). It turns out the SNF is best defined in Bézout rings. By [84, Theorem 2.1] an SNF does not exist for matrices with coefficients in C[x_1, . . . , x_n]. Thus, at face value it seems not possible to use the SNF to describe intersecting branes. However, this is not a weakness but a feature, as we now demonstrate. Consider the case of two variables, x and z. Physically, this means we are considering branes that share an R^{1,5} and wrap complex curves in the (x, z)-plane. Consider a stack of n branes at x = 0 and a stack of m branes at z = 0. This is described by the diagonal matrix diag(x, . . . , x, z, . . . , z). We can activate non-diagonal terms, that we call Q and Q̃ as they correspond to strings that yield hypermultiplets at low energy
T = \begin{pmatrix} x\,1_n & Q \\ \tilde{Q} & z\,1_m \end{pmatrix} .   (2.18)
In order to pick a canonical diagonal form for this matrix, we need to make a choice of main variable. Let us pick z. This means we extend the non-Bézout ring C[x, z] to the ring
C(x)[z]
, which is Bézout as it is a polynomial ring in one variable over the field C(x). This is simply telling us that poles in x have to be included. We now describe the SNF over this ring. Assume first that the eigenvalues of Q̃Q are all distinct, call them λ_1, . . . , λ_m. Then the SNF is
\mathrm{diag}\Big(x, \ldots, x, 1, \ldots, 1, \prod_{i=1}^{m} \big(z - \tfrac{λ_i}{x}\big)\Big) .   (2.19)
If some of the eigenvalues coincide, the form of the SNF changes. We can collect this information, which is insensitive to the detailed properties of the eigenvalues, by simply stating that the SNF is
\begin{pmatrix} x\,1_n & 0 \\ 0 & z\,1_m - \tfrac{\tilde{Q} Q}{x} \end{pmatrix} .   (2.20)
Thus, from the point of view of the stack of m branes at z = 0, the presence of the other stack is felt as a pole for the complex adjoint-valued Higgs field living on the brane at z = 0, [85][86][87]. Note that the situation is symmetric and one could have chosen the other stack as the base one. This is exactly the same arbitrariness we made when writing the ring as a Bézout ring.
Kraft-Procesi Transitions
Nilpotent orbits for sl n are in one-to-one correspondence with partitions of n, and are partially ordered by inclusion of their closure [88]. The nilpotent orbit associated to a partition λ of n is denoted O λ . The partial order corresponds to the well-known dominance ordering for partitions, 3 and it can be represented by a Hasse diagram, which indicates the covering relation associated to this partial order. The diagram thus obtained also corresponds to the stratification of the nilpotent cone (the set of all nilpotent matrices) into symplectic leaves [77,78,89]. Elementary degenerations between adjacent nilpotent orbits are called Kraft-Procesi transitions. In the case of sl n nilpotent orbits, these can be either closures of minimal sl m nilpotent orbits or Kleinian singularities C 2 /Z m for m ≤ n. This can be implemented in brane setups [90,91], where nilpotent orbit closures are realized as Higgs or Coulomb branches of 3d N = 4 quiver theories.
Slodowy Slices.
Consider the case where T = z · 1 n + M and M is a nilpotent matrix. Equation (2.4) shows that
δT ∼ δT + (α_R - α_L)\, z + (α_R M - M α_L) , \qquad δT ∈ \mathrm{Mat}_n(R) .   (2.21)

Defining α_+ := \tfrac{1}{2}(α_R + α_L) and α_- := \tfrac{1}{2}(α_R - α_L), this gives

δT ∼ δT + 2 α_- z + [α_+ , M] + \{α_- , M\} , \qquad δT ∈ \mathrm{Mat}_n(R) .   (2.22)
The image of the map
\mathrm{Mat}_n(R) → \mathrm{Mat}_n(R) , \qquad α_- \mapsto 2 α_- z + \{α_- , M\}   (2.23)
contains all of Mat n (zR), 4 so we can use α − to eliminate all z-dependence in δT . We still have the freedom to use α + , which defines an equivalence relation
δT ∼ δT + [α + , M ] , δT ∈ Mat n (C) . (2.24)
The cokernel of the adjoint action by M has dimension
d_λ := \sum_i (2i - 1)\, λ_i ,   (2.25)
where λ is the partition that specifies the nilpotent orbit of M. If one considers only traceless matrices, δT then depends only on d_λ - 1 parameters. Note that being in the cokernel of ad(M) corresponds to commuting with the other nilpotent element in the sl_2-triple generated by M. Thus M + δT parameterizes the Slodowy slice S_M transverse to M (see Appendix A for definitions and a proof of this statement). Note indeed that
∀ M ∈ O_λ(sl_n) , \quad \dim S_M = d_λ - 1 .   (2.26)
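A quick numerical cross-check of (2.25)-(2.26) (an illustrative sketch, not part of the derivation): for a nilpotent matrix M with Jordan type λ, the corank of ad(M) acting on Mat_n equals d_λ = Σ_i (2i - 1) λ_i; restricting to traceless matrices removes one parameter, giving dim S_M = d_λ - 1.

```python
# Corank of ad(M) for nilpotent M of partition lambda vs. d_lambda.
import numpy as np

def nilpotent_from_partition(lam):
    """Block-diagonal nilpotent matrix with Jordan blocks of sizes lam."""
    n = sum(lam)
    M = np.zeros((n, n))
    start = 0
    for size in lam:
        for k in range(size - 1):
            M[start + k, start + k + 1] = 1.0
        start += size
    return M

def coker_dim_ad(M):
    n = M.shape[0]
    # ad(M): X -> M X - X M, written as an n^2 x n^2 matrix (row-major vec).
    ad = np.kron(M, np.eye(n)) - np.kron(np.eye(n), M.T)
    return n * n - np.linalg.matrix_rank(ad)

for lam in [(1, 1), (2,), (2, 2), (3, 1), (3, 2, 2, 1)]:
    lam = sorted(lam, reverse=True)
    d = sum((2 * i - 1) * l for i, l in enumerate(lam, start=1))
    print(lam, "d_lambda =", d, " corank of ad(M) =", coker_dim_ad(nilpotent_from_partition(lam)))
```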
Kraft-Procesi Transitions.

Consider two partitions λ and µ, which are immediately adjacent in the partial order - one says that λ covers µ if they are adjacent and λ > µ. Then λ and µ differ only in two entries, say with indices i < j, with λ_i - 1 = µ_i and λ_j + 1 = µ_j, and one of the two following transitions occurs:
Condition   | Transition name     | Transverse slice
j = i + 1   | A_{λ_i - λ_j - 1}   | C^2/Z_{λ_i - λ_j}
µ_i = µ_j   | a_{j - i - 1}       | \bar{O}_{min}(sl(j - i, C))        (2.27)
The first corresponds to a Kleinian singularity, whereas the second is the closure of a minimal nilpotent orbit. The equivalence class of tachyon matrices for a partition µ = [µ 1 , . . . , µ r ] (with µ 1 ≥ · · · ≥ µ r ) is characterized by a common SNF, i.e.
\mathrm{SNF}(T_µ) = \mathrm{diag}\big(1, \ldots, 1, z^{µ_r}, \ldots, z^{µ_1}\big) .   (2.28)
Starting from the SNF of partition µ, the Kraft-Procesi transition is realized using
\mathrm{SNF}\begin{pmatrix} z^{µ_j} & α\, z^{µ_j - 1} \\ 0 & z^{µ_i} \end{pmatrix} = \begin{pmatrix} z^{µ_j - 1} & 0 \\ 0 & z^{µ_i + 1} \end{pmatrix} = \begin{pmatrix} z^{λ_j} & 0 \\ 0 & z^{λ_i} \end{pmatrix}   (2.29)

for α ≠ 0.
Note that this is precisely the tachyon matrix formalism analog of the way Kraft-Procesi transitions are realized in Hanany-Witten brane systems for 3d N = 4 quiver theories in [90].
Examples.
The equality (2.15) corresponds to the covering of partition [1,1] by [2], whereby two branes are combined into a thick brane. A less trivial case is the covering of [2,2] by [3,1], where we do not simply have two branes being combined. Rather, one of the two thick branes needs to be broken. This is realized in our framework using (2.29) as follows:
\mathrm{SNF}\begin{pmatrix} z^2 & α z \\ 0 & z^2 \end{pmatrix} = \begin{pmatrix} z & 0 \\ 0 & z^3 \end{pmatrix}   (2.30)

for α ≠ 0.
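For 2 × 2 matrices over C[z] the SNF is simply diag(d_1, d_2/d_1), with d_1 the gcd of the entries and d_2 the determinant. The following sympy sketch (our own helper, written for illustration) reproduces the moves (2.15) and (2.30).

```python
# 2x2 Smith Normal Form over C[z] via gcd of entries and the determinant.
import sympy as sp

z = sp.symbols('z')

def snf_2x2(T):
    d1 = sp.gcd(sp.gcd(T[0, 0], T[0, 1]), sp.gcd(T[1, 0], T[1, 1]))
    d2 = sp.factor(T.det())
    return sp.diag(d1, sp.cancel(d2 / d1))

# [1,1] -> [2]: two branes at the origin bound into a thick brane, eq. (2.15)
print(snf_2x2(sp.Matrix([[z, 1], [0, z]])))          # diag(1, z**2)
# [2,2] -> [3,1]: breaking one thick brane, eq. (2.30) with alpha = 1 (!= 0)
print(snf_2x2(sp.Matrix([[z**2, z], [0, z**2]])))    # diag(z, z**3)
```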
When the nilpotent orbit Hasse diagram is linear, one can build matrices that encode all partitions at once. This is the case for n ≤ 5, where the matrices are given by
\begin{pmatrix} z & a_1 \\ 0 & z \end{pmatrix} , \qquad
\begin{pmatrix} z & a_1 & 0 \\ 0 & z & a_2 \\ 0 & 0 & z \end{pmatrix} , \qquad
\begin{pmatrix} z & a_1 & 0 & 0 \\ 0 & z & z a_3 + a_4 & 0 \\ 0 & 0 & z & a_2 \\ 0 & 0 & 0 & z \end{pmatrix} ,   (2.31)

\begin{pmatrix}
z & a_1 & 0 & 0 & 0 \\
0 & z & z a_3 & 0 & a_2 a_3 a_4 \\
0 & 0 & z & a_2 & 0 \\
-z^3 a_6 & 0 & z^2 a_1 a_3 a_6 & z & -z a_4 \\
-z a_2 a_4 a_5 (a_1 a_2 a_3 a_6 + 1) & 0 & a_1^2 a_2^2 a_3^2 a_4 a_5 a_6 & 0 & z
\end{pmatrix} .   (2.32)
Example: GTPs for T n and Related Models
In this section, we use an intermediate step in correspondence between M-theory on a CY threefold and IIB 5-brane-webs: IIA on a resolved C 2 /Z n singularity with D6-branes. This discussion follows the philosophy of [13]. We start in this section with the instructive example of T n , and consider its description as well as GTPs obtained by adding white dots along a single edge.
The Setup
Consider the toric local Calabi-Yau defined by the toric diagram with vertices at coordinates (0, 0), (n, 0) and (n, n), drawn here for n = 5:
W 1 W 3 W 2 (3.1)
The generators of the dual cone are
(−1, 0, 0) ↔ W 1 (3.2) (1, −1, n) ↔ W 2 (3.3) (0, 1, 0) ↔ W 3 (3.4) (0, 0, 1) ↔ Z . (3.5)
The first three generators are vectors normal to the 2-dimensional facets on the fan, drawn in blue arrows when projected on the CY plane in (3.1). As an algebraic variety, the toric threefold is simply a hypersurface in C 4 :
W 1 W 2 W 3 = Z n ⊂ C 4 W 1 , W 2 , W 3 , Z . (3.6)
This space has non-isolated singularities, specified by the following intersecting ideals:
I sing = (W 1 , W 2 , Z) ∩ (W 1 , W 3 , Z) ∩ (W 2 , W 3 , Z) . (3.7)
Along each such ideal, there is a family of A n−1 -singularities. These are in one-to-one correspondence with the three edges of the toric graph. The singular threefold admits three different (albeit linearly dependent) C * -actions which act on the coordinates with the following weights:
W 1 W 2 W 3 Z C * 1 0 1 −1 0 C * 2 1 0 −1 0 C * 3 1 −1 0 0 (3.8)
As explained in toric language in [13], we can define projections π i , for i = 1, 2, 3, with respect to these actions, and this will bring us down to IIA. Let (i, j, k) be a permutation of (1, 2, 3). The way to reduce along a particular C * -action C * i is to pick the pair of 'charged' coordinates W j and W k , and setup a C * -fibration over an new complex coordinate V jk as follows:
C * i : C[W 1 , W 2 , W 3 , Z] ∼ = C[W 1 , W 2 , W 3 , Z, V jk ] (W j W k − V jk ) . (3.9)
Now we can rewrite the threefold in the following presentation:
C[W 1 , W 2 , W 3 , Z] (W 1 W 2 W 3 − Z n ) ∼ = C[W 1 , W 2 , W 3 , Z, V jk ] (W j W k − V jk ; W i V jk − Z n )
.
(3.10)
The IIA reduction is achieved by reducing over the S 1 ⊂ C * action in each case. The noncompact part R ⊂ C * becomes a transverse direction to the D6-branes. The projection is simply defined as dropping the pair of coordinates (W j , W k ), leaving us with a local K3 with an A n−1 Klein singularity
C[W i , V jk , Z] (W i V jk − Z n ) . (3.11)
To simplify the notations, we switch to the more standard
X := W i Y := V jk (3.12)
so that the A n−1 singularity (a local K3) is described by
XY = Z n . (3.13)
There are D6-branes on the locus defined by the ideal
I D6 = (Y, Z n ) ,(3.14)
which we call the D6 ideal. It is, first of all a stack of n non-compact D6-branes. However, since this passes through the singularity, more is at play here, and we need to resolve the local K3 to refine our understanding. In order to describe the resolution of the A n−1 orbifold (3.13), we introduce homogeneous coordinates (z 1 , e 1 , . . . , e n−1 , z 2 ) with n − 1 C * -actions z 1 e 1 e 2 e 3 . . . e n−3 e n−2 e n−1 z 2
C * 1 1 −2 1 0 . . . 0 0 0 0 C * 2 0 1 −2 1 . . . 0 0 0 0 . . . C * n−2 0 0 0 0 . . . 1 −2 1 0 C * n−1 0 0 0 0 . . . 0 1 −2 1 (3.15)
The coordinates are homogeneous with respect to the n − 1 projective actions, by which the space is quotiented. Each row gives the list of weights of the coordinate with respect to each such action. Just as one must excise particular loci when creating standard projective space, so must one excise a number of loci here.
· · · e 1 = 0 e 2 = 0 e n−2 = 0 e n−1 = 0 Specifically, if a coordinate is set to zero, then only one its two 'neighbors' are allowed to vanish. See figure 6. For example, we can have e 2 = 0, and e 3 = 0, or e 2 = 0 and e 1 = 0, but the pair (e 1 , e 3 ) does not form a valid ideal for a vanishing locus. In terms of the coordinates (X, Y, Z), we have
z 1 = 0 z 2 = 0X = n i=0 e n−i i , Y = n i=0 e i i , Z = n i=0 e i (3.16)
where we have introduced e 0 := z 1 and e n := z 2 . The locus e i = 0 corresponds to the i-th exceptional P 1 . The loci z 1 = 0 and z 2 = 0 correspond to noncompact holomorphic curves intersecting the first and last P 1 , respectively. Line bundles over this space are characterized by their first Chern class, which is encoded as O(k 1 , . . . , k n−1 ). A section of this bundle is a polynomial of homogeneous multi-degree (k 1 , . . . , k n−1 ). The brane locus (3.14) is now given by
I D6 = n i=0 e i i , n i=0 e n i = n i=0 e i i . (3.17)
The interpretation is as follows: there are i D6-brane wrapping the i-th P 1 , and n noncompact D6-branes on the curve z 2 = 0, which intersects the (n−1)-th sphere at one point. At the SCFT point, all the P 1 's shrink to zero size. On the Coulomb branch, where the Kähler volumes are non zero, the effective theory can be read from the ideal (3.17). It is described by the quiver:
U(1) U(2) · · · U(n − 2) U(n − 1) n (3.18)
Tachyon Condensation Picture
The theory (3.18) is encoded by via the tachyon condensation/coherent sheaf language as follows. First recall that in terms of complexes, one can describe the branes as follows:
• A brane B i wrapped on the i-th P 1 given by e i = 0 can be described as the cokernel of the complex of line bundles:
B i : O(−e i ) O , e i (3.19)
where O is the structure sheaf over the local K3, and O(−e i ) is the dual of the line bundle O(e i ), of which e i is a section. For instance, O(−e 1 ) = O(2, −1, 0, . . . , 0).
• A noncompact 'flavor brane' B F at the locus z 2 = 0, intersecting the rightmost P 1 (given by e n−1 = 0), is given by the following complex:
B n : O(−z 2 ) O . z 2 (3.20)
• More generally, a noncompact D6-brane intersecting the i-th exceptional P 1 will be given by the zero-locus of a section of O(0, . . . , 0, 1, 0, . . . , 0), where the '1' is the i-th entry.
Define N = 1 2 n(n+1) D8-branes with gauge bundle F D8 := O ⊕N and N anti-D8-branes with gauge bundle
F D8 := O(2, −1, 0, · · · , 0) ⊕ O(−1, 2, −1, · · · , 0) ⊕2 ⊕ · · · ⊕ O(0, · · · , 0, −1) ⊕n . (3.21)
We then define the tachyon map
T : F D8 → F D8 (3.22)
as a diagonal matrix T = Diag(e 1 · 1 1 , e 2 · 1 2 , . . . , e n−1 · 1 n−1 , z 2 · 1 n ) . Note that det T = Y , so that the equations of the threefold are
W j W k = det T = Y , XY = Z n . (3.24)
from which one recover the original equation W i W j W k = Z n . The resulting D6-brane system is defined as the cokernel S := cok(T ) of this map, which is the locus where T fails to be invertible. So S is reducible:
S = n i=1 O ⊕i e i ,(3.25)
where O p means the structure sheaf with support over p = 0. There is an exact sequence
F D8 F D8 S 0 . T (3.26)
Fluctuations. The fluctuations around this background are computed as self-extensions, in the same way as in section 2. This means that the fluctuations δT of the tachyon T belong to the self-extension group,
δT ∈ Ext 1 (S, S) = Hom D(K3) (S, S[1]) ,(3.27)
where we consider the Homs in the derived category of coherent sheaves on K3 D(K3), and the [1] means that we shift the complex one step to the left. The matrix δT can be decomposed in blocks, and there will be non-zero fluctuations just above and below the diagonal.
In practice, the fluctuations are subjected to three conditions, that we will be using repeatedly in the following sections: (iii) The components of δT are subject to the identifications (2.5).
Example: T 2
For concreteness we work out in detail the case n = 2 for T n before treating the general case. The tachyon matrix is
O(2) ⊕ O(−1) ⊕2 O ⊕3 T , T = e 1 0 0 z 2 · 1 2 (3.28)
and we give names to the blocks in the fluctuation δT in correspondence with the quiver
δT = Φ 1 Q 1 Q 1 Φ 2 1 2 Q 1 Q 1 Φ 1 Φ 2 (3.29)
The three conditions listed above give
δT ∈ Γ(O(−2) (e 1 ) ) Γ(O(1) (e 1 ,z 2 ) ) · C 1×2 Γ(O(−2) (e 1 ,z 2 ) ) · C 2×1 Γ(O(1) (z 2 ) ) · C 2×2 = 0 C 1×2 · z 1 C 2×1 · 1 z 2 1 C 2×2 · z 1 . (3.30)
Here, Γ indicates that we take sections of the bundles, and C n×m is the set of n × m matrix of complex numbers. The last equality is easily checked, e.g. for the lower-left entry, the regular sections are generated by rational functions in z 1 alone, having C * -weight −2, and poles are allowed as the support is the intersection between two curves where z 1 = 0. In terms of Ext groups between B 1 (see (3.19)) and B 2 (see (3.20)), we can write (using e.g. a spectral sequence argument)
Q 1 ∈ Ext 1 (B 1 , B 2 ) = H 0 (O(e 1 ) e 1 ,z 2 ) = 1 z 2 1 .
(3.31)
Q 1 ∈ Ext 1 (B 2 , B 1 ) = H 0 (O(z 2 ) e 1 ,z 2 ) = z 1 . (3.32)
To summarize, the background tachyon plus fluctuation is given by
T + δT = e 1 q 1 z 1 q 1 z 2 1 z 2 · 1 2 + z 1 ϕ 2 ,(3.33)
where we have pulled out all dependencies in z 1 , e 1 and z 2 , so that q 1 ∈ C 1×2 , q 1 ∈ C 2×1 and ϕ 2 ∈ C 2×2 are pure constants: Figure 7. Intersecting branes before and after the transformation that maps the off-diagonal Higgsfield entries Q 1 ,Q 1 to diagonal ones with a pole. The orange dot signals the pole in the Higgs field at X = 0.
Q 1 = q 1 z 2 1 , Q 1 = q 1 z 1 , Φ 2 = ϕ 2 z 1 . (3.34) e 1 = 0 z 1 = 0 z 2 = 0 Φ ϕ 1 Q 1 ,Q 1 e 1 = 0 z 1 = 0 z 2 = 0 ϕ 2 = − M X ϕ 1
Note that the term q 1 z 2 1 ensures that (3.33) is defined on the locus z 1 = 0 We can perform basis changes from the left and right, using (2.6), as follows:
1 0 − q 1 z 2 1 e 1 1 2 · e 1 q 1 z 1 q 1 z 2 1 z 2 · 1 2 · 1 − q 1 z 1 e 1 0 1 2 = e 1 0 0 z 2 · 1 2 − M z 1 e 1 (3.35)
In the last step we have introduced the meson matrix M := q 1 q 1 .
(3.36)
In the 5d effective field theory, F-term conditions impose that M be nilpotent. This can also be demonstrated mathematically via the so-called cone construction in the derived category of coherent sheaves. See [83] for examples of this mechanism. This diagonalization shows that giving a vev to the meson field can be subsumed into a shift of vev of the adjoint field φ on the flavor branes, with a pole. Using the coordinate X = z 2 1 e 1 (see (3.16)) on the z 2 = 0 plane, we can write this as
T + δT ∼ e 1 0 0 z 2 · 1 2 + z 1 · ϕ 2 with ϕ 2 = − M X . (3.37)
The transformation from (3.33) to (3.37) means that we can regard in an appropriate regime the intersecting branes in figure 7 as a stack of two branes on z 2 = 0 with a complex codimension-one defect on its world-volume at e 1 = 0. This is in agreement with the findings of [85,87], where the authors find that a vev of bifundamental fields at the intersection of two branes can be subsumed into a pole for the adjoint of one of the two branes. In the picture, we trade the description in figure 7 on the left with the right.
Up to a change of basis we can take M in canonical Jordan form. There are two possibilities, corresponding to the two partitions of n = 2:
M [1 2 ] = 0 0 0 0 and M [2] = 0 1 0 0 . (3.38)
Consider the latter case,
T [2] := e 1 t [2] := e 1 0 0 0 z 2 − 1 z 1 e 1 0 0 z 2 .
(3.39)
We still have det T [2] = ez 2 2 = Y . So the brane configuration has not changed geometrically. Accordingly, the M-theory uplift is still given by (3.24). However, the tachyon matrix shows us that the flavor brane no longer carries an SU(2) group. This is the hallmark of a Tbrane: A non-abelian bound state of branes that does not realize the gauge group that it would naively have given its geometry. This T-brane effect is the IIA counterpart of the change in boundary conditions in the dual type IIB brane-webs (3.40) In this particular case, once the D5-segment has been sent away, a Hanany-Witten move where the 7-brane detaches completely becomes possible: (3.41) How does this translate into the IIA language, i.e. in terms of the tachyon field? Let us see what deformations are available, starting from the new vacuum defined by (3.39). Using the results of section 2, the fluctuations of t [2] are
δt [2] = z 1 · 0 0 α 0 , α ∈ C . (3.42)
Now in order to see how this affects the geometry, we add this perturbation to T [2] T [2] + δt [2]
= e 0 0 0 z 2 − 1 z 1 e 0 αz 1 z 2 .
(3.43)
Now the geometry (3.24) is deformed to
W j W k = det T = Y + α , XY = Z 2 .(3.44)
which reduces to the hypersurface
W 1 W 2 W 3 = Z 2 + αW i . (3.45)
So the CY threefold is fully desingularized. From the IIA perspective, we see that two flavor branes have recombined with one gauge brane, to give rise to a noncompact brane that can escape the singular locus. This is in full agreement with the 5-brane-web picture, whereby the 7-brane moves off to the left, becomes fully detached from the NS5-branes, and can escape to infinity, see (3.41). This corresponds precisely to removing the A 1 singularity. To summarize, the white dot is represented in the IIA picture as a nilpotent vev with poles on the flavor D6-stack. The further Hanany-Witten move that actually deforms the M-theory geometry is implemented by switching on a further vev on the flavor stack along the Slodowy slice with respect to the initial singular nilpotent vev.
General Case: T n
The general T n case is very similar to the T 2 example treated above. The fluctuations around the tachyon background are denoted by fields as follows:
1 2 . . . n − 2 n − 1 n Q 1 Q n−2 Q n−1 Q 1 Q n−2 Q n−1 Φ 1 Φ 2 Φ n−2 Φ n−1 (3.46)
Generalizing (3.31) and (3.32), we find that the 5d hypermultiplets that reside at the intersection of the curves e i = 0 and e i+1 = 0 are described by
Q i ∈ Ext 1 (B i , B i+1 ) = H 0 (O(e i ) (e i ,e i+1 ) ) = Y Z i+1 e i = j =i e j−i−1 j . (3.47) andQ i ∈ Ext 1 (B i+1 , B i ) = H 0 (O(e i+1 ) (e i ,e i+1 ) ) = Z i Y e i+1 = j =i+1 e i−j j . (3.48)
On the other hand, one can check that Ext 1 (B i , B j ) = 0 for |i − j| > 1. Therefore the tachyon matrix T , plus the fluctuations δT , fit schematically (we will provide the explicit form below) into the matrix
T + δT = e 1 · 1 1 Q 1 Q 1 e 2 · 1 2 . . . . . . . . . . . . e n−1 · 1 n−1 Q n−1 Q n−1 z 2 · 1 n . (3.49)
As in (3.34), we introduce the notation
Q i = q i · Y Z i+1 e iQi =q i · Z i Y e i+1 ,(3.50)
such that q i andq i are constants. We can also have fluctuations on the diagonal, which we write as
Φ i = ϕ i · Y Z i+1 e i .
(3.51)
With this notation the F-terms at the ith node can be written q 1 q 1 = 0, and q i q i = q i+1 q i+1 i = 1, . . . , n − 2 .
(3.52)
We now come back to the reason why (3.49) is only a schematic form. The i-the hyper (Q i , Q i ) is only well defined at the (e i , e i+1 ) intersection, but has poles in the nearby patches. Hence, (3.49) is not well-defined over the whole target space. This is due to the projective nature of the resolved K3. Therefore, we must study it patch by patch. A good local affine coordinate on the locus e i = 0 for the hemisphere where e i+1 (respectively e i−1 ) does not vanish is Y Z i (respectively Z i Y ):
e i = 0 e i+1 = 0 Coordinate Y Z i Coordinate Z i+1 Y (3.53)
Note in particular that for i = N (respectively i = 0), this is compatible with the affine coordinate on the locus z 2 = 0 (resp. z 1 = 0) being simply X (resp. Y ), as chosen in (3.37).
In the patch that contains the intersection {e i = 0} ∩ {e i+1 = 0}, the tachyon fluctuation is expressed as
δT = Φ i Q i Q i Φ i+1 ∈ C i×i · Y Z i+1 e i C i×(i+1) · Z i Y e i+1 C (i+1)×i · Y Z i+1 e i C (i+1)×(i+1) · Z i Y e i+1
(3.54)
Using line and row transformations, one finds
1 i q i Z i Y −q i Y Z i+1 1 i+1 − q i q i Z e i 1 i − q i q i Z 0 0 e i+1 · 1 i+1 1 i − q i Z i e i+1 Y e i q i Y e i Z i+1 e i+1 1 i+1 − q i q i Z = e i · 1 i 0 0 e i+1 1 i+1 − q i q i Z .
(3.55)
Effectively, this transforms a pole of the form
e i · 1 i + Y Z i+1 e i ϕ i 0 0 e i+1 · 1 i+1 , ϕ i = − q i q i (Y /Z i ) (3.56)
into a pole of the form
e i · 1 i 0 0 e i+1 1 i+1 + Z i Y e i+1 ϕ i+1 , ϕ i+1 = −q i q i (Z i+1 /Y ) . (3.57)
Let us introduce the n × n meson matrix M = q n−1 q n−1 .
(3.58)
As argued for the T 2 previously, this meson matrix must be nilpotent. We can characterize the tachyon fluctuation entirely by the last pole (3.57) for i = n − 1, which gives
ϕ n = − M X ,(3.59)
consistently with what we found for the n = 2 case in (3.37). In summary, the bifundamental matter between the various branes can be subsumed under a shift of the 7d SU(n)-adjoint Higgs ϕ n , as a simple pole with residue equal to a nilpotent element M . Since M n = 0, we still have
det T = n i=1 e i i = Y . (3.60)
Hence, the brane locus remains unscathed despite the activation of the matter fields.
Interpretation in terms of Generalized Toric Polygons
We saw in the last subsection that the geometry of the threefold can be affected by the presence of a pole of the form ( q → (r i = rank(q i q i )) i=1,...,n−1 .
For a given M ∈ N , the ranks of the bilinears in q i q i that map to M are not fixed. In other words, r(m −1 (M )) contains more than one element. However, there is a unique element of r min ∈ r(m −1 (M )) that minimizes the sum of the entries. If M ∈ O λ , then r 0 is given by the partial sums of the transpose of λ,
r min = n j=n+1−i λ T j i=1,...,n−1 .
(3.64) Note that this coincides with the ranks of the linear quiver r min 1 r min 2 · · · r min n−1 n (3.65) whose 3d N = 4 Coulomb branch is the corresponding nilpotent orbit closure. The ranks in (3.64) represent the minimal deformations needed in δT to produce the pole (3.59). In simple cases, the quivers (3.65) appear as embedded in the magnetic quiver describing the Higgs branch of the 5d theory. For a more precise statement about the embedding, the reader is referred to [55]. In the GTP, these ranks can be encoded with white dots inside the polygon, using again the notation introduced in [16]. For instance, for n = 9 and partition λ = [4, 3, 2], we can draw the configuration shown in figure 8. The minimal ranks correspond to the number of white dots in each column: here we get r min = (0, 0, 0, 0, 0, 1, 3, 6). These numbers give the brane configuration which saturates the s-rule, as illustrated in the lower part of figure 8. Impact on the Hasse diagram. The vevs of Higgs branch operators in the 5d theory can be projected on the space of complex structure deformations of the geometry. The fact that the pole (3.59) freezes some of these deformations means geometrically that the resulting pole-deformed Higgs branch is the slice in the initial Higgs branch transverse to these imposed deformations. This can be represented as follows.
The Higgs branch H with no pole is a symplectic singularity, which can be depicted using its Hasse diagram of singularities. The effect of the pole is to freeze certain deformations, represented here as a forced choice of a higher dimensional bottom symplectic leaf. The resulting Higgs branch H is the transverse slice to that leaf. See figure 9 for a schematic depiction.
The analysis above applies to the theory in any phase. In the case of the IIA reduction on the resolved A n−1 singularity shown in figure 6, the Higgs branch of the T n theory becomes the nilpotent cone of sl n . The transverse slices are then identified with the Slodowy slices. The example n = 4 is illustrated in figure 10. If M ∈ O λ for λ some partition of n, then the corresponding Higgs branch is the Slodowy slice S λ ∩ N . Moving on to the SCFT phase, the Higgs branch is no longer a nilpotent orbit closure, but the general picture stays the same: the Higgs branch is restricted to a transverse slice. This is what we already mentioned in the introduction, see figure 5. The Hasse diagrams drawn in figure 5 can be reproduced independently using the quiver subtraction algorithm [92,93] on the magnetic quivers extracted from the GTPs.
Hanany-Witten Moves
In the previous section, we defined the IIA counterpart of a 'white dot' as a nilpotent residue for the adjoint complex scalar on the flavor D6-branes. The nilpotency implies Figure 10. Diagram for the nilpotent cone of sl 4 . The dots are nilpotent orbits. When M belongs to a given orbit, the Higgs branch of the resulting theory is the transverse Slodowy slice, represented by a bracket on the right. that there will be no repercussions on the geometry of the branes, and hence, the M-theory CY will not be deformed. This is the hallmark of a 'T-brane' [74].
0 3 4 5 6 a 3 a 1 A 1 A 3 M ∈ O [1 4 ] M ∈ O [2,1 2 ] M ∈ O [2 2 ] M ∈ O [3,1]
We would now like to determine how the T-brane configuration impacts the physics of the 5d theory. This is done using the brane-web language, with Hanany-Witten moves, as we have explained in detail in section 3.3. The key point is that the nilpotent orbit of M determines which Hanany-Witten move can be performed in order to detach any of the 7-branes.
In the IIA setup, flavor D6-branes are now allowed to move across exceptional P 1 s, thereby changing the quiver structure. The way this is seen at the level of the Higgs field, is that by activating vevs along the Slodowy (transverse) slice to the nilpotent vev with poles, the characteristic polynomial of the tachyon matrix actually becomes deformed. This can be seen by computing the self-Ext group Ext 1 B (nil)
F , B (nil) F
of the nilpotent configuration on the flavor brane. Using the computation from section 2, we are looking for δT F such that
δT F ∼ δT F + z 2 Z [M, g] . (3.66)
This is equivalent to requiring that δT F be on a transverse slice to M , gauge equivalent to the Slodowy slice. We will choose a gauge such that δT F be the companion matrix to M as in [94], which is referred to as a 'reconstructible Higgs' in [74]. The details on how this is done are given in Appendix A. Say T nil is in the maximal nilpotent orbit, then the 'Hanany-Witten' tachyon will take the form
T HW = T nil + δT F = z 2 Z Z 1 Z 1 . . . . . . . . . Z 1 (−) n−1 a n X (−) n−2 a n−1 X −a 2 X Z , (3.67)
where the a i are constants (the fluctuation δT F is proportional to X as a consequence of (3.54) with i = N − 1). This matrix has determinant det(T HW ) = z 2 Z n Z n + a 2 XZ n−2 + . . . + a n X . (3.68) More generally, for M in the [λ 1 , λ 2 , . . . , λ r ] partition, we take a block diagonal matrix where each block will take the form:
(T HW ) i = z 2 Z Z 1 Z 1 . . . . . . . . . Z 1 (−) λ i −1 a (i) λ i X (−) λ i −2 a (i) λ i −1 X −a (i) 2 X Z . (3.69)
Note that by doing so, we are not using the full Slodowy slice S λ (which contains nonblock-diagonal matrices) but instead restrict to the intersection S 0 λ = S λ ∩ l λ with the Levi subalgebra l λ , see Appendix A. This is physically justified, as it guarantees that the flavor symmetry will not be further broken by the Hanany-Witten moves than it already has been by the white dots. The missing parameters, in S λ − S 0 λ , are associated to the non splitting of the flavor branes, illustrated on an example below in (3.77).
Putting together all these blocks, and the non-flavor part of the tachyon matrix, the end result of the full deformation is
Y → 1 X · r i=1 Z λ i + X λ i −2 j=0 a (i) λ i −j Z j ,(3.70)
where i λ i = n. Actually it can be argued that only the coefficients a (i) λ i affect the physics near the singularity: coefficient a (i) λ i −j for j = 0 correspond to a shift of Z λ i −j X , which is using (3.53) the coordinate along a P 1 with which the brane has zero intersection. Therefore the equation simplifies to
Y → 1 X · r i=1 Z λ i + Xa (i) ,(3.71)
where we have renamed a (i) := a (i) λ i . Reverting to the original notations (W 1 , W 2 , W 3 , Z) for the coordinates in C 4 , see (3.12), one finally gets
W 1 W 2 W 3 = r i=1 Z λ i + a (i) W 1 . (3.72)
Examples.
Let us work out a few examples. Equation (3.70) is worked out explicitly for T 3 and T 4 in table 1. The factorization allows to read off the corresponding quivers, which are the magnetic quivers for nilpotent orbits of sl(3) and sl(4) [95], as expected.
Partition det T Factorization [1 3 ] Y e 1 e 2 2 (z 3 2 ) [2, 1]
Y + a 2 Z e 1 e 2 (z 2 )(e 2 z 2 2 + a 2 z 1 ) [3] Y + a 2 Z + a 3 smooth
Partition det T Factorization [1 4 ] Y e 1 e 2 2 e 3 3 (z 4 2 ) [2, 1 2 ] Y + a 2 Z 2 e 1 e 2 2 e 2 3 (e 3 z 2 2 + a 2 z 2 1 e 1 )(z 2 2 ) [2 2 ] Y + (a (1) 2 + a(2)2 )Z 2 + a (1) 2 a(2)
2 X e 1 e 2 2 e 3 (e 3 z 2 2 + a (1)
2 z 2 1 e 1 )(e 3 z 2 2 + a (2) 2 z 2 1 e 1 ) [3, 1]
Y + a 2 Z 2 + a 3 Z e 1 e 2 e 3 (e 2 e 2 3 z 3 2 + a 2 z 2 1 e 1 e 2 e 3 z 2 + a 3 z 1 )(z 2 ) [4] Y + a 2 Z 2 + a 3 Z + a 4 smooth Table 1. Equations defining the brane configurations in T 3 and T 4 with white dots on one edge after HW moves. In the last column, the equation is rewritten in terms of the toric variables for the resolution, and maximally factorized. The factor in orange give the ranks of the low energy quiver while the terms within brackets give the flavor ranks.
We can illustrate the problem discussed in the previous paragraph with partition [2,1]. The generic matrix in the Slodowy slice would give rise to the final block for the tachyon matrix given by
Z 1 0 −a 2 X Z αX βX 0 Z + γX ,(3.73)
The resulting, deformed, equation then reads det T = e 1 e 2 a 2 γe 1 z 3 1 + a 2 z 1 z 2 + γe 1 e 2 z 2 1 z 2 2 + αβe 1 z 3 1 + e 2 z 3 2 , (3.74) from which one reads off a quiver with two abelian nodes and a single flavor brane system:
e 1 = 0 e 2 = 0 z 1 = 0 z 2 = 0 (3.75)
This system, drawn in teal, intersects both P 1 divisors at e 1 = 0 and e 2 = 0. This breaks the global isometry u(1) of the nilpotent Slodowy slice S [2,1] ∩ N . By restricting to the Levi subalgebra, i.e. sending α, β and γ to zero, one gets one more factorization:
det T = e 1 e 2 (z 2 ) a 2 z 1 + e 2 z 2 2 . The flavor branes are now disjoint and can be moved independently:
e 1 = 0 e 2 = 0 z 1 = 0 z 2 = 0 (3.78)
Each flavor brane intersects a single P 1 , and the global symmetry of the resulting Higgs branch (still in the resolved phase) is read to be s(u(1) ⊕ u(1)) = u(1).
General Discussion
White Dots along a Single Edge
Having discussed the case of T n in detail, we move to a generic toric polygon and its GTP deformations. Let P be a convex polygon in R 2 with vertices in Z 2 . Pick an edge on P , and denote by n its length (i.e. the edge contains exactly n + 1 points in Z 2 ). Using an SL(2, Z) transformation, one can assume without loss of generality that this edge extends between the vertices (0, 0) and (0, n), and that all vertices of P other than (0, 0) and (0, n) have coordinates (i, j) with i < 0:
· · · · · · (0, 0) (0, n) (4.1)
In the toric threefold, the length n edge is translated into the presence of a dimension 1 singular stratum with transverse slice of type A n−1 . In the brane-web picture, this means that n D5-branes extend to infinity, and the boundary condition, encoded in how they end on D7-branes, can again be encoded in a T-brane datum. This can be done explicitly using the IIA reduction as in section 3 for any given polygon (and we do so for the example of a rectangular P in the next section), even though it would be extremely cumbersome to try to give general formulas that would apply to any polygon. The general idea, however, is clear: in the T n example discussed in section 3, the undeformed equation W 1 W 2 W 3 = Z n yields the A n−1 singularity transverse to the line W 2 = W 3 = Z = 0 by setting W 1 equal to a constant, say W 1 = c, giving
cW 2 W 3 = Z n . (4.2)
It is then this equation that is modified by (i) adding a nilpotent pole of the form (3.59) with M in the prescribed nilpotent orbit, and (ii) performing HW moves to deform the geometry, in effect replacing
Z n → i (Z λ i + a (i) W 1 ) . (4.3)
The general case is obtained by mimicking this procedure: among the equations for the toric threefold corresponding to the polygon P , the line with transverse singularity A n−1 can be parametrized by a coordinate W 1 , and the transverse slice is again given by (4.2). This is one of the equations defining the toric threefold, called a "boundary equation" in [96]. This equation should then be replaced by
cW 2 W 3 = i (Z λ i + a (i) W 1 ) . (4.4)
This is in agreement with [96,Theorem 4.8]. A useful mnemonic is that in the toric diagram (3.1), the deformation of the edge labelled by W 1 involves an A n−1 equation involving the coordinates labelling the adjacent edges, here W 2 and W 3 , with the degree n polynomial in Z deformed by powers of W 1 . In the following subsections, we give examples to illustrate the scope of our results, and also show certain limitations.
Successive deformations.
Before discussing the examples, we want to address a natural question: given a GTP with white dots on an edge of length n characterized by a partition µ, is it possible to deform it to the same GTP with partition λ instead? One necessary condition is that λ > µ, and it is also sufficient if the polygon is large enough. 5 In this case, one can deform the nilpotent pole according to the general rules spelled out in section 2.
For instance, if n = 4, we can use the explicit form (2.31). Going from partition [2, 1 2 ] to [2 2 ] is straightforward as it just involves merging two Jordan blocks, while going from [2 2 ] to [3,1] requires using higher powers of z (recall (2.29)):
z 1 0 0 0 z 0 0 0 0 z 0 0 0 0 z z 1 0 0 0 z 0 0 0 0 z α 0 0 0 z z 1 0 0 0 z zβ 0 0 0 z α 0 0 0 z α = 0 β = 0 (4.5)
However it is worth mentioning that this can be done on the T-brane before HW deformations are added. Crucially, the two operations do not commute in general. Here for instance the deformed equations for T 4 with partition [2,2] and [3,1], given in table 1, cannot be deformed from one to the other.
Rectangular Box
Consider a rectangular box of size n × m,
m n W 1 W 3 W 2 W 4 (4.6)
The 5d SCFT this describes admits several well-known low energy gauge theory deformations. The toric threefold singularity is not a hypersurface, but it is described by the pair of equations
W 1 W 3 = Z m , W 2 W 4 = Z n . (4.7)
We can add white dots on the right segment, labeled by W 1 , exactly as in section 3. The deformed equations are
W 1 W 3 = Z m , W 2 W 4 = i (Z λ i + a (i) W 1 ) . (4.8)
Note that again, the coordinate W 1 is used to deform the A n−1 equation involving the coordinates labeling the adjacent edges W 2 and W 4 .
Example.
The following example is a useful check of this proposal, as the resulting model is related to T 3 . Consider the case m = 3, n = 2. Placing a white dot on the length 2 edge yields the equations
W 1 W 3 = Z 3 , W 2 W 4 = Z 3 + aW 1 . (4.9)
We can now use the second equation to solve for W 1 , and we get the hypersurface
W 2 W 3 W 4 = Z 2 (aZ + W 3 ) .
(4.10)
In the limit where a → ∞, the D7-brane, in the web interpretation, has crossed the whole brane system from right to left. We are left with
W 2 W 3 W 4 = Z 3 ,(4.11)
and we have reproduced the prediction (1.5)
↔ (4.12)
This is a consistency check on the validity of our proposal.
Generic Triangle
In this subsection we consider more examples which involve threefolds what are not hypersurfaces in C 4 . Consider the case where the toric polygon is a triangle, of arbitrary shape. Pick one edge of length n. Then an SL(2, Z) transformation can bring the triangle to a frame where its three vertices are:
{(0, 0); (0, n); (−a, b)} , a ∈ Z >0 , b ∈ Z . (4.13)
This means the toric fan is generated by the three vectors (0, 0, 1), (0, n, 1) and (−a, b, 1). Among the generators of the dual cone we have
(−1, 0, 0) ↔ W 1 (4.14) n − b d 2 , − a d 2 , an d 2 ↔ W 2 (4.15) b d 1 , a d 1 , 0 ↔ W 3 (4.16) (0, 0, 1) ↔ Z . (4.17)
In order to write the equation of the threefold, we have introduced d 1 = gcd(a, b), d 2 = gcd(a, n − b) and d 3 = gcd(a, b, n − b). One of the equations for the toric threefold is
W n/d 3 1 W d 2 /d 3 2 W d 1 /d 3 3 = Z an/d 3 ,(4.18)
but in general there are other equations, as there are other generators in the dual cone. In the case of T n (3.1), equation (4.18) is sufficient and reduces to (3.6), but this is a special case. To illustrate this phenomenon, we consider an example.
Example.
We take the case a = 2 and b = 0.
2 2n W 1 W 2 W 3 (4.19)
The dual cone is generated by 5 vectors: 6
(−1, 0, 0) ↔ W 1 (4.20) (n, −1, 2n) ↔ W 2 (4.21) (0, 1, 0) ↔ W 3 (4.22) (0, 0, 1) ↔ Z (4.23) (1, 0, 2) ↔ T (4.24)
and the threefold is described by two equations in C 5 :
W n 1 W 2 W 3 = Z 2n , W 1 T = Z 2 .
(4.25)
The first equation corresponds to (4.18). The singular locus associated to the length 2n edge is Z = W 2 = W 3 = T = 0, parametrized by W 1 . Setting as before W 1 = c, we can eliminate T and we find as expected an equation c n W 2 W 3 = Z 2n that we can deform. Therefore, the system of equations for the triangle (4.19) with one white dot on the long edge according to our conjecture is
↔ W n 1 W 2 W 3 = Z 2n−2 (Z 2 + aW 1 ) W 1 T = Z 2 (4.26) 6
There is an algorithmic way to see that the fifth vector (1, 0, 2) is needed, an no other. This is the computation of the Hilbert basis of the semi-group σ ∨ ∩ M , using standard notation in toric geometry. This Hilbert basis is unique, and in the present case it has 5 elements, given here.
Testing the Proposal: Resolution of Singularities
To test the proposal for the deformation of singularities, we compute the resolutions for the deformed singularities. For the starting point, which we take to be a toric variety, the resolution is easily obtained in a combinatorial fashion, by a complete triangulation of the toric polygon. However, no such computational simplification exists yet for the resolution of GTPs. In view of this, we therefore need to revert to resolving the actual algebraic varieties. We will carry this out for the T n theories.
Crepant Resolution of T n
The simplest type of singularities are the hypersurfaces that realize the T n theories. The flavor symmetry algebra is su(n) 3 except for n = 3, where we expect e 6 . This can be easily detected from the toric geometry by extracting the set of curves that are complete intersections between compact and non-compact (i.e. flavor) divisors. The resulting socalled combined fiber diagram (CFD) [26][27][28] is easily read off from the toric polygon [80].
Here we will take the more laborious path of resolving the hypersurface singularity, in preparation for resolving the deformed singularities. The hypersurface in C 4 is
W 1 W 2 W 3 = Z n .
(5.1)
The first resolution for all of these singularities is simply to remove the locus W 1 = W 2 = W 3 = Z = 0, which is achieved by inserting a P 3 with projective coordinates [W 1 , W 2 , W 3 , Z] (for a detailed exposition of resolutions from a physicist's perspective, see [97]). Denoting the exceptional section of the blowup by δ 1 the equation, after proper transform, which ensures that the resolution is crepant, becomes
W 1 W 2 W 3 = Z n δ n−3 1 . (5.2)
This can be further resolved by consecutively blowing up the loci W 1 = W 2 = W 3 = δ i = 0 i.e. by inserting another projective space with coordinates
[W 1 , W 2 , W 3 , δ i ] , i = 1, · · · , n − 3i ,(5.3)
which implies that the coordinates cannot vanish at the same time (and thus the above is a relation in the Stanley-Reissner ideal of the hypersurface). This is iterated until we reach
W 1 W 2 W 3 = Z n n−3i i=1 δ n−3i i . (5.4)
The remaining blowups will resolve the local singularities along W 1 = 0, W 2 = 0 and W 3 = 0 respectively, by small resolutions, i.e.
[W i , Z] , or [W i , δ i ] . (5.5)
Let us denote the compact divisors by S i and the non-compact divisors by D a . We then find from these resolutions the intersection matrix Figure 11. CFDs: The intersection graph for T 3 (left hand side) and the deformation of T 3 described by the partition [1,2] (right). The nodes are curves that are complete intersections between S i (i.e. the sum of all compact divisors) and the non-compact divisors. The green nodes are −2 self-intersection curves, which correspond to roots of the flavor symmetry algebra, white are −1 curves, which can be thought of as bifundamental matter. On the left we see the su(3) 3 flavor symmetry and the matter in the (3, 3, 1) etc, which enhances to e 6 . On the right the theory is rank 0.
G a = i S i · D a · D a (5.6) T 3 T 3 with deformation [1, 2]
to be precisely the adjacency matrix of the CFD for T n : i.e. three su(n) Cartan matrices, pairwise connected by −1 curves. We have shown examples in figures 11 and 12.
Resolution of Deformations of T n
Deformations of T 3 . By including the deformations, some of the above resolutions get obstructed. Lets consider the simplest case of T 3 , which itself is the rank 1 Seiberg theory with flavor symmetry e 6 . After a single blowup [W 1 , W 2 , W 3 , Z] we get one compact divisor δ 1 = 0, and three A 2 singularities. Resolving yields the CFD shown in figure 11 on the left hand side. The representations are (3, 3, 1), and cyclic permutations, and thus we get the known enhancement to e 6 7 ,
(3, 3, 1) ⊕ (1, 3, 3) ⊕ (3, 1, 3) −→ 27 . (5.7)
Adding the deformation results in W 1 W 2 W 3 = Z(Z 2 + W 1 ), which does not allow for a big resolution. There are small resolutions, along e.g. W 1 = Z = 0, indicating that this is a rank 0 theory.
Deformations of T 4 . More interestingly we can start with T 4 . All deformations involving a single edge and yielding a positive rank theory are shown in table 2. The deformation that is associated to the partition [2, 1 2 ] along one edge is the hypersurface
W 1 W 2 W 3 = Z 2 (Z 2 + W 1 ) . (5.8) Partitions Rank Free Symmetry Equation Fig. [1 4 ] 3 0 su(4) 3 W 1 W 2 W 3 = Z 4 12 [2, 1 2 ] 2 0 su(8) ⊕ su(2) W 1 W 2 W 3 = Z 2 (Z 2 + W 1 ) 12 [2 2 ]
1 0 e 7 W 1 W 2 W 3 = (Z 2 + aW 1 )(Z 2 + bW 1 ) 12 Table 2. The partition that defines the distribution of white dots in the GTP, the rank of the 5d SCFT, the flavor symmetry algebra, as well as hypersurface equation for the deformations of T 4 .
The first resolution (5.3) results in
W 1 W 2 W 3 = δ 2 Z 4 + Z 2 W 1 . (5.9)
Continuing the blowup results in the intersection matrix between the sum of compact divisors i S i and each of the non-compact ones This is shown in figure 12, from which the manifest flavor symmetry algebra is read off from the collection of −2 curves su(4) 2 ⊕ su(2) ⊕ u(1) .
i S i · D a · D b =
(5.11)
The additional u(1) follows from the general rule on flavor symmetry algebras from CFDs as laid out in [31]. The matter is in the representations, which combine naturally into the representations of the enhanced symmetry algebra as follows (4, 4, 1) ⊕ (6, 1, 1) ⊕ (1, 6, 1) ⊕ (4, 1, 2) ⊕ (1, 4, 2) −→ (28, 1) ⊕ (8, 2) , (5.12)
where we identify the enhanced flavor symmetry algebra as
g U V = su(8) ⊕ su(2) . (5.13)
Similarly for the partition [2 2 ], the manifest symmetry is su(4) 2 , while the actual symmetry is e 7 . In the resolution, see figure 12 on the right hand side, where we see the representation (4, 4) ⊕ (4, 4) ⊕ 2 × (6, 1) ⊕ 2 × (1, 6) −→ 56 . ]. The nodes are curves that are complete intersections between S i (i.e. the sum of all compact divisors) and the non-compact divisors. The green nodes are −2 self-intersection curves, which correspond to roots of the flavor symmetry algebra, white are −1 curves, which can be thought of as bifundamental matter. On the left we see the su(4) 3 flavor symmetry and the matter in the (4, 4, 1) etc representations. In the middle, we see the manifest flavor symmetry su(4) 2 ⊕ su(2) ⊕ u(1), but the matter shown results in the enhancement to su(8) ⊕ u(1). On the right the flavor symmetry enhances to e 7 .
Partitions Rank
Symmetry Table 3. Deformations of the T 5 model: the partition that defines the GTP, the rank of the expected 5d SCFT, the flavor symmetry algebra, and the deformed hypersurface equation.
Equation Fig. [1 5 ] 6 su(5) 3 W 1 W 2 W 3 = Z 5 13 [2, 1 3 ] 5 su(5) 2 ⊕ su(3) ⊕ u(1) W 1 W 2 W 3 = Z 3 (Z 2 + W 1 ) 13 [2 2 , 1] 4 su(5) 2 ⊕ su(2) ⊕ u(1) W 1 W 2 W 3 = Z(Z 2 + W 1 ) 2 15 [3, 1 2 ] 3 su(10) ⊕ su(2) W 1 W 2 W 3 = Z 2 (Z 3 + W 1 ) 13 [3, 2] 2 su(10) W 1 W 2 W 3 = (Z 2 + W 1 )(Z 3 + W 1 ) 14
Deformations of T 5 . Finally consider the T 5 model with deformations added along a single edge. These are summarized in table 3. First we recompute the resolution of the undeformed T 5 model, which is given in figure 13. The flavor symmetry algebra and the representation of the hypers is
g U V = su(5) 3 : (5, 5, 1) ⊕ (1, 5, 5) ⊕ (5, 1, 5) . (5.15)
The CFD is shown in figure 13. Adding a single white dot results in a theory with flavor algebra su(5) 2 ⊕ su(3) ⊕ u(1), with the CFD shown in the middle in figure 13, which has bifundamental matter consistent with these flavor symmetry algebras. Adding two white dots along a single edge in the partition [3, 1 2 ] we obtain the right hand figure 13. The representations under su(5) 2 ⊕ su(2) are Similarly for [3,2] we computed the resolution and find the CFD shown in figure 14, which shows the representations under su(5) 2 to be Again, the flavor symmetry algebra is consistent with the one predicted from GTPs.
T 5 with deformation [2,3] Figure 14. CFDs: The intersection graph for T 5 with the deformation labeled by the partition [2,3]. We see the su(5) 2 ⊕ u(1) and the matter in the (5, 5) ⊕ (10, 1) ⊕ (1, 10) which enhance into 45 of su(10). A Basic algebraic notions for sl n This appendix gathers a few elementary algebraic concepts needed in the bulk of the paper. In the following, we take g = sl n (C), which we represent as n × n complex matrices with vanishing trace, and we pick the diagonal matrices for the Cartan subalgebra h.
Nilpotent Orbits and Triples.
Let e be a nilpotent element of g. By the Jacobson-Morozov theorem, one can construct an sl 2 triple (e, f, h), i.e. a triple of elements of g with h ∈ h satisfying the commutation relations such that ρ(e) defines a nilpotent element of g. In our case, nilpotent orbits are characterized by partitions of n. For the partition λ = (λ 1 , . . . , λ r ), there is a canonical triple, where e is in the Jordan form specified by λ, h is diagonal and f is lower diagonal, defined as follows. Consider first the maximal orbit, λ = (n). In this case we define the canonical triple e = J n , h = H n and f =J n where (J n ) i,j = δ i+1,j , (H n ) i,j = δ i,j (n + 1 − 2i) , (J n ) i,j = δ i−1,j j(n − j) . where g f is the centralizer of f in g. This is called the Slodowy slice transverse to e. Note that this should not be confused with the nilpotent Slodowy slice, which is the intersection of the Slodowy slice with the nilpotent cone. When (e, f, h) is the canonical nilpotent element (A.4) associated to partition λ, we call S λ the Slodowy slice.
In order to construct explicitly S λ , we first need the set of matrices which commute withJ n . These are the matrices S(a 1 , . . . , a n ) with a 1 , . . . , a n ∈ C, defined by 8 (S(a 1 , . . . , a n )) ij := Then the centralizer g f is a set of block matrices of sizes λ i × λ j , where the block (i, j) depends on min(λ i , λ j ) parameters. We will not need its explicit form here. We only need the form of the diagonal blocks (i = j), which are of the form S(a The dimension of the nilpotent cone N is n(n − 1), and therefore we can deduce that the dimension of the nilpotent orbit of e is dim O λ = n(n + 1) − 2 r i=1 iλ i , (A.11)
which matches known results.
Levi subalgebras and Slodowy slices. A Borel subalgebra b of g is a maximal solvable subalgebra. The standard choice, which we make, is to pick for b the upper triangular matrices. The simple roots α i can be indexed by i ∈ I := {1, . . . , n − 1}, and to every subset Θ of I one can associate a parabolic subalgebra p Θ of g containing b. This parabolic subalgebra decomposes as a direct sum
p Θ = l Θ ⊕ n Θ (A.12)
where l Θ is the Levi subalgebra and n Θ is the nilradical of p Θ . Let λ = (λ 1 , . . . , λ r ) be a partition of n. We associate to this partition the set Θ λ = I − {λ 1 , λ 1 + λ 2 , . . . , λ 1 + · · · + λ r−1 } . (A. 13) This way, one can construct the corresponding Levi subalgebra l λ := l Θ λ . We then introduce the intersection S 0 λ := S λ ∩ l λ . (A.14)
With out choices, l λ is simply the algebra of block diagonal matrices, with blocks of respective sizes λ 1 , . . . , λ r , and S 0 λ is the set of traceless matrices which are block diagonal, with diagonal blocks J λ i +S(a 1 , a 2 , . . . , a λ i ). The dimension is dim S 0 λ = n−1, independent of λ.
Companion matrix.
It is easy to check that the matrix J n + S(0, a 2 , . . . , a n ) can be conjugated to a matrix of the form Any matrix in S 0 λ can then be conjugated to a block diagonal matrix with blocks of the form C(b 2 , . . . , b λ i ).
Cokernel of ad(e).
Consider a finite dimensional representation ρ : sl(2) → V of sl(2). One can decompose V into irreducible representations V i of dimensions n i . In each irrep, the image of ρ(e) has codimension 1, and the cokernel is spanned by the weight space of lowest weight, which by definition is precisely the kernel of ρ(f ). Therefore in the Lie algebra g seen as an sl(2) representation defined by a triple (e, f, h), the cokernel of ad(e) is g f .
A
Basic algebraic notions for sl n 38
Figure 4 .
4White dot and brane transition for a length 2 edge of a toric polygon.
. A canonical representative of each such equivalence class is given by the Smith Normal Form (SNF) computed in the ring R = C[z], i.e. any matrix T with entries in polynomials in z is equivalent under (2.6) to a unique diagonal matrix SNF(T ) := diag(p 1 , . . . , p r , 0, . . . , 0) , (2.7)
Figure 6 .
6Geometry of the resolved A n−1 singularity. Each curved line is a P 1 while the straight lines are the non compact divisors z 1,2 = 0.
(i) The C * weights of the columns of T and δT need to be compatible with (3.23) ;(ii) The fluctuations are regular sections of the relevant sheaves (no pole on the support locus) ;
3.59). This pole can be encoded in two ways: in the nilpotent meson matrix M , or in the list of hypermultiplet deformations q i and q i satisfying the relations (3.52). Let us call Q the set Q = {q = (q 1 , q 1 , . . . , q n−1 , q n−1 ) ∈ C 2×1 × . . . C (n−1)×n |(3.52)} .(3.61) We also call N the set of nilpotent n × n complex matrices. The map m : Q → N (3.62) q → M = q n−1 q n−1 is certainly not injective, as given a nilpotent matrix M , there are infinitely many families {q i , q i } i=1,...,n−1 mapping to it. Let us define r : Q → N n−1 (3.63)
Figure 8 .
8On top, we depict a part of the resolved GTP for the T 9 theory, with white dots on the right edge, along with an internal triangulation consistent with the white dots. The presence of white dots propagates into the interior of the GTP, thereby limiting the possible resolutions. Each column in the GTP corresponds to a boundary condition for D5-branes ending on D7-branes. This information is encoded in the ranks r i of the matrices appearing in δT . On the bottom, we show the associated D5-brane boundary conditions (where each circle denotes a D7-brane).
Figure 9 .
9Schematic form of a Hasse diagram of the Higgs branch H viewed as a symplectic singularity. The effect of the pole prescription corresponds geometrically to imposing a higher dimensional base leaf (the transverse slice is shown in red -here it is minimal, but it does not have to be in general). The remaining Higgs branch H is the transverse slice.
arise from the branching of the 56 of e 7 to su(4) 2 .
Figure 12 .
12CFDs: The intersection graph for T 4 (left hand side) and the deformation of T 4 described by the partition [1 2 2] (middle) and deformation of T 4 with partition [2 2
( 5 ,
55, 1) ⊕ (1, 5, 2) ⊕ (5, 1, 2) ⊕ (10, 1, 1) ⊕ (1, 10, 1) −→ (45, 1) ⊕ (10, 2) (5.16) T 5 T 5 with deformation [1 3 , 2] T 5 with deformation [1 2 , 3]
Figure 13 .
13CFDs: The intersection graph of −2 (green) and (−1) (white) curves, i.e. the CFD, for T 5 (LHS), the partition [2, 1 3 ] in the middle, and [3, 1 2 ] on the right hand side.which is consistent with the flavor symmetry enhancement to g U V = su(10) ⊕ su(2) .(5.17)
(5, 5 )
5⊕ (10, 1) ⊕ (1, 10) −→ 45 , (5.18) which is consistent with the enhancement to su(10). Finally consider figure 15, where we manifestly see the su(5) but the su(2) is decomposed into u(1). For partition [2 2 , 1], although the su(2) factor of the global symmetry is not directly visible in the resolution, an educated guess allows to read the following representations of su(5) ⊕ su(5) ⊕ su(2) on figure 15: (5, 1, 1) ⊕ (1, 5, 1) ⊕ (10, 1, 2) ⊕ (5, 10, 2) ⊕ (5, 5, 1) . (5.19)
Figure 15 .
15CFDs: The intersection graph for T 5 with the deformation labeled by the partition [1, 2 2 ].
[e, f ] = h , [h, e] = 2e , [h, f ] = −2f . (A.1)More precisely, there exists an embedding ρ : sl 2 −→ g , (A.2)
any partition λ the canonical triple is the block diagonale = Diag(J λ i ) , f = Diag(J λ i ) , h = Diag(H λ i ) . (A.4)Slodowy Slices. Given a triple (e, f, h), one constructs the space S e = e + g f , (A.5)
8
, we can compute the dimension of the Slodowy slices, which isdim S λ = −1 + 1≤i,j,≤r min(λ i , λ j ) = −(n λ ∩ N = dim S λ − (n − 1) = −2nThe matrices S(a1, . . . , an) for low values of n are:
b i are known functions of the a i . Note that the change of basis depends explicitly on the a i . A matrix in the form (A.15) is called a companion matrix. It has the convenient property that its characteristic polynomial (evaluated at −z) is det(z1 n + C) = z n + n i=2 b i z n−i . (A.16)
Unicity is guaranteed up to multiplication by units in the ring; demanding that the polynomials be monic, i.e. have coefficient 1 for the term of highest degree, fixes this redundancy.
The dominance ordering is defined as follows. If λ and µ are two partitions of n, we say that λ ≥ µ if and only if for all j,j i=1 λj ≥ j i=1µj.
This is easily proved by a recursive argument on the degree.
If the GTP is small, then certain deformations could lead to violations of the S-rule.
From the geometry we are also able to extract the flavor symmetry group[42,98], which however will not play a role in the present paper.
AcknowledgmentsWe would like to thank Cyril Closset for collaboration in early stages of this work. AB and SSN thank Hendrik Süß for discussions of related questions. This work, in particular AC and SSN, was supported and enabled by the Fondation Wiener-Anspach. This research is further supported by IISN-Belgium (convention 4.4503.15). AB is supported by the ERC Consolidator Grant 772408-Stringlandscape, and by the LabEx ENS-ICFP: ANR-10-LABX-0010/ANR-10-IDEX-0001-02 PSL*. SSN is supported in part by the "Simons Collaboration on Special Holonomy in Geometry, Analysis and Physics" and the EPSRC Open Fellowship EP/X01276X/1.
Five-dimensional SUSY field theories, nontrivial fixed points and string dynamics. N Seiberg, 10.1016/S0370-2693(96)01215-4hep-th/9608111Phys. Lett. B. 388N. Seiberg, Five-dimensional SUSY field theories, nontrivial fixed points and string dynamics, Phys. Lett. B 388 (1996) 753-760, [hep-th/9608111].
Extremal transitions and five-dimensional supersymmetric field theories. D R Morrison, N Seiberg, 10.1016/S0550-3213(96)00592-5hep-th/9609070Nucl. Phys. B. 483D. R. Morrison and N. Seiberg, Extremal transitions and five-dimensional supersymmetric field theories, Nucl. Phys. B 483 (1997) 229-247, [hep-th/9609070].
Branes, superpotentials and superconformal fixed points. O Aharony, A Hanany, 10.1016/S0550-3213(97)00472-0hep-th/9704170Nucl. Phys. B. 504O. Aharony and A. Hanany, Branes, superpotentials and superconformal fixed points, Nucl. Phys. B 504 (1997) 239-271, [hep-th/9704170].
Webs of (p,q) five-branes, five-dimensional field theories and grid diagrams. O Aharony, A Hanany, B Kol, 10.1088/1126-6708/1998/01/002hep-th/9710116JHEP. 012O. Aharony, A. Hanany and B. Kol, Webs of (p,q) five-branes, five-dimensional field theories and grid diagrams, JHEP 01 (1998) 002, [hep-th/9710116].
Five-branes, seven-branes and five-dimensional E(n) field theories. O Dewolfe, A Hanany, A Iqbal, E Katz, 10.1088/1126-6708/1999/03/006hep-th/9902179JHEP. 036O. DeWolfe, A. Hanany, A. Iqbal and E. Katz, Five-branes, seven-branes and five-dimensional E(n) field theories, JHEP 03 (1999) 006, [hep-th/9902179].
5d fixed points from brane webs and O7-planes. O Bergman, G Zafrir, 10.1007/JHEP12(2015)1631507.03860JHEP. 12163O. Bergman and G. Zafrir, 5d fixed points from brane webs and O7-planes, JHEP 12 (2015) 163, [1507.03860].
Brane webs, 5d gauge theories and 6d N = (1, 0) SCFT's. G Zafrir, 10.1007/JHEP12(2015)1571509.02016JHEP. 12157G. Zafrir, Brane webs, 5d gauge theories and 6d N = (1, 0) SCFT's, JHEP 12 (2015) 157, [1509.02016].
Brane webs and O5-planes. G Zafrir, 10.1007/JHEP03(2016)1091512.08114JHEP. 03109G. Zafrir, Brane webs and O5-planes, JHEP 03 (2016) 109, [1512.08114].
S 1 /T 2 compactifications of 6d N = (1, 0) theories and brane webs. K Ohmori, H Shimizu, 10.1007/JHEP03(2016)0241509.03195JHEP. 0324K. Ohmori and H. Shimizu, S 1 /T 2 compactifications of 6d N = (1, 0) theories and brane webs, JHEP 03 (2016) 024, [1509.03195].
Discrete theta angle from an O5-plane. H Hayashi, S.-S Kim, K Lee, F Yagi, 10.1007/JHEP11(2017)0411707.07181JHEP. 1141H. Hayashi, S.-S. Kim, K. Lee and F. Yagi, Discrete theta angle from an O5-plane, JHEP 11 (2017) 041, [1707.07181].
5-brane webs for 5d N = 1 G 2 gauge theories. H Hayashi, S.-S Kim, K Lee, F Yagi, 10.1007/JHEP03(2018)1251801.03916JHEP. 03125H. Hayashi, S.-S. Kim, K. Lee and F. Yagi, 5-brane webs for 5d N = 1 G 2 gauge theories, JHEP 03 (2018) 125, [1801.03916].
Dualities and 5-brane webs for 5d rank 2 SCFTs. H Hayashi, S.-S Kim, K Lee, F Yagi, 10.1007/JHEP12(2018)0161806.10569JHEP. 1216H. Hayashi, S.-S. Kim, K. Lee and F. Yagi, Dualities and 5-brane webs for 5d rank 2 SCFTs, JHEP 12 (2018) 016, [1806.10569].
Five-dimensional SCFTs and gauge theory phases: an M-theory/type IIA perspective. C Closset, M Zotto, V Saxena, 10.21468/SciPostPhys.6.5.0521812.10451SciPost Phys. 652C. Closset, M. Del Zotto and V. Saxena, Five-dimensional SCFTs and gauge theory phases: an M-theory/type IIA perspective, SciPost Phys. 6 (2019) 052, [1812.10451].
Branes and toric geometry. N C Leung, C Vafa, 10.4310/ATMP.1998.v2.n1.a4hep-th/9711013Adv. Theor. Math. Phys. 2N. C. Leung and C. Vafa, Branes and toric geometry, Adv. Theor. Math. Phys. 2 (1998) 91-118, [hep-th/9711013].
Five-dimensional supersymmetric gauge theories and degenerations of Calabi-Yau spaces. K A Intriligator, D R Morrison, N Seiberg, 10.1016/S0550-3213(97)00279-4hep-th/9702198Nucl. Phys. B. 497K. A. Intriligator, D. R. Morrison and N. Seiberg, Five-dimensional supersymmetric gauge theories and degenerations of Calabi-Yau spaces, Nucl. Phys. B 497 (1997) 56-100, [hep-th/9702198].
Webs of five-branes and N=2 superconformal field theories. F Benini, S Benvenuti, Y Tachikawa, 10.1088/1126-6708/2009/09/0520906.0359JHEP. 0952F. Benini, S. Benvenuti and Y. Tachikawa, Webs of five-branes and N=2 superconformal field theories, JHEP 09 (2009) 052, [0906.0359].
Phases of Five-dimensional Theories, Monopole Walls, and Melting Crystals. S A Cherkis, 10.1007/JHEP06(2014)0271402.7117JHEP. 0627S. A. Cherkis, Phases of Five-dimensional Theories, Monopole Walls, and Melting Crystals, JHEP 06 (2014) 027, [1402.7117].
Three dimensional canonical singularity and five dimensional N = 1 SCFT. D Xie, S.-T Yau, 10.1007/JHEP06(2017)1341704.00799JHEP. 06134D. Xie and S.-T. Yau, Three dimensional canonical singularity and five dimensional N = 1 SCFT, JHEP 06 (2017) 134, [1704.00799].
6D SCFTs and Phases of 5D Theories. M Zotto, J J Heckman, D R Morrison, 10.1007/JHEP09(2017)1471703.02981JHEP. 09147M. Del Zotto, J. J. Heckman and D. R. Morrison, 6D SCFTs and Phases of 5D Theories, JHEP 09 (2017) 147, [1703.02981].
Rigid limit for hypermultiplets and five-dimensional gauge theories. S Alexandrov, S Banerjee, P Longhi, 10.1007/JHEP01(2018)1561710.10665JHEP. 01156S. Alexandrov, S. Banerjee and P. Longhi, Rigid limit for hypermultiplets and five-dimensional gauge theories, JHEP 01 (2018) 156, [1710.10665].
P Jefferson, H.-C Kim, C Vafa, G Zafrir, 1705.05836Towards Classification of 5d SCFTs: Single Gauge Node. P. Jefferson, H.-C. Kim, C. Vafa and G. Zafrir, Towards Classification of 5d SCFTs: Single Gauge Node, 1705.05836.
| []
|
[
"Recur, Attend or Convolve? On Whether Temporal Modeling Matters for Cross-Domain Robustness in Action Recognition",
"Recur, Attend or Convolve? On Whether Temporal Modeling Matters for Cross-Domain Robustness in Action Recognition",
"Recur, Attend or Convolve? On Whether Temporal Modeling Matters for Cross-Domain Robustness in Action Recognition",
"Recur, Attend or Convolve? On Whether Temporal Modeling Matters for Cross-Domain Robustness in Action Recognition"
]
| [
"Sofia Broomé [email protected] \nKTH\nSweden\n",
"Ernest Pokropek [email protected] \nKTH\nSweden\n",
"Boyu Li [email protected] \nKTH\nSweden\n",
"Hedvig Kjellström [email protected] \nKTH\nSweden\n\nSilo AI\nSweden\n",
"Sofia Broomé [email protected] \nKTH\nSweden\n",
"Ernest Pokropek [email protected] \nKTH\nSweden\n",
"Boyu Li [email protected] \nKTH\nSweden\n",
"Hedvig Kjellström [email protected] \nKTH\nSweden\n\nSilo AI\nSweden\n"
]
| [
"KTH\nSweden",
"KTH\nSweden",
"KTH\nSweden",
"KTH\nSweden",
"Silo AI\nSweden",
"KTH\nSweden",
"KTH\nSweden",
"KTH\nSweden",
"KTH\nSweden",
"Silo AI\nSweden"
]
| []
| Most action recognition models today are highly parameterized, and evaluated on datasets with appearance-wise distinct classes. It has also been shown that 2D Convolutional Neural Networks (CNNs) tend to be biased toward texture rather than shape in still image recognition tasks[19], in contrast to humans. Taken together, this raises suspicion that large video models partly learn spurious spatial texture correlations rather than to track relevant shapes over time to infer generalizable semantics from their movement. A natural way to avoid parameter explosion when learning visual patterns over time is to make use of recurrence. Biological vision consists of abundant recurrent circuitry, and is superior to computer vision in terms of domain shift generalization. In this article, we empirically study whether the choice of low-level temporal modeling has consequences for texture bias and cross-domain robustness. In order to enable a light-weight and systematic assessment of the ability to capture temporal structure, not revealed from single frames, we provide the Temporal Shape (TS) dataset, as well as modified domains of Diving48 allowing for the investigation of spatial texture bias in video models. The combined results of our experiments indicate that sound physical inductive bias such as recurrence in temporal modeling may be advantageous when robustness to domain shift is important for the task. | 10.1109/wacv56688.2023.00418 | [
"https://export.arxiv.org/pdf/2112.12175v4.pdf"
]
| 250,607,580 | 2112.12175 | 133b7bc0f306aecf0c57f6ed1c963a2396e2701c |
Recur, Attend or Convolve? On Whether Temporal Modeling Matters for Cross-Domain Robustness in Action Recognition
Sofia Broomé [email protected]
KTH
Sweden
Ernest Pokropek [email protected]
KTH
Sweden
Boyu Li [email protected]
KTH
Sweden
Hedvig Kjellström [email protected]
KTH
Sweden
Silo AI
Sweden
Most action recognition models today are highly parameterized, and evaluated on datasets with appearance-wise distinct classes. It has also been shown that 2D Convolutional Neural Networks (CNNs) tend to be biased toward texture rather than shape in still image recognition tasks[19], in contrast to humans. Taken together, this raises suspicion that large video models partly learn spurious spatial texture correlations rather than to track relevant shapes over time to infer generalizable semantics from their movement. A natural way to avoid parameter explosion when learning visual patterns over time is to make use of recurrence. Biological vision consists of abundant recurrent circuitry, and is superior to computer vision in terms of domain shift generalization. In this article, we empirically study whether the choice of low-level temporal modeling has consequences for texture bias and cross-domain robustness. In order to enable a light-weight and systematic assessment of the ability to capture temporal structure, not revealed from single frames, we provide the Temporal Shape (TS) dataset, as well as modified domains of Diving48 allowing for the investigation of spatial texture bias in video models. The combined results of our experiments indicate that sound physical inductive bias such as recurrence in temporal modeling may be advantageous when robustness to domain shift is important for the task.
Introduction
One of the most fundamental questions when it comes to video understanding is how to model the dependency between frames in such a way that temporal relationships relevant to the activity in the video can be learned. A robust action recognition system should be able to figure out how frames relate to each other, and which shapes and objects have changed or persisted over time. With this knowledge, it can start to infer relationships at a higher level, such as object-object or agent-object relationships.
Three principally different approaches to frame dependency are 3D convolutions, self-attention and recurrence.
These methods model the world (the visual sequence) in principally different manners: linearly, non-linearly, and non-linearly with a time-causal direction, meaning that they each use different inductive biases for the temporal modeling (Fig. A). In spite of its essentiality, the frame dependency question has almost disappeared from action recognition articles, possibly in the race to improve on the classification benchmarks. Emphasis is instead placed on other aspects of deep video models, such as advanced architectural superstructures, regularization or training schemes. A shift toward attention-based video models has recently taken place, but without a discussion of the physical interpretation of its underlying temporal model.
Humans are still significantly stronger at generalization than artificial neural networks in vision tasks [20,46]. Recurrent models are critical in the only visual system that has been 'solved' to date, biological vision [1,10,12,13,15,32,34,44,52]. Based on the observation that feedback connections are abundant in biological but not computer vision [32,56], in this article, we hypothesize that the lack of recurrence when learning spatiotemporal features may be one reason for this discrepancy. We therefore investigate the following research question empirically in extensive and systematic experiments: do the principally different mathematical natures of 3D convolutions, self-attention and recurrence affect cross-domain robustness in video models, and in particular, does recurrence bring about an advantage?
Video models lack robustness to domain shift [8,61,62], and it has been repeatedly shown [8,25,35,60] that the datasets most frequently cited during the 2010s (UCF-101 [51], HMDB [33], Kinetics [29]) exhibit significant spatial biases. This is a plausible reason for the poor cross-domain robustness in action recognition, since overly relying on spatial rather than motion cues intuitively results in overfitting to one domain (e.g., certain backgrounds, viewpoints or similar actor appearances).
Contemporary state-of-the-art approaches to action recognition are predominantly either fully convolutional [5,17,18,21,60], combine convolutions with temporal sampling and fusion strategies [58,59,63], or, more recently, attention-based Video Transformers (VTs) [4, 23, 31, 45, 53]. The sheer size of the models, typically more than 50M trainable parameters, gives them a strong capacity to learn in-domain patterns. As models grow larger, ever more resources are spent to train them. State-of-the-art models should display competitive benchmarking numbers on large-scale datasets, such as Kinetics-400 and Kinetics-600. It is questionable whether these benchmarks are suitable for temporal modeling, or rather for how large amounts of YouTube clips efficiently can be stored as weight representations. At the same time, the reciprocal dependency between the hardware and software of standard graphics processing units (GPUs), on the one hand, and models requiring massive parallel computation for their training, on the other hand, is becoming ever more intertwined [30,41]. The question looms whether we have cornered ourselves in action recognition, in the expectancy to work on ever larger models, in industry as well as in academia. Theoretical works [2,39,50] have indicated that overparametrization helps generalization, in that local minima of the loss landscape for such models often are thought to be global. These studies are made on held-out data, but never on data with significant domain shift, to the best of our knowledge.
Figure 1. Animated figure, displayed on click in Adobe Reader. Example clip showing our three modified domains of Diving48 enabling the investigation of texture bias in video models (ordered from less to more texture): S1 (segmented divers over a blurred background), S2 (cropped bounding boxes of divers over a blurred background), and T (masked boxes of divers, and the original background).
Although less efficient to train on GPUs, recurrent video models have a more parameter-efficient approach per timestep, which may hinder over-reliance on texture cues, and promote learning the temporally relevant motion cues. The need to be economical with the use of trainable parameters, we hypothesize, creates incitement to learn better shape representations instead of texture representations. In turn, this allows for better generalization across datasets and in the wild. For contour detection, it was found that a model with recurrent dynamics was more sample-efficient and generalized better than a feed-forward model [37,38].
The primary contributions of our paper are as follows:
• We present the first empirical results from systematic experiments on how the choice of frame dependency modeling in action recognition can affect cross-domain robustness.
• We introduce a lightweight dataset allowing for investigation of both temporal shape modeling ability and domain generalization, called the Temporal Shape dataset.
• We provide the first discussion and experiments on shape vs. texture bias (following Geirhos et al. [19]) in deep video models.
• We make segmentation-based shape and texture versions of the Diving48 dataset public (as well as 303 instance-segmented frames), allowing studies on whether a video model has learned to rely more on (temporal) shape or on texture.
Related Work
Domain shift in action recognition. In [7,61], cross-domain datasets are introduced to study methods for video domain adaptation. [7] proposes to align the temporal features where the domain shift is most notable, whereas [61] proposes to improve the generalizability of so-called local features instead of global features, and use a novel augmentation scheme. Strikingly, however, all experiments in [7,61] are based on features extracted frame-by-frame, by a 2D ResNet [26], and aggregated after-the-fact, meaning that they in effect do not handle spatiotemporal features. Using frame-wise features saves large amounts of time and computation, but it avoids an essential aspect of video modeling. Different from the field of domain adaptation, we are not proposing methods on top of base architectures to reduce domain shift, but rather study empirically which types of fundamental video models inherently seem to be more robust to it. In an important work by Yi et al. [62], benchmarks are introduced to study robustness against common video corruptions, evaluated for spatiotemporal attention- and convolution-based models. Different from our work, the domain shift is restricted to data corruptions rather than the same classification task in a new domain, and recurrent models are not evaluated.
Emphasis on temporality in action recognition. Many works emphasize the importance of temporal modeling, as the field of video understanding is growing, e.g., [14,22,40,43,47,49,60,63]. [22] and [40] compare temporal modeling abilities between principally different architectures, but without explicitly investigating domain shift generalization. [49] examines video architectures and datasets for human activity understanding on a number of qualitative attributes such as pose variability, brevity and density of the actions. [28] investigates how much the motion contributes to the classification performance of the C3D architecture [54]. Both [6] and [55] perform large-scale studies of the features of different variants of 2D and 3D CNNs in action recognition. Last, we are connected to [47], which discusses the risk that models with strong image modeling abilities may prioritize those cues over the temporal modeling cues. Reminiscent of the findings of [19], the authors of [47] find that inflated convolutions tend to learn classes better where motion is less important, and that generalization can be helped by training on more temporally focused data (in analogy to training on shape-based data in [19]). Different from our work, however, only fully convolutional models are studied and the focus is not on comparing models with fundamentally different approaches to frame dependency modeling.
Experiment design
In this section, we describe the experiment design for the two datasets: Temporal Shape and Diving48.
Main idea. In all experiments, we begin by training on a specific domain, and validating on a held-out dataset from the same domain. We save the model checkpoint which performed the best on the validation set, and then proceed to evaluate it on other domains that are different in some respects but share the same task. Following [7], the domain we train on will be referred to as the source, and the unseen domains that we evaluate on as the target. To measure crossdomain robustness, we define the robustness ratio (rr.) as the ratio between a model's accuracy on a target domain and its best validation accuracy on the source domain. When the target task corresponds to the source task, this number should ideally be close to one (higher is better). It can be noted that the rr. is a heuristic metric, which builds on the assumption that the performance on the in-domain validation set typically is higher than on other domains. If the performance on the validation set is poor to begin with, the rr. is less informative.
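For concreteness, the robustness ratio can be computed as in the following minimal sketch; the numbers in the usage comment are illustrative only and not results from the paper.

```python
def robustness_ratio(target_acc: float, best_source_val_acc: float) -> float:
    """Robustness ratio (rr.): accuracy on a target domain divided by the best
    validation accuracy on the source domain. Values close to 1 indicate good
    transfer; the ratio is only informative when the source validation accuracy
    is reasonably high to begin with."""
    return target_acc / best_source_val_acc

# Illustrative usage (made-up numbers): a model at 38.3% source validation
# accuracy and 30.1% accuracy on the S1 domain has rr. = 0.301 / 0.383 ~ 0.79.
print(robustness_ratio(0.301, 0.383))
```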
Method common to all experiments. In our study, we are purposefully comparing the basic functionality of models. No pre-training, dropout, or data augmentation is applied in our experiments, except for 50% chance of horizontal flipping of the clips on Diving48. Sequences are uniformly sub-sampled into equal length (a fixed input size is required for the input to both 3D CNNs and attention-based models). There are non-uniform frame sampling methods, which can be used as augmentation, or as informed priors (e.g., the TimeSformer only samples the middle frames during inference in [4]); these are thus not used in our study, in order to study the bare bones of the models. Code related to neural networks was written in PyTorch [42] using Lightning [16]. Further implementation details and code can be found in the corresponding repositories ( Temporal Shape experiments, Diving48 experiments and diver segmentation). The datasets are available for download on Harvard Dataverse and linked to from the repositories.
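As a minimal sketch of the uniform sub-sampling mentioned above (not the authors' released code), the fixed-length frame indices can be obtained as follows:

```python
import numpy as np

def uniform_frame_indices(clip_length: int, target_length: int) -> np.ndarray:
    """Spread `target_length` indices evenly over a clip of `clip_length` frames,
    so that every model receives an input of the same fixed temporal size."""
    return np.linspace(0, clip_length - 1, num=target_length).round().astype(int)

# e.g. uniform_frame_indices(300, 32) picks 32 indices covering the whole clip.
```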
Models
We will compare ConvLSTMs, 3D CNNs and VTs, since these present three principally different temporal modeling approaches with varying types and degrees of inductive bias. As VT, we will use the TimeSformer [4], because it recently achieved state-of-the-art results on a number of action recognition benchmarks.
It is a challenging task to compare neural network models which have principally different architectures. In our work, we decided on controlling for three different factors: the performance on a particular dataset, the number of trainable parameters, and the layer structure (i.e., the number and expressivity of hierarchical abstractions). The experiments were designed prior to running them, to keep the process as unbiased as possible. The experiments are furthermore completely reproducible, as they were run on five fixed random seeds throughout the study.
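The parameter-count control can be made explicit with a one-line helper; this is a generic PyTorch sketch, not code from the paper's repositories.

```python
def count_trainable_parameters(model) -> int:
    """Number of trainable parameters, the quantity held fixed (about 14M)
    in the experiments that control for model size."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```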
Convolutional LSTMs. The ConvLSTM [48] layer functions like an LSTM layer [27], but with matrix multiplication replaced with 2D convolutions. This crucially means that they allow for the input to maintain its spatial structure, contrary to classical recurrent layers which require a flattened input. Frame dependency is modeled using recurrence, which introduces non-linearities between timesteps. Further, time can only flow in the causal direction. A Con-vLSTM video model, in this work, is a model fully based on these types of layers, with a classification head on top.
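A minimal ConvLSTM cell, written to illustrate the idea of replacing the LSTM's matrix multiplications with 2D convolutions; this is a generic sketch rather than the exact layer implementation used in the experiments.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """All four LSTM gates are produced by one 2D convolution over the
    concatenation of the current frame features and the previous hidden state,
    so the spatial layout of the feature maps is preserved across time."""
    def __init__(self, in_channels: int, hidden_channels: int, kernel_size: int = 3):
        super().__init__()
        self.gates = nn.Conv2d(in_channels + hidden_channels, 4 * hidden_channels,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state  # hidden and cell state, each of shape (B, hidden, H, W)
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)   # non-linear, time-causal state update
        h = o * torch.tanh(c)
        return h, (h, c)
```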
TimeSformer. The TimeSformer (hereon, TimeSf) [4] is a VT, relying entirely on self-attention mechanisms to model frame dependency. As in [11], each frame is first divided into patches, which are flattened. We use the TimeSformer-PyTorch library [57], mainly with the standard settings unless otherwise specified (divided space-time attention). Self-attention is applied both among the patches of one frame (spatial attention) and across patches located in the same positions across the temporal axis (temporal attention). Two variants are used in the TS experiments, with the number of heads A set to either 1 or 8 (TimeSf-1 and TimeSf-8). TimeSf-1 is closer to the ConvLSTM and 3D CNN in terms of parameter count, whereas TimeSf-8 is the standard setting. We note again that in order to study the fundamental behavior of the models, we do not use pretraining, advanced data augmentation, nor averaging over multiple predictions. This results in a lower performance on Diving48 for TimeSf than its state-of-the-art results. In order to control for layer structure or number of parameters which requires architectural modifications, it is not possible to use a pre-trained checkpoint. It is well-known that VTs, or Vision Transformers (ViTs) in general, require a lot of training data due to their minimal inductive bias. We therefore stress that we are not questioning the overall performance of these models -a pre-trained version would have performed better on the Diving48 task than in our experiments, but we are investigating the fundamental behavior of models in our experiments, prior to more advanced or large-scale training schemes.
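The divided space-time attention pattern can be illustrated with plain PyTorch reshaping. The sketch below ignores the class token, layer norms and MLPs of the real TimeSformer block, and is not the internals of the TimeSformer-PyTorch library.

```python
import torch
import torch.nn as nn

class DividedSpaceTimeAttention(nn.Module):
    """Tokens of shape (B, T, N, D): temporal attention ties together patches at
    the same spatial position across frames, spatial attention ties together
    patches within each frame. Both directions are time-symmetric."""
    def __init__(self, dim: int, heads: int):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.space_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens):
        B, T, N, D = tokens.shape
        x = tokens.permute(0, 2, 1, 3).reshape(B * N, T, D)                  # attend over time
        x = x + self.time_attn(x, x, x)[0]
        x = x.reshape(B, N, T, D).permute(0, 2, 1, 3).reshape(B * T, N, D)   # attend over space
        x = x + self.space_attn(x, x, x)[0]
        return x.reshape(B, T, N, D)
```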
3D CNNs. In a 3D CNN, time is treated as space, and thus the input video as a volume, across which we convolve local filter volumes. Convolution is a linear operation, meaning that the order of frames that the 3D filter traverses does not matter. Instead, all non-linearities are applied hierarchically, between layers, which is how this model still can learn the arrow of time. Its layer structure is typically similar to a 2D CNN, including batch normalization and pooling. This is also the case for the instances used in our study.
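A single 3D-convolutional block of the kind described above might look as follows; layer ordering and sizes are illustrative, not the exact configuration of the trained models.

```python
import torch.nn as nn

def conv3d_block(in_channels: int, out_channels: int, kernel_size: int = 3) -> nn.Sequential:
    """One block: a 3D convolution over the (time, height, width) volume,
    followed by max pooling and batch normalization."""
    return nn.Sequential(
        nn.Conv3d(in_channels, out_channels, kernel_size, padding=kernel_size // 2),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(kernel_size=2),
        nn.BatchNorm3d(out_channels),
    )
```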
Experiments on the Temporal Shape dataset
Our proposed TS dataset is a synthetically created dataset for classification of short clips showing either a square dot or a random MNIST digit tracing shapes with their trajectories over time (Fig. 2). The dataset has five different trajectory classes (i.e., temporal shapes): circle, line, arc, spiral and rectangle. The task is to recognize which class was drawn by the moving entity across the frames of the sequence. The spatial appearance of the moving object is not correlated with the class, and can thus not be employed in the recognition. In the 2Dot, 5Dot and MNIST domains, the background is black, and in MNIST-bg, the background contains white Perlin noise. The Perlin noise can be more or less fine-grained; scale is regulated by a random parameter σ ∈ [1,10]. The dataset can be thought of as a heavily scaled-down version of an action template dataset, such as 20BN-Something-something-v2 [24], stripped of appearance cues.
The sequences consist of 20 64x64 frames, in grey scale. Each of the five classes has different amounts of possible variation in their states. The shapes can have varying starting positions, starting angles, direction, size and speeds. In the experiments, 4000 clips were used for training and 1000 for validation (model selection), and 500 clips for evaluation only. The classes are sampled so as to obtain class balance.
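To make the data generation concrete, the sketch below draws a circle-class clip with a square dot. It mimics the spirit of the dataset (random radius, starting angle and speed) but is not the released generation code.

```python
import numpy as np

def circle_clip(num_frames: int = 20, size: int = 64, dot: int = 2) -> np.ndarray:
    """One grey-scale clip of the 'circle' class: a dot of width `dot` pixels
    traces a circle whose radius, speed and starting angle are randomized."""
    clip = np.zeros((num_frames, size, size), dtype=np.float32)
    radius = np.random.uniform(10, 24)
    start = np.random.uniform(0, 2 * np.pi)
    speed = np.random.choice([-1, 1]) * np.random.uniform(0.75, 1.25)
    for t in range(num_frames):
        angle = start + speed * 2 * np.pi * t / num_frames
        cy = int(size / 2 + radius * np.sin(angle))
        cx = int(size / 2 + radius * np.cos(angle))
        clip[t, cy - dot:cy + dot, cx - dot:cx + dot] = 1.0
    return clip
```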
Since the dataset is small, we use lightweight models. We control for layer structure by letting the compared models have three layers each of analogous blocks with the same number of hidden units in each. One block for the ConvLSTM and 3D CNN consists of a model-specific layer, max pooling, followed by batch normalization. These two models used the same convolutional kernel sizes in all three layers (3×3). For the TimeSformer, we used one TimeSformer layer as one block, and the latent dimension for each attention head, D h , as the number of hidden units, since these were similar in scale.
We run experiments for different numbers of hidden units per layer, h ∈ {2, 4, 6, 8, 10, 12, 16, 24, 32, 48}. For each of the ten experiments of varying model sizes, we train the models on the five-class task on the source domain for 100 epochs, with ten epochs of early stopping patience, repeated under five different random seeds set from the beginning of the study. For TimeSf-1 and TimeSf-8, the maximum number of epochs is 300 (100 for early stopping) because they demand more epochs to converge than the other two types of models, due to their minimal inductive bias. We then evaluate the best model checkpoint from the source domain on different target domains with the same classification task. Experiments were conducted in two 'directions', training on 2Dot and evaluating on the other domains, or training on MNIST-bg and evaluating on the other domains, since these represent two extremes on the continuum of less to more spatial noise.
Training on the TS data is light-weight compared to real video data, and runs fast (in the minutes-range, up to an hour, for the model sizes we evaluated) on one GPU card. We train with a batch size of 64 in all TS experiments.
Experiments on Diving48
Diving48 [35] is a well-known dataset for fine-grained and time-critical action recognition. It consists of 18k short clips with dives from 48 classes. Successfully classifying these dives requires temporal modeling, since one needs to keep track of the movements of the divers and their order. The dataset is varied appearance-wise, in terms of both backgrounds and viewpoints, which may contain unknown biases. The same competition sites can be present in both the training and test split, "to avoid biases for certain competitions", according to the authors [35]. Instead, in our view, this in fact increases the risk for bias, since the ability to recognize a dive at an unseen site is never tested. It would have been preferable to separate competition locations entirely between training and test set. Thus, even though the dataset presents a very challenging classification task from a temporal modeling perspective, it is likely not free from spatial biases (as will be demonstrated by our experiments).
Figure 2. The videos can be displayed on click in Adobe Reader. Example clip showing the four domains of the TS dataset, for the class circle. In 2Dot and 5Dot, the circle is drawn with a square of width 2 and 5 pixels. In MNIST and MNIST-bg, the circle is drawn with a MNIST digit, w/ and w/o a Perlin noise background.
Modified domains of Diving48. We always train on the original dataset, but evaluate our trained models on slightly modified domains of the original test set. We modify the test set into three new domains: two based on shape and one based on texture (following Geirhos et al. [19], Fig. 1).
To do this, we extend the concepts of shape and texture bias from [19] to the temporal domain in the following way. In the shape domains, we blur the background and only maintain the segmented diver(s) (S1), or the divers and their bounding boxes (S2). In the texture domain (T), we conversely mask bounding boxes where the diver(s) are in each frame, and only keep the background. The masked boxes are filled with the average Imagenet [9] pixel value, following [8]. The class evidence should lie only in the divers' movement; hence, the texture version should not contain any relevant signal, and the accuracy should ideally drop to random performance. In this way, we can study how different models drop in score when tested on the shape or texture domain, indicating both cross-domain robustness (for S1 and S2) and texture bias (for T).
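Given a frame and a binary diver mask, the three test domains can be constructed roughly as in the sketch below; the blur kernel size and exact mask handling are assumptions, not the released pipeline.

```python
import numpy as np
import cv2

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)

def make_domains(frame: np.ndarray, mask: np.ndarray):
    """frame: (H, W, 3) float image in [0, 1]; mask: (H, W) binary diver mask.
    Returns S1 (segmented diver on a blurred background), S2 (diver bounding
    box on a blurred background) and T (background only, diver box filled with
    the mean ImageNet value)."""
    blurred = cv2.GaussianBlur(frame, (31, 31), 0)
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

    s1 = blurred.copy()
    s1[mask > 0] = frame[mask > 0]

    s2 = blurred.copy()
    s2[y0:y1, x0:x1] = frame[y0:y1, x0:x1]

    t = frame.copy()
    t[y0:y1, x0:x1] = IMAGENET_MEAN
    return s1, s2, t
```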
Instance Segmentation of Diving48. The segmentation of divers in Diving48 is detailed in the supplemental. We release 303 manually labeled frames with instance segmentation (single or double dives), since off-the-shelf COCO-trained [36] networks fail at this task for the class Person, presumably because of the unusual shapes assumed in the air by the diver, or include people in the audience.
Training. Just as for TS, we deliberately avoid bells and whistles when training models on Diving48, to study their fundamental behavior. All three models are trained with the same SGD optimizer, cross-entropy loss, and a constant learning rate of 0.001. Each model is trained for 500 epochs maximally, with an early stopping patience of 30 epochs if the validation performance does not improve. The only data augmentation used is horizontal flipping of 50% probability for the entire clip. The models are trained using PyTorch Lightning's ddp parallelization scheme across eight A100 GPUs, with a batch size of 8 and a clip length of 32 uniformly sampled frames, at 224×224. Given that the purpose of our experiments is not to optimize classification performance, we evaluate the models at different levels of performance, ranging from 30% to 50% accuracy. Some of the advanced state-of-the-art methods today, including pre-training and heavy data augmentation, obtain up to 80% performance on Diving48, but when the dataset was introduced in 2018, and standard video methods were tested off-the-shelf on it, the best result was 27% accuracy [35]. Thus, the range of 30-50% is reasonably well-performing, and well above random (which is at 2.1%).
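In plain PyTorch, the shared optimization setup and the single augmentation amount to something like the sketch below; the momentum value is only stated for the supplemental TimeSformer variants and is an assumption here.

```python
import random
import torch
import torchvision.transforms.functional as TF

def make_optimizer(model: torch.nn.Module) -> torch.optim.SGD:
    """Shared setup: SGD with a constant learning rate of 0.001; momentum 0.9
    is an assumption carried over from the supplemental TimeSformer variants."""
    return torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

criterion = torch.nn.CrossEntropyLoss()

def maybe_flip_clip(clip: torch.Tensor) -> torch.Tensor:
    """With 50% probability, horizontally flip all frames of a (T, C, H, W)
    clip consistently; the only data augmentation used on Diving48."""
    return TF.hflip(clip) if random.random() < 0.5 else clip
```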
Experiments. We conduct three different kinds of experiments on Diving48, namely controlling for: layer structure and performance (a-c), performance of the best performing variants (d), and number of parameters and performance (e-h). The ConvLSTM has four blocks of 128 hidden ConvLSTM units each (14.3M params.) in all experiments.
a-c. Controlling for layer structure and performance. In this experiment, we let the models have four layers, with h = 128 in each. We again treat D h as the hidden unit analogy for TimeSf. We evaluate model checkpoints at different performance levels: 30%, 35%, and 38.3% accuracy. The last accuracy was chosen because it was the limiting, highest performance by the 3D CNN in this experiment. Having the same layer structure gives rise to a varying number of parameters for each type of model. Here, the 3D CNN has 10.6M params., and TimeSf 85M.
d. Controlling for performance only. Here, we compare models at their best performance, after hyperparameter search. Since it was not possible to train TimeSf to a higher accuracy than 39.7% in all variants we tried, this experiment was only conducted with the 3D CNN and ConvLSTM. The 3D CNN was an 11-layer VGG-style model (23.3M params.). The checkpoints used were both at exactly 50.07% validation accuracy.
e-h. Controlling for number of parameters and performance. Here, we have chosen models with a similar amount of trainable parameters, in this case 14M. To arrive at this number of parameters for TimeSf, its depth was reduced from 12 to 11, and D h and D were halved, to 32 and 256, respectively, relative to the default model. The 3D CNN has six blocks with 128 units in each.
Results and discussion
Having presented the experimental design for both datasets, next, we discuss our empirical findings, first on TS, and then on Diving48 and its modified domains.
Temporal Shape
Condensed results: TimeSf and ConvLSTM are more cross-domain robust than the 3D CNN in the absence of spatial texture bias. Training on 2Dot. Fig. 3a shows that although the 3D CNN generally obtains higher results on the source validation set and the nearby 5Dot domain, the ConvLSTM and TimeSf drop less compared to their original results when tested on MNIST (further from the source domain). ConvLSTM in fact outperforms the 3D CNN in absolute numbers on the MNIST domain. The inductive bias of a 3D CNN is highly local in space and time, which might impede learning of these temporal shapes. Generalization to the MNIST-bg domain proves too challenging for all three models.
Robustness ratio vs. model size. In Fig. 10, we have plotted the rr. for the three target domains when training on 2Dot. For 5Dot, the rr. for ConvLSTM decreases slightly with model size, whereas the 3D CNN and TimeSf, in contrast, increase the rr. with increased model size. For MNIST, which is further from the validation domain, the upward trend for the 3D CNN is broken, and less pronounced for TimeSf. For the most challenging domain, MNIST-bg, the rr. becomes very low for all three models with increased size. The trends in Figs. 10 a-c point to how a larger model size with promising performance in a nearby domain can potentially be an obstacle in domains that are further from the source for TimeSf and the 3D CNN.
Training on MNIST-bg. In this experiment, TimeSf-8 and TimeSf-1 were the most robust (Fig. 3 b). A VT is an excellent model when it comes to learning sparse, long-term dependencies in space and time. We hypothesize that this allowed TimeSf to fully disregard the Perlin noise (which is highly stochastic and demanding to model) and learn the true temporal shapes, and that this, in turn, allowed it to be unbiased in the other domains, since the training data was designed to exclude spatial bias. In real-world data, however, there will always be biases, and it is therefore best to construct models which inherently encode as little bias as possible, regardless of the training data.
Diving48: Sensitivity to shape and texture
Condensed results: ConvLSTM exhibits less texture bias and is more cross-domain robust than TimeSf and 3D CNN. Table 1 shows the average results for the Diving48 experiments. We note that ConvLSTM drops the most for T, both relative to the validation (T/V) and to the S1 (T/S1) accuracies. ConvLSTM is also most robust to the S2 domain, whereas the 3D CNN is most robust to the S1 domain.
Experiments a-d. In experiments a-c (Fig. 5), where we vary the validation accuracy on the source domain between 30% and 38.3%, both TimeSf and the 3D CNN perform better on T than on S1 and S2, even if only the two latter contain class evidence. This suggests that spatial bias is indeed present in Diving48, and that these models are more prone to encode it than ConvLSTM. Tables 2-4 show that T/S1 > 1 for these two models, also visible in Fig. 5 a-c.
Table 3. Results for experiment b: 4x128, 35% validation accuracy. * when a low T/V result is not accompanied by T/S1 < 1.
Table 4. Results for experiment c: 4x128, 38.3% validation accuracy. * when a low T/V result is not accompanied by T/S1 < 1.
(Footnote 2: In Tables 3-4, TimeSf drops the most for T relative to the validation set (T/V). This can be explained by its overall large drops, rather than being robust to texture bias, most clearly visible in Fig. 5 a-c. For T/V to be a meaningful metric, T/S1 should be < 1. Therefore, we have put asterisks on the lowest T/V results which are not accompanied by T/S1 < 1.)
In contrast, ConvLSTM clearly drops for T. TimeSf is large here, at 85M params., whereas the 3D CNN is interestingly quite small at 10.6M params. This suggests that
Model In experiment d, where we compare a ConvLSTM and a 3D CNN at 50.07% validation accuracy -the best results on Diving48, the 3D CNN does not longer improve on the texture dataset relative to S1 and S2, but the drop on T is S1 S2 T markedly larger for ConvLSTM (Table 5).
S1/V ↑ S2/V ↑ T/V ↓ T/S1 ↓ 3D
Qualitative examples and diving attributes. Table 6 shows a breakdown of the models' predictions on five randomly selected clips from a randomly chosen class (34). The model instances used here are from experiment c (38.3% acc.). Top-1 acc. for these five clips being equal for all models at 0.4, we note that ConvLSTM has 100% top-5 acc. for both S1 and S2, whereas the 3D CNN has 80% and 60% (40% and 40% for TimeSf). As for the texture (T) results, the top-5 acc. of the 3D CNN remains at 80% relative to S1 and even improves from 60% to 80% relative to S2, whereas ConvLSTM drops by 40% and TimeSf drops by 50%. Thus, so far ConvLSTM and TimeSf display sound dropping on T. Next, we study the predictions made by the models in detail to observe that there is a qualitative difference between the predictions of ConvLSTM and TimeSf. Each label of Diving48 has four attributes: takeoff, somersault, twist and flight position. Among the top-1 predictions for both S1 and S2 (Table 7), we study how many attributes are correct in the misclassifications for each model. The misclassifications (32,35) obtain three correct attributes, and for TimeSf, the best misclassification has only two correct attributes. This suggests that the 3D CNN and TimeSf have modeled the classes in terms of the true attributes to a lesser extent than ConvLSTM, i.e., ConvLSTM has learned more relevant temporal patterns, at the same global validation performance. Observing the three lower sections of Table 7 for further randomly selected classes 12, 22 and 45, the ConvLSTM still achieves the largest proportion of correct attributes in the misclassifications, just as for class 34.
Table 7. Examples of predictions and misclassifications. Each class has four attributes, and the Correct attr. column shows how many attributes were correct among the set of misclassifications.
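The attribute comparison above boils down to counting how many of the four label attributes a predicted class shares with the ground truth. The helper below is a hypothetical illustration; the attribute table itself comes from the Diving48 annotations and is not reproduced here.

```python
def num_correct_attributes(pred_class: int, true_class: int, attributes: dict) -> int:
    """`attributes` maps a class id to its (takeoff, somersault, twist, flight
    position) tuple; the return value is the 0-4 count reported in Table 7."""
    return sum(p == t for p, t in zip(attributes[pred_class], attributes[true_class]))
```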
Experiments e-h. The results for experiments e-h, where the number of trainable parameters and performance are fixed, are shown in Fig. 6 (tabulated results in the supplemental). Here, the 3D CNN is the most robust out of the three, although ConvLSTM approaches the 3D CNN and drops more steeply for T in g-h, where the performance is higher (40% and 45% acc.). In these experiments, although least robust, TimeSf does not improve on T relative to S1 and S2 any more. This suggests that TimeSf is more likely to display texture bias when it has a larger amount of parameters, as it does in experiments a-c.
Conclusions and discussion
We have studied cross-domain robustness for three models that are principally different in terms of temporal modeling, in their bare-bones settings. A 3D CNN treats frames as a linear volume, a VT lets frames have non-linear but time-symmetric relationships, and a ConvLSTM models frame dependency non-linearly in a time-causal direction. Recently, a discrepancy in terms of feedback connections between biological and computer vision has been discussed [32,56], and in our work we have hypothesized that the lack of feedback connections is one reason for the similarly lacking generalization abilities in computer vision.
Our experiments were carried out on two very different datasets, one synthetic, without bias, and one with natural data, thus with more noise and potential spatial bias. The combined results (Figs. 3-5, Tables 1 and 7) on these datasets indicated that convolutional-recurrent temporal modeling is more robust to domain shift than self-attention and 3D convolutions in terms of bare-bones behavior, presumably owing to its lesser encoding of texture bias. Our results are fully reproducible with public seeds, code and data. The fact that our observations regarding texture bias are made for a fine-grained dataset such as Diving48, constructed to contain as little bias as possible, suggests that the issue may be worse when it comes to more spatially biased datasets such as Kinetics, which is left for future work. It is furthermore left for future work whether ImageNet pre-trained VTs display more or less texture bias than their trained-from-scratch counterparts. Another observation from our study is that when the parameter count was kept equal (experiments e-h), these trends were less pronounced.
Moreover, qualitative random examples consistently showed that the ConvLSTM learned more relevant diving patterns than the two others, when scrutinizing the three models' misclassifications -which emphasizes the texture bias tendency of TimeSf and the 3D CNN. Sharing parameters across timesteps, as recurrent models do, narrows the parameter space, possibly incentivizing these models to prioritize which patterns to learn. Another reason to use smaller models is that they require less data to train, which is ethically desirable, both in that the data can be inspected more easily, and from a sustainability perspective [3].
Our study indicates that sound physical inductive bias such as recurrence in temporal modeling may be advantageous when robustness to domain shift is important for the task. In action recognition, benchmarking has thus far mainly been conducted for in-domain tasks where large models perform well. We encourage the video understanding community to increasingly conduct evaluation on tasks involving domain shift. We hope that our proposed datasets and framework for evaluation can help such future domain shift robustness investigations of spatiotemporal features.
A. Supplemental figures regarding the model concepts
Figure A highlights the conceptual differences between 3D convolution, self-attention and recurrence in terms of temporal modeling.
B. Plots for each model size on the Temporal Shape dataset
In the main article (Figure 3), the shaded area of standard error is across both model sizes and repeated runs with different seeds (meaning 10 * 5 = 50 runs per model and domain). Detailed plots for each model size with five repeated runs each are shown in Figures 8-9.
Table 9. Results for experiment (f): 14M parameters, 35% validation accuracy.
C. Robustness ratios for training both on 2Dot and MNIST-bg
In the main article, robustness ratios vs. model size are only plotted when training on 2Dot. In Figure 10, we include results when training on MNIST-bg as well. We show the two plots next to each other for comparison.
D. Detailed results on Diving48
Tables 8-11 show tabularized results corresponding to Figure 6 in the main article (experiments e-h).
E. Qualitative examples on Diving48
Here, we include the top-1 and top-5 accuracies tables corresponding to the qualitative examples of classes 12, 22 and 45 shown in Table 7 in the main article. In Tables 12, 13 and 14, the trends regarding the top-1 and top-5 accuracy on the different datasets are slightly less clear. We observe that in Tables 12 and 14, ConvLSTM and TimeSf drop the clearest in top-5 performance on T relative to S1 and S2. On the other hand, in Table 13 (Class 22), the top-5 accuracy is relatively improved on T compared to S1 and S2 for ConvLSTM and the 3D CNN, whereas TimeSf is unchanged. We inspected these clips, to verify that the segmentation had not failed, which it had not. However, the ConvLSTM is still the only one out of the three to have 20% in top-1 accuracy both for S1 and S2 on class 22, dropping to 0 in top-1 on T (Table 13). Last, for class 45, the ConvLSTM has the best results on S1 and S2 (20% top-5 accuracy) out of the three models (Tables 13 and 14 follow the same table structure as Table 12).
In the experiments, 4000 clips were used for training and 1000 for validation. The number of samples was chosen so as to be able to sample randomly with replacement, while still keeping the risk low that an identical clip occurs in both the training and the validation set. For the 2Dot domain, each class has more than 30k possible variations (lower bounds: 31k circle, 34k line, 51k rectangle, 150k arc), except the spiral class which has 7200 as a lower bound on the possible variations. When the training set consists of 5000 samples in total, we generate around 1000 samples per class. For the spiral class, a frequentist estimation gives that 800/7200 = 0.11 of the 200 spiral validation samples might be present in the training split (22 clips). However, this is still an over-estimation, since the spirals sometimes bounce against the sides of the frame which gives rise to extra variation. We decided to consider this as acceptable noise of the dataset. Some amount of data leakage can be considered interesting since this may occur in standard datasets as well.
F.2. Instance Segmentation of Diving48
To segment divers, it did not suffice to apply a pretrained network and use the class "Person", which we first attempted (DeepLabV3 pre-trained on MS-COCO, provided by PyTorch). First, the off-the-shelf model could often not recognize the divers in the air as the "Person" class: they can be upside down, or assume unusual shapes in the air. Second, the model would often detect pixels of the "Person" class in the audience, when an audience was visible, which we naturally did not want to include.
Figure 9. Average results (% acc.) across ten trials with varying numbers of hidden units per layer, repeated five times each. Training and validation on the MNIST-bg domain. The shaded area corresponds to standard deviation across the trials.

Thus, we resorted to labelling our own segmented frames from the dataset (no segmentation masks were available online). We manually labelled 303 frames from the dataset containing one or two divers, picked from 303 randomly chosen videos of the training split. When there were two divers, we segmented each as its own instance. The segmentation masks will be made public.
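For training a torchvision detection model on these frames, each labelled frame is paired with a target dictionary in the standard detection format. The sketch below shows one hypothetical annotation (all values are placeholders; the released masks are the actual annotations).

import torch

# One manually labelled frame with two divers (hypothetical values).
height, width = 720, 1280
target = {
    "boxes": torch.tensor([[310., 120., 420., 400.],
                           [505., 140., 610., 395.]]),        # (N, 4) in xyxy
    "labels": torch.ones(2, dtype=torch.int64),               # class 1 = diver
    "masks": torch.zeros((2, height, width), dtype=torch.uint8),
}
target["masks"][0, 120:400, 310:420] = 1   # placeholder masks; the real ones are
target["masks"][1, 140:395, 505:610] = 1   # hand-drawn diver silhouettes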
We fine-tuned a Mask R-CNN on our labelled dataset, using a random split of 290 frames as the training set and 13 frames for validation, and monitored the bounding-box IoU on the validation set. The best model achieved 93% validation bounding-box IoU, and we used it to segment the frames of the entire dataset (at 32 frames per clip). We thresholded on the confidence of the mask predictions. The nonzero predictions were mostly confined to a bounding box surrounding the diver(s). When the threshold was t = 0, bounding boxes around the divers were used as crops (S2). When it was increased to t = 0.4, we obtained proper segments of the diver shape (S1). The frames contain a lot of motion blur, which made the segmentation more challenging, and the segmentation at t = 0.4 is not perfect: sometimes parts of, for example, an arm or a foot are missing. The performance of the segmentation at t = 0.4 was deemed sufficient after manual inspection of 100 randomly chosen videos, where all videos had enough evidence to recognize the development of the dive. The segmentation at t = 0 (bounding boxes, S2) was satisfactory in all 100 clips inspected.
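A minimal sketch of this pipeline with torchvision is given below, assuming the standard recipe for swapping the box and mask heads (background + diver = 2 classes); the dataset class, training loop and exact hyperparameters are omitted, and diver_views is a hypothetical helper illustrating the two thresholds.

import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Start from the COCO-pretrained Mask R-CNN and swap both heads for 2 classes.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
in_box = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_box, num_classes=2)
in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes=2)

# ... fine-tune on the 290 labelled frames, validate on the remaining 13 ...

@torch.no_grad()
def diver_views(frame, t_mask=0.4):
    """frame: float tensor (3, H, W) in [0, 1]. Returns (S1-style, S2-style) views."""
    model.eval()
    pred = model([frame])[0]
    # S1: keep pixels whose predicted mask confidence exceeds t = 0.4.
    keep = (pred["masks"][:, 0] > t_mask).any(dim=0).float()
    segmented = frame * keep
    # S2: at t = 0 the prediction fills the box region, so use the boxes as crops.
    boxed = torch.zeros_like(frame)
    for x1, y1, x2, y2 in pred["boxes"].round().long().tolist():
        boxed[:, y1:y2, x1:x2] = frame[:, y1:y2, x1:x2]
    return segmented, boxed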
G. Parameter count

H. TimeSformer variants attempted for training

Table 17 lists the different variants we tested when training on Diving48 from scratch. In all variants, the number of heads was 8 (A = 8), the patch size was 16 × 16, the learning rate was fixed at 0.001, and the weight decay was 0.00001. When SGD was used, the momentum was always 0.9.

I. Model specifications for the Diving48 experiments

Table 16 lists the different model specifications for each of the eight experiments a-h on Diving48 in the main article. Further details on the models are given in the main article and in the code repository.
Figure 3. Average results (% acc.) across ten trials with varying numbers of hidden units per layer, repeated five times each (thus in total, 50 runs per model). Plots corresponding to each model size can be found in the supplemental.
Figure 4. Robustness ratio (rr.) (↑) when training on 2Dot, vs. number of hidden units per layer. The target domain is progressively further away from the source in subplots a-c. TimeSf-1 is excluded here due to its near random validation accuracy for small model sizes.

Figure 5. Diving48 accuracy drops, from source to target, experiments a-d. Bv. is best result in the validation domain. Note how ConvLSTM drops for T, in contrast to the 3D CNN and TimeSf.
Class 34 has the attribute values inward takeoff, 2.5 somersault, no twist and tuck flight position. For ConvLSTM, the misclassifications of class 34 are 8, 20, 35 and 44, where 8, 35 and 44 all contain 3/4 correct attributes, and 20 contains 1/4 correct attributes (no twist). For the 3D CNN, only two predictions
Figure 6. Experiments e-h on Diving48, reads as Fig. 5.

3D CNN comes second, and TimeSf last.
Figure 7. An overview of the conceptual differences in terms of frame dependency modeling between 3D convolutions, self-attention and recurrence.
Figure 10. Robustness ratio (rr.) (↑) when training on 2Dot (top, same as in the main article for comparison) and on MNIST-bg (bottom), vs. number of hidden units per layer. The target domain is progressively further away from the source in subplots a-c. TimeSf-1 is excluded here due to its near random validation accuracy for small model sizes.
Table 5. Experiment d: best variants, 50.07% val. accuracy.
Table 11. Results for experiment (h): 14M parameters, 45% validation accuracy.
Table 12. Qualitative example with predictions on five random clips from class 12, made by the model instances from experiment c) (38.3% acc.).
Model       S1 Top-1   S1 Top-5   S2 Top-1   S2 Top-5   T Top-1   T Top-5
ConvLSTM    0.2        0.2        0.2        0.2        0.0       0.6
3D CNN      0.0        0.4        0.2        0.6        0.0       0.8
TimeSf      0.0        0.2        0.0        0.2        0.0       0.2
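For reference, the top-1 and top-5 scores in Tables 12-14 can be computed from model logits as follows; this is a generic sketch, not tied to a specific checkpoint or data loader.

import torch

def topk_accuracy(logits, labels, ks=(1, 5)):
    """logits: (N, num_classes), labels: (N,). Returns {k: accuracy}."""
    out = {}
    for k in ks:
        topk = logits.topk(k, dim=1).indices            # (N, k)
        hit = (topk == labels.unsqueeze(1)).any(dim=1)  # (N,)
        out[k] = hit.float().mean().item()
    return out

# Five clips from one class, 48 Diving48 classes (random logits for illustration).
logits = torch.randn(5, 48)
labels = torch.full((5,), 12)
print(topk_accuracy(logits, labels))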
Table 13. Class 22, same table structure as Table 12.
Table 14. Class 45, same table structure as Table 12.
Figure 8. Average results (% acc.) across ten trials with varying numbers of hidden units per layer, repeated five times each. Training and validation on the 2Dot domain. The shaded area corresponds to standard deviation across the trials.
(Panels of Figures 8 and 9: one subplot per model size, h = 2, 4, 6, 8, 10, 12, 16, 24, 32 and 48; y-axis: accuracy; x-axis domains: Best val., 5dot, MNIST, MNIST-bg for Figure 8 and Best val., MNIST, 5Dot, 2Dot for Figure 9; legend: 3D CNN, ConvLSTM, TimeSf-8, TimeSf-1.)
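A sketch of how such a panel grid can be produced with matplotlib is shown below, with hypothetical data: one subplot per hidden-unit setting, showing mean ± standard deviation over the five seeds.

import numpy as np
import matplotlib.pyplot as plt

hidden_sizes = [2, 4, 6, 8, 10, 12, 16, 24, 32, 48]
domains = ["Best val.", "5dot", "MNIST", "MNIST-bg"]
models = ["3D CNN", "ConvLSTM", "TimeSf-8", "TimeSf-1"]

rng = np.random.default_rng(0)
acc = rng.uniform(0.2, 0.9, size=(len(models), len(hidden_sizes), 5, len(domains)))

x = np.arange(len(domains))
fig, axes = plt.subplots(5, 2, figsize=(8, 14), sharey=True)
for ax, h_idx in zip(axes.ravel(), range(len(hidden_sizes))):
    for m_idx, name in enumerate(models):
        mean = acc[m_idx, h_idx].mean(axis=0)
        std = acc[m_idx, h_idx].std(axis=0)
        ax.plot(x, mean, label=name)
        ax.fill_between(x, mean - std, mean + std, alpha=0.2)
    ax.set_title(f"h={hidden_sizes[h_idx]}")
    ax.set_xticks(x)
    ax.set_xticklabels(domains, rotation=45)
axes[0, 0].legend()
plt.tight_layout()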
Table 15 shows the number of parameters for the various architectures used in the Temporal Shape experiments.
Table 15. List of the number of trainable parameters for each model at each of the ten experiments on Temporal Shape, where the model complexity was increased (the number of hidden units per layer, for three-layer models). TimeSformer-8 and TimeSformer-1 designate A = 8 or A = 1, respectively, i.e., the number of attention heads per layer.

# hidden per layer   3D CNN   ConvLSTM   TimeSformer-8   TimeSformer-1   (Nb. parameters)
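The entries of Table 15 are presumably obtained with the standard PyTorch count of trainable parameters; a minimal helper is shown below (the toy module in the example is illustrative only).

import torch.nn as nn

def count_trainable(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Toy module; for Table 15 the same call would be made on each 3D CNN,
# ConvLSTM and TimeSformer instance at every hidden size.
print(count_trainable(nn.Sequential(nn.Conv3d(3, 8, 3), nn.Linear(8, 5))))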
Table 16. List of the model variants used in the experiments a-h for Diving48. For the 3D CNN and ConvLSTM, the [x,y,z] lists designate the number of hidden units per layer (x for the first layer, y for the second, z for the third, etc.), and the filter sizes lists similarly correspond to the filter size per layer.

Experiment a: 3D CNN: Hidden [128,128,128,128], Filter sizes [7,7,5,3]; ConvLSTM: Hidden [128,128,128,128], Filter sizes [7,7,5,3]; TimeSformer: Depth=4, D = 1024, D_h = 128
Experiment b: 3D CNN: Hidden [128,128,128,128], Filter sizes [7,7,5,3]; ConvLSTM: Hidden [128,128,128,128], Filter sizes [7,7,5,3]; TimeSformer: Depth=4, D = 1024, D_h = 128
Experiment c: 3D CNN: Hidden [128,128,128,128], Filter sizes [7,7,5,3]; ConvLSTM: Hidden [128,128,128,128], Filter sizes [7,7,5,3]; TimeSformer: Depth=4, D = 1024, D_h = 128
Experiment d: 3D CNN: Hidden [32,64,128,128,128,256,256,256,512,512,512], Filter sizes [5,3,3,3,3,3,3,3,3,3,3]; ConvLSTM: Hidden [128,128,128,128], Filter sizes [7,7,5,3]; TimeSformer: -
Experiment e: 3D CNN: Hidden [128,128,128,128,128,128], Filter sizes [7,7,7,5,3,3]; ConvLSTM: Hidden [128,128,128,128], Filter sizes [7,7,5,3]; TimeSformer: Depth=11, D = 256, D_h = 32
Experiment f: 3D CNN: Hidden [128,128,128,128,128,128], Filter sizes [7,7,7,5,3,3]; ConvLSTM: Hidden [128,128,128,128], Filter sizes [7,7,5,3]; TimeSformer: Depth=11, D = 256, D_h = 32
Experiment g: 3D CNN: Hidden [128,128,128,128,128,128], Filter sizes [7,7,7,5,3,3]; ConvLSTM: Hidden [128,128,128,128], Filter sizes [7,7,5,3]; TimeSformer: -
Experiment h: 3D CNN: Hidden [128,128,128,128,128,128], Filter sizes [7,7,7,5,3,3]; ConvLSTM: Hidden [128,128,128,128], Filter sizes [7,7,5,3]; TimeSformer: -
Table 17. List of attempted TimeSformer variants, trained from scratch on Diving48. D and D_h are parameters in the TimeSformer [4] architecture, attn. do. and ff. do. are attention dropout and feed-forward network dropout, T is the number of uniformly sampled frames that constitute the clip, and additional ll. means an additional linear layer on top of the predictions output from the TimeSformer model.

Best val.   Ep.   D      D_h   Depth   Attn. do.   Ff. do.   T    Batch size   Optimizer   Additional ll.   Patience
32.7        88    512    64    12      0           0         8    8            SGD         1                30
31.5        84    512    64    12      0           0         8    8            SGD         0                30
36.1        78    512    64    3       0           0         32   8            SGD         0                30
39.7        122   1024   128   4       0           0         32   8            SGD         0                30
31.1        76    512    64    12      0.1         0.1       8    8            SGD         0                30
31.7        71    256    32    11      0           0         8    8            SGD         0                30
36.5        85    256    32    11      0           0         32   8            SGD         0                30
19.0        79    256    32    11      0           0         32   8            Adam        0                30
31.7        75    256    32    11      0           0         8    32           Adam        0                30
32.4        133   256    32    11      0           0         8    48           SGD         0                30
36.5        85    256    32    11      0           0         32   8            SGD         0                75
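The rows of Table 17 map onto the constructor of the TimeSformer-pytorch package listed in the references (github.com/lucidrains/TimeSformer-pytorch). The sketch below instantiates the best-performing row (39.7% val.), assuming the constructor signature shown in that repository's README; image_size and other values not listed in the table are placeholders.

# pip install timesformer-pytorch
import torch
from timesformer_pytorch import TimeSformer

model = TimeSformer(
    dim=1024,          # D
    image_size=224,    # placeholder resolution
    patch_size=16,
    num_frames=32,     # T
    num_classes=48,
    depth=4,
    heads=8,           # A = 8 in all variants
    dim_head=128,      # D_h
    attn_dropout=0.0,
    ff_dropout=0.0,
)

video = torch.randn(1, 32, 3, 224, 224)   # (batch, frames, channels, height, width)
logits = model(video)                      # (1, 48)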
A list of the variants we attempted with TimeSf is in the supplemental.
Tables containing the corresponding top-1 and top-5 accuracy for these additional clips are in the supplemental.
Acknowledgements. The computations were enabled by the supercomputing resource Berzelius provided by National Supercomputer Centre at Linköping University and the Knut and Alice Wallenberg foundation. We further thank Marcus Klasson, Taras Kucherenko and Ci Li for helpful feedback and discussions.
Contribution of feedforward, lateral and feedback connections to the classical receptive field center and extra-classical receptive field surround of primate V1 neurons. Alessandra Angelucci, Paul C Bressloff, Progress in Brain Research. 154Alessandra Angelucci and Paul C. Bressloff. Contribution of feedforward, lateral and feedback connections to the clas- sical receptive field center and extra-classical receptive field surround of primate V1 neurons. Progress in Brain Research, 154:93-120, 2006.
Reconciling modern machine-learning practice and the classical bias-variance trade-off. Mikhail Belkin, Daniel Hsu, Siyuan Ma, Soumik Mandal, Proceedings of the National Academy of Sciences. the National Academy of Sciences116Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Man- dal. Reconciling modern machine-learning practice and the classical bias-variance trade-off. Proceedings of the Na- tional Academy of Sciences, 116(32):15849-15854, 2019.
On the dangers of stochastic parrots: Can language models be too big. Emily M Bender, Timnit Gebru, Angelina Mcmillan-Major, Shmargaret Shmitchell, ACM Conference on Fairness, Accountability, and Transparency. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic par- rots: Can language models be too big? In ACM Conference on Fairness, Accountability, and Transparency, 2021.
Is space-time attention all you need for video understanding. Gedas Bertasius, Heng Wang, Lorenzo Torresani, International Conference on Machine Learning. Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In International Conference on Machine Learning, 2021.
Quo Vadis, Action Recognition? A new model and the kinetics dataset. Joao Carreira, Andrew Zisserman, IEEE Conference on Computer Vision and Pattern Recognition. Joao Carreira and Andrew Zisserman. Quo Vadis, Action Recognition? A new model and the kinetics dataset. In IEEE Conference on Computer Vision and Pattern Recog- nition, 2017.
Deep analysis of CNN-based spatio-temporal representations for action recognition. Chun-Fu Richard Chen, Rameswar Panda, Kandan Ramakrishnan, Rogerio Feris, John Cohn, Aude Oliva, Quanfu Fan, IEEE Conference on Computer Vision and Pattern Recognition. Chun-Fu Richard Chen, Rameswar Panda, Kandan Ramakr- ishnan, Rogerio Feris, John Cohn, Aude Oliva, and Quanfu Fan. Deep analysis of CNN-based spatio-temporal represen- tations for action recognition. In IEEE Conference on Com- puter Vision and Pattern Recognition, 2021.
Temporal attentive alignment for large-scale video domain adaptation. Min-Hung Chen, Zsolt Kira, Ghassan Al-Regib, Jaekwon Yoo, Ruxin Chen, Jian Zheng, 2019 IEEE International Conference on Computer Vision. Min-Hung Chen, Zsolt Kira, Ghassan Al-Regib, Jaekwon Yoo, Ruxin Chen, and Jian Zheng. Temporal attentive align- ment for large-scale video domain adaptation. In 2019 IEEE International Conference on Computer Vision, 2019.
Why can't I dance in the mall? Learning to mitigate scene bias in action recognition. Jinwoo Choi, Chen Gao, C E Joseph, Jia-Bin Messou, Huang, Advances in Neural Information Processing Systems. Jinwoo Choi, Chen Gao, Joseph C.E. Messou, and Jia-Bin Huang. Why can't I dance in the mall? Learning to miti- gate scene bias in action recognition. In Advances in Neural Information Processing Systems, 2019.
Imagenet: A large-scale hierarchical image database. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, K Li, Li Fei-Fei, IEEE Conference on Computer Vision and Pattern Recognition. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, K. Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pat- tern Recognition, 2009.
Competition for consciousness among visual events: the psychophysics of reentrant visual processes. James T Vincent Di Lollo, Ronald A Enns, Rensink, Journal of Experimental Psychology. General. 129Vincent di Lollo, James T. Enns, and Ronald A. Rensink. Competition for consciousness among visual events: the psy- chophysics of reentrant visual processes. Journal of Experi- mental Psychology. General, 129 4:481-507, 2000.
Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, International Conference on Learning Representations. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Syl- vain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representa- tions, 2021.
Recurrent excitation in neocortical circuits. R Douglas, Christof Koch, Misha A Mahowald, H E Martin, Ortiz, Suarez, Science. 269R Douglas, Christof Koch, Misha A. Mahowald, KA Martin, and H. E. Ortiz Suarez. Recurrent excitation in neocortical circuits. Science, 269:981 -985, 1995.
Recurrent neuronal circuits in the neocortex. J Rodney, Kevan A C Douglas, Martin, Current Biology. 17Rodney J. Douglas and Kevan A. C. Martin. Recurrent neu- ronal circuits in the neocortex. Current Biology, 17:R496- R500, 2007.
Temporal reasoning in videos using convolutional gated recurrent units. Debidatta Dwibedi, Pierre Sermanet, Jonathan Tompson, IEEE Conference on Computer Vision and Pattern Recognition Workshops. Debidatta Dwibedi, Pierre Sermanet, and Jonathan Tompson. Temporal reasoning in videos using convolutional gated re- current units. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018.
Masking disrupts reentrant processing in human visual cortex. Johannes Jacobus Fahrenfort, H Steven Scholte, A F Victor, Lamme, Journal of Cognitive Neuroscience. 19Johannes Jacobus Fahrenfort, H. Steven Scholte, and Vic- tor A. F. Lamme. Masking disrupts reentrant processing in human visual cortex. Journal of Cognitive Neuroscience, 19:1488-1497, 2007.
A framework for contrastive self-supervised learning and designing a new approach. William Falcon, Kyunghyun Cho, arXiv:2009.00104arXiv preprintWilliam Falcon and Kyunghyun Cho. A framework for con- trastive self-supervised learning and designing a new ap- proach. arXiv preprint arXiv:2009.00104, 2020.
SlowFast networks for video recognition. Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, Kaiming He, IEEE International Conference on Computer Vision. Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. SlowFast networks for video recognition. In IEEE International Conference on Computer Vision, 2019.
Girshick, and Kaiming He. A large-scale study on unsupervised spatiotemporal representation learning. Christoph Feichtenhofer, Haoqi Fan, Bo Xiong, B Ross, IEEE Conference on Computer Vision and Pattern Recognition. Christoph Feichtenhofer, Haoqi Fan, Bo Xiong, Ross B. Gir- shick, and Kaiming He. A large-scale study on unsupervised spatiotemporal representation learning. In IEEE Conference on Computer Vision and Pattern Recognition, 2021.
Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, Wieland Brendel, International Conference on Learning Representations. Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. Imagenet-trained CNNs are biased towards texture; increas- ing shape bias improves accuracy and robustness. In Inter- national Conference on Learning Representations, 2019.
Generalisation in humans and deep neural networks. Robert Geirhos, R M Carlos, Jonas Temme, Rauber, H Heiko, Matthias Schütt, Felix A Bethge, Wichmann, Advances in Neural Information Processing Systems. Robert Geirhos, Carlos RM Temme, Jonas Rauber, Heiko H Schütt, Matthias Bethge, and Felix A Wichmann. Generali- sation in humans and deep neural networks. In Advances in Neural Information Processing Systems, 2018.
Large-scale weakly-supervised pre-training for video action recognition. Deepti Ghadiyaram, Matt Feiszli, Du Tran, Xueting Yan, Heng Wang, Dhruv Kumar Mahajan, IEEE Conference on Computer Vision and Pattern Recognition. Deepti Ghadiyaram, Matt Feiszli, Du Tran, Xueting Yan, Heng Wang, and Dhruv Kumar Mahajan. Large-scale weakly-supervised pre-training for video action recognition. In IEEE Conference on Computer Vision and Pattern Recog- nition, 2019.
Video time: Properties, encoders and evaluation. A Ghodrati, E Gavves, C G M Snoek, British Machine Vision Conference. A. Ghodrati, E. Gavves, and C. G. M. Snoek. Video time: Properties, encoders and evaluation. In British Machine Vi- sion Conference, 2018.
Video action transformer network. Rohit Girdhar, João Carreira, Carl Doersch, Andrew Zisserman, IEEE Conference on Computer Vision and Pattern Recognition. Rohit Girdhar, João Carreira, Carl Doersch, and Andrew Zis- serman. Video action transformer network. In IEEE Confer- ence on Computer Vision and Pattern Recognition, 2019.
The "something something" video database for learning and evaluating visual common sense. Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fründ, Peter N Yianilos, Moritz Mueller-Freitag, Florian Hoppe, Christian Thurau, Ingo Bax, Roland Memisevic, IEEE International Conference on Computer Vision. Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michal- ski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fründ, Peter N. Yianilos, Moritz Mueller-Freitag, Florian Hoppe, Christian Thurau, Ingo Bax, and Roland Memisevic. The "something something" video database for learning and evaluating visual common sense. In IEEE International Conference on Computer Vi- sion, 2017.
Rethinking training data for mitigating representation biases in action recognition. Kensho Hara, Yuchi Ishikawa, Hirokatsu Kataoka, IEEE Conference on Computer Vision and Pattern Recognition Workshops. Kensho Hara, Yuchi Ishikawa, and Hirokatsu Kataoka. Re- thinking training data for mitigating representation biases in action recognition. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2021.
Deep residual learning for image recognition. X Kaiming He, Shaoqing Zhang, Jian Ren, Sun, IEEE Conference on Computer Vision and Pattern Recognition. Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Long short-term memory. Sepp Hochreiter, Jürgen Schmidhuber, Neural Comput. 98Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735-1780, Nov. 1997.
What makes a video a video: Analyzing temporal information in video understanding models and datasets. De-An Huang, Vignesh Ramanathan, Dhruv Mahajan, Lorenzo Torresani, Manohar Paluri, Li Fei-Fei, Juan Carlos Niebles, IEEE Conference on Computer Vision and Pattern Recognition. De-An Huang, Vignesh Ramanathan, Dhruv Mahajan, Lorenzo Torresani, Manohar Paluri, Li Fei-Fei, and Juan Carlos Niebles. What makes a video a video: Analyz- ing temporal information in video understanding models and datasets. In IEEE Conference on Computer Vision and Pat- tern Recognition, 2018.
The kinetics human action video dataset. Will Kay, João Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman, abs/1705.06950CoRRWill Kay, João Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, and Andrew Zisserman. The kinetics human action video dataset. CoRR, abs/1705.06950, 2017.
NVIDIA and the battle for the future of AI chips. Nicole Kobie, 2021Nicole Kobie. NVIDIA and the battle for the future of AI chips. Wired, 2021.
Videolightformer: Lightweight action recognition using transformers. Raivo Koot, Haiping Lu, arXiv:2107.00451arXiv preprintRaivo Koot and Haiping Lu. Videolightformer: Lightweight action recognition using transformers. arXiv preprint arXiv:2107.00451, 2021.
Beyond the feedforward sweep: feedback computations in the visual cortex. G Kreiman, Thomas Serre, Annals of the New York Academy of Sciences. G. Kreiman and Thomas Serre. Beyond the feedforward sweep: feedback computations in the visual cortex. Annals of the New York Academy of Sciences, 1464, 2020.
HMDB: A large video database for human motion recognition. H Kuehne, H Jhuang, E Garrote, T Poggio, T Serre, IEEE International Conference on Computer Vision. H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: A large video database for human motion recogni- tion. In IEEE International Conference on Computer Vision, 2011.
The distinct modes of vision offered by feedforward and recurrent processing. A F Victor, Pieter R Lamme, Roelfsema, Trends in Neurosciences. 23Victor A. F. Lamme and Pieter R. Roelfsema. The distinct modes of vision offered by feedforward and recurrent pro- cessing. Trends in Neurosciences, 23:571-579, 2000.
RESOUND: Towards action recognition without representation bias. Yingwei Li, Yi Li, Nuno Vasconcelos, European Conference on Computer Vision. Yingwei Li, Yi Li, and Nuno Vasconcelos. RESOUND: To- wards action recognition without representation bias. In Eu- ropean Conference on Computer Vision, 2018.
Microsoft COCO: Common Objects in Context. Tsung-Yi Lin, Michael Maire, Serge J Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, C Lawrence Zitnick, European Conference on Computer Vision. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision, 2014.
Stable and expressive recurrent vision models. Drew Linsley, Alekh Karkada Ashok, Lakshmi Narasimhan Govindarajan, Rex Liu, Thomas Serre, Advances in Neural Information Processing Systems. Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien LinDrew Linsley, Alekh Karkada Ashok, Lakshmi Narasimhan Govindarajan, Rex Liu, and Thomas Serre. Stable and expressive recurrent vision models. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Informa- tion Processing Systems, 2020.
Recurrent neural circuits for contour detection. Drew Linsley, Junkyung Kim, Alekh Ashok, Thomas Serre, International Conference on Learning Representations. Drew Linsley, Junkyung Kim, Alekh Ashok, and Thomas Serre. Recurrent neural circuits for contour detection. In In- ternational Conference on Learning Representations, 2020.
The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning. Siyuan Ma, Raef Bassily, Mikhail Belkin, International Conference on Machine Learning. Siyuan Ma, Raef Bassily, and Mikhail Belkin. The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning. In International Con- ference on Machine Learning, 2018.
Interpreting Video Features: a Comparison of 3D Convolutional Networks and Convolutional LSTM Networks. Joonatan Mänttäri, * , Sofia Broomé, * , John Folkesson, Hedvig Kjellström, Asian Conference on Computer Vision. (*Joint first authors. 2020Joonatan Mänttäri*, Sofia Broomé*, John Folkesson, and Hedvig Kjellström. Interpreting Video Features: a Compari- son of 3D Convolutional Networks and Convolutional LSTM Networks. In Asian Conference on Computer Vision. (*Joint first authors), 2020.
Accelerating SE(3)-Transformers Training Using an NVIDIA Open-Source Model Implementation. Alexandre Milesi, Alexandre Milesi. Accelerating SE(3)-Transformers Train- ing Using an NVIDIA Open-Source Model Implementation. https://bit.ly/3wQac3v/. Accessed: 2021-11-01.
Pytorch: An imperative style, high-performance deep learning library. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary Devito, Advances in Neural Information Processing Systems. Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith ChintalaAdam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Rai- son, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, 2019.
Seeing the arrow of time. Lyndsey C Pickup, Zheng Pan, Donglai Wei, Yichang Shih, Changshui Zhang, Andrew Zisserman, Bernhard Schölkopf, William T Freeman, IEEE Conference on Computer Vision and Pattern Recognition. Lyndsey C. Pickup, Zheng Pan, Donglai Wei, YiChang Shih, Changshui Zhang, Andrew Zisserman, Bernhard Schölkopf, and William T. Freeman. Seeing the arrow of time. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.
Corticocortical connections in the visual system: structure and function. Paul Antoine Salin, J Bullier, Physiological Reviews. 75Paul Antoine Salin and J. Bullier. Corticocortical connec- tions in the visual system: structure and function. Physio- logical Reviews, 75 1:107-54, 1995.
Javier Selva, Anders S Johansen, Sergio Escalera, Kamal Nasrollahi, Thomas B Moeslund, Albert Clapés, arXiv:2201.05991Video transformers: A survey. arXiv preprintJavier Selva, Anders S. Johansen, Sergio Escalera, Kamal Nasrollahi, Thomas B. Moeslund, and Albert Clapés. Video transformers: A survey. arXiv preprint arXiv:2201.05991, 2022.
Deep learning: The good, the bad, and the ugly. Annual Review of Vision Science. Thomas Serre, Thomas Serre. Deep learning: The good, the bad, and the ugly. Annual Review of Vision Science, 2019.
Only time can tell: Discovering temporal data for temporal modeling. Laura Sevilla-Lara, Shengxin Zha, Zhicheng Yan, Vedanuj Goswami, Matt Feiszli, Lorenzo Torresani, IEEE Winter Conference on Applications of Computer Vision. Laura Sevilla-Lara, Shengxin Zha, Zhicheng Yan, Vedanuj Goswami, Matt Feiszli, and Lorenzo Torresani. Only time can tell: Discovering temporal data for temporal modeling. In IEEE Winter Conference on Applications of Computer Vi- sion, 2021.
Convolutional LSTM Network: A machine learning approach for precipitation nowcasting. Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, Wang Chun Woo, Advances in Neural Information Processing Systems. Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang chun Woo. Convolutional LSTM Network: A machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing Systems, 2015.
What actions are needed for understanding human actions in videos. A Gunnar, Olga Sigurdsson, Abhinav Kumar Russakovsky, Gupta, IEEE International Conference on Computer Vision. Gunnar A. Sigurdsson, Olga Russakovsky, and Abhinav Ku- mar Gupta. What actions are needed for understanding hu- man actions in videos? In IEEE International Conference on Computer Vision, 2017.
Theoretical insights into the optimization landscape of overparameterized shallow neural networks. Mahdi Soltanolkotabi, Adel Javanmard, J Lee, IEEE Transactions on Information Theory. 65Mahdi Soltanolkotabi, Adel Javanmard, and J. Lee. The- oretical insights into the optimization landscape of over- parameterized shallow neural networks. IEEE Transactions on Information Theory, 65:742-769, 2019.
UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild. Khurram Soomro, Mubarak Amir Roshan Zamir, Shah, abs/1212.0402CoRR. Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild. CoRR, abs/1212.0402, 2012.
Two distinct modes of sensory processing observed in monkey primary visual cortex (v1). Hans Supèr, Henk Spekreijse, A F Victor, Lamme, Nature Neuroscience. 4Hans Supèr, Henk Spekreijse, and Victor A. F. Lamme. Two distinct modes of sensory processing observed in monkey primary visual cortex (v1). Nature Neuroscience, 4:304-310, 2001.
VIM-PAC: Video Pre-Training via Masked Token Prediction and Contrastive Learning. Jie Hao Tan, Thomas Lei, Mohit Wolf, Bansal, Hao Tan, Jie Lei, Thomas Wolf, and Mohit Bansal. VIM- PAC: Video Pre-Training via Masked Token Prediction and Contrastive Learning, 2021.
Learning Spatiotemporal Features with 3D Convolutional Networks. Du Tran, D Lubomir, Rob Bourdev, Lorenzo Fergus, Manohar Torresani, Paluri, IEEE International Conference on Computer Vision. Du Tran, Lubomir D. Bourdev, Rob Fergus, Lorenzo Tor- resani, and Manohar Paluri. Learning Spatiotemporal Fea- tures with 3D Convolutional Networks. In IEEE Interna- tional Conference on Computer Vision, 2015.
A closer look at spatiotemporal convolutions for action recognition. Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann Lecun, Manohar Paluri, IEEE Conference on Computer Vision and Pattern Recognition. Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann LeCun, and Manohar Paluri. A closer look at spatiotemporal convolutions for action recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
Going in circles is the way forward: the role of recurrence in visual inference. Nikolaus Ruben S Van Bergen, Kriegeskorte, Whole-brain interactions between neural circuits. 65Ruben S van Bergen and Nikolaus Kriegeskorte. Going in circles is the way forward: the role of recurrence in visual inference. Current Opinion in Neurobiology, 65:176-193, 2020. Whole-brain interactions between neural circuits.
TimeSformer-PyTorch. Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification. Heng Wang, Heng Wang. TimeSformer-PyTorch. Implementation of TimeSformer from Facebook AI, a pure attention-based so- lution for video classification. https://github.com/ lucidrains/TimeSformer-pytorch, 2021. Ac- cessed: 2021-11-13.
Tdn: Temporal difference networks for efficient action recognition. Limin Wang, Zhan Tong, Bin Ji, Gangshan Wu, IEEE Conference on Computer Vision and Pattern Recognition. Limin Wang, Zhan Tong, Bin Ji, and Gangshan Wu. Tdn: Temporal difference networks for efficient action recogni- tion. In IEEE Conference on Computer Vision and Pattern Recognition, 2021.
Temporal segment networks for action recognition in videos. Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, Luc Van Gool, IEEE Transactions on Pattern Analysis and Machine Intelligence. 41Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment net- works for action recognition in videos. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41:2740-2755, 2019.
Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, Kevin P Murphy, European Conference on Computer Vision. Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin P. Murphy. Rethinking spatiotemporal feature learn- ing: Speed-accuracy trade-offs in video classification. In Eu- ropean Conference on Computer Vision, 2018.
VideoDG: Generalizing Temporal Relations in Videos to Novel Domains. Zhiyu Yao, Yunbo Wang, Jianmin Wang, Philip Yu, Mingsheng Long, IEEE Transactions on Pattern Analysis and Machine Intelligence. Zhiyu Yao, Yunbo Wang, Jianmin Wang, Philip Yu, and Mingsheng Long. VideoDG: Generalizing Temporal Rela- tions in Videos to Novel Domains. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1-1, 2021.
Benchmarking the robustness of spatialtemporal models against corruptions. Chenyu Yi, Siyuan Yang, Haoliang Li, Yap-Peng Tan, Alex C Kot, Advances in Neural Information Processing Systems. Chenyu Yi, Siyuan Yang, Haoliang Li, Yap-Peng Tan, and Alex C. Kot. Benchmarking the robustness of spatial- temporal models against corruptions. In Advances in Neural Information Processing Systems, 2021.
Temporal relational reasoning in videos. Bolei Zhou, Alex Andonian, Aude Oliva, Antonio Torralba, European Conference on Computer Vision. 2Bolei Zhou, Alex Andonian, Aude Oliva, and Antonio Tor- ralba. Temporal relational reasoning in videos. In European Conference on Computer Vision, 2018. 2 1573 1497
| []
|
[
"Conditions for Nuclear-Matter Lasers",
"Conditions for Nuclear-Matter Lasers"
]
| [
"V I Yukalov \nCentre for Interdisciplinary Studies in Chemical Physics\nand Bogolubov Laboratory of Theoretical Physics Joint Institute for Nuclear Research\nUniversity of Western Ontario\nN6A 3K7, 141980London, DubnaOntarioCanada, Russia\n"
]
| [
"Centre for Interdisciplinary Studies in Chemical Physics\nand Bogolubov Laboratory of Theoretical Physics Joint Institute for Nuclear Research\nUniversity of Western Ontario\nN6A 3K7, 141980London, DubnaOntarioCanada, Russia"
]
| []
| Conditions are analysed when in dense and hot nuclear matter large amounts of Bose particles can be created. An intensive production of Bose particles is the main necessary condition for realizing their coherent emission similar to radiation from photon lasers. The consideration is based on the multichannel model of nuclear matter. Analysis shows that possible candidates for nuclear-matter lasing are mesons (mainly pions), dibaryons, and gluons. | null | [
"https://export.arxiv.org/pdf/hep-ph/9902386v1.pdf"
]
| 18,702,311 | hep-ph/9902386 | f6c5c65de72503fb82d833cdcd4c54638050b66c |
Conditions for Nuclear-Matter Lasers
arXiv:hep-ph/9902386v1 18 Feb 1999
V I Yukalov
Centre for Interdisciplinary Studies in Chemical Physics
and Bogolubov Laboratory of Theoretical Physics Joint Institute for Nuclear Research
University of Western Ontario
N6A 3K7, 141980London, DubnaOntarioCanada, Russia
Conditions for Nuclear-Matter Lasers
arXiv:hep-ph/9902386v1 18 Feb 1999
Conditions are analysed when in dense and hot nuclear matter large amounts of Bose particles can be created. An intensive production of Bose particles is the main necessary condition for realizing their coherent emission similar to radiation from photon lasers. The consideration is based on the multichannel model of nuclear matter. Analysis shows that possible candidates for nuclear-matter lasing are mesons (mainly pions), dibaryons, and gluons.
Introduction
Since any kind of Bose particles shares the same statistical properties as photons, it sounds reasonable to pose a question as to whether it could be feasible to realize coherent emission of other Bose particles by analogy with the laser radiation of photons. This question is now intensively discussed in connection with the possibility of realizing atom lasers [1][2][3][4][5][6][7]. Experiments [8,9] show that Bose condensed atoms in a trap are in a coherent state. Therefore a condensate released from the trap propagates according to a single-mode wave equation represented by the nonlinear Schrödinger equation [10][11][12]. Output couplers for Bose condensed atoms are realized by means of short radiofrequency pulses transferring atoms from a trapped state to a untrapped state [8,9,13,14]. With the help of additional external fields one could create Bose condensates in non-ground states [15] or in vortex states [16], thus, forming various modes of atom lasers.
Another possibility is related to the creation of a large number of pions in hadronic, nuclear, and heavy-ion collisions [17]. In such collisions up to hundreds of pions can be created simultaneously. When the density of pions produced in the course of these collisions is such that their mean particle separation approaches the thermal wave-length then multi-particle interference becomes important. Strong correlations between pions can result in the formation of a coherent state and in the feasibility of getting a pion laser [18,19].
Coherent states are usually associated with Bose condensed states. Therefore those particles that could exhibit Bose condensation under extreme conditions characteristic of fireballs produced in heavy-ion collisions could be also considered as candidates for lasing. For example, such candidates could be dibaryons that, as was shown [20][21][22][23], can form a Bose-Einstein condensate.
One of the main stipulations for the creation of coherent states is, as is mentioned above, sufficient density of generated Bose particles. It is, hence, necessary to understand what are the optimal conditions providing the maximal possible density of bosons. It is the aim of this paper to analyse the behaviour of dense and hot nuclear matter in order to answer the questions -what kind of bosons and under what proviso can be generated in large quantities in such a matter.
Multichannel Model
To consider dense and hot nuclear matter, in which various kinds of particles can be generated, we use the multichannel approach to clustering matter [20,21]. The idea of this approach goes back to the methods of describing composite particles [24][25][26][27][28]. The most complete basis for this problem was formulated by Weinberg [29][30][31][32][33]. According to this approach, it is possible to introduce into any theory fictitious elementary particles, or quasiparticles, without changing any physical predictions. To accomplish this, the interaction among the original, truly elementary, particles must be modified in the appropriate way. By "composite particles" one can mean bound states or resonances. If fictitious elementary particles, quasiparticles, are introduced to take the place of all composite particles, then perturbation theory can always be used. The modification of the Hamiltonian weakens the original interaction enough to remove divergencies. If such quasiparticles are introduced for each resonance or bound state, then two-body scattering problems can always be solved by perturbation theory. A nice account of the quasiparticle approach was given by Weinberg in Ref. [34]. A resumé of this approach can be formulated as follows: One introduces fictitious elementary particles into the theory, in rough correspondence with the bound states of the theory. In order not to change the physics, one must at the same time change the potential. Since the bound states of the original theory are now introduced as elementary particles, the modified potential must not produce them also as bound states. Hence, the modified potential is weaker, and can in fact be weak enough to allow the use of perturbation theory.
Composite particles in other words are called clusters. Following the multichannel approach to describing clustering matter [20,21], let us consider an ensemble of particles that can form different bound states interpreted as composite particles or clusters. A space H i of quantum states associated with a cluster of z i particles is termed an i-channel. The number z i of particles forming a bound cluster is a compositeness number. The average density of matter is a sum
ρ = i z i ρ i ,(1)
in which
ρ i = ζ i (2π) 3 n i ( → k )d → k (2)
is an average density of i-channel clusters, ζ i being a degeneracy factor, and
n i ( → k ) = a † i ( → k )a i ( → k )
is a momentum distribution of the i-channel clusters. The statistical weight of each channel is characterized by the channel probability
w i ≡ z i ρ i ρ .(3)
The Hamiltonian of clustering matter reads
H = i H i + CV ,(4)
where H i is an i-channel Hamiltonian and CV is a nonoperator term providing the validity of the principle of statistical correctness [20,21], V being the system volume. Since strong short-range interactions between original particles are included into the definition of bound clusters, the left long-range interactions can be treated as week [29][30][31][32][33][34]. These long-range interactions permit us to apply the mean-field approximation resulting in an i-channel
Hamiltonian H i = k ω i ( → k )a † i ( → k )a i ( → k )(5)
with an effective spectrum
ω( → k ) = k 2 + m 2 i + U i − µ i ,(6)
where m i is an i-cluster mass; U i , a mean field; and µ i the chemical potential of i-clusters. Then the momentum distribution of i-clusters, in the Hartree approximation, takes the form
n i ( → k ) = 1 exp{βω i ( → k ) ∓ 1} ,(7)
in which β is inverse temperature; the upper or lower signs in (7) stand for Bose-or Fermi clusters, respectively. When the average baryon density
n B = i ρ i B i ,(8)
where B i is the baryon number of an i-cluster, is fixed, then the chemical potentials of i-clusters,
µ i = µ B B i (n B = const) ,(9)
are expressed through the baryon potential µ B defined from (8).
The mean density of matter (1) may be written as the sum
ρ = ρ 1 + ρ z , ρ 1 ≡ {i} 1 ρ i , ρ z ≡ {i}z z i ρ i(10)
of the density of unbound particles, ρ 1 , and the density of particles bound in clusters, ρ z . Then the conditions of statistical correctness [20,21] are
δH δρ = 0 , δH δρ z = 0 .(11)
The original unbound particles in nuclear matter are quarks and gluons. Their collection is named quark-gluon plasma. The mean-field potential of the quark-gluon plasma can be written [20] as
U 1 ≡ U(ρ) = J 1+ν ρ −ν/3 ,(12)
where J is an effective intensity of interactions and ν is an exponent of a confining potential, 0 < ν ≤ 2. In what follows we take ν ≈ 2. The mean field for i-channel clusters [20] reads
U i = z i [Φρ z + U(ρ) − U(ρ z )] ,(13)
where Φ is a reference interaction parameter. With the potential (12), we have
U i = z i Φρ z + z i J 1+ν ρ −ν/3 − ρ −ν/3 z .
From here and the condition of statistical correctness (11), we find the correcting term
C = ν 3 − ν J 1+ν ρ 1−ν/3 − ρ 1−ν/3 z − 1 2 Φρ 2 z .(14)
We have yet two undefined parameters, J and Φ. The first of them is an effective intensity of interactions in the quark-gluon plasma, which we take [20] as J = 225 MeV . The second, that is the reference parameter Φ, may be scaled [20,21] by means of nucleon-nucleon interactions V 33 (r) as follows:
Φ = 1 9 V 33 (r)d → r .(15)
Accepting for V 33 (r) the Bonn potential [35], we get Φ = 35 MeV f m 3 . For nuclear matter of the normal baryon density n 0B = 0.167 f m −3 , this gives an average interaction energy Φn 0B = 5.845 MeV . In this way, the model is completely defined and we can calculate all its thermodynamic characteristics. For the pressure we have
p = i p i − C , p i = ±T ζ i (2π) 3 ln 1 ± n i ( → k ) d → k .(16)
The energy density is
ε = i ε i + C , ε = ζ i (2π) 3 k 2 + m 2 i n i ( → k )d → k +ρ i U i .(17)
From here, we may find the specific heat and the reduced specific heat,
C V = ∂ε ∂T , σ V = T ε C V ,(18)
respectively, and the compression modulus
κ −1 T = n B ∂p ∂n B .(19)
One may also define an effective sound velocity, c ef f , by the ratio
c 2 ef f = p ε .(20)
Statistical weights of the corresponding channels are given by the channel probabilities defined in (3). For the following analysis, it is convenient to introduce also the plasma-channel probability
w pl = 1 ρ (ρ g + ρ u + ρū + ρ d + ρd) ,(21)
where ρ g is the density of gluons, while other terms are the densities of uand d-quarks and antiquarks, respectively. The pion-channel probability
w π = 2 ρ (ρ π + + ρ π − + ρ π 0 )(22)
is expressed through the densities of π + , π − , and π 0 mesons. The probability of other meson channels, except pions, is
w ηρω = 2 ρ (ρ η + ρ ρ + + ρ ρ − + ρ ρ 0 + ρ ω ) .(23)
The nucleon-channel probability writes
w 3 = 3 ρ (ρ n + ρn + ρ p + ρp)(24)
containing the densities of neutrons, antineutrons, protons, and antiprotons. We calculate also the probabilities of multiquark channels, such as the dibaryon-channel probability
w 6 = 6 ρ (ρ 6q + ρ 6q ) ,(25)
and, analogously, the 9-quark and 12-quark channel probabilities. Now we can analyse the thermodynamic behaviour of the described model in order to define what kinds of Bose particles and under what conditions can be generated in large quantities, that is, when the corresponding Bosechannel probabilities are maximal. The choice of parameters is the same as in Ref. [20].
Analysis
The multichannel model of nuclear matter described in the previous section has been solved numerically. The pressure (16) is shown in Fig.1 as a function of temperature Θ = k B T in MeV and of relative baryon density n B /n 0B . The pressure is a monotonic function of its variables as well as the energy density (17) in Fig.2. But it is interesting that their ratio (20) in Fig.3 is a nonmonotonic function displaying a maximum at temperature around T d = 160 MeV . The latter, as will be clear from the following, can be associated with the temperature of the deconfinement crossover. The specific heat and the reduced specific heat given in (18) are presented in Fig.4 and 5, respectively. The compression modulus (19) is shown in Fig.6. Again, the maxima of the reduced specific heat and the compression modulus can be associated with the deconfinement crossover. The following Figs. 7 to 11 present the behaviour of the channel probabilities for the quark-gluon plasma (21), pions (22), other mesons (23), nucleons (24) and dibaryons (25). Since the possibility of the appearance of the dibaryon Bose condensate is of special interest, we show in Fig. 12 the corresponding channel probability w. The Bose condensates of heavier multiquark clusters do not arise. The channel probabilities of such heavier clusters are negligibly small being, for instance, for 9-and 12-quark clusters less than 10 −3 and 10 −5 , respectively. We show also in Figs. 13 to 15 the channel probabilities, as functions of the relative baryon density n B /n 0B at zero temperature, for the quark-gluon plasma (21), nucleons (24), and dibaryons (25).
The analysis demonstrates that the maximal density of pions can be generated around the temperature T ≈ 160 MeV of the deconfinement crossover at low baryon density n B < n 0B . The corresponding channel probability of pion production can reach w π ≈ 0.6. The total probability of other meson channels reaches only w ηρω ≈ 0.16 at T ≈ 200 MeV and n B < n 0B . However, the generation of these mesons is more noticeable than that of pions at high temperatures and baryon densities, although being always not intensive, with the related probability not exceeding the order of 10 −1 .
The optimal region for the creation of dibaryons, where their channel probability reaches w 6 ≈ 0.7, is the region of low temperatures T < 20 MeV and the diapason of baryon densities n B /n 0B ≈ 5 − 20. At zero temperature their probability rather slowly diminishes with increasing the baryon density, so that at n B ≈ 100n 0B , we have w 6 ≈ 0.4. At low temperatures dibaryon form a Bose-condensed state.
Above the deconfinement crossover temperature T d ≈ 160 MeV , there is an intensive generation of gluons in the quark-gluon plasma. At sufficiently high temperatures, gluon radiation can, in principle, become so intensive that to acquire a noticeable coherent component.
Thus, the most probable candidates for realizing laser generation are pions, dibaryons, and gluons. Each kind of these Bose particles has its own region where the corresponding channel probability is maximal. For pions it is T ≈ 160 MeV and n B < n 0B ; for dibaryons, T < 20 MeV and n B ≈ (5 − 20)n 0B ; and for gluons, this is the high-temperature region T > 160 MeV . If it is feasible to realize the corresponding conditions, one could get a pion laser, dibaryon laser, or gluon laser, respectively. Note that to realize such a lasing in reality one has to accomplish several other requirements of which we are considering here only one necessary condition.
It is also worth noting that if one tries to achieve the desired conditions of lasing in the process of hadronic or heavy-ion collisions then one can get only a pulsed radiation of Bose particles. If the lifetime of a fireball formed during a collision is longer than the local-equilibrium time then the quasiequilibrium picture of the process is permissible. In such a case, it is possible to use the multichannel model, as is described here, with temperature and baryon density given as functions of time, the time dependence being in accordance with the related fireball expansion.
Figure captions
Figure captions
Fig. 1 .
1The pressure (in units of J 4 ) of the multichannel model.
Fig. 2 .
2The energy density (in unites of J 4 ) on the temperature-baryon density plane.
Fig. 3 .
3The pressure-to-energy density ratio related to an effective sound velocity squared, c 2 ef f .
Fig. 4 .
4The specific heat (in units of J 3 ) for the multichannel model.
Fig. 5 .
5The reduced heat displays a maximum that can be associated with the deconfinement crossover.
Fig. 6 .
6The compression modulus (in units of J 4 ) for the multichannel model.
Fig. 7 .
7The channel probability of the quark-gluon plasma.
Fig. 8 .
8The pion channel probability.
Fig. 9 .
9The total probability of other, except pion, meson channels.
Fig. 10 .
10The nucleon channel probability.
Fig. 11 .
11The dibaryon channel probability.
Fig. 12 .
12The channel probability of Bose-condensed dibaryons.
Fig. 13 .
13The plasma channel probability at zero temperature as a function of the relative baryon density.
Fig. 14 .
14The nucleon channel probability at zero temperature.
Fig. 15 .
15The dibaryon channel probability at zero temperature.
AcknowledgementI am grateful to E.P. Yukalova for useful discussions. A grant from the University of Western Ontario, London, Canada, is appreciated.
. H M Wiseman, M J Collet, Phys. Lett. A. 202246Wiseman, H.M. and Collet, M.J., 1995, Phys. Lett. A, 202, 246.
. C J Bordé, Phys. Lett. A. 204217Bordé, C.J., 1995, Phys. Lett. A, 204, 217.
. R J Spreeuw, T Pfau, U Janicke, M Wilkens, Europhys. Lett. 32469Spreeuw, R.J., Pfau, T., Janicke, U., and Wilkens, M., 1995, Europhys. Lett., 32, 469.
. A M Guzmán, M Moore, P Meystre, Phys. Rev. A. 53977Guzmán, A.M., Moore, M., and Meystre, P., 1996, Phys. Rev. A, 53, 977.
. M Holland, K Burnett, C Gardiner, J Cirac, P Zoller, Phys. Rev. A. 541757Holland, M., Burnett, K., Gardiner, C., Cirac J., and Zoller, P., 1996, Phys. Rev. A, 54, 1757.
. G M Moy, J J Hope, C M Savage, Phys. Rev. A. 553631Moy, G.M., Hope, J.J., and Savage C.M., 1997, Phys. Rev. A, 55, 3631 (1997).
. V I Yukalov, Bull. Russ. Acad. Sci. Phys. 62305Yukalov, V.I., 1998, Bull. Russ. Acad. Sci. Phys., 62, 305.
. M R Andrews, C G Townsend, H J Miesner, Science. 637Andrews, M.R., Townsend, C.G., Miesner, H.J., et. al., 1997, Science, 275, 637.
. E A Burt, R W Christ, C M Myatt, Phys. Rev. Lett. 79337Burt, E.A., Christ, R.W., Myatt, C.M., et al., 1997, Phys. Rev. Lett., 79, 337.
. Y Castin, R Dum, Phys. Rev. Lett. 775315Castin, Y. and Dum, R., 1996, Phys. Rev. Lett., 77, 5315.
. M Holland, J Cooper, Phys. Rev. A. 53Holland, M. and Cooper, J., 1996, Phys. Rev. A, 53, 1954.
. Y Kagan, E L Surkov, G V Shlyapnikov, Phys. Rev. A. 5518Kagan Y., Surkov, E.L., and Shlyapnikov, G.V., 1997, Phys. Rev. A, 55, 18.
. M O Mewes, M R Andrews, D M Kurn, Phys. Rev. Lett. 78582Mewes, M.O., Andrews, M.R., Kurn, D.M., et al., 1997, Phys. Rev. Lett., 78, 582.
. R J Ballagh, K Burnett, T F Scott, Phys. Rev. Lett. 781607Ballagh, R.J., Burnett, K., and Scott, T.F., 1997, Phys. Rev. Lett., 78, 1607.
. V I Yukalov, E P Yukalova, V S Bagnato, Phys. Rev. A. 564845Yukalov, V.I., Yukalova, E.P., and Bagnato, V.S., 1997, Phys. Rev. A, 56, 4845.
. A L Fetter, Phys. Rev. A. 534245Fetter, A.L., 1996, Phys. Rev. A, 53, 4245.
. B Lörstad, Int. J. Mod. Phys. A. 42861Lörstad, B., 1989, Int. J. Mod. Phys. A, 4, 2861.
. S Pratt, Phys. Lett. B. 301159Pratt, S., 1993, Phys. Lett. B, 301, 159.
. T Csörgö, J Zimányi, Phys. Rev. Lett. 80916Csörgö, T. and Zimányi, J., 1998, Phys. Rev. Lett., 80, 916.
. V I Yukalov, E P Yukalova, Phys. Part. Nucl. 2837Yukalov, V.I. and Yukalova, E.P., 1997, Phys. Part. Nucl., 28, 37.
. V I Yukalov, E P Yukalova, Physica A. 243382Yukalov, V.I. and Yukalova, E.P., 1997, Physica A, 243, 382.
. A Faessler, A J Buchmann, M I Krivoruchenko, B V Martemyanov, Phys. Lett. B. 391255Faessler, A., Buchmann, A.J., Krivoruchenko, M.I., and Martemyanov, B.V., 1997, Phys. Lett. B, 391, 255.
. A J Buchmann, A Faessler, M I Krivoruchenko, Ann. Phys. 254109Buchmann, A.J., Faessler, A., and Krivoruchenko, M.I., 1997, Ann. Phys., 254, 109.
. J C Howard, B Jouvet, Nuovo Cimento. 18466Howard, J.C. and Jouvet, B., 1960, Nuovo Cimento, 18, 466.
. M Gell-Mann, F Zachariasen, Phys. Rev. 124953Gell-Mann, M. and Zachariasen, F., 1961, Phys. Rev., 124, 953.
. M T Vaughn, R Aaron, Amado , R D , Phys. Rev. 1241258Vaughn, M.T., Aaron, R., and Amado, R.D., 1961, Phys. Rev., 124, 1258.
. R Acharya, 870Acharya, R., 1962, Nuovo Cimento, 24, 870.
. A Salam, Salam, A., 1962, Nuovo Cimento, 25, 224.
. S Weinberg, Phys. Rev. 130776Weinberg, S., 1963, Phys. Rev., 130, 776.
. S Weinberg, Phys. Rev. 131440Weinberg, S., 1963, Phys. Rev., 131, 440.
. S Weinberg, Phys. Rev. B. 133232Weinberg, S., 1964, Phys. Rev. B, 133, 232.
. M Scadron, S Weinberg, Phys. Rev. B. 1331589Scadron, M. and Weinberg, S., 1964, Phys. Rev. B, 133, 1589.
. M Scadron, S Weinberg, Wright , J , Phys. Rev. B. 135202Scadron, M., Weinberg, S., and Wright, J., 1964, Phys. Rev. B, 135, 202.
. S Weinberg, Physica A. 96327Weinberg, S., 1979, Physica A, 96, 327.
. R Machleidt, K Holinde, C Elster, Phys. Rep. 1491Machleidt, R., Holinde, K., and Elster, C., 1987, Phys. Rep., 149, 1.
| []
|
[
"Thermally-induced qubit coherence in quantum electromechanics",
"Thermally-induced qubit coherence in quantum electromechanics",
"Thermally-induced qubit coherence in quantum electromechanics",
"Thermally-induced qubit coherence in quantum electromechanics"
]
| [
"N Etehadi Abari \nDepartment of Optics\nPalacký University\n17. Listopadu 12771 46OlomoucCzech Republic\n",
"A Rakhubovsky \nDepartment of Optics\nPalacký University\n17. Listopadu 12771 46OlomoucCzech Republic\n",
"R Filip \nDepartment of Optics\nPalacký University\n17. Listopadu 12771 46OlomoucCzech Republic\n",
"N Etehadi Abari \nDepartment of Optics\nPalacký University\n17. Listopadu 12771 46OlomoucCzech Republic\n",
"A Rakhubovsky \nDepartment of Optics\nPalacký University\n17. Listopadu 12771 46OlomoucCzech Republic\n",
"R Filip \nDepartment of Optics\nPalacký University\n17. Listopadu 12771 46OlomoucCzech Republic\n"
]
| [
"Department of Optics\nPalacký University\n17. Listopadu 12771 46OlomoucCzech Republic",
"Department of Optics\nPalacký University\n17. Listopadu 12771 46OlomoucCzech Republic",
"Department of Optics\nPalacký University\n17. Listopadu 12771 46OlomoucCzech Republic",
"Department of Optics\nPalacký University\n17. Listopadu 12771 46OlomoucCzech Republic",
"Department of Optics\nPalacký University\n17. Listopadu 12771 46OlomoucCzech Republic",
"Department of Optics\nPalacký University\n17. Listopadu 12771 46OlomoucCzech Republic"
]
| []
| Quantum coherence, the ability of a quantum system to be in a superposition of orthogonal quantum states, is a distinct feature of quantum mechanics, marking a deviation from classical physics. Coherence finds applications in quantum sensing and metrology, quantum thermodynamics, and computation. Particularly interesting is the possibility of observing coherence that arises, counter-intuitively, from thermal energy, that is, without implementing intricate protocols involving coherent driving sequences. In this manuscript, we investigate quantum coherence emerging in a hybrid system composed of a two-level system (qubit) and a thermal quantum harmonic oscillator (a material mechanical oscillator), inspired by recent experimental progress in the fabrication of such systems. We show that quantum coherence is created in such a composite system solely by the interaction of its parts and persists under relevant damping. Implementation of such a scheme would demonstrate previously unobserved mechanisms of coherence generation and can be beneficial for hybrid quantum technologies with mechanical oscillators and qubits. | 10.1088/1367-2630/ac9a66 | [
"https://export.arxiv.org/pdf/2206.04499v1.pdf"
]
| 249,538,428 | 2206.04499 | 572f9e9236f905ef5ef29439994b86d9cf191f90 |
Thermally-induced qubit coherence in quantum electromechanics
(Dated: June 10, 2022)
N Etehadi Abari
Department of Optics
Palacký University
17. Listopadu 12771 46OlomoucCzech Republic
A Rakhubovsky
Department of Optics
Palacký University
17. Listopadu 12771 46OlomoucCzech Republic
R Filip
Department of Optics
Palacký University
17. Listopadu 12771 46OlomoucCzech Republic
Thermally-induced qubit coherence in quantum electromechanics
(Dated: June 10, 2022)
Quantum coherence, the ability of a quantum system to be in a superposition of orthogonal quantum states, is a distinct feature of quantum mechanics, marking a deviation from classical physics. Coherence finds applications in quantum sensing and metrology, quantum thermodynamics, and computation. Particularly interesting is the possibility of observing coherence that arises, counter-intuitively, from thermal energy, that is, without implementing intricate protocols involving coherent driving sequences. In this manuscript, we investigate quantum coherence emerging in a hybrid system composed of a two-level system (qubit) and a thermal quantum harmonic oscillator (a material mechanical oscillator), inspired by recent experimental progress in the fabrication of such systems. We show that quantum coherence is created in such a composite system solely by the interaction of its parts and persists under relevant damping. Implementation of such a scheme would demonstrate previously unobserved mechanisms of coherence generation and can be beneficial for hybrid quantum technologies with mechanical oscillators and qubits.
I. INTRODUCTION
Coherence is a fundamental concept in quantum mechanics, connected to the superposition of quantum states in a basis preferred for a certain application. Quantum states that possess a non-zero coherent superposition of basis states can provide an advantage for science and technology over incoherent statistical mixtures of the same basis states. Coherence enhances the performance of quantum protocols in sensing and metrology [1, 2], quantum thermodynamics [3, 4], and quantum information processing [5-7]. Quantum coherence has been shown to play a role in biological processes as well [8, 9]. In order to quantify coherence, several resource theories have been put forward [10-13]. The interplay between coherence and other quantum resources, such as entanglement, discord, and steering, has been investigated in [14, 15]. On the other hand, it remains unexplored how quantum coherence emerges during quantum dynamics from incoherent thermal states.
Generally, quantum coherence of an open system emerges in the presence of a strong external coherent drive. Recently, it has been shown [16] that quantum coherence can emerge in the steady state of a system that only interacts with its environment, given certain properties of this interaction. Subsequent studies proposed similar system-environment phenomena [17-22]. In parallel, an experimental proposal in double-quantum-dot solid-state systems was analyzed [23]. However, even proof-of-principle experimental tests of such phenomena are still missing due to the challenging engineering of composite interactions.
In our work we investigate coherence emerging in a hybrid electromechanical system similar to the one studied in [24]. We show that coherence in each subsystem can emerge solely from the coherent interaction between the constituents, which start from fully incoherent states. We analyze such a thermal rise of quantum coherence in quantum electromechanics and propose an experiment to observe the underlying mechanism. Moreover, we describe a regime in which a thermal mechanical oscillator monotonically stimulates qubit coherence, even if the phonon number is much larger than unity. Hybrid systems such as this combine the benefits of their constituents, allowing the resulting synergy to open new perspectives in science and technology. Electromechanical systems, by combining the advantages of superconducting devices and high-Q mechanical oscillators, allow the preparation of exotic states of macroscopic mechanical oscillators [24, 25] and the transduction of quantum information between the microwave and optical domains [26, 27]. Such transduction not only allows effective long-range communication between superconducting devices but also their readout by optical means [28].
II. RESULTS
A. Model of the qubit-mechanical system
In this manuscript, we demonstrate the possibility of generating coherence in a coupled system of a nanomechanical oscillator and a two-level system (a qubit) starting from a fully incoherent state. A schematic depiction of the scheme is shown in Fig. 1(a). First, we introduce a theoretical description of the system and the figures of merit.

FIG. 1. (a) Schematic diagram of the physical system. A single-mode mechanical harmonic oscillator of frequency ω_m is coupled to a qubit (frequency ω_q) via a general coupling rate g_0. (b) A sketch of the interaction protocol between the qubit and the mechanical mode. Before the interaction, the qubit and the mechanical oscillator are prepared in the incoherent states ρ_q(0) and ρ_m(0), respectively, either by cooling or by equilibration with the corresponding bath. The quantum coherence is evaluated after the interaction has finished and can be probed by microwave readout. (c) An experimental illustration of the model consisting of a charge qubit (CPB) coupled to mechanically compliant capacitors in an electromechanical system [24]. The red-dashed rectangular area indicates the Josephson junction (JJ), represented by a nonlinear inductor and a Josephson capacitor C_J. The suspended superconducting islands of the CPB, which connect the charge qubit to the superconducting reservoir (the other parts of the circuit), are displayed in light and dark green. The motion of the mechanical oscillator (blue electrodes) modifies the plate separations of the two capacitors C_m^±(x), which are modulated with opposite phase by the anti-symmetric motion of the mechanical oscillator (MO). V_dc denotes the DC voltage applied to the MO. The gate charge (offset charge) n_g = C_g(x)V_g(x)/2e applied to the CPB can be defined through the equivalent capacitance C_g(x) and voltage V_g(x) of the circuit, which are now position dependent (see Appendix A). The modulation of the offset charge by the mechanical motion induces a coupling between the mechanical motion and the qubit electrostatic energy. (d) The equivalent circuit of the experimental model.

The electromechanical systems of interest, akin to that investigated in [24], can be described by the Hamiltonian (ℏ ≡ 1)
H = \frac{\omega_q}{2}\,\sigma_z + \frac{\omega_m}{2}\left(X_m^2 + P_m^2\right) + \sqrt{2}\,g_0\left(\sin\theta\,\sigma_x + \cos\theta\,\sigma_z\right)X_m .    (1)
Here the first two terms describe the free dynamics of the qubit (with Pauli matrices σ_i and transition frequency ω_q) and of the nanomechanical oscillator (with eigenfrequency ω_m and dimensionless position and momentum quadratures X_m and P_m, normalized such that [X_m, P_m] = i). For convenience, we also define the detuning ∆ = ω_q − ω_m. The third term in (1) describes the interaction between the qubit and the mechanics required to achieve emerging quantum coherence [16]. We focus on a proof-of-principle demonstration of the interaction mechanism using only one dominant mode at the frequency ω_m coupled to an external bath. From a thermal occupation of the qubit, this composite interaction can generate a coherent displacement of the oscillator, continuously generating quantum coherence in the qubit. In an experiment the hybrid interaction can be realized via capacitive, magnetic-flux, or electromotive coupling methods [29]. The coupling can be tuned in magnitude by changing the rate g_0 or adjusted by manipulating the value of θ. This can be conveniently achieved by utilizing suitable lumped elements in the superconducting circuit [24, 29, 30], since in our model θ depends on the charging and Josephson energies, while g_0 can be controlled through the DC voltage bias and the capacitors of the circuit as well as the charging energy (see Fig. 1(c,d) and Appendix A for more details).
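Because the numerical results in this paper are obtained with the QuTiP package (Sec. IV A), the model of Eq. (1) maps directly onto a few library calls. The following is a minimal sketch, assuming a truncated mechanical Fock space of dimension N and working in units of ω_m; the truncation and the illustrative parameter values (g_0 = 0.1 ω_m, θ = π/4, resonance) are choices made here for illustration, not prescriptions from the text.

```python
import numpy as np
from qutip import sigmax, sigmaz, qeye, destroy, tensor

# Illustrative parameters, in units of the mechanical frequency (omega_m = 1).
N = 15                      # Fock-space truncation of the mechanical mode (assumption)
wm, wq = 1.0, 1.0           # resonance, Delta = wq - wm = 0
g0, theta = 0.1 * wm, np.pi / 4

a = destroy(N)                                   # mechanical annihilation operator
Xm = (a + a.dag()) / np.sqrt(2)                  # dimensionless position, [Xm, Pm] = i

# Eq. (1): (wq/2) sigma_z + (wm/2)(Xm^2 + Pm^2) + sqrt(2) g0 (sin(theta) sigma_x + cos(theta) sigma_z) Xm
H = (0.5 * wq * tensor(sigmaz(), qeye(N))
     + wm * tensor(qeye(2), a.dag() * a + 0.5 * qeye(N))   # (wm/2)(Xm^2 + Pm^2) = wm (a†a + 1/2)
     + np.sqrt(2) * g0 * tensor(np.sin(theta) * sigmax()
                                + np.cos(theta) * sigmaz(), Xm))
```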
To investigate emerging coherence in such a system, we assume that both the mechanics and the qubit are initially prepared in thermal states, which lack coherence in the natural basis of Fock states. The initial state of the compound system therefore reads
\rho(0) = \rho_{\rm qubit}(0) \otimes \rho_m(0) = \Big[ P_{ee}\,|e\rangle\langle e| + (1 - P_{ee})\,|g\rangle\langle g| \Big] \otimes \sum_{k=0}^{\infty} \frac{n_m^{\,k}}{(1 + n_m)^{k+1}}\,|k\rangle\langle k| ,    (2)
where |g⟩ [|e⟩] is the ground [excited] state of the qubit, |k⟩ is a Fock state of the mechanical oscillator, and P_ee = n_q/(2n_q + 1). The mean occupation number n_m of the mechanics and the occupation parameter n_q of the qubit obey Bose-Einstein statistics: n_i = [\exp(\hbar\omega_i/k_B T_i) - 1]^{-1} for i = q, m, with k_B the Boltzmann constant and T_i the temperature of the corresponding subsystem.
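The incoherent product state of Eq. (2) is equally simple to prepare numerically; a minimal sketch, assuming the same truncation N as above and the occupations used in Fig. 2(b,e) (n_m = 0.5, n_q = 0).

```python
from qutip import thermal_dm, fock_dm, tensor

N = 15                          # must match the truncation used for the Hamiltonian
n_m, n_q = 0.5, 0.0             # initial thermal occupations of mechanics and qubit
P_ee = n_q / (2 * n_q + 1)      # excited-state population of the qubit

rho_q0 = P_ee * fock_dm(2, 0) + (1 - P_ee) * fock_dm(2, 1)   # |e> = |0>, |g> = |1>
rho_m0 = thermal_dm(N, n_m)                                   # thermal state with mean occupation n_m
rho0 = tensor(rho_q0, rho_m0)                                 # Eq. (2)
```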
The dynamics generated by the Hamiltonian (1) is capable of driving the initially incoherent state (2) into a state in which both the mechanics and the qubit possess quantum coherence. From the plethora of available measures of coherence (see Ref. [11] for a review), we choose the l_1-norm-based measure [31] to quantify the qubit coherence. This measure has the meaning of the mean displacement in the xy-plane and can be computed for the qubit as
C_q = \sqrt{\langle\sigma_x\rangle^2 + \langle\sigma_y\rangle^2} .    (3)
Throughout the manuscript we compare the qubit's coherence with the mean coherent displacement of the oscillator, C_m = \sqrt{\langle X_m\rangle^2 + \langle P_m\rangle^2}. Note that in general an l_1-norm such as the displacement is not a proper coherence monotone for an oscillator (a system with an infinite-dimensional Hilbert space), as it can diverge on states with finite mean energy [32]. Nevertheless, the mean coherent displacement is an illustrative quantity that can provide a quantum advantage in, e.g., metrology. The mean values in Eq. (3) are computed over the evolved quantum state ρ(t). In the case of unitary dynamics, ρ(t) = e^{-iHt} ρ(0) e^{iHt}. In a realistic case where both systems are subject to decoherence caused by interaction with the corresponding environment, one has to use more complicated tools, such as solving a master equation (see Section IV for elaboration).
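For the noiseless case, the evolved expectation values entering C_q and C_m can be obtained by propagating ρ(0) under H with an empty list of collapse operators; a sketch that reuses the objects defined in the two snippets above (the time grid is an illustrative choice, not a value from the paper).

```python
import numpy as np
from qutip import sigmax, sigmay, qeye, tensor, mesolve

# reuses H, rho0, a, Xm, N from the sketches above
Pm = -1j * (a - a.dag()) / np.sqrt(2)
e_ops = [tensor(sigmax(), qeye(N)), tensor(sigmay(), qeye(N)),
         tensor(qeye(2), Xm), tensor(qeye(2), Pm)]

tlist = np.linspace(0.0, 30.0, 600)                    # in units of 1/omega_m
res = mesolve(H, rho0, tlist, c_ops=[], e_ops=e_ops)   # no collapse operators: unitary evolution

sx, sy, x, p = (np.real(res.expect[i]) for i in range(4))
C_q = np.sqrt(sx**2 + sy**2)                           # Eq. (3)
C_m = np.sqrt(x**2 + p**2)                             # mean coherent displacement
print(C_q.max(), C_m.max())
```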
B. Quantum coherence generated by pulsed noiseless dynamics
The simple model of the Hamiltonian (1) captures rich dynamics whose exact type depends on the interplay between the eigenfrequencies ω_{m,q} of the individual subsystems and the coupling, defined by its magnitude g_0 and phase θ. Moreover, the generation of coherence in this system is determined by the initial state before the interaction starts. In this subsection, we show that, counter-intuitively, increasing the temperature of the initial quantum state can be beneficial for the generation of coherence in the qubit. To estimate the limits of attainable coherence, we start with the noiseless case, in which the two subsystems, the MO and the qubit, are decoupled from their environments and couple only to each other.
In order to see the effect of the initial thermal occupation on the coherence generation, we simulate the dynamics of the system driven by the Hamiltonian (1) alone and ignore the coupling to the environment. In this case, the quantum state of the bipartite system after the interaction is obtained straightforwardly by applying the unitary transformation to the initial product state (2). The estimates of the coherence emerging from the unitary qubit-mechanical interaction are shown in Fig. 2. The numerical study assumes the weak-coupling regime g_0 = 0.1 ω_m ≪ ω_m + ω_q and θ = π/4, i.e., equal coupling of the mechanical displacement to both σ_x and σ_z, in order to obtain the optimum values of the coherence parameters (the dependence of the coherence parameters on θ, i.e., on the coupling rates g_x, g_z, as well as on the absolute value of the qubit-mechanical coupling g_0 and the detuning ∆, is discussed in more detail in Sec. II C).
As seen in Fig. 2(a,d), a hotter initial mechanical state has a positive effect on the qubit coherence: by increasing the mechanical temperature, or equivalently the thermal occupation n_m, we reach higher maximum values C_q^max. The time it takes to reach the maximal value C_q^max is also reduced with increasing initial occupation n_m, as illustrated by the inset plots. This phenomenon contrasts with the steady-state qubit coherence induced by a multimode bosonic bath [16, 21, 23], where the maximum of coherence appears for vanishing temperature. Interestingly, the increase in coherence is accompanied by only a moderate decrease of the oscillator's coherent displacement. The opposite happens when the qubit's initial temperature is increased at ∆ = 0 with the value n_m = 0.5 fixed. As seen in Fig. 2(b), the maximum accessible values of C_q are reached when P_ee = 0, i.e., n_q = 0. Elevated initial occupations of the qubit do not significantly alter the qubit coherence C_q in the dispersive regime ∆ = 10 ω_m. Therefore, it is advantageous to keep the qubit initially in the ground state and increase the oscillator's initial temperature to observe emerging quantum coherence that is more significant than the steady-state coherence [16, 21, 23].
The optimum values of C_m show a slow reduction as a function of n_m for ∆ = 0, while in the dispersive regime the maximum values C_m^max do not change considerably with respect to n_m (compare Fig. 2(a,d), blue dots, and the inset plots for C_m). In addition, by increasing the detuning and moving from the resonant to the off-resonant case, the decrease of C_m^max becomes faster as the qubit temperature rises (compare Fig. 2(b,e), blue dots, and the inset plots for C_m).
Finally, for the case in which the initial temperatures of the qubit and the MO are equal (T_m = T_q = T), the maximum attainable values of C_q and C_m are shown in panels (c) and (f) of Fig. 2 for the resonant and off-resonant cases, respectively. The inset plots of Fig. 2(c,f) also show the optimum values of the coherence parameters as a function of the initial occupation, assumed equal for both subsystems (n_q = n_m = n_{m,q}). As can be seen, by increasing the thermal occupation numbers of the two subsystems at the same time, i.e., increasing n_{m,q}, both C_q^max and C_m^max decrease (the reduction of C_q^max as a function of n_{m,q} is not significant in the dispersive regime ∆ = 10 ω_m). For the resonant case, the results of the inset plots are the same as those of the main plot (c), since ω_m = ω_q and T_m = T_q = T give identical occupations n_m = n_q = n_{m,q}. However, at ∆ = 10 ω_m, the main plot of Fig. 2(f) for C_q^max shows a small increase as the temperature of the baths rises simultaneously. Therefore, we can conclude that as long as n_q < n_m, raising the temperature can increase the value of the qubit coherence parameter.
In addition, by comparing the first and second rows of Fig. 2, we see that increasing the detuning reduces the energy exchange between the mechanical mode and the qubit through the coupling channel g_x = g_0 sin θ, which lowers the maximum accessible qubit coherence, since C_q depends on both g_x and g_z = g_0 cos θ (see Sec. II C for further details). On the other hand, as C_m is only influenced by the coupling rate g_z, increasing the detuning does not affect the maximum reachable value C_m^max.
C. Effect of the interaction parameters on the coherence generation
To demonstrate the effects of the coupling rates g_x and g_z on the generation of quantum coherence in the system, the first and second columns of Fig. 3 show the evolution of the coherence parameters C_q and C_m in time and with respect to θ, in the weak-coupling regime g_0 = 0.1 ω_m, for the resonant (∆ = 0) and off-resonant (∆ = 10 ω_m) conditions, respectively. For both cases, the maximum oscillator displacement arises at ω_m t = π, but at resonance the maximum qubit coherence appears delayed, see Fig. 3(a,b). Out of resonance, in Fig. 3(d,e), both displacement and coherence appear synchronously.
As seen from Fig. 3(a,d), the qubit coherence parameter C_q(t) takes non-zero values when θ ≠ nπ/2 (n = 1, 2, ...), i.e., when both g_x, g_z ≠ 0. The maximum of C_q(t) is obtained for θ = (2n + 1)π/4, which shows that C_q strongly depends on the factor |g_x g_z| = |g_0^2 sin(2θ)/2|. In addition, increasing the detuning causes a fast reduction of the maximum available qubit coherence C_q (compare panels (a) and (d) in Fig. 3). Moreover, at resonance, C_q(t) is maximized around t ≈ 2mπ/ω_m (m ∈ N), whereas at ∆ = 10 ω_m the interference pattern appears on a shorter time scale and the maximum values of C_q shift to the smaller time interval 2π/3 < ω_m t < 4π/3.
On the other hand, for the fixed values g_0 = 0.1 ω_m, n_m = 0.5, and n_q = 0, the mechanical displacement C_m(t) is not influenced by changing the detuning (see Fig. 3(b,e)) and is only affected by the displacement coupling rate g_z = g_0 cos θ. Therefore, the maximum of C_m is achieved when θ = (2n + 1)π/2 and t ≈ (2m − 1)π/ω_m.
In panels (c) and (f) of Fig. 3, the maximum values of the dynamical coherence parameters are depicted as a function of the absolute qubit-mechanical coupling g_0/ω_m, showing that stronger coupling gives rise to higher quantum coherence in the system.
The dependence of the mean values of the mechanical quadratures ⟨X_m(t)⟩ and ⟨P_m(t)⟩ on g_z, and of the Pauli operators ⟨σ_x(t)⟩ and ⟨σ_y(t)⟩ on both g_x and g_z, can also be revealed analytically for a very short time interval in the ideal evolution, where we can approximate the time-evolution operator as U(t) = e^{-iHt} ≈ I − iHt. The final state of the system up to second order in time is then given by
\rho_f(t) \approx \rho(0) - i t\,[H, \rho(0)] + t^2\, H \rho(0) H + O(t^2) .    (4)
Under such approximation, the system operators' mean values become
\langle X_m(t)\rangle \approx \sqrt{2}\, g_z t^2 \Big[ \omega_m n_m (4 n_m + 3)(2P_{ee} - 1) + \frac{\omega_q}{2}(2 n_m + 1) \Big] ,    (5a)
\langle P_m(t)\rangle \approx -\sqrt{2}\, g_z t\, (2P_{ee} - 1) ,    (5b)
\langle \sigma_x(t)\rangle \approx 2 g_x g_z t^2 (2 n_m + 1)(2P_{ee} - 1) ,    (5c)
\langle \sigma_y(t)\rangle \approx 0 .    (5d)
where P_{n,n} = n_m^n/(1 + n_m)^{n+1} denotes the coefficients of the expansion of the initial thermal state of the mechanics in the Fock-state basis, and n_m is this state's mean occupation. From Eq. (5) we see that, up to O(t^2), the mechanical quadratures ⟨X_m(t)⟩ and ⟨P_m(t)⟩ are only affected by g_z. However, ⟨σ_x(t)⟩, and therefore C_q, depends on the product g_x g_z. From Eqs. (5c) and (5d) we obtain C_q ≈ |⟨σ_x(t)⟩| = 2|g_x g_z| t^2 (2n_m + 1)/(2n_q + 1). This indicates that over short times C_q grows quadratically with time (C_q ∝ t^2). The qubit coherence C_q also depends on the ratio of the mechanical and qubit occupation numbers, C_q ∝ (2n_m + 1)/(2n_q + 1), which explains why better values of C_q^max are attained when n_q < n_m (see Fig. 2). Hence, the best result is achieved by fixing n_q = 0 while increasing the initial occupation n_m (see Fig. 2). While the short-time approximation agrees qualitatively with the simulations, the quantitative agreement holds only for very short times ω_m t ≪ 1.
The maximal values of coherence are reached at considerably longer times, which, unfortunately, do not admit an analytical solution.
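The quadratic short-time law quoted above can be checked against the full simulation; a sketch under the assumptions of the earlier snippets (resonance, θ = π/4, n_q = 0), comparing the simulated C_q(t) with 2|g_x g_z| t² (2n_m + 1)/(2n_q + 1) of Eq. (5c) on a grid with ω_m t ≪ 1.

```python
import numpy as np
from qutip import mesolve

# reuses H, rho0, e_ops, g0, theta, n_m, n_q from the previous sketches
g_x, g_z = g0 * np.sin(theta), g0 * np.cos(theta)

tshort = np.linspace(0.0, 0.3, 61)                        # omega_m * t << 1
res = mesolve(H, rho0, tshort, c_ops=[], e_ops=e_ops)
C_q_num = np.sqrt(np.real(res.expect[0])**2 + np.real(res.expect[1])**2)

C_q_short = 2 * abs(g_x * g_z) * tshort**2 * (2 * n_m + 1) / (2 * n_q + 1)   # Eq. (5c)
print(np.max(np.abs(C_q_num - C_q_short)))                # expected to be small on this grid
```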
It is also worth looking at the variation of the quantum coherence with the detuning, to investigate the resonant nature of this phenomenon. The optimum values of the coherence parameters C_q^max and C_m^max as a function of the normalized detuning ∆/ω_m are shown in Fig. 4(a), where we can see a maximum peak of C_q^max around ∆/ω_m ≈ 0. However, the maximum of the mechanical coherent displacement C_m^max does not change much as a function of the detuning, which is consistent with Fig. 3(b,e) for the fixed values g_0 = 0.1 ω_m, θ = π/4, n_m = 0.5, and n_q = 0.
In addition, panels (b,c) of Fig. 4 show the evolution of C_q and C_m, respectively, for different values of the normalized detuning. As seen in panel (b), by changing the detuning from ∆ = −0.5 ω_m to ∆ = 10 ω_m and moving into the dispersive regime, the amplitude of C_q diminishes rapidly. On the other hand, the oscillation amplitude of C_m is maximal in the initial time interval, and increasing the detuning does not change it (see Fig. 4(c)).
To understand why the coherence parameters respond to the detuning in the way shown in Fig. 4, it is instructive to look at the Hamiltonian of the system in the interaction picture, given by
H^{(I)} = e^{+iH_0 t} H e^{-iH_0 t} - H_0 = g_x\left(\sigma_- a^\dagger e^{-i\Delta t} + \sigma_+ a\, e^{+i\Delta t}\right) + g_x\left(\sigma_+ a^\dagger e^{i\Sigma t} + \sigma_- a\, e^{-i\Sigma t}\right) + g_z \sigma_z \left(a^\dagger e^{+i\omega_m t} + a\, e^{-i\omega_m t}\right) ,    (6)
where a = (X_m + iP_m)/√2 denotes the mechanical annihilation operator and Σ = ω_q + ω_m. From Eq. (6) we see that for ∆ ≈ 0 the rotating terms g_x(σ_- a^† + σ_+ a), which are responsible for the exchange of excitations between the qubit and the MO, play the dominant role in the dynamics of the system, and more specifically in C_q through the coupling channel g_x. By increasing the absolute value of the detuning, both the rotating and counter-rotating terms in Eq. (6) start oscillating rapidly with the frequencies ∆ and Σ, respectively. In the dispersive regime, where Σ > ∆ ≥ 10 ω_m, and due to the adiabatic evolution, the energy exchange between the qubit and the MO, which mainly happens through the coupling channel g_x, diminishes. This affects the qubit coherence, which depends on both g_x and g_z, and leads to smaller maximum values of C_q. As the mechanical coherent displacement is mainly influenced by the coupling rate g_z, and therefore by the displacement term g_z σ_z(a^† e^{+iω_m t} + a e^{-iω_m t}), changing the detuning cannot significantly impact C_m (see Fig. 3(b,e) and Fig. 4(a,c)).
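The resonant behaviour of C_q^max described here (and shown in Fig. 4(a)) can be reproduced qualitatively by sweeping the detuning; a self-contained sketch with the same illustrative truncation and parameters as before.

```python
import numpy as np
from qutip import sigmax, sigmay, sigmaz, qeye, destroy, tensor, thermal_dm, fock_dm, mesolve

def max_Cq(delta, N=15, wm=1.0, g0=0.1, theta=np.pi / 4, n_m=0.5, tmax=30.0):
    """Maximal qubit coherence for detuning delta = wq - wm (all rates in units of wm)."""
    a = destroy(N)
    Xm = (a + a.dag()) / np.sqrt(2)
    wq = wm + delta
    H = (0.5 * wq * tensor(sigmaz(), qeye(N))
         + wm * tensor(qeye(2), a.dag() * a + 0.5 * qeye(N))
         + np.sqrt(2) * g0 * tensor(np.sin(theta) * sigmax()
                                    + np.cos(theta) * sigmaz(), Xm))
    rho0 = tensor(fock_dm(2, 1), thermal_dm(N, n_m))       # qubit in |g>, mechanics thermal
    e_ops = [tensor(sigmax(), qeye(N)), tensor(sigmay(), qeye(N))]
    res = mesolve(H, rho0, np.linspace(0.0, tmax, 600), c_ops=[], e_ops=e_ops)
    return np.max(np.sqrt(np.real(res.expect[0])**2 + np.real(res.expect[1])**2))

for d in (-0.5, 0.0, 1.0, 10.0):
    print(d, max_Cq(d))        # the maximum is expected near delta = 0
```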
D. Quantum coherence in the presence of damping and noise
In order to study the dynamics of the system more realistically, we need to take dissipation and decoherence effects into account. For an open system interacting with an environment, the density matrix obeys the Lindblad master equation
\dot{\rho} = -i[H, \rho] + \frac{\gamma_m}{2}(n_m + 1)\,\mathcal{L}(a)\rho + \frac{\gamma_m}{2} n_m\,\mathcal{L}(a^\dagger)\rho + \frac{\gamma_{q1}}{2}(n_q + 1)\,\mathcal{L}(\sigma_-)\rho + \frac{\gamma_{q1}}{2} n_q\,\mathcal{L}(\sigma_+)\rho .    (7)
Here, \mathcal{L}(O)\rho = 2 O \rho O^\dagger - (O^\dagger O \rho + \rho O^\dagger O), with O ∈ {a, a^\dagger, σ_±}, denotes the Lindblad superoperator. Further, γ_m and γ_q1 = 1/T_1 represent the mechanical and qubit relaxation rates, respectively. By solving the master equation (7) numerically, we have investigated the effects of the mechanical and qubit damping on the dynamics of the coherence parameters for the resonant case ∆ = 0 (see Fig. 5).
In panel (a) of Fig. 5 we show the change of the attainable quantum coherence with the mechanical damping rate γ_m/ω_m in the absence of qubit dissipation and noise (γ_q1 = n_q = 0), with the system operated at resonance (∆ = 0) and in the weak-coupling regime g_0 = 0.1 ω_m. We also set the mechanical occupation to n_m = 0.5. As can be seen in Fig. 5(a), the maximum values of the coherence parameters C_q^max and C_m^max do not change considerably as γ_m/ω_m increases. In addition, it is evident from the inset plots of Fig. 5(a) that the dynamical coherence parameters overlap for all γ_m/ω_m < 10^{-2}, which means that they are completely robust against mechanical damping as long as γ_m/ω_m < 10^{-2}. Moreover, larger values of the mechanical dissipation, such as γ_m/ω_m = 10^{-2}, do not affect the coherence parameters in the initial time interval (red dotted-dashed lines in the inset plots of Fig. 5(a)); at longer times, however, we observe a decrease of the coherence parameters. By comparing the inset plots in Fig. 5(a), we see that C_q decreases faster than C_m for γ_m/ω_m = 10^{-2}. The evolution of the coherence parameters in the presence of the normalized qubit damping rate γ_q1/ω_m is plotted in Fig. 5(b) for n_m = 0.5, n_q = 0, and γ_m/ω_m = 10^{-6}. In this case, C_q^max decreases with increasing qubit relaxation rate, while C_m^max does not change much with increasing γ_q1/ω_m, which emphasizes the robustness of the mechanical displacement against qubit damping. The inset plots confirm these results. In addition, the inset plots of Fig. 5(b) show that for γ_q1/ω_m = 10^{-2} the coherence parameters are resistant to the qubit dissipation over the shorter time interval ω_m t ≤ 2π. Simulations over longer times show that both C_q and C_m decrease and eventually reach small non-zero steady-state values (C_q ≈ 0.01, C_m ≈ 0.1).
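The Lindblad dynamics of Eq. (7) corresponds to the four collapse operators A_1 ... A_4 listed in Sec. IV A; a sketch of the open-system run, with damping rates chosen here for illustration from the range scanned in Fig. 5.

```python
import numpy as np
from qutip import sigmam, sigmap, qeye, tensor, mesolve

# reuses H, rho0, a, e_ops, N, n_m from the earlier sketches
gamma_m, gamma_q1 = 1e-4, 1e-2        # illustrative damping rates, in units of omega_m
n_q_bath = 0.0                        # qubit bath occupation

c_ops = [np.sqrt(gamma_m * (n_m + 1)) * tensor(qeye(2), a),              # A_1
         np.sqrt(gamma_m * n_m) * tensor(qeye(2), a.dag()),              # A_2
         np.sqrt(gamma_q1 * (n_q_bath + 1)) * tensor(sigmam(), qeye(N)), # A_3
         np.sqrt(gamma_q1 * n_q_bath) * tensor(sigmap(), qeye(N))]       # A_4 (vanishes for n_q = 0)

tlist = np.linspace(0.0, 30.0, 600)
res = mesolve(H, rho0, tlist, c_ops=c_ops, e_ops=e_ops)
C_q = np.sqrt(np.real(res.expect[0])**2 + np.real(res.expect[1])**2)
print(C_q.max())
```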
To summarize our study of the influence of the baths, the maximum attainable coherence is reached at rather early times, corresponding to the interaction running for only a few periods of mechanical oscillation. For state-of-the-art electromechanical systems, owing to their exceptional Q-factors, the interaction on these timescales is very close to unitary. Therefore, the interaction with the thermal reservoirs during the coherent interaction between the mechanics and the qubit has a very limited effect on the maximal coherence attainable from the fast pulsed interaction studied here.
III. DISCUSSION
In this article, in contrast to previous steady-state studies [16, 19], we theoretically investigated the possibility of generating transient quantum coherence in a qubit-mechanical system from incoherent thermal states. We studied the transient interaction between a charge qubit and a mechanical oscillator, similar to what is found in electromechanical setups [24, 25, 33-37]. We showed how the sensitivity of the qubit to the offset charge enables us to couple the qubit to the mechanical motion both perpendicular and parallel to the eigenstates of the free Hamiltonian of the qubit, as long as the system is operated near the degeneracy point. The simultaneous presence of these two different coupling rates allows the observation of qubit coherence in a system with an initially incoherent thermal state. This holds both in the ideal case of unitary interaction and in the dissipative situation. It should be noted that in this model the dynamical coherence emerges without the use of conventional methods such as coherent driving [38] or coherence measurement [39, 40].
Differently from the steady-state coherence, the thermal occupation number of the mechanical mode has a positive effect on generating larger coherence of the qubit. We observed that increasing the net value of the coupling rate g_0 improves the maximum accessible qubit coherence and mechanical coherent displacement. In addition, we demonstrated how the parallel and perpendicular components g_z and g_x of the coupling affect the quantum coherence. For the qubit coherence C_q, the product g_x g_z plays the main role, while for the mechanical displacement C_m the parallel coupling g_z is what matters. The maximum coherence values for the qubit and the MO are obtained for |g_x| = |g_z|, i.e., when we set the optimum value θ = π/4 for the coupling phase. Moreover, we showed that the qubit coherence parameter depends strongly on the detuning ∆ through the coupling channel g_x: by adjusting the detuning close to resonance, ∆ ≈ 0, where the rotating term associated with the coupling rate g_x is dominant, we reach the maximum values of the qubit coherence parameter. However, changing the detuning does not significantly alter the maximum values of the mechanical displacement. Finally, we found that the mechanical coherence generated in our model is largely robust against both the mechanical and the qubit damping processes, while larger values of the qubit damping rate (γ_q1 > 10^{-2} ω_m) lead to a decay of the qubit coherence parameter.
An experimental realization of such a model has already been demonstrated in Ref. [24]. Aside from electromechanical setups, there are other experimental platforms suitable for realizing our model, such as trapped ions [41, 42] and NV centers coupled magnetically to mechanical motion [43-45]. Hybrid atom-optomechanical and electro-optomechanical systems also provide great potential for this purpose [46-51].
Quantum coherence counts among the fundamental resources in quantum information processing and quantum computation [10, 11, 52]. It also finds important applications in the context of quantum sensing [53], quantum thermodynamics [54-56], quantum biology [8], and non-equilibrium models [4, 57, 58]. In each of these fields, the autonomous emergence of quantum coherence can be beneficial. Proof-of-principle experimental tests of this kind will further probe the emergence of quantum coherence and extensions of the mechanisms addressed here.
IV. METHODS
A. Tools for numerical calculation
In this manuscript, we use the QuTiP package [59, 60] to numerically investigate the evolved density matrix as well as the coherence properties of the system in both the ideal and the dissipative situations. For the ideal case, we solve the von Neumann equation ρ̇(t) = −i[H, ρ] with the initial condition (2). The total density matrix of the open system, however, is obtained by solving the master equation
\dot{\rho} = -i[H, \rho] + \sum_n \frac{1}{2}\left( 2 A_n \rho(t) A_n^\dagger - \rho(t) A_n^\dagger A_n - A_n^\dagger A_n \rho(t) \right) ,    (8)
numerically, where in our system A_1 = \sqrt{\gamma_m (n_m + 1)}\, a, A_2 = \sqrt{\gamma_m n_m}\, a^\dagger, A_3 = \sqrt{\gamma_{q1} (n_q + 1)}\, \sigma_-, and A_4 = \sqrt{\gamma_{q1} n_q}\, \sigma_+. As mentioned before, the Hamiltonian H appearing in the von Neumann and master equations is given by
H = H_0 + H_{\rm int} ,    (9)
where H_0 = H_q + H_m characterizes the free dynamics of the qubit and the MO, with H_q = ω_q σ_z/2 and H_m = ω_m(X_m^2 + P_m^2)/2. The general form of the interaction between the qubit and the MO can be modeled as
H_{\rm int} = g_0\, (\mathbf{n} \cdot \boldsymbol{\sigma})\, X_m ,    (10)
where \mathbf{n} is a normal vector in Bloch space such that \mathbf{n} \cdot \boldsymbol{\sigma} = \sigma_x \cos\varphi \sin\theta + \sigma_y \sin\varphi \sin\theta + \sigma_z \cos\theta.
In most experimental works [25, 37], the mechanical mode couples only to one component of the Pauli vector, i.e., σ_x X_m (φ = 0, θ = π/2). However, it is also possible to couple the mechanical motion to more than one component of the Pauli vector due to imperfections of the quantum circuit. An experimental realization of such a model can be achieved in an electromechanical system where a nanomechanical oscillator is coupled capacitively to a Cooper-pair box (CPB) acting as a charge qubit operated near the so-called degeneracy point (see Fig. 1(b)) [24]. In this setup, the tiny vibration of the mechanical oscillator modifies the gate voltage V_g(x) as well as the gate capacitance C_g(x), such that the gate charge n_g(x) = C_g(x)V_g(x)/2e becomes dependent on the mechanical position (see Appendix A). By controlling the sensitivity of the charge qubit with respect to the gate charge n_g(x), a direct coupling between the qubit and the MO becomes possible. For the charge qubit, the dynamics and the transition frequency ω_q depend strongly on the gate charge n_g(x) and therefore on the mechanical displacement operator x. Such a dependence could, on the one hand, be destructive, as the offset charge can induce noise on the qubit and increase its decoherence rate. On the other hand, it induces the desirable coupling between the qubit and the mechanical mode in our model. In this case, H_int = (g_x σ_x + g_z σ_z) X_m describes the interaction Hamiltonian, where g_y = 0 (for φ = 0) and g_x = g_0 sin θ, while g_z = g_0 cos θ characterizes the residual coupling rate (see Appendix A).
The simultaneous presence of the coupling term g_x σ_x X_m and the additional coupling g_z σ_z X_m, which contain the components σ_x and σ_z perpendicular and parallel to the free Hamiltonian of the qubit H_q = ω_q σ_z/2, makes it possible to produce a coherent state of the qubit from the completely incoherent initial state (2). In addition, the term g_z σ_z X_m, which also contains the mechanical displacement, applies a net average force on the MO. This allows the observation of mechanical coherence in the system as well.
To quantify the quantum coherence of the qubit and the MO, we employ the l_1-norm measure of coherence and define the qubit coherence as C_q(t) = \sqrt{\langle\sigma_x(t)\rangle^2 + \langle\sigma_y(t)\rangle^2}, while using C_m(t) = \sqrt{\langle X_m(t)\rangle^2 + \langle P_m(t)\rangle^2} for the mechanical coherent displacement. The expectation values of the time-dependent operators ⟨σ_x(t)⟩, ⟨σ_y(t)⟩, ⟨X_m(t)⟩, and ⟨P_m(t)⟩ are determined through the following relations
\langle\sigma_{x(y)}(t)\rangle = {\rm Tr}\big[\rho(t)\,(\sigma_{x(y)} \otimes I_n)\big] = {\rm Tr}\big[\rho_q(t)\,\sigma_{x(y)}\big] ,    (12a)
\langle X_m(t)\rangle = {\rm Tr}\big[\rho(t)\,(I_q \otimes X_m)\big] = {\rm Tr}\big[\rho_m(t)\,X_m\big] ,    (12b)
\langle P_m(t)\rangle = {\rm Tr}\big[\rho(t)\,(I_q \otimes P_m)\big] = {\rm Tr}\big[\rho_m(t)\,P_m\big] ,    (12c)
where I_n and I_q are the identity operators of the mechanical mode and of the qubit, respectively, ρ(t) represents the evolved density matrix of the system, while ρ_q(t) and ρ_m(t) denote the reduced density matrices of the qubit and the MO. Once we compute the evolved density matrix of the system in both the ideal and the non-ideal situations, we can easily calculate the coherence parameters.
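Equation (12) simply states that single-subsystem expectation values can be taken either with the full density matrix or with the corresponding reduced state; in QuTiP this is the relation between expect and ptrace, sketched below with quantities from the earlier snippets (the time grid is again an illustrative choice).

```python
import numpy as np
from qutip import expect, sigmax, qeye, tensor, mesolve

# reuses H, rho0, N from the earlier sketches; with no e_ops, mesolve returns the states
res = mesolve(H, rho0, np.linspace(0.0, 10.0, 101), c_ops=[], e_ops=[])
rho_t = res.states[-1]

lhs = expect(tensor(sigmax(), qeye(N)), rho_t)     # Tr[rho(t) (sigma_x ⊗ I_n)]
rhs = expect(sigmax(), rho_t.ptrace(0))            # Tr[rho_q(t) sigma_x], qubit is subsystem 0
print(abs(lhs - rhs))                              # should agree to numerical precision
```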
Appendix A: Extracting the interaction Hamiltonian of the qubit-mechanical system

To extract the interaction term, we start with the equivalent circuit of Fig. 1(d), such that the equivalent voltage V_g(x), which is the voltage difference across the open terminals A and B (the equivalent voltage applied across the Josephson junction), is given by (see Fig. 6(a))

V_g(x) = V_A(x) - V_B(x) = V_{dc}\left[ \frac{C_m^-(x)}{C_m^-(x) + C_0} - \frac{C_m^+(x)}{C_m^+(x) + C_0} \right] ,    (A1)

with

C_m^{\pm}(x) = \frac{\epsilon_0 A}{x_0 \pm x} = \frac{C_m^0}{1 \pm x/x_0} ,    (A2)
where x_0 indicates the static separation of the parallel-plate capacitors C_m^±(x), while ε_0 and A represent the permittivity and the area of the plates, respectively. By expanding V_g(x) for small motion around x = 0, we have
C_m^{\pm}(x) \approx C_m^0\left(1 \mp \frac{x}{x_0}\right) ,    (A3)
V_g(x) \approx 2 V_{dc}\, \frac{C_m^0 C_0}{(C_m^0 + C_0)^2}\cdot \frac{x}{x_0} + O\!\left[\left(\frac{x}{x_0}\right)^2\right] .    (A4)
Similarly, the equivalent capacitance C g (x) is found by replacing the DC-voltage source with a short circuit ( Fig. 6 (b)),
\frac{1}{C_g(x)} = \frac{1}{C_{\rm eq}(x)} = \frac{1}{C_0 + C_m^-(x)} + \frac{1}{C_0 + C_m^+(x)} ,    (A5)
C_g(x) \approx \frac{1}{2}\left(C_0 + C_m^0\right) + O\!\left[\left(\frac{x}{x_0}\right)^2\right] .    (A6)
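The small-displacement expansions (A4) and (A6) can be verified symbolically; a short check with sympy (the symbol names are mine), confirming the linear term of V_g(x) and the cancellation of the linear term in C_g(x).

```python
import sympy as sp

x, x0, Vdc, C0, Cm0 = sp.symbols('x x_0 V_dc C_0 C_m0', positive=True)

Cm_plus = Cm0 / (1 + x / x0)     # C_m^+(x) = eps_0 A / (x_0 + x)
Cm_minus = Cm0 / (1 - x / x0)    # C_m^-(x) = eps_0 A / (x_0 - x)

Vg = Vdc * (Cm_minus / (Cm_minus + C0) - Cm_plus / (Cm_plus + C0))
print(sp.simplify(sp.series(Vg, x, 0, 2).removeO()))
# linear term 2*V_dc*C_m0*C_0/(C_m0 + C_0)**2 * x/x_0, i.e. Eq. (A4)

Cg = 1 / (1 / (C0 + Cm_minus) + 1 / (C0 + Cm_plus))
print(sp.simplify(sp.series(Cg, x, 0, 2).removeO()))
# constant (C_0 + C_m0)/2 with no term linear in x, i.e. Eq. (A6)
```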
Up to first order in x, only the gate voltage is linearly controlled by the mechanical displacement. Therefore, the offset charge n_g(x) = C_g(x)V_g(x)/2e becomes
n_g(x) \approx \frac{V_{dc}}{2 e x_0}\cdot \frac{C_m^0 C_0}{(C_0 + C_m^0)}\, x .    (A7)
The general Hamiltonian of the qubit in the presence of the mechanical motion is given by

H_q(x) = 4 E_c\big(n - n_g(x)\big)^2 - E_J \cos\varphi ,    (A8)

where E_c and E_J are the charging and Josephson energies, respectively, n is the Cooper-pair number operator, and φ is the superconducting phase operator, which is related to the flux operator through φ = 2πΦ/Φ_0, where Φ_0 = h/(2e) is the flux quantum. In the number-operator basis the second term of Eq. (A8) can be written as

-E_J \cos\varphi = -\frac{E_J}{2} \sum_n \Big( |n\rangle\langle n+1| + |n+1\rangle\langle n| \Big) .    (A9)
The eigenenergies of the Hamiltonian (A8) in each n-subspace are given by
\lambda^{(n)}_{\pm}(x) = 4E_c\big(n - n_g(x)\big)^2 + 4E_c\big(n - n_g(x)\big) + 2E_c \pm \frac{1}{2}\sqrt{E_J^2 + (4E_c)^2\big(1 + 2n - 2n_g(x)\big)^2} .    (A10)
Taking the lowest two energy levels |n = 0⟩ and |n = 1⟩ as the ground and excited states of the qubit, respectively, the qubit frequency becomes
\omega_q(x) = \omega_q^{(0)}(x) = \sqrt{E_J^2 + (4E_c)^2\big(1 - 2 n_g(x)\big)^2} ,    (A11)
and the Hamiltonian (A8) takes the following form
H_q(x) \approx 4E_c\big(1 - 2n_g(x)\big)\,|1\rangle\langle 1| + 4E_c\, n_g^2(x)\, I - \frac{E_J}{2}\big(|0\rangle\langle 1| + |1\rangle\langle 0|\big) .    (A12)
Now, the interaction Hamiltonian near the charge degeneracy point n_g ≈ 1/2 is given by

H_{\rm int} = \frac{\partial H_q}{\partial x}\bigg|_{n_g \to \frac{1}{2}} x = 8E_c\big(n - n_g\big)\,\frac{\partial n_g(x)}{\partial x}\bigg|_{n_g \to \frac{1}{2}} x \approx 8E_c\big(|1\rangle\langle 1| - n_g\big)\,\frac{\partial n_g(x)}{\partial x}\bigg|_{n_g \to \frac{1}{2}} x ,    (A13)

where

\frac{\partial n_g(x)}{\partial x} = \frac{V_{dc}}{2 e x_0}\cdot \frac{C_m^0 C_0}{(C_0 + C_m^0)} .    (A14)
Using the diagonal basis

|+\rangle = \cos\vartheta\,|0\rangle + \sin\vartheta\,|1\rangle , \qquad |-\rangle = -\sin\vartheta\,|0\rangle + \cos\vartheta\,|1\rangle ,    (A15)

where 2\vartheta = \pi/2 - \theta_0 and \theta_0 = \arctan[4E_c(1 - 2n_g)/E_J] [24], Eq. (A13) can be written as

H_{\rm int} = g_0 X_m \big[ \cos\theta_0\,\sigma_x - \sin\theta_0\,\sigma_z + (1 - 2n_g) \big] ,    (A16)

where x = \sqrt{2}\, x_{\rm zpf} X_m relates the dimensionless mechanical quadrature to the zero-point fluctuation x_{\rm zpf}, and the single-phonon qubit-mechanical coupling rate reads

g_0 = \frac{4E_c}{2e}\cdot \frac{C_m^0 C_0}{(C_0 + C_m^0)}\cdot \frac{x_{\rm zpf}}{x_0}\, V_{dc} .    (A17)
By using spherical coordinates in the Bloch space, where φ = 0 and θ ≡ θ_0 + π/2, the general form of the interaction Hamiltonian in (1) is derived, and the qubit-mechanical coupling rates g_x and g_z can be extracted as

g_x = g_0 \cos\theta_0 = g_0 \sin\theta ,    (A18)
g_z = -g_0 \sin\theta_0 = g_0 \cos\theta .    (A19)

In addition, operating close to the degeneracy point induces a small qubit-independent shift (QID) of the MO with coupling rate g_m = g_0(1 - 2n_g), which is negligible for n_g → 1/2, so we have ignored it in Eq. (1). The complete dynamical behavior of the coherence parameters in the presence of this shift is discussed in Appendix B.
Appendix B: Effects of the coherent driving term on the quantum coherence

In the vicinity of the degeneracy point (n_g → 1/2), the coupling term g_m = g_0(1 - 2n_g) is small in comparison with the other coupling rates g_x and g_z, so the QID term can only slightly modify the dynamics of the quantum coherence. Including the shift term g_m X_m, the interaction Hamiltonian of the system acquires this additional displacement contribution (Eq. (B1)). In Fig. 7(a,b) the optimum values of the coherence parameters C_q and C_m with respect to the mechanical thermal number n_m (Fig. 7(a)) and the qubit weight P_ee (Fig. 7(b)) are depicted in the presence and in the absence of the coupling rate g_m. From those panels, we can see that the constant shift moderately improves the results for both the qubit coherence and the mechanical coherent displacement. In addition, the inset plots in panels (a) and (b) of Fig. 7 show the evolution of the Bloch vector for different values of n_m (Fig. 7(a)) and P_ee (Fig. 7(b)) when we include the coupling term g_m ≠ 0. As is evident from the inset plots of Fig. 7(a), by increasing the mechanical occupation number n_m, the expectation values ⟨σ_x(t)⟩ and ⟨σ_y(t)⟩ take larger values, which gives rise to a larger qubit coherence parameter C_q. On the other hand, increasing the qubit thermal number n_q, or equivalently P_ee, causes a reduction of the mean values ⟨σ_x(t)⟩ and ⟨σ_y(t)⟩ and consequently of C_q (see the inset plots in Fig. 7(b)). These results are completely in agreement with the outcomes explained in the body of the manuscript. Moreover, in panels (c,d) and (e) of Fig. 7, the evolution of C_q and C_m in the absence and in the presence of the QID term is depicted as a function of the normalized time ω_m t for two different values, P_ee = 0 and P_ee = 0.48. In accordance with Fig. 7(a,b), the presence of g_m can slightly change the values of the coherence parameters in time.
Similarly to Eq. (5), we can also calculate the coherence components for a very short time interval when the QID term g_m X_m is included in the dynamics of the system:

\langle X_m(t)\rangle \approx \sqrt{2}\, t^2 \Big\{ g_z \big[ \omega_m (2P_{ee} - 1) n_m (4n_m + 3) + \omega_q (n_m + \tfrac{1}{2}) \big] + g_m \big[ \omega_m n_m (4n_m + 3) + \omega_q (2P_{ee} - 1)(n_m + \tfrac{1}{2}) \big] \Big\} ,    (B2a)
\langle P_m(t)\rangle \approx -\sqrt{2}\, g_z t\, (2P_{ee} - 1) ,    (B2b)
\langle \sigma_x(t)\rangle \approx 2 g_x t^2 (2n_m + 1) \big[ g_z (2P_{ee} - 1) + g_m \big] ,    (B2c)
\langle \sigma_y(t)\rangle \approx 0 .    (B2d)
As is evident from Eqs. (5c) and (B2c), the qubit coherence parameter evolves quadratically over a very short time interval. Introducing the fitting functions F_fit^{(g_m=0)} = |2 g_x g_z (2n_m + 1)(2P_ee − 1) t^2| and F_fit^{(+g_m)} = |2 g_x (2n_m + 1)(g_z(2P_ee − 1) + g_m) t^2| associated with Eqs. (5c) and (B2c), respectively, in panels (c) and (d) of Fig. 7 and their insets we check the consistency of the analytical and numerical results for the qubit coherence in the absence and in the presence of the coupling rate g_m. As can be seen, for a short time interval the results match, which confirms that the qubit coherence parameter behaves quadratically in the initial time interval.
FIG. 2. Maximally attainable coherence C q and mechanical displacement C m as a function of the initial occupation of mechanics (a,d) or qubit (b,e) given a constant initial occupation of the other subsystem. In (a,d) the qubit is initially in the ground state P ee = 0. In (b,e) the mechanics has initial occupation n m = 0.5. Insets show the evolution of C q and C m as functions of time for different occupation. Note that C q and C m assume their corresponding maximal values at different instants of time. (c,f) Optimum values of C q , C m as a function of the initial temperature assuming equal temperature baths for both subsystems. The inset plots of panels (c,f) show how C max q , C max m change as a function of equal initial occupation (in this case, the initial temperatures differ in panel (f)). In each panel, weak coupling regime g 0 = 0.1ω m is assumed. The panels (a,b,c) correspond to the resonance between the qubit and MO (ω m = ω q ), in (d,e,f) ω q − ω m = ∆ = 10ω m .
FIG. 3. Coherence dynamics caused by qubit-oscillator interaction: (a,b,d,e) Contour plots of the coherence parameters C_q and C_m with respect to the normalized time ω_m t and θ when g_0 = 0.1 ω_m. (c,f) The maximal attainable values of the quantum coherence as a function of the normalized qubit-mechanical coupling g_0/ω_m for θ = π/4. In (a-c) ∆ = 0, in (d-f) ∆ = 10 ω_m. Other parameters are n_q = 0 and n_m = 0.5: the qubit is initialized in its ground state while the MO is in a thermal state.
FIG. 4. Resonant features of emergent quantum coherence: (a) Optimum values of the coherence parameters as a function of the normalized detuning ∆/ω_m. The definition of the detuning ∆ = ω_q − ω_m does not allow values below −ω_m. The evolution of (b) the qubit coherence C_q(t) and (c) the mechanical coherent displacement C_m(t) for different values of detuning. Other numerical parameters are g_0 = 0.1 ω_m, θ = π/4, n_m = 0.5, and n_q = 0.
FIG. 5. Robustness of emerging quantum coherence: optimum values of the coherence parameters as a function of the normalized (a) mechanical damping rate γ_m/ω_m when γ_q1/ω_m = 0 and (b) qubit damping rate γ_q1/ω_m for γ_m/ω_m = 10^{-6}. Inset plots show the evolution of the coherence parameters for different values of (a) γ_m/ω_m and (b) γ_q1/ω_m. Other numerical parameters are ∆ = 0, g_0 = 0.1 ω_m, n_m = 0.5, and n_q = 0.
ACKNOWLEDGMENTS

N.E.A. acknowledges the project CZ.02.1.01/0.0/0.0/16 026/0008460 of MEYS CR. A.A.R. and R.F. acknowledge the support of the project 20-16577S of the Czech Science Foundation. R.F. also acknowledges the grant LTAUSA19099 of MEYS CR.
FIG. 6. (a) The Thevenin equivalent representation of the circuit in Fig. 1(c,d) for calculating V_g(x), and (b) the equivalent short circuit used for calculating C_g(x).
FIG. 7. Optimum values of the coherence parameters as a function of (a) the mechanical occupation number n_m when P_ee = 0 and (b) the qubit probability amplitude P_ee when n_m = 0.5, in the presence and in the absence of the coupling term g_m. Inset plots in panels (a) and (b) show the evolution of the Bloch vector during ω_m t ∈ [0, 30] for (a) different mechanical occupation numbers n_m = 0 and n_m = 15 with P_ee = 0, as well as (b) different values of P_ee = 0 and P_ee = 0.48 with n_m = 0.5. The evolution of C_q for (c) P_ee = 0 and (d) P_ee = 0.48 together with the quadratic fittings in the absence and in the presence of g_m, when n_m = 0.5. Inset plots in panels (c) and (d) show the zoomed rectangular region of the fit for a short time interval ω_m t ∈ [0, 1]. (e) The evolution of C_m for two different values of P_ee = 0 and P_ee = 0.48 with and without the coupling constant g_m. Other numerical parameters are the same as those in Fig. 2.
Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone, "Advances in quantum metrology," Nature Photonics 5, 222-229 (2011), arXiv:1102.2318, doi:10.1038/nphoton.2011.35.
C. L. Degen, F. Reinhard, and P. Cappellaro, "Quantum sensing," Reviews of Modern Physics 89, 035002 (2017), arXiv:1611.02427, doi:10.1103/RevModPhys.89.035002.
Avijit Misra, Uttam Singh, Samyadeb Bhattacharya, and Arun Kumar Pati, "Energy cost of creating quantum coherence," Physical Review A 93, 052335 (2016), doi:10.1103/PhysRevA.93.052335.
Jader P. Santos, Lucas C. Céleri, Gabriel T. Landi, and Mauro Paternostro, "The role of quantum coherence in non-equilibrium entropy production," npj Quantum Information 5, 1-7 (2019), doi:10.1038/s41534-019-0138-y.
A. Galindo and M. A. Martín-Delgado, "Information and computation: Classical and quantum aspects," Reviews of Modern Physics 74, 347-423 (2002), doi:10.1103/RevModPhys.74.347.
T. D. Ladd, F. Jelezko, R. Laflamme, Y. Nakamura, C. Monroe, and J. L. O'Brien, "Quantum computers," Nature 464, 45-53 (2010), arXiv:1009.2267, doi:10.1038/nature08812.
J. M. Matera, D. Egloff, N. Killoran, and M. B. Plenio, "Coherent control of quantum systems as a resource theory," Quantum Science and Technology 1, 01LT01 (2016), doi:10.1088/2058-9565/1/1/01LT01.
Seth Lloyd, "Quantum coherence in biological systems," Journal of Physics: Conference Series 302, 012037 (2011), doi:10.1088/1742-6596/302/1/012037.
Akihito Ishizaki and Graham R. Fleming, "Quantum Coherence in Photosynthetic Light Harvesting," Annual Review of Condensed Matter Physics 3, 333-361 (2012), doi:10.1146/annurev-conmatphys-020911-125126.
Andreas Winter and Dong Yang, "Operational Resource Theory of Coherence," Physical Review Letters 116, 120404 (2016), doi:10.1103/PhysRevLett.116.120404.
Alexander Streltsov, Gerardo Adesso, and Martin B. Plenio, "Colloquium: Quantum Coherence as a Resource," Reviews of Modern Physics 89, 041003 (2017), arXiv:1609.02439, doi:10.1103/RevModPhys.89.041003.
Felix Bischof, Hermann Kampermann, and Dagmar Bruß, "Resource Theory of Coherence Based on Positive-Operator-Valued Measures," Physical Review Letters 123, 110402 (2019), arXiv:1812.00018, doi:10.1103/PhysRevLett.123.110402.
Andrew Smith, Kanupriya Sinha, and Christopher Jarzynski, "Quantum Coherences and Classical Inhomogeneities as Equivalent Thermodynamics Resources," Entropy 24, 474 (2022), doi:10.3390/e24040474.
Yao Yao, Xing Xiao, Li Ge, and C. P. Sun, "Quantum coherence in multipartite systems," Physical Review A 92, 022112 (2015), doi:10.1103/PhysRevA.92.022112.
Ming-Liang Hu, Xueyuan Hu, Jieci Wang, Yi Peng, Yu-Ran Zhang, and Heng Fan, "Quantum coherence and geometric quantum discord," Physics Reports 762-764, 1-100 (2018), doi:10.1016/j.physrep.2018.07.004.
Giacomo Guarnieri, Michal Kolář, and Radim Filip, "Steady-State Coherences by Composite System-Bath Interactions," Physical Review Letters 121, 070401 (2018), arXiv:1802.08283, doi:10.1103/PhysRevLett.121.070401.
Giacomo Guarnieri, Daniele Morrone, Barış Çakmak, Francesco Plastina, and Steve Campbell, "Non-equilibrium steady-states of memoryless quantum collision models," Physics Letters A 384, 126576 (2020), arXiv:2001.01723, doi:10.1016/j.physleta.2020.126576.
Mike Reppert, Deborah Reppert, Leonardo A. Pachon, and Paul Brumer, "Equilibrium Coherence in the Multi-level Spin-boson Model," Physical Review A 102, 012211 (2020), arXiv:1911.07606, doi:10.1103/PhysRevA.102.012211.
Ricardo Román-Ancheyta, Michal Kolář, Giacomo Guarnieri, and Radim Filip, "Enhanced steady-state coherence via repeated system-bath interactions," Physical Review A 104, 062209 (2021), doi:10.1103/PhysRevA.104.062209.
J. D. Cresser and J. Anders, "Weak and ultrastrong coupling limits of the quantum mean force Gibbs state," Physical Review Letters 127, 250601 (2021), arXiv:2104.12606, doi:10.1103/PhysRevLett.127.250601.
Artur Slobodeniuk, Tomáš Novotný, and Radim Filip, "Extraction of autonomous quantum coherences," Quantum 6, 689 (2022), arXiv:2106.15721, doi:10.22331/q-2022-04-15-689.
Federico Cerisola, Marco Berritta, Stefano Scali, Simon A. R. Horsley, James D. Cresser, and Janet Anders, "Quantum-classical correspondence in spin-boson equilibrium states at arbitrary coupling," arXiv:2204.10874 (2022), doi:10.48550/arXiv.2204.10874.
Archak Purkayastha, Giacomo Guarnieri, Mark T. Mitchison, Radim Filip, and John Goold, "Tunable phonon-induced steady-state coherence in a double-quantum-dot charge qubit," npj Quantum Information 6, 1-7 (2020), doi:10.1038/s41534-020-0256-6.
X. Ma, J. J. Viennot, S. Kotler, J. D. Teufel, and K. W. Lehnert, "Non-classical energy squeezing of a macroscopic mechanical oscillator," Nature Physics 17, 322-326 (2021), arXiv:2005.04260, doi:10.1038/s41567-020-01102-1.
A. D. O'Connell, M. Hofheinz, M. Ansmann, Radoslaw C. Bialczak, M. Lenander, Erik Lucero, M. Neeley, D. Sank, H. Wang, M. Weides, J. Wenner, John M. Martinis, and A. N. Cleland, "Quantum ground state and single-phonon control of a mechanical resonator," Nature 464, 697-703 (2010), doi:10.1038/nature08967.
Benjamin M. Brubaker, Jonathan M. Kindem, Maxwell D. Urmey, Sarang Mittal, Robert D. Delaney, Peter S. Burns, Michael R. Vissers, Konrad W. Lehnert, and Cindy A. Regal, "Optomechanical ground-state cooling in a continuous and efficient electro-optic transducer," arXiv:2112.13429 [quant-ph] (2021).
Rishabh Sahu, William Hease, Alfredo Rueda, Georg Arnold, Liu Qiu, and Johannes Fink, "Quantum-enabled interface between microwave and telecom light," Nature Communications 13, 1276 (2022), arXiv:2107.08303, doi:10.1038/s41467-022-28924-2.
Robert D. Delaney, Maxwell D. Urmey, Sarang Mittal, Benjamin M. Brubaker, Jonathan M. Kindem, Peter S. Burns, Cindy A. Regal, and Konrad W. Lehnert, "Non-destructive optical readout of a superconducting qubit," arXiv:2110.09539 [quant-ph] (2021).
Ze-Liang Xiang, Sahel Ashhab, J. Q. You, and Franco Nori, "Hybrid quantum circuits: Superconducting circuits interacting with other quantum systems," Reviews of Modern Physics 85, 623-653 (2013), doi:10.1103/RevModPhys.85.623.
S. M. Girvin, M. H. Devoret, and R. J. Schoelkopf, "Circuit QED and engineering charge based superconducting qubits," Physica Scripta T137, 014012 (2009), arXiv:0912.3902, doi:10.1088/0031-8949/2009/T137/014012.
T. Baumgratz, M. Cramer, and M. B. Plenio, "Quantifying Coherence," Physical Review Letters 113, 140401 (2014), arXiv:1311.0275, doi:10.1103/PhysRevLett.113.140401.
Yu-Ran Zhang, Lian-He Shao, Yongming Li, and Heng Fan, "Quantifying coherence in infinite-dimensional systems," Physical Review A 93, 012334 (2016), doi:10.1103/PhysRevA.93.012334.
M. D. LaHaye, J. Suh, P. M. Echternach, K. C. Schwab, and M. L. Roukes, "Nanomechanical measurements of a superconducting qubit," Nature 459, 960-964 (2009), doi:10.1038/nature08093.
F. Rouxinol, Y. Hao, F. Brito, A. O. Caldeira, E. K. Irish, and M. D. LaHaye, "Measurements of nanoresonator-qubit interactions in a hybrid quantum electromechanical system," Nanotechnology 27, 364003 (2016), doi:10.1088/0957-4484/27/36/364003.
Yiwen Chu, Prashanta Kharel, William H. Renninger, Luke D. Burkhart, Luigi Frunzio, Peter T. Rakich, and Robert J. Schoelkopf, "Quantum acoustics with superconducting qubits," Science 358, 199-202 (2017), arXiv:1703.00342, doi:10.1126/science.aao1511.
Lucas R. Sletten, Bradley A. Moores, Jeremie J. Viennot, and Konrad W. Lehnert, "Resolving Phonon Fock States in a Multimode Cavity with a Double-Slit Qubit," Physical Review X 9, 021056 (2019), arXiv:1902.06344, doi:10.1103/PhysRevX.9.021056.
E. Alex Wollack, Agnetta Y. Cleland, Rachel G. Gruenke, Zhaoyou Wang, Patricio Arrangoiz-Arriola, and Amir H. Safavi-Naeini, "Quantum state preparation and tomography of entangled mechanical resonators," Nature 604, 463-467 (2022), arXiv:2110.07561, doi:10.1038/s41586-022-04500-y.
F. Bloch, W. W. Hansen, and M. Packard, "The Nuclear Induction Experiment," Physical Review 70, 474-485 (1946), doi:10.1103/PhysRev.70.474.
Hugo Cable, Peter L. Knight, and Terry Rudolph, "Measurement-induced localization of relative degrees of freedom," Physical Review A 71, 042107 (2005), doi:10.1103/PhysRevA.71.042107.
Thermally induced creation of quantum coherence. Radim Filip, Petr Marek, 10.1103/PhysRevA.90.063820Physical Review A. 9063820Radim Filip and Petr Marek, "Thermally induced creation of quantum coherence," Physical Review A 90, 063820 (2014).
Spinmotion entanglement and state diagnosis with squeezed oscillator wavepackets. Hsiang-Yu Lo, Daniel Kienzler, Matteo Ludwig De Clercq, Vlad Marinelli, Ben C Negnevitsky, Jonathan P Keitch, Home, 10.1038/nature14458arXiv:1412.7100Nature. 521physics, physics:quant-phHsiang-Yu Lo, Daniel Kienzler, Ludwig de Clercq, Matteo Marinelli, Vlad Negnevitsky, Ben C. Keitch, and Jonathan P. Home, "Spin- motion entanglement and state diagnosis with squeezed oscillator wavepackets," Nature 521, 336-339 (2015), arXiv:1412.7100 [physics, physics:quant-ph].
Observation of Quantum Interference between Separated Mechanical Oscillator Wave Packets. D Kienzler, C Flühmann, V Negnevitsky, H.-Y Lo, M Marinelli, D Nadlinger, J P Home, 10.1103/PhysRevLett.116.140402Physical Review Letters. 116140402D. Kienzler, C. Flühmann, V. Negnevitsky, H.-Y. Lo, M. Marinelli, D. Nadlinger, and J. P. Home, "Observation of Quantum Interference between Separated Mechanical Oscillator Wave Packets," Physical Review Letters 116, 140402 (2016).
Strong magnetic coupling between an electronic spin qubit and a mechanical resonator. P Rabl, P Cappellaro, M V Dutt, L Jiang, J R Maze, M D Lukin, 10.1103/PhysRevB.79.041302Physical Review B. 7941302P. Rabl, P. Cappellaro, M. V. Gurudev Dutt, L. Jiang, J. R. Maze, and M. D. Lukin, "Strong magnetic coupling between an electronic spin qubit and a mechanical resonator," Physical Review B 79, 041302 (2009).
Coherent Sensing of a Mechanical Resonator with a Single-Spin Qubit. Shimon Kolkowitz, Ania C Bleszynski Jayich, Quirin P Unterreithmeier, Steven D Bennett, Peter Rabl, J G E Harris, Mikhail D Lukin, 10.1126/science.1216821Science. 335Shimon Kolkowitz, Ania C. Bleszynski Jayich, Quirin P. Unterreithmeier, Steven D. Bennett, Peter Rabl, J. G. E. Harris, and Mikhail D. Lukin, "Coherent Sensing of a Mechanical Resonator with a Single-Spin Qubit," Science 335, 1603-1606 (2012).
Coupling a single NV center to a superconducting flux qubit via a nanomechanical resonator. Xin-Ke Li, Xin-Ke Li, Sheng-Li Ma, Ya-Long Ren, Ji-Kun Xie, Fu-Li Li, 10.1364/JOSAB.435409JOSA B. 39Xin-Ke Li, Xin-Ke Li, Sheng-Li Ma, Ya-Long Ren, Ji-Kun Xie, and Fu-Li Li, "Coupling a single NV center to a superconducting flux qubit via a nanomechanical resonator," JOSA B 39, 69-76 (2022).
Hybrid cavity mechanics with doped systems. Aurélien Dantan, Bhagya Nair, Guido Pupillo, Claudiu Genes, 10.1103/PhysRevA.90.033820Physical Review A. 9033820Aurélien Dantan, Bhagya Nair, Guido Pupillo, and Claudiu Genes, "Hybrid cavity mechanics with doped systems," Physical Review A 90, 033820 (2014).
Hybrid optomechanics for Quantum Technologies. Benjamin Rogers, Nicola Lo Gullo, Gabriele De Chiara, G Palma, Mauro Paternostro, 10.2478/qmetro-2014-0002arXiv:1402.1195Quantum Measurements and Quantum Metrology. 2Benjamin Rogers, Nicola Lo Gullo, Gabriele De Chiara, G. Massimo Palma, and Mauro Paternostro, "Hybrid optomechanics for Quan- tum Technologies," Quantum Measurements and Quantum Metrology 2, 11-43 (2014), arXiv:1402.1195.
Generation of the mechanical Schrödinger cat state in a hybrid atom-optomechanical system. Mohammad Hossein Najmeh Etehadi Abari, Mohammad Hossein Naderi, Naderi, 10.1364/JOSAB.393352JOSA B. 37Najmeh Etehadi Abari, Mohammad Hossein Naderi, and Mohammad Hossein Naderi, "Generation of the mechanical Schrödinger cat state in a hybrid atom-optomechanical system," JOSA B 37, 2146-2156 (2020).
Unconventional quantum sound-matter interactions in spin-optomechanicalcrystal hybrid systems. Xing-Liang Dong, Peng-Bo Li, Tao Liu, Franco Nori, 10.1103/PhysRevLett.126.203601arXiv:2104.09101Physical Review Letters. 126203601cond-mat, physics:quant-phXing-Liang Dong, Peng-Bo Li, Tao Liu, and Franco Nori, "Unconventional quantum sound-matter interactions in spin-optomechanical- crystal hybrid systems," Physical Review Letters 126, 203601 (2021), arXiv:2104.09101 [cond-mat, physics:quant-ph].
Ground-state cooling of a mechanical oscillator via a hybrid electro-optomechanical system. Roson Nongthombam, Ambaresh Sahoo, Amarendra K Sarma, 10.1103/PhysRevA.104.023509Physical Review A. 10423509Roson Nongthombam, Ambaresh Sahoo, and Amarendra K. Sarma, "Ground-state cooling of a mechanical oscillator via a hybrid electro-optomechanical system," Physical Review A 104, 023509 (2021).
Optomechanical strong coupling between a single cavity photon and a single atom. Javier Argüello, - Luengo, Darrick E Chang, arXiv:2108.03526arXiv:2108.03526physics, physics:quant-ph. physics, physics:quant-phJavier Argüello-Luengo and Darrick E. Chang, "Optomechanical strong coupling between a single cavity photon and a single atom," arXiv:2108.03526 [physics, physics:quant-ph] (2021), arXiv:2108.03526 [physics, physics:quant-ph].
Quantum coherence, correlations and nonclassical states in the two-qubit Rabi model with parametric oscillator. V Yogesh, Prosenjit Maity, 10.1016/j.physa.2021.126641arXiv:2106.06746Physica A: Statistical Mechanics and its Applications. 589126641quant-phV. Yogesh and Prosenjit Maity, "Quantum coherence, correlations and nonclassical states in the two-qubit Rabi model with parametric oscillator," Physica A: Statistical Mechanics and its Applications 589, 126641 (2022), arXiv:2106.06746 [quant-ph].
From Atomic To Mesoscale: The Role Of Quantum Coherence In Systems Of Various Complexities. Svetlana A Malinovskaya, Irina Novikova, World ScientificSvetlana A. Malinovskaya and Irina Novikova, From Atomic To Mesoscale: The Role Of Quantum Coherence In Systems Of Various Complexities (World Scientific, 2015).
Description of quantum coherence in thermodynamic processes requires constraints beyond free energy. Matteo Lostaglio, David Jennings, Terry Rudolph, 10.1038/ncomms7383Nature Communications. 66383Matteo Lostaglio, David Jennings, and Terry Rudolph, "Description of quantum coherence in thermodynamic processes requires con- straints beyond free energy," Nature Communications 6, 6383 (2015).
Fundamental limitations for quantum and nanoscale thermodynamics. Michał Horodecki, Jonathan Oppenheim, 10.1038/ncomms3059Nature Communications. 42059Michał Horodecki and Jonathan Oppenheim, "Fundamental limitations for quantum and nanoscale thermodynamics," Nature Communi- cations 4, 2059 (2013).
The extraction of work from quantum coherence. Kamil Korzekwa, Matteo Lostaglio, Jonathan Oppenheim, David Jennings, 10.1088/1367-2630/18/2/023045New Journal of Physics. 1823045Kamil Korzekwa, Matteo Lostaglio, Jonathan Oppenheim, and David Jennings, "The extraction of work from quantum coherence," New Journal of Physics 18, 023045 (2016).
Role of coherence in the nonequilibrium thermodynamics of quantum systems. G Francica, J Goold, F Plastina, 10.1103/PhysRevE.99.042105Physical Review E. 9942105G. Francica, J. Goold, and F. Plastina, "Role of coherence in the nonequilibrium thermodynamics of quantum systems," Physical Review E 99, 042105 (2019).
Tan Van Vu, Keiji Saito, 10.1103/PhysRevLett.128.010602Finite-Time Quantum Landauer Principle and Quantum Coherence. 12810602Tan Van Vu and Keiji Saito, "Finite-Time Quantum Landauer Principle and Quantum Coherence," Physical Review Letters 128, 010602 (2022).
QuTiP 2: A Python framework for the dynamics of open quantum systems. J R Johansson, P D Nation, Franco Nori, 10.1016/j.cpc.2012.11.019Computer Physics Communications. 184J. R. Johansson, P. D. Nation, and Franco Nori, "QuTiP 2: A Python framework for the dynamics of open quantum systems," Computer Physics Communications 184, 1234-1240 (2013).
QuTiP: An open-source Python framework for the dynamics of open quantum systems. J R Johansson, P D Nation, Franco Nori, 10.1016/j.cpc.2012.02.021Computer Physics Communications. 183J. R. Johansson, P. D. Nation, and Franco Nori, "QuTiP: An open-source Python framework for the dynamics of open quantum systems," Computer Physics Communications 183, 1760-1772 (2012).
| []
|
[
"Data Driven Modeling Social Media Influence using Differential Equations",
"Data Driven Modeling Social Media Influence using Differential Equations"
]
| [
"Bailu Jin [email protected] \nCranfield University Address: College Rd\nCranfieldBedfordUK\n",
"Weisi Guo [email protected] \nCranfield University Address: College Rd\nCranfieldBedfordUK\n"
]
| [
"Cranfield University Address: College Rd\nCranfieldBedfordUK",
"Cranfield University Address: College Rd\nCranfieldBedfordUK"
]
| []
| Individuals modify their opinions towards a topic based on their social interactions. Opinion evolution models conceptualize the change of opinion as a uni-dimensional continuum, and the effect of influence is built by the group size, the network structures, or the relations among opinions within the group. However, how to model the personal opinion evolution process under the effect of the online social influence as a function remains unclear. Here, we show that the uni-dimensional continuous user opinions can be represented by compressed high-dimensional word embeddings, and its evolution can be accurately modelled by an ordinary differential equation (ODE) that reflects the social network influencer interactions. We perform our analysis on 87 active users with corresponding influencers on the COVID-19 topic from 2020 to 2022. The regression results demonstrate that 99% of the variation in the quantified opinions can be explained by the way we model the connected opinions from their influencers. Our research on the COVID-19 topic and for the account analysed shows that social media users primarily shift their opinion based on influencers they follow (e.g., model explains for 99% variation) and self-evolution of opinion over a long time scale is limited. | 10.1109/asonam55673.2022.10068693 | [
"https://export.arxiv.org/pdf/2207.13814v1.pdf"
]
| 251,135,105 | 2207.13814 | 673cce40e95b308bf37dc092162a54a35535bf0c |
Data Driven Modeling Social Media Influence using Differential Equations
Bailu Jin
Cranfield University Address: College Rd
CranfieldBedfordUK
Weisi Guo
Cranfield University Address: College Rd
CranfieldBedfordUK
Data Driven Modeling Social Media Influence using Differential Equations
Individuals modify their opinions towards a topic based on their social interactions. Opinion evolution models conceptualize the change of opinion on a uni-dimensional continuum, and the effect of influence is determined by the group size, the network structure, or the relations among opinions within the group. However, how to model the personal opinion evolution process as a function of online social influence remains unclear. Here, we show that uni-dimensional continuous user opinions can be represented by compressed high-dimensional word embeddings, and that their evolution can be accurately modelled by an ordinary differential equation (ODE) that reflects interactions with the influencers in the social network. We perform our analysis on 87 active users with corresponding influencers on the COVID-19 topic from 2020 to 2022. The regression results demonstrate that 99% of the variation in the quantified opinions can be explained by the way we model the connected opinions from their influencers. For the COVID-19 topic and the accounts analysed, this shows that social media users primarily shift their opinion based on influencers they follow (the model explains 99% of the variation), while self-evolution of opinion over a long time scale is limited.
I. INTRODUCTION
A. Background
Research stretching back to the 1940s has shown that social influence has a vital effect on opinion modification. Empirical research in psychology has shown that individuals evolve their opinions towards a topic as they seek similarity with others or conform under social pressure. Sociologists have modelled social influence as a force, determined by the group size, the network structure, and the relations among opinions, in order to mathematically capture the observed evolution of personal opinions.
As internet-based tools become a primary part of people's communication repertoires, online interaction and interpersonal communication are rapidly converging [1]. Social influence theories have recently been used to analyse online social networks (OSNs). Online social influence can involve and interact with real-world crises, such as the Russia-Ukraine conflict [2]. Therefore, understanding the mechanism of online social influence [3], [4] is critical.
However, how to model the personal opinion evolution process as a function of online social influence remains unclear. In this preliminary paper, we aim to apply social influence modelling methods to online social networks and evaluate the resulting online opinion evolution model using real-time Twitter data.
B. Related Work 1) Empirical research in psychology: Individuals evolve their opinions, attitudes, or stances towards topics through their social interactions. Empirical research in psychology has studied the phenomenon of opinion evolution during interpersonal interaction. In the 1950s, Asch conducted empirical experiments on social conformity, which showed that people modify their opinions to seek similarity with others in the group [5]. Other experiments on small-group behavior, decision making, and innovation diffusion showed how interactions reduce opinion differences between individuals. In 2012, a 61-million-user experiment was launched on Facebook during the 2010 US congressional elections [6]. The results show that human behavior is also amenable to interventions from online networks.
2) Models of Social Influence: Based on these empirical conclusions, models of social influence have been proposed to explain the social phenomena of opinion clustering and opinion controversy. French [7] introduced the earliest formal model of opinion evolution in a group.
Starting from the formal model, the change of opinion is typically conceptualized on a uni-dimensional continuum and determined by the group size, the network structure, or the relations among opinions within the group. The French-DeGroot model showed how social influence always leads to opinion consensus under the assumption that people in the group always influence each other [8]. However, opinion consensus is not the only outcome observed in group discussion experiments. To explain opinion clustering, the Hegselmann-Krause model adds a bounded-confidence attribute that blocks influence from opinions that are too far away [9]; a toy illustration of the two dynamics is sketched below.
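To make the contrast between these classical dynamics concrete, the following minimal Python sketch (all opinions, weights, and the confidence bound are made-up toy values, not taken from the cited papers) performs one DeGroot-style averaging step and one Hegselmann-Krause bounded-confidence step.

import numpy as np

# Toy opinions of five agents on a uni-dimensional continuum (hypothetical values).
x = np.array([0.05, 0.20, 0.45, 0.80, 0.95])

# DeGroot-style step: a row-stochastic weight matrix mixes everyone's opinion,
# so repeated application drives the group towards consensus.
W = np.full((5, 5), 1.0 / 5.0)        # uniform weights, a toy choice
x_degroot = W @ x

# Hegselmann-Krause step: each agent only averages opinions within a confidence
# bound eps, which can produce stable clusters instead of a single consensus.
eps = 0.3
x_hk = np.array([x[np.abs(x - xi) <= eps].mean() for xi in x])

print("DeGroot step:          ", np.round(x_degroot, 3))
print("Hegselmann-Krause step:", np.round(x_hk, 3))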
C. Contributions & Novelty
Several studies have investigated the evolution of opinions on a topic on Twitter. However, most studies focus on the sentiment evolution of the majority of users. In this paper, we propose an analysis of opinion evolution at the personal level. This paper's three major contributions are:
• We introduce a data-driven pipeline representing the personal evolution of opinions with a time kernel.
• Based on previous psychology models, we model the opinion evolution process as a function of online social influence using an ordinary differential equation.
• Our opinion evolution model is applied to real-time Twitter data under the COVID-19 topic. We find that social media users primarily shift their opinion based on influencers they follow, and self-evolution of opinion over a long time scale is limited.
II. SOCIAL NETWORK OPINION DIFFERENTIAL EQUATION MODEL
In this part, we show how we apply the social influence modelling method to online social networks. Fig. 1 presents the way we model the personal opinion evolution process as a function of online social influence.
Fig. 1A) shows two example types of Twitter users within a defined topic: on the left, the recipient i and the influencers j (determined by Twitter following); on the right, over time, the influencers exert forces of social influence on the recipient's opinion, leading to its evolution. Our aim is to model this evolution process, so a time kernel is applied to capture the modification of opinion, as shown in Fig. 1B). The size of the time kernel is selected to capture sufficient activity within a period (typically 10 days).
We use x to represent the quantified opinion (compressed from the aforementioned word embedding), and the opinion of recipient i at time kernel t is denoted $x_i^t$. We use the function $g(x_i^1, x_j^1)$ to represent the social influence from j to i at time t = 1. However, the nature of g(·) is unknown at this point and has to be derived either from previous experimentation (see below) or from function discovery. The linear combination of the previous opinion with confidence weight $w_{ii}$ and the social influence with influence weight $w_{ij}$ gives the opinion evolution model
$$x_i^2 = w_{ii} x_i^1 + \sum_{j,\, j \neq i} w_{ij}\, g(x_i^1, x_j^1). \qquad (1)$$
Using French's formal theory [7], in this paper we model the discrepancy between the opinions $x_i^t$ and $x_j^t$ to determine the effect from influencer j on recipient i, so that the influence effect is proportional to the size of the difference between their opinions, $g(x_j^t, x_i^t) = (x_j^t - x_i^t)$.
Beyond the function itself, the model includes influence weights $w_{ij}$ representing the strength of the effect. Formally, the social pressure on recipient i is the sum of the effects from all influencers j, conditioned by the weight $w_{ij}$ of the tie between i and j ($-1.0 \le w_{ij} \le 1.0$). The self-weight $w_{ii}$ of recipient i represents to what degree the recipient is anchored on its previous position ($-1.0 \le w_{ii} \le 1.0$) [10]. The influence process takes place gradually, as each influencer changes its position over time and influences the recipient toward that position. For each recipient, the discrete-time interpersonal influence mechanism can be described as an ordinary differential equation
$$x_i^{t+1} = w_{ii} x_i^t + \sum_{j,\, j \neq i} w_{ij}\,(x_j^t - x_i^t). \qquad (2)$$
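As an illustration of Eq. (2), the following Python sketch performs one discrete-time update for a single recipient; the opinion values and weights are hypothetical toy numbers rather than estimates from the data.

import numpy as np

def opinion_update(x_i, x_j, w_ii, w_ij):
    """One step of Eq. (2): x_i(t+1) = w_ii * x_i(t) + sum_j w_ij * (x_j(t) - x_i(t))."""
    return w_ii * x_i + np.sum(w_ij * (x_j - x_i))

x_i = 0.2                                # recipient opinion at time t (toy value)
x_j = np.array([0.6, -0.1, 0.4])         # influencer opinions at time t (toy values)
w_ii = 0.9                               # self-weight (anchoring on the previous position)
w_ij = np.array([0.05, 0.02, 0.03])      # influence weights of the three ties

print(opinion_update(x_i, x_j, w_ii, w_ij))   # recipient opinion at time t+1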
Given the opinion evolution model, our approach is to: (1) identify topic-specific influencers and recipients using Twitter, (2) apply a time kernel to analyse opinion evolution, and (3) fit the data to empirical psychology ODE models and find the influencer weight.
III. CASE STUDY: PIPELINE & DATA
Here we choose COVID-19 as our specific topic. The COVID-19 pandemic has been ongoing globally since December 2019, and discussions on disease symptoms, prevention, vaccines, and local policies spread widely online. From January 2020 to September 2021, over 35 million unique users posted over 198 million tweets using COVID-19-related keywords. Our work concentrates on the evolution of personal opinions under the influence of the online network during the pandemic.
A. Available Users
Our model assumes that topic-specific influencers provide the forces of social influence on the recipient's opinion, leading to the evolution of opinion on the recipient. In the case study, we first need to locate these two types of Twitter users as the nodes in the opinion influence network: the recipient i and the influencer j. We use the actual Twitter "Following" relationship to build the edges of the influence network, assuming that the recipient receives the forces of influence from their "Following" accounts.
To capture the opinion evolution process over a long time range, only "active users" are considered as recipients and influencers. In research on the communication effects of mass media, "active users" are defined as users with a minimum level of activity. In our case, we set 10 days as one time period and defined the minimum standard as posting more than one topic-specific tweet in at least 60 time periods between 1 March 2020 and 30 January 2022 (700 days); missing periods inherit the opinion value of the previous period t − 1. To start building up the active-user network, we first look into a list of COVID-19 experts on Twitter, including medical professionals, data scientists, and journalists. We used the Twitter API to gather each user's "Following" relationships and tweet contents. We manually picked a set of 15 keywords representing the COVID-19 topic and then filtered the topic-specific tweets that contain at least one of the keywords. Fig. 2 shows the number of all tweets and topic-specific tweets generated by active users from March 2020 to February 2022: 175624 tweets and 85946 topic-related tweets in total.
Finally, we found 87 active recipients, and the mean value of the number of influencers per recipient is 17.655. Some recipients share some of the same influencers, and one recipient may act as an influencer in another recipient's network. Although the following links appear between the influencers, the interactions between influencers are not considered in the recipient's model.
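The activity filter described above can be sketched as follows; the tweet records, field layout, and keyword list are placeholders, and the real pipeline retrieves the content through the Twitter API rather than from an in-memory list.

from datetime import date

PERIOD_DAYS = 10          # one time period = 10 days
MIN_ACTIVE_PERIODS = 60   # minimum number of periods with topic-specific activity
START = date(2020, 3, 1)  # start of the observation window

def period_index(day):
    """Map a calendar date to its 10-day time-period index."""
    return (day - START).days // PERIOD_DAYS

def is_topic_tweet(text, keywords):
    text = text.lower()
    return any(k in text for k in keywords)

def is_active(tweets, keywords):
    """tweets: list of (date, text) pairs for one user (placeholder format)."""
    periods = {period_index(d) for d, t in tweets if is_topic_tweet(t, keywords)}
    return len(periods) >= MIN_ACTIVE_PERIODS

# Tiny made-up example with a truncated keyword list.
keywords = ["covid", "vaccine", "pandemic"]
tweets = [(date(2020, 3, 2), "Vaccine trials are starting"),
          (date(2020, 3, 14), "Nothing related to the topic")]
print(is_active(tweets, keywords))   # False for this toy user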
B. Word Embedding and Compression to Uni-dimensional Opinion
In this part, we present how we infer opinions from online social media data in order to regress our opinion model.
The previously introduced social influence models mainly conceptualize opinion using pre-event and post-event psychology survey questions. In online social networks, we instead use the text content posted by an individual to represent that individual's opinion. We aim to apply the social influence modelling method to the online social network, so we use compressed word-embedding vectors to capture the variation of opinion.
In our case study, we gather the COVID-19-specific tweet content as the initial input. We represent tweets on a uni-dimensional continuum via word embedding and dimensionality reduction.
The process is shown in Fig. 3. For word embedding, we use Sentence-BERT with the pretrained all-mpnet-base-v2 model [11]. Sentence-BERT takes sentences as input and produces 768-dimensional output vectors. It uses siamese and triplet network structures on top of the pretrained BERT network, leading to strong performance on transfer-learning tasks.
We pass the topic-specific tweets from the selected "active users" to Sentence-BERT to obtain vector representations of each tweet, then take the average of the vectors in each time window. In this case, each user has 70 768-dimensional word-embedding vectors to represent their time-varying opinions on the COVID-19 topic.
Each vector is then projected onto a uni-dimensional axis using the Uniform Manifold Approximation and Projection (UMAP) algorithm [12]. UMAP is a scalable algorithm for dimension reduction that searches for a low-dimensional projection of the data with the closest topological structure. After UMAP compresses the 768-dimensional word-embedding vectors to one dimension, each user has 70 uni-dimensional values representing their opinion evolution.
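A minimal sketch of this embedding-and-compression step with the sentence-transformers and umap-learn packages is given below; the tweet texts are placeholders, the grouping into time kernels is assumed to have been done already, and the small n_neighbors value is only needed because the toy example has very few windows (the defaults are reasonable with the full 70-window data).

import numpy as np
from sentence_transformers import SentenceTransformer
import umap

model = SentenceTransformer("all-mpnet-base-v2")

# One list of tweet strings per 10-day time kernel (placeholder data).
tweets_by_window = [[f"covid update number {i}", f"vaccine news item {i}"] for i in range(6)]

# Average the 768-dimensional Sentence-BERT embeddings within each window.
window_vectors = np.vstack([model.encode(tweets).mean(axis=0) for tweets in tweets_by_window])

# Compress the windowed embeddings to a uni-dimensional opinion value per window.
reducer = umap.UMAP(n_components=1, n_neighbors=3, random_state=0)
opinion_1d = reducer.fit_transform(window_vectors).ravel()
print(opinion_1d.shape)   # one scalar opinion per time kernel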
In Fig. 3, we also present the visualization of 2-dimensional vectors of users' opinions on this topic to give an impression of opinion evolution. Red and blue dots represent the opinions from one of the recipients and the corresponding thirty influencers. The five sub-graphs visualize the variations of opinions from time kernel 20 to time kernel 25. It should be noted that the 2-dimensional vectors are only used in this visualization, and the opinion vectors would be compressed to uni-dimensional during model fitting to match the opinion evolution model.
IV. OPINION MODEL EVALUATION
A. Multi-linear Regression
Here we have the quantified opinions from 87 recipients and the corresponding influencers in 70 time kernels. We will then use the multi-linear regression method to evaluate the model performance in generating the opinion evolution process.
For each recipient, we would build a regression process on the model
$$x_i^{t+1} = w_{ii} x_i^t + \sum_{j,\, j \neq i} w_{ij}\,(x_j^t - x_i^t) = \beta_{ii} x_i^t + \sum_{j,\, j \neq i} \beta_{ij} x_j^t \qquad (3)$$
The independent variables are $x_i^t$ and all corresponding $x_j^t$, and the dependent variable is $x_i^{t+1}$. In each case the number of observations is 69, since we capture 70 time windows, while the number of independent variables varies with the number of influencers n. We use the ordinary least squares (OLS) method to estimate the coefficients of the multi-linear regression; OLS finds the coefficients by minimizing the sum of squared errors between the observed and predicted values. Here, the coefficients $\beta_{ij}$ correspond to the influence weights $w_{ij}$, and the self-weight $w_{ii}$ can be calculated from the coefficients $\beta_{ii}$ and $\beta_{ij}$.
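A per-recipient regression of this form can be sketched with statsmodels (one common OLS implementation) as below; the opinion trajectories here are simulated placeholders, whereas in the study the values of x_i and x_j come from the 70-kernel pipeline above.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T, n_infl = 70, 3                          # 70 time kernels, toy number of influencers

# Placeholder uni-dimensional opinion trajectories.
x_j = rng.normal(size=(T, n_infl))                          # influencers
x_i = 0.5 * x_j.mean(axis=1) + 0.1 * rng.normal(size=T)     # recipient (synthetic)

# Regress x_i(t+1) on x_i(t) and all x_j(t), as in Eq. (3); no intercept term.
X = np.column_stack([x_i[:-1], x_j[:-1]])
y = x_i[1:]
res = sm.OLS(y, X).fit()

beta_ii, beta_ij = res.params[0], res.params[1:]
w_ij = beta_ij                              # influence weights
w_ii = beta_ii + beta_ij.sum()              # recover the self-weight from the coefficients
print(res.rsquared_adj, res.f_pvalue)       # adjusted R-squared and Prob(F-statistic)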
B. Regression Result
The results of the regression models are shown in Table I. We have 87 influence models and 69 observations per model. The number of influencers varies across recipients, with a mean of 17.655 and a variance of 123.07.
The remaining rows of Table I report the attributes that describe the performance of our multi-linear regressions: the adjusted R-squared and the probability of the F-statistic.
The adjusted R-squared score shows the explanatory power of regression models that contain multiple predictors. Across the 87 regression models, the mean adjusted R-squared score is 0.98232 and the variance is 0.00769, indicating that the influencers explain at least 98% of the variance in the recipient's opinion.
The null hypothesis of the F-test is that the effects of all predictors are 0. The probability of the F-statistic (its p-value) indicates whether the group of independent variables is jointly significant: a small value allows the null hypothesis to be rejected. All F-statistic probabilities in our models are close to zero with little variance, indicating that the predictors are statistically significant.
In summary, the high adjusted R-squared values and the near-zero probabilities of the F-statistic reveal the strong explanatory power and statistical significance of the predictors.
V. CONCLUSION & FUTURE WORK
This paper models the personal opinion evolution process as a function of online social influence. We propose the social network opinion ODE model, which considers both individual behaviour and network structure, and we use empirical Twitter data to fit the parameters of the model. To achieve this, we introduced a pipeline to quantitatively represent personal opinion evolution with real-time Twitter data under the COVID-19 topic. Using the quantified real-time data as input, and at least for this topic, the opinion data indicate that social media users primarily shift their opinion based on influencers they follow, while self-evolution of opinions over a long time scale is limited compared to the influence from others.
Our next step is to analyze the relationships between the estimated influence weights and the actual interaction activities between users, revealing why and how influencers can be influential. We will also seek to discover the social influence function through data-driven function discovery, allowing non-linear forms and diverse other topics.
Fig. 1. A) The recipient i and the influencers j (determined by Twitter following); the influencers provide the forces of social influence on the recipient's opinion over time. B) The evolution process of recipient i under the forces from influencers with a time kernel.
Fig. 2. Number of all Tweets and topic-specific Tweets from March 2020 to Feb 2022.
Fig. 3. Process of word embedding, dimensionality reduction and visualization of users' opinions on the COVID-19 topic.
TABLE I
OLS REGRESSION RESULTS

No. of Influence Models      87
Observations per Model       69
Adj. R-squared Mean          0.98232
Adj. R-squared Var           0.00769
Prob (F-statistic) Mean      0.00012
Prob (F-statistic) Var       1.26e-06
ACKNOWLEDGMENTThe work is supported by "Networked Social Influence and Acceptance in a New Age of Crises", funded by USAF OFSR under Grant No.: FA8655-20-1-7031.
REFERENCES
[1] A. J. Flanagin, "Online social influence and the convergence of mass and interpersonal communication," Human Communication Research, vol. 43, no. 4, pp. 450-463, 2017.
[2] N. Tkachenko and W. Guo, "Conflict detection in linguistically diverse online social networks: a russia-ukraine case study," in ACM International Conference on Management of Digital EcoSystems (MEDES).
[3] D. Centola, "Influential Networks," Nature Human Behaviour, vol. 3, no. 7, 2019.
[4] O. Biran, S. Rosenthal, J. Andreas, K. McKeown, and O. Rambow, "Detecting influencers in written online conversations," in Proceedings of the Second Workshop on Language in Social Media. Montréal, Canada: Association for Computational Linguistics, Jun. 2012, pp. 37-45. [Online]. Available: https://aclanthology.org/W12-2105
[5] S. E. Asch, "Studies of independence and conformity: I. A minority of one against a unanimous majority," Psychological Monographs: General and Applied, vol. 70, no. 9, p. 1, 1956.
[6] R. M. Bond, C. J. Fariss, J. J. Jones, A. D. Kramer, C. Marlow, J. E. Settle, and J. H. Fowler, "A 61-million-person experiment in social influence and political mobilization," Nature, vol. 489, no. 7415, pp. 295-298, 2012.
[7] J. R. French Jr, "A formal theory of social power," Psychological Review, vol. 63, no. 3, p. 181, 1956.
[8] M. H. DeGroot, "Reaching a consensus," Journal of the American Statistical Association, vol. 69, no. 345, pp. 118-121, 1974.
[9] R. Hegselmann and U. Krause, "Opinion dynamics and bounded confidence models, analysis, and simulation," Journal of Artificial Societies and Social Simulation, vol. 5, no. 3, 2002.
[10] D. G. Myers, "Polarizing effects of social interaction," Group Decision Making, vol. 125, pp. 137-138, 1982.
[11] N. Reimers and I. Gurevych, "Sentence-BERT: Sentence embeddings using siamese BERT-networks," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Nov. 2019. [Online]. Available: https://arxiv.org/abs/1908.10084
[12] L. McInnes, J. Healy, and J. Melville, "UMAP: Uniform manifold approximation and projection for dimension reduction," arXiv preprint arXiv:1802.03426, 2018.
| []
|
[
"Non-Asymptotic Analysis of Ensemble Kalman Updates: Effective Dimension and Localization",
"Non-Asymptotic Analysis of Ensemble Kalman Updates: Effective Dimension and Localization"
]
| [
"Omar Al Ghattas \nUniversity of Chicago\n\n",
"Daniel Sanz-Alonso \nUniversity of Chicago\n\n"
]
| [
"University of Chicago\n",
"University of Chicago\n"
]
| []
| Many modern algorithms for inverse problems and data assimilation rely on ensemble Kalman updates to blend prior predictions with observed data. Ensemble Kalman methods often perform well with a small ensemble size, which is essential in applications where generating each particle is costly. This paper develops a non-asymptotic analysis of ensemble Kalman updates that rigorously explains why a small ensemble size suffices if the prior covariance has moderate effective dimension due to fast spectrum decay or approximate sparsity. We present our theory in a unified framework, comparing several implementations of ensemble Kalman updates that use perturbed observations, square root filtering, and localization. As part of our analysis, we develop new dimension-free covariance estimation bounds for approximately sparse matrices that may be of independent interest. | 10.48550/arxiv.2208.03246 | [
"https://export.arxiv.org/pdf/2208.03246v1.pdf"
]
| 251,371,585 | 2208.03246 | b458387798c8b4b2b11d157e07b25366f3829551 |
Non-Asymptotic Analysis of Ensemble Kalman Updates: Effective Dimension and Localization
5 Aug 2022
Omar Al Ghattas
University of Chicago
Daniel Sanz-Alonso
University of Chicago
Non-Asymptotic Analysis of Ensemble Kalman Updates: Effective Dimension and Localization
5 Aug 2022
Many modern algorithms for inverse problems and data assimilation rely on ensemble Kalman updates to blend prior predictions with observed data. Ensemble Kalman methods often perform well with a small ensemble size, which is essential in applications where generating each particle is costly. This paper develops a non-asymptotic analysis of ensemble Kalman updates that rigorously explains why a small ensemble size suffices if the prior covariance has moderate effective dimension due to fast spectrum decay or approximate sparsity. We present our theory in a unified framework, comparing several implementations of ensemble Kalman updates that use perturbed observations, square root filtering, and localization. As part of our analysis, we develop new dimension-free covariance estimation bounds for approximately sparse matrices that may be of independent interest.
Introduction
Many algorithms for inverse problems and data assimilation rely on ensemble Kalman updates to blend prior predictions with observed data. The main motivation behind ensemble Kalman methods is that they often perform well with a small ensemble size N , which is essential in applications where generating each particle is costly. However, theoretical studies have primarily focused on large ensemble asymptotics, that is, on the limit N → ∞. While these mean-field results are mathematically interesting and have led to significant practical improvements, they fail to explain the empirical success of ensemble Kalman methods when deployed with a small ensemble size. The aim of this paper is to develop a non-asymptotic analysis of ensemble Kalman updates that rigorously explains why, and under what circumstances, a small ensemble size may suffice. To that end, we establish non-asymptotic error bounds in terms of suitable notions of effective dimension of the prior covariance model that account for spectrum decay (which may represent smoothness of a prior random field) and approximate sparsity (which may represent spatial decay of correlations). Our work complements mean-field analyses of ensemble Kalman updates and identifies scenarios where mean-field behavior holds with moderate N .
In addition to contributing to explain the practical success of ensemble Kalman methods, our non-asymptotic perspective allows us to tell apart, on accuracy grounds, implementations of ensemble Kalman updates that use perturbed observations and square root filtering. These implementations become equivalent in the large N limit, and therefore their differences in accuracy cannot be captured by asymptotic results. Furthermore, our non-asymptotic perspective provides new understanding on the importance of localization, a procedure widely used by practitioners that involves tapering or "localizing" empirical covariance estimates to avoid spurious correlations.
Rather than providing a complete, definite analysis of any particular ensemble Kalman method, our goal is to bring to bear a new set of tools from high-dimensional probability and statistics to the study of these algorithms. In particular, our work builds on, and contributes to, the theory of high-dimensional covariance estimation, which we believe is fundamental to the understanding of ensemble Kalman methods. To make the presentation accessible to a wide audience, we assume no background knowledge on covariance estimation or on ensemble Kalman methods.
Problem Description
Consider the inverse problem of recovering u ∈ R d from data y ∈ R k , corrupted by noise η, where y = G(u) + η, (1.1) G : R d → R k is the forward model, and η ∼ P η = N (0, Γ) is the observation error with positivedefinite covariance matrix Γ. An ensemble Kalman update takes as input a prior ensemble {u n } N n=1 and observed data y, and returns as output an updated ensemble {υ n } N n=1 that blends together the information in the prior ensemble and in the data. Two main types of problems will be investigated: posterior approximation and sequential optimization. In the former, ensemble Kalman updates are used to approximate a posterior distribution in a Bayesian linear setting; in the latter, they are used within optimization algorithms for nonlinear inverse problems.
Posterior Approximation
If the forward model is linear, i.e. $G(u) = Au$ for some matrix $A \in \mathbb{R}^{k \times d}$, and $A$ is ill-conditioned or $d \gg k$, naive inversion of the data by means of the (generalized) inverse of $A$ results in an amplification of small observation error $\eta$ into large error in the reconstruction of $u$. In such situations, regularization is needed to stabilize the solution. To this end, one may adopt a Bayesian approach and place a Gaussian prior on the unknown $u \sim P_u = N(m, C)$ with positive-definite $C$; the prior distribution then acts as a probabilistic regularizer. The Bayesian solution to the inverse problem (1.1) is a full characterization of the posterior distribution $P_{u|y}$, that is, the distribution of $u$ given $y$. A standard calculation shows that $P_{u|y} = N(\mu, \Sigma)$, with
$$\mu = m + CA^\top (ACA^\top + \Gamma)^{-1}(y - Am), \qquad \Sigma = C - CA^\top (ACA^\top + \Gamma)^{-1} AC. \qquad (1.2)$$
A posterior-approximation ensemble Kalman update transforms a prior ensemble $\{u_n\}_{n=1}^N$ drawn from $P_u$ into an updated ensemble $\{\upsilon_n\}_{n=1}^N$ whose sample mean and sample covariance approximate the mean and covariance of $P_{u|y}$. Ensemble Kalman updates enjoy a low computational and memory cost when the ensemble size $N$ is smaller than the state dimension $d$. In Section 3 we establish non-asymptotic error bounds that ensure that if $N$ is larger than a suitably defined effective dimension, then the sample mean and sample covariance of the updated ensemble approximate well the true posterior mean and covariance in (1.2). We refer to methods that are capable of recovering the posterior $P_{u|y}$ in a linear-Gaussian setting as posterior-approximation algorithms.
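When d is moderate, the exact posterior moments (1.2) can be computed directly, which is useful as a reference point for the ensemble approximations discussed next; the NumPy sketch below uses arbitrary toy values for m, C, A, Gamma and y.

import numpy as np

def gaussian_posterior(m, C, A, Gamma, y):
    """Exact posterior mean and covariance (1.2) for the linear model y = Au + eta."""
    S = A @ C @ A.T + Gamma                 # innovation covariance A C A^T + Gamma
    K = C @ A.T @ np.linalg.inv(S)          # Kalman gain
    mu = m + K @ (y - A @ m)
    Sigma = C - K @ A @ C
    return mu, Sigma

# Toy linear-Gaussian problem (all values made up for illustration).
rng = np.random.default_rng(0)
d, k = 4, 2
m, C = np.zeros(d), np.eye(d)
A, Gamma = rng.normal(size=(k, d)), 0.1 * np.eye(k)
y = A @ rng.multivariate_normal(m, C) + rng.multivariate_normal(np.zeros(k), Gamma)

mu, Sigma = gaussian_posterior(m, C, A, Gamma, y)
print(mu.shape, Sigma.shape)   # (4,), (4, 4)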
Sequential Optimization
When faced with a general nonlinear model G, exact characterization of the posterior can be challenging. One may then opt for an optimization framework and solve the inverse problem (1.1) by minimizing a user-chosen objective function. Starting from a prior ensemble {u n } N n=1 drawn from a measure P u that encodes prior beliefs about u, an ensemble Kalman update returns an updated ensemble {υ n } N n=1 whose sample mean approximates the desired minimizer. The process can be iterated by taking the updated ensemble to be the prior ensemble of a new ensemble Kalman update. Under suitable conditions on G, and after a sufficient number of such updates, all particles in the ensemble collapse into the minimizer of the objective. Ensemble Kalman optimization algorithms are derivative-free methods, and are therefore particularly useful when derivatives of the model G are unavailable or expensive to compute. As for posterior-approximation algorithms, implementing each update has low computational and memory cost when the ensemble size N is small. In Section 4 we will establish non-asymptotic error bounds that ensure that if N is larger than a suitably defined effective dimension, then each particle update u n → υ n , 1 ≤ n ≤ N, approximates well an idealized mean-field update computed with an infinite number of particles; this suggests that the evolution of particles along an ensemble-based sequential optimizer is close to an idealized mean-field evolution. We refer to methods that solve the inverse problem (1.1) by minimization of an objective function as sequential-optimization algorithms.
Summary of Contributions and Outline
• Section 2 contains a novel, unified exposition of the ensemble Kalman updates analyzed in this paper. We further introduce and motivate the model assumptions that underpin our theory.
• Section 3 is concerned with posterior-approximation algorithms. The main results, Theorems 3.3 and 3.5, give non-asymptotic bounds on the estimation of the posterior mean and covariance in terms of a standard notion of effective dimension that accounts for spectrum decay in the prior covariance model. Our analysis explains the statistical advantage of square root updates over perturbed observation ones. We also discuss the deterioration of our bounds in small noise limits where the prior and the posterior become mutually singular.
• Section 4 is concerned with sequential-optimization algorithms. The main results, Theorems 4.5 and 4.7, give non-asymptotic bounds on the approximation of mean-field particle updates using ensemble Kalman updates with and without localization. Our analysis explains the advantage of localized updates if the prior covariance satisfies a soft-sparsity condition. For the study of localized updates, we show in Theorems 4.1 and 4.3 new dimension-free covariance estimation bounds in terms of a new notion of effective dimension that simultaneously accounts for spectrum decay and approximate sparsity in the prior covariance model.
• Section 5 concludes with a summary of our work and several research directions that stem from our non-asymptotic analysis of ensemble Kalman updates. We also discuss the potential and limitations of localization in posterior-approximation algorithms.
• The proofs of all our results are deferred to three appendices.
Related Work
Ensemble Kalman methods -overviewed in [30,49,46,77,19,79]-first appeared as filtering algorithms in the data assimilation literature [28,31,15,43,44]. The goal of data assimilation is to estimate a time-evolving state as new observations become available [75,3,58,67,62,81,79]. Ensemble Kalman filters (EnKFs) solve an inverse problem of the form (1.1) every time a new observation is acquired. In that filtering context, (1.1) encodes the relationship between the state u and observation y at a given time t, and the prior on u is specified by propagating a probabilistic estimate of the state at time t− 1 through the dynamical system that governs the state evolution. To approximate this prior, EnKFs propagate an ensemble of N particles through the dynamics, and subsequently update this prior forecast ensemble into an updated analysis ensemble that assimilates the new observation. Thus, an ensemble Kalman update is performed every time a new observation is acquired. The goal is that the sample mean and sample covariance of the updated ensemble approximate well the mean and covariance of the filtering distribution, that is, the conditional distribution of the state at time t given all observations up to time t. While only providing provably accurate posterior approximation in linear settings [27], EnKFs are among the most popular methods for high-dimensional nonlinear filtering, in particular in numerical weather forecasting. The papers [37,65,76] introduced ensemble Kalman methods for inverse problems in petroleum engineering and the geophysical sciences. Application-agnostic ensemble Kalman methods for inverse problems were developed in [48,47], inspired by classical regularization schemes [39]. Since then, a wide range of sequential-optimization algorithms for inverse problems have been proposed that differ in the objective function they seek to minimize and in how ensemble Kalman updates are implemented. We refer to Subsection 2.1 for further background and to [19] for a review.
Ensemble Kalman methods for inverse problems and data assimilation have been studied extensively from a large N asymptotic point of view, see e.g. [66,61,69,55,27,24,41,59,34,10,22,25]. A complementary line of work [40,36,50,91,92] has focused on challenges faced by ensemble Kalman methods, including loss of stability and catastrophic filter divergence. Two overarching themes that underlie large N asymptotic analyses are to ensure consistency and to derive equations for the mean-field evolution of the ensemble. Related to this second theme, several works (e.g. [82,12,13,38,19,93,68]) set the analysis in a continuum limit; the idea is to view Kalman updates as occurring over an artificial discrete-time variable, and then take the time between updates to be infinitesimally small to formally derive differential equations for the evolution of the ensemble or its density. Large N asymptotics and continuum limits have resulted in new theoretical insights and practical advancements. However, an important caveat of these results is that they cannot tell apart implementations of ensemble Kalman methods that become equivalent in large N or continuum limit asymptotic regimes. Moreover, several papers (e.g. [5,6,51,82]) have noted that large N asymptotic analyses fail to explain empirical results that report good performance with a moderately sized ensemble in problems with high state dimension; for instance, d ∼ 10 9 and N ∼ 10 2 in operational numerical weather prediction. Finally, the note [72] shows subtle but important differences in the evolution of interacting particle systems with finite ensemble size when compared to their mean-field counterparts [34].
In this paper we adopt a non-asymptotic viewpoint to establish sufficient conditions on the ensemble size for posterior-approximation and sequential-optimization algorithms. Empirical evidence in [73] suggests that there is a sample size N * above which ensemble Kalman methods are effective. The seminal work [33] conducts insightful explicit calculations that motivate our more general theory. Following the analysis of ensemble Kalman methods in [33] and the study of importance sampling and particle filters in [1,78,80,9,84,4,83,23,85], we focus on analyzing a single ensemble Kalman update rather than on investigating the propagation of error across multiple updates. While in practice ensemble Kalman methods for posterior approximation in data assimilation and for sequential optimization in inverse problems often perform many updates, focusing on a single update suffices for the purpose of uncovering the prior assumptions that enable ensemble Kalman updates to be successful with a moderate ensemble size. Moreover, the focus on a single update allows us to tell apart, on accuracy grounds, perturbed observations and square root implementations of ensemble Kalman updates, as well as implementations with and without localization. Similar considerations motivate the study of sufficient sample size for importance sampling in [71,84,1,20,78,80], where the focus on a single update facilitates establishing clear comparisons between standard and optimal proposals, and identifying meaningful notions of dimension to characterize necessary and sufficient conditions on the required sample size. Our work builds on, and develops, tools from high-dimensional probability and statistics [98,97,7,64,21,16,17]. In particular, we bring to bear thresholded [7,16] and masked covariance estimators [64,21] to the understanding of localization in ensemble Kalman methods. In so doing, we establish new dimension-free covariance and cross-covariance estimation bounds under approximate sparsitysee Theorems 4.1 and 4.3.
Notation
Given non-negative a, b, the relation a b implies that there exists a universal constant c > 0 such that a ≤ cb. If both a b and b a, then we write a ≍ b. If the probability of an event D is close to 1, we write that D holds with high probability. For example, given a random variable X and confidence parameter δ X ∈ (0, 1), Markov's inequality implies that with probability 1 − δ X , X EX. Since δ X may be chosen arbitrarily small, we often condense this statement into: with high probability, X EX, see also Remark A.2. S d + denotes the set of d × d symmetric positivesemidefinite matrices, and S d ++ denotes the set of d × d symmetric positive-definite matrices. A † denotes the pseudo-inverse of A. 1 N denotes the N -dimensional vector with 1 in each coordinate, and 0 d denotes the d-dimensional vector of zeroes. 1 B denotes the indicator of the set B. ≡ denotes a definition. • denotes the matrix Hadamard or Schur (elementwise) product. Given a non-decreasing, non-zero convex function ψ : [0, ∞] → [0, ∞] with ψ(0) = 0, the Orlicz norm of a real random variable X is X ψ = inf{t > 0 : Eψ(t −1 |X|) ≤ 1}. In particular, let ψ p (x) ≡ e x p − 1 for p ≥ 1. Then, real random variables that satisfy X ψ2 < ∞ are referred to as sub-Gaussian. For a function g : R d → R k , Dg ∈ R d×k denotes the Jacobian of g.
Ensemble Kalman Updates
This section provides a detailed exposition of the ensemble Kalman updates we will analyze in the sequel. As alluded to earlier, all the methods we study have the same starting point of a prior ensemble
u 1 , . . . , u N i.i.d. ∼ N (m, C),
and observed data y generated according to (1.1), which are to be used in generating an updated ensemble {υ n } N n=1 . For the remainder, we denote the prior sample means by
m ≡ 1 N N n=1 u n , G ≡ 1 N N n=1 G(u n ),
and the prior sample covariances by
C ≡ 1 N − 1 N n=1 (u n − m)(u n − m) ⊤ , C pp ≡ 1 N − 1 N n=1 (G(u n ) − G)(G(u n ) − G) ⊤ , C up ≡ 1 N − 1 N n=1 (u n − m)(G(u n ) − G) ⊤ .
The population versions will be denoted by
C pp ≡ E G(u n ) − EG(u n ) G(u n ) − EG(u n ) ⊤ , C up ≡ E u n − m G(u n ) − EG(u n ) ⊤ .
Algorithms for posterior approximation are introduced in Subsection 2.1 and algorithms for sequential optimization in Subsection 2.2.
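These prior sample statistics are straightforward to compute from the ensemble, as in the sketch below; the forward map G, the prior, and all dimensions are toy placeholders.

import numpy as np

rng = np.random.default_rng(0)
d, k, N = 5, 3, 40                                   # toy dimensions and ensemble size
G = lambda u: np.tanh(u[:k])                         # placeholder nonlinear forward map R^d -> R^k

# Prior ensemble u_1, ..., u_N drawn i.i.d. from a toy Gaussian prior N(m, C).
m, C = np.zeros(d), np.eye(d)
U = rng.multivariate_normal(m, C, size=N)            # shape (N, d)
GU = np.array([G(u) for u in U])                     # shape (N, k)

m_hat, G_hat = U.mean(axis=0), GU.mean(axis=0)       # sample means
Du, Dp = U - m_hat, GU - G_hat                       # centred ensembles
C_hat = Du.T @ Du / (N - 1)                          # sample covariance of u
C_up  = Du.T @ Dp / (N - 1)                          # cross-covariance between u and G(u)
C_pp  = Dp.T @ Dp / (N - 1)                          # sample covariance of G(u)
print(C_hat.shape, C_up.shape, C_pp.shape)           # (d, d), (d, k), (k, k)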
Ensemble Algorithms for Posterior Approximation
In posterior-approximation algorithms we consider the inverse problem (1.1) with a linear forward model, i.e.
y = Au + η, η ∼ N (0, Γ). (2.1)
In order to establish comparisons between different posterior-approximation algorithms, as well as to streamline our analysis, we follow the exposition in [55] and introduce three operators that are central to the theory: the Kalman gain operator K, the mean-update operator M, and the covariance-update operator C, defined respectively by
$$\mathcal{K} : S_+^d \to \mathbb{R}^{d \times k}, \quad \mathcal{K}(C; A, \Gamma) = \mathcal{K}(C) = CA^\top (ACA^\top + \Gamma)^{-1}, \qquad (2.2)$$
$$\mathcal{M} : \mathbb{R}^d \times S_+^d \to \mathbb{R}^d, \quad \mathcal{M}(m, C; A, y) = \mathcal{M}(m, C) = m + \mathcal{K}(C)(y - Am), \qquad (2.3)$$
$$\mathcal{C} : S_+^d \to S_+^d, \quad \mathcal{C}(C; A) = \mathcal{C}(C) = \big(I - \mathcal{K}(C)A\big)C. \qquad (2.4)$$
The pointwise continuity and boundedness of all three operators was established in [55], and we summarize these results in Lemmas A.5, A.6, and A.7. We note that the Kalman update (1.2) can be rewritten succinctly as
$$\mu = \mathcal{M}(m, C), \qquad \Sigma = \mathcal{C}(C). \qquad (2.5)$$
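A direct NumPy transcription of the operators (2.2)-(2.4) might look as follows, with the Kalman update (2.5) obtained by evaluating them at the prior moments; the matrices below are toy inputs chosen only to exercise the functions.

import numpy as np

def K_op(C, A, Gamma):
    """Kalman gain operator (2.2)."""
    return C @ A.T @ np.linalg.inv(A @ C @ A.T + Gamma)

def M_op(m, C, A, Gamma, y):
    """Mean-update operator (2.3)."""
    return m + K_op(C, A, Gamma) @ (y - A @ m)

def C_op(C, A, Gamma):
    """Covariance-update operator (2.4)."""
    return (np.eye(C.shape[0]) - K_op(C, A, Gamma) @ A) @ C

# The Kalman update (2.5): mu = M(m, C), Sigma = C(C), evaluated on toy inputs.
rng = np.random.default_rng(1)
d, k = 4, 2
m, C = np.zeros(d), np.eye(d)
A, Gamma = rng.normal(size=(k, d)), 0.5 * np.eye(k)
y = rng.normal(size=k)
mu, Sigma = M_op(m, C, A, Gamma, y), C_op(C, A, Gamma)
print(mu.shape, Sigma.shape)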
We study two main classes of posterior-approximation algorithms based on Perturbed Observation (PO) and Square Root (SR) ensemble Kalman updates. In both implementations, the updated ensemble has sample mean $\hat\mu$ and sample covariance $\hat\Sigma$ that are, by design, consistent estimators of the posterior mean $\mu$ and covariance $\Sigma$ in (2.5). The requirement of consistency highlights the focus of the literature on asymptotic notions of the quality of these procedures. Although PO and SR updates are asymptotically equivalent, differences between the two algorithms do exist in finite ensembles, and this difference is captured in our non-asymptotic analysis in Section 3.
Perturbed Observation Update
The PO update, introduced in [28], transforms each particle of the prior ensemble according to
$$\upsilon_n = u_n + \mathcal{K}(\hat{C})\big(y - Au_n - \eta_n\big) = \mathcal{M}(u_n, \hat{C}) - \mathcal{K}(\hat{C})\eta_n, \qquad \eta_n \overset{\mathrm{i.i.d.}}{\sim} N(0, \Gamma), \quad 1 \le n \le N.$$
The form of the update is similar to the Kalman mean update (2.5) albeit with the n-th ensemble member being assigned a perturbed observation y − η n . Consequently, denoting the sample mean of the perturbations by η ≡ N −1 N n=1 η n , the updated ensemble has sample mean
$$\hat\mu \equiv \frac{1}{N}\sum_{n=1}^N \upsilon_n = \mathcal{M}(\hat{m}, \hat{C}) - \mathcal{K}(\hat{C})\bar\eta,$$
and sample covariance
$$\hat\Sigma \equiv \frac{1}{N-1}\sum_{n=1}^N \big(\upsilon_n - \hat\mu\big)\big(\upsilon_n - \hat\mu\big)^\top = \big(I - \mathcal{K}(\hat{C})A\big)\hat{C}\big(I - \mathcal{K}(\hat{C})A\big)^\top + \mathcal{K}(\hat{C})\hat\Gamma \mathcal{K}^\top(\hat{C}) - \big(I - \mathcal{K}(\hat{C})A\big)\hat{C}^{u\eta}\mathcal{K}^\top(\hat{C}) - \mathcal{K}(\hat{C})\big(\hat{C}^{u\eta}\big)^\top\big(I - A^\top \mathcal{K}^\top(\hat{C})\big), \qquad (2.6)$$
where $\hat\Gamma \equiv \frac{1}{N-1}\sum_{n=1}^N (\eta_n - \bar\eta)(\eta_n - \bar\eta)^\top$ and $\hat{C}^{u\eta} \equiv \frac{1}{N-1}\sum_{n=1}^N (u_n - \hat{m})(\eta_n - \bar\eta)^\top$.
The addition of perturbations serves the purpose of correcting the sample covariance, in the sense that without perturbations the sample covariance is an inconsistent estimator of Σ. To see this, note that
$$\mathbb{E}_\eta \hat\Sigma = \big(I - \mathcal{K}(\hat{C})A\big)\hat{C}\big(I - \mathcal{K}(\hat{C})A\big)^\top + \mathcal{K}(\hat{C})\Gamma \mathcal{K}^\top(\hat{C}) = \big(I - \mathcal{K}(\hat{C})A\big)\hat{C} = \mathcal{C}(\hat{C}), \qquad (2.7)$$
where $\mathbb{E}_{\eta}$ denotes expectation with respect to the perturbations. Further, by Lemma A.7 the map $C$ is continuous, and so the continuous mapping theorem together with the consistency of $\widehat{C}$ imply consistency of $\widehat{\Sigma}$. Foregoing the perturbations is known to result in the downward-biased estimator $C(\widehat{C})\big(I - K(\widehat{C})A\big)^{\top}$, which generally leads to poor performance of the algorithm. To facilitate comparison with the Kalman update in (2.5), we rewrite the PO update as follows:
$$\widehat{\mu} = M(\widehat{m}, \widehat{C}) - K(\widehat{C})\bar{\eta}, \qquad \widehat{\Sigma} = C(\widehat{C}) + O, \tag{2.8}$$
where the offset term O, obtained by subtracting (2.6) and (2.7), is given by
$$O = K(\widehat{C})\big(\widehat{\Gamma} - \Gamma\big)K^{\top}(\widehat{C}) - \big(I - K(\widehat{C})A\big)\widehat{C}^{u\eta}K^{\top}(\widehat{C}) - K(\widehat{C})\big(\widehat{C}^{u\eta}\big)^{\top}\big(I - A^{\top}K^{\top}(\widehat{C})\big). \tag{2.9}$$
The offset term O was introduced in [33, Proposition 4].
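As an illustration, here is a minimal sketch of one PO update, assuming the particles are stored as the columns of a d x N array; the helper name po_update is ours.

```python
# A minimal sketch of the perturbed-observation update: each particle is
# shifted with the sample Kalman gain and its own perturbed observation.
import numpy as np

def po_update(U, A, Gamma, y, rng=None):
    """U: d x N prior particles. Returns the d x N PO-updated ensemble."""
    rng = np.random.default_rng() if rng is None else rng
    d, N = U.shape
    C_hat = np.cov(U)                                      # unbiased d x d sample covariance
    K_hat = C_hat @ A.T @ np.linalg.inv(A @ C_hat @ A.T + Gamma)
    eta = rng.multivariate_normal(np.zeros(len(y)), Gamma, size=N).T   # k x N perturbations
    return U + K_hat @ (y[:, None] - A @ U - eta)
```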
Square Root Update
The PO update relies crucially on the added perturbations to maintain consistency and, as noted for example in [29,90,11], is asymptotically equivalent to the exact posterior update (1.2). However, for a finite ensemble of size N, the addition of random perturbations introduces an extra source of error into the ensemble Kalman update. The SR update, introduced in [28] and surveyed in [90,57], is a deterministic alternative to the PO update. It updates the prior ensemble in a manner that ensures that $\widehat{\Sigma} \equiv C(\widehat{C})$. This is achieved by first identifying a map $g : \mathbb{R}^{d\times N} \to \mathbb{R}^{d\times N}$ such that $\widehat{\Pi} = g(\widehat{P})$, where
$$\widehat{C} = \widehat{P}\widehat{P}^{\top}, \qquad C(\widehat{C}) = \widehat{\Pi}\widehat{\Pi}^{\top},$$
with both factorizations guaranteed to exist since $\widehat{C},\, C(\widehat{C}) \in S_+^d$. Consistency of $\widehat{\Sigma}$ can then be ensured by choosing $g$ to satisfy $g(\widehat{P})g(\widehat{P})^{\top} \equiv C(\widehat{C})$, which is referred to as the consistency condition in [57]. There are infinitely many such $g$, each of which leads to a variant of the SR update. Here we describe two of the most popular variants in the literature, as outlined in [90]: the Ensemble Transform Kalman update [11] and the Ensemble Adjustment Kalman update [2], with respective transformations $g_T(\widehat{P}) = \widehat{P}T$ and $g_A(\widehat{P}) = B\widehat{P}$ for matrices $T$ and $B$. Both $g_T$ and $g_A$ are therefore linear maps, with $g_T$ post-multiplying $\widehat{P}$, which implies a transformation on the $N$-dimensional space spanned by the ensemble, and $g_A$ pre-multiplying $\widehat{P}$, so that the transformation is applied to the $d$-dimensional state space instead. In both approaches we identify the relevant matrix by first writing
$$\widehat{\Pi}\widehat{\Pi}^{\top} = C(\widehat{C}) = \widehat{P}\big(I - VD^{-1}V^{\top}\big)\widehat{P}^{\top}, \qquad \text{where } V = (A\widehat{P})^{\top} \text{ and } D = V^{\top}V + \Gamma.$$
1. Ensemble Transform Kalman Update: taking $\widehat{\Pi} = \widehat{P}FU$ for any $F$ satisfying $FF^{\top} = I - VD^{-1}V^{\top}$ and arbitrary orthogonal $U$ satisfies the consistency condition. One approach for finding such a matrix $F$ is by rewriting
$$I - VD^{-1}V^{\top} = \big(I + \widehat{P}^{\top}A^{\top}\Gamma^{-1}A\widehat{P}\big)^{-1} = E(I+\Lambda)^{-1/2}(I+\Lambda)^{-1/2}E^{\top} = FF^{\top},$$
where the first equality follows by the Sherman-Morrison formula, and $E\Lambda E^{\top}$ is the eigenvalue decomposition of $\widehat{P}^{\top}A^{\top}\Gamma^{-1}A\widehat{P}$. In summary, we have $g_T(\widehat{P}) = \widehat{P}E(I+\Lambda)^{-1/2}U$.
2. Ensemble Adjustment Kalman Update: introducing $M = V\Gamma^{-1/2}$, we can write
$$\widehat{P}\big(I - VD^{-1}V^{\top}\big)\widehat{P}^{\top} = \widehat{P}\big(I + MM^{\top}\big)^{-1}\widehat{P}^{\top}.$$
Noting that $\widehat{P}$ has full column rank, we may then define $B = \widehat{P}\big(I + MM^{\top}\big)^{-1/2}\big(\widehat{P}^{\top}\big)^{\dagger}$, and so
$$g_A(\widehat{P}) = B\widehat{P} = \widehat{P}\big(I + MM^{\top}\big)^{-1/2}\big(\widehat{P}^{\top}\big)^{\dagger}\widehat{P} = \widehat{P}\big(I + MM^{\top}\big)^{-1/2}.$$
Once a choice of g has been made, and an estimate Σ has been computed, the updated ensemble has first two moments given by
$$\widehat{\mu} = M(\widehat{m}, \widehat{C}), \qquad \widehat{\Sigma} = C(\widehat{C}). \tag{2.10}$$
Frequently, only $\widehat{\mu}$ and $\widehat{\Sigma}$ are of concern to the practitioner, but it is still possible to back out the individual members of the updated ensemble, as they may be of interest. It is clear that one choice for $\widehat{P}$ is
$$\widehat{P} = \frac{1}{\sqrt{N-1}}\big[u_1 - \widehat{m}, \; \cdots, \; u_N - \widehat{m}\big],$$
in which case it holds that $\widehat{P}\mathbf{1}_N = 0_d$, and so
$$\upsilon_n = \sqrt{N-1}\,[\widehat{\Pi}]_n + M(\widehat{m}, \widehat{C}), \qquad 1 \le n \le N, \tag{2.11}$$
where [ Π] n denotes the n-th column of Π. In Subsection 3.2 we establish error bounds for the approximation of the posterior mean and covariance (µ, Σ) in (1.2) by ( µ, Σ) as estimated using the PO and SR updates in (2.8) and (2.10). It is clear from (2.10) that as long as the choice of g is valid, in the sense that the resulting Σ is consistent, then the specific choice of g is irrelevant to the accuracy of a single SR update. We therefore make no assumptions in our subsequent analysis of the SR algorithm beyond that of g satisfying the consistency condition. Note that, when compared to the SR update in (2.10), the PO update in (2.8) contains additional stochastic terms that will, as our bounds indicate, hinder the estimation of (µ, Σ). As noted in the literature, for example in [90], the PO update increases the probability of underestimating the analysis error covariance. While our presentation and analysis of PO and SR updates is carried out in the linear-Gaussian setting, both updates are frequently utilized in nonlinear and non-Gaussian settings, with empirical evidence suggesting that the PO updates can outperform SR updates [60,63]. In fact, the consistency argument outlined above is only valid in the linear case G(u) = Au, and the statistical advantage of SR implementations in linear settings may not translate into the nonlinear case.
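The following minimal sketch implements the ensemble transform variant, taking the orthogonal factor $U = E^{\top}$ (a common symmetric choice, which keeps the transformed anomalies centered); the function name etkf_update is ours.

```python
# A minimal sketch of the square root (ensemble transform) update of
# Subsection 2.1.2. The arbitrary orthogonal factor is taken as U = E^T.
import numpy as np

def etkf_update(U_ens, A, Gamma, y):
    """U_ens: d x N prior particles. Returns the d x N deterministically updated ensemble."""
    d, N = U_ens.shape
    m_hat = U_ens.mean(axis=1, keepdims=True)
    P_hat = (U_ens - m_hat) / np.sqrt(N - 1)               # so that C_hat = P_hat @ P_hat.T
    C_hat = P_hat @ P_hat.T
    K_hat = C_hat @ A.T @ np.linalg.inv(A @ C_hat @ A.T + Gamma)
    mu_hat = m_hat + K_hat @ (y[:, None] - A @ m_hat)      # M(m_hat, C_hat)
    S = P_hat.T @ A.T @ np.linalg.solve(Gamma, A @ P_hat)  # N x N matrix P^T A^T Gamma^{-1} A P
    lam, E = np.linalg.eigh(S)
    Pi_hat = P_hat @ E @ np.diag(1.0 / np.sqrt(1.0 + lam)) @ E.T
    return np.sqrt(N - 1) * Pi_hat + mu_hat                # recovers the members as in (2.11)
```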
Ensemble Algorithms for Sequential Optimization
In the optimization approach, the solution to the inverse problem (1.1) is found by minimizing an objective function. As discussed in [19], an entire suite of ensemble algorithms has been derived, differing in the choice of objective function and optimization scheme. In this subsection we introduce the Ensemble Kalman Inversion (EKI) algorithm [48] and a new localized implementation of EKI, which we call localized EKI (LEKI) following [93]. Both EKI and LEKI use an ensemble approximation of a Levenberg-Marquardt (LM) optimization scheme to minimize a data-misfit objective
$$\mathrm{J}(u) = \frac{1}{2}\big\|\Gamma^{-1/2}\big(y - G(u)\big)\big\|_2^2, \tag{2.12}$$
which promotes fitting the data y. Before deriving EKI in Subsection 2.2.1 and LEKI in Subsection 2.2.2, we give some background that will help us interpret both methods as ensemble-based implementations of classical gradient-based LM schemes. The finite ensemble approximation of an idealized mean-field EKI update using EKI and LEKI updates will be studied in Section 4. Recall that classical iterative optimization algorithms choose an initialization u (0) and set
$$u^{(t+1)} = u^{(t)} + w^{(t)}, \qquad t = 0, 1, \ldots, \tag{2.13}$$
until a pre-specified convergence criterion is met. Here, w (t) is some favorable direction determined by the optimization algorithm at iteration t, given the current estimate u (t) . In the case that the inverse problem is ill-posed, directly minimizing (2.12) leads to a solution that over-fits the data. Then, implicit regularization can be achieved through the optimization scheme used to obtain the update w (t) . Under the assumption that r(u) ≡ y − G(u) is differentiable, the Levenberg-Marquardt (LM) algorithm chooses w (t) by solving the constrained minimization problem
$$\min_{w}\, \mathrm{J}_t^{\mathrm{lin}}(w) \quad \text{subject to} \quad \big\|C^{-1/2}w\big\|_2^2 \le \delta_l, \qquad \text{where} \quad \mathrm{J}_t^{\mathrm{lin}}(w) \equiv \frac{1}{2}\big\|Dr(u^{(t)})w + r(u^{(t)})\big\|_2^2,$$
and Dr denotes the Jacobian of r. The LM algorithm belongs to the class of trust region optimization methods, and it chooses each increment to minimize a linearized objective, J lin t , but with the added constraint that the minimizer belongs to the ball { C −1/2 w 2 ≤ δ l }, in which we trust that the objective may be replaced by its linearization. Equivalently, w (t) can be viewed as the unconstrained minimizer of a regularized objective,
$$\min_{w}\, \mathrm{J}_t^{U}(w), \qquad \mathrm{J}_t^{U}(w) \equiv \mathrm{J}_t^{\mathrm{lin}}(w) + \frac{1}{2\alpha_t}\big\|C^{-1/2}w\big\|_2^2, \tag{2.14}$$
where α t > 0 acts as a Lagrange multiplier. We are interested in ensemble sequential-optimization algorithms, which instead of updating a single estimate u (t) -as in (2.13)-propagate an ensemble of estimates. Ensemble-based optimization schemes often rely on statistical linearization to avoid the computation of derivatives. Underpinning this idea [95,19,52] is the argument that if G(u) = Au were linear, then C up = CA ⊤ , leading to the approximation in the general nonlinear case
$$DG(u_n) \approx \big(\widehat{C}^{up}\big)^{\top}\widehat{C}^{\dagger} \equiv \mathsf{G}. \tag{2.15}$$
This approximation motivates the derivative-free label often attached to ensemble-based algorithms [54], and we note that they may be employed whenever computing DG(u) is expensive or when G is not differentiable. For the remainder, our analysis focuses on a single step of EKI and LEKI, and so we drop the iteration index t from our notation; we will use instead our previous terminology of prior ensemble and updated ensemble. Finally, similar to our presentation of posterior-approximation algorithms, our exposition is simplified by introducing the nonlinear gain-update operator P,
$$P : \mathbb{R}^{d\times k} \times S_+^k \to \mathbb{R}^{d\times k}, \qquad P(C^{up}, C^{pp}; \Gamma) = P(C^{up}, C^{pp}) = C^{up}\big(C^{pp} + \Gamma\big)^{-1}, \tag{2.16}$$
which is shown to be both pointwise continuous and bounded in Lemma A.8.
Ensemble Kalman Inversion Update
In the EKI, each particle in the prior ensemble is updated according to the LM algorithm, so that
υ n = u n + w n , 1 ≤ n ≤ N,
where $w_n$ is the minimizer of a linearized and regularized data-misfit objective
$$\mathrm{J}_n^{\mathrm{lin}}(w) = \frac{1}{2}\big\|\Gamma^{-1/2}\big(y - \eta_n - G(u_n) - \mathsf{G}w\big)\big\|_2^2 + \frac{1}{2\alpha}\big\|\widehat{C}^{-1/2}w\big\|_2^2, \qquad \eta_n \sim N(0, \Gamma). \tag{2.17}$$
Following [48], we henceforth set α = 1, but note that our main results can be readily extended to any α > 0. Note that each ensemble member solves the optimization (2.17) with a perturbed observation y − η n , similar in spirit to the PO update of Section 2.1.1. The minimizer of (2.17) (with α = 1) is given by
$$w_n = \widehat{C}\mathsf{G}^{\top}\big(\mathsf{G}\widehat{C}\mathsf{G}^{\top} + \Gamma\big)^{-1}\big(y - \eta_n - G(u_n)\big).$$
Substituting $\widehat{C}\mathsf{G}^{\top} = \widehat{C}^{up}$, and approximating
$$\mathsf{G}\widehat{C}\mathsf{G}^{\top} = \mathsf{G}\widehat{C}^{up} = \big(\widehat{C}^{up}\big)^{\top}\widehat{C}^{\dagger}\widehat{C}^{up} \approx \widehat{C}^{pp},$$
leads to the EKI update
$$\upsilon_n = u_n + P\big(\widehat{C}^{up}, \widehat{C}^{pp}\big)\big(y - G(u_n) - \eta_n\big), \qquad 1 \le n \le N. \tag{2.18}$$
In the linear forward-model setting, P( C up , C pp ) = K( C), and (2.18) takes on a form identical to the PO update in (2.8). We further define the mean-field EKI update
$$\upsilon_n^{*} = u_n + P\big(C^{up}, C^{pp}\big)\big(y - G(u_n) - \eta_n\big), \qquad 1 \le n \le N, \tag{2.19}$$
which is the update that would be performed if one has access to the population quantities C up and C pp . Equivalently, the mean-field EKI update could be performed if one had access to an infinite ensemble. The motivation to study these idealized updates stems from a general analysis technique adopted in the literature for studying ensemble Kalman updates. For example, in [25] the authors analyze the EKI algorithm in the linear forward model and continuous time setting by (1) showing that in the mean-field limit (i.e. as N → ∞), the empirical distribution of the ensemble is approximately the solution to a Fokker-Planck equation; and (2) analyzing the asymptotic behaviour of the EKI by leveraging the existing literature for the analysis of the Fokker-Planck equation. In our work we study the quality of the mean-field assumption for the first step of the EKI, that is, we quantify the deviation between υ n and υ * n for a finite ensemble of size N .
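For reference, here is a minimal sketch of one EKI step (2.18) with alpha = 1: only forward evaluations and sample (cross-)covariances are used, so no derivatives of G appear. The function name eki_step is ours.

```python
# A minimal sketch of one derivative-free EKI step for a generic forward map G.
import numpy as np

def eki_step(U, G, Gamma, y, rng=None):
    """U: d x N prior particles; G maps a length-d vector to a length-k vector."""
    rng = np.random.default_rng() if rng is None else rng
    d, N = U.shape
    GU = np.column_stack([G(U[:, n]) for n in range(N)])   # k x N forward evaluations
    du = U - U.mean(axis=1, keepdims=True)
    dg = GU - GU.mean(axis=1, keepdims=True)
    C_up = du @ dg.T / (N - 1)                             # d x k sample cross-covariance
    C_pp = dg @ dg.T / (N - 1)                             # k x k sample output covariance
    gain = C_up @ np.linalg.inv(C_pp + Gamma)              # nonlinear gain P(C_up, C_pp)
    eta = rng.multivariate_normal(np.zeros(len(y)), Gamma, size=N).T
    return U + gain @ (y[:, None] - GU - eta)
```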
Localized Ensemble Kalman Inversion Update
In practice, ensemble-based algorithms are often implemented with N ≪ d, that is, with an ensemble that is much smaller than the state dimension. In this setting, the update is augmented with an additional localization procedure applied to C in the case of linear forward model, and to both C pp and C up in the case of a nonlinear forward model. In either case, localization is seen as an approach to deal both with the extreme rank deficiency and the sampling error that arise from using an ensemble that is significantly smaller than the dimension of the state and/or the dimension of the observation, see for example [45,46,32]. Localization is also useful when the state u, or the transformed state G(u), has elements E(i) and E(j) that represent the values of a variable of interest at physical locations that are a known distance d(i, j) apart: correlations may decay quickly with the physical distance of the variables and localization may help to remove spurious correlations in the sample covariance estimator. In ensemble Kalman methods, localization has most commonly been carried out via the Schur (elementwise) product of the estimator and a positive-semidefinite matrix M of equal dimension. In the vast majority of cases, the elements of M are taken to be
$$M_{ij} = \kappa\big(d(i,j)/b\big),$$
where $\kappa$ is a locally supported correlation function, usually the Gaspari-Cohn 5th-order compact piecewise polynomial [35], and $b > 0$ is a length-scale parameter chosen by the practitioner. Since $\kappa$ tapers off to zero as its argument becomes larger, i.e. when the underlying variables are further apart, the Schur-product operation zeroes out the corresponding elements of the estimator, and the rate at which this tapering occurs is controlled by the size of the length-scale. The localized EKI (LEKI), recently studied in [93], replaces both $\widehat{C}^{pp}$ and $\widehat{C}^{up}$ with their localized counterparts, $M_1 \circ \widehat{C}^{pp}$ and $M_2 \circ \widehat{C}^{up}$, where $M_1$ and $M_2$ are localization matrices of appropriate dimension. Two important issues have, in our opinion, hindered the rigorous study of localized ensemble algorithms, and we highlight these next before moving on to introduce our localization framework.
1. Optimality: The justification outlined above for localization in the ensemble Kalman literature has been largely heuristic, and relying on these arguments alone one cannot hope to define a localization procedure that is demonstrably optimal. Notably, the widespread usage of the Gaspari-Cohn correlation function is not rooted in any sense of optimality. Generally, focusing solely on a band of entries near the diagonal is a sub-optimal approach to covariance estimation, as noted in the high-dimensional covariance estimation literature, see for example [21,64,8]. Moreover, even in cases where focusing on elements near the diagonal is justified, for example by assuming that the underlying target is a banded matrix, the bandwidth b > 0 must be chosen carefully as a function of the ensemble size, problem dimension, and dependence structure [7]. This type of analysis has, to the best of our knowledge, not been carried out for the Gaspari-Cohn localization scheme. An important message in the covariance estimation literature is that localization -regardless of how it is employed-can only be optimal if the target of estimation itself is sparse, and such sparsity assumptions must be made explicit in order to facilitate a rigorous mathematical analysis of the procedure. The difficulty of optimal localization in ensemble updates has also been highlighted in [33], where the authors derive an optimal localization matrix M under the unrealistic assumption that C is a diagonal matrix.
2. Schur-Product Approximations: In the literature on ensemble Kalman methods, a consensus has not been reached on how best to apply localization in practice. The issue here can be sufficiently described by deferring to the linear forward-model setting, i.e. G(u) = Au, in which the Kalman Gain is a central quantity. As mentioned for example in [45], in a localized update, the Kalman gain operator should in theory be applied to M • C, i.e. one should study the quantity
$$K(M \circ \widehat{C}) = (M \circ \widehat{C})A^{\top}\big(A(M \circ \widehat{C})A^{\top} + \Gamma\big)^{-1},$$
although their experimental results are based on the more computationally convenient approximation
$$K(M \circ \widehat{C}) \approx \big(M \circ (\widehat{C}A^{\top})\big)\big(M \circ (A\widehat{C}A^{\top}) + \Gamma\big)^{-1}, \tag{2.20}$$
which, as they mention, is a reasonable approximation in the case that $A$ is diagonal. Subsequently, much of the literature on localization in ensemble Kalman updates has adopted this or similar approximations, as discussed in greater depth in [74, Section 3.3]. In general, however, approximations made on the Schur product are difficult to justify without strong assumptions on the forward model $G$; the two variants are contrasted in the sketch following this list.
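To make the distinction concrete, here is a small sketch contrasting the exact localized gain with the approximation (2.20). The triangular taper below is only a stand-in for the Gaspari-Cohn function, which we do not reproduce, and the taper matrices M, M1, M2 are assumed to be of the appropriate dimensions.

```python
# A sketch of Schur-product localization of the Kalman gain: tapering C_hat
# before forming the gain (exact) versus tapering the two products separately
# as in approximation (2.20).
import numpy as np

def taper_matrix(d, b):
    """M_ij = kappa(|i - j| / b) with a simple triangular taper kappa(r) = max(1 - r, 0)."""
    idx = np.arange(d)
    return np.maximum(1.0 - np.abs(idx[:, None] - idx[None, :]) / b, 0.0)

def localized_gain_exact(C_hat, M, A, Gamma):
    CL = M * C_hat                                         # Schur product applied to C_hat
    return CL @ A.T @ np.linalg.inv(A @ CL @ A.T + Gamma)

def localized_gain_approx(C_hat, M1, M2, A, Gamma):
    return (M1 * (C_hat @ A.T)) @ np.linalg.inv(M2 * (A @ C_hat @ A.T) + Gamma)
```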
With these issues in mind, we opt to study an alternative, data-driven approach to localization often employed in the high-dimensional covariance estimation literature [7,17,18], where it is referred to as thresholding. We ground our analysis in the assumption that the target of estimation belongs to the following soft sparsity matrix class:
$$\mathcal{U}_{d_1, d_2}(q, R_q) \equiv \Big\{B \in \mathbb{R}^{d_1\times d_2} : \max_{i \le d_1}\sum_{j=1}^{d_2}|B_{ij}|^q \le R_q\Big\}, \tag{2.21}$$
where $q \in [0,1)$ and $R_q > 0$, and write $\mathcal{U}_d(q, R_q)$ in the case $d_1 = d_2 = d$. In the special case $q = 0$, matrices in $\mathcal{U}_{d_1,d_2}(0, R_0)$ possess rows that have no more than $R_0$ non-zero entries, which is the classical hard-sparsity constraint. In contrast, for $q \in (0,1)$, the class $\mathcal{U}_{d_1,d_2}(q, R_q)$ contains matrices with rows belonging to the $\ell_q$ ball of radius $R_q^{1/q}$. This includes matrices with rows that contain possibly many non-zero entries so long as their magnitudes decay sufficiently rapidly, and so is often referred to as a soft-sparsity constraint. Importantly, the class $\mathcal{U}_d(q, R_q)$ is sufficiently rich to capture the motivating intuition that correlations decay with physical distance, in a rigorous manner that avoids the optimality issues mentioned above. Structured covariance matrices, such as those belonging to $\mathcal{U}_{d_1,d_2}(q, R_q)$, are optimally estimated using localized versions of their sample covariances. To this end, we study the localized matrix estimator $\widehat{B}_{\rho_N} \equiv L_{\rho_N}(\widehat{B})$, where $L_{\rho_N}(u) = u\,\mathbf{1}\{|u| \ge \rho_N\}$ is a localization operator with localization radius $\rho_N$, applied elementwise to its argument $\widehat{B}$. In Section 4 we detail how the localization radius $\rho_N$ can be chosen optimally in terms of the parameters of the inverse problem (1.1) and the ensemble size $N$.
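A one-line sketch of this thresholding operator, applied elementwise to a sample covariance or cross-covariance, is given below; the name localize is ours.

```python
# A minimal sketch of the thresholding localization L_rho used in Section 4.
import numpy as np

def localize(B_hat, rho):
    """Keep entries of B_hat with magnitude at least rho; zero out the rest."""
    return np.where(np.abs(B_hat) >= rho, B_hat, 0.0)
```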
Throughout our analysis, we refrain from using approximations such as the one outlined in (2.20); that is, our analysis of localization replaces all non-localized quantities in the original update (2.18) with their localized counterparts. We introduce the LEKI update:
$$\upsilon_n^{\rho} = u_n + P\big(\widehat{C}^{up}_{\rho_{N,1}}, \widehat{C}^{pp}_{\rho_{N,2}}\big)\big(y - G(u_n) - \eta_n\big), \qquad 1 \le n \le N, \tag{2.22}$$
where $\rho_{N,1}$ and $\rho_{N,2}$ are two, potentially different, localization radii. As in the non-localized case, in Section 4 we provide finite sample bounds on the deviation of the LEKI update from the mean-field update of (2.19), and describe in detail how the additional structure imposed on $C^{up}$ and $C^{pp}$ leads to improved bounds relative to the non-localized setting. An important issue that warrants discussion is that of positive-semidefiniteness of the estimator $\widehat{B}_{\rho_N}$ when the target $B$ is a square covariance matrix. In the case of the Schur-product estimator, any localization matrix $M$ derived from a valid correlation function $\kappa$ is guaranteed to be positive-semidefinite by definition [35], and so by the Schur-product Theorem [42, Theorem 7.5.3] the estimator $M \circ \widehat{B}$ is positive-semidefinite as well. In contrast, the localization operator $L_{\rho_N}$ thresholds the sample covariance $\widehat{B}$ elementwise and does not in general preserve positive-semidefiniteness. As discussed in [26,18], $\widehat{B}_{\rho_N}$ is positive-semidefinite with high probability, but in practice one may opt to use an augmented estimator that guarantees positive-semidefiniteness. We describe this estimator here for completeness: let $\widehat{B}_{\rho_N} = \sum_{j=1}^{d}\widehat{\lambda}_j\widehat{v}_j\widehat{v}_j^{\top}$ be the eigen-decomposition of $\widehat{B}_{\rho_N}$, so that $\widehat{\lambda}_j, \widehat{v}_j$ are the $j$-th eigenvalue and eigenvector of $\widehat{B}_{\rho_N}$. Consider then the positive-part estimator
$$\widehat{B}^{+}_{\rho_N} \equiv \sum_{j=1}^{d}(0 \vee \widehat{\lambda}_j)\,\widehat{v}_j\widehat{v}_j^{\top}.$$
Clearly then, B + ρN is positive-semidefinite, and furthermore it achieves the same rate as B ρN since
$$\big\|\widehat{B}^{+}_{\rho_N} - B\big\|_{\mathrm{op}} \le \big\|\widehat{B}^{+}_{\rho_N} - \widehat{B}_{\rho_N}\big\|_{\mathrm{op}} + \big\|\widehat{B}_{\rho_N} - B\big\|_{\mathrm{op}} \le \max_{j:\,\widehat{\lambda}_j < 0}|\widehat{\lambda}_j - \lambda_j| + \big\|\widehat{B}_{\rho_N} - B\big\|_{\mathrm{op}} \le 2\big\|\widehat{B}_{\rho_N} - B\big\|_{\mathrm{op}},$$
where $\lambda_j$ is the $j$-th eigenvalue of $B$.
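The positive-part correction admits an equally short sketch: diagonalize the (possibly indefinite) localized estimator and zero out its negative eigenvalues.

```python
# A minimal sketch of the positive-part estimator described above.
import numpy as np

def positive_part(B_loc):
    """B_loc: symmetric localized estimator. Returns the PSD positive-part estimator."""
    lam, V = np.linalg.eigh(B_loc)
    return (V * np.maximum(lam, 0.0)) @ V.T                # V diag(max(lam, 0)) V^T
```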
Non-Asymptotic Analysis: Posterior Approximation
This section contains our main results on posterior approximation with finite ensemble Kalman updates. Subsection 3.1 overviews the non-asymptotic covariance estimation theory that underpins our main contributions, which are described in Subsection 3.2.
Dimension-Free Covariance Estimation
We define the effective dimension [98] of a matrix Q ∈ S d + by
$$r_2(Q) \equiv \frac{\operatorname{Tr}(Q)}{\|Q\|_{\mathrm{op}}}. \tag{3.1}$$
The effective dimension quantifies the number of directions where Q has significant spectral content [94]. The monographs [94,97] refer to r 2 (Q) as the intrinsic dimension, while [53] uses the term effective rank. This terminology is motivated by the observation that 1 ≤ r 2 (Q) ≤ rank(Q) ≤ d and that r 2 (Q) is insensitive to changes in the scale of Q, see [94]. In situations where the eigenvalues of Q decay rapidly, r 2 (Q) is a better measure of dimension than the state dimension d.
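A tiny numerical illustration, with an assumed polynomially decaying spectrum, shows how much smaller $r_2$ can be than the nominal dimension:

```python
# Effective dimension r_2(Q) = Tr(Q) / ||Q||_op for a covariance with
# eigenvalues 1, 1/4, 1/9, ...; r_2 stays O(1) while d grows.
import numpy as np

def effective_dim(Q):
    return np.trace(Q) / np.linalg.norm(Q, 2)              # spectral norm in the denominator

d = 1000
Q = np.diag(1.0 / np.arange(1, d + 1) ** 2)
print(effective_dim(Q))                                    # about pi^2 / 6, versus d = 1000
```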
The following result [53,Theorem 4] gives a non-asymptotic sufficient sample size requirement for accurate covariance estimation in terms of the effective dimension of the covariance matrix.
Proposition 3.1 (Covariance Estimation with Sample Covariance - Unstructured Case). Let $u_1, \ldots, u_N$ be $d$-dimensional i.i.d. sub-Gaussian random vectors with $\mathbb{E}(u_1) = m$ and $\operatorname{var}(u_1) = C$. Then it holds with high probability that
$$\big\|\widehat{C} - C\big\|_{\mathrm{op}} \lesssim \|C\|_{\mathrm{op}}\left(\sqrt{\frac{r_2(C)}{N}} \vee \frac{r_2(C)}{N}\right).$$
As in the definition for matrices, $r_2(C)$ quantifies the number of directions where the distribution of $u$ has significant spread. Proposition 3.1 and our results in Subsection 3.2 may be extended to sub-Gaussian random variables defined in an infinite-dimensional separable Hilbert space, say $H = L^2(0,1)$. It is then illustrative to note that any Gaussian measure $N(m, C)$ in $H$ satisfies $\operatorname{Tr}(C) < \infty$; in other words, all Gaussian measures have finite effective dimension. In this context, $r_2(C)$ is related to the rate of decay of the eigenvalues of $C$, and hence to the almost sure Sobolev regularity of functions $u$ drawn from the Gaussian measure $N(m, C)$ on $H = L^2(0,1)$, see e.g. [14,87]. In computational inverse problems and data assimilation, $u$ is often a $d$-dimensional vector that represents a fine discretization of a Gaussian random field; then, $r_2(C)$ quantifies the smoothness of the undiscretized field.
Main Results: Posterior Approximation with Finite Ensemble
In this subsection we state finite ensemble approximation results for the posterior mean and covariance with PO and SR ensemble updates. To highlight some key insights, including the dependence of the bounds on the effective dimension of C and the differences between PO and SR updates, we present streamlined statements in Theorems 3.3 and 3.5. More general versions of these results, along with their proofs, can be found in Appendix A.3. Throughout this section, the data y is treated as a fixed quantity, and high probability statements are made with respect to the distribution of the prior ensemble, P u , see also Remark A.2.
Theorem 3.3 (Posterior Mean Approximation (Streamlined)). Consider the linear setting (2.1), and assume for simplicity that $N \ge r_2(C)$ and that $\|C\|_{\mathrm{op}} = 1$. Let $\widehat{\mu}$ be the sample mean of the ensemble obtained with the PO update (2.8) (in which case $\phi = 1$) or the SR update (2.10) (in which case $\phi = 0$). Then it holds with high probability that
$$\|\widehat{\mu} - \mu\|_2 \lesssim c_1\sqrt{\frac{r_2(C)}{N}} + \phi\,\frac{c_2}{\sqrt{N}}, \tag{3.2}$$
where $c_1 = c_1\big(\|A\|_{\mathrm{op}}, \|\Gamma^{-1}\|_{\mathrm{op}}, \|y - Am\|_2\big)$ and $c_2 = c_2\big(\|A\|_{\mathrm{op}}, \|\Gamma^{-1}\|_{\mathrm{op}}, r_2(\Gamma)\big)$.
Importantly, the bound (3.2) does not depend on the dimension d of the state-space, and the only dependence on C is through the effective dimension r 2 (C). The term c 2 / √ N in the PO update accounts for the additional error incurred by the presence of the offset term (2.9) in the PO update (2.8).
Remark 3.4 (Dependence of Constants on Model Parameters). Theorem A.9 in Appendix A gives a more refined statement of Theorem 3.3 with explicit expressions for the dependence of c 1 and c 2 on A and Γ. In particular, it is important to note that the constants c 1 and c 2 in Theorem 3.3 deteriorate in the small noise limit where the observation noise goes to zero, and c 2 deteriorates with r 2 (Γ). In the small noise limit, the posterior and prior distribution become mutually singular, and it is hence expected for ensemble updates to be unstable. To illustrate this intuition in a concrete setting, assume that Γ = γI for a positive constant γ, and, for simplicity, that A op = y − Am 2 = 1. Then, the expression for c 1 established in Theorem A.9 implies that for the SR update, for any error ε > 0,
$$N \gtrsim \frac{r_2(C)}{\varepsilon^2\gamma^4} \;\Longrightarrow\; \|\widehat{\mu} - \mu\|_2 \lesssim \varepsilon,$$
with high probability. Similarly, the expressions for c 1 and c 2 imply that for the PO update,
$$N \gtrsim \frac{r_2(C)}{\varepsilon^2\gamma^4} \vee \frac{k}{\gamma\varepsilon^2} \;\Longrightarrow\; \|\widehat{\mu} - \mu\|_2 \lesssim \varepsilon.$$
The papers [1,80] show the need to increase the sample size along small noise limits in importance sampling when target and proposal are given, respectively, by posterior and prior. While our bounds here only give sufficient rather than necessary conditions on the ensemble size, it is noteworthy that, for fixed k, the scaling of N as γ → 0 shown here is independent of k. In contrast, necessary sample size conditions for importance sampling show a polynomial dependence on k, see [80].
Theorem 3.5 (Posterior Covariance Approximation (Streamlined)). Consider the linear setting (2.1), and assume for simplicity that $N \ge r_2(C)$ and that $\|C\|_{\mathrm{op}} = 1$. Let $\widehat{\Sigma}$ be the sample covariance of the ensemble obtained with the PO update (2.8) (in which case $\phi = 1$) or the SR update (2.10) (in which case $\phi = 0$). Then it holds with high probability that
$$\big\|\widehat{\Sigma} - \Sigma\big\|_{\mathrm{op}} \lesssim c_1\sqrt{\frac{r_2(C)}{N}} + \phi\left(\frac{c_2}{\sqrt{N}} + c_3\sqrt{\frac{r_2(C)}{N}}\right), \tag{3.3}$$
where $c_1 = c_1\big(\|A\|_{\mathrm{op}}, \|\Gamma^{-1}\|_{\mathrm{op}}\big)$, $c_2 = c_2\big(\|A\|_{\mathrm{op}}, \|\Gamma^{-1}\|_{\mathrm{op}}, r_2(\Gamma)\big)$, and $c_3 = c_3\big(\|A\|_{\mathrm{op}}, \|\Gamma^{-1}\|_{\mathrm{op}}, \|\Gamma\|_{\mathrm{op}}\big)$.
As in Theorem 3.3, the bound in Theorem 3.5 does not depend on the dimension d of the state-space, and the only dependence on C is through the effective dimension r 2 (C).
Non-Asymptotic Analysis: Sequential Optimization
This section contains our main results on approximation of the idealized mean-field EKI update with a finite ensemble size. Our non-asymptotic analysis of EKI relies on Proposition 3.1 for covariance estimation in an unstructured setting, and our analysis of LEKI relies on new non-asymptotic covariance and cross-covariance estimation bounds under approximate sparsity. These latter bounds are introduced in Subsection 4.1 before describing the main results on EKI and LEKI in Subsection 4.2. Throughout this subsection, the data y and N perturbations η 1 , . . . , η N are treated as fixed quantities, and high probability statements are made with respect to the distribution of the prior ensemble, P u , see also Remark A.2.
Dimension-Free Covariance Estimation Under Soft Sparsity
For the covariance estimation problem, imposing additional structure on the target of estimation allows for a substantial improvement in the obtainable rate of estimation relative to the unstructured setting. For example, [98,Chapter 6.5] notes that in the finite d-dimensional setting, the structured problem has estimation error of the order log d N ∨ log d N , as opposed to the d N ∨ d N order in the unstructured setting. The effective dimension defined in (3.1) refines the rate in the unstructured setting to a dimension-free quantity that incorporates spectral information of the matrix. We introduce an analogous notion of effective dimension that is more appropriate for the structured covariance estimation problem, which we term the max-log effective dimension and which, for Q ∈ S d + , is given by
$$r_{\infty}(Q) \equiv \max_{j \le d}\frac{Q_{(j)}\log(j+1)}{Q_{(1)}}, \qquad \text{where } Q_{(1)} \ge Q_{(2)} \ge \cdots \ge Q_{(d)}$$
is the decreasing rearrangement of the diagonal entries of Q. To the best of our knowledge, this notion of dimension has not been previously considered in the literature, and, as will be shown, refines the rate of estimation in the structured covariance estimation problem by incorporating spectral information of the underlying matrix, albeit in a different way to (3.1).
In particular, $r_\infty(Q)$ is small whenever the ordered diagonal entries $Q_{(1)}, Q_{(2)}, \ldots$ decay faster than $1/\log(j+1)$. We use the subscript $\infty$ to highlight that the quantity $r_\infty$ is related to the dimension-free sub-Gaussian maxima result of Lemma B.6. Similarly, we use the subscript $2$ to draw the connection between $r_2$ and the sub-Gaussian 2-norm concentration of Theorem A.1. Importantly, bounds based on $r_\infty$ will be dimension-free, in the sense that they exhibit no dependence on the state dimension $d$. The next result is our analog of Proposition 3.1 for estimation in the structured setting using the localized sample covariance estimator. All proofs in this subsection have been deferred to Appendix B.1.

Theorem 4.1 (Covariance Estimation with Localized Sample Covariance - Structured Case). Let $u_1, \ldots, u_N$ be $d$-dimensional i.i.d. sub-Gaussian random vectors with $\mathbb{E}(u_1) = m$ and $\operatorname{var}(u_1) = C$. Assume further that $C \in \mathcal{U}_d(q, R_q)$ for some $q \in [0,1)$ and $R_q > 0$. Set
$$\rho_N \asymp C_{(1)}\left(\sqrt{\frac{r_\infty(C)}{N}} \vee \frac{r_\infty(C)}{N}\right).$$
Then with high probability
$$\big\|\widehat{C}_{\rho_N} - C\big\|_{\mathrm{op}} \lesssim R_q\,\rho_N^{1-q}.$$
The result depends crucially on the order of the maximum elementwise distance between the sample and true covariance matrices, $\|\widehat{C} - C\|_{\max}$, which in the finite $d$-dimensional structured setting is the source of the improved logarithmic dependence on the state dimension. The novelty in our result is the dimension-free analysis of $\|\widehat{C} - C\|_{\max}$, which utilizes techniques in [53] combined with the dimension-free sub-Gaussian maxima bound of Lemma B.6 to derive a bound in terms of $r_\infty$. In the worst case, when $C$ does not exhibit any spectral decay (for example when $C = cI_d$ for some constant $c > 0$), we recover exactly the logarithmic dependence on the state dimension. In particular, the bound in Theorem 4.1, under the additional assumption that $N \ge r_\infty(C)\,(= \log d)$, gives
$$\big\|\widehat{C}_{\rho_N} - C\big\|_{\mathrm{op}} \lesssim R_q\left(\frac{\log d}{N}\right)^{\frac{1-q}{2}},$$
which is the minimax risk of estimating C under the operator norm in the sub-Gaussian setting for the class U d (q, R q ), as derived in [18,Theorem 1]. If the ordered variances exhibit sufficiently fast decay, our upper bound is significantly better. (Recall that in many applications d ∼ 10 9 and N < 100, and so the logarithmic dependence on d may play a significant role in determining a sufficient ensemble size.) Importantly, many of the results in the structured covariance estimation literature rely similarly on the maximum elementwise norm, and so our results can be utilized to achieve refined bounds on the estimation error of the localized estimator under structural assumptions on C that differ from the soft-sparsity assumption considered in this work.
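The max-log dimension is equally easy to compute; the small sketch below, with an assumed diagonal decay, contrasts it with its worst-case value of roughly log d:

```python
# Max-log effective dimension r_inf(Q) = max_j Q_(j) log(j+1) / Q_(1).
import numpy as np

def max_log_dim(Q):
    v = np.sort(np.diag(Q))[::-1]                          # decreasing rearrangement of variances
    j = np.arange(1, len(v) + 1)
    return np.max(v * np.log(j + 1)) / v[0]

d = 1000
print(max_log_dim(np.eye(d)), np.log(d + 1))               # no decay: r_inf equals log(d + 1)
print(max_log_dim(np.diag(1.0 / np.arange(1, d + 1))))     # fast decay: r_inf is O(1)
```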
We further introduce a max-log dimension for rectangular matrices that will facilitate analysis of the cross-covariance estimation problem that arises in our study of the LEKI update in (2.22). To this end, given d-dimensional random vectors X 1 , . . . , X N with covariance Q X , and k-dimensional random vectors Y 1 , . . . , Y N with covariance Q Y , and denoting their covariance by Q XY ∈ R d×k , we define
$$r'_{\infty}(Q^{XY}) \equiv r'_{\infty}(Q^{XY}; Q^{X}, Q^{Y}) \equiv \max_{j \le (d \vee k)}\frac{\big(Q^{X}_{(j)} \vee Q^{Y}_{(j)}\big)\log(j+1)}{Q^{X}_{(1)} \vee Q^{Y}_{(1)}},$$
where $Q^{X}_{(j)} \equiv 0$ for $j > d$ and $Q^{Y}_{(j)} \equiv 0$ for $j > k$. A result analogous to Theorem 4.1 then holds for cross-covariance estimation in the structured setting and is provided in Theorem B.10. We present here a version of this result that is specific to the LEKI setting.
Theorem 4.3 (Cross-Covariance Estimation with Localized Sample Covariance). Let $u_1, \ldots, u_N$ be $d$-dimensional i.i.d. sub-Gaussian random vectors with covariance $C$, and suppose that the forward evaluations $G(u_1), \ldots, G(u_N)$ are sub-Gaussian with covariance $C^{pp}$. Assume that
$$C^{up} \in \mathcal{U}_{d,k}(q, R_q),$$
where $q \in [0, 1)$ and $R_q > 0$. Set
$$\rho_N \asymp \big(C_{(1)} \vee C^{pp}_{(1)}\big)\left(\sqrt{\frac{r'_\infty(C^{up})}{N}} \vee \frac{r'_\infty(C^{up})}{N}\right).$$
Then with high probability
$$\big\|\widehat{C}^{up}_{\rho_N} - C^{up}\big\|_{\mathrm{op}} \lesssim R_q\,\rho_N^{1-q}.$$
An alternative would be to estimate the covariance of the concatenated vector $[u^{\top}, G(u)^{\top}]^{\top}$ via Theorem 4.1 and read off the relevant block. This approach however requires one to place sparsity assumptions on the full covariance matrix, making the result potentially less useful in practice. That is, one may wish to make structural assumptions on $C^{up}$ and $C^{pp}$ without imposing any restrictions on $C$, which our result allows for.
Main Results: Approximation of Mean-Field Particle Updates with Finite Ensemble Size
In this subsection we state finite ensemble approximation results for EKI and LEKI updates. To highlight some key insights, including the dependence of the bounds on the max-log dimensions of $C^{up}$ and $C^{pp}$, we present streamlined statements in Theorems 4.5 and 4.7. More general versions of these results, along with their proofs, can be found in Appendix B.2.

Theorem 4.5 (Approximation of Mean-Field EKI with EKI (Streamlined)). Suppose that the forward model $G : \mathbb{R}^d \to \mathbb{R}^k$ in (1.1) is Lipschitz. Let $\upsilon_n$ and $\upsilon^*_n$ be the EKI and mean-field EKI updates defined in (2.18) and (2.19) respectively. Assume for simplicity that $N \ge r_2(C) \vee r_2(C^{pp})$ and that $\|C\|_{\mathrm{op}} = \|C^{up}\|_{\mathrm{op}} = \|C^{pp}\|_{\mathrm{op}} = 1$. Then, it holds with high probability that
$$\|\upsilon_n - \upsilon^*_n\|_2 \lesssim c\left(\sqrt{\frac{r_2(C)}{N}} \vee \sqrt{\frac{r_2(C^{pp})}{N}}\right), \qquad \text{where } c = c\big(\|\Gamma^{-1}\|_{\mathrm{op}}, \|y - G(u_n) - \eta_n\|_2\big).$$

These bounds may be used to establish the sufficient ensemble size necessary to ensure that the EKI update approximates well the mean-field EKI update in the unstructured covariance setting.

Theorem 4.7 (Approximation of Mean-Field EKI with LEKI (Streamlined)). Suppose that the forward model $G : \mathbb{R}^d \to \mathbb{R}^k$ in (1.1) is Lipschitz. Assume that $C^{up} \in \mathcal{U}_{d,k}(q_1, R_1)$ and $C^{pp} \in \mathcal{U}_k(q_2, R_2)$ for $q_1, q_2 \in [0, 1)$ and positive constants $R_1, R_2$. Let $\upsilon^{\rho}_n$ and $\upsilon^*_n$ be the LEKI and mean-field EKI updates outlined in (2.22) and (2.19) respectively. Assume for simplicity that $N \ge r'_\infty(C^{up}) \vee r_\infty(C^{pp})$ and that $\|C^{up}\|_{\mathrm{op}} = C_{(1)} = C^{pp}_{(1)} = 1$. Set
$$\rho_{N,1} \asymp \sqrt{\frac{r'_\infty(C^{up})}{N}} \qquad \text{and} \qquad \rho_{N,2} \asymp \sqrt{\frac{r_\infty(C^{pp})}{N}}.$$
Then it holds with high probability that
$$\|\upsilon^{\rho}_n - \upsilon^*_n\|_2 \lesssim \big(\|\Gamma^{-1}\|_{\mathrm{op}} \vee \|\Gamma^{-1}\|^2_{\mathrm{op}}\big)\big(R_1 \vee R_2\big)\left(\left(\frac{r'_\infty(C^{up})}{N}\right)^{\frac{1-q_1}{2}} \vee \left(\frac{r_\infty(C^{pp})}{N}\right)^{\frac{1-q_2}{2}}\right).$$

Importantly, Theorem 4.7 makes no assumptions on the covariance matrix $C$, and so can be used even in cases where $C$ is dense, but the covariances $C^{up}$ and $C^{pp}$ can be reasonably assumed to be sparse. In the case that sparsity assumptions on $C$ are appropriate, an interesting question is: what (explicit) assumptions on $G$ ensure sparsity of $C^{up}$ and $C^{pp}$? We give here two simple arguments that may provide some insight. Throughout, $c_1, c_2, c_3, c_4$ are arbitrary positive constants independent of both state and observation dimensions $d$ and $k$, and $q \in [0, 1)$.

1. Suppose $C \in \mathcal{U}_d(q, c_1)$ and $\mathbb{E}[DG]^{\top} \in \mathcal{U}_{d,k}(q, c_2)$. Then there exists $c_3$ such that $C^{up} \in \mathcal{U}_{d,k}(q, c_3)$. We provide a formal statement of this result in Lemma B.15. The assumption on the expected Jacobian $\mathbb{E}[DG]$ can be understood as the requirement that, in expectation, any coordinate function $G_j$ of $G$ depends on its input $u$ only through a subset of $u$ whose size does not grow with $k$ or $d$.
2. Suppose $C \in \mathcal{U}_d(q, c_1)$. Then there exists $c_2$ such that $C^{pp} \in \mathcal{U}_k(q, c_2)$ whenever $G(u) = Au$ is a linear map with $A \in \mathcal{U}_{k,d}(q, c_3)$ and $A^{\top} \in \mathcal{U}_{d,k}(q, c_4)$, i.e. whenever $A$ has both rows and columns that are sparse. This condition holds, for example, for banded $A$. We provide a formal statement of this result in Lemma B.17.
The two arguments imply that if G acts on local subsets of u, which holds for instance for convolution or moving average operators, then one can expect the sparsity of C to carry on to both C up and C pp .
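A small numerical check of this intuition, assuming a banded prior covariance and a three-point moving-average forward map (both choices are ours), confirms that the rows of $C^{up} = CA^{\top}$ and $C^{pp} = ACA^{\top}$ remain sparse:

```python
# Sparsity propagation for a banded C and a banded (moving-average) linear A.
import numpy as np

d = 30
dist = np.abs(np.arange(d)[:, None] - np.arange(d)[None, :])
C = np.where(dist <= 2, 0.5 ** dist, 0.0)                  # banded prior covariance
A = np.eye(d) + np.eye(d, k=1) + np.eye(d, k=-1)           # three-point moving average (k = d)
C_up, C_pp = C @ A.T, A @ C @ A.T
print(np.count_nonzero(C_up, axis=1).max(),                # maximum number of non-zeros per row
      np.count_nonzero(C_pp, axis=1).max())                # both stay bounded as d grows
```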
Conclusions, Discussion, and Future Directions
This paper has introduced a non-asymptotic approach to the study of ensemble Kalman methods. Our theory explains why these algorithms may be accurate provided that the ensemble size is larger than a suitable notion of effective dimension, which may be dramatically smaller than the statespace dimension due to spectrum decay and/or approximate sparsity. Our non-asymptotic results in Section 3 tell apart PO and SR updates for posterior approximation, and our results in Section 4 demonstrate the potential advantage of using localization in sequential-optimization algorithms. As discussed in Section 2.2.2, localization is also often used in posterior-approximation algorithms. For instance, one may define a localized PO update by
$$\widehat{\mu} = M(\widehat{m}, \widehat{C}_{\rho_N}) - K(\widehat{C}_{\rho_N})\bar{\eta}, \qquad \widehat{\Sigma} = C(\widehat{C}_{\rho_N}) + O_{\rho_N}, \tag{5.1}$$
where O ρN is defined replacing C with C ρN in (2.9). Similarly, one may define a localized SR update by
$$\widehat{\mu} = M(\widehat{m}, \widehat{C}_{\rho_N}), \qquad \widehat{\Sigma} = C(\widehat{C}_{\rho_N}). \tag{5.2}$$
It is then natural to ask if localized PO and SR updates can yield better approximation of the posterior mean and covariance than those without localization in Theorems 3.3 and 3.5.
The answer for the posterior mean seems to be negative. To see why, consider for intuition that we are given a random sample X 1 , . . . , X N from a normal distribution with unknown mean µ X and known covariance Σ X , and that we use the sample mean X to estimate µ X in ℓ 2 . In this setting, a standard result is that this estimator achieves the minimax rate Tr(Σ X )/N , see e.g. [56, Example 1.14]. Placing structural assumptions on Σ X can result in a significant improvement on the rate in the covariance estimation problem, as described in depth in Section 4, but it will not impact the mean estimation problem unless a possibly different estimator than the sample mean is used. Similarly, in our inverse problem setting, sparsity assumptions on the prior covariance C cannot be expected to translate into a better bound on µ − µ 2 : this quantity is a function of both the covariance deviation C ρN − C op and the prior mean deviation m − m 2 and since the latter is unaffected as shown in our intuitive argument, it dominates the overall bound, yielding an error bound of the same order as that in Theorem 3.3. A potential avenue for future investigation is then to consider estimators other than the sample mean of the ensemble, which may yield improved bounds. In short, we do not expect that localization can result in an improved bound without stronger assumptions placed directly on the target of estimation µ, or using an estimator other than the sample mean to estimate µ.
Similar issues to those arising in the estimation of the posterior mean affect the analysis of the localized offset O ρN , and we therefore do not expect improvement on the bound in Theorem 3.5 for covariance estimation with the localized PO update. We note, however, that for localized SR it is possible to derive an analog to Theorem 3.5 with an improved error bound, which we present in Theorem C.2.
Our discussion here should not be taken to imply that localization in posterior-approximation algorithms is not useful; it is plausible that localization in one step of the algorithm can lead to improved bounds in later steps, and we leave this multi-step analysis of localized posterior approximation ensemble updates as an important line for future work. A related phenomenon is known to occur in sequential Monte Carlo, where a proposal density that may be optimal for one step of the filter may not be optimal over multiple steps [1]. Another interesting direction for future study is the non-asymptotic analysis of ensemble Kalman methods for likelihood approximations in state-space models [22]. Finally, we envision that the non-asymptotic approach set forth here may be adopted to design and analyze new multi-step methods for posterior-approximation and sequential-optimization in inverse problems and data assimilation.
A Proofs: Section 3
This appendix contains the proofs of all the theorems in Section 3. Background results on covariance estimation are reviewed in Subsection A.1 and the continuity and boundedness of the Kalman gain, mean-update, covariance-update, and nonlinear gain-update operators are summarized in Subsection A.2. These preliminary results are used in Subsection A.3 to establish our main theorems.
A.1 Preliminaries: Concentration and Covariance Estimation
Theorem A.1 (Sub-Gaussian Norm Concentration, [97,Exercise 6.3.5]). Let X = [x 1 , . . . , x d ] ⊤ be a random vector with E(x i ) = 0, var(x i ) = 1, and x i ψ2 ≤ K i for i = 1, . . . , d. Let B ∈ R k×d be a fixed matrix. Then, for any δ ∈ (0, 1),
$$\mathbb{P}\Big(\|BX\|_2 > b_1 K\|B\|_F + b_2 K\|B\|_{\mathrm{op}}\sqrt{\log(1/\delta)}\Big) \le \delta,$$
where $b_1, b_2$ are universal constants and $K = \max_{i\le d}K_i$. In particular, if $X \sim N(\mu_X, \Sigma_X)$, we have
$$\mathbb{P}\Big(\|X - \mu_X\|_2 > \sqrt{\operatorname{Tr}(\Sigma_X)} + \sqrt{2\|\Sigma_X\|_{\mathrm{op}}\log(1/\delta)}\Big) \le \delta.$$
Remark A.2 (High Probability Statements). Theorem A.1 implies that, for any δ ∈ (0, 1), it holds with probability at least 1 − δ that
$$\|X - \mu_X\|_2 \le \sqrt{\operatorname{Tr}(\Sigma_X)} + \sqrt{2\|\Sigma_X\|_{\mathrm{op}}\log(1/\delta)} \lesssim_{\delta} \sqrt{\operatorname{Tr}(\Sigma_X)},$$
since $\operatorname{Tr}(\Sigma_X) \ge \|\Sigma_X\|_{\mathrm{op}}$. More concisely, we say that with high probability $\|X - \mu_X\|_2 \lesssim \sqrt{\operatorname{Tr}(\Sigma_X)}$. Throughout, we will present high probability results in this manner rather than as direct probability statements as done in Theorem A.1.

Proof of Proposition 3.1. The centered case ($m = 0$) is Theorem 4 of [53]. For completeness, we show here the un-centered case. For $n = 1, \ldots, N$, let $u_n = Z_n + m$, where $Z_n$ is a centered sub-Gaussian random vector with $\operatorname{var}(Z_n) = C$. Then we may write
$$\widehat{C} = \frac{1}{N-1}\sum_{n=1}^{N}(Z_n - \bar{Z})(Z_n - \bar{Z})^{\top} \asymp \frac{1}{N}\sum_{n=1}^{N}Z_nZ_n^{\top} - \bar{Z}\bar{Z}^{\top} \equiv \widehat{C}_0 - \bar{Z}\bar{Z}^{\top},$$
and further $C = \mathbb{E}(u_1 - m)(u_1 - m)^{\top} = \mathbb{E}ZZ^{\top}$. Therefore,
$$\|\widehat{C} - C\|_{\mathrm{op}} \le \|\widehat{C}_0 - C\|_{\mathrm{op}} + \|\bar{Z}\bar{Z}^{\top}\|_{\mathrm{op}} = \|\widehat{C}_0 - C\|_{\mathrm{op}} + \|\bar{Z}\|_2^2 \lesssim \|C\|_{\mathrm{op}}\left(\sqrt{\frac{r_2(C)}{N}} \vee \frac{r_2(C)}{N}\right) + \frac{\operatorname{Tr}(C)}{N} \lesssim \|C\|_{\mathrm{op}}\left(\sqrt{\frac{r_2(C)}{N}} \vee \frac{r_2(C)}{N}\right),$$
where we used the result for the centered case for the first term and Theorem A.1 for the second.

Lemma A.3 (Operator Norm of the Sample Covariance). In the setting of Proposition 3.1, it holds with high probability that
$$\|\widehat{C}\|_{\mathrm{op}} \lesssim \|C\|_{\mathrm{op}}\left(1 \vee \sqrt{\frac{r_2(C)}{N}} \vee \frac{r_2(C)}{N}\right).$$
Proof. By the triangle inequality,
$$\|\widehat{C}\|_{\mathrm{op}} = \|\widehat{C} - C + C\|_{\mathrm{op}} \le \|\widehat{C} - C\|_{\mathrm{op}} + \|C\|_{\mathrm{op}},$$
and the result follows from Proposition 3.1.

Lemma A.4 (Cross-Covariance Estimation). Consider the sample cross-covariance
$$\widehat{C}^{u\eta} = \frac{1}{N-1}\sum_{n=1}^{N}(u_n - \widehat{m})(\eta_n - \bar{\eta})^{\top}$$
of the cross-covariance $C^{u\eta} \equiv \mathbb{E}(u_1 - m)\eta_1^{\top}$. Then with high probability
$$\big\|\widehat{C}^{u\eta} - C^{u\eta}\big\|_{\mathrm{op}} \lesssim \big(\|C\|_{\mathrm{op}} \vee \|\Gamma\|_{\mathrm{op}}\big)\left(\sqrt{\frac{r_2(C)}{N}} \vee \frac{r_2(C)}{N} \vee \sqrt{\frac{r_2(\Gamma)}{N}} \vee \frac{r_2(\Gamma)}{N}\right).$$
Proof. First, we note that
C uη ≍ N − 1 N 1 N N n=1 (u n − m)(η n − η) ⊤ ≡ N − 1 N C uη ,
and so it suffices to prove the claim for the biased sample covariance estimator, which we denote by C uη . Let Z n = u n − m, then it follows that
C uη op = 1 N N n=1 Z n η ⊤ n + Zη ⊤ op ≤ 1 N N n=1 Z n η ⊤ n op + Zη ⊤ op . (A.1)
For the second term of (A.1), by Theorem A.1
Zη ⊤ op = Z 2 η 2 Tr(C) N Tr(Γ) N = C 1/2 op Γ 1/2 op r 2 (C) N r 2 (Γ) N C op r 2 (C) N + Γ op r 2 (Γ) N ( C op ∨ Γ op ) r 2 (C) N + r 2 (Γ) N ,
where the first inequality follows by Theorem A.1 and the second inequality from the identity √ ab a + b for a, b ≥ 0. To control the first term in the right-hand side of (A.1), we define the vector
W n = Z n η n ∈ R d+k , 1 ≤ n ≤ N,
and note that W 1 , . . . , W N is an i.i.d. sub-Gaussian sequence with EW 1 = [m ⊤ , 0 ⊤ k ] ⊤ and variance diag(C, Γ). Therefore, by Proposition 3.1 it holds with high probability that
1 N N n=1 W n W ⊤ n − C O O Γ op ( C op ∨ Γ op ) Tr(C) + Tr(Γ) N ( C op ∨ Γ op ) ∨ Tr(C) + Tr(Γ) N ( C op ∨ Γ op ) ( C op ∨ Γ op ) Tr(C) N C op + Tr(Γ) N Γ op ∨ Tr(C) + Tr(Γ) N ( C op ∨ Γ op ) ( C op ∨ Γ op ) r 2 (C) N + r 2 (Γ) N ∨ C op r 2 (C) N + Γ op r 2 (Γ) N ( C op ∨ Γ op ) r 2 (C) N + r 2 (Γ) N ∨ r 2 (C) N + r 2 (Γ) N ( C op ∨ Γ op ) r 2 (C) N ∨ r 2 (C) N ∨ r 2 (Γ) N ∨ r 2 (Γ) N .
Note that we can express
P ≡ 1 N N n=1 W n W ⊤ n − C O O Γ = N −1 N n=1 Z n Z ⊤ n − C N −1 N n=1 Z n η ⊤ n N −1 N n=1 η n Z ⊤ n N −1 N n=1 η n η ⊤ n − Γ , and since 1 N N n=1 Z n η ⊤ n op = E 11 PE 12 op ≤ E 11 op P op E 12 op = P op ,
where E 11 , E 12 are block selection matrices that pick the relevant sub-block matrix of P, it holds with high probability that
1 N N n=1 Z n η ⊤ n op ( C op ∨ Γ op ) r 2 (C) N ∨ r 2 (C) N ∨ r 2 (Γ) N ∨ r 2 (Γ) N .
A.2 Continuity and Boundedness of Update Operators
The next three lemmas, shown in [55], ensure the continuity and boundedness of the Kalman gain, mean-update, and covariance-update operators introduced in Section 2. We include them here for completeness. Lemma A.8 below establishes similar properties for the nonlinear gain-update operator.

Lemma A.5 (Continuity and Boundedness of Kalman Gain Operator). Let $K$ be the Kalman gain operator defined in (2.2). Let $P, Q \in S_+^d$, $\Gamma \in S_{++}^k$, $A \in \mathbb{R}^{k\times d}$, $y \in \mathbb{R}^k$, and $m, m' \in \mathbb{R}^d$. The following hold:
$$\begin{aligned}
\|K(Q) - K(P)\|_{\mathrm{op}} &\le \|Q - P\|_{\mathrm{op}}\|A\|_{\mathrm{op}}\|\Gamma^{-1}\|_{\mathrm{op}}\Big(1 + \min\{\|P\|_{\mathrm{op}}, \|Q\|_{\mathrm{op}}\}\|A\|^2_{\mathrm{op}}\|\Gamma^{-1}\|_{\mathrm{op}}\Big),\\
\|K(Q)\|_{\mathrm{op}} &\le \|Q\|_{\mathrm{op}}\|A\|_{\mathrm{op}}\|\Gamma^{-1}\|_{\mathrm{op}}, \qquad \|I - K(P)A\|_{\mathrm{op}} \le 1 + \|P\|_{\mathrm{op}}\|A\|^2_{\mathrm{op}}\|\Gamma^{-1}\|_{\mathrm{op}}.
\end{aligned}$$

Lemma A.6 (Continuity and Boundedness of Mean-Update Operator). In the same setting, with $M$ the mean-update operator defined in (2.3), the following hold:
$$\begin{aligned}
\|M(m, Q)\| &\le \|m\| + \|Q\|_{\mathrm{op}}\|A\|_{\mathrm{op}}\|\Gamma^{-1}\|_{\mathrm{op}}\|y - Am\|_2,\\
\|M(m, Q) - M(m', P)\| &\le \|m - m'\|\Big(1 + \|A\|^2_{\mathrm{op}}\|\Gamma^{-1}\|_{\mathrm{op}}\|Q\|_{\mathrm{op}}\Big) + \|Q - P\|_{\mathrm{op}}\|A\|_{\mathrm{op}}\|\Gamma^{-1}\|_{\mathrm{op}}\Big(1 + \|A\|^2_{\mathrm{op}}\|\Gamma^{-1}\|_{\mathrm{op}}\|P\|_{\mathrm{op}}\Big)\|y - Am'\|_2.
\end{aligned}$$

Lemma A.7 (Continuity and Boundedness of Covariance-Update Operator). In the same setting, with $C$ the covariance-update operator defined in (2.4), the following hold:
$$\|C(Q) - C(P)\|_{\mathrm{op}} \le \|Q - P\|_{\mathrm{op}}\Big(1 + \|A\|^2_{\mathrm{op}}\|\Gamma^{-1}\|_{\mathrm{op}}\big(\|Q\|_{\mathrm{op}} + \|P\|_{\mathrm{op}}\big) + \|A\|^4_{\mathrm{op}}\|\Gamma^{-1}\|^2_{\mathrm{op}}\|Q\|_{\mathrm{op}}\|P\|_{\mathrm{op}}\Big), \qquad 0 \preceq C(Q) \preceq Q, \qquad \|C(Q)\|_{\mathrm{op}} \le \|Q\|_{\mathrm{op}}.$$
Lemma A.8 (Continuity and Boundedness of Nonlinear Gain-Update Operator). Let $P$ be the nonlinear gain-update operator defined in (2.16). Let $P, \widetilde{P} \in \mathbb{R}^{d\times k}$, $Q, \widetilde{Q} \in S_+^k$, and $\Gamma \in S_{++}^k$. The following hold:
$$\big\|P(P, Q) - P(\widetilde{P}, \widetilde{Q})\big\|_{\mathrm{op}} \le \|\Gamma^{-1}\|_{\mathrm{op}}\|P - \widetilde{P}\|_{\mathrm{op}} + \|\Gamma^{-1}\|^2_{\mathrm{op}}\|P\|_{\mathrm{op}}\|Q - \widetilde{Q}\|_{\mathrm{op}}, \qquad \big\|P(P, Q)\big\|_{\mathrm{op}} \le \|\Gamma^{-1}\|_{\mathrm{op}}\|P\|_{\mathrm{op}} + \|\Gamma^{-1}\|^2_{\mathrm{op}}\|P\|_{\mathrm{op}}\|Q\|_{\mathrm{op}}.$$
Proof.
The proof follows in similar style to Lemma 4.1 in [55]. We note that
P (Q + Γ) −1 − P ( Q + Γ) −1 op ≤ P (Q + Γ) −1 − P ( Q + Γ) −1 op + P ( Q + Γ) −1 − P ( Q + Γ) −1 op ≤ P op (Q + Γ) −1 − ( Q + Γ) −1 op + P − P op (Q + Γ) −1 op . Since Γ ≻ 0 and Q 0, it holds that Q + Γ Γ and so (Q + Γ) −1 Γ −1 , which in turn implies (Q + Γ) −1 op ≤ Γ −1 op . Further, (Q + Γ) −1 − ( Q + Γ) −1 op = Γ −1/2 [(Γ −1/2 QΓ −1/2 + I) −1 − (Γ −1/2 QΓ −1/2 + I) −1 ]Γ −1/2 op ≤ Γ −1 op (Γ −1/2 QΓ −1/2 + I) −1 − (Γ −1/2 QΓ −1/2 + I) −1 op ≤ Γ −1 op Γ −1/2 QΓ −1/2 − Γ −1/2 QΓ −1/2 op ≤ Γ −1 2 op Q − Q op ,
where the second-to-last inequality follows from the fact that $\|(I + A)^{-1} - (I + B)^{-1}\|_{\mathrm{op}} \le \|B - A\|_{\mathrm{op}}$ for $A, B \in S_+^k$.
To prove the pointwise boundedness of P, take P to be the d × k matrix of zeroes, and Q to be the k × k matrix of zeroes, and plug these values into the continuity bound.
A.3 Proof of Main Results in Section 3
In order to ease notation throughout this section, we define for any $N \in \mathbb{N}$ and any symmetric matrix $Q$:
$$R_{N,2}(Q) \equiv \|Q\|_{\mathrm{op}}\left(\sqrt{\frac{r_2(Q)}{N}} \vee \frac{r_2(Q)}{N}\right), \tag{A.2}$$
$$Z_{N,2}(Q) \equiv \|Q\|_{\mathrm{op}} + R_{N,2}(Q). \tag{A.3}$$

Theorem A.9 (Posterior Mean Approximation, General Case). Let $\widehat{\mu}$ be given by the PO update (2.8) (with $\phi = 1$) or the SR update (2.10) (with $\phi = 0$). Then it holds with high probability that
$$\|\widehat{\mu} - \mu\|_2 \lesssim \Big(1 + \|A\|^2_{\mathrm{op}}\|\Gamma^{-1}\|_{\mathrm{op}}Z_{N,2}(C)\Big)\sqrt{\frac{\operatorname{Tr}(C)}{N}} + \|A\|_{\mathrm{op}}\|\Gamma^{-1}\|_{\mathrm{op}}\|y - Am\|_2\,R_{N,2}(C)\Big(1 + \|A\|^2_{\mathrm{op}}\|\Gamma^{-1}\|_{\mathrm{op}}\|C\|_{\mathrm{op}}\Big) + \phi e, \tag{A.4}$$
where $e = \|A\|_{\mathrm{op}}\|\Gamma^{-1}\|_{\mathrm{op}}Z_{N,2}(C)\sqrt{\frac{\operatorname{Tr}(\Gamma)}{N}}$.
Proof. It follows from Lemma A.6 that
µ − µ 2 = M( m, C) − ϕK( C)η − M(m, C) 2 ≤ M( m, C) − M(m, C) 2 + ϕ K( C)η 2 ≤ m − m 2 1 + A 2 op Γ −1 op C op (A.5) + C − C op A op Γ −1 op 1 + A 2 op Γ −1 op C op y − Am 2 (A.6) + ϕ K( C) op η 2 . (A.7)
We
Combining this with Lemma A.3 implies that with high probability
m − m 2 1 + A 2 op Γ −1 op C op 1 + A 2 op Γ −1 op Z N,2 (C) Tr(C) N . (A.8)
For (A.6), we first invoke Proposition 3.1 to control C − C op , which shows that with high probability
C − C op A op Γ −1 op 1 + A 2 op Γ −1 op C op y − Am A op Γ −1 op y − Am R N,2 (C) 1 + A 2 op Γ −1 op C op . (A.9)
Finally, for (A.7), it follows from Lemma A.5, the Gaussian concentration Theorem A.1, and Lemma A.3 that with high probability
K( C) op η ≤ A op Γ −1 op C op η A op Γ −1 op Z N,2 (C) Tr(Γ) N . (A.10)
Putting the three bounds (A.8), (A.9) and (A.10) together, we see that
µ − µ 1 + A 2 op Γ −1 op Z N,2 (C) Tr(C) N + A op Γ −1 op y − Am R N,2 (C) 1 + A 2 op Γ −1 op C op + ϕ A op Γ −1 op Z N,2 (C) Tr(Γ) N .
Proof of Theorem 3.3. The more general result, without the assumptions that N ≥ r 2 (C) and C op = 1, is provided in Theorem A.9. We therefore take that result as our starting point and demonstrate how the expression can be simplified under these additional assumptions. From Theorem A.9, we have with high probability
µ − µ 2 1 + A 2 op Γ −1 op Z N,2 (C) Tr(C) N + A op Γ −1 op y − Am 2 R N,2 (C) 1 + A 2 op Γ −1 op + ϕe = T 1 + T 2 + ϕT 3 ,
and we deal with each of these terms separately. For the first term, we have
T 1 = 1 + A 2 op Γ −1 op 1 + r 2 (C) N r 2 (C) N 1 + A 2 op Γ −1 op r 2 (C) N ,
and for the second term
T 2 = y − Am 2 A op Γ −1 op 1 + A 2 op Γ −1 op R N,2 (C) y − Am 2 A op Γ −1 op 1 + A 2 op Γ −1 op r 2 (C) N .
It follows then that
T 1 + T 2 1 + A op Γ −1 op y − Am 2 1 + A 2 op Γ −1 op r 2 (C) N = c 1 ( A op , Γ −1 op , y − Am 2 ) r 2 (C) N .
Further, for the third term we note that
T 3 = A op Γ −1 op 1 + R N,2 (C) Γ 1/2 op r 2 (Γ) N A op Γ −1 op Γ 1/2 op r 2 (Γ) N = c 2 ( A op , Γ −1 op , r 2 (Γ)) √ N .Σ − Σ op R N,2 (C) 1 + A 2 op Γ −1 op Z N,2 (C) + A 4 op Γ −1 2 op Z N,2 (C) C op + ϕe, where e = A 2 op Γ −1 2 op Z 2 N,2 (C)R N,2 (Γ) + A op Γ −1 op Z N,2 (C) A 3 op Γ −1 2 op Z 2 N,2 (C) × ( C op ∨ Γ op ) r 2 (C) N ∨ r 2 (C) N ∨ r 2 (Γ) N ∨ r 2 (Γ) N .
Proof. From Proposition 4 of [33], for the PO-ensemble Kalman update we may write
Σ = C( C) + O,
while for the SR-ensemble Kalman update we have Σ = C( C). We deal initially with the C( C) term that is common to both expressions, and then proceed to show how the operator norm of the additional O term can be controlled. From Lemma A.7, the continuity of C immediately implies that
C( C) − C(C) op ≤ C − C op 1 + A 2 op Γ −1 op C op + C op + A 4 op Γ −1 2 op C op C op .
It follows from Proposition 3.1 that with high probability
C − C op R N,2 (C)
and from Lemma A.3 that
C op Z N,2 (C), so that Σ − Σ op C − C op 1 + A 2 op Γ −1 op C op + C op + A 4 op Γ −1 2 op C op C op R N,2 (C) 1 + A 2 op Γ −1 op Z N,2 (C) + A 4 op Γ −1 2 op Z N,2 (C) C op .
Next, for the PO-ensemble Kalman update, it follows by the triangle inequality that
O op ≤ K( C)( Γ − Γ)K ⊤ ( C) op (A.11) + (I − K( C)A)( C uη ) ⊤ K ⊤ ( C) op (A.12) + K( C)( C uη ) ⊤ (I − A ⊤ K ⊤ ( C)) op , (A.13)
and so we may proceed by bounding each of the three terms (A.11), (A.12), and (A.13) separately. For (A.11), we can invoke the bound on K shown in Lemma A.5, as well as the sample covariance bound from Proposition 3.1 twice to show that
K( C)( Γ − Γ)K ⊤ ( C) op ≤ K( C) 2 op Γ − Γ op ≤ A 2 op Γ −1 2 op C 2 op Γ − Γ op A 2 op Γ −1 2 op Z 2 N,2 (C)R N,2 (Γ).
Both (A.12) and (A.13) are equal in operator norm, and so we consider only (A.12). We again use Lemma A.5 and Lemma A.3 to show that
(I − K( C)A)( C uη ) ⊤ K ⊤ ( C) op ≤ K( C) op I − K( C)A op C uη op ≤ K( C) op 1 + K( C) op A op C uη op ≤ A op Γ −1 op C op 1 + A 2 op Γ −1 op C op C uη op = A op Γ −1 op C op + A 3 op Γ −1 2 op C 2 op C uη op A op Γ −1 op Z N,2 (C) + A 3 op Γ −1 2 op Z 2 N,2 (C) C uη op . Now, by Lemma A.4, C uη op ( C op ∨ Γ op ) r 2 (C) N ∨ r 2 (C) N ∨ r 2 (Γ) N ∨ r 2 (Γ) N .
In summary, we have shown that with high probability
O op A 2 op Γ −1 2 op Z 2 N,2 (C)R N,2 (Γ) + A op Γ −1 op Z N,2 (C) + A 3 op Γ −1 2 op Z 2 N,2 (C) × ( C op ∨ Γ op ) r 2 (C) N ∨ r 2 (C) N ∨ r 2 (Γ) N ∨ r 2 (Γ) N .
Proof of Theorem 3.5. From Theorem A.10, we have
Σ − Σ op R N,2 (C) 1 + A 2 op Γ −1 op Z N,2 (C) + A 4 op Γ −1 2 op Z N,2 (C) C op + ϕe (1 + A 2 op Γ −1 op + ( A 2 op Γ −1 op ) 2 ) r 2 (C) N + ϕe = c 1 ( A op , Γ −1 op ) r 2 (C) N + ϕe.
Further, we have
e = A 2 op Γ −1 2 op Z 2 N,2 (C)R N,2 (Γ) + A op Γ −1 op Z N,2 (C) + A 3 op Γ −1 2 op Z 2 N,2 (C) (1 ∨ Γ op ) r 2 (C) N A 2 op Γ −1 2 op Γ 1/2 op r 2 (Γ) N + A op Γ −1 op + A 3 op Γ −1 2 op (1 ∨ Γ op ) r 2 (C) N = c 2 ( A op , Γ −1 op , r 2 (Γ)) √ N + c 3 ( A op , Γ −1 op , Γ op ) r 2 (C) N .
B Proofs: Section 4
This appendix contains the proofs of all the theorems in Section 4. Results on covariance estimation are in Subsection B.1 and our main results on ensemble Kalman updates are in Subsection B.2.
B.1 Covariance Estimation
Here we establish . Given a set T , an admissible sequence of partitions of T is an increasing sequence (∆ n ) of partitions of T such that card(∆ n ) ≤ 2 2 n for n ≥ 1.
The notion of an admissible sequence of partitions allows us to define the following notion of complexity of a set T , often referred to as generic complexity.
$$\gamma_2(T, d) \equiv \inf\sup_{t\in T}\sum_{n\ge 0}2^{n/2}\operatorname{Diam}\big(\Delta_n(t)\big),$$
where ∆ n (t) denotes the unique element of the partition to which t belongs, and the infimum is taken over all admissible sequences of partitions.
The following theorem is known as the Majorizing Measure Theorem and provides upper and lower bounds for centered Gaussian processes in terms of the generic complexity.
$$d_X^2(s, t) = \mathbb{E}(X_s - X_t)^2.$$
Then there exists an absolute constant L > 0 such that
$$\frac{1}{L}\gamma_2(T, d_X) \le \mathbb{E}\sup_{t\in T}X_t \le L\,\gamma_2(T, d_X).$$
We will be primarily interested in the case that T = F is some function class on the probability space (X , A, P), and with d being the metric induced either by · L2 or · ψ2 . We denote these spaces by (F , L 2 ) and (F , ψ 2 ) respectively throughout this section. The next result that we will need is a bound on the expected supremum of the squared empirical process in terms of the generic complexity. A). Then
$$\mathbb{E}\sup_{f\in\mathcal{F}}\left|\frac{1}{N}\sum_{n=1}^{N}f(X_n) - \mathbb{E}f(X)\right| \lesssim \frac{\gamma_2(\mathcal{F}, \psi_2)}{\sqrt{N}}.$$
If in addition to the measurability requirement, F is assumed to contain only mean-zero functions and to be symmetric in the sense that
$$f \in \mathcal{F} \implies -f \in \mathcal{F},$$
then
$$\mathbb{E}\sup_{f\in\mathcal{F}}\left|\frac{1}{N}\sum_{n=1}^{N}f^2(X_n) - \mathbb{E}f^2(X)\right| \lesssim \sup_{f\in\mathcal{F}}\|f\|_{\psi_1}\frac{\gamma_2(\mathcal{F}, \psi_2)}{\sqrt{N}} \vee \frac{\gamma_2^2(\mathcal{F}, \psi_2)}{N}.$$

Lemma B.5. Let $Y$ be a non-negative random variable satisfying
$$\mathbb{P}(Y \ge r) \le a\exp\Big(-\frac{r^2}{b^2}\Big), \qquad r \ge 0,$$
for certain numbers $a \ge 2$ and $b > 0$. Then
$$\mathbb{E}Y \le Cb\sqrt{\log a}.$$
Finally, we recall the following dimension-free bound for the maxima of sub-Gaussian random variables. Lemma B.6 (Dimension-Free Sub-Gaussian Maxima, [96,Lemma 2.4]). Let X 1 , . . . X N be not necessarily independent sub-Gaussian random variables with
$$\mathbb{P}(X_n > x) \le c\,e^{-x^2/(c\sigma_n^2)} \quad \text{for all } x \ge 0, \; 1 \le n \le N,$$
where $c$ is a universal constant and $\sigma_n \ge 0$ are given, or alternatively $\|X_n\|_{\psi_2} \le \sigma_n$. Then
$$\mathbb{E}\max_{n\le N}X_n \lesssim \max_{n\le N}\sigma_{(n)}\sqrt{\log(n+1)}, \qquad \text{where } \sigma_{(1)} \ge \sigma_{(2)} \ge \cdots \ge \sigma_{(N)}$$
is the decreasing rearrangement of $\sigma_1, \ldots, \sigma_N$. In the special case that $X_1, \ldots, X_N$ are independent and $X_n \sim N(0, \sigma_n^2)$, then
$$\mathbb{E}\max_{n\le N}|X_n| \asymp \max_{n\le N}\sigma_{(n)}\sqrt{\log(n+1)}.$$
Proof. The proof of the upper bound is based on the proof of Proposition 2.4.16 in [89]. By permutation invariance, we can assume without loss of generality that σ 1 ≥ σ 2 ≥ · · · ≥ σ N . Then for any r ≥ 0,
P max n≤N X n σ n log(n + 1) ≥ r ≤ n≤N P X n ≥ rσ n log(n + 1) ≤ c n≤N exp − r 2 c log(n + 1) .
For r ≥ √ 2c, the final expression in the above display is at most c exp(−r 2 /c). It follows by Lemma B.5 that
B.1.2 Covariance Estimation under Soft Sparsity
This subsection contains the proof of Theorem 4.1. We follow the approach in [53,Theorem 4], but we restrict our attention to finite dimensional spaces for ease of exposition. Since all the results used in the proof are dimension free, Theorem 4.1 may be generalized to Banach spaces. Our proof will rely on the following max-norm covariance estimation bound, which may be of independent interest.
Theorem B.7 (Covariance Estimation with Sample Covariance - Max-Norm Bound). Let $X_1, \ldots, X_N$ be $d$-dimensional i.i.d. sub-Gaussian random vectors with $\mathbb{E}(X_1) = \mu_X$ and $\operatorname{var}(X_1) = \Sigma_X$. Let $\widehat{\Sigma}_X = (N-1)^{-1}\sum_{n=1}^{N}(X_n - \bar{X})(X_n - \bar{X})^{\top}$. Then it holds with high probability that
$$\big\|\widehat{\Sigma}_X - \Sigma_X\big\|_{\max} \lesssim \Sigma_{X(1)}\left(\sqrt{\frac{r_\infty(\Sigma_X)}{N}} \vee \frac{r_\infty(\Sigma_X)}{N}\right), \qquad r_\infty(\Sigma_X) \equiv \max_{j}\frac{\Sigma_{X(j)}\log(j+1)}{\Sigma_{X(1)}}.$$
Proof. The proof of this result is based on the proof of the upper bound of Theorem 4 of [53]. We deal with the case µ X = 0 first. To this end, let Z 1 , . . . , Z N be d-dimensional i.i.d. sub-Gaussian random vectors with zero mean and var(Z 1 ) = Σ X . We denote the distribution of Z 1 by P, and note that · ψ1 , · ψ2 , and · L2 are defined implicitly with respect to P. Let Σ 0 = N −1 N n=1 Z n Z ⊤ n . We rewrite the expectation of interest as a squared empirical process term over an appropriate class of functions. For j ≥ 1 we denote the j-th canonical vector (the vector with 1 in the j-th index and zero otherwise) by e j . Then, we note that
E Σ 0 − Σ X max = E sup i,j e i , ( Σ 0 − Σ X )e j = E sup i,j e i + e j 2 , ( Σ 0 − Σ X ) e i + e j 2 − e i − e j 2 , ( Σ 0 − Σ X ) e i − e j 2 ≤ 2E sup u∈U ( Σ 0 − Σ X )u, u , where U = u ∈ R d : u = ± 1 2 (e i ± e j ), 1 ≤ i, j ≤ d .
Define the set of functions F U = ·, u : u ∈ U , and note that, for any f ∈ F U , −f ∈ F U and Ef (Z 1 ) = 0. It then follows by Theorem B.4 that
2E sup u∈U ( Σ 0 − Σ X )u, u = 2E sup u∈U 1 N N n=1 Z n , u 2 − u, Σ X u = 2E sup f ∈FU 1 N N n=1 f 2 (Z n ) − Ef 2 (Z 1 ) sup f ∈FU f ψ1 γ 2 (F U ; ψ 2 ) √ N ∨ γ 2 2 (F U ; ψ 2 ) N .
Using the equivalence of the ψ 1 and L 2 norms for linear functionals, we have
sup f ∈FU f ψ1 sup f ∈FU f L2 = max u∈U E Z 1 , u 2 = max u∈U u, Σ X u = 1 2 max i,j e i ± e j , Σ X (e i ± e j ) = 1 2 max i,j e i , Σ X e i + e j , Σ X e j ± 2 e i , Σ X e j = 1 2 max i,j Σ X ii + Σ X jj ± 2Σ X ij ≤ Σ X (1) .
To control the generic complexity γ 2 (F U , ψ 2 ), let Y ∼ N (0, Σ X ) be a d-dimensional Gaussian vector, with induced metric
d Y (u, v) = E( Y, u − Y, v ) 2 = ·, u − ·, v L2 , u, v ∈ U.
Then, using the equivalence of the ψ 2 and L 2 norms for linear functionals, we have that
γ 2 (F U ; ψ 2 ) γ 2 (F U ; L 2 ) = γ 2 (U; d Y ).
It follows then by Theorem B.3 that
γ 2 (U; d Y ) E sup u∈U Y, u = E max i,j Y, ± 1 2 (e i ± e j ) ≤ E max j | Y, e j | max j Σ X (j) log(j + 1),
where the final inequality follows by Lemma B.6. We have shown that
E Σ 0 − Σ X max Σ X (1) max j Σ X (j) log(j + 1) N ∨ max j Σ X (j) log(j + 1) N = Σ X (1) r ∞ (Σ X ) N ∨ r ∞ (Σ X ) N .
The claim for the centered case then follows immediately by an application of Markov's inequality.
In the un-centered case, taking X n = Z n + µ X , we have Σ X = Σ 0 − ZZ ⊤ and it follows that
Σ X − Σ X max ≤ Σ 0 − Σ X max + ZZ ⊤ max Σ X (1) r ∞ (Σ X ) N ∨ r ∞ (Σ X ) N ,
since by Lemma B.6,
$$\big\|\bar{Z}\bar{Z}^{\top}\big\|_{\max} \le \|\bar{Z}\|_{\max}^2 \lesssim \frac{1}{N}\max_{j\le d}\Sigma_{X(j)}\log(j+1).$$

Theorem B.8 (Covariance Estimation - Max-Norm Bound). Let $X_1, \ldots, X_N$ be $d$-dimensional i.i.d. sub-Gaussian random vectors with $\mathbb{E}X_1 = \mu_X$ and $\operatorname{var}(X_1) = \Sigma_X$. Further, assume that $\Sigma_X \in \mathcal{U}_d(q, R_q)$ for some $q \in [0,1)$ and $R_q > 0$. Let $\widehat{\Sigma}_X = (N-1)^{-1}\sum_{i=1}^{N}(X_i - \bar{X})(X_i - \bar{X})^{\top}$ and set
$$\rho_N \asymp \Sigma_{X(1)}\left(\sqrt{\frac{r_\infty(\Sigma_X)}{N}} \vee \frac{r_\infty(\Sigma_X)}{N}\right).$$
Then with high probability
$$\big\|L_{\rho_N}(\widehat{\Sigma}_X) - \Sigma_X\big\|_{\mathrm{op}} \lesssim R_q\,\rho_N^{1-q}.$$
Proof. The localized sample covariance matrix has elements
$$\big[L_{\rho_N}(\widehat{\Sigma}_X)\big]_{ij} = \widehat{\Sigma}_{X,ij}\,\mathbf{1}\big\{|\widehat{\Sigma}_{X,ij}| \ge \rho_N\big\}, \qquad 1 \le i, j \le d.$$
By Theorem B.7, it holds with high probability that
$$\big\|\widehat{\Sigma}_X - \Sigma_X\big\|_{\max} \lesssim \Sigma_{X(1)}\left(\sqrt{\frac{r_\infty(\Sigma_X)}{N}} \vee \frac{r_\infty(\Sigma_X)}{N}\right) \asymp \frac{\rho_N}{2}.$$
The remainder of the analysis is carried out conditional on this event, following the approach taken in [98,Theorem 6.27]. Define the set of indices of the i-th row of Σ X that exceed ρ N /2 by
$$I_i(\rho_N/2) \equiv \big\{j \in \{1, \ldots, d\} : |\Sigma_{X,ij}| \ge \rho_N/2\big\}, \qquad i = 1, \ldots, d.$$
We then have
Σ X − L ρN ( Σ X ) op ≤ Σ X − L ρN ( Σ X ) 1 = max i=1,...,d d j=1 Σ X ij − Σ X ij 1 | Σ X ij |≥ρN = max i=1,...,d j∈Ii(ρN /2) Σ X ij − Σ X ij 1 | Σ X ij |≥ρN + j / ∈Ii(ρN /2) Σ X ij − Σ X ij 1 | Σ X ij |≥ρN . For j ∈ I i (ρ N /2), it holds that |Σ X ij | ≥ ρ N /2 so that j∈Ii(ρN /2) Σ X ij − Σ X ij 1 | Σ X ij |≥ρN ≤ j∈Ii(ρN /2) Σ X ij − Σ X ij + Σ X ij − Σ X ij 1 | Σ X ij |≥ρN ≤ j∈Ii(ρN /2) Σ X ij − Σ X ij max + Σ X ij − Σ X ij 1 | Σ X ij |≥ρN ≤ j∈Ii(ρN /2) ρ N 2 + ρ N = |I i (ρ N /2)| 3ρ N 2 ,
where we have used the fact that
\[
\bigl|\widehat{\Sigma}^{X}_{ij} - \widehat{\Sigma}^{X}_{ij}\mathbf{1}_{|\widehat{\Sigma}^{X}_{ij}|\ge\rho_N}\bigr|
= 0\cdot\mathbf{1}_{|\widehat{\Sigma}^{X}_{ij}|\ge\rho_N} + \bigl|\widehat{\Sigma}^{X}_{ij}\bigr|\mathbf{1}_{|\widehat{\Sigma}^{X}_{ij}|<\rho_N} \le \rho_N.
\]
Further, since $R_q \ge \sum_{j=1}^{d}|\Sigma^{X}_{ij}|^{q} \ge |I_i(\rho_N/2)|\,(\rho_N/2)^{q}$, it follows that $|I_i(\rho_N/2)| \le 2^{q}\rho_N^{-q}R_q$, and so
\[
\sum_{j \in I_i(\rho_N/2)}\bigl|\Sigma^{X}_{ij} - \widehat{\Sigma}^{X}_{ij}\mathbf{1}_{|\widehat{\Sigma}^{X}_{ij}|\ge\rho_N}\bigr|
\le |I_i(\rho_N/2)|\,\frac{3\rho_N}{2}
\le \frac{3}{2}\,2^{q}\,\rho_N^{1-q}R_q.
\]
For $j \notin I_i(\rho_N/2)$ we have $|\Sigma^{X}_{ij}| \le \rho_N/2$, and so
\[
|\widehat{\Sigma}^{X}_{ij}| \le |\widehat{\Sigma}^{X}_{ij} - \Sigma^{X}_{ij}| + |\Sigma^{X}_{ij}|
\le \|\widehat{\Sigma}^{X} - \Sigma^{X}\|_{\max} + |\Sigma^{X}_{ij}|
\le \frac{\rho_N}{2} + \frac{\rho_N}{2} = \rho_N.
\]
This implies that $\widehat{\Sigma}^{X}_{ij}\mathbf{1}_{|\widehat{\Sigma}^{X}_{ij}|\ge\rho_N} = 0$, and therefore for $q \in [0, 1)$, since $|\Sigma^{X}_{ij}|/(\rho_N/2) \le 1$, it holds that
\[
\sum_{j \notin I_i(\rho_N/2)}\bigl|\Sigma^{X}_{ij} - \widehat{\Sigma}^{X}_{ij}\mathbf{1}_{|\widehat{\Sigma}^{X}_{ij}|\ge\rho_N}\bigr|
\le \sum_{j \notin I_i(\rho_N/2)}|\Sigma^{X}_{ij}|
= \frac{\rho_N}{2}\sum_{j \notin I_i(\rho_N/2)}\frac{|\Sigma^{X}_{ij}|}{\rho_N/2}
\le \frac{\rho_N}{2}\sum_{j \notin I_i(\rho_N/2)}\Bigl(\frac{|\Sigma^{X}_{ij}|}{\rho_N/2}\Bigr)^{q}
\le \rho_N^{1-q}R_q.
\]
Combining these two results gives $\|\Sigma^{X} - L_{\rho_N}(\widehat{\Sigma}^{X})\|_{op} \le 4\rho_N^{1-q}R_q$.
We have therefore shown that with high probability
\[
\|\Sigma^{X} - L_{\rho_N}(\widehat{\Sigma}^{X})\|_{op}
\lesssim R_q\Biggl(\|\Sigma^{X}\|_{\max}\Biggl(\sqrt{\frac{r_\infty(\Sigma^{X})}{N}} \vee \frac{r_\infty(\Sigma^{X})}{N}\Biggr)\Biggr)^{1-q}.
\]
Proof of Theorem 4.1. The result follows immediately by Theorem B.8.

The following result is analogous to Lemma A.3. It follows directly from Theorem 4.1.

Lemma B.9 (Localized Sample Covariance Operator Norm Bound). Let $u_1, \dots, u_N$ be i.i.d. $d$-dimensional sub-Gaussian random vectors with $\mathbb{E}(u_1) = m$ and $\mathrm{var}(u_1) = C$. Assume that for some $q \in [0, 1)$ and $R_q > 0$ it holds that $\max_{i=1,\dots,d}\sum_{j=1}^{d}|C_{ij}|^{q} \le R_q$. Let $\widehat{C}$ be the sample covariance estimator and $L_{\rho_N}(\widehat{C})$ be the localized sample covariance estimator with
\[
\rho_N \asymp \|C\|_{\max}\Biggl(\sqrt{\frac{r_\infty(C)}{N}} \vee \frac{r_\infty(C)}{N}\Biggr).
\]
Then with high probability
L ρN ( C) op C op + R q r ∞ (C) N 1−q 2 ∨ r ∞ (C) N 1−q .
Proof. By the triangle inequality
L ρN ( C) op = L ρN ( C) − C + C op ≤ L ρN ( C) − C op + C op ,
and the result follows by Theorem 4.1.
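To make the localized estimator concrete, the following NumPy sketch applies the hard-thresholding operator $L_\rho$ to the sample covariance, with the threshold chosen in the spirit of $\rho_N$ above. The expression used for $r_\infty$ (the decreasingly ordered diagonal entries weighted by $\log(j+1)$ and normalized by the largest variance) is our reading of the max-log effective dimension implicit in the bounds of Theorem B.7, and the constant $c_0$ is an arbitrary tuning parameter, so treat this as an illustration rather than a prescription.

```python
import numpy as np

def max_log_effective_dim(C):
    """Max-log effective dimension: max_j C_(j) * log(j + 1) / C_(1),
    with C_(j) the decreasingly ordered diagonal entries (j starting at 1)."""
    diag = np.sort(np.diag(C))[::-1]
    j = np.arange(1, diag.size + 1)
    return np.max(diag * np.log(j + 1)) / diag[0]

def localized_covariance(X, rho=None, c0=1.0):
    """Sample covariance of the rows of X, hard-thresholded entrywise at rho.

    If rho is None it is set to c0 * C_(1) * max(sqrt(r/N), r/N), mimicking
    rho_N above but with population quantities replaced by empirical ones.
    """
    N = X.shape[0]
    C_hat = np.cov(X, rowvar=False)          # (N - 1)-normalised sample covariance
    if rho is None:
        r = max_log_effective_dim(C_hat)
        rho = c0 * np.max(np.diag(C_hat)) * max(np.sqrt(r / N), r / N)
    return np.where(np.abs(C_hat) >= rho, C_hat, 0.0), rho

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, N = 200, 50
    idx = np.arange(d)
    C = 0.7 ** np.abs(idx[:, None] - idx[None, :])   # soft-sparse AR(1)-type covariance
    X = rng.standard_normal((N, d)) @ np.linalg.cholesky(C).T
    C_loc, rho = localized_covariance(X, c0=0.5)
    C_hat = np.cov(X, rowvar=False)
    print(f"threshold rho = {rho:.3f}")
    print(f"operator-norm error: raw {np.linalg.norm(C_hat - C, 2):.3f}, "
          f"localized {np.linalg.norm(C_loc - C, 2):.3f}")
```

In synthetic examples of this kind the thresholded estimate typically has a noticeably smaller operator-norm error than the raw sample covariance when $N \ll d$, which is the phenomenon quantified by Theorem B.8.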
B.1.3 Cross-Covariance Estimation under Soft Sparsity
This subsection contains the proof of Theorem 4.3. The presentation is parallel to that in Subsection B.1.2. We will rely on the following max-norm cross-covariance estimation bound, analogous to Theorem B.7.
Theorem B.10 (Cross-Covariance Estimation - Max-Norm Bound). Let $X_1, \dots, X_N$ be $d$-dimensional i.i.d. sub-Gaussian random vectors with $\mathbb{E}X_1 = \mu_X$ and $\mathrm{var}(X_1) = \Sigma^{X}$. Let $Y_1, \dots, Y_N$ be $k$-dimensional i.i.d. sub-Gaussian random vectors with $\mathbb{E}Y_1 = \mu_Y$ and $\mathrm{var}(Y_1) = \Sigma^{Y}$. Define $\Sigma^{XY} = \mathbb{E}(X - \mu_X)(Y - \mu_Y)^{\top}$, and consider the cross-covariance estimator
\[
\widehat{\Sigma}^{XY} = \frac{1}{N-1}\sum_{n=1}^{N}(X_n - \bar{X})(Y_n - \bar{Y})^{\top}.
\]
Then with high probability it holds that
Σ XY − Σ XY max (Σ X (1) ∨ Σ Y (1) ) r ′ ∞ (Σ XY ) N ∨ r ′ ∞ (Σ XY ) N .
Proof. Assume first that µ X = µ Y = 0, and let Z 1 , . . . , Z N be d-dimensional i.i.d. sub-Gaussian random vectors with zero mean and var(Z 1 ) = Σ X , and similarly let V 1 , . . . , V N be k-dimensional i.i.d. sub-Gaussian random vectors with zero mean and var(V 1 ) = Σ Y . Further, let W n ≡ [Z ⊤ n , V ⊤ n ] ⊤ for n = 1, . . . , N . We denote the distribution of W 1 by P and note that · ψ2 and · L2 are defined implicitly with respect to P throughout this proof.
Define $\widehat{\Sigma}^{0} = N^{-1}\sum_{n=1}^{N} Z_n V_n^{\top}$, and define the dilation operator $H : \mathbb{R}^{d\times k} \to \mathbb{R}^{(d+k)\times(d+k)}$ by
\[
H(A) = \begin{pmatrix} O & A \\ A^{\top} & O \end{pmatrix},
\]
see for example [94, Section 2.1.16], and note that $\|A\|_{\max} = \|H(A)\|_{\max}$. Let $B_m$ be the space of standard basis vectors in $m$ dimensions, i.e. any $b \in B_m$ is an $m$-dimensional vector with 1 in a single coordinate and zero otherwise. Then, for $e_i, e_j \in B_{d+k}$, we have
\[
\mathbb{E}\,\|\widehat{\Sigma}^{0} - \Sigma^{XY}\|_{\max}
= \mathbb{E}\,\|H(\widehat{\Sigma}^{0}) - H(\Sigma^{XY})\|_{\max}
= \mathbb{E}\max_{1\le i,j\le d+k}\bigl|\langle (H(\widehat{\Sigma}^{0}) - H(\Sigma^{XY}))e_i, e_j\rangle\bigr|
\le 2\,\mathbb{E}\sup_{u\in U}\bigl|\langle (H(\widehat{\Sigma}^{0}) - H(\Sigma^{XY}))u, u\rangle\bigr|,
\]
where U ≡ u ∈ R d+k : u = ± 1 2 (e i ± e j ) and e i , e j ∈ B d+k .
Writing u = [u ⊤ 1 , u ⊤ 2 ] ⊤ where u 1 ∈ R d and u 2 ∈ R k , we have H( Σ 0 )u, u = 2 N N n=1 u 1 , Z n u 2 , V n = 2 N N n=1 f u (W n ), where f u (W n ) ≡ E 1 W n , u 1 E 2 W n , u 2 and where E 1 ≡ [I d , O d×k ] ∈ R d×(d+k) and E 2 ≡ [O k×d , I k ] ∈ R k×(d+k)
are the relevant selection matrices so that E 1 W n = Z n and E 2 W n = V n . We define the class of functions
F U ≡ f u (·) = E 1 ·, u 1 E 2 ·, u 2 : u = [u ⊤ 1 , u ⊤ 2 ] ⊤ ∈ U . We then have by Theorem B.4 that 2E sup u∈U (H( Σ 0 ) − H(Σ XY ))u, u = 2E sup fu∈FU 1 N N n=1 f u (W n ) − Ef u (W n ) γ 2 (F U , ψ 2 ) √ N .
Define,
U 1 ≡ u 1 ∈ R d : u 1 = ± 1 2
(e i ± e j ) and e i , e j ∈ B d , F 1 ≡ {f (·) = E 1 ·, u 1 : u 1 ∈ U 1 },
U 2 ≡ u 2 ∈ R k : u 2 = ± 1 2 (e i ± e j ) and e i , e j ∈ B k , F 2 ≡ {f (·) = E 2 ·, u 2 : u 2 ∈ U 2 }, then F U ⊂ F 1 · F 2 , where F 1 · F 2 ≡ f (·) = f 1 (·)f 2 (·) : f 1 ∈ F 1 , f 2 ∈ F 2 .
Applying Lemma B.11,
γ 2 (F U , ψ 2 ) ≤ γ 2 (F 1 · F 2 , ψ 2 ) ≤ ( sup f1∈F1 f 1 ψ2 ∨ sup f2∈F2 f 2 ψ2 )γ 2 (F 1 × F 2 , d + ψ2 ) ≤ Σ X (1) ∨ Σ Y (1) γ 2 (F 1 × F 2 , d + ψ2 ), where, for (f 1 , f 2 ), (f ′ 1 , f ′ 2 ) ∈ F 1 × F 2 , we define d + ψ2 (f 1 , f 2 ), (f ′ 1 , f ′ 2 ) ≡ f 1 − f ′ 1 ψ2 + f 2 − f ′ 2 ψ2
, and where we have used the equivalence of ψ 2 and L 2 norms for linear functionals to write
sup f1∈F1 f 1 ψ2 sup f1∈F1 f 1 L2 = max u1∈U1 u 1 , Σ X u 1 ≤ Σ X (1) ,
and similarly, sup f2∈F2 f 2 ψ2 ≤ Σ Y (1) . To control the term γ 2 (F 1 × F 2 , d + ψ2 ), we note that by Lemma B.12,
γ 2 (F 1 × F 2 , d + ψ2 ) ≤ γ 2 (F 1 , ψ 2 ) + γ 2 (F 2 , ψ 2 ) γ 2 (F 1 , L 2 ) + γ 2 (F 2 , L 2 ) = γ 2 (U 1 , d X ) + γ 2 (U 2 , d Y ), where d X (u, v) = E( g X , u − g X , v ) 2 , g X ∼ N (0, Σ X ), d Y (u, v) = E( g Y , u − g Y , v ) 2 , g Y ∼ N (0, Σ Y ).
By Theorem B.3 and Lemma B.6,
γ 2 (U 1 , d X ) = E sup u1∈U1 g X , u 1 = E max i,j≤d g X , ± 1 2 (e i ± e j ) ≤ E max i≤d g X , e i max i≤d Σ X (i) log(i + 1).
Similarly,
γ 2 (U 2 , d Y ) max j≤k Σ Y (j) log(j + 1).
In summary, we have
E Σ 0 − Σ XY max Σ X (1) ∨ Σ Y (1) N max i≤d Σ X (i) log(i + 1) + max j≤k Σ Y (j) log(j + 1) Σ X (1) ∨ Σ Y (1) max i≤(d∨k) (Σ X (i) ∨ Σ Y (i) ) log(i + 1) N ,
where $\Sigma^{X}_{(i)} \equiv 0$ for $i > d$ and similarly $\Sigma^{Y}_{(i)} \equiv 0$ for $i > k$. The claim for the centered case then follows immediately by Markov's inequality. In the un-centered case, take $X_n = Z_n + \mu_X$ and $Y_n = V_n + \mu_Y$ for $n = 1, \dots, N$; then $\widehat{\Sigma}^{XY} = \widehat{\Sigma}^{0} - \bar{X}\bar{Y}^{\top}$, and so
Σ XY − Σ XY max ≤ Σ 0 − Σ XY max + XY ⊤ max .
The first term is controlled by appealing to the result in the centered case. For the second term, we note that by Lemma B.6
XY ⊤ max ≤ X max Y max 1 N max i≤d Σ X (i) log(i + 1) max j≤k Σ Y (j) log(j + 1) ≤ max i≤(d∨k) (Σ X (i) ∨ Σ Y (i) ) log(i + 1) N .
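The dilation operator $H$ used at the start of this proof is easy to sanity-check numerically: $H(A)$ is symmetric, its entrywise max-norm equals that of $A$, and its spectrum is $\{\pm\sigma_i(A)\}$, so the operator norms agree as well. A minimal check (the property actually used above is the max-norm identity):

```python
import numpy as np

def dilation(A):
    """H(A) = [[O, A], [A^T, O]]: a symmetric (d + k) x (d + k) matrix."""
    d, k = A.shape
    return np.block([[np.zeros((d, d)), A], [A.T, np.zeros((k, k))]])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 3))
    H = dilation(A)
    # Equal entrywise max-norms, as used in the proof above.
    assert np.isclose(np.abs(A).max(), np.abs(H).max())
    # The spectrum of H(A) is {+/- sigma_i(A)}, so the operator norms agree too.
    assert np.isclose(np.linalg.norm(A, 2), np.linalg.norm(H, 2))
    print("dilation operator checks passed")
```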
Lemma B.11 (Generic Complexity of Product Classes). Let $(\mathcal{X}, \mathcal{A}, \mathbb{P})$ be a probability space, and consider the random sample $X_1, \dots, X_N \overset{\text{i.i.d.}}{\sim} \mathbb{P}$. Let $F_1$ and $F_2$ be two spaces of measurable functions on $(\mathcal{X}, \mathcal{A})$, and consider the corresponding metric spaces $(F_1, \psi_2)$ and $(F_2, \psi_2)$. Define the class $(F_1 \cdot F_2, \psi_2)$, where $F_1 \cdot F_2 \equiv \{f_1 f_2 : f_1 \in F_1, f_2 \in F_2\}$. Then,
\[
\gamma_2(F_1 \cdot F_2, \psi_2) \le \Bigl(\sup_{f_1\in F_1}\|f_1\|_{\psi_2} \vee \sup_{f_2\in F_2}\|f_2\|_{\psi_2}\Bigr)\, \gamma_2(F_1 \times F_2, d^{+}_{\psi_2}),
\]
where for $(f_1, f_2), (f'_1, f'_2) \in F_1 \times F_2$, $d^{+}_{\psi_2}\bigl((f_1, f_2), (f'_1, f'_2)\bigr) \equiv \|f_1 - f'_1\|_{\psi_2} + \|f_2 - f'_2\|_{\psi_2}$.

Proof. First, we define the quantity
\[
\gamma^{*}_2(F, d) \equiv \inf \sup_{g\in F} \sum_{n\ge 0} 2^{n/2}\, d(g, T_n),
\]
where the infimum is taken over all sets T n with cardinality at most 2 2 n for n ≥ 1. Then, it follows by [89, Theorem 2.3.1] that γ 2 (T, d) γ * 2 (T, d). Define the mapping h :
F 1 × F 2 → F 1 · F 2 , h(f 1 , f 2 ) = f 1 f 2 ,
and note that h is surjective. The surjectivity of h implies that for any g ∈ F 1 · F 2 , the inverse
image h −1 ({g}) = {(f 1 , f 2 ) : f 1 f 2 = g, f 1 ∈ F 1 , f 2 ∈ F 2 }
is non-empty, though in general it may contain more than one element since h is not injective. To this end, we define the pseudo-inverse function h − : F 1 · F 2 → F 1 × F 2 so that for any g, h − (g) is an arbitrarily chosen element of the inverse image h −1 ({g}). Further, we write d ψ2 (f, g) ≡ f − g ψ2(P) throughout. With these facts in hand, we write
γ 2 (F 1 · F 2 , ψ 2 ) γ * 2 (F 1 · F 2 , ψ 2 ) = inf Tn⊂F1·F2 sup g∈F1·F2 n≥0 2 n/2 d ψ2 (g, T n ) ≤ inf Tn⊂F1×F2 sup g∈F1·F2 n≥0 2 n/2 d ψ2 g, h( T n ) = inf Tn⊂F1×F2 sup g∈F1·F2 n≥0 2 n/2 inf t∈h( Tn) d ψ2 (g, t) = inf Tn⊂F1×F2 sup g∈F1·F2 n≥0 2 n/2 inf t∈h( Tn) d ψ2 h h − (g) , h h − (t) = inf Tn⊂F1×F2 sup g∈F1·F2 n≥0 2 n/2 inf t∈h( Tn) d ψ2 h (f g 1 , f g 2 ) , h (f t 1 , f t 2 ) = inf Tn⊂F1×F2 sup g∈F1·F2 n≥0 2 n/2 inf t∈h( Tn) d ψ2 f g 1 f g 2 , f t 1 f t 2 , (B.1) where f g 1 , f t 1 ∈ F 1 and f g 2 , f t 2 ∈ F 2 and f g 1 f g 2 = g ∈ F 1 · F 2 and f t 1 f t 2 = t ∈ h( T n ) ⊂ F 1 · F 2 . Now, we note that d ψ2 (f g 1 f g 2 , f t 1 f t 2 ) = f g 1 f g 2 − f t 1 f t 2 ψ2 ≤ f g 1 ψ2 f g 2 − f t 2 ψ2 + f t 2 ψ2 f g 1 − f t 1 ψ2 ≤ ( sup f1∈F1 f 1 ψ2 ) f g 2 − f t 2 ψ2 + ( sup f2∈F2 f 2 ψ2 ) f g 1 − f t 1 ψ2 ≤ ( sup f1∈F1 f 1 ψ2 ∨ sup f2∈F2 f 2 ψ2 )d + ψ2 (f t 1 , f t 2 ), (f g 1 , f g 2 ) ,
and so inf t∈h( Tn)
d ψ2 (f g 1 f g 2 , f t 1 f t 2 ) ≤ ( sup f1∈F1 f 1 ψ2 ∨ sup f2∈F2 f 2 ψ2 ) inf (f t 1 ,f t 2 )∈ Tn d + ψ2 (f t 1 , f t 2 ), (f g 1 , f g 2 ) = ( sup f1∈F1 f 1 ψ2 ∨ sup f2∈F2 f 2 ψ2 )d + ψ2 (f g 1 , f g 2 ), T n .
Combining this bound with the previous line of work in (B.1), we have
γ 2 (F 1 · F 2 , ψ 2 ) ≤ ( sup f1∈F1 f 1 ψ2 ∨ sup f2∈F2 f 2 ψ2 ) inf Tn⊂F1×F2 sup g∈F1·F2 n≥0 2 n/2 d + ψ2 (f g 1 , f g 2 ), T n = ( sup f1∈F1 f 1 ψ2 ∨ sup f2∈F2 f 2 ψ2 ) inf Tn⊂F1×F2 sup y∈F1×F2 n≥0 2 n/2 d + ψ2 (y, T n ) = ( sup f1∈F1 f 1 ψ2 ∨ sup f2∈F2 f 2 ψ2 )γ * 2 (F 1 × F 2 , d + ψ2 ) ≤ ( sup f1∈F1 f 1 ψ2 ∨ sup f2∈F2 f 2 ψ2 )γ 2 (F 1 × F 2 , d + ψ2 ),
where the final equality holds immediately from the definition of γ 2 and γ * 2 .
Lemma B.12 (Generic Complexity of Cross-Product Classes). Let (T 1 , d 1 ), (T 2 , d 2 ) be two, possibly infinite metric spaces, and consider the product metric space
(T 1 × T 2 , d), where for (t 1 , t 2 ), (t ′ 1 , t ′ 2 ) ∈ T 1 × T 2 such that t 1 , t ′ 1 ∈ T 1 and t 2 , t ′ 2 ∈ T 2 , d((t 1 , t 2 ), (t ′ 1 , t ′ 2 )) = d 1 (t 1 , t ′ 1 ) + d 2 (t 2 , t ′ 2 )
. Then
γ 2 (T 1 × T 2 , d) ≤ γ 2 (T 1 , d 1 ) + γ 2 (T 2 , d 2 ).
Proof. For any set T, denote by A(T ) the collection of all admissible partitions of T. We define B(T 1 × T 2 ) to be the set of box partitions of T 1 × T 2 of the form ∆ n,1 × ∆ n,2 , where ∆ n,i ∈ A(T i ) is an admissible partition of T i with respect to the metric d i for i = 1, 2. Clearly,
γ 2 (T 1 × T 2 , d) = inf A(T1×T2) sup t∈T1×T2 n≥0 2 n/2 Diam ∆ n (t) ≤ inf B(T1×T2) sup t∈T1×T2 n≥0 2 n/2 Diam ∆ n (t) .
Note that any t ∈ T 1 × T 2 may be written as t = (t 1 , t 2 ) where t 1 ∈ T 1 and t 2 ∈ T 2 . We therefore have
inf B(T1×T2) sup t∈T1×T2 n≥0 2 n/2 Diam ∆ n (t) = inf A(T1) inf A(T2) sup t1∈T1 t2∈T2 n≥0 2 n/2 Diam ∆ n,1 (t 1 ) × ∆ n,2 (t 2 ) ≤ inf A(T1) sup t1∈T1 n≥0 2 n/2 Diam ∆ n,1 (t 1 ) + inf A(T2) sup t2∈T2 n≥0 2 n/2 Diam ∆ n,2 (t 2 ) = γ 2 (T 1 , d 1 ) + γ 2 (T 2 , d 2 ),
where we have used that
\[
\mathrm{Diam}\bigl(\Delta_{n,1}(t_1)\times\Delta_{n,2}(t_2)\bigr)
= \sup_{a,b\in\Delta_{n,1}(t_1)\times\Delta_{n,2}(t_2)} d(a,b)
= \sup_{\substack{a_1,b_1\in\Delta_{n,1}(t_1)\\ a_2,b_2\in\Delta_{n,2}(t_2)}} d\bigl((a_1,a_2),(b_1,b_2)\bigr)
\le \sup_{a_1,b_1\in\Delta_{n,1}(t_1)} d_1(a_1,b_1) + \sup_{a_2,b_2\in\Delta_{n,2}(t_2)} d_2(a_2,b_2)
= \mathrm{Diam}\bigl(\Delta_{n,1}(t_1)\bigr) + \mathrm{Diam}\bigl(\Delta_{n,2}(t_2)\bigr).
\]
Theorem B.13 (Cross-Covariance Estimation - Operator-Norm Bound). Let $X_1, \dots, X_N$ be $d$-dimensional i.i.d. sub-Gaussian random vectors with $\mathbb{E}(X_1) = \mu_X$ and $\mathrm{var}(X_1) = \Sigma^{X}$. Let $Y_1, \dots, Y_N$ be $k$-dimensional i.i.d. sub-Gaussian random vectors with $\mathbb{E}(Y_1) = \mu_Y$ and $\mathrm{var}(Y_1) = \Sigma^{Y}$. Define $\Sigma^{XY} = \mathbb{E}(X-\mu_X)(Y-\mu_Y)^{\top}$ and consider the cross-covariance estimator
\[
\widehat{\Sigma}^{XY} = \frac{1}{N-1}\sum_{n=1}^{N}(X_n - \bar{X})(Y_n - \bar{Y})^{\top}.
\]
Assume that
Σ XY ∈ U d,k (q, R q ),
where q ∈ [0, 1) and R q > 0. Set
ρ N ≍ (Σ X (1) ∨ Σ Y (1) ) r ′ ∞ (Σ XY ) N ∨ r ′ ∞ (Σ XY ) N .
Then, with high probability
L ρN ( Σ XY ) − Σ XY op R q ρ 1−q N .
Proof of Theorem B.13. By Theorem B.10 it holds with high probability that $\|\widehat{\Sigma}^{XY} - \Sigma^{XY}\|_{\max} \lesssim \rho_N$. Conditioning on this event, the analysis can be carried out in identical fashion to the one taken in the proof of Theorem 4.1 with $\Sigma^{XY}$ and $\widehat{\Sigma}^{XY}$ in place of $\Sigma$ and $\widehat{\Sigma}$, and so we omit the details for brevity.

Proof of Theorem 4.3. The proof follows immediately from Theorem B.13: since $u_1, \dots, u_N$ are i.i.d. Gaussian they are sub-Gaussian. Moreover, since $G$ is Lipschitz, by [97, Theorem 5.2.2], $\|G(u_1) - \mathbb{E}[G(u_1)]\|_{\psi_2} \le \|G\|_{\mathrm{Lip}}\|C\|_{op}^{1/2} < \infty$, and so $G(u_1), \dots, G(u_N)$ are i.i.d. sub-Gaussian random vectors.

Lemma B.14 (Stein's Lemma [86]). Let $u \sim N(m, C)$ be a $d$-dimensional Gaussian vector. Let $h : \mathbb{R}^d \to \mathbb{R}$ be such that $\partial_j h \equiv \partial h(u)/\partial u_j$ exists almost everywhere and $\mathbb{E}|\partial_j h(u)| < \infty$, $j = 1, \dots, d$. Then $\mathrm{Cov}(u_j, h(u)) = \sum_{l=1}^{d} C_{jl}\,\mathbb{E}[\partial_l h(u)]$.

Lemma B.15 (Soft-Sparsity of Cross-Covariance - Nonlinear Forward Map). Let $u$ be a $d$-dimensional Gaussian random vector with $\mathbb{E}(u) = m$ and $\mathrm{var}(u) = C \in U_d(q, c)$. Consider the function $G : \mathbb{R}^d \to \mathbb{R}^k$ with coordinate functions $G_1, \dots, G_k$. Assume that for each $i = 1, \dots, d$ and $j = 1, \dots, k$, $\partial_i G_j \equiv \partial G_j(u)/\partial u_i$ exists almost everywhere and $\mathbb{E}|\partial_i G_j| < \infty$. Let $DG \in \mathbb{R}^{k\times d}$ denote the Jacobian of $G$, and assume that $\mathbb{E}(DG)^{\top} \in U_{d,k}(q, a)$ for some $q \in [0, 1)$ and $a > 0$. Then, $C^{up} \in U_{d,k}\bigl(q,\, ac\,\|\mathbb{E}[DG]\|_{\max}^{1-q}\|C\|_{\max}^{1-q}\bigr)$.
Proof. By Stein's Lemma (Lemma B.14), the i-th row sum of C up is given by
k j=1 C up ij = k j=1 d l=1 C il E ∂ l G j (u) = d l=1 C il k j=1 E ∂ l G j (u) = E[DG] max d l=1 C il k j=1 E[∂ l G j (u)] E(DG) max ≤ E[DG] 1−q max d l=1 C il k j=1 E[∂ l G j (u)] q ≤ a E[DG] 1−q max d l=1 C il ≤ ac E[DG] 1−q max C 1−q max ,
where the first inequality holds since q ∈ [0, 1) and E[∂ l G j (u)] ≤ E(DG) max .
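Combining Lemma B.15 with Theorem B.13 suggests estimating $C^{up} = \mathrm{Cov}(u, G(u))$ by hard-thresholding the sample cross-covariance. The sketch below is a minimal illustration of that estimator; the particular forward map, the banded Jacobian, and the threshold level are our own choices and are not taken from the paper.

```python
import numpy as np

def localized_cross_covariance(U, P, rho):
    """Hard-thresholded sample cross-covariance between rows of U (inputs, N x d)
    and P (forward-model outputs, N x k)."""
    N = U.shape[0]
    Uc = U - U.mean(axis=0)
    Pc = P - P.mean(axis=0)
    C_up = Uc.T @ Pc / (N - 1)                   # d x k sample cross-covariance
    return np.where(np.abs(C_up) >= rho, C_up, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    d, k, N = 100, 40, 60
    idx = np.arange(d)
    C = 0.6 ** np.abs(idx[:, None] - idx[None, :])    # soft-sparse input covariance
    U = rng.standard_normal((N, d)) @ np.linalg.cholesky(C).T
    A = np.eye(k, d)                                  # sparse Jacobian => sparse C_up
    G = lambda u: np.tanh(u @ A.T)                    # Lipschitz, nonlinear forward map
    P = G(U)
    rho = 0.5 * np.sqrt(np.log(max(d, k)) / N)        # illustrative threshold level
    C_up_loc = localized_cross_covariance(U, P, rho)
    print("fraction of entries kept after localization:",
          np.count_nonzero(C_up_loc) / C_up_loc.size)
```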
Lemma B.16 (Product of Two Soft-Sparse Matrices). Fix q ∈ [0, 1) and let S ∈ U d (q, s) and assume S ⊤ = S. Let B ∈ U k,d (q, b). Then BS ∈ U k,d (q, bs B 1−q max S 1−q max ).
B.2 Proof of Main Results in Section 4
Theorem B.18 (Approximation of Mean-Field EKI with EKI). Let y be generated according to (1.1) with Lipschitz forward model G : R d → R k . Let υ n and υ * n be the EKI and mean-field EKI updates defined in (2.18) and (2.19) respectively. Then, it holds with high probability that
υ n − υ * n 2 y − G(u n ) − η n 2 ( Γ −1 op ∨ Γ −1 2 op )(1 ∨ C up op )( C op ∨ C pp op ) × r 2 (C) N ∨ r 2 (C) N ∨ r 2 (C pp ) N ∨ r 2 (C pp ) N .
Proof of Theorem B.18. First, we may write
υ n − υ * n 2 = y − G(u n ) − η n P( C up , C pp ) − P(C up , C pp ) 2 ≤ y − G(u n ) − η n 2 P( C up , C pp ) − P(C up , C pp ) 2 .
It follows by Lemma A.8 that
P( C up , C pp ) − P(C up , C pp )) 2 ≤ Γ −1 op C up − C up op + Γ −1 2 op C up op C pp − C pp op .
In order to control the two deviation terms, we write W n ≡ [u ⊤ n , G ⊤ (u n )] ⊤ and EW n = [m ⊤ , EG ⊤ (u n )] ⊤ for n = 1, . . . , N, and W N = [ m ⊤ , G ⊤ ] ⊤ . Further, let
\[
\widehat{C}_W = \frac{1}{N-1}\sum_{n=1}^{N}(W_n - \bar{W}_N)(W_n - \bar{W}_N)^{\top},
\qquad
C_W = \begin{pmatrix} C & C^{up} \\ C^{pu} & C^{pp} \end{pmatrix}.
\]
Since u ∼ N (m, C) and G is Lipschitz, by [97,Theorem 5.2.2] it holds that G(u n ) − EG(u n ) ψ2 ≤ G Lip C 1/2 op , and so by Proposition 3.1,
C up − C up op ∨ C pp − C pp op ≤ C W − C W op C W op r 2 (C W ) N ∨ r 2 (C W ) N .
Therefore, with high probability
P( C up , C pp ) − P(C up , C pp )) 2 Γ −1 op (1 ∨ C up op ) C W op r 2 (C W ) N ∨ r 2 (C W ) N .
Finally, since C W 0, C W op ≤ C op + C pp op , and further Tr(C W ) = Tr(C) + Tr(C pp ), we have that
C W op r 2 (C W ) N ∨ r 2 (C W ) N ( C op ∨ C pp op ) Tr(C) + Tr(C pp ) N ( C op ∨ C pp op ) ∨ Tr(C) + Tr(C pp ) N ( C op ∨ C pp op ) ( C op ∨ C pp op ) r 2 (C) N ∨ r 2 (C) N ∨ r 2 (C pp ) N ∨ r 2 (C pp ) N ,
where the last equality follows by similar reasoning to that used in the proof of Lemma A.4.
Proof of Theorem 4.5. The result follows immediately from B.18 and the additional assumptions.
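For reference, a minimal NumPy sketch of one stochastic EKI step with empirical covariances is given below, together with an optional entrywise hard-thresholding of $\widehat{C}^{up}$ and $\widehat{C}^{pp}$ in the spirit of the localized variant analyzed next. We assume the gain takes the familiar form $P(C^{up}, C^{pp}) = C^{up}(C^{pp} + \Gamma)^{-1}$, which is consistent with the continuity bound of Lemma A.8 used above, but the precise update conventions (2.18)-(2.22) are fixed in the main text, so treat this as a sketch rather than a verbatim transcription.

```python
import numpy as np

def eki_step(U, G, y, Gamma, rho=None, rng=None):
    """One stochastic EKI step for an ensemble U of shape (N, d).

    The gain is taken as C_up @ inv(C_pp + Gamma); if rho is given, C_up and
    C_pp are hard-thresholded entrywise at level rho (a localized variant).
    """
    rng = np.random.default_rng() if rng is None else rng
    N = U.shape[0]
    P = np.array([G(u) for u in U])                  # (N, k) forward-model outputs
    Uc, Pc = U - U.mean(axis=0), P - P.mean(axis=0)
    C_up = Uc.T @ Pc / (N - 1)                       # d x k empirical cross-covariance
    C_pp = Pc.T @ Pc / (N - 1)                       # k x k empirical output covariance
    if rho is not None:
        C_up = np.where(np.abs(C_up) >= rho, C_up, 0.0)
        C_pp = np.where(np.abs(C_pp) >= rho, C_pp, 0.0)
    gain = C_up @ np.linalg.inv(C_pp + Gamma)        # d x k
    eta = rng.multivariate_normal(np.zeros(len(y)), Gamma, size=N)
    return U + (y - P - eta) @ gain.T                # innovations y - G(u_n) - eta_n

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    d, k, N = 50, 20, 100
    A = rng.standard_normal((k, d)) / np.sqrt(d)
    G = lambda u: A @ u                               # linear map, just for the demo
    u_true = rng.standard_normal(d)
    Gamma = 0.1 * np.eye(k)
    y = G(u_true) + rng.multivariate_normal(np.zeros(k), Gamma)
    U = rng.standard_normal((N, d))                   # prior ensemble ~ N(0, I)
    for _ in range(10):
        U = eki_step(U, G, y, Gamma, rng=rng)
    print("data misfit of the ensemble mean:", np.linalg.norm(y - G(U.mean(axis=0))))
```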
Theorem B.19 (Approximation of Mean-Field EKI with LEKI). Let y be generated according to (1.1) with Lipschitz forward model G : R d → R k . Assume that C up ∈ U d,k (q 1 , R 1 ) and C pp ∈ U k (q 2 , R 2 ) for q 1 , q 2 ∈ [0, 1), and positive constants R 1 , R 2 . Let υ ρ n and υ * n be the LEKI and mean-field EKI updates outlined in (2.22) and (2.19) respectively. Set
ρ N,1 ≍ (C (1) ∨ C pp (1) ) r ′ ∞ (C up ) N ∨ r ′ ∞ (C up ) N , and ρ N,2 ≍ C pp (1) r ∞ (C pp ) N ∨ r ∞ (C pp ) N .
Then it holds with high probability that
υ ρ n − υ * n 2 ( Γ −1 op ∨ Γ −1 2 op )(1 ∨ C up op )(R 1 ρ 1−q1 N,1 + R 2 ρ 1−q2 N,2 ).
Proof of Theorem B.19. As in the proof of Theorem B.18,
\[
\|\upsilon^{\rho}_n - \upsilon^{*}_n\|_2 \le \|y - G(u_n) - \eta_n\|_2\, \|P(\widehat{C}^{up}_{\rho_N}, \widehat{C}^{pp}_{\rho_N}) - P(C^{up}, C^{pp})\|_2,
\]
and by Lemma A.8
P( C up ρN , C pp ρN ) − P(C up , C pp ) 2 ≤ ( Γ −1 op ∨ Γ −1 2 op )(1 ∨ C up op )( C up ρN − C up op + C pp ρN − C pp op ).
By Theorem 4.3, we have $\|\widehat{C}^{up}_{\rho_{N,1}} - C^{up}\|_{op} \lesssim R_1\rho_{N,1}^{1-q_1}$, and, by Theorem 4.1, $\|\widehat{C}^{pp}_{\rho_{N,2}} - C^{pp}\|_{op} \lesssim R_2\rho_{N,2}^{1-q_2}$. Therefore, with high probability
\[
\|P(\widehat{C}^{up}_{\rho_N}, \widehat{C}^{pp}_{\rho_N}) - P(C^{up}, C^{pp})\|_2 \lesssim (\|\Gamma^{-1}\|_{op} \vee \|\Gamma^{-1}\|_{op}^{2})(1 \vee \|C^{up}\|_{op})(R_1\rho_{N,1}^{1-q_1} + R_2\rho_{N,2}^{1-q_2}).
\]
Proof of Theorem 4.7. The result follows immediately from Theorem B.19 and the additional assumptions.

C Proofs: Section 5
This appendix contains the proofs of the auxiliary results discussed in Section 5.

Lemma C.1 (Kalman Gain Deviation with Localization). Let $u_1, \dots, u_N$ be $d$-dimensional i.i.d. sub-Gaussian random vectors with $\mathbb{E}u_1 = m$ and $\mathbb{E}(u_1 - m)(u_1 - m)^{\top} = C$. Assume further that $C \in U_d(q, R_q)$ for some $q \in [0, 1)$ and $R_q > 0$. Set
ρ N ≍ C (1) r ∞ (C) N ∨ r ∞ (C) N .
Then, with high probability, it holds that
K( C ρN ) − K( C) op A op Γ −1 op R q (1 + A 2 op Γ −1 op C op )ρ 1−q N .
Proof. By the continuity of the Kalman gain operator, Lemma A.5, and Theorem 4.1, it follows immediately that
K( C ρN ) − K( C) op ≤ A op Γ −1 op C ρN − C op (1 + A 2 op Γ −1 op C op ) A op Γ −1 op R q ρ 1−q N (1 + A 2 op Γ −1 op C op ).
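The effect of localization on the Kalman gain can be probed directly. The sketch below assumes the standard gain $K(C) = C A^{\top}(A C A^{\top} + \Gamma)^{-1}$; the Lipschitz-type dependence of $K$ on $C$ is what Lemma A.5, invoked above, quantifies. The threshold level is illustrative.

```python
import numpy as np

def kalman_gain(C, A, Gamma):
    """K(C) = C A^T (A C A^T + Gamma)^{-1}."""
    return C @ A.T @ np.linalg.inv(A @ C @ A.T + Gamma)

def hard_threshold(M, rho):
    return np.where(np.abs(M) >= rho, M, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    d, k, N = 120, 30, 40
    idx = np.arange(d)
    C = 0.6 ** np.abs(idx[:, None] - idx[None, :])    # soft-sparse prior covariance
    A = rng.standard_normal((k, d)) / np.sqrt(d)
    Gamma = 0.2 * np.eye(k)
    X = rng.standard_normal((N, d)) @ np.linalg.cholesky(C).T
    C_hat = np.cov(X, rowvar=False)
    rho = 0.5 * np.sqrt(np.log(d) / N)                # illustrative threshold
    K_true = kalman_gain(C, A, Gamma)
    K_raw = kalman_gain(C_hat, A, Gamma)
    K_loc = kalman_gain(hard_threshold(C_hat, rho), A, Gamma)
    print("gain error (operator norm): raw %.3f, localized %.3f"
          % (np.linalg.norm(K_raw - K_true, 2), np.linalg.norm(K_loc - K_true, 2)))
```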
Theorem C.2 (Square Root Ensemble Kalman Covariance Deviation with Localization). Consider the localized SR ensemble Kalman update given by (5.2), leading to an estimate Σ of the posterior covariance Σ defined in (1.2). Assume that C ∈ U d (q, R q ) for q ∈ [0, 1) and R q > 0. Set
ρ N ≍ C (1) r ∞ (C) N ∨ r ∞ (C) N .
Then with high probability
Σ − Σ op R q ρ 1−q N 1 + A 2 op Γ −1 op 2 C op + R q ρ 1−q N + A 4 op Γ −1 2 op C op (1 + R q ρ 1−q N ) .
If in addition to the above, one assumes for simplicity that N ≥ r ∞ (C) and C op = 1, then with high probability
Σ − Σ op c r ∞ (C) N 1−q 2 , where c = c( A op , Γ −1 op , R 2 q ).
Proof. For the localized SR update we have Σ = C( C ρN ). From Lemma A.7, the continuity of C implies that
C( C ρN ) − C(C) op ≤ C ρN − C op 1 + A 2 op Γ −1 op C ρN op + C op + A 4 op Γ −1 2 op C ρN op C op .
By Theorem 4.1,
C ρN − C op R q ρ 1−q N , and further,
C op C op + R q ρ 1−q N . Therefore, Σ − Σ op R q ρ 1−q N 1 + A 2 op Γ −1 op 2 C op + R q ρ 1−q N + A 4 op Γ −1 2 op C op (1 + R q ρ 1−q N ) .
Assume for simplicity that N ≥ r ∞ (C) and C op = 1. Then
Σ − Σ op R q ρ 1−q N 1 + A 2 op Γ −1 op + A 4 op Γ −1 2 op + R 2 q ρ 2(1−q) N A 2 op Γ −1 op + A 4 op Γ −1 2 op = R q r ∞ (C) N 1−q 2 1 + A 2 op Γ −1 op + A 4 op Γ −1 2 op + R 2 q r ∞ (C) N 1−q A 2 op Γ −1 op + A 4 op Γ −1 2 op R 2 q r ∞ (C) N 1−q 2 1 + A 2 op Γ −1 op + A 4 op Γ −1 2 op = c 1 ( A op , Γ −1 op , R 2 q ) r ∞ (C) N 1−q 2 .
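The localized square-root covariance update $\widehat{\Sigma} = \mathcal{C}(\widehat{C}_{\rho_N})$ can likewise be mirrored numerically. We take the covariance-update operator in its usual form $\mathcal{C}(C) = C - C A^{\top}(A C A^{\top} + \Gamma)^{-1} A C$, which is consistent with the continuity bound of Lemma A.7 used above; whether this matches (5.2) verbatim depends on conventions fixed in the main text, and the threshold level below is again illustrative.

```python
import numpy as np

def covariance_update(C, A, Gamma):
    """Posterior covariance C(C) = C - C A^T (A C A^T + Gamma)^{-1} A C."""
    S = A @ C @ A.T + Gamma
    return C - C @ A.T @ np.linalg.solve(S, A @ C)

def hard_threshold(M, rho):
    return np.where(np.abs(M) >= rho, M, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    d, k, N = 150, 40, 50
    idx = np.arange(d)
    C = 0.5 ** np.abs(idx[:, None] - idx[None, :])
    A = rng.standard_normal((k, d)) / np.sqrt(d)
    Gamma = 0.3 * np.eye(k)
    X = rng.standard_normal((N, d)) @ np.linalg.cholesky(C).T
    C_hat = np.cov(X, rowvar=False)
    rho = 0.5 * np.sqrt(np.log(d) / N)
    Sigma_true = covariance_update(C, A, Gamma)
    Sigma_raw = covariance_update(C_hat, A, Gamma)
    Sigma_loc = covariance_update(hard_threshold(C_hat, rho), A, Gamma)
    print("posterior-covariance error (operator norm): raw %.3f, localized %.3f"
          % (np.linalg.norm(Sigma_raw - Sigma_true, 2),
             np.linalg.norm(Sigma_loc - Sigma_true, 2)))
```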
Remark 3.2 (Effective Dimension and Smoothness). Proposition 3.1 motivates defining $r_2(C) \equiv \mathrm{Tr}(C)/\|C\|_{op}$ to be the effective dimension of a $d$-dimensional sub-Gaussian random vector $u$ with $\mathrm{var}(u) = C$.

Theorem 3.3 (Posterior Mean Approximation with Finite Ensemble (Streamlined)). Consider the PO and SR ensemble Kalman updates given by (2.8) and (2.10), respectively, leading to an estimate $\widehat{\mu}$ of the posterior mean $\mu$ defined in (1.2). Set $\varphi = 1$ for the PO update and $\varphi = 0$ for the SR update. Assume for simplicity that $N \ge r_2(C)$ and $\|C\|_{op} = 1$. Then, with high probability

Remark 3.6 (Dependence of Constants on Model Parameters). Theorem A.10 in Appendix A gives a more refined statement of Theorem 3.5 with explicit expressions for the dependence of $c_1$, $c_2$, and $c_3$ on $A$ and $\Gamma$. As discussed in Remark 3.4, these bounds may be used to establish sufficient ensemble size requirements in small noise limits and other singular limits of practical importance.

Theorem 4.1 (Covariance Estimation with Localization - Soft Sparsity Assumption). Let $u_1, \dots, u_N$ be $d$-dimensional i.i.d. sub-Gaussian random vectors with $\mathbb{E}(u_1) = m$ and $\mathrm{var}(u_1) = C$.

Remark 4.2 (Max-Log Effective Dimension). The proof of Theorem 4.1 can be found in Section B.1 and, up to the choice of $\rho_N$, follows an identical approach to the standard proof for localized covariance estimators in the literature, for example [98, Theorem 6.27].

Theorem 4.3 (Cross-Covariance Estimation with Localization - Soft Sparsity Assumption). Let $u_1, \dots, u_N$ be an i.i.d. sequence of $d$-dimensional Gaussian random vectors with $\mathbb{E}(u_1) = m$ and $\mathrm{var}(u_1) = C$. Let $G : \mathbb{R}^d \to \mathbb{R}^k$ be a Lipschitz continuous forward model. Assume that

Remark 4.4 (Sparsity of the Cross-Covariance). To the best of our knowledge, estimation of the cross-covariance matrix under structural assumptions has not been a point of focus in the literature. Indeed, one may implicitly estimate the cross-covariance by applying Theorem 4.1 to the full covariance matrix $\begin{pmatrix} C & C^{up} \\ C^{pu} & C^{pp} \end{pmatrix}$ of the sub-Gaussian vector $[u^{\top}, G(u)^{\top}]^{\top}$, and extracting a bound on $\|\widehat{C}^{up}_{\rho_N} - C^{up}\|_{op}$.

Remark 4.6 (Dependence of Constants on Model Parameters). Theorem B.18 in Appendix B.2 gives a more refined statement of Theorem 4.5 with explicit expressions for the dependence of $c$ on $\Gamma$ and $\|y - G(u_n) - \eta_n\|$.

Remark 4.9 (On the Soft-Sparsity Assumptions).
Lemma A.3 (Sample Covariance Operator Norm Bound). Let $u_1, \dots, u_N$ and $C$ be as in Proposition 3.1.

and the result follows by Proposition 3.1.

Lemma A.4 (Cross-Covariance Estimation - Unstructured Case). Let $u_1, \dots, u_N$ be $d$-dimensional i.i.d. sub-Gaussian random vectors with $\mathbb{E}u_1 = m$ and $\mathrm{var}(u_1) = C$. Let $\eta_1, \dots, \eta_N$ be $k$-dimensional i.i.d. sub-Gaussian random vectors with $\mathbb{E}\eta_1 = 0$ and $\mathrm{var}(\eta_1) = \Gamma$, and assume that the two sequences are independent. Consider the estimator

We now control each of the terms in equations (A.5), (A.6) and (A.7) separately. For (A.5), we note that $\widehat{m} - m \sim N(0, C/N)$, and so by the Gaussian concentration Theorem A.1 it holds with high probability that $\|\widehat{m} - m\|_2 \lesssim \sqrt{\mathrm{Tr}(C)/N}$.

Definition B.2 ([89, Definition 2.2.19]). Let $(T, d)$ be a possibly infinite metric space, and define $\gamma_2(T, d) = \inf \sup_{t\in T} \sum_{n\ge 0}$

Theorem B.3 ([89, Theorem 2.4.1]). Let $X_t$, $t \in T$, be a centered Gaussian process which induces a metric $d_X : T \times T \to [0, \infty]$ defined by

Theorem B.4 ([88, 70]). Let $(\mathcal{X}, \mathcal{A}, \mathbb{P})$ be a probability space and let $X, X_1, \dots, X_N \overset{\text{i.i.d.}}{\sim} \mathbb{P}$. Let $F$ be a class of measurable functions on $(\mathcal{X}, \mathcal{A})$.
Lemma A.6 (Continuity and Boundedness of Mean-Update Operator [55, Corollary 4.3 & Lemma 4.7]). Let $\mathcal{M}$ be the mean-update operator defined in (2.3). Let $P, Q \in \mathbb{S}^d_+$, $\Gamma \in \mathbb{S}^k_{++}$, $A \in \mathbb{R}^{k\times d}$, $y \in \mathbb{R}^k$, and $m, m' \in \mathbb{R}^d$. The following hold:

Lemma A.7 (Continuity and Boundedness of Covariance-Update Operator [55, Lemma 4.4 & Lemma 4.6]). Let $\mathcal{C}$ be the covariance-update operator defined in
Acknowledgments. DSA is thankful to NSF and NGA for their support through the grant DMS-2027056, to the BBVA Foundation for the José Luis Rubio de Francia start-up grant, and to DOE for funding DOE DE-SC0022232. The authors are thankful to Subhodh Kotekal, Yandi Shen, and Nathan Waniorek for many helpful discussions.

Proof of Lemma B.16. The $(i,j)$-th element of $BS$ is given by $[BS]_{ij} = \sum_{l=1}^{d} B_{il}S_{lj}$, and so the sum of the $i$-th row can be bounded term by term, where the first inequality holds since $q \in [0, 1)$, and the second follows by the symmetry of $S$.

Lemma B.17 (Product of Three Soft-Sparse Matrices). Fix $q \in [0, 1)$ and let $S \in U_d(q, s)$ with $S^{\top} = S$; assume moreover that $B$ is both row and column sparse. Then, as in Lemma B.16, the sum of the $i$-th row of the product can be bounded, where the final equality follows by Lemma B.16.
Importance sampling: Intrinsic dimension and computational cost. S Agapiou, O Papaspiliopoulos, D Sanz-Alonso, A M Stuart, Statistical Science. 323S. Agapiou, O. Papaspiliopoulos, D. Sanz-Alonso, and A. M. Stuart. Importance sampling: Intrinsic dimension and computational cost. Statistical Science, 32(3):405-431, 2017.
An ensemble adjustment Kalman filter for data assimilation. J L Anderson, Monthly Weather Review. 12912J. L. Anderson. An ensemble adjustment Kalman filter for data assimilation. Monthly Weather Review, 129(12):2884-2903, 2001.
M Asch, M Bocquet, M Nodet, Data Assimilation: Methods, Algorithms, and Applications. SIAM11M. Asch, M. Bocquet, and M. Nodet. Data Assimilation: Methods, Algorithms, and Applica- tions, volume 11. SIAM, 2016.
Curse-of-dimensionality revisited: Collapse of the particle filter in very large scale systems. T Bengtsson, P J Bickel, B Li, Probability and statistics: Essays in honor of David A. Freedman. Institute of Mathematical StatisticsT. Bengtsson, P. J. Bickel, B. Li, et al. Curse-of-dimensionality revisited: Collapse of the particle filter in very large scale systems. In Probability and statistics: Essays in honor of David A. Freedman, pages 316-334. Institute of Mathematical Statistics, 2008.
A localization technique for ensemble Kalman filters. K Bergemann, S Reich, Quarterly Journal of the Royal Meteorological Society: A journal of the atmospheric sciences, applied meteorology and physical oceanography. 136648K. Bergemann and S. Reich. A localization technique for ensemble Kalman filters. Quarterly Journal of the Royal Meteorological Society: A journal of the atmospheric sciences, applied meteorology and physical oceanography, 136(648):701-707, 2010.
A mollified ensemble Kalman filter. K Bergemann, S Reich, Quarterly Journal of the Royal Meteorological Society. 136651K. Bergemann and S. Reich. A mollified ensemble Kalman filter. Quarterly Journal of the Royal Meteorological Society, 136(651):1636-1643, 2010.
Covariance regularization by thresholding. P J Bickel, E Levina, The Annals of Statistics. 366P. J. Bickel and E. Levina. Covariance regularization by thresholding. The Annals of Statistics, 36(6):2577-2604, 2008.
Regularized estimation of large covariance matrices. P J Bickel, E Levina, The Annals of Statistics. 361P. J. Bickel and E. Levina. Regularized estimation of large covariance matrices. The Annals of Statistics, 36(1):199-227, 2008.
Pushing the limits of contemporary statistics: Contributions in honor of Jayanta K. Ghosh: Sharp failure rates for the bootstrap particle filter in high dimensions. P J Bickel, B Li, T Bengtsson, Institute of Mathematical Statistics. P. J. Bickel, B. Li, and T. Bengtsson. Pushing the limits of contemporary statistics: Contri- butions in honor of Jayanta K. Ghosh: Sharp failure rates for the bootstrap particle filter in high dimensions. Institute of Mathematical Statistics, pages 318-329, 1994.
A N Bishop, P , arXiv:2006.08843On the mathematical theory of ensemble (linear-Gaussian) Kalman-Bucy filtering. arXiv preprintA. N. Bishop and P. Del Moral. On the mathematical theory of ensemble (linear-Gaussian) Kalman-Bucy filtering. arXiv preprint arXiv:2006.08843, 2020.
Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. C H Bishop, B J Etherton, S J Majumdar, Monthly Weather Review. 1293C. H. Bishop, B. J. Etherton, and S. J. Majumdar. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3):420- 436, 2001.
A strongly convergent numerical scheme from ensemble Kalman inversion. D Blömker, C Schillings, P Wacker, SIAM Journal on Numerical Analysis. 564D. Blömker, C. Schillings, and P. Wacker. A strongly convergent numerical scheme from ensemble Kalman inversion. SIAM Journal on Numerical Analysis, 56(4):2537-2562, 2018.
Well posedness and convergence analysis of the ensemble Kalman inversion. D Blömker, C Schillings, P Wacker, S Weissmann, Inverse Problems. 35885007D. Blömker, C. Schillings, P. Wacker, and S. Weissmann. Well posedness and convergence analysis of the ensemble Kalman inversion. Inverse Problems, 35(8):085007, 2019.
. V I Bogachev, Measures, American Mathematical SocV. I. Bogachev. Gaussian Measures. American Mathematical Soc., 1998.
Analysis scheme in the ensemble Kalman filter. G Burgers, P Jan Van Leeuwen, G Evensen, Monthly Weather Review. 1266G. Burgers, P. Jan van Leeuwen, and G. Evensen. Analysis scheme in the ensemble Kalman filter. Monthly Weather Review, 126(6):1719-1724, 1998.
Adaptive covariance matrix estimation through block thresholding. T T Cai, M Yuan, The Annals of Statistics. 404T. T. Cai and M. Yuan. Adaptive covariance matrix estimation through block thresholding. The Annals of Statistics, 40(4):2014-2042, 2012.
Minimax estimation of large covariance matrices under ℓ 1 -norm. T T Cai, H H Zhou, Statistica Sinica. T. T. Cai and H. H. Zhou. Minimax estimation of large covariance matrices under ℓ 1 -norm. Statistica Sinica, pages 1319-1349, 2012.
Optimal rates of convergence for sparse covariance matrix estimation. T T Cai, H H Zhou, The Annals of Statistics. 405T. T. Cai and H. H. Zhou. Optimal rates of convergence for sparse covariance matrix estimation. The Annals of Statistics, 40(5):2389-2420, 2012.
Iterative ensemble Kalman methods: A unified perspective with some new variants. N K Chada, Y Chen, D Sanz-Alonso, Foundations of Data Science. 33N. K. Chada, Y. Chen, and D. Sanz-Alonso. Iterative ensemble Kalman methods: A unified perspective with some new variants. Foundations of Data Science, 3(3):331-369, 2021.
The sample size required in importance sampling. S Chatterjee, P Diaconis, The Annals of Applied Probability. 282S. Chatterjee and P. Diaconis. The sample size required in importance sampling. The Annals of Applied Probability, 28(2):1099-1135, 2018.
The masked sample covariance estimator: an analysis using matrix concentration inequalities. Information and Inference: A. R Y Chen, A Gittens, J A Tropp, Journal of the IMA. 11R. Y. Chen, A. Gittens, and J. A. Tropp. The masked sample covariance estimator: an analysis using matrix concentration inequalities. Information and Inference: A Journal of the IMA, 1(1):2-20, 2012.
Autodifferentiable ensemble Kalman filters. Y Chen, D Sanz-Alonso, R Willett, SIAM Journal on Mathematics of Data Science. 42Y. Chen, D. Sanz-Alonso, and R. Willett. Autodifferentiable ensemble Kalman filters. SIAM Journal on Mathematics of Data Science, 4(2):801-833, 2022.
Conditions for successful data assimilation. A J Chorin, M Morzfeld, Journal of Geophysical Research: Atmospheres. 11820A. J. Chorin and M. Morzfeld. Conditions for successful data assimilation. Journal of Geo- physical Research: Atmospheres, 118(20):11-522, 2013.
On the stability and the uniform propagation of chaos properties of ensemble Kalman-Bucy filters. P , Del Moral, J Tugaut, The Annals of Applied Probability. 282P. Del Moral and J. Tugaut. On the stability and the uniform propagation of chaos properties of ensemble Kalman-Bucy filters. The Annals of Applied Probability, 28(2):790-850, 2018.
Ensemble Kalman inversion: mean-field limit and convergence analysis. Z Ding, Q Li, Statistics and Computing. 311Z. Ding and Q. Li. Ensemble Kalman inversion: mean-field limit and convergence analysis. Statistics and Computing, 31(1):1-21, 2021.
Operator norm consistent estimation of large-dimensional sparse covariance matrices. N El Karoui, The Annals of Statistics. 366N. El Karoui. Operator norm consistent estimation of large-dimensional sparse covariance matrices. The Annals of Statistics, 36(6):2717-2756, 2008.
Analysis of the ensemble and polynomial chaos Kalman filters in Bayesian inverse problems. O G Ernst, B Sprungk, H.-J Starkloff, SIAM/ASA Journal on Uncertainty Quantification. 31O. G. Ernst, B. Sprungk, and H.-J. Starkloff. Analysis of the ensemble and polynomial chaos Kalman filters in Bayesian inverse problems. SIAM/ASA Journal on Uncertainty Quantifica- tion, 3(1):823-851, 2015.
Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. G Evensen, Journal of Geophysical Research: Oceans. 99c5G. Evensen. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research: Oceans, 99(c5):10143-10162, 1995.
Sampling strategies and square root analysis schemes for the EnKF. G Evensen, Ocean Dynamics. 546G. Evensen. Sampling strategies and square root analysis schemes for the EnKF. Ocean Dynamics, 54(6):539-560, 2004.
Data Assimilation: the Ensemble Kalman Filter. G Evensen, Springer Science and Business MediaG. Evensen. Data Assimilation: the Ensemble Kalman Filter. Springer Science and Business Media, 2009.
Assimilation of Geosat altimeter data for the Agulhas current using the ensemble Kalman filter with a quasigeostrophic model. G Evensen, P Van Leeuwen, Monthly Weather Review. 1241G. Evensen and P. Van Leeuwen. Assimilation of Geosat altimeter data for the Agulhas current using the ensemble Kalman filter with a quasigeostrophic model. Monthly Weather Review, 124(1):85-96, 1996.
On the efficiency of covariance localisation of the ensemble Kalman filter using augmented ensembles. A Farchi, M Bocquet, Frontiers in Applied Mathematics and Statistics. 3A. Farchi and M. Bocquet. On the efficiency of covariance localisation of the ensemble Kalman filter using augmented ensembles. Frontiers in Applied Mathematics and Statistics, page 3, 2019.
Estimation of high-dimensional prior and posterior covariance matrices in Kalman filter variants. R Furrer, T Bengtsson, Journal of Multivariate Analysis. 982R. Furrer and T. Bengtsson. Estimation of high-dimensional prior and posterior covariance matrices in Kalman filter variants. Journal of Multivariate Analysis, 98(2):227-255, 2007.
Interacting Langevin diffusions: Gradient structure and ensemble Kalman sampler. A Garbuno-Inigo, F Hoffmann, W Li, A M Stuart, SIAM Journal on Applied Dynamical Systems. 191A. Garbuno-Inigo, F. Hoffmann, W. Li, and A. M. Stuart. Interacting Langevin diffusions: Gradient structure and ensemble Kalman sampler. SIAM Journal on Applied Dynamical Sys- tems, 19(1):412-441, 2020.
Construction of correlation functions in two and three dimensions. G Gaspari, S E Cohn, Quarterly Journal of the Royal Meteorological Society. 125554G. Gaspari and S. E. Cohn. Construction of correlation functions in two and three dimensions. Quarterly Journal of the Royal Meteorological Society, 125(554):723-757, 1999.
A mechanism for catastrophic filter divergence in data assimilation for sparse observation networks. G A Gottwald, A J Majda, Nonlinear Processes in Geophysics. 20G. A. Gottwald and A. J. Majda. A mechanism for catastrophic filter divergence in data assimilation for sparse observation networks. Nonlinear Processes in Geophysics, 20(5):705- 712, 2013.
An iterative ensemble Kalman filter for multiphase fluid flow data assimilation. Y Gu, D S Oliver, Spe Journal. 1204Y. Gu and D. S. Oliver. An iterative ensemble Kalman filter for multiphase fluid flow data assimilation. Spe Journal, 12(04):438-446, 2007.
Ensemble Kalman filter for neural network based one-shot inversion. P A Guth, C Schillings, S Weissmann, arXiv:2005.02039arXiv preprintP. A. Guth, C. Schillings, and S. Weissmann. Ensemble Kalman filter for neural network based one-shot inversion. arXiv preprint arXiv:2005.02039, 2020.
A regularizing Levenberg-Marquardt scheme, with applications to inverse groundwater filtration problems. M Hanke, Inverse Problems. 131M. Hanke. A regularizing Levenberg-Marquardt scheme, with applications to inverse ground- water filtration problems. Inverse Problems, 13(1):79-95, 1997.
Catastrophic filter divergence in filtering nonlinear dissipative systems. J Harlim, A J Majda, Communications in Mathematical Sciences. 81J. Harlim and A. J. Majda. Catastrophic filter divergence in filtering nonlinear dissipative systems. Communications in Mathematical Sciences, 8(1):27-43, 2010.
Kinetic methods for inverse problems. M Herty, G Visconti, Kinetic & Related Models. 1251109M. Herty and G. Visconti. Kinetic methods for inverse problems. Kinetic & Related Models, 12(5):1109, 2019.
R A Horn, C R Johnson, Matrix Analysis. Cambridge University PressR. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 2012.
Methods for ensemble prediction. P L Houtekamer, J Derome, Monthly Weather Review. 1237P. L. Houtekamer and J. Derome. Methods for ensemble prediction. Monthly Weather Review, 123(7):2181-2196, 1995.
Data assimilation using an ensemble Kalman filter technique. P L Houtekamer, H L Mitchell, Monthly Weather Review. 1263P. L. Houtekamer and H. L. Mitchell. Data assimilation using an ensemble Kalman filter technique. Monthly Weather Review, 126(3):796-811, 1998.
A sequential ensemble Kalman filter for atmospheric data assimilation. P L Houtekamer, H L Mitchell, Monthly Weather Review. 1291P. L. Houtekamer and H. L. Mitchell. A sequential ensemble Kalman filter for atmospheric data assimilation. Monthly Weather Review, 129(1):123-137, 2001.
Review of the ensemble Kalman filter for atmospheric data assimilation. P L Houtekamer, F Zhang, Monthly Weather Review. 14412P. L. Houtekamer and F. Zhang. Review of the ensemble Kalman filter for atmospheric data assimilation. Monthly Weather Review, 144(12):4489-4532, 2016.
A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems. M A Iglesias, Inverse Problems. 32225002M. A. Iglesias. A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems. Inverse Problems, 32(2):025002, 2016.
Ensemble Kalman methods for inverse problems. M A Iglesias, K J H Law, A M Stuart, Inverse Problems. 29445001M. A. Iglesias, K. J. H. Law, and A. M. Stuart. Ensemble Kalman methods for inverse problems. Inverse Problems, 29(4):045001, 2013.
Understanding the ensemble Kalman filter. M Katzfuss, J R Stroud, C K Wikle, The American Statistician. 704M. Katzfuss, J. R. Stroud, and C. K. Wikle. Understanding the ensemble Kalman filter. The American Statistician, 70(4):350-357, 2016.
Concrete ensemble Kalman filters with rigorous catastrophic filter divergence. D Kelly, A J Majda, X T Tong, Proceedings of the National Academy of Sciences. 11234D. Kelly, A. J. Majda, and X. T. Tong. Concrete ensemble Kalman filters with rigorous catastrophic filter divergence. Proceedings of the National Academy of Sciences, 112(34):10589- 10594, 2015.
Well-posedness and accuracy of the ensemble Kalman filter in discrete and continuous time. D Kelly, A M Stuart, Nonlinearity. 2710D. Kelly and A. M. Stuart. Well-posedness and accuracy of the ensemble Kalman filter in discrete and continuous time. Nonlinearity, 27(10), 2014.
Hierarchical ensemble Kalman methods with sparsitypromoting generalized gamma hyperpriors. H Kim, D Sanz-Alonso, A Strang, arXiv:2205.09322arXiv preprintH. Kim, D. Sanz-Alonso, and A. Strang. Hierarchical ensemble Kalman methods with sparsity- promoting generalized gamma hyperpriors. arXiv preprint arXiv:2205.09322, 2022.
Concentration inequalities and moment bounds for sample covariance operators. V Koltchinskii, K Lounici, Bernoulli. 231V. Koltchinskii and K. Lounici. Concentration inequalities and moment bounds for sample covariance operators. Bernoulli, 23(1):110-133, 2017.
Ensemble Kalman inversion: a derivative-free technique for machine learning tasks. N Kovachki, A M Stuart, Inverse Problems. 35995005N. B Kovachki and A. M. Stuart. Ensemble Kalman inversion: a derivative-free technique for machine learning tasks. Inverse Problems, 35(9):095005, 2019.
Convergence of the square root ensemble Kalman filter in the large ensemble limit. E Kwiatkowski, J Mandel, SIAM/ASA Journal on Uncertainty Quantification. 31E. Kwiatkowski and J. Mandel. Convergence of the square root ensemble Kalman filter in the large ensemble limit. SIAM/ASA Journal on Uncertainty Quantification, 3(1):1-17, 2015.
Theory of Point Estimation. L E Lehmann, G Casella, Springer Science & Business MediaL. E. Lehmann, and G. Casella. Theory of Point Estimation. Springer Science & Business Media, 2006.
Mean field limit of ensemble square root filters-discrete and continuous time. T Lange, W Stannat, arXiv:2011.10516arXiv preprintT. Lange and W. Stannat. Mean field limit of ensemble square root filters-discrete and con- tinuous time. arXiv preprint arXiv:2011.10516, 2020.
K J H Law, A M Stuart, K Zygalakis, Data Assimilation. SpringerK. J. H. Law, A. M. Stuart, and K. Zygalakis. Data Assimilation. Springer, 2015.
Deterministic mean-field ensemble Kalman filtering. K J H Law, H Tembine, R Tempone, SIAM Journal on Scientific Computing. 383K. J. H. Law, H. Tembine, and R. Tempone. Deterministic mean-field ensemble Kalman filtering. SIAM Journal on Scientific Computing, 38(3):A1251-A1279, 2016.
Implications of stochastic and deterministic filters as ensemble-based data assimilation methods in varying regimes of error growth. W G Lawson, J A Hansen, Monthly weather review. 1328W. G. Lawson and J. A. Hansen. Implications of stochastic and deterministic filters as ensemble-based data assimilation methods in varying regimes of error growth. Monthly weather review, 132(8):1966-1981, 2004.
Large sample asymptotics for the ensemble Kalman filter. F Le Gland, V Monbet, V.-D Tran, INRIA. PhD thesisF. Le Gland, V. Monbet, and V.-D. Tran. Large sample asymptotics for the ensemble Kalman filter. PhD thesis, INRIA, 2009.
Nonlinear Data Assimilation. P Van Leeuwen, Y Cheng, S Reich, SpringerP. Van Leeuwen, Y. Cheng, and S. Reich. Nonlinear Data Assimilation. Springer, 2015.
The impact of ensemble filter definition on the assimilation of temperature profiles in the tropical Pacific. O Leeuwenburgh, G Evensen, L Bertino, Quarterly Journal of the Royal Meteorological Society: A journal of the atmospheric sciences, applied meteorology and physical oceanography. 131613O. Leeuwenburgh, G. Evensen, and L. Bertino. The impact of ensemble filter definition on the assimilation of temperature profiles in the tropical Pacific. Quarterly Journal of the Royal Meteorological Society: A journal of the atmospheric sciences, applied meteorology and physical oceanography, 131(613):3291-3300, 2005.
Partial estimation of covariance matrices. Probability Theory and Related Fields. E Levina, R Vershynin, 153E. Levina and R. Vershynin. Partial estimation of covariance matrices. Probability Theory and Related Fields, 153(3-4):405-419, 2012.
An iterative ensemble Kalman filter for data assimilation. G Li, A C Reynolds, SPE annual technical conference and exhibition. Society of Petroleum EngineersG. Li and A. C. Reynolds. An iterative ensemble Kalman filter for data assimilation. In SPE annual technical conference and exhibition. Society of Petroleum Engineers, 2007.
On numerical properties of the ensemble Kalman filter for data assimilation. J Li, D Xiu, Computer Methods in Applied Mechanics and Engineering. 197J. Li and D. Xiu. On numerical properties of the ensemble Kalman filter for data assimilation. Computer Methods in Applied Mechanics and Engineering, 197(43-44):3574-3583, 2008.
Filtering Complex Turbulent Systems. A J Majda, J Harlim, Cambridge University PressA. J. Majda and J. Harlim. Filtering Complex Turbulent Systems. Cambridge University Press, 2012.
Performance of ensemble Kalman filters in large dimensions. A J Majda, X T Tong, Communications on Pure and Applied Mathematics. 715A. J. Majda and X. T. Tong. Performance of ensemble Kalman filters in large dimensions. Communications on Pure and Applied Mathematics, 71(5):892-937, 2018.
On the convergence of the ensemble Kalman filter. J Mandel, L Cobb, J D Beezley, Applications of Mathematics. 566J. Mandel, L. Cobb, and J. D. Beezley. On the convergence of the ensemble Kalman filter. Applications of Mathematics, 56(6):533-541, 2011.
Empirical processes with a bounded ψ 1 diameter. Geometric and Functional Analysis. S Mendelson, 20S. Mendelson. Empirical processes with a bounded ψ 1 diameter. Geometric and Functional Analysis, 20(4):988-1027, 2010.
What the collapse of the ensemble Kalman filter tells us about particle filters. M Morzfeld, D Hodyss, C Snyder, Tellus A: Dynamic Meteorology and Oceanography. 691283809M. Morzfeld, D. Hodyss, and C. Snyder. What the collapse of the ensemble Kalman filter tells us about particle filters. Tellus A: Dynamic Meteorology and Oceanography, 69(1):1283809, 2017.
Note on interacting Langevin diffusions: Gradient structure and ensemble Kalman sampler by. N Nüsken, S Reich, arXiv:1908.10890Garbuno-Inigo, Hoffmann, Li and StuartarXiv preprintN. Nüsken and S. Reich. Note on interacting Langevin diffusions: Gradient structure and ensemble Kalman sampler by Garbuno-Inigo, Hoffmann, Li and Stuart. arXiv preprint arXiv:1908.10890, 2019.
A local ensemble Kalman filter for atmospheric data assimilation. E Ott, B R Hunt, I Szunyogh, A V Zimin, E J Kostelich, M Corazza, E Kalnay, D J Patil, J A Yorke, Tellus A: Dynamic Meteorology and Oceanography. 56E. Ott, B. R. Hunt, I. Szunyogh, A. V. Zimin, E. J. Kostelich, M. Corazza, E. Kalnay, D. J. Patil, and J. A. Yorke. A local ensemble Kalman filter for atmospheric data assimilation. Tellus A: Dynamic Meteorology and Oceanography, 56(5):415-428, 2004.
Localization in the ensemble Kalman filter. MSc Atmosphere. R Petrie, Ocean and Climate University of ReadingR Petrie. Localization in the ensemble Kalman filter. MSc Atmosphere, Ocean and Climate University of Reading, 2008.
Probabilistic Forecasting and Bayesian Data Assimilation. S Reich, C Cotter, Cambridge University PressS. Reich and C. Cotter. Probabilistic Forecasting and Bayesian Data Assimilation. Cambridge University Press, 2015.
Iterative forms of the ensemble Kalman filter. A C Reynolds, M Zafari, G Li, EC-MOR X-10th European conference on the mathematics of oil recovery. 23A. C. Reynolds, M. Zafari, and G. Li. Iterative forms of the ensemble Kalman filter. In EC- MOR X-10th European conference on the mathematics of oil recovery, pages cp-23. European Association of Geoscientists & Engineers, 2006.
The ensemble Kalman filter: a signal processing perspective. M Roth, G Hendeby, C Fritsche, F Gustafsson, EURASIP Journal on Advances in Signal Processing. 20171M. Roth, G. Hendeby, C. Fritsche, and F. Gustafsson. The ensemble Kalman filter: a signal processing perspective. EURASIP Journal on Advances in Signal Processing, 2017(1):1-16, 2017.
Importance sampling and necessary sample size: An information theory approach. D Sanz-Alonso, SIAM/ASA Journal on Uncertainty Quantification. 62D. Sanz-Alonso. Importance sampling and necessary sample size: An information theory approach. SIAM/ASA Journal on Uncertainty Quantification, 6(2):867-879, 2018.
D Sanz-Alonso, A M Stuart, A Taeb, arXiv:1810.06191Inverse Problems and Data Assimilation. arXiv preprintD. Sanz-Alonso, A. M. Stuart, and A. Taeb. Inverse Problems and Data Assimilation. arXiv preprint arXiv:1810.06191, 2019.
Bayesian update with importance sampling: Required sample size. D Sanz-Alonso, Z Wang, Entropy. 23122D. Sanz-Alonso and Z. Wang. Bayesian update with importance sampling: Required sample size. Entropy, 23(1):22, 2021.
Bayesian Filtering and Smoothing. S Särkkä, Cambridge University Press3S. Särkkä. Bayesian Filtering and Smoothing, volume 3. Cambridge University Press, 2013.
Analysis of the ensemble Kalman filter for inverse problems. C Schillings, A M Stuart, SIAM Journal on Numerical Analysis. 553C. Schillings and A. M. Stuart. Analysis of the ensemble Kalman filter for inverse problems. SIAM Journal on Numerical Analysis, 55(3):1264-1290, 2017.
Particle filters, the "optimal" proposal and high-dimensional systems. C Snyder, Proceedings of the ECMWF Seminar on Data Assimilation for Atmosphere and Ocean. the ECMWF Seminar on Data Assimilation for Atmosphere and OceanC. Snyder. Particle filters, the "optimal" proposal and high-dimensional systems. In Proceed- ings of the ECMWF Seminar on Data Assimilation for Atmosphere and Ocean, 2011.
Obstacles to high-dimensional particle filtering. C Snyder, T Bengtsson, P J Bickel, J L Anderson, Monthly Weather Review. 13612C. Snyder, T. Bengtsson, P. J. Bickel, and J. L. Anderson. Obstacles to high-dimensional particle filtering. Monthly Weather Review, 136(12):4629-4640, 2016.
Performance bounds for particle filters using the optimal proposal. C Snyder, T Bengtsson, M Morzfeld, Monthly Weather Review. 14311C. Snyder, T. Bengtsson, and M. Morzfeld. Performance bounds for particle filters using the optimal proposal. Monthly Weather Review, 143(11):4750-4761, 2015.
A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. C Stein, Proceedings of the sixth Berkeley symposium on mathematical statistics and probability. the sixth Berkeley symposium on mathematical statistics and probabilityUniversity of California Press2C. Stein. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In Proceedings of the sixth Berkeley symposium on mathematical statistics and probability, volume 2: Probability theory, volume 6, pages 583-603. University of California Press, 1972.
Inverse problems: a Bayesian perspective. A M Stuart, Acta Numerica. 19A. M. Stuart. Inverse problems: a Bayesian perspective. Acta Numerica, 19:451-559, 2010.
The Generic Chaining: Upper and Lower Bounds of Stochastic Processes. M Talagrand, Springer Science & Business MediaM. Talagrand. The Generic Chaining: Upper and Lower Bounds of Stochastic Processes. Springer Science & Business Media, 2005.
Upper and Lower Bounds for Stochastic Processes. M Talagrand, Springer60M. Talagrand. Upper and Lower Bounds for Stochastic Processes, volume 60. Springer, 2014.
Ensemble square root filters. M K Tippett, J L Anderson, C H Bishop, T M Hamill, J S Whitaker, Monthly Weather Review. 1317M. K. Tippett, J. L. Anderson, C. H. Bishop, T. M. Hamill, and J. S. Whitaker. Ensemble square root filters. Monthly Weather Review, 131(7):1485-1490, 2003.
Nonlinear stability of the ensemble Kalman filter with adaptive covariance inflation. X T Tong, A J Majda, D Kelly, Nonlinearity. 292X. T. Tong, A. J. Majda, and D. Kelly. Nonlinear stability of the ensemble Kalman filter with adaptive covariance inflation. Nonlinearity, 29(2):54-60, 2015.
Nonlinear stability and ergodicity of ensemble based Kalman filters. X T Tong, A Majda, D Kelly, Nonlinearity. 292657X. T. Tong, A. J Majda, and D. Kelly. Nonlinear stability and ergodicity of ensemble based Kalman filters. Nonlinearity, 29(2):657, 2016.
. X T Tong, M Morzfeld, arXiv:2201.10821Localized ensemble Kalman inversion. arXiv preprintX. T. Tong and M. Morzfeld. Localized ensemble Kalman inversion. arXiv preprint arXiv:2201.10821, 2022.
An Introduction to Matrix Concentration Inequalities. J A Tropp, Now Publishers, Inc8J. A. Tropp. An Introduction to Matrix Concentration Inequalities, volume 8. Now Publishers, Inc., 2015.
On the iterated forms of Kalman filters using statistical linearization. S Ungarala, Journal of Process Control. 225S. Ungarala. On the iterated forms of Kalman filters using statistical linearization. Journal of Process Control, 22(5):935-943, 2012.
On the spectral norm of Gaussian random matrices. R Van Handel, Transactions of the American Mathematical Society. 36911R. Van Handel. On the spectral norm of Gaussian random matrices. Transactions of the American Mathematical Society, 369(11):8161-8178, 2017.
High-Dimensional Probability: An Introduction with Applications in Data Science. R Vershynin, Cambridge University Press47R. Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Sci- ence, volume 47. Cambridge University Press, 2018.
High-Dimensional Statistics: A Non-Asymptotic Viewpoint. M J Wainwright, Cambridge University Press48M. J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint, volume 48. Cambridge University Press, 2019.
| []
|
[
"Theoretical analysis of quantum key distribution systems when integrated with a DWDM optical transport network",
"Theoretical analysis of quantum key distribution systems when integrated with a DWDM optical transport network",
", 6th Vasilyevskogo Ostrova Line, 59, Saint Petersburg, 199178, Russia * [email protected]"
]
| [
"Irina Vorontsova \nLaboratory for Quantum Communications\nITMO University\nBirzhevaya Line, 16, Saint Petersburg199034Russia\n",
"Roman Goncharov \nLaboratory for Quantum Communications\nITMO University\nBirzhevaya Line, 16, Saint Petersburg199034Russia\n",
"Angelina Tarabrina \nLaboratory for Quantum Communications\nITMO University\nBirzhevaya Line, 16, Saint Petersburg199034Russia\n",
"Fedor Kiselev \nLaboratory for Quantum Communications\nITMO University\nBirzhevaya Line, 16, Saint Petersburg199034Russia\n\nQuanttelecom LLC\n\n",
"Vladimir Egorov \nLaboratory for Quantum Communications\nITMO University\nBirzhevaya Line, 16, Saint Petersburg199034Russia\n\nQuanttelecom LLC\n\n"
]
| [
"Laboratory for Quantum Communications\nITMO University\nBirzhevaya Line, 16, Saint Petersburg199034Russia",
"Laboratory for Quantum Communications\nITMO University\nBirzhevaya Line, 16, Saint Petersburg199034Russia",
"Laboratory for Quantum Communications\nITMO University\nBirzhevaya Line, 16, Saint Petersburg199034Russia",
"Laboratory for Quantum Communications\nITMO University\nBirzhevaya Line, 16, Saint Petersburg199034Russia",
"Quanttelecom LLC\n",
"Laboratory for Quantum Communications\nITMO University\nBirzhevaya Line, 16, Saint Petersburg199034Russia",
"Quanttelecom LLC\n"
]
| []
| A theoretical research and numerical simulation of the noise influence caused by spontaneous Raman scattering, four-wave mixing, and linear channel crosstalk on the performance of QKD systems was conducted. Three types of QKD systems were considered: coherent one-way (COW) QKD protocol, subcarrier-wave (SCW) QKD system, and continuous-variable (CV) QKD integrated with classical DWDM channels. We calculate the secure key generation rate for the systems mentioned addressing different channel allocation schemes (i.e., configurations). A uniform DWDM grid is considered with quantum channel located in C-band and O-band (at 1310 nm) of a telecommunication window. The systems' performance is analyzed in terms of the maximal achievable distance values. Configurations for the further analysis and investigation are chosen optimally, i.e., their maximal achievable distances are the best. | 10.1364/josab.469933 | [
"https://export.arxiv.org/pdf/2209.15507v1.pdf"
]
| 252,668,500 | 2209.15507 | 66ee32123012c994a899e5b6ea9d6bfd542fc96b |
Theoretical analysis of quantum key distribution systems when integrated with a DWDM optical transport network
Irina Vorontsova
Laboratory for Quantum Communications
ITMO University
Birzhevaya Line, 16, Saint Petersburg199034Russia
Roman Goncharov
Laboratory for Quantum Communications
ITMO University
Birzhevaya Line, 16, Saint Petersburg199034Russia
Angelina Tarabrina
Laboratory for Quantum Communications
ITMO University
Birzhevaya Line, 16, Saint Petersburg199034Russia
Fedor Kiselev
Laboratory for Quantum Communications
ITMO University
Birzhevaya Line, 16, Saint Petersburg199034Russia
Quanttelecom LLC
Vladimir Egorov
Laboratory for Quantum Communications
ITMO University
Birzhevaya Line, 16, Saint Petersburg199034Russia
Quanttelecom LLC
Theoretical analysis of quantum key distribution systems when integrated with a DWDM optical transport network
, 6th Vasilyevskogo Ostrova Line, 59, Saint Petersburg, 199178, Russia * [email protected]
A theoretical research and numerical simulation of the noise influence caused by spontaneous Raman scattering, four-wave mixing, and linear channel crosstalk on the performance of QKD systems was conducted. Three types of QKD systems were considered: coherent one-way (COW) QKD protocol, subcarrier-wave (SCW) QKD system, and continuous-variable (CV) QKD integrated with classical DWDM channels. We calculate the secure key generation rate for the systems mentioned addressing different channel allocation schemes (i.e., configurations). A uniform DWDM grid is considered with quantum channel located in C-band and O-band (at 1310 nm) of a telecommunication window. The systems' performance is analyzed in terms of the maximal achievable distance values. Configurations for the further analysis and investigation are chosen optimally, i.e., their maximal achievable distances are the best.
Introduction
Quantum communications, and quantum key distribution (QKD) in particular, are among the most rapidly advancing branches of quantum technology [1,2]. The main idea behind QKD is the ability to distribute a cryptographically secure key between two or more authenticated users connected by quantum and classical information channels. The security of QKD against eavesdropping attacks is guaranteed by the principles of quantum mechanics [3], which protects the transmitted data from all known attacks, including those arising in the field of quantum computing [4].
The very first QKD protocol was proposed by Charles Bennett and Gilles Brassard back in 1984 [5]. Since then, considerable progress has been made in many areas related to QKD: new protocols and QKD systems have appeared [2], theoretical approaches to security analysis have been refined and improved [6], and experimental work and practical implementations of QKD have been realized [7].
There are essentially two ways to realize a quantum channel for QKD in practice: over fiber-optical communication networks [8] or over free-space channels [9]. Despite the growing interest in free-space QKD systems, which offer a number of advantages [9], QKD over fiber-optical networks remains of particular importance for practical deployments. The reason is that such systems can be implemented directly and integrated with the existing telecommunication infrastructure, with well-established procedures for installation, maintenance, and cost management, while still ensuring a sufficient level of security.
Evidently, the power of a signal transmitted over a quantum channel is significantly lower than that of information channels. This fact has long been a factor impeding the widespread use of QKD technologies. To battle the problem, so-called dark fibers are still used, i.e., fibers allocated for the propagation of one (here, quantum) signal only. In optical networks, fibers of this type are usually reserved as backup. Additionally, quantum channels are subject to a number of further technical constraints, such as low optical losses, full network connectivity provided by point-to-point systems, and relatively low secure key generation rates. The combination of these factors makes large-scale allocation of dark fibers for QKD systems inexpedient, since such an approach is far from optimal both for practical implementation and for economic feasibility. Thus, the integration of well-known channel multiplexing technologies with QKD systems is needed. In turn, the use of channel multiplexing technologies (particularly, the simultaneous distribution of quantum and information channels in a single fiber using dense wavelength division multiplexing (DWDM) systems) widens the band available to a quantum channel and reduces its maintenance expenses. However, nonlinear effects that arise in an optical fiber in the presence of high-power radiation make the problem more complex and impose specific limitations on this approach. Nonlinear effects inevitably appear when powerful information channels propagate over a fiber-optic network, which results in noise photons at the frequencies reserved for quantum channels. It was shown [10][11][12] that the main sources of noise when working with DWDM systems for the simultaneous propagation of quantum and information channels in a single optical fiber are the nonlinear effects of spontaneous Raman scattering (SpRS) and four-wave mixing (FWM), as well as linear channel crosstalk (LCXT) of classical information channels.
In this paper, a theoretical study and numerical simulation of the influence of the noise caused by SpRS, FWM, and LCXT on the performance of QKD systems was performed. A brief overview of these processes is given, as well as of three types of QKD systems: systems based on the coherent one-way (COW) QKD protocol, subcarrier-wave (SCW) QKD systems, and continuous-variable (CV) QKD integrated with classical DWDM channels. We calculated the secure key generation rate for the systems mentioned, addressing different channel allocation schemes (configurations). The mathematical model used for the calculations is discussed in detail. A uniform DWDM grid was considered with the quantum channel located in the C-band and in the O-band (at 1310 nm) of the telecommunication window. The systems' performance was analyzed in terms of their maximal achievable distance values. Configurations for the analysis and investigation were chosen optimally, i.e., so that their maximal achievable distances are the best. The results obtained were then compared and analyzed, and features and patterns specific to each of the QKD systems were identified.
Mathematical model for the secure key generation rate evaluation
One of the main characteristics used for a qualitative description and estimation of a particular QKD system's efficiency is the secure key generation rate K. The mathematical model should be discussed separately for each particular protocol. Two types of protocols are addressed in the article: discrete-variable (DV) and continuous-variable (CV) ones. The schematic representation of the states used for information encoding in the different QKD systems is shown in Figure 1.
Discrete-variable QKD systems
Subcarrier wave QKD systems
Special attention should be paid to subcarrier-wave QKD (SCW QKD) systems [13,14]. In this case, as a result of phase modulation of monochromatic laser radiation with frequency ω, multimode coherent states are generated at subcarrier frequencies; thus, the quantum channel is moved to the sidebands (see Figure 1a). When the carrier is modulated with a frequency Ω, the energy of the carrier mode is redistributed into 2S vacuum subcarrier modes, which form the resulting signal at the frequencies ω_k = ω + kΩ (k = −S, ..., S). In this case, the signal amplitudes can be expressed in terms of the Wigner d-function formalism [15]. The quantum bit error rate QBER_SCW is defined through the conditional probability P(⊥|0) of receiving an inconclusive measurement result,

P(⊥|0) = 1 − p_det(0, φ₀) − p_det(0, φ₀ + π),   (1)

where φ₀ is the phase offset caused by imperfections of the system components, and the conditional probability of an incorrect bit measurement,

P(1|0) := p_det(0, φ₀ + π).   (2)

Then QBER_SCW is given by:

QBER_SCW = P(1|0) / (1 − P(⊥|0)) = p_det(0, φ₀ + π) / [p_det(0, φ₀) + p_det(0, φ₀ + π)].   (3)
The detection probability featured in the expressions above can, in turn, be calculated as:

p_det(φ_A, φ_B) = η_D n_ph(φ_A, φ_B) + γ_dark Δt + p_ram = p_cl(φ_A, φ_B) + p_dark + p_ram,   (4)

where η_D is the detector quantum efficiency, γ_dark is the dark count rate, p_dark is the dark count probability, p_cl is the detector click probability, T is the time window, Δt is the detector gating time, and n_ph(φ_A, φ_B) is the mean number of photons arriving at the detector, obtained by the following formula:

n_ph(φ_A, φ_B) = μ₀ η(L) η_B (1 − (1 − ϑ)|d^S_00(β)|²),   (5)

where η(L) = 10^(−0.1ξL) is the transmission coefficient of the quantum channel, ξ is the attenuation of the fiber, η_B describes the optical losses in the receiver's module, μ₀ is the mean photon number in the carrier mode, ϑ is the spectral attenuation coefficient, and the angle β is derived from:
cos β ≡ 1 − (1/2)β² + 0.5β⁴ = cos²m − sin²m · cos(φ_A − φ_B),   cos m = 1 − (1/2)m² + 0.5m⁴.   (6)
Assuming S is large and the modulation index m is small, an approximate expression can be used for calculating d^S_00(β):

d^S_00(β) ≈ J₀(α(β)) ≈ 1 − α(β)²/4,   (7)

where α(β)² = 2m²S²(1 + cos(φ_A − φ_B)), and J₀(·) is the zero-order Bessel function of the first kind.
After performing the necessary substitutions in Eq. (3), one can obtain the final expression for QBER_SCW through the real parameters of the SCW QKD system:
QBER_SCW = [2μη(1 − ϑ)(1 − cos φ₀) + ϑημ₀ + p_dark + p_ram] / [4μη(1 − ϑ) + 2ϑημ₀ + 2p_dark + 2p_ram],   (8)

where η ≡ η_B η(L) η_D is the total optical transmission coefficient, μ = μ₀m² is the mean photon number in the sidebands, and ζ ≡ Δt/T.
The secure key generation rate K_SCW, when considering collective attacks, is lower bounded by the Devetak-Winter bound:

K_SCW = ν_S P_B [1 − leak_EC(QBER_SCW) − max χ(A:E)],   (9)

where ν_S is the modulation frequency, P_B = (1 − P(⊥|0))/N is the probability of successful state detection if Bob guesses the basis correctly, N is the number of bases, leak_EC(QBER_SCW) is the amount of information disclosed by Alice during error correction, and max χ(A:E) is the Holevo information, giving the upper bound for the information accessible to the eavesdropper. This expression can be rewritten for collective beam-splitting attacks, under the assumption that Eve is not affected by Raman scattering, in the following way [15]:

K_SCW = (1 − P(⊥|0)) (ν_S/2) [1 − h(QBER_SCW) − h((1 − e^(−μ₀(1−η(L))m²))/2)],   (10)

where the approximation of the Wigner d-function, d^S_00 ≈ 1 − m², is used again to simplify the expression.
In further simulations, the following parameters of the SCW QKD system were used: Ω = 4.8 GHz, μ₀ = 3.93, m = 0.319, ξ = 0.18 dB/km, ϑ = 10⁻³, Δt = 1 ns, η_D = 0.1, γ_dark = 4 × 10⁻⁶, φ₀ = 5°, T = 1 ns, and Bob's module losses are 8 dB.
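To make the role of the formulas above concrete, the following minimal Python sketch evaluates Eqs. (8) and (10), as reconstructed here, as a function of fiber length. It is not the authors' simulation code: the assignment of the listed values to symbols, the treatment of the Raman click probability p_ram as an external input, and the form μ = μ₀m² are all assumptions made for illustration.

```python
import numpy as np

def h(x):
    """Binary Shannon entropy, clipped to avoid log(0)."""
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def scw_secret_fraction(L_km, p_ram=0.0, mu0=3.93, m=0.319, xi_db=0.18,
                        theta=1e-3, eta_D=0.1, bob_losses_db=8.0,
                        p_dark=4e-6, phi0_deg=5.0):
    """QBER (Eq. 8) and secret fraction (the bracket of Eq. 10) for the SCW system."""
    eta_L = 10 ** (-0.1 * xi_db * L_km)        # quantum channel transmission eta(L)
    eta_B = 10 ** (-0.1 * bob_losses_db)       # Bob's module losses
    eta = eta_B * eta_L * eta_D                # total optical transmission coefficient
    mu = mu0 * m ** 2                          # mean photon number in the sidebands (assumed form)
    phi0 = np.deg2rad(phi0_deg)
    num = 2 * mu * eta * (1 - theta) * (1 - np.cos(phi0)) + theta * eta * mu0 + p_dark + p_ram
    den = 4 * mu * eta * (1 - theta) + 2 * theta * eta * mu0 + 2 * p_dark + 2 * p_ram
    qber = num / den                           # Eq. (8)
    chi = h((1 - np.exp(-mu0 * (1 - eta_L) * m ** 2)) / 2)   # beam-splitting Holevo term of Eq. (10)
    return qber, 1 - h(qber) - chi

# the maximal achievable distance is where the secret fraction crosses zero
for L in (10, 40, 60, 70):
    q, r = scw_secret_fraction(L, p_ram=1e-6)
    print(f"L = {L:3d} km   QBER = {q:.3f}   secret fraction = {r:+.3f}")
```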
Coherent one-way QKD
In the coherent one-way (COW) protocol [16], information is encoded within one time window, in which two quantum states can be formed. In this case, information encoding takes place when one of the states is a vacuum state within the time window under consideration. An intensity modulator is used to either prepare a pulse or completely block the beam, so as to create a so-called empty (vacuum) pulse. A logical bit is encoded in the two-pulse sequences formed by a non-empty and an empty pulse. A situation when an empty signal is registered first and is then followed by a non-empty pulse corresponds to a logical 0; the reverse sequence, i.e., the absence of a signal in the second temporal interval within one time window, corresponds to a logical 1 (see Figure 1b). The protocol also implies that Alice can send two non-empty signals within the same time slot, i.e., a decoy sequence. This sequence is used as a trap state, allowing one to establish the fact of eavesdropping in the channel. When the signal enters Bob's module, it is divided into two parts. The first portion of the pulses is transmitted and used to retrieve the raw key, whereas the second one goes straight to a detector that measures the time of arrival of the coherent pulses; this is how the states are distinguished. There is also a control line used to check the interference of two neighboring quantum states.
To calculate the secure key generation rate, the raw key is established first. It consists of quantum signals, detector dark counts, after-pulses and additional noise [17]:
R_raw = (P_s + n_d P_dc + P_AP + P_noise) ν_rep f_duty f_dead,   (11)
where ν_rep is the pulse repetition frequency; P_s, P_dc, P_AP, and P_noise are the signal, dark count, after-pulse, and quantum channel noise detection probabilities, respectively; and n_d is the number of detectors (here n_d = 2). The probability P_noise consists of the SpRS noise P_ram, the FWM noise P_FWM, and the LCXT noise P_LCXT:

P_noise = P_ram + P_FWM + P_LCXT.   (12)
A detailed description of the mathematical model for P_ram, P_FWM, and P_LCXT is given in the next sections.
The quantum signal detection probability is defined as:
P_s = μ t_F t_IL η,   (13)

where μ is the mean photon number per pulse, t_F = exp(−αL) is the fiber transmission at length L, α is the fiber attenuation, t_IL denotes the insertion loss due to optical filtering in the receiver, and η is the quantum detection efficiency.
P_AP ≈ p_AP · (P_s + n_d P_dc + P_noise),   (14)

where p_AP is the ratio between the after-pulse detection probability and the total detection probability.
Next, in order to account for the decrease in the detection rate due to the quantum detector dead time τ_dead, the coefficient f_dead is introduced:

f_dead = [1 + τ_dead ν_rep (P_s + n_d P_dc + P_noise)]⁻¹.   (15)
In addition, the necessary parameter is the duty-cycle coefficient f_duty:

f_duty = L_A / (L + L_A),   (16)

where L_A is the length of Alice's storage line. A certain fraction of the raw key R_raw is discarded, and the sifted key generation rate is expressed as:
R_sift = (1/2) f_sift (P_s + n_d P_dc + P_AP + P_noise) ν_rep f_duty f_dead,   (17)

where f_sift = 1 for the considered QKD system. To evaluate the secure key generation rate K, one operates with the notions of the mutual information per bit between Alice and Bob (I_AB) and between Alice and a potential eavesdropper (I_AE):

K_COW = R_sift (I_AB^COW − I_AE^COW).   (18)

In the equation above, R_sift denotes the sifted key rate and can be obtained from Eq. (17). The mutual information per bit I_AB^COW between Alice and Bob can be obtained by:
I_AB^COW = 1 − f_ec h(QBER_COW),   (19)

where h(x) = −x log₂ x − (1 − x) log₂(1 − x) is the Shannon (binary) entropy for a given QBER and the error-correction efficiency parameter f_ec = 6/5 [10]. To calculate the mutual information between Alice and Eve, I_AE^COW, one can write:
I_AE^COW = μ(1 − t_F) + (1 − V) (1 + μ − μ t_F)/(2 − μ t_F),   (20)
with V denoting the visibility and t_F = e^(−αL) standing for the fiber transmission over a distance L characterized by the attenuation coefficient α. In our calculations, τ_dead = 2 ns, p_AP = 0.008, η = 0.2, V = 0.98, and L_A = 10 m.
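The COW model above can be evaluated with a short script as well. The sketch below follows Eqs. (11)-(20) as reconstructed here; the pulse repetition rate, dark count probability, and mean photon number per pulse are assumed illustrative values (they are not listed in the text), and both QBER_COW and P_noise are taken as inputs, since they come from the measurement statistics and from Eqs. (12) and (43), respectively.

```python
import math

def h(x):
    """Binary Shannon entropy."""
    x = min(max(x, 1e-15), 1 - 1e-15)
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def cow_key_rate(L_km, qber, P_noise=0.0, mu=0.5, V=0.98, eta=0.2,
                 alpha_db=0.18, t_IL_db=8.0, nu_rep=1.25e9, P_dc=4e-6,
                 n_d=2, p_ap=0.008, tau_dead=2e-9, L_A=10.0, f_ec=6 / 5):
    """Secure key rate of Eq. (18) for the COW system (hedged sketch)."""
    t_F = 10 ** (-0.1 * alpha_db * L_km)            # fiber transmission over L (alpha given in dB/km)
    t_IL = 10 ** (-0.1 * t_IL_db)                   # receiver filtering insertion loss
    P_s = mu * t_F * t_IL * eta                     # Eq. (13)
    P_AP = p_ap * (P_s + n_d * P_dc + P_noise)      # Eq. (14)
    f_dead = 1.0 / (1 + tau_dead * nu_rep * (P_s + n_d * P_dc + P_noise))  # Eq. (15)
    f_duty = L_A / (L_km * 1e3 + L_A)               # Eq. (16), lengths in meters
    R_sift = 0.5 * (P_s + n_d * P_dc + P_AP + P_noise) * nu_rep * f_duty * f_dead  # Eq. (17)
    I_AB = 1 - f_ec * h(qber)                       # Eq. (19)
    I_AE = mu * (1 - t_F) + (1 - V) * (1 + mu - mu * t_F) / (2 - mu * t_F)  # Eq. (20)
    return max(R_sift * (I_AB - I_AE), 0.0)         # Eq. (18)

print(f"K_COW(50 km) = {cow_key_rate(50, qber=0.03, P_noise=1e-5):.1f} bit/s")
```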
Continuous-variable QKD
Previously, DV QKD systems were discussed. However, the approach to the mathematical description in the case of CV QKD systems differs in certain respects from that for DV protocols. In particular, the level of quantum errors is estimated from the value of the signal-to-noise ratio (SNR) [18], not the QBER. It is defined as follows:
SNR = [(1/κ) T V_A] / [1 + (1/κ) T ξ],   (21)

where the parameter κ = 1 (2) for homodyne (heterodyne) detection, T denotes the transmittance, ξ is the excess noise in the channel, and V_A is the variance of the quadrature operator. The secure key generation rate is determined by the formula:
K = β I_AB − χ_EB,   (22)

where β denotes the matching (reconciliation) efficiency, I_AB is the mutual information between Alice and Bob, and χ_EB is the Holevo information.

In its turn, the mutual information between Alice and Bob is defined through the SNR [18]:

I_AB = (κ/2) log₂(1 + SNR) = (κ/2) log₂(1 + [(1/κ) T V_A] / [1 + (1/κ) T ξ]).   (23)
When working with CV QKD systems, it is necessary that the concept based on so-called covariance matrices be utilized. Diagonal elements of a covariance matrix provide information about variances of quadrature operators, while its off-diagonal elements contain mutual covariance functions of two quadratures [19].
It is the covariance matrix that makes it possible to estimate the information available to a potential eavesdropper. A mathematical model for a CV QKD system with Gaussian modulation of coherent states is addressed here. The scenario with trusted preparation noise in case of heterodyne detection and reverse reconciliation is considered [20].
In such a scenario, Alice prepares a sequence of coherent states |α₁⟩, ..., |α_i⟩, ..., |α_N⟩ of the form |α_i⟩ = |q_i + ip_i⟩ with quadrature components q_i and p_i, which are realizations of two independent and identically distributed random variables Q and P (see Figure 1c). The latter obey the same normal distribution with zero mean and modulation variance Ṽ_A. Next, Alice sends the state |α_i⟩ through a Gaussian quantum channel, whereas Bob performs a coherent (heterodyne) detection, thereby receiving information about the state.
One can show through the Schmidt decomposition [21] that the value of the von Neumann entropy of an eavesdropper, S_E, coincides with the entropy shared by Alice and Bob, S_AB, so that:

S_E = S_AB = − Σ_i λ_i log₂ λ_i.   (24)
Then, to calculate the Holevo information, which is the difference between the von Neumann entropy of the eavesdropper before (S_E) and after (S_{E|B}) Bob's measurement, the following expression is valid:

χ_EB = S_E − S_{E|B} = S_AB − S_{A|B}.   (25)
The von Neumann entropy of the eavesdropper S_E is defined through the covariance matrix:

Σ_AB^{trusted rec.} = [[V·1₂, √(T_ch(V² − 1))·1₂], [√(T_ch(V² − 1))·1₂, (T_ch(V − 1) + 1 + ξ_ch)·1₂]],   (26)

which takes the following generic form:

Σ = [[a·1₂, c·1₂], [c·1₂, b·1₂]].   (27)
The corresponding symplectic eigenvalues can be found as:

ν_{1,2} = (1/2)[z ± (b − a)],   (28)
z = √((a + b)² − 4c²).   (29)
To obtain S_{E|B}, it is enough to estimate only one block of the covariance matrix of the common state of the remaining modes after a projective measurement of Bob's mode, which describes the eavesdropper's information:

Σ_{E|B} = (1/(V_B + 1)) [[E₁·1₂, E₂·σ_z], [E₂·σ_z, E₃·1₂]],   (30)
E₁ = V((1 − T_det)W_rec + T_det ξ_ch + 1) + T_ch(W_ch − V)(1 + (1 − T_det)W_rec),   (31)
E₂ = √(T_ch(W_ch² − 1)) (ξ_rec + (1 − T_det)W_rec + 1),   (32)
E₃ = (1 − T_det)W_ch W_rec + T_det W_ch(ξ_ch − 1) + ξ_rec + W_ch,   (33)

where V_B = T_ch T_det(V − 1) + 1 + T_det ξ_ch + ξ_rec is the variance of Bob's quadrature operator, W_ch = ξ_ch/(1 − T_ch) + 1 and W_rec = ξ_rec/(1 − T_det) + 1 are the variances of the entangled states, ξ_ch denotes the channel excess noise, ξ_rec is the excess noise of Alice, T_ch = 10^(−αL/10) is the channel transmittance at the distance L, α denotes the attenuation coefficient, T_det = η_det·10^(−losses/10) is the transmittance of the signal arm, and η_det denotes the detector efficiency.
The matrix Σ_{E|B} can be represented in the form (27); therefore, the corresponding symplectic eigenvalues can be obtained similarly:

ν_{3,4} = [w ± (E₃ − E₁)] / (2(V_B + 1)),   (34)
w = √((E₁ + E₃)² − 4E₂²).   (35)
Thus, all the components necessary for the Holevo information evaluation are obtained, and, therefore, it is possible to determine the secure key generation rate.
Some of the parameters needed for the numerical simulation of the CV QKD system were taken from [20] as model parameters.
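For the CV QKD model, the generic part of the computation (SNR, mutual information, and symplectic eigenvalues) can be sketched as follows. The parameter values used in the demo call, the relation V = Ṽ_A + 1, and the standard Gaussian-state entropy function g(ν) are assumptions or textbook facts not spelled out in the text; the conditional-block coefficients E₁-E₃ must be supplied from Eqs. (31)-(33) (or directly from [20]) and are therefore taken as an optional input rather than re-derived here.

```python
import math

def g(nu):
    """Entropy of a thermal state with symplectic eigenvalue nu (shot-noise units)."""
    if nu <= 1.0:
        return 0.0
    return (nu + 1) / 2 * math.log2((nu + 1) / 2) - (nu - 1) / 2 * math.log2((nu - 1) / 2)

def symplectic_eigs(a, b, c):
    """Eqs. (28)-(29) for a matrix of the form (27)."""
    z = math.sqrt((a + b) ** 2 - 4 * c ** 2)
    return (z - (b - a)) / 2, (z + (b - a)) / 2

def conditional_eigs(E1, E2, E3, V_B):
    """Eqs. (34)-(35) for the conditional matrix (30)."""
    w = math.sqrt((E1 + E3) ** 2 - 4 * E2 ** 2)
    return (w - (E3 - E1)) / (2 * (V_B + 1)), (w + (E3 - E1)) / (2 * (V_B + 1))

def cv_key_rate(L_km, V_mod=4.0, xi_ch=0.01, xi_rec=0.01, alpha_db=0.18,
                eta_det=0.6, losses_db=1.0, beta=0.95, kappa=2, E_coeffs=None):
    """Key rate of Eq. (22); returns (I_AB, nu1, nu2) when E_coeffs is not given."""
    T_ch = 10 ** (-alpha_db * L_km / 10)            # channel transmittance
    T_det = eta_det * 10 ** (-losses_db / 10)       # signal-arm transmittance
    V = V_mod + 1                                   # EPR variance (assumed relation)
    T = T_ch * T_det                                # total transmittance (assumed in Eq. 21)
    snr = (T * V_mod / kappa) / (1 + T * xi_ch / kappa)          # Eq. (21)
    I_AB = (kappa / 2) * math.log2(1 + snr)                      # Eq. (23)
    a, b, c = V, T_ch * (V - 1) + 1 + xi_ch, math.sqrt(T_ch * (V ** 2 - 1))  # Eq. (26)
    nu1, nu2 = symplectic_eigs(a, b, c)
    if E_coeffs is None:
        return I_AB, nu1, nu2
    V_B = T_ch * T_det * (V - 1) + 1 + T_det * xi_ch + xi_rec
    nu3, nu4 = conditional_eigs(*E_coeffs, V_B)
    chi_EB = g(nu1) + g(nu2) - g(nu3) - g(nu4)                   # Eq. (25)
    return max(beta * I_AB - chi_EB, 0.0)                        # Eq. (22)

print(cv_key_rate(10.0))    # mutual information and unconditional eigenvalues at 10 km
```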
Channel noise sources
While propagating through a fiber-optical network, a quantum signal is inevitably impaired by losses and noise. In this article, we mainly consider three effects contributing to the overall noise: SpRS, FWM nonlinearity, and LCXT. We then analyze the way they affect the quantum channel and, subsequently, the QKD system performance.
Spontaneous Raman Scattering
Raman scattering is a third-order nonlinear effect that should be addressed in terms of optical fiber communication systems utilizing DWDM.
The effect of the SpRS results in the broadband noise in fiber-optical networks. This type of noise is considered to be insignificant for classical networking, though its effect on QKD systems is substantial [22,23]. The way it affects quantum channels depends on the relative shift of the spectrum between quantum and classical channels. The SpRS noise can be minimized by a proper choice of information and quantum channels' configurations.
Regarding the propagation direction, there are yet two types of spontaneous Raman scattering noise to be addressed in terms of signal propagation in medium, namely, forward and backward SpRS noises. The first one occurs when signal and pump lights are co-propagating, whereas the second one takes place in case of their counter-propagation. Here though the situation where the signals in quantum and classical channels propagate in optical fiber along the same direction is considered. This being the case, forward SpRS noise induced by the presence of classical channels is given by [10,17]:
P_ram,f = P_out L Σ_{i=1}^{N_ch} ρ(λ_c^(i), λ_q) Δλ.   (36)

In the expression above, P_out denotes the output power of a single channel, L is the length of the optical fiber, N_ch is the number of classical channels present in the DWDM system, ρ(λ_c, λ_q) describes the normalized scattering cross-section for the wavelengths of the classical (λ_c) and quantum (λ_q) channels, and Δλ is the bandwidth of the quantum channel filtering system.
Here we operate in terms of output power values, not input ones. The reason is that the bit error rate (BER) requirements of a DWDM system are addressed directly if the output power value is utilized. The latter can be obtained through the receiver sensitivity and the insertion losses IL of the system:

P_out (dBm) = P_Rx (dBm) + IL (dBm),   (37)

where P_Rx is the sensitivity of the receiver and IL denotes the insertion losses of the system.
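As a small worked illustration of Eqs. (36)-(37), the snippet below converts the receiver sensitivity into a per-channel launch power and estimates the forward SpRS power picked up by the quantum-channel filter. The value and units assumed for the scattering cross-section ρ and the conversion of the 15 GHz filter bandwidth into nanometers are illustrative assumptions, not values taken from the text.

```python
C = 2.99792458e8  # speed of light, m/s

def p_out_dbm(p_rx_dbm=-32.0, il_db=8.0):
    """Eq. (37): per-channel output power from receiver sensitivity and insertion losses."""
    return p_rx_dbm + il_db

def sprs_forward_mw(p_out_mw, L_km, n_ch, rho_per_km_nm, dlambda_nm):
    """Eq. (36) for identical classical channels: P_ram,f = P_out * L * N_ch * rho * dlambda."""
    return p_out_mw * L_km * n_ch * rho_per_km_nm * dlambda_nm

lam_q = 1536.61e-9                                   # quantum channel wavelength, m
dlambda_nm = (lam_q ** 2) * 15e9 / C * 1e9           # 15 GHz filter width expressed in nm
p_out_mw = 10 ** (p_out_dbm() / 10)                  # dBm -> mW
P_ram = sprs_forward_mw(p_out_mw, L_km=50, n_ch=10,
                        rho_per_km_nm=3e-9, dlambda_nm=dlambda_nm)  # rho: assumed value
print(f"Forward SpRS power ~ {P_ram:.2e} mW")
```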
Four-wave mixing
Four-wave mixing is, by its origin, a third-order nonlinear process in fiber transmission. Source photons interact within a fiber in such a way that additional photons at new frequencies are created from the initial ones. At the same time, energy-momentum conservation is preserved, i.e., there is no real excitation of the medium [24]. Generally, FWM is considered to be negligibly small in terms of QKD with DWDM, as its effect on a quantum channel can be minimized by properly choosing the classical channel separation or by avoiding the phase-matching conditions [10]. However, depending on the chosen configuration of classical and quantum channels, the stimulated FWM process can result in the generation of photons at frequencies of the quantum channel [25] and thus adds to the overall noise in the quantum channel band.
For three pump channels featuring frequencies f_i, f_j, and f_k, the value of the resulting FWM noise peak power generated at a new frequency f_i + f_j − f_k is given by [10]:

P_ijk = [η D² γ² P_i P_j P_k / (9α²)] e^(−αL) (1 − e^(−αL))²,   (38)

where the phase-matching efficiency for FWM, η, and the parameter Δβ are defined as:

η = [α² / (α² + Δβ²)] · [1 + 4e^(−αL) sin²(ΔβL/2) / (1 − e^(−αL))²],   (39)

and

Δβ = (2πλ_k²/c) |f_i − f_k||f_j − f_k| · [D_c + (λ_k²/(2c)) (dD_c/dλ) (|f_i − f_k| + |f_j − f_k|)],   (40)

correspondingly. In the equations above, L is the transmission distance of the interacting light fields in the optical fiber, D denotes the degeneracy factor (D = 6 for i ≠ j, D = 3 for i = j), P_{i,j,k} and f_{i,j,k} are the input powers and optical frequencies of the interacting fields, correspondingly, γ stands for the third-order nonlinear coefficient, α is the loss coefficient, and D_c and dD_c/dλ are the dispersion coefficient of the optical fiber and its slope, respectively, with λ_k standing for the wavelength of the FWM radiation.
Finally, the resulting FWM noise power is obtained as a sum of the FWM products whose frequencies coincide with that of the quantum channel, f_q:

P_FWM = Σ_{i,j,k: f_i + f_j − f_k = f_q} P_ijk.   (41)
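Equations (38)-(41) lend themselves to a direct brute-force evaluation: for every channel triple whose mixing product lands on the quantum channel, compute the product power and accumulate. The sketch below does exactly that in SI units; the nonlinear coefficient, dispersion parameters, and the three-channel demo grid are assumed typical values, not parameters taken from the text.

```python
import math

C = 2.99792458e8  # speed of light, m/s

def fwm_noise_power(freqs_hz, powers_w, f_q, L_km=50.0, alpha_db_km=0.18,
                    gamma=1.3e-3, D_c=17e-6, dDc_dlam=90.0, lam=1536.61e-9, tol=1e9):
    """Total FWM power falling on the quantum channel (Eqs. 38-41), SI units.
    gamma [1/(W m)], D_c [s/m^2, ~17 ps/(nm km)], dDc_dlam [s/m^3, ~0.09 ps/(nm^2 km)]."""
    alpha = alpha_db_km * math.log(10) / 10 / 1e3      # dB/km -> 1/m
    L = L_km * 1e3
    total = 0.0
    n = len(freqs_hz)
    for i in range(n):
        for j in range(i, n):                          # unordered pair {i, j}
            for k in range(n):
                if abs(freqs_hz[i] + freqs_hz[j] - freqs_hz[k] - f_q) > tol:
                    continue                           # product misses the quantum channel
                D = 3.0 if i == j else 6.0             # degeneracy factor
                dfik = abs(freqs_hz[i] - freqs_hz[k])
                dfjk = abs(freqs_hz[j] - freqs_hz[k])
                dbeta = (2 * math.pi * lam ** 2 / C) * dfik * dfjk * (
                    D_c + (lam ** 2 / (2 * C)) * dDc_dlam * (dfik + dfjk))      # Eq. (40)
                eff = (alpha ** 2 / (alpha ** 2 + dbeta ** 2)) * (
                    1 + 4 * math.exp(-alpha * L) * math.sin(dbeta * L / 2) ** 2
                    / (1 - math.exp(-alpha * L)) ** 2)                          # Eq. (39)
                total += (eff * D ** 2 * gamma ** 2 * powers_w[i] * powers_w[j]
                          * powers_w[k] * math.exp(-alpha * L)
                          * (1 - math.exp(-alpha * L)) ** 2 / (9 * alpha ** 2)) # Eq. (38)
    return total

grid = [193.0e12, 193.1e12, 193.2e12]                  # three classical channels, 100 GHz apart
print(fwm_noise_power(grid, [1e-3] * 3, f_q=193.3e12)) # FWM power at the quantum channel, W
```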
Linear channel cross-talk
In practice, any DWDM system suffers losses due to linear channel crosstalk (LCXT). The mechanism of LCXT is associated with the imperfection of the demultiplexers, i.e., their inability to prevent a part of the radiation corresponding to undesired wavelengths from reaching the photodetector [26]. The corresponding noise can affect the weak quantum signal significantly if the isolation from the more powerful classical channels is not sufficient. The power leakage from the filter into a quantum channel can be obtained as follows:

P_LCXT (dBm) = P_out (dBm) − ISOL (dB).   (42)
Noteworthy, the noise power calculated for all the processes mentioned can then be transformed into a photon detection probability, which is used to calculate the secure key generation rate. Mathematically, the relationship between the noise power and the corresponding photon detection probability is described as follows:

p_{ram,f/FWM/LCXT} = [P_{ram,f/FWM/LCXT} η η_t / (hc/λ_q)] Δt,   (43)

where η denotes the detector efficiency, η_t = 10^(−0.1·IL_det) is the transmission coefficient associated with the insertion losses IL_det of the detection system, h is the Planck constant, and c is the speed of light.
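Finally, Eqs. (42)-(43) turn the optical noise powers into the click probabilities that enter the key rate formulas. A sketch of this last conversion step is given below; the 80 dB isolation and the 2 dB detection-system insertion loss in the example are illustrative assumptions.

```python
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def lcxt_power_dbm(p_out_dbm, isol_db):
    """Eq. (42): classical power leaking through the demultiplexer into the quantum channel."""
    return p_out_dbm - isol_db

def noise_click_probability(P_noise_w, lam_q, eta=0.1, il_det_db=2.0, dt=1e-9):
    """Eq. (43): probability of a noise click within one detector gate of width dt."""
    eta_t = 10 ** (-0.1 * il_det_db)           # detection-system insertion losses
    photon_energy = H * C / lam_q              # energy of one photon at the quantum wavelength
    return P_noise_w * eta * eta_t * dt / photon_energy

p_leak_dbm = lcxt_power_dbm(-24.0, 80.0)       # -32 dBm sensitivity + 8 dB IL, 80 dB isolation
p_leak_w = 10 ** (p_leak_dbm / 10) * 1e-3      # dBm -> W
print(noise_click_probability(p_leak_w, lam_q=1536.61e-9))
```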
Configurations and motivation for their choice
In the course of further work, using numerical methods we have simulated simultaneous propagation of 10 (or 40) classical channels and one quantum channel in a single optical fiber. Channel wavelengths corresponded to the DWDM grid. Their frequencies were estimated according to ITU standard with 100 GHz grid spacing applying the following formula:
f_i [THz] = 191.6 + i · G,   (44)

where i is the channel number and G is the grid spacing (in THz). The mathematical model addressed propagation in the presence of three channel noise sources: SpRS, FWM, and LCXT, the detailed mathematical description of which was given above. Within the framework of this model, the secure key generation rate was estimated for three types of QKD protocols: COW QKD, SCW QKD, and CV QKD. The value of the maximal achievable propagation distance served as an optimality criterion for the configurations under consideration: a solution was recognized as optimal when the maximal values of the propagation distance were achieved.
Further analysis showed that channel location significantly affects the behavior of the noise in a channel. It has previously been shown [27] that the greatest contribution was made by SpRS. For this reason, the configurations were primarily selected in such a way as to minimize it.
The authors of [28] propose to locate a quantum channel between groups of classical channels and to assign shorter wavelengths to quantum channels and longer wavelengths to classical ones. This approach is explained by the analysis of the Raman scattering cross-section graph: the latter takes its smallest values at wavelengths immediately to the right of (longer than) and to the left of (shorter than) the pump wavelength. However, it must be noted that a 200 GHz grid and bidirectional information channels are considered in the above-cited article, which is not fully consistent with our requirements. Nevertheless, following a similar approach, we selected configurations that satisfy the necessary conditions. In order to find the best possible channel configuration, the following approach was implemented. We assigned wavelengths from the considered range, in increments of the grid spacing, to the quantum channel and calculated the maximal achievable distance, in each case placing the information channels at wavelengths chosen according to the observations mentioned above. A placement was pronounced optimal when the regions of the smallest SpRS cross-section values were exploited most.
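The placement search just described can be expressed as a short brute-force loop: for every candidate slot of the quantum channel, place the classical channels according to the cross-section criterion and score the candidate. The sketch below uses the accumulated SpRS cross-section seen by the quantum channel as a proxy objective (the full criterion in this work is the maximal achievable distance), and the toy cross-section model in the demo is purely illustrative.

```python
def itu_grid(n_slots, start_thz=191.6, spacing_thz=0.1):
    """Eq. (44): uniform DWDM grid, f_i = 191.6 THz + i * grid spacing."""
    return [start_thz + i * spacing_thz for i in range(n_slots)]

def best_quantum_slot(grid, n_classical, rho):
    """Try every slot for the quantum channel; for each, pick the n_classical slots with the
    smallest cross-section rho(f_c, f_q) and keep the placement with the smallest total."""
    best_idx, best_cost = None, float("inf")
    for q_idx, f_q in enumerate(grid):
        others = sorted((f for i, f in enumerate(grid) if i != q_idx),
                        key=lambda f_c: rho(f_c, f_q))
        cost = sum(rho(f_c, f_q) for f_c in others[:n_classical])
        if cost < best_cost:
            best_idx, best_cost = q_idx, cost
    return best_idx, best_cost

# toy, monotone cross-section model used only to make the example runnable
toy_rho = lambda f_c, f_q: abs(f_c - f_q)
print(best_quantum_slot(itu_grid(45), n_classical=10, rho=toy_rho))
```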
As a result, it was found that the maximal achievable distances for the specified parameters were 61.61 km and 32.95 km for configurations with 10 and 40 classical channels, respectively. The location of the channels for these cases is shown in Figure 2. The wavelength of a quantum channel is 1536.61 nm and 1537.40 nm for the first and second solutions correspondingly. Thus, four configurations were considered for further analysis. Their description is summarized and presented in Table 1.
Results and discussion
To analyze the performance of SCW QKD, COW QKD, and CV QKD protocols for the discussed configurations, the numerical simulation was performed. The parameters describing the DWDM system are presented in Table 2. For the selected configurations, by means of numerical simulations the dependence of the secure key generation rate on the optical fiber length was obtained. The results are shown in Figure 3.
As can be seen from the graphs, the largest value of the maximal distance is achieved by the COW protocol (red line in the graphs) for the configurations with a quantum channel in the C-band (namely, Config. #1 & #3), whereas the SCW protocol (blue line in the graphs) outperforms the others for the configurations with a quantum channel wavelength of 1310 nm, showing, however, only a small advantage over the COW QKD protocol. In turn, the shortest distances correspond to the CV QKD protocol (yellow line in the graphs) in all the cases considered. Moreover, an increase in the number of channels leads to a drastic decrease in the maximal achievable distances of the QKD systems when the quantum channel is placed in the C-band (see Figure 3a and Figure 3c), while no significant change is observed for the configurations with a quantum channel at 1310 nm (see Figure 3b and Figure 3d). For the configuration with 40 channels and a quantum channel in the C-band, the smallest values of the maximal distance are obtained for all the QKD protocols considered (see Figure 3c).
It should be noted that it is not entirely correct to compare the results obtained for the DV protocols (namely, COW and SCW) and CV QKD directly. The reason is the difference in the detection methods used. CV QKD systems employ coherent detection (either homodyne or heterodyne), while DV QKD systems utilize single-photon detectors, i.e., count photons. From a theoretical perspective, the difference occurs due to the fact that the dimension of the Hilbert space is infinite for CV QKD, which corresponds to a 2n-mode Fock space [29].
Therefore, it is preferable that a comparative analysis of configurations within each of the protocols be addressed also. The graphs are presented in Figure 4.
As already mentioned, Config. #3 (40 channels, quantum channel in the C-band) is the most unprofitable one for all the protocols considered. The situation is far less unambiguous when looking at the best achievable results, though. In the case of SCW QKD (Figure 4a), it is Config. #2 (10 channels, quantum channel at 1310 nm) that shows the best performance, although it has only a small advantage over Config. #4 (40 channels, quantum channel at 1310 nm). The two latter configurations are barely distinguishable for the COW QKD protocol as well (Figure 4b), although here the best results correspond to Config. #1 (10 channels, quantum channel in the C-band). For the CV QKD system (Figure 4c), the difference between the maximal achievable distance values for configurations #1, #2, and #4 is indistinct. Noteworthy, although providing the shortest maximal achievable distance values, CV QKD systems have significantly greater secure key generation rates. Table 3 summarizes all the results obtained for all the configurations and QKD systems considered in the article.
Conclusion
In this work, by means of numerical simulations, a theoretical investigation of the performance of three QKD systems was conducted for the case in which they are integrated with a DWDM optical network in the presence of SpRS, FWM, and LCXT noise. The criterion used to estimate their performance was the maximal achievable distance of a QKD system, i.e., the distance up to which a secure key can still be generated. The comparative analysis showed a general tendency for all of the QKD systems considered: the configuration with 40 channels and a quantum channel in the C-band is the most unprofitable in terms of the maximal achievable distance of a system, corresponding to the minimal values. Allocating the 1310 nm wavelength to the quantum channel is beneficial when the number of information channels needs to be increased (e.g., to 40, as in this work), as the decrease in the maximal achievable distance value is then insubstantial. Noteworthy, an important feature of CV QKD systems is their high secure key generation rates.
Fig. 1. Representation of states used for the information encoding for different QKD systems: a) SCW, b) COW, and c) CV QKD.

Fig. 2. Schematic graphical representation of the proposed channel configurations (classical information channels correspond to the blue squares and quantum channels to the orange circles).

Fig. 3. The dependence of the secure key generation rate on the fiber length for the configurations considered: a) 10 channels and a quantum channel in the C-band (i.e., Config. #1), b) 10 channels and a quantum channel at a wavelength of 1310 nm (i.e., Config. #2), c) 40 channels and a quantum channel in the C-band (i.e., Config. #3), and d) 40 channels and a quantum channel at a wavelength of 1310 nm (i.e., Config. #4).

Fig. 4. The dependence of the secure key generation rate on the fiber length for a) SCW QKD, b) COW QKD, and c) CV QKD.
Table 1. Description of the optimal configurations chosen for numerical simulations

| Configuration | Number of channels | Quantum channel wavelength, nm |
|---|---|---|
| Config. #1 | 10 | 1536.61 |
| Config. #2 | 10 | 1310 |
| Config. #3 | 40 | 1537.40 |
| Config. #4 | 40 | 1310 |
Table 2. Parameters of the DWDM system

| Parameter | Value |
|---|---|
| Fiber attenuation α | 0.18 dB/km |
| Quantum channel filter bandwidth Δλ | 15 GHz |
| Number of classical channels N_ch | 10 or 40 |
| Receiver sensitivity P_Rx | −32 dBm |
| Insertion losses IL | 8 dB |

A criterion to estimate the efficiency of the QKD systems' performance for the considered configurations is their maximal achievable distance.
Table 3. Maximal achievable distance values for SCW, COW, and CV QKD systems for the four configurations considered

| QKD system | Number of channels | Max distance (C-band), km | Max distance (O-band), km |
|---|---|---|---|
| SCW | 10 | 61.15 | 66.28 |
| SCW | 40 | 32.94 | 65.12 |
| COW | 10 | 70.59 | 65.06 |
| COW | 40 | 42.75 | 64.45 |
| CV QKD | 10 | 28.86 | 29.75 |
| CV QKD | 40 | 13.27 | 29.44 |
Acknowledgments. The work was done by Leading Research Center "National Center for Quantum Internet" of ITMO University by order of JSCo Russian Railways.

Disclosures. The authors declare no conflicts of interest.
References

[1] V. Scarani, H. Bechmann-Pasquinucci, N. J. Cerf, M. Dušek, N. Lütkenhaus, and M. Peev, "The security of practical quantum key distribution," Rev. Mod. Phys. 81, 1301-1350 (2009).
[2] S. Pirandola, U. L. Andersen, L. Banchi, M. Berta, D. Bunandar, R. Colbeck, D. Englund, T. Gehring, C. Lupo, C. Ottaviani, J. L. Pereira, M. Razavi, J. Shamsul Shaari, M. Tomamichel, V. C. Usenko, G. Vallone, P. Villoresi, and P. Wallden, "Advances in quantum cryptography," Adv. Opt. Photonics 12, 1012 (2020).
[3] N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, "Quantum cryptography," Rev. Mod. Phys. 74, 145 (2002).
[4] P. W. Shor, "Algorithms for quantum computation: Discrete logarithms and factoring," in Proceedings 35th Annual Symposium on Foundations of Computer Science (IEEE, 1994), pp. 124-134.
[5] C. H. Bennett and G. Brassard, "Quantum cryptography: Public key distribution and coin tossing," Theor. Comput. Sci. 560, 7-11 (2014).
[6] R. Renner, "Security of quantum key distribution," Tech. Rep. 1 (2008).
[7] N. Walenta, M. Soucarros, D. Stucki, D. Caselunghe, M. Domergue, M. Hagerman, R. Hart, D. Hayford, R. Houlmann, M. Legré, T. McCandlish, J.-B. Page, M. Tourville, and R. Wolterman, "Practical aspects of security certification for commercial quantum technologies," in Electro-Optical and Infrared Systems: Technology and Applications XII; and Quantum Information Science and Technology, vol. 9648 (SPIE, 2015), p. 96480U.
[8] P. A. Hiskett, D. Rosenberg, C. G. Peterson, R. J. Hughes, S. Nam, A. E. Lita, A. J. Miller, and J. E. Nordholt, "Long-distance quantum key distribution in optical fibre," New J. Phys. 8 (2006).
[9] S. Pirandola, "Limits and security of free-space quantum communications," Phys. Rev. Res. 3, 013279 (2021).
[10] M. Mlejnek, N. Kaliteevskiy, and D. Nolan, "Reducing spontaneous Raman scattering noise in high quantum bit rate QKD systems over optical fiber," arXiv preprint arXiv:1712.05891 (2017).
[11] J.-N. Niu, Y.-M. Sun, C. Cai, and Y.-F. Ji, "Optimized channel allocation scheme for jointly reducing four-wave mixing and Raman scattering in the DWDM-QKD system," Appl. Opt. 57, 7987 (2018).
[12] R. Kumar, H. Qin, and R. Alléaume, "Coexistence of continuous variable QKD with intense DWDM classical channels," New J. Phys. 17 (2015).
[13] A. V. Gleim, V. I. Egorov, Y. V. Nazarov, S. V. Smirnov, V. V. Chistyakov, O. I. Bannik, A. A. Anisimov, S. M. Kynev, A. E. Ivanova, R. J. Collins, S. A. Kozlov, and G. S. Buller, "Secure polarization-independent subcarrier quantum key distribution in optical fiber channel using BB84 protocol with a strong reference," Opt. Express 24, 2619 (2016).
[14] J. Merolla, Y. Mazurenko, and J. Goedgebuer, "Quantum cryptography using frequency modulation of weak light pulses," (Institute of Electrical and Electronics Engineers (IEEE), 2005), pp. 101-101.
[15] G. P. Miroshnichenko, A. V. Kozubov, A. A. Gaidash, A. V. Gleim, and D. B. Horoshko, "Security of subcarrier wave quantum key distribution against the collective beam-splitting attack," Opt. Express 26, 11292 (2018).
[16] D. Stucki, N. Brunner, N. Gisin, V. Scarani, and H. Zbinden, "Fast and simple one-way quantum key distribution," Appl. Phys. Lett. 87, 1-3 (2005).
[17] P. Eraerds, N. Walenta, M. Legré, N. Gisin, and H. Zbinden, "Quantum key distribution and 1 Gbps data encryption over a single fibre," New J. Phys. 12 (2010).
[18] F. Laudenbach, C. Pacher, C.-H. F. Fung, A. Poppe, M. Peev, B. Schrenk, M. Hentschel, P. Walther, and H. Hübel, "Continuous-variable quantum key distribution with Gaussian modulation - the theory of practical implementations," Adv. Quantum Technol. 1, 1800011 (2018).
[19] C. Weedbrook, S. Pirandola, R. García-Patrón, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and S. Lloyd, "Gaussian quantum information," Rev. Mod. Phys. 84, 621-669 (2012).
[20] F. Laudenbach and C. Pacher, "Analysis of the trusted-device scenario in continuous-variable quantum key distribution," Adv. Quantum Technol. 2, 1900055 (2019).
[21] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2010).
[22] R. Lin and J. Chen, "Minimizing spontaneous Raman scattering noise for quantum key distribution in WDM networks," Tech. Rep. (2021).
[23] C. Cai, Y. Sun, and Y. Ji, "Intercore spontaneous Raman scattering impact on quantum key distribution in multicore fiber," New J. Phys. 22, 083020 (2020).
[24] R. W. Boyd, Nonlinear Optics, 4th ed. (Academic Press, 2020).
[25] Q. Lin, F. Yaman, and G. P. Agrawal, "Photon-pair generation in optical fibers through four-wave mixing: Role of Raman scattering and pump polarization," Phys. Rev. A 75 (2007).
[26] A. M. Hill and D. B. Payne, "Linear crosstalk in wavelength-division-multiplexed optical-fiber transmission systems," J. Light. Technol. LT-3, 643-651 (1985).
[27] F. Kiselev, N. Veselkova, R. Goncharov, and V. Egorov, "A theoretical study of subcarrier-wave quantum key distribution system integration with an optical transport network utilizing dense wavelength division multiplexing," J. Phys. B: At. Mol. Opt. Phys. 54 (2021).
[28] S. Bahrani, M. Razavi, and J. A. Salehi, "Wavelength assignment in hybrid quantum-classical networks," Sci. Rep. 8 (2018).
[29] A. Leverrier, "Security of continuous-variable quantum key distribution via a Gaussian de Finetti reduction," Phys. Rev. Lett. 118, 200501 (2017).
| []
|
[
"FairStyle: Debiasing StyleGAN2 with Style Channel Manipulations Black and Female Black and Male Non-Black and Female Non-Black and Male Figure 1. Sample outputs from the StyleGAN2 model debiased using our method with respect to Black+Gender attributes",
"FairStyle: Debiasing StyleGAN2 with Style Channel Manipulations Black and Female Black and Male Non-Black and Female Non-Black and Male Figure 1. Sample outputs from the StyleGAN2 model debiased using our method with respect to Black+Gender attributes"
]
| [
"Cemre Karakas \nBogaziçi University Istanbul\nTurkey\n",
"Alara Dirik \nBogaziçi University Istanbul\nTurkey\n",
"Eylul Yalcinkaya [email protected] \nBogaziçi University Istanbul\nTurkey\n",
"Pinar Yanardag [email protected] \nBogaziçi University Istanbul\nTurkey\n"
]
| [
"Bogaziçi University Istanbul\nTurkey",
"Bogaziçi University Istanbul\nTurkey",
"Bogaziçi University Istanbul\nTurkey",
"Bogaziçi University Istanbul\nTurkey"
]
| []
| Recent advances in generative adversarial networks have shown that it is possible to generate high-resolution and hyperrealistic images. However, the images produced by GANs are only as fair and representative as the datasets on which they are trained. In this paper, we propose a method for directly modifying a pre-trained StyleGAN2 model that can be used to generate a balanced set of images with respect to one (e.g., eyeglasses) or more attributes (e.g., gender and eyeglasses). Our method takes advantage of the style space of the StyleGAN2 model to perform disentangled control of the target attributes to be debiased. Our method does not require training additional models and directly debiases the GAN model, paving the way for its use in various downstream applications. Our experiments show that our method successfully debiases the GAN model within a few minutes without compromising the quality of the generated images. To promote fair generative models, we share the code and debiased models at | 10.1007/978-3-031-19778-9_33 | [
"https://arxiv.org/pdf/2202.06240v1.pdf"
]
| 246,823,309 | 2202.06240 | 6ec7a6c0cd8679e64792f38c61f79a624a1268d4 |
FairStyle: Debiasing StyleGAN2 with Style Channel Manipulations

Figure 1. Sample outputs from the StyleGAN2 model debiased using our method with respect to Black+Gender attributes (Black and Female, Black and Male, Non-Black and Female, Non-Black and Male).
Cemre Karakas
Bogaziçi University Istanbul
Turkey
Alara Dirik
Bogaziçi University Istanbul
Turkey
Eylul Yalcinkaya [email protected]
Bogaziçi University Istanbul
Turkey
Pinar Yanardag [email protected]
Bogaziçi University Istanbul
Turkey
FairStyle: Debiasing StyleGAN2 with Style Channel Manipulations
* Equal contribution
Recent advances in generative adversarial networks have shown that it is possible to generate high-resolution and hyperrealistic images. However, the images produced by GANs are only as fair and representative as the datasets on which they are trained. In this paper, we propose a method for directly modifying a pre-trained StyleGAN2 model that can be used to generate a balanced set of images with respect to one (e.g., eyeglasses) or more attributes (e.g., gender and eyeglasses). Our method takes advantage of the style space of the StyleGAN2 model to perform disentangled control of the target attributes to be debiased. Our method does not require training additional models and directly debiases the GAN model, paving the way for its use in various downstream applications. Our experiments show that our method successfully debiases the GAN model within a few minutes without compromising the quality of the generated images. To promote fair generative models, we share the code and debiased models at http://catlab-team.github.io/fairstyle.
Introduction
Generative Adversarial Networks (GANs) [8] are popular image generation models capable of synthesizing highquality images, and they have been used for a variety of visual applications [18,28,33,34,41,42]. Like any other deep learning model, GANs are essentially statistical models trained to learn a data distribution and generate realistic data that is indistinguishable to the discriminator from that in the training set. To achieve this, GANs exploit and favor the samples that provide the most information, and may neglect minority samples. Therefore, a well-trained GAN favors learning the majority attributes, and the samples they generate suffer from the same biases in the datasets on which they are trained. For example, a GAN, trained on a face dataset with few images of non-Caucasian individuals, will generate images of mostly Caucasian individuals [21,29]. Our preliminary analysis of the pre-trained StyleGAN2-FFHQ model confirms the significance of the generation bias: out of 10K randomly generated images, the male attribute is present in 42%, the young attribute is present in 70%, and the eyeglasses attribute is present in 20%. Our analysis shows that these biases also exist in the FFHQ training data with 42%, 72%, and 22% for the male, young and eyeglasses attributes, respectively (see Appendix A for more details). These examples show that GANs not only inherit biases from the training data, but also carry over to the applications built on top of them. This is a particularly important issue because pre-trained large-scale GANs such as StyleGAN2 [15] are often used as the backbone of various computer vision applications in a variety of domains such as image processing, image generation and manipulation, anomaly detection, dataset generation and augmentation. Therefore, any model or application that depends on large pre-trained models such as StyleGAN2 would inherit or even amplify their biases and is therefore bound to be unfair.
In this work, we aim to address the problem of fairness in GANs by debiasing a pre-trained StyleGAN2 model with respect to single or multiple attributes. After debiasing, the edited StyleGAN2 models allow the user to generate unbiased images in which the target attributes are fairly represented. Unlike previous work that requires extensive preprocessing or training an additional model for each target attribute, our approach directly debiases the GAN model to produce more balanced outputs, and it can also be used for various downstream applications. Moreover, our approach does not require any sub-sampling of the input or output data, and is able to debias the GAN model within minutes without compromising the image quality. Our main contributions are as follows:
• We first propose a simple method that debiases the GAN model with respect to a single attribute, such as gender or eyeglasses.
• We then extend our method for jointly debiasing multiple attributes such as gender and eyeglasses.
• To handle more complex attributes such as race, we propose a third method based on CLIP [24], where we debias StyleGAN2 with text-based prompts such as 'a black person' or 'an asian person'.
• We perform extensive comparisons between our proposed method and other approaches to enforce fairness for a variety of attributes. We empirically show that our method is very effective in de-biasing the GAN model to produce balanced datasets without compromising the quality of the generated images.
• To promote fair generative models and encourage further research on this topic, we provide our source code and debiased StyleGAN2 models for various attributes at http://catlab-team.github.io/fairstyle.
Related Work
In this section, we first review related work in fairness and bias. We then discuss studies that specifically address fairness and bias in generative models. Finally, we discuss related work in the area of latent space manipulation.
Fairness and Bias in AI
Fairness and bias detection in deep neural networks have attracted much attention in recent years [5,22]. Most existing work on fairness focuses on studying the fairness of classifiers, as the predictions of these models can be directly used for discriminatory purposes or associate unjustified stereotypes with a particular class. Approaches to eliminating model bias can be divided into three main categories: Preprocessing methods that aim to collect balanced training data [19,20,40], methods to introduce constraints or regularizers into the training process [2,36,39], and postprocessing methods that modify the posteriors of the trained models to debias them [6,11]. In our work, we focus on debiasing and fairness methods developed specifically for GANs, which we discuss below.
Detecting and Eliminating Biases in GANs
The fairness of generative models is much less studied compared to the fairness of discriminative models. Most research on the bias and fairness of GANs aims to either eliminate the negative effects of using imbalanced data on generation results or to identify and explain the biases. Research on bias and fairness of GANs can be divided into three main categories: improving the training and generation performance of GANs using biased datasets, identifying and explaining biases, and debiasing pre-trained GANs.
The first research category, training GANs on biased datasets, aims to solve the problem of low quality image generation when the model is trained on imbalanced datasets with disjoint manifolds and fails to learn the true data distribution. [31] proposes a heuristic motivated by rejection sampling to inject disconnectedness into GAN training to improve learning on disconnected manifolds. [30] proposes Discriminator Optimal Transport (DOT), a gradient ascent method driven by a Wasserstein discriminator to improve samples. [3] uses a rejection sampling method to approximately correct errors in the distribution of the GAN generator. [9] proposes a weakly supervised method to detect bias in existing datasets and assigns importance weights to samples during training. The second category of research aims to detect or explain bias in generative models. [17] proposes to use attribute-specific classifiers and train a generative model to specifically explain which style channels of StyleGAN2 contribute to the underlying classifier decisions. The third line of research aims to debias and improve the sample quality of pre-trained GANs. [10] proposes to train a probabilistic classifier to distinguish samples from two distributions and use this likelihood-free importance weighting method to correct for bias in generative models. However, this method requires training a classifier for each attribute targeted for debiasing and cannot handle biases in multiple attributes (e.g., gender and eyeglasses). [29] proposes a conditional latent space sampling method to generate attribute-balanced images. More specifically, latent codes from StyleGAN2 are sampled and classified. Then, a Gaussian Mixture Model (GMM) is trained for each attribute to create a set of balanced latent codes. Another recent work, [25], proposes to use the latent codes from the W -space of StyleGAN2 to train a linear SVM model for each attribute and then use the normal vector to the separation hyperplane to steer the latent code away from or towards acquiring the target attribute for debiasing. Unlike [25,29], our method does not require model training and aims to directly debias the GAN model which can be used to generate attribute-balanced image sets.
Latent Space Manipulation
Several methods have been proposed to exploit the latent space of GANs for image manipulation, which can be divided into two broad categories: supervised and unsupervised methods. Supervised approaches typically benefit from pre-trained attribute classifiers that guide the optimization process to discover meaningful directions in the latent space, or use labeled data to train new classifiers that directly aim to learn directions of interest [7,26]. Other work shows that it is possible to find meaningful directions in latent space in an unsupervised manner [13,32]. GANSpace [12] proposes to apply principal component analysis (PCA, [35]) to randomly sampled latent vectors in the intermediate layers of the BigGAN and StyleGAN models. A similar approach is used in SeFA [27], where they directly optimize the intermediate weight matrix of the GAN model in closed form. LatentCLR [38] proposes a contrastive learning approach to find unsupervised directions that are transferable to different classes. In addition, both StyleCLIP [23] and StyleMC [16] use CLIP to find text-based directions within StyleGAN2 and perform both coarse and fine-grained manipulations of different attributes. Another recent work, StyleFlow [1], proposes a method for attribute-conditioned sampling and attribute-controlled editing with StyleGAN2. With respect to GAN editing, [4] proposes a method to permanently change the parameters of a GAN to produce images in which the desired attribute (e.g., clouds, thick eyebrows) is always present. However, they did not aim to debias GANs for fairness and their methodology differs from ours.
Methodology
In this section, we propose three methods to debias a pre-trained StyleGAN2 model. We begin with a brief description of the StyleGAN2 architecture and then describe our methods for debiasing a single attribute, joint debiasing of multiple attributes, and debiasing with text-based directions. Figure 2 illustrates a general view of our framework.
Background on StyleGAN2
The generator of StyleGAN2 contains several latent spaces: Z, W, W+ and S, also referred to as the style space. z ∈ Z is a latent vector drawn from a prior distribution p(z), typically chosen as a Gaussian. The generator G acts as a mapping function G : Z → X , where X is the target image domain. Therefore, G transforms the vectors from z into an intermediate latent space W by forward propagating them through 8 fully connected layers. The resulting latent vectors w ∈ W are then transformed into channel-wise style parameters, forming the style space, denoted S. In our work, we use the style space S to perform manipulations, as it is shown [37] to be the most disentangled, complete and informative space of StyleGAN2.
The synthesis network of the generator in StyleGAN2 consists of several blocks, each block having two convolutional layers for synthesizing feature maps. Each main block has an additional 1 × 1 convolutional layer that maps the output feature tensor to RGB colors, referred to as tRGB. The three different style code vectors are referred to as s B1 , s B2 , and s B+tRGB , where B indicates the block number. Given a block B, the style vectors s B1 and s B2 of each block consist of style channels that control disentangled visual attributes. The style vectors of each layer are obtained from the intermediate latent vectors w ∈ W of the same layer by three affine transformations,
w B1 → s B1 , w B2 → s B2 , w B2 → s B+tRGB .
Measuring Generation Bias
To assess whether our method produces a balanced distribution of attributes, we begin by formulating and quantifying the bias in the generated images. Given an ndimensional image dataset I ⊆ R n , GANs attempt to learn such a distribution P (I) = P data (I). Thus, a welltrained generator is a mapping function G : Z → I, where Z ⊆ R m denotes the m-dimensional latent space, usually assumed to be a Gaussian distribution. Moreover, we can sample latent codes z and use the trained model to generate
a realistic dataset D = {G (z i )} N i=1
of N generated images belonging to the distribution P (I) ≈ P data (I).
Assuming that real and generated images contain k semantic attributes a_1, a_2, ..., a_k, a well-trained GAN learns any bias inherent in the original data distribution P_data(I) with respect to the semantic attributes. In our work, we are interested in finding both the marginal distribution of the individual semantic attributes P(a_i) and the joint distributions of the attribute pairs P(a_i, a_j) of the generated dataset D. To measure generation bias, we generate N random images with pre-trained StyleGAN2 trained on the FFHQ dataset, and use 40 pre-trained binary attribute classifiers [14] to assign labels to each image such that a_i = 1 if the image contains the attribute a_i, and a_i = 0 otherwise.

Figure 2. An overview of the FairStyle architecture. z denotes a random vector drawn from a Gaussian distribution, w denotes the latent vector generated by the mapping network of StyleGAN2. Given a target attribute a_t, s_{i,j} represents the style channel with layer index i and channel index j controlling the target attribute. We introduce fairstyle bias tensors into the GAN model, in which we edit the corresponding style channel s_{i,j} for debiasing. The edited vectors are then fed into the generator to get a new batch of images from which we obtain updated classifier results for a_t. The fairstyle bias tensors are iteratively edited until the GAN model produces a balanced distribution with respect to the target attribute. The de-biased GAN model can then be used for sampling purposes or directly used as a generative backbone model in downstream applications.
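A minimal sketch of this measurement step is given below. The generator and the attribute classifiers are passed in as plain callables, since their exact interfaces (a StyleGAN2 wrapper and the CelebA classifiers of [14]) are not specified here; the function simply estimates the marginal frequencies P(a_i) over N generated images.

```python
import torch

@torch.no_grad()
def attribute_distribution(generator, classifiers, n_samples=10_000,
                           batch_size=32, z_dim=512, device="cuda"):
    """Estimate P(a_i) for each attribute: generator(z) -> images,
    classifiers[name](images) -> probabilities in [0, 1] (assumed interfaces)."""
    counts = {name: 0 for name in classifiers}
    seen = 0
    while seen < n_samples:
        n = min(batch_size, n_samples - seen)
        z = torch.randn(n, z_dim, device=device)
        images = generator(z)
        for name, clf in classifiers.items():
            counts[name] += (clf(images) > 0.5).sum().item()   # count positive classifications
        seen += n
    return {name: count / seen for name, count in counts.items()}
```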
Identifying channels that control certain attributes
For a target attribute a t such as eyeglasses, we first propose a simple approach that identifies a single style channel s i,j responsible for controlling the target attribute, where layer and channel indices are denoted by i and j, respectively. We assume that there is a binary classifier C at corresponding to the target attribute, such as pre-trained CelebA binary classifiers [14]. The identified style channel s i,j is then used for debiasing the GAN model with respect to single (Section 3.4) and multiple attributes (Section 3.5).
To identify s i,j, we first generate N = 128 random noise vectors and obtain their style codes using StyleGAN2. Given an arbitrary style code s, we generate two perturbed style codes by adding and subtracting a value c at layer index i and channel index j. This process is repeated for the 128 randomly generated style codes, and each perturbed style code is forward propagated through the StyleGAN2 generator to synthesize images. Finally, we identify the channel s i,j corresponding to the target attribute by selecting the style channel for which the perturbation causes the highest average change in classification score over the batch of N = 128 images:
arg max_{i,j}  (1/N) Σ_{k=1}^{N} | C_{a_t}( G(s^k − Δs_{i,j}) ) − C_{a_t}( G(s^k + Δs_{i,j}) ) |        (1)
where ∆s i,j represents the perturbation value c, k denotes the index of the generated image, and G denotes the generator of StyleGAN2. In other words, we repeat the same process for each channel of the style codes and leave the values of the other style channels unchanged. In our experiments, we use the perturbation value c = 10.
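The search in Eq. (1) can be implemented as a brute-force sweep over layers and channels. The sketch below is ours and makes assumptions that are not part of the paper's released code: the generator accepts a list of per-layer style tensors, and the classifier returns one score per image. In practice the sweep is restricted to the layers described in the experimental setup and is batched for efficiency.

```python
import torch

@torch.no_grad()
def identify_channel(generator, classifier, styles, c=10.0):
    """Rank style channels by how much a +/- c perturbation changes the
    classifier score, averaged over a batch of style codes (Eq. 1).

    styles: list of per-layer style tensors, each of shape (N, C_layer).
    generator(styles) -> batch of images; classifier(images) -> (N,) scores.
    Returns the (layer, channel) pair with the largest average score change.
    """
    best, best_delta = None, -1.0
    for layer, s in enumerate(styles):
        for ch in range(s.shape[1]):
            s_plus = [t.clone() for t in styles]
            s_minus = [t.clone() for t in styles]
            s_plus[layer][:, ch] += c
            s_minus[layer][:, ch] -= c
            delta = (classifier(generator(s_plus)) -
                     classifier(generator(s_minus))).abs().mean().item()
            if delta > best_delta:
                best, best_delta = (layer, ch), delta
    return best
```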
Debiasing single attributes
Once we have identified a style channel s i,j that controls the target attribute a t , we can perturb the value of the channel to increase or decrease the representation of the target attribute in the generated output. In our work, we use this intuition to edit the parameters of a pre-trained StyleGAN2 model that can be used to generate balanced outputs with respect to the target attribute a t .
To this end, we introduce additional bias tensors, which we call fairstyle tensors, into the GAN model (see Figure 2). These tensors are added to the StyleGAN2 convolution modulations in a channel-wise manner. More specifically, for a fairstyle tensor b, we set b i,j = c and b m,n = 0 for all (m, n) ≠ (i, j), where c is initialized to 0. In other words, the values inside the fairstyle tensor are set to zero except for the channel indices (i, j) that correspond to the target attribute.
We then iteratively generate a batch of N = 128 latent codes and compute their style vectors. Given an arbitrary style vector s, we compute the updated vector s′ = s + b. We forward propagate these updated style vectors to generate a batch of images and compute the distribution of the target attribute using an attribute classifier. Our goal is to optimize the fairstyle tensor b such that the images generated using the updated GAN model have a fair distribution with respect to the target attribute a t. Similar to [29], we use the Kullback-Leibler divergence between the class distribution of a t and a uniform distribution to compute a fairness loss value L fair, formulated as follows:
L_fair = KL( P_D(a_t) || U(a_t) )        (2)
where P_D denotes the class probability distribution of a_t over the generated batch and U denotes the uniform distribution. We use one-dimensional gradient descent (over the single scalar c) to optimize the fairstyle tensor b. The updated GAN model with the optimized fairstyle tensor can then be used to generate images with a balanced distribution with respect to the target attribute.
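A minimal sketch of this optimization loop is given below. It is our illustration rather than the released code: we assume a differentiable attribute classifier that returns soft scores, reduce the KL fairness loss for a binary attribute to a function of the positive rate, and leave the generator and classifier weights untouched (only the scalar bias c is updated).

```python
import torch

def debias_single_attribute(generator, classifier, sample_styles,
                            layer, channel, steps=100, lr=0.5, n=128):
    """Optimize the scalar bias c added to style channel (layer, channel) so
    that the positive rate of the target attribute approaches 0.5."""
    c = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([c], lr=lr)
    for _ in range(steps):
        styles = list(sample_styles(n))            # per-layer (n, C_layer) tensors
        onehot = torch.zeros_like(styles[layer][0])
        onehot[channel] = 1.0                      # fairstyle tensor: zero except (layer, channel)
        styles[layer] = styles[layer] + c * onehot
        scores = classifier(generator(styles))     # soft scores in (0, 1)
        p = scores.mean().clamp(1e-6, 1 - 1e-6)    # positive rate over the batch
        # KL([p, 1-p] || [0.5, 0.5]), minimized at p = 0.5
        loss = p * torch.log(2 * p) + (1 - p) * torch.log(2 * (1 - p))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return c.detach()
```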
Debiasing multiple attributes
While our first method is effective at debiasing the GAN model with respect to a single attribute such as eyeglasses, it does not allow for the joint debiasing of multiple attributes such as gender and eyeglasses. Therefore, we propose to extend our method to multiple attributes. Let a t1 and a t2 represent attributes that we want to jointly debias, such as gender and eyeglasses. Let s i1,j1 and s i2,j2 represent the target style channels identified by the method in Section 3.3 for attributes a t1 and a t2 , respectively. Similar to our first method, we iteratively generate N = 128 random noise vectors and their corresponding style codes. Given an arbitrary style code s, we then compute the fairstyle tensor for the corresponding channels as follows:
b_{i1,j1} = x_2 · ( s_{i2,j2} − s̄_{i2,j2} ) / σ_{s_{i2,j2}} + y_2,
b_{i2,j2} = x_1 · ( s_{i1,j1} − s̄_{i1,j1} ) / σ_{s_{i1,j1}} + y_1        (3)
where x_1, y_1, x_2, y_2 are learned parameters initialized at 0 and optimized using gradient descent over a batch of N images, and s̄_{i,j}, σ_{s_{i,j}} denote the mean and standard deviation of a given target style channel s_{i,j}, calculated as follows:

s̄_{i,j} = (1/N) Σ_{k=1}^{N} s_{i,j}^{(k)}        (4)

σ²_{s_{i,j}} = (1/(N−1)) Σ_{k=1}^{N} ( s_{i,j}^{(k)} − s̄_{i,j} )²        (5)

where s_{i,j}^{(k)} denotes the value of channel (i, j) for the k-th style code.
Similar to our first method, we use KL divergence as a loss function between the joint class distribution of attributes a t1 , a t2 and a uniform distribution. After optimizing the fairstyle tensor, we use the GAN model to produce a balanced distribution of images with respect to the target attributes.
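The coupled biases of Eq. (3), together with the statistics of Eqs. (4)-(5), can be written compactly as below. This sketch is ours; in particular, it computes the channel statistics from the current batch, whereas the experiments estimate them from a separate set of pre-generated images.

```python
import torch

def joint_fairstyle_bias(s1, s2, params):
    """Coupled bias values of Eq. (3) for two target style channels.

    s1, s2: 1-D tensors holding the values of the two target channels over a
    batch of style codes.
    params: tensor (x1, y1, x2, y2), optimized by gradient descent on the KL
    divergence between the joint attribute distribution and the uniform one.
    Returns per-sample biases (b1, b2) added to channels 1 and 2, respectively.
    """
    x1, y1, x2, y2 = params
    z1 = (s1 - s1.mean()) / s1.std()   # torch.std defaults to the N-1 denominator, as in Eq. (5)
    z2 = (s2 - s2.mean()) / s2.std()
    b1 = x2 * z2 + y2                  # bias on channel 1 driven by channel 2
    b2 = x1 * z1 + y1                  # bias on channel 2 driven by channel 1
    return b1, b2
```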
Our method can also be extended to support joint debiasing for more than two attributes. Let the number of attributes for which we want to jointly debias our model be M and assume that we have identified a style channel s i,j for each target attribute. In this case, each corresponding channel of the fairstyle tensor is updated as follows:
b_{i_m,j_m} = Σ_{k=1, k≠m}^{M} ( x_{m,k} · ( s_{i_k,j_k} − s̄_{i_k,j_k} ) / σ_{s_{i_k,j_k}} + y_{m,k} )        (6)
We note that Eq. 6 is simply a generalized version of Eq. 3 where each fairstyle tensor channel for a target depends on the other target channels. In this case, the number of resulting subclasses is equal to M 2 and the number of parameters to be learned is equal to 2 × M × (M − 1).
Debiasing attributes with text-based directions
The first two methods debias the GAN model with single or multiple channels, where the channels responsible for the desired attributes were identified using pre-trained attribute classifiers. However, the complexity of the attributes is limited by the availability of the classifiers. To debias even more complex attributes such as 'a black person' or 'an asian person', we debias style channels with text-based directions using CLIP. We use StyleMC [16] to identify the individual style channels for a given text.
In addition to the text-based directions, we also replace the attribute classifier with a CLIP-based one, since binary classifiers are not available for more complex attributes. In this case, we label images by comparing their CLIP-based distances D CLIP with a text prompt a t describing our target attribute and with another text prompt a tneg negating the attribute (e.g., 'the photo of a person with curly hair' vs. 'the photo of a person with straight hair') as follows:
C_{a_t} = 1  if D_CLIP( G(s), a_t ) < D_CLIP( G(s), a_t_neg ),  and  C_{a_t} = 0  otherwise.        (7)
where s is an arbitrary style code, D CLIP is the cosine distance between CLIP embeddings of the generated image and the text prompt a t or a tneg , and C at is the binary label assigned based on whichever text prompt (a t or a tneg ) achieves the shortest CLIP distance from the input image. We note that the negative text prompt a tneg , as in the example above, may be biased and exclude certain groups, such as 'the photo of a black person'.
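The labeling rule of Eq. (7) can be implemented directly on top of a pre-trained CLIP model. The sketch below assumes the open-source OpenAI clip package and a batch of images already resized and normalized with CLIP's preprocessing; the function name and prompt handling are ours.

```python
import torch
import clip  # OpenAI CLIP package (assumed available)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def clip_binary_label(images, prompt_pos, prompt_neg):
    """Return 1 for images whose CLIP cosine distance to prompt_pos is smaller
    than the distance to prompt_neg, and 0 otherwise (Eq. 7).

    images: preprocessed batch of shape (N, 3, 224, 224).
    """
    tokens = clip.tokenize([prompt_pos, prompt_neg]).to(device)
    img_feat = model.encode_image(images.to(device))
    txt_feat = model.encode_text(tokens)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    dist = 1.0 - img_feat @ txt_feat.t()     # cosine distance, shape (N, 2)
    return (dist[:, 0] < dist[:, 1]).long()  # 1 if closer to the positive prompt
```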
With an effective approach to assign classification scores to generated images, we identify a direction s at consisting of one or more style channels using [16]. We use the same debiasing approach as our first method by replacing b with αs at , where α is the hyperparameter for manipulation strength.
Experiments
In this section, we explain our experimental setup and evaluate the proposed methods using StyleGAN2 trained on the FFHQ dataset. Furthermore, we show that our methods effectively debias StyleGAN2 without requiring model training or affecting the quality of generation. Next, we compare our methods to FairGen [29] and StyleFlow [1] methods.
Experimental Setup
For the first two methods, we identify a layer and a style channel for the gender, eyeglasses, smiling and age attributes and use them in our single or multiple attribute debiasing methods as described in Section 3.4 and Section 3.5. For the third method, described in Section 3.6, we experiment with a variety of simple and complex attributes such as 'a person with eyeglasses', 'a smiling person', 'a black person', 'an asian person' using [16]. We generate and label 1000 images to compute the mean and std statistics for our second method.
For our experiments, we use the official pre-trained StyleGAN2 models and binary attribute classifiers pre-trained on the CelebA-HQ dataset¹. To identify attribute-relevant style channels, we exclude the s_tRGB layers from the style channel search since they cause entangled manipulations [37]. Following [16], we also exclude the style channels of the last four blocks from the search, as they represent very fine-grained features.
For the comparison with FairGen, we use the pre-trained GMM models², and we had to limit the comparison to the available pre-trained models listed in Table 1. We used StyleFlow's official implementation³ to uniformly sample latent codes from each attribute group. Although StyleFlow is not intended for fairness, we use it for conditional sampling, similar to [29]. For StyleFlow, we had to limit our comparisons to gender, smiling, eyeglasses and age, and to the attribute pairs age and eyeglasses, age and gender, and gender and eyeglasses. We exclude the comparison on racial attributes for both methods because neither pre-trained models nor training code were available for these attributes.
Fairness Analysis
To assess the fairness of the generated images, we report the KL divergence between the marginal or joint distribution of the generated images with respect to the target attributes and a uniform distribution (see Eq. 2). Our goal is to obtain a distribution with respect to one or more attributes that closely resembles a uniform distribution in order to achieve a fair distribution. To this end, we generate 10K images for each of our methods as well as for the pre-trained StyleGAN2 model, FFHQ dataset, FairGen and StyleFlow.
We start with our first method to debias a single target attribute, and present marginal distribution of the datasets generated with our method and the pre-trained StyleGAN2 in Figure 3 (a-d). As can be seen in the figure, our first method can successfully debias attributes and achieves almost perfectly balanced datasets for the attributes gender, eyeglasses, age and smiling. Next, we use our second method to debias gender and eyeglasses, eyeglasses and smiling and gender and smiling attributes. As can be seen in Figure 3 (e-g), our second method is very effective at debiasing even extremely imbalanced distributions as in the case of the gender and eyeglasses attributes, and can achieve a significant balance.
We then measure the KL divergence between the distribution of the generated datasets and a uniform distribution, and provide a comprehensive comparative analysis with the FFHQ training dataset, pre-trained StyleGAN2, FairGen, and StyleFlow. We debias the single attributes eyeglasses, age, smiling and gender, and the joint attributes Age+Gender, Age+Eyeglasses, and Gender+Eyeglasses (see Table 1). As can be seen in the table, our method outperforms StyleFlow, FairGen and the pre-trained StyleGAN2 model on all attributes and achieves KL divergence values close to zero, i.e. distributions very close to uniform, in all single-attribute debiasing experiments.
We also perform additional single-attribute debiasing experiments for the highly biased attributes black, asian, and white. Since the CelebA classifiers did not cover these attributes, we used our CLIP-based method to debias the StyleGAN2 model for the black, asian, and white attributes. We present the results of this experiment in Table 2. As can be seen in the table, our method achieves a distribution that is very close to a uniform distribution, and effectively produces unbiased datasets with respect to the racial attributes.
Qualitative Results
We use our methods to debias StyleGAN2 for multiple attributes and show the generated images in Figure 1 and Figure 4. As can be seen in the figures, our method is able to generate balanced images for the attribute pairs gender and eyeglasses (Figure 4 (a-d)), gender and black (Figure 1 (a-d)), and black and eyeglasses (Figure 4 (e-h)).
Runtime Analysis
Our method directly debiases the StyleGAN2 model within a short period of time. More specifically, the average time to debias a single attribute is 2.25 minutes, while debiasing joint attributes takes 4.2 minutes.
Generation Quality
We note that a fair generative model should not compromise on generation quality in order to remain useful. To ensure that our methods generate high quality and diverse images, we report the Fréchet Inception Distance (FID) between sets of 10K images generated by the debiased StyleGAN2 model produced by our method and by the pre-trained StyleGAN2 model. Unlike our method, FairGen and StyleFlow do not edit the GAN model, but rely on subsampling latent vectors from GMM or normalizing flow models. Therefore, we exclude them from the FID experiments.
To test image quality after debiasing the GAN model, we use the attribute pairs gender and eyeglasses, race and gender, and race and eyeglasses to compute the FID scores of the debiased datasets. While the pre-trained StyleGAN2 model achieves an FID score of 14.11, our method achieves a comparable FID score of 14.72 (lower is better). Note that a small increase in FID score is expected, as the distribution of generated images is shifted for debiasing relative to the real images from the training data. However, the increase in FID score is negligible and the debiased GAN model still generates high quality images (see Figure 1 and Figure 4).
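For completeness, the FID comparison between two 10K image sets can be computed with standard tooling. The snippet below is one possible way to do this, assuming the torchmetrics package (with its image dependencies) is installed; it is not the evaluation script used for the numbers above.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

def fid_between(images_a, images_b, batch=64):
    """FID between two sets of uint8 images of shape (N, 3, H, W) in [0, 255].

    The real/fake flags below only serve to assign the two sets; here they hold
    images from the pre-trained and the debiased generator, respectively.
    """
    fid = FrechetInceptionDistance(feature=2048)
    for i in range(0, images_a.shape[0], batch):
        fid.update(images_a[i:i + batch], real=True)
    for i in range(0, images_b.shape[0], batch):
        fid.update(images_b[i:i + batch], real=False)
    return float(fid.compute())
```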
Limitations and Broader Impact
While our proposed method is effective in debiasing GAN models, it requires pre-trained attribute classifiers for style code optimization. We note that the debiasing process can be affected by biases in these classifiers, a problem that also occurs in the competing methods. This is especially important when debiasing attributes that are known to be biased, such as racial attributes like black or asian.
Conclusion
Generative models are only as fair as the datasets on which they are trained. In this work, we attempt to address this problem and propose three novel methods for debiasing a pre-trained StyleGAN2 model to allow fairer data generation with respect to one or several target attributes. Unlike previous work that requires training a separate model for each target attribute or subsampling from the latent space to generate debiased datasets, our method restricts the debiasing process to the style space of StyleGAN2 and directly edits the GAN model for fast and stable fair data generation. In our experiments, we have shown that our method is not only effective at debiasing, but also does not affect the generation quality.
We believe that our method is not only useful for generating fairer data, but also our debiased models can serve as a fairer framework for various applications built on StyleGAN2. We hope that our work will not only raise awareness of the importance of fairness in generative models, but also serve as a foundation for future research.
A. Fairness Analysis on FFHQ Data and Style-GAN2 FFHQ Model
To understand how fair the StyleGAN2 model pre-trained on FFHQ is, we randomly generated 1000 images. We then used binary classifiers to label each image for the attributes gender, smiling, eyeglasses, and young, for both marginal and joint distributions (Table 3, Table 4). As can be seen, the StyleGAN2 model generates images that are slightly biased towards Male=False, moderately biased towards Smiling=True, and strongly biased towards Young=True and Eyeglasses=False. We also examine the joint distributions of attribute pairs such as gender + eyeglasses, gender + smiling and eyeglasses + smiling. As can be seen, the joint probability distribution of the attributes can be extremely imbalanced even if the marginal probability distributions of the individual attributes are not, such as the ratio of women with eyeglasses to men with eyeglasses. In Figure 7 and Figure 8, respectively, we show the percentage of assigned binary labels for single and multiple attributes.
B. Additional debiasing results
We also performed debiasing for the eyeglasses attribute (Figure 5) and the afro hairstyle attribute (Figure 6) on the same latent codes, showing the before/after of our debiasing method.

Figure 5. A set of images generated with the same latent codes before and after debiasing the StyleGAN2 model with respect to the 'Eyeglasses' attribute on a single channel with our method.

Figure 6. A set of images generated with the same latent codes before and after debiasing the StyleGAN2 model with respect to the 'a person with afro hairstyle' text-based attribute with our method.
Figure 3. Distribution of single and joint attributes before and after debiasing the StyleGAN2 model with our methods.

Figure 4. Qualitative results for fair image generation in GANs with Gender+Eyeglasses and Black+Eyeglasses attributes.

Figure 7. Marginal probability distributions of the 'male', 'smiling', 'eyeglasses' and 'young' attributes sampled from images generated by StyleGAN2 pre-trained on the FFHQ dataset.

Figure 8. Joint probability distributions of the ('male', 'eyeglasses'), ('eyeglasses', 'smiling') and ('male', 'smiling') attribute pairs sampled from images generated by StyleGAN2 pre-trained on the FFHQ dataset.
Table 1. KL divergence between a uniform distribution and the distribution of images generated with our method, StyleFlow and FairGen. FFHQ and StyleGAN2 are included for comparison purposes.

Method      | Age+Gender   | Age+Glasses  | Gender+Glasses | Glasses      | Age          | Smiling      | Gender
FFHQ        | 0.2456       | 0.3546       | 0.2421         | 0.186        | 0.091        | 0.005        | 0.015
StyleGAN2   | 0.2794       | 0.3836       | 0.2495         | 0.180        | 0.109        | 0.011        | 0.018
StyleFlow   | 0.2141       | 0.1620       | 0.1214         | 0.061        | 3.98 × 10^−4 | 0.045        | 0.023
FairGen     | 3.73 × 10^−2 | 3.30 × 10^−2 | 1.85 × 10^−3   | 7.07 × 10^−4 | 1.77 × 10^−3 | 1.80 × 10^−5 | 4.21 × 10^−4
FairStyle   | 2.57 × 10^−2 | 1.57 × 10^−2 | 2.41 × 10^−4   | 0            | 1.80 × 10^−7 | 8 × 10^−8    | 3.20 × 10^−7
Table 2. KL divergence between a uniform distribution and the distribution of images generated by our text-based method to debias the black, asian, and white attributes. FFHQ and StyleGAN2 are included for comparison purposes.

Method     | Black         | Asian         | White
FFHQ       | 0.576         | 0.279         | 0.042
StyleGAN2  | 0.603         | 0.319         | 0.057
FairStyle  | 8.00 × 10^−6  | 7.20 × 10^−7  | 2 × 10^−6
Table 3. Marginal distributions of attributes measured on the FFHQ dataset and on images generated by StyleGAN2 pre-trained on the FFHQ dataset.

Attribute   | FFHQ            | StyleGAN2
Eyeglasses  | F=0.78, T=0.22  | F=0.80, T=0.20
Young       | F=0.28, T=0.72  | F=0.30, T=0.70
Smiling     | F=0.43, T=0.57  | F=0.44, T=0.56
Male        | F=0.58, T=0.42  | F=0.58, T=0.42
Table 4. Joint distributions of attribute pairs measured on the FFHQ dataset and on images generated by StyleGAN2 pre-trained on the FFHQ dataset.

Attributes    | FFHQ                                | StyleGAN2
Eyegl.-Smile  | FF=0.34, FT=0.44, TF=0.09, TT=0.13  | FF=0.35, FT=0.45, TF=0.09, TT=0.11
Smile-Male    | FF=0.22, FT=0.36, TF=0.21, TT=0.21  | FF=0.23, FT=0.35, TF=0.21, TT=0.21
Male-Eyegl.   | FF=0.50, FT=0.08, TF=0.28, TT=0.14  | FF=0.53, FT=0.05, TF=0.27, TT=0.15
1 https://github.com/NVlabs/stylegan2
2 https://github.com/genforce/fairgen
3 https://github.com/RameenAbdal/StyleFlow
Acknowledgments This publication has been produced benefiting from the 2232 International Fellowship for Outstanding Researchers Program of TUBITAK (Project No: 118c321). We also acknowledge the support of NVIDIA Corporation through the donation of the TITAN X GPU and GCP research credits from Google.
Styleflow: Attribute-conditioned exploration of stylegan-generated images using conditional continuous normalizing flows. Rameen Abdal, Peihao Zhu, Niloy Jyoti Mitra, and Peter Wonka. ArXiv, abs/2008.02401, 2021. 3, 6
A reductions approach to fair classification. Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, Hanna M Wallach, abs/1803.02453ArXiv. 2Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna M. Wallach. A reductions approach to fair classification. ArXiv, abs/1803.02453, 2018. 2
Discriminator rejection sampling. ArXiv. Samaneh Azadi, Catherine Olsson, Trevor Darrell, Ian J Goodfellow, Augustus Odena, abs/1810.06758Samaneh Azadi, Catherine Olsson, Trevor Darrell, Ian J. Goodfellow, and Augustus Odena. Discriminator rejection sampling. ArXiv, abs/1810.06758, 2019. 2
Rewriting a deep generative model. ArXiv, abs. David Bau, Steven Liu, Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, David Bau, Steven Liu, Tongzhou Wang, Jun-Yan Zhu, and Antonio Torralba. Rewriting a deep generative model. ArXiv, abs/2007.15646, 2020. 3
Gender shades: Intersectional accuracy disparities in commercial gender classification. Joy Buolamwini, Timnit Gebru, FAT. Joy Buolamwini and Timnit Gebru. Gender shades: Inter- sectional accuracy disparities in commercial gender classifi- cation. In FAT, 2018. 2
Computational fairness: Preventing machine-learned discrimination. Michael Feldman, Michael Feldman. Computational fairness: Preventing machine-learned discrimination. 2015. 2
Ganalyze: Toward visual definitions of cognitive image properties. Lore Goetschalckx, Alex Andonian, Aude Oliva, Phillip Isola, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionLore Goetschalckx, Alex Andonian, Aude Oliva, and Phillip Isola. Ganalyze: Toward visual definitions of cognitive im- age properties. In Proceedings of the IEEE/CVF Interna- tional Conference on Computer Vision, pages 5744-5753, 2019. 3
Generative adversarial nets. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Advances in Neural Information Processing Systems. Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. WeinbergerCurran Associates, Inc27Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahra- mani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Pro- cessing Systems 27, pages 2672-2680. Curran Associates, Inc., 2014. 1
Fair generative modeling via weak supervision. Aditya Grover, Kristy Choi, Rui Shu, Stefano Ermon, ICML. Aditya Grover, Kristy Choi, Rui Shu, and Stefano Ermon. Fair generative modeling via weak supervision. In ICML, 2020. 2
Bias correction of learned generative models using likelihood-free importance weighting. Aditya Grover, Jiaming Song, Alekh Agarwal, Kenneth Tran, Ashish Kapoor, Eric Horvitz, Stefano Ermon, DGS@ICLR. Aditya Grover, Jiaming Song, Alekh Agarwal, Kenneth Tran, Ashish Kapoor, Eric Horvitz, and Stefano Ermon. Bias correction of learned generative models using likelihood-free importance weighting. In DGS@ICLR, 2019. 2
Equality of opportunity in supervised learning. Moritz Hardt, Eric Price, Nathan Srebro, NIPS. Moritz Hardt, Eric Price, and Nathan Srebro. Equality of opportunity in supervised learning. In NIPS, 2016. 2
Erik Härkönen, Aaron Hertzmann, Jaakko Lehtinen, Sylvain Paris, arXiv:2004.02546Ganspace: Discovering interpretable gan controls. arXiv preprintErik Härkönen, Aaron Hertzmann, Jaakko Lehtinen, and Sylvain Paris. Ganspace: Discovering interpretable gan con- trols. arXiv preprint arXiv:2004.02546, 2020. 3
On the" steerability" of generative adversarial networks. Ali Jahanian, Lucy Chai, Phillip Isola, arXiv:1907.07171arXiv preprintAli Jahanian, Lucy Chai, and Phillip Isola. On the" steer- ability" of generative adversarial networks. arXiv preprint arXiv:1907.07171, 2019. 3
A style-based generator architecture for generative adversarial networks. Tero Karras, Samuli Laine, Timo Aila, abs/1812.04948CoRRTero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. CoRR, abs/1812.04948, 2018. 4
Analyzing and improving the image quality of stylegan. Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, Timo Aila, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8107-8116, 2020. 2
Stylemc: Multi-channel based fast text-guided image generation and manipulation. Umut Kocasari, Alara Dirik, Mert Tiftikci, Pinar Yanardag, 6ArXiv, abs/2112.08493, 2021. 3, 5Umut Kocasari, Alara Dirik, Mert Tiftikci, and Pinar Ya- nardag. Stylemc: Multi-channel based fast text-guided im- age generation and manipulation. ArXiv, abs/2112.08493, 2021. 3, 5, 6
Explaining in style: Training a gan to explain a classifier in stylespace. Oran Lang, Yossi Gandelsman, Michal Yarom, Yoav Wald, Gal Elidan, Avinatan Hassidim, William T Freeman, Phillip Isola, Amir Globerson, Michal Irani, Inbar Mosseri, abs/2104.13369ArXiv. 2Oran Lang, Yossi Gandelsman, Michal Yarom, Yoav Wald, Gal Elidan, Avinatan Hassidim, William T. Freeman, Phillip Isola, Amir Globerson, Michal Irani, and Inbar Mosseri. Ex- plaining in style: Training a gan to explain a classifier in stylespace. ArXiv, abs/2104.13369, 2021. 2
Single image deraining: A comprehensive benchmark analysis. Siyuan Li, Iago Breno Araujo, Wenqi Ren, Zhangyang Wang, Eric K Tokuda, Roberto Hirata Junior, Roberto Cesar-Junior, Jiawan Zhang, Xiaojie Guo, Xiaochun Cao, Siyuan Li, Iago Breno Araujo, Wenqi Ren, Zhangyang Wang, Eric K. Tokuda, Roberto Hirata Junior, Roberto Cesar-Junior, Jiawan Zhang, Xiaojie Guo, and Xiaochun Cao. Single image deraining: A comprehensive benchmark analysis, 2019. 1
Deep learning face attributes in the wild. Ziwei Liu, Ping Luo, Xiaogang Wang, Xiaoou Tang, IEEE International Conference on Computer Vision (ICCV). Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. 2015 IEEE In- ternational Conference on Computer Vision (ICCV), pages 3730-3738, 2015. 2
The variational fair autoencoder. Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, Richard S Zemel, abs/1511.00830CoRRChristos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard S. Zemel. The variational fair autoencoder. CoRR, abs/1511.00830, 2016. 2
Daniel Mcduff, Shuang Ma, Yale Song, Ashish Kapoor, arXiv:1906.11891Characterizing bias in classifiers using generative models. arXiv preprintDaniel McDuff, Shuang Ma, Yale Song, and Ashish Kapoor. Characterizing bias in classifiers using generative models. arXiv preprint arXiv:1906.11891, 2019. 1
Fairness in machine learning. ArXiv, abs. L Oneto, Silvia Chiappa, L. Oneto and Silvia Chiappa. Fairness in machine learning. ArXiv, abs/2012.15816, 2020. 2
Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, Dani Lischinski, arXiv:2103.17249Styleclip: Text-driven manipulation of stylegan imagery. arXiv preprintOr Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski. Styleclip: Text-driven manipulation of stylegan imagery. arXiv preprint arXiv:2103.17249, 2021. 3
Learning transferable visual models from natural language supervision. A Radford, J W Kim, Chris Hallacy, Aditya Ramesh, G Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, J Clark, G Krüger, Ilya Sutskever, abs/2103.00020ArXiv. 2A. Radford, J. W. Kim, Chris Hallacy, Aditya Ramesh, G. Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, J. Clark, G. Krüger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. ArXiv, abs/2103.00020, 2021. 2
Fair attribute classification through latent space de-biasing. V Vikram, Ramaswamy, S Y Sunnis, Olga Kim, Russakovsky, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vikram V. Ramaswamy, Sunnis S. Y. Kim, and Olga Rus- sakovsky. Fair attribute classification through latent space de-biasing. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9297-9306, 2021. 3
Interfacegan: Interpreting the disentangled face representation learned by gans. Yujun Shen, Ceyuan Yang, Xiaoou Tang, Bolei Zhou, IEEE Transactions on Pattern Analysis and Machine Intelligence. 20203Yujun Shen, Ceyuan Yang, Xiaoou Tang, and Bolei Zhou. Interfacegan: Interpreting the disentangled face representa- tion learned by gans. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. 3
Closed-form factorization of latent semantics in gans. Yujun Shen, Bolei Zhou, arXiv:2007.06600arXiv preprintYujun Shen and Bolei Zhou. Closed-form factorization of latent semantics in gans. arXiv preprint arXiv:2007.06600, 2020. 3
Learned image downscaling for upscaling using content adaptive resampler. Wanjie Sun, Zhenzhong Chen, IEEE Transactions on Image Processing. 291Wanjie Sun and Zhenzhong Chen. Learned image downscal- ing for upscaling using content adaptive resampler. IEEE Transactions on Image Processing, 29:4027-4040, 2020. 1
Improving the fairness of deep generative models without retraining. Shuhan Tan, Yujun Shen, Bolei Zhou, abs/2012.04842ArXiv. 56Shuhan Tan, Yujun Shen, and Bolei Zhou. Improving the fairness of deep generative models without retraining. ArXiv, abs/2012.04842, 2020. 1, 3, 5, 6
Discriminator optimal transport. A Tanaka, NeurIPS. A. Tanaka. Discriminator optimal transport. In NeurIPS, 2019. 2
Learning disconnected manifolds: a no gans land. ArXiv, abs. Ugo Tanielian, Thibaut Issenhuth, Elvis Dohmatob, Jérémie Mary, Ugo Tanielian, Thibaut Issenhuth, Elvis Dohmatob, and Jérémie Mary. Learning disconnected manifolds: a no gans land. ArXiv, abs/2006.04596, 2020. 2
Unsupervised discovery of interpretable directions in the gan latent space. Andrey Voynov, Artem Babenko, PMLR, 2020. 3International Conference on Machine Learning. Andrey Voynov and Artem Babenko. Unsupervised discov- ery of interpretable directions in the gan latent space. In In- ternational Conference on Machine Learning, pages 9786- 9796. PMLR, 2020. 3
Spatial attentive single-image deraining with a high quality real rain dataset. Tianyu Wang, Xin Yang, Ke Xu, Shaozhe Chen, Qiang Zhang, Rynson Lau, Tianyu Wang, Xin Yang, Ke Xu, Shaozhe Chen, Qiang Zhang, and Rynson Lau. Spatial attentive single-image de- raining with a high quality real rain dataset, 2019. 1
High-resolution image synthesis and semantic manipulation with conditional gans. Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, Bryan Catanzaro, Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans, 2017. 1
Principal component analysis. Chemometrics and intelligent laboratory systems. Svante Wold, Kim Esbensen, Paul Geladi, 2Svante Wold, Kim Esbensen, and Paul Geladi. Principal component analysis. Chemometrics and intelligent labora- tory systems, 2(1-3):37-52, 1987. 3
Learning non-discriminatory predictors. Blake E Woodworth, Suriya Gunasekar, Mesrob I Ohannessian, Nathan Srebro, abs/1702.06081ArXiv. 2Blake E. Woodworth, Suriya Gunasekar, Mesrob I. Ohan- nessian, and Nathan Srebro. Learning non-discriminatory predictors. ArXiv, abs/1702.06081, 2017. 2
Stylespace analysis: Disentangled controls for stylegan image generation. Zongze Wu, Dani Lischinski, Eli Shechtman, arXiv:2011.1279936arXiv preprintZongze Wu, Dani Lischinski, and Eli Shechtman. Stylespace analysis: Disentangled controls for stylegan image genera- tion. arXiv preprint arXiv:2011.12799, 2020. 3, 6
Latentclr: A contrastive learning approach for unsupervised discovery of interpretable directions. Enis Oguz Kaan Yüksel, Simsar, Pinar Ezgi Gülperi Er, Yanardag, arXiv:2104.00820arXiv preprintOguz Kaan Yüksel, Enis Simsar, Ezgi Gülperi Er, and Pinar Yanardag. Latentclr: A contrastive learning approach for unsupervised discovery of interpretable directions. arXiv preprint arXiv:2104.00820, 2021. 3
Fairness constraints: Mechanisms for fair classification. Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, Krishna P Gummadi, AISTATS. 2Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez- Rodriguez, and Krishna P. Gummadi. Fairness constraints: Mechanisms for fair classification. In AISTATS, 2017. 2
Learning fair representations. Richard S Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, Cynthia Dwork, ICML. Richard S. Zemel, Ledell Yu Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork. Learning fair representations. In ICML, 2013. 2
Stack-gan++: Realistic image synthesis with stacked generative adversarial networks. Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, Dimitris N Metaxas, abs/1710.10916CoRRHan Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiao- gang Wang, Xiaolei Huang, and Dimitris N. Metaxas. Stack- gan++: Realistic image synthesis with stacked generative ad- versarial networks. CoRR, abs/1710.10916, 2017. 1
Unpaired image-to-image translation using cycleconsistent adversarial networks. Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A Efros, abs/1703.10593CoRRJun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle- consistent adversarial networks. CoRR, abs/1703.10593, 2017. 1
| [
"https://github.com/NVlabs/stylegan2",
"https://github.com/genforce/fairgen",
"https://github.com/RameenAbdal/StyleFlow"
]
|
[
"OPTIMAL CONSENSUS CONTROL MODELS ON THE SPHERE",
"OPTIMAL CONSENSUS CONTROL MODELS ON THE SPHERE"
]
| [
"Hui Huang ",
"Hansol Park "
]
| []
| []
| In this paper, we investigate the consensus models on the sphere with control signals, where both the first and second order systems are considered. We provide the existence of the optimal control-trajectory pair and derive the first order optimality condition taking the form of the Pontryagin Minimum Principle. Numeric simulations are also presented to show that the obtained optimal control can help to accelerate the process of reaching a consensus. | 10.2139/ssrn.4225473 | [
"https://export.arxiv.org/pdf/2208.05281v1.pdf"
]
| 251,467,981 | 2208.05281 | e2fc79292b033dab4e68d252b28f6e411a262727 |
OPTIMAL CONSENSUS CONTROL MODELS ON THE SPHERE
Hui Huang
Hansol Park
OPTIMAL CONSENSUS CONTROL MODELS ON THE SPHERE
In this paper, we investigate the consensus models on the sphere with control signals, where both the first and second order systems are considered. We provide the existence of the optimal control-trajectory pair and derive the first order optimality condition taking the form of the Pontryagin Minimum Principle. Numeric simulations are also presented to show that the obtained optimal control can help to accelerate the process of reaching a consensus.
Introduction
Large systems of interacting particles arise in the study of collective behaviours of biological and physical systems on manifolds, such as nematic patterns on a sphere [11], applications to launching unmanned spacecrafts [5], and crystals on a cylinder [6]. They are receiving a lot of attention due to the appearance of consensus emergence in these particle systems. This paper addresses centralized control problems for one of the well known consensus models on the sphere, i.e. the swarm sphere model [10], whose first order form is given by

d/dt x_i = Ω_i x_i + (κ/N) Σ_{k=1}^{N} ( x_k − (⟨x_i, x_k⟩/‖x_i‖²) x_i ),   x_i(0) = x_i^0 ∈ S^{d−1},   ∀ i ∈ [N] := {1, 2, ..., N},        (1.1)

where Ω_i ∈ Skew(d) := {A ∈ R^{d×d} : A^T = −A} is the natural frequency of the i-th particle, κ is the coupling strength, x_i^0 is the initial position of the i-th particle, N is the number of particles, and ‖·‖ denotes the standard Euclidean norm. Adding the inertia (mass) term to system (1.1) as in [8], we obtain the second order model

d/dt x_i = v_i,
d/dt v_i = −(γ/m) v_i − (‖v_i‖²/‖x_i‖²) x_i + (1/m) Ω_i x_i + (κ/(mN)) Σ_{k=1}^{N} ( x_k − (⟨x_i, x_k⟩/‖x_i‖²) x_i ),
x_i(0) = x_i^0 ∈ S^{d−1},   v_i(0) = d/dt x_i |_{t=0+} = v_i^0 ∈ T_{x_i^0} S^{d−1},   ∀ i ∈ [N],        (1.2)

where m is the mass, γ is the friction coefficient, and v_i^0 is the initial velocity of the i-th particle. The systems (1.1) and (1.2) can also be seen as generalized Kuramoto models on the sphere. According to [8], if the natural frequencies are identical with Ω_i ≡ O_{d−1} for all i ∈ [N], then emergent dynamics for both systems (1.1) and (1.2) with generic initial data can be observed, namely

lim_{t→∞} ‖x_i(t) − x_j(t)‖ = 0   ∀ i, j ∈ [N].
In this context the consensus is understood as a travelling formation in which every particle has the same position. However, this behaviour strongly depends on the initial configuration of the particle system. In the present work we are interested in the design of centralized control signals enforcing consensus emergence in systems (1.1) and (1.2), i.e. an external intervention able to steer the system towards a desired configuration. Closely related to this work, [1,4,3] study the problem of consensus control for Cucker-Smale type models. Moreover, according to [9], the Kuramoto model (a position synchronization model) can be derived from the Cucker-Smale model (a velocity alignment model). In this work we therefore aim to extend the results on velocity alignment in [1] to our position synchronization models (1.1) and (1.2).
Swarm sphere models with controls
In this section we consider both the first order and second order swarm sphere models with control signals.
2.1. Second order model. We consider a set of N particles with state (x_i(t), v_i(t)) ∈ T S^{d−1} ⊂ R^d × R^d moving on a sphere via system (1.2) with Ω_i ≡ O_{d−1} for all i ∈ [N]. We are interested in the study of consensus emergence, i.e. the convergence towards a configuration in which

x_i = x̄ := (1/N) Σ_{j=1}^{N} x_j,   ∀ i ∈ [N].

In particular, we are concerned with inducing the consensus through the synthesis of an external forcing term u(t) := (u_1(t), ..., u_N(t)), in the form of

dx_i/dt = v_i,
dv_i/dt = −(‖v_i‖²/‖x_i‖²) x_i − (γ/m) v_i + (κ/(mN)) Σ_{j=1}^{N} ( x_j − (⟨x_i, x_j⟩/‖x_i‖²) x_i ) + u_i − (⟨u_i, x_i⟩/‖x_i‖²) x_i,
x_i(0) = x_i^0 ∈ S^{d−1},   v_i(0) = dx_i/dt |_{t=0+} = v_i^0 ∈ T_{x_i^0} S^{d−1},   ∀ i ∈ [N],        (2.3)

where the control signals u_i ∈ L²([0, T]; R^d) =: U. Formally, for T > 0 and a given set of admissible control signals u ∈ U^N for the entire population, it holds that ‖u_i‖_{L²} ≤ max_{i∈[N]} ‖u_i‖_{L²} =: M for all i ∈ [N]. Our goal is then to seek a solution to the minimization problem

min_{u(·)∈U^N} J(u(·); x(0), v(0)) := ∫_0^T (1/N) Σ_{j=1}^{N} ‖x_j − x̄‖² dt + λ ∫_0^T (1/N) Σ_{j=1}^{N} ‖u_j‖² dt,        (2.4)
with some regularization parameter λ > 0. Here we have used the notation x(t) := (x 1 (t), . . . , x N (t)) and v(t) := (v 1 (t), . . . , v N (t)) .
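As a concrete illustration of the dynamics along which the cost (2.4) is evaluated, the sketch below implements the right-hand side of the controlled system (2.3) in vectorized form. The function name and array layout are our own choices, and the simplification ‖x_i‖ = 1 (justified by Theorem 2.1 below) is used in the coupling term.

```python
import numpy as np

def rhs_second_order(x, v, u, kappa, gamma, m):
    """Right-hand side of the controlled second-order sphere model (2.3).

    x, v, u: arrays of shape (N, d); each x_i is assumed to lie on S^{d-1}.
    Returns (dx/dt, dv/dt).  The control acts only through its tangential
    component u_i - <u_i, x_i> x_i.
    """
    xbar = x.mean(axis=0)                                   # (1/N) sum_j x_j
    inner = np.sum(x * xbar, axis=1, keepdims=True)         # <x_i, xbar>
    coupling = (kappa / m) * (xbar - inner * x)             # uses |x_i| = 1
    speed2 = np.sum(v * v, axis=1, keepdims=True)           # |v_i|^2
    u_tan = u - np.sum(u * x, axis=1, keepdims=True) * x    # tangential part of u_i
    dv = -speed2 * x - (gamma / m) * v + coupling + u_tan
    return v, dv
```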
Theorem 2.1. For any given T > 0 and u(·) ∈ U^N, assume the initial data (x(0), v(0)) and the parameters m, γ, κ, T satisfy

(m/γ) ( V(0) + 2κT/m + 2M T^{1/2} ) ( exp(γT/m) − 1 ) < 1,

then there exists a pathwise unique global solution {(x_i, v_i)}_{i=1}^N to the Kuramoto system (2.3) up to time T. Moreover, for all i ∈ [N] and t ∈ [0, T] it holds that

‖x_i(t)‖ = 1,        (2.5)

and

V(t) ≤ exp(γT/m) ( V(0) + 2κT/m + 2M T^{1/2} ) / [ 1 − (m/γ) ( V(0) + 2κT/m + 2M T^{1/2} ) ( exp(γT/m) − 1 ) ] =: C_V,        (2.6)

where V(t) = max_{i=1,...,N} ‖v_i(t)‖.
Proof. Let φ be a map on R^d with bounded derivatives of all orders such that φ(x) = x/‖x‖² for all x with ‖x‖ ≥ 1/2. Then we consider the following regularized system:

dx_i/dt = v_i,
dv_i/dt = −φ(x_i) ‖v_i‖² − (γ/m) v_i + (κ/m)(1/N) Σ_{j=1}^{N} ( x_j − ⟨x_i, x_j⟩ φ(x_i) ) + u_i − ⟨u_i, x_i⟩ φ(x_i),
x_i(0) = x_i^0 ∈ S^{d−1},   v_i(0) = dx_i/dt |_{t=0+} = v_i^0 ∈ T_{x_i^0} S^{d−1},   ∀ i ∈ [N].        (2.7)

Since the coefficients are locally Lipschitz, one has local existence and uniqueness up to some time τ ∈ [0, T]. Next we prove global existence up to time T. Indeed, as long as ‖x_i‖ ≥ 1/2 one has

d/dt ⟨x_i, v_i⟩ = ‖v_i‖² + ⟨x_i, dv_i/dt⟩ = ‖v_i‖² − ‖v_i‖² − (γ/m) ⟨x_i, v_i⟩ = −(γ/m) ⟨x_i, v_i⟩,

which implies ⟨x_i(t), v_i(t)⟩ = ⟨x_i(0), v_i(0)⟩ e^{−γt/m} = 0 for all t ∈ [0, T], since ⟨x_i(0), v_i(0)⟩ = 0. Then we have

d/dt ‖x_i‖² = 2 ⟨x_i, v_i⟩ = 0,

which, together with the initial condition ‖x_i(0)‖ = 1, leads to ‖x_i(t)‖ = 1 for all t ∈ [0, T]. In addition, notice that

‖v_i(t)‖ ≤ ‖v_i(0)‖ + ∫_0^t ( ‖v_i(s)‖² + (γ/m) ‖v_i(s)‖ ) ds + 2κT/m + 2 ∫_0^t ‖u_i(s)‖ ds.

Using Gronwall's lemma one has

sup_{t∈[0,T]} ‖v_i(t)‖ ≤ exp(γT/m) ( ‖v_i(0)‖ + 2κT/m + 2 ∫_0^T ‖u_i(s)‖ ds ) / [ 1 − (m/γ) ( ‖v_i(0)‖ + 2κT/m + 2 ∫_0^T ‖u_i(s)‖ ds ) ( exp(γT/m) − 1 ) ]
   ≤ exp(γT/m) ( ‖v_i(0)‖ + 2κT/m + 2M T^{1/2} ) / [ 1 − (m/γ) ( ‖v_i(0)‖ + 2κT/m + 2M T^{1/2} ) ( exp(γT/m) − 1 ) ].

This means that if initially

(m/γ) ( ‖v_i(0)‖ + 2κT/m + 2M T^{1/2} ) ( exp(γT/m) − 1 ) < 1,

then we have global existence and pathwise uniqueness for (2.7) up to time T. Now the solution to (2.7) for the given φ is a solution to (2.3), since ‖x_i‖ = 1 for all i = 1, ..., N and t ∈ [0, T], which provides global existence for (2.3). If we consider two solutions to (2.3) for the same initial data, then they satisfy ‖x_i‖ = 1; so they are also solutions to the regularized system (2.7), for which pathwise uniqueness holds. Hence they are equal, which completes the proof.

From (2.5), we can now assume ‖x_i(t)‖ = 1 for all t ≥ 0 and i ∈ [N] whenever {(x_i, v_i)}_{i=1}^N satisfy (2.3), and one can prove the existence of the optimal control-trajectory pair:
Theorem 2.2.
Under the same assumptions as in Theorem 2.1, there exists some control u*_i ∈ L²(0, T; R^d), i ∈ [N], and corresponding trajectories {(x*_i, v*_i)}_{i=1}^N solving the optimal control problem (2.3)-(2.4).

Proof. For any given u_i ∈ L²(0, T; R^d), i ∈ [N], by Theorem 2.1 there exists a corresponding solution {(x^u_i, v^u_i)}_{i=1}^N to (2.3). Note that u_i = 0 ∈ L^∞(0, T; R^d), so that

J(0; x(0), v(0)) = ∫_0^T (1/N) Σ_{j=1}^N ‖x^0_j − x̄^0‖² dt ≤ 4T,

because ‖x^0_i‖ = 1, where {(x^0_i, v^0_i)}_{i=1}^N is the solution to (2.3) with u_i = 0. Since J(u(·); x(0), v(0)) ≥ 0 for all u(·) ∈ U^N, it holds that

0 ≤ min_{u(·)∈U^N} J(u(·); x(0), v(0)) ≤ J(0; x(0), v(0)) ≤ 4T.
By the definition of the infimum, we know that there exists a sequence of controls (u^n)_{n∈N} ⊂ U^N, with corresponding solutions (x^{u^n}, v^{u^n}) of (2.3), such that

lim_{n→∞} J(u^n(·); x(0), v(0)) = min_{u(·)∈U^N} J(u(·); x(0), v(0)).

Notice that
λ T 0 1 N N j=1 u n j 2 dt ≤ J (u n (·); x(0), v(0)) < ∞ ,
and by Banach-Alaoglu theorem, for all i there exists a subsequence (u n k i ) k∈N and u * i ∈ L 2 (0, T ; R d ) such that
(2.8) u k i := u n k i k→+∞ u * i in L 2 (0, T ; R d ) .
For the corresponding solutions (x k , v k ) k∈N := (x u n k , v u n k ) k∈N , we have from Theorem 2.1 that
max i=1,...,N vi k (t) ≤ max i=1,...,N v k i (t) 2 + γ m max i=1,...,N v k i (t) + 2κ m + 2 max i=1,...,N u k i (t) ≤ C 2 V + CV γ m + 2κ m + 2M .
This combining with (2.6) implies the equi-boundedness and equi-absolute continuity of v k i (t) uniformly with respect to k, for all i = 1, . . . , N . This also yields the equi-Lispchitzanity of x k i (t). By Ascoli-Arzela theorem there exits a subsequence, again renamed (x k , v k ) k∈N and an absolutely continuous trajectory (x * , v * ) in [0, T ] such that for k → ∞:
x k i → x * i , in [0, T ], for all i = 1, · · · , N, v k i → v * i , in [0, T ], for all i = 1, · · · , N, x k i →ẋ * i , in [0, T ], for all i = 1, · · · , N, v k i v * i , in L 2 (0, T ; R d )
, for all i = 1, · · · , N .
(2.9)
Thus it is easy to see that
(2.10) dx * i dt = v * i and lim k→∞ T 0 1 N N j=1 x k j −x k 2 dt = T 0 1 N N j=1 x * j −x * 2 dt .
Moreover, one notice that for all ψ ∈ L 2 (0, T ; R d )
T 0 ψv k i dt = T 0 − v k i 2 x k i 2 x k i , ψ − γ m v k i , ψ + κ m 1 N N j=1 x k j , ψ − x k i , x k j x k i 2 x k i , ψ + u k i , ψ − u k i , x k i x k i , ψ x k i 2 dt
Let k → ∞, and by using (2.8) and (2.9) we have
T 0 ψv * i dt = T 0 − v * i 2 x * i 2 x * i , ψ − γ m v * i , ψ + κ m 1 N N j=1 x * j , ψ − x * i , x * j |x * i | 2 x * i , ψ + u * i , ψ − u * i , x * i x * i , ψ x * i 2 dt ,
which leads to
(2.11) dv * i dt = − v * i 2 x * i 2 x * i − γ m v * i + κ m 1 N N j=1 x * j − x * i , x * j x * i 2 x * i + u * i − u * i , x * i x * i x * i 2
a.e. .
Since (u
k i ) k∈N ⊂ L 2 (0, T ; R d ) converges weakly to u * i for all i ∈ [N ], we also have λ T 0 1 N N j=1 u * j 2 dt ≤ lim inf k→∞ λ T 0 1 N N j=1 u k j 2 dt
by the weak lower-semicontinuity of the L 2 -norm. This implies that This together with (2.10) and (2.11) implies the limit (x * , v * , u * ) is a solution to the optimal control problem (2.3)-(2.4).
J(u*(·); x(0), v(0)) = ∫_0^T (1/N) Σ_{j=1}^{N} ‖x*_j − x̄*‖² dt + λ ∫_0^T (1/N) Σ_{j=1}^{N} ‖u*_j‖² dt
  ≤ lim inf_{k→∞} [ ∫_0^T (1/N) Σ_{j=1}^{N} ‖x^k_j − x̄^k‖² dt + λ ∫_0^T (1/N) Σ_{j=1}^{N} ‖u^k_j‖² dt ]
  = lim inf_{k→∞} J(u^k(·); x(0), v(0)) = lim_{n→∞} J(u^n(·); x(0), v(0)) = min_{u(·)∈U^N} J(u(·); x(0), v(0)).
While the existence of (x*, v*, u*) for the optimal control problem (2.3)-(2.4) has been obtained in Theorem 2.2, the Pontryagin Minimum Principle [7] yields first-order necessary conditions for the optimal control. Let (p*_i(t), q*_i(t)) ∈ R^d × R^d be adjoint variables associated to (x*_i, v*_i), and set p* = {p*_i}_{i=1}^N, q* = {q*_i}_{i=1}^N. Then the optimality system consists of a solution (x*, v*, u*, p*, q*) satisfying (2.3) along with the adjoint equations

−dp*_i/dt = −‖v*_i‖² q*_i + (κ/(mN)) Σ_{j=1}^{N} ( q*_j − ⟨x*_i, x*_j⟩ q*_i − ⟨x*_i, q*_i⟩ x*_j − ⟨x*_j, q*_j⟩ x*_j ) − q*_i ⟨u*_i, x*_i⟩ − u*_i ⟨x*_i, q*_i⟩ − (2/N) ( ⟨x̄*, x*_i⟩ x̄* − x*_i ),
−dq*_i/dt = p*_i − 2 ⟨x*_i, q*_i⟩ v*_i − (γ/m) q*_i,
p*_i(T) = 0,  q*_i(T) = 0,  i ∈ [N],        (2.12)

and the optimality condition

u* = arg min_{w∈R^{dN}} Σ_{j=1}^{N} ( ⟨q*_j, dv*_j/dt⟩ + (λ/N) ‖w_j‖² ) = −(N/(2λ)) [ q*_j − ⟨q*_j, x*_j⟩ x*_j ]_{j=1}^{N}.
Recall that the gradient of the functional J introduced in (2.4) is given as follows:

∇J = (2λ/N) u + [ q_j − ⟨q_j, x_j⟩ x_j ]_{j=1}^{N} = [ (2λ/N) u_j + ( q_j − ⟨q_j, x_j⟩ x_j ) ]_{j=1}^{N}.        (2.13)
We will apply this gradient form to the gradient descent method in Section 3.2.
2.2. First order model. We are also interested in the following first-order control problem:

d/dt x_i = (κ/N) Σ_{k=1}^{N} ( x_k − ⟨x_i, x_k⟩ x_i ) + u_i − ⟨u_i, x_i⟩ x_i,   x_i(0) = x_i^0 ∈ S^{d−1},   ∀ i ∈ [N],        (2.14)

with the following payoff functional

J̃(u(·); x(0)) = ∫_0^T (1/N) Σ_{i=1}^{N} ( ‖x_i − x̄‖² + λ ‖u_i‖² ) dt.        (2.15)
The existence result for the above optimal control problem can be obtained similarly to the second order model, so here we omit the proofs and only present the theorems:

Theorem 2.3. For any T > 0 and given u(·) ∈ U^N, there exists a pathwise unique global solution {x_i}_{i=1}^N to the Kuramoto system (2.14) up to time T. Moreover, for all i ∈ [N] and t ∈ [0, T] it holds that ‖x_i(t)‖ = 1.

Theorem 2.4. There exists some control u*_i ∈ L²(0, T; R^d), i ∈ [N], and corresponding trajectories {x*_i}_{i=1}^N solving the optimal control problem (2.14)-(2.15).
Let {p * i } N i=1 be adjoint variables associated to {x * i } N i=1 .
The corresponding PMP equation is given by
−dp*_i/dt = (κ/N) Σ_{j=1}^{N} ( p*_j − ⟨x*_i, p*_i⟩ x*_j − ⟨x*_j, p*_j⟩ x*_j − ⟨x*_i, x*_j⟩ p*_j ) − ⟨u*_i, x*_i⟩ p*_i − ⟨x*_i, p*_i⟩ u*_i + (2/N) ( x*_i − x̄* ⟨x*_i, x̄*⟩ ),        (2.16)
with the optimality condition
u* = arg min_{w∈R^{dN}} Σ_{j=1}^{N} ( ⟨p*_j, dx*_j/dt⟩ + (λ/N) ‖w_j‖² ) = −(N/(2λ)) [ p*_j − ⟨p*_j, x*_j⟩ x*_j ]_{j=1}^{N}.
Also, the gradient of the functional J̃ introduced in (2.15) can be expressed as

∇J̃ = (2λ/N) u + [ p_j + ⟨p_j, x_j⟩ x_j ]_{j=1}^{N}.        (2.17)
We will apply this gradient form to the gradient descent method in Section 3.1.
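For reference, the gradient formula (2.17) evaluated on a time-discretized control, state and adjoint is a direct array computation. The sketch below is ours and simply transcribes (2.17); the array layout (time steps first) is an assumption.

```python
import numpy as np

def grad_J_first_order(u, p, x, lam, N):
    """Gradient of the first-order payoff functional, following Eq. (2.17).

    u, p, x: arrays of shape (steps, N, d) sampled on the same time grid.
    Returns an array of the same shape as u.
    """
    inner = np.sum(p * x, axis=-1, keepdims=True)   # <p_j, x_j> at every time step
    return (2.0 * lam / N) * u + (p + inner * x)
```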
Algorithms and numeric simulations
In this section, we provide numeric simulations of controlled systems proposed in Section 2 in order to give a simple and immediate illustration of how the optimal control signal can be used to accelerate the process of reaching a consensus.
3.1. First order model with control. Firstly, we consider the first order control problem (2.14)-(2.15). To minimize the payoff functional J̃ defined in (2.15), we apply gradient descent with the Barzilai-Borwein method [2], using the explicit form of ∇J̃ given in (2.17).
Algorithm 1 Gradient descent with the Barzilai-Borwein step
Require: tol > 0, k_max, u^0, u^{−1}.
 k = 0;
 while ‖∇J̃(u^k)‖ > tol and k < k_max do
   1) Obtain x^k from (2.14) with u^k;
   2) Obtain p^k from (2.16) with x^k and u^k;
   3) Compute the learning rate based on the Barzilai-Borwein method:
        α_k := ⟨u^k − u^{k−1}, ∇J̃(u^k) − ∇J̃(u^{k−1})⟩ / ‖∇J̃(u^k) − ∇J̃(u^{k−1})‖²;
   4) Update u^{k+1} = u^k − α_k ∇J̃(u^k);
   5) k := k + 1;
 end while
In Algorithm 1, k_max is the maximum number of iterations, tol is the tolerance on ‖∇J̃‖, and we use the fourth-order Runge-Kutta method in Steps 1) and 2).
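The outer loop of Algorithm 1 can be sketched as follows. The solver callbacks (forward_solve for (2.14), adjoint_solve for (2.16), grad_J for (2.17)) are placeholders for the Runge-Kutta routines mentioned above and are not spelled out here.

```python
import numpy as np

def bb_gradient_descent(u0, forward_solve, adjoint_solve, grad_J,
                        tol=1e-6, kmax=200, alpha0=1e-2):
    """Gradient descent with the Barzilai-Borwein step of Algorithm 1."""
    u = u0.copy()
    u_prev, g_prev = None, None
    for _ in range(kmax):
        x = forward_solve(u)              # step 1): solve (2.14) with control u
        p = adjoint_solve(x, u)           # step 2): solve (2.16) backward in time
        g = grad_J(x, p, u)               # gradient of the payoff, Eq. (2.17)
        if np.linalg.norm(g) < tol:
            break
        if g_prev is None:
            alpha = alpha0                # no previous iterate available yet
        else:                             # step 3): Barzilai-Borwein learning rate
            du, dg = (u - u_prev).ravel(), (g - g_prev).ravel()
            alpha = np.dot(du, dg) / max(np.dot(dg, dg), 1e-16)
        u_prev, g_prev = u.copy(), g.copy()
        u = u - alpha * g                 # step 4): update the control
    return u
```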
In Figure 1, we compare the optimally controlled system (2.14)-(2.15) and the control-free swarm sphere model (1.1) with the same initial data and with the parameters set as

N = 20, d = 3, Δt = 0.01, T = 4, λ = 0.1, κ = 0.5.        (3.18)

As shown in Figure 1 (a), the controlled system (blue), under an approximated optimal control u* found with Algorithm 1, approaches a consensus state much faster than the control-free model (red). This can also be seen in Figure 1 (b): while the particles of the controlled system reach a consensus, the particles of the control-free system move only small distances. In summary, for the first order model, the controlled system reaches consensus much faster than the uncontrolled one.
3.2.
Second order model with control. Now we consider the second order control problem (2.3)-(2.4). To minimize the payoff functional J, we use the explicit form of ∇J introduced in (2.13). The same parameters given by (3.18) are used here, with the remaining parameters additionally set as m = 1 and γ = 1. In Figure 2 we give a comparison of the second order optimal control dynamics (2.3)-(2.4) and its control-free counterpart (1.2).
Figure 1. Controlled (blue) vs. control-free (red) first order dynamics. Panel (a) plots the time evolution of the variation of positions, defined as (1/N) Σ_{i=1}^{N} ‖x_i − x̄‖². Panel (b) shows the trajectories of the two systems, where both start from the same green initial points. At the end time T, the optimally controlled particles reach the unique blue consensus point, while the uncontrolled particles move only small distances to the red points (no consensus is obtained).

Figure 2. Controlled (blue) vs. control-free (red) second order dynamics. Panels (a) and (b) plot the time evolution of the variations of positions and velocities, respectively, and panel (c) shows the trajectories of the two systems.
. R Bailo, M Bongini, J A Carrillo, D Kalise, Optimal consensus control of the Cucker-Smale model. IFAC-PapersOnLine. 5113R. Bailo, M. Bongini, J. A. Carrillo, and D. Kalise. Optimal consensus control of the Cucker-Smale model. IFAC-PapersOnLine, 51(13):1-6, 2018.
Two-point step size gradient methods. J Barzilai, J M Borwein, IMA journal of numerical analysis. 81J. Barzilai and J. M. Borwein. Two-point step size gradient methods. IMA journal of numerical analysis, 8(1):141-148, 1988.
(un) conditional consensus emergence under perturbed and decentralized feedback controls. M Bongini, M Fornasier, D Kalise, Discrete & Continuous Dynamical Systems. 3594071M. Bongini, M. Fornasier, and D. Kalise. (un) conditional consensus emergence under perturbed and decentralized feedback controls. Discrete & Continuous Dynamical Systems, 35(9):4071, 2015.
Sparse stabilization and optimal control of the Cucker-Smale model. M Caponigro, M Fornasier, B Piccoli, E Trélat, Mathematical Control and Related Fields. 34M. Caponigro, M. Fornasier, B. Piccoli, and E. Trélat. Sparse stabilization and optimal control of the Cucker-Smale model. Mathe- matical Control and Related Fields, 3(4):447-466, 2013.
Cooperative control with adaptive graph Laplacians for spacecraft formation flying. I Chang, S.-J Chung, L Blackmore, 49th IEEE Conference on Decision and Control (CDC). IEEEI. Chang, S.-J. Chung, and L. Blackmore. Cooperative control with adaptive graph Laplacians for spacecraft formation flying. In 49th IEEE Conference on Decision and Control (CDC), pages 4926-4933. IEEE, 2010.
Molecular dynamics and rheological properties of concentrated solutions of rodlike polymers in isotropic and liquid crystalline phases. M Doi, Journal of Polymer Science: Polymer Physics Edition. 192M. Doi. Molecular dynamics and rheological properties of concentrated solutions of rodlike polymers in isotropic and liquid crystalline phases. Journal of Polymer Science: Polymer Physics Edition, 19(2):229-243, 1981.
An introduction to mathematical optimal control theory version 0.2. L C Evans, L. C. Evans. An introduction to mathematical optimal control theory version 0.2. Lecture notes available at http://math. berkeley. edu/˜evans/control. course. pdf, 1983.
A second-order particle swarm model on a sphere and emergent dynamics. S.-Y Ha, D Kim, SIAM Journal on Applied Dynamical Systems. 181S.-Y. Ha and D. Kim. A second-order particle swarm model on a sphere and emergent dynamics. SIAM Journal on Applied Dynamical Systems, 18(1):80-116, 2019.
Flocking and synchronization of particle models. Quarterly of applied mathematics. S.-Y Ha, C Lattanzio, B Rubino, M Slemrod, 69S.-Y. Ha, C. Lattanzio, B. Rubino, and M. Slemrod. Flocking and synchronization of particle models. Quarterly of applied mathe- matics, 69(1):91-103, 2011.
Non-Abelian Kuramoto models and synchronization. M Lohe, Journal of Physics A: Mathematical and Theoretical. 4239395101M. Lohe. Non-Abelian Kuramoto models and synchronization. Journal of Physics A: Mathematical and Theoretical, 42(39):395101, 2009.
Spontaneous motion in hierarchically assembled active matter. T. Sanchez, D. T. Chen, S. J. DeCamp, M. Heymann, and Z. Dogic. Nature, 491(7424):431-434, 2012.

(Hui Huang) Institute of Mathematics and Scientific Computing, University of Graz, Universitätspl. 3, 8010 Graz, Austria. Email address: [email protected]
Email address: [email protected]
| []
|
[
"Dynamics of the breakdown of granular clusters",
"Dynamics of the breakdown of granular clusters"
]
| [
"François Coppex \nDepartment of Physics\nUniversity of Genève\n1211Genève 4CHSwitzerland\n",
"Michel Droz \nDepartment of Physics\nUniversity of Genève\n1211Genève 4CHSwitzerland\n",
"Adam Lipowski \nDepartment of Physics\nUniversity of Genève\n1211Genève 4CHSwitzerland\n\nDepartment of Physics\nA. Mickiewicz University\n61-614PoznańPoland\n"
]
| [
"Department of Physics\nUniversity of Genève\n1211Genève 4CHSwitzerland",
"Department of Physics\nUniversity of Genève\n1211Genève 4CHSwitzerland",
"Department of Physics\nUniversity of Genève\n1211Genève 4CHSwitzerland",
"Department of Physics\nA. Mickiewicz University\n61-614PoznańPoland"
]
| []
| Recently van der Meer et al. studied the breakdown of a granular cluster (Phys. Rev. Lett. 88, 174302 (2002)). We reexamine this problem using an urn model, which takes into account fluctuations and finite-size effects. General arguments are given for the absence of a continuous transition when the number of urns (compartments) is greater than two. Monte Carlo simulations show that the lifetime of a cluster τ diverges at the limits of stability as τ ∼ N 1/3 , where N is the number of balls. After the breakdown, depending on the dynamical rules of our urn model, either normal or anomalous diffusion of the cluster takes place. | 10.1103/physreve.66.011305 | [
"https://arxiv.org/pdf/cond-mat/0205058v1.pdf"
]
| 42,399,806 | cond-mat/0205058 | 00ff3d60a3406f3e1bbe216426e0061f4c4db91a |
Dynamics of the breakdown of granular clusters
3 May 2002
François Coppex
Department of Physics
University of Genève
1211Genève 4CHSwitzerland
Michel Droz
Department of Physics
University of Genève
1211Genève 4CHSwitzerland
Adam Lipowski
Department of Physics
University of Genève
1211Genève 4CHSwitzerland
Department of Physics
A. Mickiewicz University
61-614PoznańPoland
Dynamics of the breakdown of granular clusters
3 May 2002
Recently van der Meer et al. studied the breakdown of a granular cluster (Phys. Rev. Lett. 88, 174302 (2002)). We reexamine this problem using an urn model, which takes into account fluctuations and finite-size effects. General arguments are given for the absence of a continuous transition when the number of urns (compartments) is greater than two. Monte Carlo simulations show that the lifetime of a cluster τ diverges at the limits of stability as τ ∼ N 1/3 , where N is the number of balls. After the breakdown, depending on the dynamical rules of our urn model, either normal or anomalous diffusion of the cluster takes place.
I. INTRODUCTION
Dissipation of kinetic energy during inelastic collisions in gaseous granular systems has profound consequences [1,2]. One of the most spectacular is the formation of spatial inhomogeneities [3], which contrasts drastically with the uniform distribution of molecules or atoms, whose dynamics is essentially elastic.
Some time ago Schlichting and Nordmeier presented a simple experiment which demonstrates some consequences of the inelasticity of granular systems [4]. They used a container separated into two equal compartments by a wall which has a narrow horizontal slit at a certain height. The container is filled with balls (plastic or metallic) and subjected to vertical shaking. For vigorous shaking the balls distribute equally between the two compartments. However, when the shaking is sufficiently mild, a nonsymmetric distribution occurs. In such a case the compartment with the majority of balls, due to numerous inelastic collisions, is effectively cooler than the other one. Consequently, fewer balls leave this compartment, which stabilizes such an asymmetric distribution of balls. To explain this experiment, Eggers derived a phenomenological equation for the flux F(n) of balls leaving a given compartment [5]: $F(n) = Cn^2 \exp(-Bn^2)$.
(1)
In the above equation n is the concentration of balls in a given urn, and B and C are constants which depend on the properties of the balls, the typical sizes of the system and the parameters of the shaking (the constant C may be eliminated by an appropriate redefinition of the time scale). In agreement with experiment, eq. (1) predicts an unequal distribution of balls for sufficiently large B. The above experiment was repeated with a number of compartments L greater than two by van der Meer et al. [6]. In such a case the formation of an unequal distribution of balls is accompanied by strong hysteresis, which is in agreement with theoretical analysis [7]. Moreover, certain aspects of these phenomena for L = 2 were approached using hydrodynamic equations [8].
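To get a feel for eq. (1), the following minimal Python sketch (our own illustration, not part of the cited works; C is set to 1 by rescaling time) locates the steady states of the two-compartment version by solving F(n) = F(1−n) and checks the stability of the symmetric state through the sign of F'(1/2); the critical value B = 4 quoted later in the text emerges from this condition.

```python
import numpy as np

def flux(n, B):
    return n**2 * np.exp(-B * n**2)          # eq. (1) with C = 1

def asymmetric_root(B, m=20000):
    """Dilute-urn filling of an asymmetric two-urn steady state, or None if absent."""
    g = lambda n: flux(n, B) - flux(1.0 - n, B)
    grid = np.linspace(1e-4, 0.5 - 1e-4, m)
    vals = g(grid)
    idx = np.where(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0)[0]
    if len(idx) == 0:
        return None
    a, b = grid[idx[0]], grid[idx[0] + 1]    # bisection on the bracketing interval
    for _ in range(60):
        c = 0.5 * (a + b)
        a, b = (a, c) if g(a) * g(c) <= 0 else (c, b)
    return 0.5 * (a + b)

for B in (3.0, 4.0, 6.0):
    slope = np.exp(-B / 4) * (1 - B / 4)     # F'(1/2): symmetric state stable if positive
    print(B, round(slope, 4), asymmetric_root(B))
```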
Recently, van der Meer et al. examined the case of L > 2 further [9]. In particular, they studied dynamics of configurations (clusters) starting from all balls localized in a single compartment. Using a theoretical model based on eq. (1), they have shown that when shaking is strong enough such a cluster breaks down and diffuses with the anomalous diffusion exponent 1/3 (in the following we refer to this model as MWL). For less vigorous shaking, the cluster remains relatively stable and only after some time it abruptly breaks down. Some of their predictions were confirmed experimentally.
In the framework of the MWL model it is rather difficult to include the effect of fluctuations. Such fluctuations might originate, for example, from the finite number of balls, and they might play an important role especially close to critical points. In an attempt to take such effects into account, a generalization of Ehrenfest's urn model [10] was recently examined in the case L = 2 [11]. The relative simplicity of the model allows for a detailed study of its various characteristics.
The motivation of the present paper is to re-examine the breakdown of granular clusters using the urn model in the case L > 2. In section II we define the model and present its steady-state phase diagram for L = 3. We also argue that, in analogy to the Potts model in the mean-field limit, there are no continuous transitions for L > 2. In section III we examine the dynamics of the breakdown of clusters in a similar way as van der Meer et al. [9]. Although our results are qualitatively similar to theirs, in our model the diffusion of the cluster is normal, with exponent 1/2. Moreover, we calculate the size dependence of the lifetime of a cluster τ and show that at the limits of stability it scales as N^{1/3}. In section IV we present a modified version of the urn model which in the steady state reproduces the flux (1). The diffusion of the broken-down cluster is then shown to be anomalous with exponent 1/3, as was already found in [9]. It was suggested that the essential features of the MWL model are independent of the precise form of the flux (1), as long as it has a single hump [9]. On the contrary, our results show that at least the diffusion exponent depends on some details of the flux and not only on its qualitative shape (in our models the flux is also a single-hump function). Section V contains our conclusions.
II. MODEL AND ITS STEADY-STATE PROPERTIES
Our model is a straightforward generalization of the two-urn case [11]: N particles are distributed between L urns and the number of particles in the i-th urn is denoted by $N_i$ ($\sum_{i=1}^{L} N_i = N$). Urns are connected through slits sequentially: the i-th urn is connected with the (i−1)-th and the (i+1)-th. Moreover, periodic boundary conditions are used, i.e., the first and the L-th urns are connected. Particles in a given urn (say the i-th) are subject to thermal fluctuations, and the temperature T of this urn depends on the number of particles in it as:
$T(n_i) = T_0 + \Delta\,(1 - n_i),$    (2)
where n i is a fraction of the total number of particles in a given urn (n i = N i /N ) and T 0 and ∆ are positive constants. Equation (2) is the simplest function which reproduces the fact that due to inelastic collisions between particles, their effective temperature decreases as their number in a given urn increases. Next, we define the dynamics of the model as:
(i) One of the N particles is selected randomly.
(ii) With probability exp[−1/T (n i )] the selected particle is placed in a randomly chosen neighboring urn, where i is the urn of a selected particle.
The above rules imply that the flux of particles leaving the i-th urn is, up to a proportionality constant, given by
$F(n_i) = n_i \exp\!\left[-\frac{1}{T(n_i)}\right],$    (3)
where $T(n_i)$ is defined in (2). Let us notice that the flux (3), similarly to (1), is a single-hump function. Having an expression for the flux, we can write the equations of motion as:
$\frac{dn_i}{dt} = \tfrac{1}{2}F(n_{i-1}) + \tfrac{1}{2}F(n_{i+1}) - F(n_i),$    (4)
where i = 1, 2, ..., L. Steady-state properties of this model can be obtained using a similar analysis as in the L = 2 case [11], or as for L > 2 but with fluxes given by eq. (1) [7]. The results of this analysis in the L = 3 case are presented in Fig. 1. In regions I and II the symmetric phase (n_1 = n_2 = n_3 = 1/3) is stable. The continuous line in Fig. 1, which locates the limit of stability of this phase, is given by the following equation:
$T_0 = \sqrt{\Delta/3} - \frac{2\Delta}{3}.$    (5)
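As a quick numerical cross-check of eq. (5), the sketch below (our own illustration; it assumes, as we read the text, that the boundary corresponds to the vanishing of F'(1/3), the derivative of the flux (3) at the symmetric occupation n = 1/3) evaluates F'(1/3) on and slightly off the line.

```python
import numpy as np

def flux_prime(n, T0, Delta):
    # derivative of F(n) = n*exp(-1/T(n)) with T(n) = T0 + Delta*(1 - n)
    T = T0 + Delta * (1.0 - n)
    return np.exp(-1.0 / T) * (1.0 - n * Delta / T**2)

for Delta in (0.2, 0.3, 0.5):
    T0_c = np.sqrt(Delta / 3.0) - 2.0 * Delta / 3.0    # eq. (5)
    print(Delta,
          round(T0_c, 6),
          round(flux_prime(1/3, T0_c, Delta), 12),     # ~0 on the boundary line
          flux_prime(1/3, T0_c + 0.01, Delta) > 0,     # symmetric phase stable above the line
          flux_prime(1/3, T0_c - 0.01, Delta) > 0)     # and unstable below it
```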
Equation (5) has a very similar form to the corresponding equation in the L = 2 case [11]. The asymmetric solution, where one of the urns holds the majority of balls and the remaining two urns contain only a small, equal fraction of balls (n_1 > n_2 = n_3), is stable in regions II and III. The line separating regions I and II can be determined only numerically as the solution of a transcendental equation, similarly to the L = 2 case [11]. There is also a third type of solution where two urns contain the majority of balls and the third urn has only a small fraction of them (n_1 = n_2 > n_3). Such a solution, which has saddle-like stability, exists only in region III. Similar solutions can be found for the MWL model [6,7]. An important qualitative difference with the case L = 2 is that regions I and III are always separated by region II, where both symmetric and asymmetric solutions are stable; hence the tricritical point is located at the origin T_0 = Δ = 0. It means that a phase transition between these two phases is always accompanied by hysteresis effects. On the other hand, in the L = 2 case continuous transitions are possible, which are not accompanied by hysteresis [11]. Such behaviour is actually in agreement with experimental data and with the MWL model [6]. Does this qualitative difference have a more general explanation, or is it rather a coincidental property? In our opinion, the absence of continuous transitions for L > 2 is a generic property of such systems and can, at least to some extent, be understood. First, let us notice that the phase transition for L = 2 is a manifestation of spontaneous symmetry breaking in the system: in a certain regime one of the two identical urns is preferentially filled with balls. Such a situation resembles the phase transition in the S = 1/2 Ising model, where below a certain temperature the up-down symmetry is broken and the system acquires spontaneous magnetization [12]. Actually, this analogy can be confirmed more quantitatively. We have shown that for L = 2 and at the critical point the probability distribution has the same moment ratios as in the Ising model in dimension d greater than the so-called upper critical dimension (d > 4) [13]. Let us notice that in our model balls are selected randomly, which means that this is essentially a mean-field model. Moreover, our model is a dynamical, spaceless model, contrary to the Ising model, which is a lattice equilibrium model. The fact that such different models share some similarities shows that, as far as the critical behaviour is concerned, what really matters is symmetry. In both cases it is the Z_2 symmetry which is broken below the critical point.
Pushing this analogy further, we expect that for L > 2 the phase transition in our model should be similar to the phase transition of the L-state Potts model above the critical dimension [14]. In the L-state Potts model at sufficiently low temperature one of the L symmetric ground states is preferentially selected. However, it is well-known that above the upper critical dimension and for L > 2 there are only discontinuous transitions in the Potts model [14]. Consequently, the transition in the urn model, and most likely in related models which preserves Z L symmetry of compartments, should be discontinuous.
Let us notice that one can easily break the symmetry of the compartments e.g., changing the boundary conditions, which in our analogy introduces some asymmetry in the Potts model. It is possible that in such a case the system effectively will become similar to the L = 2 system and will exhibit a continuous transition. Finally, we expect that for L > 3 the phase diagram should be topologically similar to the one for L = 3 shown in Fig. 1.
III. DYNAMICAL PROPERTIES OF CLUSTER CONFIGURATIONS
In the present section we study certain dynamical properties of cluster configurations. We used Monte Carlo simulation. Since it is rather straightforward, we omit a more detailed description of the numerical implementation of the dynamical rules of our model. Initially, we place all balls in one urn and examine its subsequent evolution. If the parameters T 0 and ∆ are such that the system is in region I then such a cluster is unstable and after some time due to fluctuations it breaks down and balls spread throughout all urns. This is illustrated in Fig. 2 which shows the concentration of balls in the urn in which the balls were initially placed. Let us notice that (i) the breakdown is relatively abrupt and during the evolution up to the breakdown the concentration of balls only slightly decreases; (ii) upon approaching the line separating regions I and II the lifetime of the cluster τ increases. Since in region II the asymmetric state has an infinite lifetime it means that τ must diverge upon approaching this region. This behaviour is seen in Fig. 3. In addition to the three-urn case we also made analogous measurements of τ for L = 5 and 7 and the results are also shown in Fig. 3. Let us notice that results presented in Fig. 2 and Fig. 3 are similar to those obtained by van der Meer [9], although they are parametrized by a different variable.
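Although the implementation is omitted in the text, a minimal Monte Carlo sketch of the dynamical rules (i)-(ii) is given below; the breakdown criterion (the occupation of the initially filled urn dropping below one half) and the parameter values are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def cluster_lifetime(N=1000, L=3, T0=0.3, Delta=0.3, seed=0, max_sweeps=200_000):
    rng = np.random.default_rng(seed)
    counts = np.zeros(L, dtype=int)
    counts[0] = N                                   # the cluster: all balls in urn 0
    for sweep in range(1, max_sweeps + 1):
        for _ in range(N):                          # one sweep = N attempted moves
            i = rng.choice(L, p=counts / N)         # urn of a uniformly chosen ball, rule (i)
            T = T0 + Delta * (1.0 - counts[i] / N)  # eq. (2)
            if rng.random() < np.exp(-1.0 / T):     # rule (ii)
                j = (i + rng.choice((-1, 1))) % L   # random neighbouring urn (periodic ring)
                counts[i] -= 1
                counts[j] += 1
        if counts[0] < N / 2:                       # illustrative breakdown criterion
            return sweep
    return None

print(cluster_lifetime())
```

For parameters in region I (as chosen here), the function returns a finite lifetime; averaging it over many seeds and varying N would reproduce the kind of data shown in Figs. 3 and 4.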
The limit of stability of the asymmetric phase can be regarded as a critical point. Thus, we expect that exactly at this point the lifetime τ has a power-law divergence, τ ∼ N^z with z > 0. Such a behaviour is shown in Fig. 4. From the slope of the straight line, which is a least-squares fit to our data, we estimate z = 0.32(3). Let us notice that in the two-urn model the lifetime τ exhibits a very similar divergence at the limits of stability [11]. In the case L = 2 more precise calculations were possible, strongly suggesting that z = 1/3, which is also consistent with the present three-urn result. Let us emphasize that because in our model the number of balls is finite, we can study size-dependent quantities as shown in Fig. 4. Such calculations would not be possible for models solely based on steady-state equations.
Finally, let us examine the breakdown of a cluster in the many-urn case L ≫ 1. In such a case a continuous approach to the MWL model shows that after breaking down, the cluster diffuses with the anomalous exponent 1/3 [9]. Results of our simulations are shown in Fig. 5. From these data we conclude that spreading of a cluster occurs with the ordinary exponent 1/2 rather than anomalously. Ordinary diffusion in our model can be also easily explained analytically applying basically the same continuous approach as used in [9]. In this approach the set of equations of motion (4) is transformed into a partial differential equation. Then, one immediately realizes that the linear term in front of the exponent in eq. (3) leads to the ordinary diffusion equation. On the other hand, the anomalous diffusion of MWL model can be traced back to the quadratic (in n) term in the flux in eq. (1). This quadratic term is related with two-particle collisions [9].
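The continuum argument can be illustrated directly on the discrete equations of motion (4). The sketch below (our own illustration with arbitrarily chosen parameters in region I, not the simulations behind Fig. 5) integrates them with an explicit Euler scheme for a single initially occupied urn on a large ring and fits the late-time decay of the central occupancy, which approaches the ordinary-diffusion exponent 1/2.

```python
import numpy as np

T0, Delta = 0.5, 0.3
L, dt, steps = 401, 0.05, 200_000
n = np.zeros(L)
n[L // 2] = 1.0                                     # all mass in the central urn

def flux(n):
    return n * np.exp(-1.0 / (T0 + Delta * (1.0 - n)))   # eq. (3)

times, peaks = [], []
for step in range(1, steps + 1):
    F = flux(n)
    n += dt * (0.5 * np.roll(F, 1) + 0.5 * np.roll(F, -1) - F)   # eq. (4), periodic ring
    if step % 2000 == 0:
        times.append(step * dt)
        peaks.append(n[L // 2])

t, p = np.array(times), np.array(peaks)
late = t > t[-1] / 10                               # fit only the late-time decade
slope = np.polyfit(np.log(t[late]), np.log(p[late]), 1)[0]
print("fitted decay exponent:", round(slope, 3))    # expected to be close to -0.5
```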
IV. THE PAIR MODEL
One can easily construct urn models for which the expression for the flux has a different form. In particular, redefining the effective temperature (2) and drawing a pair of balls at each step, we obtain an urn model with a flux of exactly the same form as eq. (1). This dynamics takes into account some of the two-particle correlations. It allows us to recover some properties of the MWL model and to establish further results.
The model, which we call a pair model, is similar to the previously described one, except that its dynamics is now defined as:
(i) Two different balls are selected randomly.
(ii) If and only if the two balls are in the same urn, with probability $\exp[-Bn_i^2]$ the selected balls are placed in the same randomly chosen neighboring urn, where i is the urn of the selected particles.
One can easily see that the probability that two randomly selected balls belong to the i-th urn is given as
$\frac{N_i}{N} \cdot \frac{N_i - 1}{N - 1}$, which for $N \to \infty$ becomes $n_i^2$. Multiplying $n_i^2$ with the transition probability $\exp[-Bn_i^2]$, we obtain that the flux in the pair model is proportional to eq. (1). It means that, as far as the steady-state properties are concerned, the pair model is equivalent to the MWL model [6,7]. In particular, for L = 2 one easily obtains the critical value B = 4 for the continuous transition between the symmetric (B < 4) and asymmetric (B > 4) phase. For L = 3 one obtains two critical points, B_1 = 6.552703411... and B_2 = 9. The first one can only be determined numerically. Similarly to Fig. 1, for B < B_2 the symmetric solution is stable, whereas for B > B_1 the asymmetric solution is stable. In the interval B ∈ [B_1, B_2] both symmetric and asymmetric solutions are stable; this is the interval showing hysteresis with respect to the driving parameter B.
Qualitatively, the dynamical properties of cluster configurations in the pair model are similar to those described in the previous section. In particular, for L = 3 and B = B_1, the average lifetime of a cluster τ as a function of the number of balls N once more shows a power-law divergence τ ∼ N^z, with z = 0.31(3), suggesting that z = 1/3. This shows a certain universality of this exponent with respect to different dynamical rules.
Finally, Fig. 6 shows the diffusion of the broken down cluster. Since the asymptotic slope of our data is very close to 1/3 we conclude that in this case the diffusion is anomalous, as already predicted by van der Meer et al. who used the continuous approach [9].
The pair model and the model examined in the previous section exhibit qualitatively similar behaviour for most of the physical quantities. The main difference is the diffusion: it is anomalous in the pair model and ordinary in the model examined in the previous section. It would be desirable to examine the nature of diffusion in such systems experimentally.
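For completeness, a sketch of the elementary update of the pair model is given below; the time unit, the parameter values and the deliberately unoptimized loop are our own choices and should not be read as the authors' implementation.

```python
import numpy as np

def pair_move(counts, B, rng):
    """One elementary step; counts[i] = number of balls in urn i on a periodic ring."""
    N, L = counts.sum(), len(counts)
    balls = np.repeat(np.arange(L), counts)             # urn label of every ball
    a, b = balls[rng.choice(N, size=2, replace=False)]  # two distinct balls, rule (i)
    if a == b and rng.random() < np.exp(-B * (counts[a] / N) ** 2):   # rule (ii)
        j = (a + (1 if rng.random() < 0.5 else -1)) % L
        counts[a] -= 2
        counts[j] += 2

# toy usage: start from a cluster and let it spread (B below the clustering threshold)
rng = np.random.default_rng(1)
counts = np.zeros(51, dtype=int)
counts[25] = 500
for _ in range(200_000):
    pair_move(counts, B=2.0, rng=rng)
print(counts[20:31])
```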
V. CONCLUSIONS
We examined two L > 2 versions of the L-urn model of compartmentalization of vibrated sand. Our models qualitatively recover experimental findings and previous steady-state calculations. In addition, our models take into account fluctuations caused by the finite number of balls. Using symmetry properties, we related them to the high-dimensional Potts model and argued that for L > 2 phase transitions in such systems should be discontinuous. Although several quantities exhibit qualitatively similar behaviour for the two different versions of the model, there are important differences too. In particular, these models predict a different diffusion of a broken-down cluster, which can be either ordinary or anomalous. This shows that the type of diffusion is very sensitive to the dynamical rules of the model and, consequently, to the form of the flux.
FIG. 1: The steady-state phase diagram for the three-urn model. See text for a description of the phases.
FIG. 2: The time evolution of the fraction of balls of the cluster n_cl close to the limits of stability of the asymmetric phase (N = 5·10^4, L = 3). The values of T_0 are indicated. For Δ = 0.3 the limit of stability of the asymmetric phase is at T_0 = 0.169829772... For a larger number of balls N, stochastic fluctuations will diminish.
FIG. 3: The average lifetime of a cluster τ as a function of T_0 for different numbers of urns L. Each point is an average of at least 300 runs.
FIG. 4: The average lifetime of a cluster τ as a function of the number of balls N at the limits of stability of the asymmetric phase. Each point is an average of at least 300 runs.
FIG. 5: The average occupancy of the central urn N_cl as a function of time t. The slope of the decay is very close to 0.5, which confirms the diffusive nature of the spreading (N_cl ∼ t^{−1/2}). Each curve is obtained by averaging over 50 independent runs.
FIG. 6: The average occupancy of the central urn N_cl as a function of time t for the pair model. The slope of the decay is very close to 1/3, which confirms the anomalous diffusive nature of the spreading (N_cl ∼ t^{−1/3}). Each curve is obtained by averaging over 50 independent runs.
Acknowledgments: This work was partially supported by the Swiss National Science Foundation and the project OFES 00-0578 "COSYC OF SENS".
[1] P. B. Umbanhowar, F. Melo, and H. L. Swinney, Nature 382, 793 (1996).
[2] T. Shinbrot and F. J. Muzzio, Nature 410, 251 (2001).
[3] I. Goldhirsch and G. Zanetti, Phys. Rev. Lett. 70, 1619 (1993).
[4] H. J. Schlichting and V. Nordmeier, Math. Naturwissenschaften Unterr. 49, 323 (1996) (in German).
[5] J. Eggers, Phys. Rev. Lett. 83, 5322 (1999).
[6] K. van der Weele, D. van der Meer, M. Versluis, and D. Lohse, Europhys. Lett. 53, 328 (2001).
[7] D. van der Meer, K. van der Weele, and D. Lohse, Phys. Rev. E 63, 061304 (2001).
[8] J. J. Brey, F. Moreno, R. García-Rojo, and M. J. Ruiz-Montero, Phys. Rev. E 65, 011305 (2001).
[9] D. van der Meer, K. van der Weele, and D. Lohse, Phys. Rev. Lett. 88, 174302 (2002).
[10] P. Ehrenfest and T. Ehrenfest, The Conceptual Foundations of the Statistical Approach in Mechanics (Dover, New York, 1990); M. Kac and J. Logan, in Fluctuation Phenomena, edited by E. W. Montroll and J. L. Lebowitz (North-Holland, Amsterdam, 1987).
[11] A. Lipowski and M. Droz, Phys. Rev. E 65, 031307 (2002).
[12] K. Huang, Statistical Mechanics (John Wiley & Sons, New York, 1987).
[13] A. Lipowski and M. Droz, e-print cond-mat/0201472.
[14] F. Y. Wu, Rev. Mod. Phys. 54, 235 (1982).
| []
|
[
"Quantitative comparison of different approaches for reconstructing the carbon-binder domain from tomographic image data of cathodes in lithium-ion batteries and its influence on electrochemical properties",
"Quantitative comparison of different approaches for reconstructing the carbon-binder domain from tomographic image data of cathodes in lithium-ion batteries and its influence on electrochemical properties"
]
| [
"Benedikt Prifling es:[email protected] \nInstitute of Stochastics\nUlm University\n89081UlmGermany\n",
"Matthias Neumann \nInstitute of Stochastics\nUlm University\n89081UlmGermany\n",
"Simon Hein \nGerman Aerospace Center (DLR)\nInstitute of Engineering Thermodynamics\n705696StuttgartGermany\n\nHelmholtz Institute for Electrochemical Energy Storage (HIU)\n89081UlmGermany\n",
"Timo Danner \nGerman Aerospace Center (DLR)\nInstitute of Engineering Thermodynamics\n705696StuttgartGermany\n\nHelmholtz Institute for Electrochemical Energy Storage (HIU)\n89081UlmGermany\n",
"Emanuel Heider \nZSW-Zentrum für Sonnenenergie-und Wasserstoff-Forschung Baden-Württemberg\n89081UlmGermany\n",
"Alice Hoffmann \nZSW-Zentrum für Sonnenenergie-und Wasserstoff-Forschung Baden-Württemberg\n89081UlmGermany\n",
"Philipp Rieder \nInstitute of Stochastics\nUlm University\n89081UlmGermany\n",
"André Hilger \nInstitute of Applied Materials\nHelmholtz-Zentrum Berlin für Materialien und Energie\n14109BerlinGermany\n",
"Markus Osenberg \nDepartment of Materials Science and Technology\nTU Berlin\n10623BerlinGermany\n",
"Ingo Manke \nInstitute of Applied Materials\nHelmholtz-Zentrum Berlin für Materialien und Energie\n14109BerlinGermany\n",
"Margret Wohlfahrt-Mehrens \nZSW-Zentrum für Sonnenenergie-und Wasserstoff-Forschung Baden-Württemberg\n89081UlmGermany\n",
"Arnulf Latz \nGerman Aerospace Center (DLR)\nInstitute of Engineering Thermodynamics\n705696StuttgartGermany\n\nHelmholtz Institute for Electrochemical Energy Storage (HIU)\n89081UlmGermany\n\nInstitute of Electrochemistry\nUlm University\n89081UlmGermany\n",
"Volker Schmidt \nInstitute of Stochastics\nUlm University\n89081UlmGermany\n"
]
| [
"Institute of Stochastics\nUlm University\n89081UlmGermany",
"Institute of Stochastics\nUlm University\n89081UlmGermany",
"German Aerospace Center (DLR)\nInstitute of Engineering Thermodynamics\n705696StuttgartGermany",
"Helmholtz Institute for Electrochemical Energy Storage (HIU)\n89081UlmGermany",
"German Aerospace Center (DLR)\nInstitute of Engineering Thermodynamics\n705696StuttgartGermany",
"Helmholtz Institute for Electrochemical Energy Storage (HIU)\n89081UlmGermany",
"ZSW-Zentrum für Sonnenenergie-und Wasserstoff-Forschung Baden-Württemberg\n89081UlmGermany",
"ZSW-Zentrum für Sonnenenergie-und Wasserstoff-Forschung Baden-Württemberg\n89081UlmGermany",
"Institute of Stochastics\nUlm University\n89081UlmGermany",
"Institute of Applied Materials\nHelmholtz-Zentrum Berlin für Materialien und Energie\n14109BerlinGermany",
"Department of Materials Science and Technology\nTU Berlin\n10623BerlinGermany",
"Institute of Applied Materials\nHelmholtz-Zentrum Berlin für Materialien und Energie\n14109BerlinGermany",
"ZSW-Zentrum für Sonnenenergie-und Wasserstoff-Forschung Baden-Württemberg\n89081UlmGermany",
"German Aerospace Center (DLR)\nInstitute of Engineering Thermodynamics\n705696StuttgartGermany",
"Helmholtz Institute for Electrochemical Energy Storage (HIU)\n89081UlmGermany",
"Institute of Electrochemistry\nUlm University\n89081UlmGermany",
"Institute of Stochastics\nUlm University\n89081UlmGermany"
]
| []
| It is well known that the spatial distribution of the carbon-binder domain (CBD) offers a large potential to further optimize lithium-ion batteries. However, it is challenging to reconstruct the CBD from tomographic image data obtained by synchrotron tomography. In the present paper, we consider several approaches to segment 3D image data of two different cathodes into three phases, namely active material, CBD and pores. More precisely, we focus on global thresholding, a local closing approach based on EDX data, a k-means clustering method, and a procedure based on a neural network that has been trained by correlative microscopy, i.e., based on data gained by synchrotron tomography and FIB-SEM data representing the same electrode. We quantify the impact of the considered segmentation approaches on morphological characteristics as well as on the resulting performance by spatially-resolved transport simulations. Furthermore, we use experimentally determined electrochemical properties to identify an appropriate range for the effective transport parameter of the CBD. The developed methodology is applied to two differently manufactured cathodes, namely an ultra-thick unstructured cathode and a two-layer cathode with varying CBD content in both layers. This comparison elucidates the impact of a specific structuring concept on the 3D microstructure of cathodes. | 10.1002/ente.202200784 | [
"https://export.arxiv.org/pdf/2207.14389v1.pdf"
]
| 251,196,949 | 2207.14389 | 808eeb9d0fbe1dddf1268f1b0582e6d66b0db2e3 |
Quantitative comparison of different approaches for reconstructing the carbon-binder domain from tomographic image data of cathodes in lithium-ion batteries and its influence on electrochemical properties
28 Jul 2022
Benedikt Prifling es:[email protected]
Institute of Stochastics
Ulm University
89081UlmGermany
Matthias Neumann
Institute of Stochastics
Ulm University
89081UlmGermany
Simon Hein
German Aerospace Center (DLR)
Institute of Engineering Thermodynamics
705696StuttgartGermany
Helmholtz Institute for Electrochemical Energy Storage (HIU)
89081UlmGermany
Timo Danner
German Aerospace Center (DLR)
Institute of Engineering Thermodynamics
705696StuttgartGermany
Helmholtz Institute for Electrochemical Energy Storage (HIU)
89081UlmGermany
Emanuel Heider
ZSW-Zentrum für Sonnenenergie-und Wasserstoff-Forschung Baden-Württemberg
89081UlmGermany
Alice Hoffmann
ZSW-Zentrum für Sonnenenergie-und Wasserstoff-Forschung Baden-Württemberg
89081UlmGermany
Philipp Rieder
Institute of Stochastics
Ulm University
89081UlmGermany
André Hilger
Institute of Applied Materials
Helmholtz-Zentrum Berlin für Materialien und Energie
14109BerlinGermany
Markus Osenberg
Department of Materials Science and Technology
TU Berlin
10623BerlinGermany
Ingo Manke
Institute of Applied Materials
Helmholtz-Zentrum Berlin für Materialien und Energie
14109BerlinGermany
Margret Wohlfahrt-Mehrens
ZSW-Zentrum für Sonnenenergie-und Wasserstoff-Forschung Baden-Württemberg
89081UlmGermany
Arnulf Latz
German Aerospace Center (DLR)
Institute of Engineering Thermodynamics
705696StuttgartGermany
Helmholtz Institute for Electrochemical Energy Storage (HIU)
89081UlmGermany
Institute of Electrochemistry
Ulm University
89081UlmGermany
Volker Schmidt
Institute of Stochastics
Ulm University
89081UlmGermany
Quantitative comparison of different approaches for reconstructing the carbon-binder domain from tomographic image data of cathodes in lithium-ion batteries and its influence on electrochemical properties
28 Jul 2022. Keywords: 3D imaging, carbon-binder domain, electrochemical performance, image segmentation, microstructure, modeling and simulation, structuring concept for lithium-ion batteries. * Corresponding author.
It is well known that the spatial distribution of the carbon-binder domain (CBD) offers a large potential to further optimize lithium-ion batteries. However, it is challenging to reconstruct the CBD from tomographic image data obtained by synchrotron tomography. In the present paper, we consider several approaches to segment 3D image data of two different cathodes into three phases, namely active material, CBD and pores. More precisely, we focus on global thresholding, a local closing approach based on EDX data, a k-means clustering method, and a procedure based on a neural network that has been trained by correlative microscopy, i.e., based on data gained by synchrotron tomography and FIB-SEM data representing the same electrode. We quantify the impact of the considered segmentation approaches on morphological characteristics as well as on the resulting performance by spatially-resolved transport simulations. Furthermore, we use experimentally determined electrochemical properties to identify an appropriate range for the effective transport parameter of the CBD. The developed methodology is applied to two differently manufactured cathodes, namely an ultra-thick unstructured cathode and a two-layer cathode with varying CBD content in both layers. This comparison elucidates the impact of a specific structuring concept on the 3D microstructure of cathodes.
Introduction
Because of their outstanding energy density, low self-discharge rate and high power density, lithium-ion batteries are the most widely used technology for storing electrical energy [1]-[4]. However, further optimization of the performance is necessary due to the continuously growing requirements for electric vehicles and a general need for reducing carbon dioxide emissions to mitigate global warming [5], [6]. Since it is well known that the 3D microstructure of battery electrodes strongly influences the resulting electrochemical performance [7]-[12], tailoring the morphology of the 3D microstructure by specifically developed structuring concepts seems to be a promising approach. Obviously, the manufacturing process, which consists among other steps of mixing [13], [14], drying [15], [16] and calendering [17]-[19], has a significant impact on the electrode morphology [20]. Although the carbon-binder domain (CBD) is regarded as a passive constituent of the electrode, its spatial distribution is particularly crucial for the resulting electrochemical properties of cathodes [13], [21]-[23] and anodes [24], [25]. Thus, the segmentation of tomographic image data into three phases, namely active material, CBD and pores, is necessary to adequately describe the 3D microstructure of battery electrodes. On the one hand, a high resolution of 3D image data down to the nanometer scale, which can be achieved by FIB-SEM tomography, enables the application of segmentation techniques that distinguish between these three phases. However, FIB-SEM tomography provides only a small field of view, such that the resulting 3D image of the electrode is often not sufficiently representative. On the other hand, X-ray based imaging techniques such as synchrotron tomography allow for a non-destructive measurement of a comparatively large cutout of the electrode. The technique has been applied successfully for the analysis of a wide range of electrode materials including transition metal oxides [26]-[28], lithium-iron phosphates [29] as well as organic active materials [30]. However, the contrast between CBD and pores is comparatively low in many cases, such that a frequently used approach is to segment only the active material and its complement, see [29], [31]-[34]. Several studies then use modeling approaches for inserting the CBD in a subsequent step, see [19], [35]-[37].
In the present paper, we consider four conceptually different data-driven approaches to reconstruct the microstructure of two differently manufactured cathodes using tomographic image data. While in [37], the CBD is virtually included based on different geometric models for a given segmentation of active material, the novelty of the present paper consists of the quantitative comparison between data-driven three-phase reconstructions. These segmentation approaches include global thresholding, k-means clustering, machine learning trained by correlative microscopy, and a reconstruction based on EDX data. This comparison elucidates the impact of different segmentation approaches on morphological and electrochemical properties of the resulting electrode microstructures. Moreover, we determine the effective transport parameter of the CBD for each segmentation approach by validating the output of spatially-resolved half-cell simulations with experimentally determined electrochemical data. This approach allows us to specify a range in which the effective transport parameter is located. Thereby, the presented approach takes the important aspect of uncertainty during the reconstruction process [38] into account when analyzing the microstructure of battery electrodes based on 3D image data. This paper is organized as follows. In Section 2, we describe the manufacturing process of two different cathodes as well as the tomographic imaging procedure. Next, we present four different approaches of segmenting active material, CBD and pores from 3D image data in Section 3. The computation of electrochemical properties by spatially-resolved numerical simulations is described in Section 4. In Section 5, the influence of the different trinarization approaches on the 3D microstructure is quantitatively investigated by means of statistical image analysis. In addition, we present results regarding simulated electrochemical properties, where a particular focus is put on the effective transport parameter of the carbon-binder domain, which is fitted via experimentally determined lithiation curves. Finally, the paper is concluded with a summary of the main results and an outlook to possible future research.
Experimental
In this section, manufacturing, material composition as well as the tomographic imaging of the cathode materials considered in the present paper are described.
Materials and cathode manufacturing
We investigate two different cathode samples, the 3D microstructure of which is quantitatively characterized based on different segmentation approaches. Moreover, an additional electrode is considered, which is solely used for the trinarization approach based on correlative microscopy in Section 3. In the following, we describe four different suspensions, denoted by A, B, C and D, which were used to manufacture these samples. Note that one of the electrodes is a two-layer electrode, where the two layers are prepared with different suspensions. All suspensions share the underlying materials, but differ with regard to their composition.
Commercially available LiNi 0.6 Co 0.2 Mn 0.2 O 2 (BASF), shortly denoted by NMC, was mixed and dispersed with carbon black (SuperP, Imerys) and graphite (SFG6L, Imerys) as conducting additive and polyvinylidene fluoride (PVdF, Solvay Solexis) as a binder, where the union of carbon black, graphite and binder forms the carbon-binder domain (CBD). N-methyl-2-pyrrolidone (NMP, Sigma Aldrich) was used as a solvent. Note that all materials were utilized as delivered without further treatment. Because two suspensions were needed simultaneously for the manufacturing of the two-layer electrode, different mixers applying the same working principle were used for the preparation of the cathode suspensions. To be precise, a 10 dm 3 planetary mixer (Netzsch, Germany) and a 1.6 dm 3 planetary mixer (Grieser, Germany) were used. Both mixers were equipped with two agitators, a cross-bar stirrer (CS) and a butterfly stirrer (BS) running at low and high speed, respectively. In the case of the 10 dm 3 mixer, an axially double butterfly stirrer was used while the 1.6 dm 3 mixer contained a single butterfly stirrer. Transport of the components into the mixing zone was ensured by a wall scraper rotating at slow speed. For each suspension, the solid material composition as well as the type of mixer used for the preparation are given in Table 1.
The suspensions were prepared starting from a binder solution containing 7 to 10 w% of PVdF which was dissolved in NMP at room temperature. First, carbon black and then graphite was added to the binder solution and dispersed, respectively. After that, NMC was added stepwise and dispersed after each addition. Finally, the viscosity of each suspension was adjusted for application by thinning with NMP. From the suspensions, ultra-thick electrodes were produced using a pilot line coating machine (LACOM, Germany). A single-layer electrode (abbreviated by SL) was prepared from the suspension A using a single slot die. A two-layer electrode (abbreviated by TL) was prepared by simultaneous slot die coating with a double slot die applying suspension B at the bottom and suspension C at the top. The suspensions were cast onto an aluminum foil (Korff, Switzerland). A drying oven with a total length of 8 m, separated into four drying stages, independently adjustable in temperature, was used for evaporation of the solvent. The belt speed was 0.8 m min −1 and the temperatures of the ovens were 50°C, for both electrodes. Note that the single-layer and the two-layer cathode share the following volume fractions: 59.54% active material, 11.54% CBD and 28.92% pore space.
In addition, a third cathode sample is considered, image data of which is solely used in Section 3 for establishing a trinarization approach based on correlative microscopy. This sample is manufactured with suspension D analogously to the single-layer and the two-layer cathode, except for a slower belt speed of 0.6 m min −1 and a slightly lower mass loading of 49.1 mg/cm 2 .
Tomographic imaging
First, we describe the imaging procedure of the single-layer as well as the two-layer cathode. The tomography measurements of these cathode samples have been conducted at the P05 beamline (Petra III, DESY, Germany) [39], [40]. More precisely, a monochromatic, nearly parallel X-ray beam is guided onto the rotating sample without the use of X-ray focusing optics. Behind the sample, the transmitted beam is detected with a setup consisting of a CdWO4 scintillator for X-ray to light conversion, an optical microscope and a CMOS camera. The samples have been measured with an energy of 28 keV to ensure an optimal image contrast, where a double crystal monochromator is used for energy selection. Both samples have been measured as close as possible to the scintillator screen to reduce phase contrast. During the tomography each sample was constantly rotated while 2401 images have been captured using a KIT CMOS camera (5120 × 3840 pixel) with an exposure time of 130 ms. Combined with the 10 times magnification optics this resulted in a voxel size of 0.642 µm. For the reconstruction the normalized data was denoised using a total variation minimization filter [41] and then reconstructed using the gridrec routine based on the filtered back projection [42]. Note that all subsequent results regarding the single-layer as well as the two-layer sample are based on three non-overlapping, equally sized cutouts, where the entire thickness is used in through-plane direction.
With regard to the third cathode sample, which is used for establishing the neural network approach based on correlative microscopy, imaging by synchrotron tomography as well as by FIB-SEM tomography has been carried out. First, synchrotron tomography has been conducted at the P05 beamline (Petra III, DESY, Germany) using the µ-CT setup. For the tomography, a beam energy of 25 keV was found to yield optimal transmission contrast. The energy was filtered using a double multilayer monochromator. The sample that was fixated on the translation/rotation stage was positioned 15 mm away from the CdWO 4 scintillator. Behind the scintillator the portion of the signal that has been transformed into visible light was magnified (10 times magnification) by the microscope optics and redirected into the camera system. A KIT CMOS camera equipped with a CMOSIS CMV 20000 sensor (5120 × 3840 pixel) was then used to capture the signal with an exposure time of 130 ms. The whole tomography consisted of 3000 projections, for ring artefact reduction a center of rotation variation protocol was used. The whole setup yielded a 0.642 µm raw pixel size. The synchrotron tomography was reconstructed using the P05 in-house reconstruction tools based on the filtered back projection algorithm. After reconstruction an additional non-local means denoising step was performed [43], [44].
The FIB-SEM tomography has been conducted at Helmholtz-Zentrum Berlin (HZB) using the ZEISS Crossbeam 340. For this purpose, the sample that previously was measured at P05 has been fixated on an aluminum sample holder. For better orientation on the sample, a first low resolution large scale surface scan was performed. The scan was then aligned with the 3D synchrotron tomography reconstruction using the SIFT algorithm [45]. Afterwards, using the synchrotron tomography, a suitable ROI has been selected for FIB-SEM tomography. For the FIB-SEM tomography, a Gallium ion milling source with 30 keV and 300 pA ion current was used. The Gemini electron gun was operated at 2 keV. For imaging the SE2 chamber detector with an image capture rate of 30 seconds per image was used. The pixel size was set to 10 nm. Finally, the 3D image data obtained by FIB-SEM tomography was manually aligned with the synchrotron tomography data set using Fiji/ImageJ [46].
In addition, 2D EDX data has been gathered for the local closing approach described in Section 3.4. For this purpose, cross sections of electrodes were prepared perpendicular to the electrode surface by a broad Ar + ion beam milling device (Hitachi IM4000Plus) at an accelerating voltage of 5 kV for 2-3 hours depending on the electrode thickness. A subsequent analysis of the electrode microstructure was conducted with scanning electron microscopy (SEM) using a LEO1530VP (Zeiss) equipped with a thermal field emission gun. To determine the locally resolved elemental distribution of fluorine, energy dispersive EDX spectroscopy (X-Max50, Aztec Advanced Software, Oxford Instruments) was used. Characteristic X-rays of fluorine were used as a measure for the spatial distribution of PVdF within the electrode.
Phase-based segmentation
This section covers four different approaches to reconstruct the 3D image data obtained by synchrotron tomography. Each of these trinarization methods is designed in such a way that the experimentally determined volume fractions of all three phases can be matched. However, it is not possible to resolve the inner structure of the CBD based on synchrotron image data since the resolution is too low. Thus, we assume that each CBD voxel contains an inner porosity, sometimes called nano-porosity, of 50%, which is close to inner porosities of 47% and 58% reported in [47] and [48], respectively. Finally, a voxel-based analysis is carried out to obtain a first impression about potential differences between the four segmentation approaches.
Global thresholding
To begin with, we consider the trinarization of 3D image data by two global thresholds [49]- [51], which are chosen in such a way that the experimentally determined volume fractions of all three phases are matched. For this purpose, we choose a sufficiently large sampling window, which does not contain void space outside the electrodes to avoid edge effects. The size of this cutout is given by 1500×900×250 voxels (two-layer cathode) and 1000 × 800 × 220 voxels (single-layer cathode), respectively. In the following, we refer to this approach as Thresholding. A visualization of the grayvalue histogram together with two thresholds as vertical lines is shown in Figure 2.
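A minimal sketch of such volume-fraction-matched thresholding is given below (our own illustration, not the implementation used for the paper); the two thresholds are simply grayvalue quantiles at the cumulative target volume fractions.

```python
import numpy as np

def trinarize_by_thresholds(gray, eps_pore, eps_cbd):
    """gray: 3D array of grayvalues; returns labels 0 = pore, 1 = CBD, 2 = active material."""
    t_low = np.quantile(gray, eps_pore)              # pore / CBD threshold
    t_high = np.quantile(gray, eps_pore + eps_cbd)   # CBD / active-material threshold
    labels = np.full(gray.shape, 2, dtype=np.uint8)
    labels[gray < t_high] = 1
    labels[gray < t_low] = 0
    return labels

# toy usage on synthetic data with the volume fractions quoted in Section 2.1
rng = np.random.default_rng(0)
gray = rng.normal(size=(50, 50, 50))
labels = trinarize_by_thresholds(gray, eps_pore=0.2892, eps_cbd=0.1154)
print([round(float(np.mean(labels == k)), 4) for k in range(3)])   # ~[0.289, 0.115, 0.595]
```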
Clustering approach
A further method for the segmentation of 3D image data representing three-phase materials is based on a hard clustering approach, such as k-means clustering with k = 3 [52]- [54]. In particular, this kind of unsupervised learning has been successfully applied to cathodes in lithium-ion batteries [55].
In the present paper, we slightly modify the algorithm considered in [55] in order to ensure that the experimentally determined volume fraction of each phase is matched. In general, each voxel will be classified based on the grayvalues in its 3 × 3 × 3 neighborhood. However, arranging these 27 values in a fixed order is not meaningful since, e.g., rotating or flipping the 3 × 3 × 3 neighborhood would significantly change the feature vector. To overcome this problem, we sort the grayvalues in ascending order. To additionally increase the information content of the feature vector, we further group the voxels in the local neighborhood by their distance to the currently considered voxel. Thus, the first entry of the feature vector contains the grayvalue of the current voxel, the next six entries correspond to the sorted grayvalues of the 6-neighborhood, the subsequent 12 entries belong to the voxels with distance √ 2 and the remaining 8 entries correspond to the voxels with distance √ 3. The i-th cluster C i with i ∈ {1, 2, 3} (corresponding to the three phases active material, CBD, and pores) is now given by
$C_i = \{ v_j : i = \operatorname{argmin}_{\ell = 1,2,3} \; w_\ell \cdot \sum_{m=1}^{27} x_m \cdot (f_j^{(m)} - \mu_\ell^{(m)})^2 \},$
where $v_j$ denotes the j-th voxel, $f_j = (f_j^{(1)}, \ldots, f_j^{(27)}) \in \mathbb{R}^{27}$ the corresponding feature vector, and $\mu_\ell = (\mu_\ell^{(1)}, \ldots, \mu_\ell^{(27)}) \in \mathbb{R}^{27}$ the centroid of cluster $\ell$ in the feature space. The phase weights $w_1, w_2, w_3 > 0$ as well as the feature weights $x_1, \ldots, x_{27} > 0$ can now be chosen in such a way that we match the experimentally determined volume fractions of each phase. For this purpose, we choose $w_1 = 1$ and $x_1 = 1$ as reference. Moreover, we further reduce the number of parameters which have to be optimized by assuming equal weights for voxels with the same distance to the currently considered voxel, i.e., we assume that $x_2 = \ldots = x_7$, $x_8 = \ldots = x_{19}$ and $x_{20} = \ldots = x_{27}$. The remaining weights are then chosen by minimizing $\sum_{\ell=1}^{3} (\varepsilon_{\ell,\mathrm{exp}} - \hat{\varepsilon}_\ell)^2$, where $\varepsilon_{\ell,\mathrm{exp}}$ denotes the experimentally determined volume fraction of phase $\ell$ and $\hat{\varepsilon}_\ell$ equals the volume fraction of phase $\ell$ estimated on the segmented 3D image data obtained by running the k-means algorithm. This optimization is carried out with Powell's BOBYQA algorithm [56]. Since the segmentation result depends on the initial cluster centroids [57], we initialize the active material cluster by the feature vector associated with the brightest voxel and the pore cluster by the one associated with the darkest voxel. The CBD cluster is initialized with the feature vector that is most similar to the average of the feature vectors of the initial active material and pore centroid.
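The following sketch illustrates the feature construction and the weighted assignment rule (boundary voxels, the centroid update and the BOBYQA-based weight optimization are omitted; function and variable names are ours).

```python
import numpy as np

OFFSETS = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
OFFSETS.sort(key=lambda o: o[0]**2 + o[1]**2 + o[2]**2)   # center, then 6-, 12-, 8-neighbours

def feature_vector(gray, x, y, z):
    """27-dimensional feature of an interior voxel: grayvalues grouped by distance and sorted."""
    vals = np.array([gray[x + dx, y + dy, z + dz] for dx, dy, dz in OFFSETS])
    f = np.empty(27)
    for start, stop in ((0, 1), (1, 7), (7, 19), (19, 27)):   # the four distance groups
        f[start:stop] = np.sort(vals[start:stop])
    return f

def assign_cluster(f, centroids, w, x):
    """centroids: array of shape (3, 27); w: per-phase weights; x: per-feature weights."""
    cost = [w[l] * np.sum(x * (f - centroids[l])**2) for l in range(3)]
    return int(np.argmin(cost))
```

Sorting within each distance group makes the feature invariant under rotations and reflections of the 3 × 3 × 3 neighborhood, which is the motivation given in the text.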
Neural network
In order to train a neural network that classifies each voxel according to the grayvalues in the synchrotron images, we make use of correlative microscopy. More precisely, a small cutout of the electrode has been imaged by FIB-SEM tomography after measuring the whole electrode sample by synchrotron tomography as described in Section 2.2. This approach relies on the fact that a three-phase reconstruction of 3D FIB-SEM data is possible due to the better contrast compared to image data obtained by synchrotron tomography. More precisely, a global threshold determined by Otsu's method is used to segment the active material [58], whereas a U-Net is trained to distinguish between pores and CBD [59]. Finally, a slicewise flood-filling algorithm has been applied to the active material phase in order to remove inclusions of CBD or pores [50], [51]. Due to the different voxel sizes of both kinds of image data, each synchrotron voxel corresponds to 128 × 128 × 128 voxels in the FIB-SEM data. Thus, we can compute the material composition -i.e. a three-dimensional vector containing the volume fractions of active material, CBD and pore space -for each synchrotron voxel, for which FIB-SEM data is available. This information serves as ground truth for training a feed-forward neural network, which uses the gray values of an input voxel and its 5 × 5 × 5 neighborhood. The neural network is a multilayer perceptron consisting of five hidden layers with 75 units each and a softmax output layer with three units representing the predicted material composition of the input voxel [60], [61].
Since the physical size of the FIB-SEM cutout is comparatively small (only 2541 voxels as training data), we make use of a data augmentation for the training data, where we flip and/or rotate the 5 × 5 × 5 neighborhood. Since these kind of transformations do not change the material composition, we increase the size of the training data by a factor of 48, which corresponds to the number of elements of the symmetry group of a hexahedron [62]. The data points are randomly shuffled and split into 60% training data, 20% validation data and 20% test data. The validation data is used for early stopping in case of ten subsequent epochs with a non-decreasing error on the validation set. The network consists of 5 hidden layers with 75 nodes each [60], [61]. The mean squared error, which is used as loss function, has been optimized using Nesterov's accelerated stochastic gradient descent [63] with a learning rate of 0.01 and a momentum coefficient of 0.99. After training the network is applied to the synchrotron image data of the single-layer as well as the two-layer sample, respectively. For each sample, this results in a 3D image, where for each voxel the material composition is predicted. This kind of information can be either interpreted as fuzzy membership or as probability of belonging to a certain phase [64], [65]. The top left plot in Figure 2 shows the prediction accuracy on the test set of the trained neural network for each of the three phases, which indicates that the material composition can be reliably predicted.
In order to transform the output of the neural network into a segmentation with three classes, we consider two procedures. The first approach relies on the experimentally determined material composition as well as on a predefined ordering of the three phases, denoted by P 1 , P 2 and P 3 . More precisely, we assign the voxels with the highest predicted probability of belonging to phase P 1 to P 1 until the target volume fraction of P 1 is matched. This procedure is then repeated for P 2 , except that we no longer consider voxels already classified as P 1 . In the following, this approach will be abbreviated as NN-P 1 -P 2 -P 3 with P 1 , P 2 , P 3 ∈ {AM,CBD,P}. For example, first segmenting the active material, then assigning the CBD leads to the trinarization NN-AM-CBD-P. The second possibility for transforming the material composition by the neural network to a trinarization is based on conditional probabilities, where the first phase P 1 is obtained analogously to the first approach. However, we then compute the conditional probabilities of voxels belonging to P 2 and P 3 conditioned on the event that these voxels are not classified as P 1 . Since these two conditional probabilities add up to one, there is -given that the phase P 1 is fixed -exactly one possibility to obtain a trinarization, which matches the experimentally determined material composition. This trinarization method will be denoted by NN-P 1 -Cond in the following. For example, first classifying the active material and then assigning the CBD and pore space based on the conditional probability, that a certain voxel is not classified as active material, leads to the trinarization NN-AM-Cond. In total, there exist six different orderings of the three phases required for the first approach, as well as three different trinarizations based on the conditional probability approach, leading to nine different neural network segmentations.
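The two conversion rules can be summarized by the following sketch (an illustration with hypothetical function names, operating on a flattened array of predicted per-voxel compositions; the softmax output is assumed to be strictly positive).

```python
import numpy as np

def trinarize_ordered(probs, targets, order=(2, 1, 0)):
    """NN-P1-P2-P3 variant: probs has shape (n_voxels, 3); targets are volume fractions."""
    n = probs.shape[0]
    labels = np.full(n, -1, dtype=int)
    for phase in order[:2]:                                   # the last phase takes the rest
        free = np.where(labels < 0)[0]
        k = int(round(targets[phase] * n))
        chosen = free[np.argsort(-probs[free, phase])[:k]]    # most likely unassigned voxels
        labels[chosen] = phase
    labels[labels < 0] = order[2]
    return labels

def trinarize_conditional(probs, targets, first=2, second=1, third=0):
    """NN-P1-Cond variant: fix P1, then rank the rest by P(P2 | not P1)."""
    n = probs.shape[0]
    labels = np.full(n, third, dtype=int)
    idx1 = np.argsort(-probs[:, first])[:int(round(targets[first] * n))]
    labels[idx1] = first
    free = np.setdiff1d(np.arange(n), idx1)
    cond = probs[free, second] / (probs[free, second] + probs[free, third])
    labels[free[np.argsort(-cond)[:int(round(targets[second] * n))]]] = second
    return labels
```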
Local closing based on EDX data
Similar to [13], 2D image data obtained by energy-dispersive X-ray spectroscopy (EDX) is used to estimate the corresponding CBD gradient along the transport direction, which is then fitted by a linear function, see Figure 1. The first step to obtain a 3D segmentation that reflects the linear CBD gradient is to use the active material obtained by the k-means segmentation. Afterwards, the CBD is inserted by a morphological closing of the active material phase, where the structuring element is given by a ball with some location-dependent radius r > 0 [66], [67]. Note that it has been shown in [37] that using a morphological closing is an appropriate model for inserting the CBD. As described in [13], the closing radius r depends on the distance to the separator such that the slice-dependent amount of CBD is proportional to the estimated CBD gradient, where the known CBD volume fraction is matched by multiplying the EDX intensity values by a constant that is computed with the bisection method [68]. In the following, we refer to this approach as EDX-Closing.
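A heavily simplified sketch of this construction is given below; it works slice-wise in 2D with a disk-shaped structuring element instead of the 3D ball used in the paper, and it replaces the bisection by a direct rescaling of the per-slice CBD targets, so it should be read as an illustration of the idea rather than as the authors' procedure.

```python
import numpy as np
from scipy.ndimage import binary_closing
from skimage.morphology import disk

def insert_cbd(am, edx_profile, cbd_budget, r_max=10):
    """am: 3D bool array of active material, last axis = through-plane direction;
    edx_profile: fitted (linear) EDX intensity per slice; cbd_budget: total CBD voxels."""
    targets = edx_profile / edx_profile.sum() * cbd_budget   # per-slice CBD voxel targets
    cbd = np.zeros_like(am)
    for z in range(am.shape[2]):
        grown = np.zeros_like(am[:, :, z])
        for r in range(1, r_max + 1):                        # smallest radius reaching the target
            closed = binary_closing(am[:, :, z], structure=disk(r))
            grown = closed & ~am[:, :, z]
            if grown.sum() >= targets[z]:
                break
        cbd[:, :, z] = grown
    return cbd
```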
Voxel-based comparison of trinarization approaches
Before investigating the influence of the different trinarization approaches on morphological and electrochemical properties in Section 5, we perform a quantitative voxel-based analysis to obtain a first impression regarding the potential differences between the segmentation approaches described above. Before analyzing morphological characteristics of the resulting 3D microstructures in Section 5.1, we first quantify the difference between the presented three-phase reconstructions by the fraction of equally assigned voxels as well as the Jaccard index [69], see Figure 3. Both measures take values between zero and one, where lower values correspond to more pronounced differences between two trinarizations. In the present setting, the Jaccard index compares the spatial distribution of a predefined phase between two different trinarizations by computing the ratio of the intersection volume and the volume of the union. Note that the fraction of equally assigned voxels as well as the Jaccard index corresponding to a certain phase are symmetric characteristics such that the entries below the main diagonal in Figure 3 contain the information regarding the single-layer cathode, whereas the entries above the main diagonal correspond to the two-layer cathode. On the one hand, the top left plot shows that there exist non-negligible differences between the neural network approaches, i.e., the method for converting the output of the neural network to a trinarization has an influence on the resulting three-phase reconstruction. On the other hand, there are even more pronounced differences between the neural network trinarizations and the remaining three approaches, namely k-means, EDX-Closing and Thresholding. In addition, the remaining three plots in Figure 3 indicate that the least differences between the trinarization approaches are observed with regard to the segmentation of active material, which is most likely caused by the high contrast between active material and the remaining two phases. Furthermore, there are negligible differences between the single-layer and the two-layer cathode, except for the trinarization obtained by global thresholding.
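The two comparison measures are straightforward to compute; the following sketch (our own helper functions) assumes two label arrays in which the integer values encode the three phases.

```python
import numpy as np

def equal_fraction(labels_a, labels_b):
    """Fraction of equally assigned voxels between two trinarizations."""
    return float(np.mean(labels_a == labels_b))

def jaccard(labels_a, labels_b, phase):
    """Jaccard index of a given phase: |intersection| / |union| of its voxel sets."""
    a, b = labels_a == phase, labels_b == phase
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0
```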
Simulation of electrochemical properties
The electrochemical simulations are conducted using the research branch of the framework BEST, which is developed in collaboration between the DLR Institute of Engineering Thermodynamics and the Fraunhofer Institute for Industrial Mathematics (ITWM) 1 . Focus of this work is on the influence of the CBD on electrochemical reactions and transport. Therefore, we will describe our CBD model and assumptions Note that the entries above the main diagonal correspond to the two-layer cathode, whereas the entries below the main diagonal refer to the singlelayer cathode. Due to the high accordance with regard to the spatial distribution of active material, the corresponding color bar only ranges from 0.6 to 1.
in more detail in subsequent paragraphs. A derivation of the governing equations and a description of our numerical framework can be found in previous publications [37], [70], [71]. To provide a systematic overview of the electrochemical simulation approach we summarize the model equations, boundary conditions, initial conditions and parameters in the supporting information. More specifically, the governing equations in the different phases are listed in Table S2. Interface and boundary conditions are given in Table S3. Interface models between active materials and electrolyte are listed in Table S4.
As described in the previous section, the 3D image data of both cathodes is segmented into three distinct phases, namely cathode active material, CBD and porosity. However, the inner structure of the CBD cannot be resolved by means of synchrotron tomography, which has also been discussed at the beginning of Section 3. Therefore, the CBD in our simulations on the electrode scale actually contains two materials, namely, the solid carbon and binder matrix as well as liquid electrolyte. Similarly, the porous separator contains both the glass-fiber material and liquid electrolyte. In our simulations we do not resolve the actual microstructure of these materials. We rather use a homogenization approach [72] to simulate the effective transport through these mixed domains. This approach is computationally much more efficient and enables simulations on the cell scale; however, it requires additional input parameters for our model. The relevant transport coefficients which need to be corrected due to the internal microstructure of the materials are the diffusion coefficient of lithium in the electrolyte (D_e) and the ionic and the electronic conductivity (κ_e and σ_s) of the electrolyte and solid phase, respectively. We determine the effective transport parameters based on the concept of effective tortuosity using the general expression given by Equation 1.
$X_p^{d,\mathrm{eff}} = \gamma_p^d \cdot X_p^{\mathrm{bulk}}$ with $X \in \{D, \sigma, \kappa\}$    (1)
The effective transport parameter $X_p^{d,\mathrm{eff}}$ is defined for a phase $p$, which can be electrolyte ($e$) or solid ($s$), in a domain $d$, which is either the CBD or the separator. The effective parameter $\gamma_p^d$ is defined using the respective volume fraction $\varepsilon_p^d$ and the effective tortuosity $\tau_p^d$ by
$\gamma_p^d = \dfrac{\varepsilon_p^d}{\tau_p^d}$ with phase $p \in \{e, s\}$ and domain $d \in \{\mathrm{CBD}, \mathrm{Sep}\}$.    (3)
We assume that the inner porosity of the CBD is equal to 50%. Hence, the effective tortuosity of the electrolyte part of the CBD, $\tau_e^{\mathrm{CBD}}$, can be computed from $\gamma_e^{\mathrm{CBD}}$ using the relationship
$\tau_e^{\mathrm{CBD}} = \dfrac{\varepsilon_e^{\mathrm{CBD}}}{\gamma_e^{\mathrm{CBD}}} = \dfrac{1}{2 \cdot \gamma_e^{\mathrm{CBD}}}$.    (4)
The effective tortuosities of the solid part of the CBD and of the electrolyte part of the separator are computed likewise. The electrochemical parameters used in the simulations within this paper are listed in the Supporting Information, see Table S5.
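The following sketch illustrates how Equations (1), (3) and (4) can be evaluated; the bulk value used in the example is a placeholder and not one of the parameters of Table S5.

```python
# Hedged sketch of Equations (1), (3) and (4); the bulk value below is a placeholder.
def gamma(eps: float, tau: float) -> float:
    """Effective parameter gamma = eps / tau of a phase in a given domain (Equation (3))."""
    return eps / tau

def effective(x_bulk: float, gamma_value: float) -> float:
    """Effective transport parameter X_eff = gamma * X_bulk (Equation (1))."""
    return gamma_value * x_bulk

def tau_cbd_electrolyte(gamma_cbd: float, inner_porosity: float = 0.5) -> float:
    """Effective tortuosity of the electrolyte part of the CBD (Equation (4))."""
    return inner_porosity / gamma_cbd

# Example: gamma = 0.12 with 50% inner porosity gives tau of roughly 4.2,
# the value reported later for the EDX-Closing trinarization of the two-layer electrode.
print(tau_cbd_electrolyte(0.12))                    # approx. 4.17
print(effective(x_bulk=1.0e-6, gamma_value=0.12))   # placeholder bulk diffusivity
```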
In the previous paragraph we provided a qualitative description of the influence of the porous phases on the transport phenomena. Additionally, these porous materials also have an impact on the reactive surface area effective at the interface with the active material. At interfaces where the active material is in contact with a porous electrolyte domain, we multiply the intercalation current with the porosity of the electrolyte phase. In the case of the interface between the CBD domain and the active material domain, the reaction current is given by Equation (5).
$i_{\mathrm{react}}^{\mathrm{CBD-AM}} = i_{\mathrm{react}} \cdot \varepsilon_e^{\mathrm{CBD}}$    (5)
The list of all interface conditions can be found in Table S3. The simulation domains for the lithiation as well as the symmetrical impedance simulations are shown in Figure 4. Three different cutouts of the electrode tomography are used as simulation domain for each trinarization approach and electrode type. The trinarized 3D microstructures are cropped to a lateral size of 200 voxels for the electrochemical simulations due to computational constraints. This modification keeps the thickness of the electrode and the areal capacity unchanged. Impedance spectra are calculated using the step excitation method; details of the approach are provided in [37]. All electrochemical simulations are conducted using the HPC resources of JUSTUS 2.
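A minimal sketch of how such laterally cropped cutouts could be extracted from a trinarized volume is given below; the axis convention (thickness along the last axis) and the cutout positions are assumptions for illustration and do not reproduce the exact cutouts used in this work.

```python
import numpy as np

def lateral_cutouts(volume: np.ndarray, size: int = 200, n: int = 3):
    """Extract n non-overlapping cutouts of lateral size `size` x `size` voxels,
    keeping the full electrode thickness (assumed to be the last axis)."""
    nx, ny, nz = volume.shape
    cutouts = []
    for k in range(n):
        x0 = k * size
        if x0 + size > nx or size > ny:
            break  # volume too small for further non-overlapping cutouts
        cutouts.append(volume[x0:x0 + size, :size, :])  # thickness axis untouched
    return cutouts

# Example with a dummy trinarized volume of 650 x 650 x 300 voxels
dummy = np.zeros((650, 650, 300), dtype=np.uint8)
print([c.shape for c in lateral_cutouts(dummy)])  # three (200, 200, 300) cutouts
```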
Results and discussion
This section covers the quantitative analysis of the different trinarization approaches with regard to their morphological properties by means of statistical microstructure analysis as well as the resulting electrochemical behaviour based on spatially and temporally-resolved numerical simulations.
Influence of selected trinarization approach on morphological descriptors
In this section, we discuss the influence of the different trinarization approaches described in Section 3 on the morphology of the resulting three-phase microstructures. For the sake of clarity, we only discuss the three trinarizations corresponding to the conditional probability approach, whereas the results for the remaining six neural network trinarizations can be found in the Supporting Information. Considering the 2D slices in Figure 2, one can already observe visual differences with regard to the morphological properties of the three phases. The CBD-phase determined by morphological closing based on EDX data is accumulated around the active material, which in turn leads to the formation of relatively large pores. In this respect, the approach differs clearly from the other approaches. On the other hand, the neural network approach results in a finely structured pore space. Moreover, by visual inspection it is hard to detect differences between the segmentation based on global thresholds and the one obtained by k-means clustering. Recall from Section 3 that all segmentation approaches are calibrated such that the volume fractions of active material and CBD-phase coincide with the experimentally determined values. In order to quantitatively evaluate the different trinarization approaches, we consider several microstructure characteristics for each of the three phases, which are considered as random closed sets [73], denoted by $\Xi_{\mathrm{AM}}$, $\Xi_{\mathrm{CBD}}$ and $\Xi_{\mathrm{P}}$.
We begin with the surface area per unit volume. This quantity is estimated from voxelized 3D image data as described in [74]. Besides the surface area per unit volume of each phase, denoted by $S_{\mathrm{AM}}$, $S_{\mathrm{CBD}}$ and $S_{\mathrm{P}}$, see Table 2, the surface area per unit volume of the interface between active material and the pore space is of interest from an electrochemical point of view, since the intercalation takes place at this surface. Due to the inner porosity of the CBD, this characteristic, denoted by $S_{\mathrm{Int}}$, is given by $S_{\mathrm{Int}} = S_{\mathrm{AM,P}} + 0.5 \cdot S_{\mathrm{AM,CBD}}$. For this purpose, the surface area per unit volume of the interface between two phases is computed as described in [75]. Interestingly, the surface area per unit volume of all three phases does not depend on the underlying trinarization approach. Thus, there are only minor differences between the values of $S_{\mathrm{Int}}$.
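For illustration, a crude estimate of these interface areas can be obtained by counting shared voxel faces, as sketched below; this face-counting approximation systematically overestimates the area of smooth interfaces and is not the estimator of [74], [75] used for Table 2. The phase encoding (0 = pores, 1 = CBD, 2 = active material) is again an assumption.

```python
import numpy as np

def interface_area_faces(labels: np.ndarray, phase_i: int, phase_j: int, voxel_size: float) -> float:
    """Crude estimate of the interface area per unit volume between two phases,
    obtained by counting shared voxel faces along the three coordinate axes."""
    a = labels == phase_i
    b = labels == phase_j
    faces = 0
    for axis in range(3):
        lo = [slice(None)] * 3
        hi = [slice(None)] * 3
        lo[axis] = slice(0, -1)    # voxels 0 .. n-2 along this axis
        hi[axis] = slice(1, None)  # voxels 1 .. n-1 along this axis
        faces += np.sum(a[tuple(lo)] & b[tuple(hi)]) + np.sum(b[tuple(lo)] & a[tuple(hi)])
    total_volume = labels.size * voxel_size ** 3
    return faces * voxel_size ** 2 / total_volume

def s_int(labels: np.ndarray, voxel_size: float) -> float:
    """S_Int = S_{AM,P} + 0.5 * S_{AM,CBD}, with assumed phase codes 0 = pores, 1 = CBD, 2 = AM."""
    return (interface_area_faces(labels, 2, 0, voxel_size)
            + 0.5 * interface_area_faces(labels, 2, 1, voxel_size))
```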
Additionally, the microstructure descriptors $r_{\max}$ and $r_{\min}$ are given in Table 2, where $r_{\max}$ denotes the 50%-quantile of the so-called continuous pore size distribution. Similarly, $r_{\min}$ denotes the 50%-quantile of a phase size distribution obtained by a geometric simulation of mercury intrusion and can be considered as the radius of the typical bottleneck. By means of $r_{\max}$ and $r_{\min}$, the constrictivity $\beta = r_{\min}^2 / r_{\max}^2 \in [0, 1]$ can be defined, which is a measure for the strength of bottleneck effects and a meaningful characteristic for effective transport properties [76]-[78]. With respect to these microstructure descriptors, formally defined in [79], clear differences between the considered trinarization approaches can be observed, whereas there are no significant differences between the single-layer and the two-layer cathode. In particular, Figure 5 shows that EDX-Closing leads to significantly larger pores, which in turn leads to the largest value of $r_{\max}$. Furthermore, k-means and Thresholding lead to nearly identical continuous phase size distributions for all three phases, whereas the neural network trinarizations differ from each other with regard to the continuous phase size distribution of the CBD as well as the pores. It is also interesting to note that, with regard to the CBD as well as the pore space, the neural network segmentation based on conditioning on the respective phase leads to larger clusters of this phase.

With regard to the simulated mercury intrusion porosimetry, see Figure 6, we observe that the curves corresponding to the CBD as well as the pores are prone to discretization errors. Considering the active material, there are only slight differences, which in turn leads to similar values of $r_{\min}$. Furthermore, the approach based on EDX data is the only case where clear differences between the single-layer and the two-layer cathode can be observed. These differences are quantified by means of the simulated mercury intrusion porosimetry of the pore space. In Figures 5 and 6, the curves corresponding to the segmentation approaches based on correlative microscopy are shifted to the left compared to the remaining three-phase reconstructions when considering the pore space.

Moreover, the distribution of geodesic tortuosity is considered. This is a purely geometric quantity, in contrast to the effective tortuosity considered in Section 4, providing the distribution of the lengths of shortest paths through a predefined phase of the electrode divided by the thickness of the electrode, see [79] for a formal definition. Note that different concepts of tortuosity exist in the literature [80]-[83], where in the case of geodesic tortuosity Dijkstra's algorithm is used to estimate this quantity from voxelized image data [84]. As shown in Figure 7, the distribution of geodesic tortuosity of the active material neither depends on the selected trinarization approach nor on the considered cathode sample. In contrast, the lengths of shortest paths through the CBD as well as the pore space are larger for the trinarizations obtained by the neural networks compared to the remaining three segmentation approaches. These differences between the four trinarization approaches considered in this paper are stronger than the differences between the single-layer and the two-layer cathode.
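A simplified sketch of the geodesic tortuosity computation is given below; it restricts shortest paths to 6-connected steps of unit length, i.e., Dijkstra's algorithm with uniform edge weights implemented as a breadth-first search, whereas the estimator used in this work [84] also accounts for diagonal steps. The assumption that the through-thickness direction is the last array axis is made for illustration only.

```python
import numpy as np
from collections import deque

def geodesic_tortuosity(phase: np.ndarray) -> np.ndarray:
    """Shortest-path lengths through a binary phase (True = phase of interest) from the
    inlet face (z = 0) to the opposite face, using 6-connected unit steps (breadth-first
    search). Returns path length divided by electrode thickness for all connected outlet voxels."""
    nx, ny, nz = phase.shape
    dist = np.full(phase.shape, -1, dtype=np.int32)
    queue = deque()
    for ix in range(nx):
        for iy in range(ny):
            if phase[ix, iy, 0]:
                dist[ix, iy, 0] = 0
                queue.append((ix, iy, 0))
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in steps:
            u, v, w = x + dx, y + dy, z + dz
            if 0 <= u < nx and 0 <= v < ny and 0 <= w < nz and phase[u, v, w] and dist[u, v, w] < 0:
                dist[u, v, w] = dist[x, y, z] + 1
                queue.append((u, v, w))
    reached = dist[:, :, nz - 1] >= 0
    return dist[:, :, nz - 1][reached] / float(nz - 1)
```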
In addition, the centered two-point coverage probability function is considered, see Figure 8. For stationary and isotropic random closed sets $\Xi_i, \Xi_j$ in the three-dimensional Euclidean space $\mathbb{R}^3$ with $i, j \in \{\mathrm{AM}, \mathrm{CBD}, \mathrm{P}\}$, this characteristic is defined by $C_{ij}(r) = P(0 \in \Xi_i, x \in \Xi_j) - \varepsilon_i \varepsilon_j$ for any $x \in \mathbb{R}^3$ and $r = |x| \geq 0$, where $\varepsilon_i$ and $\varepsilon_j$ denote the volume fractions of $\Xi_i$ and $\Xi_j$, respectively. This function is also called covariance function in the literature [66], [85]. Due to the normalization by subtracting the product of the volume fractions, a value of zero implies that the events $0 \in \Xi_i$ and $x \in \Xi_j$ are stochastically independent. Positive values of $C_{ij}(r)$ can be interpreted as a positive correlation between those two events, whereas negative values correspond to a negative correlation. Typically, choosing equal phases (i.e., $i = j$, see top row of Figure 8) leads to a monotonously decreasing function taking non-negative values, which approaches zero for large radii $r$. On the other hand, considering two different phases (i.e., $i \neq j$, see bottom row of Figure 8) leads in most cases to a monotonously increasing function approaching zero from below. Figure 8 shows that there are no differences between both samples regardless of the phases under consideration. Furthermore, the curves in the top row of Figure 8 show the same qualitative behavior as the continuous phase size distributions in Figure 5. The most noticeable effect is the unique behaviour of the closing approach based on EDX data with regard to the bottom right plot in Figure 8. More precisely, the remaining segmentation approaches show a peak at around 2 µm, which corresponds to an increased likelihood of observing CBD and pores 2 µm away from each other, whereas the curves corresponding to EDX-Closing show a steadily increasing two-point coverage probability function instead.
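A straightforward estimator of the centered two-point coverage probability function, restricted to lags along the three coordinate axes and averaged over them (justified by the isotropy assumed in the definition above), could look as follows; phase codes and array names are illustrative assumptions.

```python
import numpy as np

def centered_two_point(labels: np.ndarray, phase_i: int, phase_j: int, r_max: int) -> np.ndarray:
    """Estimate C_ij(r) = P(0 in Xi_i, x in Xi_j) - eps_i * eps_j for lags r = 0..r_max,
    averaging the empirical pair probability over shifts along the three coordinate axes."""
    a = (labels == phase_i).astype(np.float64)
    b = (labels == phase_j).astype(np.float64)
    eps_i, eps_j = a.mean(), b.mean()
    c = np.zeros(r_max + 1)
    for r in range(r_max + 1):
        vals = []
        for axis in range(3):
            n = labels.shape[axis]
            if r >= n:
                continue
            sl_a = [slice(None)] * 3
            sl_b = [slice(None)] * 3
            sl_a[axis] = slice(0, n - r)
            sl_b[axis] = slice(r, n)
            vals.append(np.mean(a[tuple(sl_a)] * b[tuple(sl_b)]))
        if not vals:
            break
        c[r] = np.mean(vals) - eps_i * eps_j
    return c
```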
Finally, we consider the volume fraction of each phase as a function of the distance to the separator, see Figure 9. With respect to the spatial distribution of active material, there is a clear difference between the single-layer and the two-layer cathode regardless of the trinarization approach. More precisely, the two-layer sample shows a pronounced drop of the volume fraction of active material at 80 µm, i.e., at the interface between both layers. With regard to the CBD, there are clear differences between the results obtained for each of the trinarization approaches, where all three-phase reconstructions except EDX-Closing indicate a larger amount of CBD at the interface. This peak is most pronounced for k-means and Thresholding. Obviously, EDX-Closing reflects the linear gradient estimated from EDX data. Note that this linear gradient is estimated from a single 2D EDX image and is thus subject to a larger uncertainty compared to the information extracted from 3D image data. Therefore, segmentation approaches not reflecting the linear gradient observed in EDX data are not automatically considered unrealistic. Interestingly, the linear CBD gradient of EDX-Closing does not lead to a linear behaviour of the distance-dependent porosity. Except for EDX-Closing, there are comparatively small differences between both samples, see the plots on the right-hand side of Figure 9.
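Such distance-resolved volume fractions can be obtained by a simple slice-wise average, as sketched below under the assumption that the through-thickness direction is the last array axis and that slice 0 faces the separator.

```python
import numpy as np

def volume_fraction_profile(labels: np.ndarray, phase: int, voxel_size: float):
    """Slice-wise volume fraction of a phase as a function of the distance to the separator."""
    profile = np.array([np.mean(labels[:, :, z] == phase) for z in range(labels.shape[2])])
    distance = np.arange(labels.shape[2]) * voxel_size  # distance to the separator
    return distance, profile
```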
Influence of selected trinarization on electrochemical properties
The influence of the selected trinarization approach on the electrochemical simulations is investigated using half-cell lithiation simulations and symmetrical impedance simulations. The only free parameter used to achieve a good agreement between experiments and simulations is the effective transport parameter $\tau^{\mathrm{CBD}}$ within the electrolyte part of the CBD. Relevant transport mechanisms in a thick NMC electrode are the electronic conduction through the solid phase and the lithium transport through the electrolyte. Both quantities strongly depend on the distribution and morphology of the CBD. In particular, the electrolyte transport depends on the local effective tortuosity in the CBD. The electronic conductivity depends both on the conductive network of the CBD and the conductivity of the active material, where the latter additionally depends on the state of charge. However, at larger CBD contents, losses due to electronic transport are minor compared to transport losses in the electrolyte. Therefore, we use the lithiation simulation at a current of 6 mA/cm² to identify the local effective tortuosity of the CBD that leads to the best agreement between experiment and simulation. Note that at low CBD contents this assumption can become invalid and the contributions of the two processes cannot be deconvoluted unambiguously. The best matching effective transport parameters are identified for both electrode types (single-layer cathode and two-layer cathode) with three cutouts each and all trinarization approaches except for NN-CBD-P-AM and NN-P-CBD-AM. In previous studies we have shown that the EDX-Closing trinarization is able to provide a reasonable agreement between electrochemical measurements and simulations [13], [86]. Figure 10a visualizes the impact of the effective γ-parameter on the lithiation simulation for the EDX-Closing trinarizations of the two-layer electrode in comparison to the experimental results. Lithiation curves with a current of 6 mA/cm², which serve as the target for our parameter optimization, are highlighted by the green symbols.
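The parameter identification described above can be summarized as a one-dimensional scan over candidate values of γ, as sketched below. The function run_lithiation_simulation is a hypothetical placeholder for a call into the simulation framework, which does not expose a Python interface of this form, and the selection criterion based on the achievable capacity at 6 mA/cm² is a simplification of the visual comparison of the full voltage curves.

```python
import numpy as np

def select_best_gamma(gamma_candidates, experimental_capacity, run_lithiation_simulation):
    """Scan candidate effective transport parameters gamma for the CBD and pick the one whose
    simulated capacity at 6 mA/cm^2 is closest to the experimentally measured capacity.
    `run_lithiation_simulation` is a hypothetical callable wrapping the actual solver."""
    errors = []
    for gamma in gamma_candidates:
        simulated_capacity = run_lithiation_simulation(gamma_cbd=gamma, current=6.0)
        errors.append(abs(simulated_capacity - experimental_capacity))
    best = int(np.argmin(errors))
    return gamma_candidates[best], errors[best]

# Illustrative usage with a dummy solver that is NOT the actual simulation framework:
dummy_solver = lambda gamma_cbd, current: 4.0 * gamma_cbd / (0.05 + gamma_cbd)
print(select_best_gamma([0.03, 0.06, 0.12, 0.24], experimental_capacity=2.8,
                        run_lithiation_simulation=dummy_solver))
```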
The impact of the spatial distribution of the CBD on the cell voltage and the achievable lithiation capacity is apparent. A smaller value of the effective transport parameter γ reduces the achievable lithiation capacity of the simulated electrode. In turn, increasing the value of γ reduces the transport resistance in the electrolyte, allowing a larger electrode capacity to be accessed. A value of γ = 0.12 provides the best match between simulations and experiments for the two-layer electrode created using the EDX-Closing method presented in Figure 10. This parameter value corresponds to a local effective tortuosity of the electrolyte phase of the CBD of 4.2.
The simulation results for all six currents for the selected effective transport parameter ($\gamma_{\mathrm{EDX}}^{\mathrm{two-layer}} = 0.12$) within the CBD are shown in Figure 10b. The numerical results show some spread at higher currents due to local fluctuations in the three electrode cutouts. Nevertheless, the simulated cell voltages are in excellent agreement with the experimental data for all currents. However, as shown in Figure 10c, applying the same procedure to the k-means trinarization results in a similar match between experiments and simulations. In this case, the resulting effective tortuosity of the CBD is somewhat larger (γ = 0.06, τ = 8.3). Similar results can be reported for all cases studied in this work. The figures used for both electrodes and all trinarizations to select the best matching effective tortuosity are shown in the Supporting Information, see Figure S8. The corresponding values for the ten different trinarizations and the two different electrode types are also listed in Table 3.

The impact of the trinarization on the electrode performance differs between the methods investigated in this work. Yet, the two-layer electrode and the single-layer electrode exhibit the same trends. A smaller effective tortuosity indicates that the CBD is distributed in the electrode such that even a small local transport resistance will reduce the overall transport through the electrode. The EDX-Closing trinarization leads to the smallest effective tortuosities for the single-layer and two-layer electrodes due to the distribution of the CBD at the bottlenecks of the active material microstructure. The k-means and Thresholding approaches, on the other hand, result in the largest effective tortuosities, which implies that the spatial distribution of CBD created by these trinarization methods does not fully cover the bottlenecks for the electrolyte transport. The results obtained for the trinarizations based on neural networks lie qualitatively between these two extremes. Additional analytical techniques are required to probe the influence of the CBD distribution. As shown above, the distribution and the corresponding effective tortuosity values have a significant influence on lithium-ion transport in the electrolyte. Impedance spectroscopy on symmetrical cells under blocking conditions has become a standard tool for the characterization of the pore transport resistance [87]-[89]. Therefore, we additionally performed impedance simulations on symmetrical cells to investigate the impact of the different trinarization methods. The corresponding impedance spectra for the single-layer and two-layer cathode are shown in Figures S6a and S6b, respectively. However, the different trinarizations result in very similar impedance spectra, which does not allow distribution-related effects to be discerned in corresponding impedance measurements. Therefore, the electrode impedance also does not provide a hint towards the most favorable trinarization method.
In summary, we demonstrate that it is possible to identify one effective tortuosity per electrode type and trinarization method such that the simulations are in fair agreement with the experimental data for all currents. However, there are large variations in the effective tortuosity of the CBD between the different trinarization methods. None of the individual techniques is able to provide a consistent representation for all electrode samples investigated in this work. Hence, we could not determine the trinarization method providing the best representation of the electrode microstructure. High-resolution image data of the CBD might yield additional information on the effective CBD conductivity, which may eventually allow the most suitable trinarization technique to be chosen.
Conclusion and outlook
In the present paper, 3D image data of a single-layer and a two-layer cathode obtained by synchrotron tomography has been segmented into active material, the carbon-binder domain and the pore space by four different approaches, where the approach based on correlative microscopy allows for nine different trinarizations by altering the way of converting the material composition predicted by the neural network into a three-phase reconstruction. The different segmentation approaches, which are designed to match the experimentally determined volume fractions, are quantitatively compared by means of statistical image analysis as well as spatially and temporally resolved simulations of electrochemical properties. It turns out that there are non-negligible differences between the proposed trinarization approaches. Among other characteristics, the geodesic tortuosity as well as the continuous phase size distribution of both the CBD and the pores depend on the chosen segmentation approach. Furthermore, it has been shown that there are clear differences between the trinarizations obtained by correlative microscopy. Thus, the rule for converting the material composition predicted by the neural network into a three-phase reconstruction is of importance, even though the differences compared to the remaining three approaches are more pronounced. However, a high level of agreement between the experimental measurements and the lithiation simulations can be achieved for all trinarization methods by adjusting the effective transport parameter of the carbon-binder domain. Note that using a fixed current for fitting this parameter allows us to match the experimental curves for five different currents, which indicates that each trinarization approach is reasonable. By doing so, the effective tortuosity within the CBD is restricted to the interval [4.2, 50]. This large range indicates that further research is required to determine the best trinarization approach. For example, the high-resolution 3D FIB-SEM data could be used to quantitatively investigate ionic transport within the nanopores. Nevertheless, the presented approach based on spatially resolved numerical simulations allows the optimal spatial distribution of the CBD in lithium-ion battery electrodes to be predicted, leading to an improved electrochemical performance.

Supporting Information
SI 1.1 Further trinarizations
Figure S1: Geodesic tortuosity of active material (left), CBD (center) and pore space (right) for the two-layer cathode (dashed curves) and the single-layer cathode (solid curves).
SI 1.2 Electrochemical transport model and parametrization
The governing equations used for the electrochemical simulations are listed in Table S2.

Table S2: List of governing equations used for the electrochemical simulations. The effective transport parameters in the CBD and separator are calculated according to Equation (1).
The battery system is solved in the solid phases for the electrical potential $\Phi_s$ and the lithium concentration $c_s$, and in the electrolyte phases for the electrochemical potential $\varphi_e$ and the lithium concentration $c_e$. The governing equations in the different domains are connected using different interface and boundary conditions, see Table S3.
The reaction models used in this work are listed in Table S4, whereas the electrochemical parameters of the materials are listed in Table S5.
The conductivities of the current collector, the counter-electrode and the conductive additive binder domain (CBD) are assumed values. These values, which are smaller than the real conductivities of these materials, are selected to reduce the negative impact of these regions on the numerical solution tolerance, and at the same time retain the high conductivity compared to the active material and electrolyte phase.

Table S4: Reaction models used in this work. At the interface between electrolyte and NMC, the intercalation reaction is modeled as $i^{\mathrm{intercalation}} = 2 \cdot i_{00}^{\mathrm{intercalation}} \cdot \sqrt{c_e \cdot c_s} \cdot \sinh\!\left(\frac{F}{2RT}\,\eta^{\mathrm{intercalation}}\right)$ with $\eta^{\mathrm{intercalation}} = \Phi_s - \varphi_e - U_0(c_s)$, and the double-layer current as $i^{\mathrm{DL}} = -C^{\mathrm{DL}} \cdot \frac{\mathrm{d}\Delta\Phi}{\mathrm{d}t}$ with $\Delta\Phi \approx \Phi_s - \varphi_e$. At the interface between electrolyte and counter-electrode, the reaction current is
$i^{\mathrm{CE}} = 2 \cdot i_{00}^{\mathrm{CE}} \cdot \sqrt{c_e} \cdot \sinh\!\left(\frac{F}{2RT}\,\eta^{\mathrm{CE}}\right)$ with $\eta^{\mathrm{CE}} = \Phi_s - \varphi_e - U_0^{\mathrm{CE}}$ and $U_0^{\mathrm{CE}} = 0$.
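For illustration, the reaction models of Table S4 can be evaluated as sketched below; the exchange-current prefactors, concentrations and the flat open-circuit potential used in the example are placeholders, and $U_0(c_s)$ is passed in as a function since its parametrization is given elsewhere in the Supporting Information.

```python
import numpy as np

F = 96485.0   # C/mol, Faraday constant
R = 8.314     # J/(mol K), gas constant

def intercalation_current(i00, c_e, c_s, phi_s, phi_e, u0, T=298.15):
    """Intercalation current density of Table S4:
    i = 2 * i00 * sqrt(c_e * c_s) * sinh(F/(2RT) * eta), eta = phi_s - phi_e - U0(c_s)."""
    eta = phi_s - phi_e - u0(c_s)
    return 2.0 * i00 * np.sqrt(c_e * c_s) * np.sinh(F / (2.0 * R * T) * eta)

def counter_electrode_current(i00_ce, c_e, phi_s, phi_e, T=298.15):
    """Counter-electrode current density of Table S4 with U0_CE = 0:
    i_CE = 2 * i00_CE * sqrt(c_e) * sinh(F/(2RT) * eta_CE), eta_CE = phi_s - phi_e."""
    eta = phi_s - phi_e
    return 2.0 * i00_ce * np.sqrt(c_e) * np.sinh(F / (2.0 * R * T) * eta)

# Placeholder usage: a 10 mV overpotential at a flat open-circuit potential of 3.8 V
u0_flat = lambda c_s: 3.8
print(intercalation_current(i00=1e-3, c_e=1.2e-3, c_s=2.5e-2,
                            phi_s=3.81, phi_e=0.0, u0=u0_flat))
```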
Table S5 excerpt (columns: parameter, value, description, source), NMC parameters: $c^0_{\mathrm{NMC}} = 1.65 \cdot 10^{-2}$ mol/cm³, initial value for the lithiation simulation (calculated); $c^0_{\mathrm{NMC}} = 5.0218747 \cdot 10^{-2}$ mol/cm³, initial value for the impedance simulation (calculated); $c^{\max}_{\mathrm{NMC}} = 5.0451 \cdot 10^{-2}$ mol/cm³, maximal lithium concentration in NMC622 [13]; $D_{\mathrm{NMC}}$ / cm²/s, Li-ion diffusion coefficient, see Equation (SI-8) [13]. The electrolyte transport parameters (Li-ion diffusion coefficient and transference number [13], [90]) and the counter-electrode parameters complete the table.
SI 1.3 Additional images regarding the influence of the effective parameter
Symmetrical impedance spectra for the single-layer and two-layer electrode and the different trinarization techniques are shown in Figure S6.
(a) Impedance spectra for the single-layer cathode. (b) Impedance spectra for the two-layer cathode. Figure S6: Symmetrical impedance spectra for all trinarization techniques for the two different electrodes with the best matching γ as identified at 6 mA/cm².
The impact of the effective transport parameter γ on the symmetrical impedance is shown in Figure S7.
The effective tortuosity of the CBD is identified by comparing the lithiation simulations at 6 mA/cm² with the experimental data. Figure S8 contains the images used for the selection of the best-matching effective transport parameter $\tau^{\mathrm{CBD}}$ for the single-layer cathode (left two columns) and the two-layer cathode (right two columns). The lithiation simulations for the single-layer cathode (left two columns) and the two-layer cathode (right two columns) for the selected effective CBD parameter (see Table 3) are shown in Figure S9.

Figure S7: The symmetrical impedance spectra for the k-means approach for the two-layer electrode show the influence of the effective transport parameter γ.

Figure S8: Variation of the effective transport parameter $\tau^{\mathrm{CBD}}$ for the single-layer cathode (left columns) and the two-layer cathode (right columns).

Figure S9: Best matching effective transport parameter $\tau^{\mathrm{CBD}}$ for the single-layer cathode (left columns) and the two-layer cathode (right columns).
$x_7$, $x_8 = \ldots = x_{19}$ and $x_{20} = \ldots = x_{27}$. This leads to five parameters, which are computed by minimizing the cost function
Figure 1: Left: EDX image (fluorine mapping) of the single-layer cathode. Right: CBD gradient computed from EDX data (dots) and corresponding linear fit (solid line) for the single-layer (SL) and two-layer (TL) cathode.
At first, selected results obtained by the four different trinarization approaches are visualized in Figure 2.
Figure 2: Comparison of different trinarization approaches for a 2D slice.
Figure 3: Fraction of equally assigned voxels (top left) as well as Jaccard index for active material (top right), CBD (bottom left) and pores (bottom right). Note that the entries above the main diagonal correspond to the two-layer cathode, whereas the entries below the main diagonal refer to the single-layer cathode. Due to the high accordance with regard to the spatial distribution of active material, the corresponding color bar only ranges from 0.6 to 1.
To evaluate the impact of the different methods for CBD reconstruction, we performed two different types of virtual experiments: (i) constant-current lithiation in half-cell configuration with six different currents (1, 3, 6, 8, 10 and 12 mA/cm²), and (ii) impedance spectroscopy in symmetrical cell configuration under blocking conditions.
Figure 4: Simulation domains used for the two different types of electrochemical simulations.
Figure 5: Continuous phase size distribution of active material (left), CBD (center), and pore space (right) for the two-layer cathode (dashed curves) and the single-layer cathode (solid curves).
Figure 6: Simulated mercury intrusion porosimetry of active material (left), CBD (center) and pore space (right) for the two-layer cathode (dashed curves) and the single-layer cathode (solid curves).
Figure 7: Geodesic tortuosity of active material (left), CBD (center) and pore space (right) for the two-layer cathode (dashed curves) and the single-layer cathode (solid curves).
Figure 8: Top row: Centered two-point coverage probability function of active material, CBD and pore space (from left to right). Bottom row: Centered two-point coverage probability functions $C_{\mathrm{AM,CBD}}$, $C_{\mathrm{AM,P}}$ and $C_{\mathrm{CBD,P}}$ (from left to right). Note that dashed curves are used for the two-layer cathode, whereas the solid curves correspond to the single-layer cathode.
Figure 9: Volume fraction of active material (left), CBD (center) and pores (right) as a function of the distance to the separator for the two-layer cathode (dashed curves) and the single-layer cathode (solid curves).
(a) Impact of the effective parameter γ on the simulated cell voltage for the EDX-Closing trinarization at 6 mA/cm². The target experiments are highlighted with filled green symbols. (b) The simulated cell voltage of the EDX-Closing trinarization for the best matching effective tortuosity (γ = 0.24) for all currents. (c) The simulated cell voltage for the best matching effective tortuosity (γ = 0.06) for all currents for the k-means trinarization.
Figure 10: Impact of the effective transport through the CBD for the two-layer electrode.
Figure S2: Top row: Centered two-point coverage probability function of active material, CBD and pore space (from left to right). Bottom row: Centered two-point coverage probability functions $C_{\mathrm{AM,CBD}}$, $C_{\mathrm{AM,P}}$ and $C_{\mathrm{CBD,P}}$ (from left to right). Note that dashed curves are used for the two-layer cathode, whereas the solid curves correspond to the single-layer cathode.

Figure S3: Continuous phase size distribution of active material (left), CBD (center), and pore space (right) for the two-layer cathode (dashed curves) and the single-layer cathode (solid curves).

Figure S4: Simulated mercury intrusion porosimetry of active material (left), CBD (center) and pore space (right) for the two-layer cathode (dashed curves) and the single-layer cathode (solid curves).
Figure S5: Volume fraction of active material (left), CBD (center) and pores (right) as a function of the distance to the separator for the two-layer cathode (dashed curves) and the single-layer cathode (solid curves).
Table S3 excerpt (flux conditions at active material interfaces): at interfaces with bulk electrolyte, the lithium mass fluxes satisfy $N_e \cdot n = (i_{\mathrm{react}} + i_{\mathrm{DL}})/F$ and $N_s \cdot n = i_{\mathrm{react}}/F$, and the charge fluxes $J_e^{\mathrm{bulk}} \cdot n = i_{\mathrm{react}} + i_{\mathrm{DL}}$ and $J_s \cdot n = i_{\mathrm{react}} + i_{\mathrm{DL}}$; at interfaces with the separator and the CBD, the corresponding fluxes are scaled by the electrolyte volume fractions $\varepsilon_e^{\mathrm{sep}}$ and $\varepsilon_e^{\mathrm{CBD}}$, respectively (e.g. $J_e^{\mathrm{sep}} \cdot n = (i_{\mathrm{react}} + i_{\mathrm{DL}}) \cdot \varepsilon_e^{\mathrm{sep}}$).
70 °C, 95 °C and 110 °C for both electrodes. The mass loading resulted in 51 mg/cm² and 53 mg/cm² for the single-layer and the two-layer electrode, respectively. After drying, the electrodes were calendered using a pilot line calender (KKA, Germany) with a line pressure restricted to a maximum value of 208.3 Pa/m and rolls heated to 100 °C. The final density of the electrode composites was 3.1 g/cm³.

Suspension                     A        B        C        D
Content in solid mass [w%]
  NMC                          93.5     91.5     95.5     93.0
  carbon black                 2.0      2.0      2.0      2.0
  graphite                     1.0      1.0      1.0      1.0
  PVdF                         3.5      5.5      1.5      4.0
Mixer                          Grieser  Netzsch  Grieser  Netzsch

Table 1: Material compositions and mixers for the four different suspensions, which are used to manufacture the cathode samples considered in the present paper.
Table 2: Scalar microstructure characteristics for the different trinarization approaches (AM-Cond, CBD-Cond, P-Cond, Thresholding, k-means, EDX-Closing).
Table 3: List of effective tortuosity values providing the best match to the corresponding electrochemical data.
Table S1: Scalar microstructure characteristics for the remaining six neural network trinarization approaches (AM-C-P, AM-P-C, C-P-AM, C-AM-P, P-AM-C, P-C-AM).
Table S3: Interface and boundary conditions of the governing equations for the different domains. The models used for the reaction and double-layer current are listed in Table S4.
Table S5: List of electrochemical parameters used for the simulations.
https://www.itwm.fraunhofer.de/best
Acknowledgements

The presented work was financially supported by the German Ministry "Bundesministerium für Bildung und Forschung" within the projects HighEnergy and HiStructures under the reference numbers 03XP0073C and 03XP0243C/D/E as well as within the framework of the program "Vom Material zur Innovation". This study contributes to the research performed at CELEST (Center for Electrochemical Energy Storage Ulm Karlsruhe). The work by MN was partially funded by the German Research Foundation (DFG) under Project ID 390874152 (POLiS Cluster of Excellence, EXC 2154). The authors acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant no. INST 40/575-1 FUGG (JUSTUS 2 cluster). We thank Christian Dreer for working out the production process for the single-layer and two-layer electrodes and their manufacturing and Claudia Pfeifer for the preparation and EDX analysis of the electrode cross sections. All responsibility for the content of this publication is assumed by the authors.

Data availability

The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study.

References
Lithium Batteries: Advanced Technologies and Applications. B Scrosati, K M Abraham, W Van Schalkwijk, J Hassoun, Hoboken: J. Wiley & Sons374B. Scrosati, K. M. Abraham, W. van Schalkwijk, and J. Hassoun, Lithium Batteries: Advanced Technologies and Applications. Hoboken: J. Wiley & Sons, 2013, 374.
Lithium-Ion Batteries: Basics and Applications. R Korthauer, SpringerBerlinR. Korthauer, Lithium-Ion Batteries: Basics and Applications. Berlin: Springer, 2018.
The Li-ion rechargeable battery: A perspective. J B Goodenough, K.-S Park, Journal of the American Chemical Society. 1354J. B. Goodenough and K.-S. Park, "The Li-ion rechargeable battery: A perspective," Journal of the American Chemical Society, vol. 135, no. 4, 1167-1176, 2013.
J Newman, K Thomas-Alyea, Electrochemical Systems, 3 rd ed. Hoboken: J. Wiley & Sons. J. Newman and K. Thomas-Alyea, Electrochemical Systems, 3 rd ed. Hoboken: J. Wiley & Sons, 2004.
Realizing the electric-vehicle revolution. M Tran, D Banister, J D K Bishop, M D Mcculloch, Nature Climate Change. 25M. Tran, D. Banister, J. D. K. Bishop, and M. D. McCulloch, "Realizing the electric-vehicle revolution," Nature Climate Change, vol. 2, no. 5, 328-333, 2012.
Batteries and fuel cells for emerging electric vehicle markets. Z P Cano, D Banham, S Ye, A Hintennach, J Lu, M Fowler, Z Chen, Nature Energy. 34Z. P. Cano, D. Banham, S. Ye, A. Hintennach, J. Lu, M. Fowler, and Z. Chen, "Batteries and fuel cells for emerging electric vehicle markets," Nature Energy, vol. 3, no. 4, 279-289, 2018.
Influence of microstructure on impedance response in intercalation electrodes. S Cho, C.-F Chen, P P Mukherjee, Journal of The Electrochemical Society. 1627S. Cho, C.-F. Chen, and P. P. Mukherjee, "Influence of microstructure on impedance response in intercalation electrodes," Journal of The Electrochemical Society, vol. 162, no. 7, A1202-A1214, 2015.
Morphology effect on the electrochemical performance of NiO films as anodes for lithium ion batteries. X Huang, J Tu, X Xia, X Wang, J Xiang, L Zhang, Y Zhou, Journal of Power Sources. 1882X. Huang, J. Tu, X. Xia, X. Wang, J. Xiang, L. Zhang, and Y. Zhou, "Morphology effect on the electrochemical performance of NiO films as anodes for lithium ion batteries," Journal of Power Sources, vol. 188, no. 2, 588-591, 2009.
Impacts of variations in manufacturing parameters on performance of lithium-ion batteries. G Lenze, H Bockholt, C Schilcher, L Froböse, D Jansen, U Krewer, A Kwade, Journal of The Electrochemical Society. 1652G. Lenze, H. Bockholt, C. Schilcher, L. Froböse, D. Jansen, U. Krewer, and A. Kwade, "Impacts of variations in manufacturing parameters on performance of lithium-ion batteries," Journal of The Electrochemical Society, vol. 165, no. 2, A314-A322, 2018.
Morphology effects on the electrochemical performance of LiNi 1-x Co x O 2. W Li, J C Currie, Journal of The Electrochemical Society. 1448W. Li and J. C. Currie, "Morphology effects on the electrochemical performance of LiNi 1-x Co x O 2 ," Journal of The Electrochemical Society, vol. 144, no. 8, 2773-2779, 1997.
Influence of microstructure on the electrochemical performance of LiMn 2-y-z Li y Ni z O 4 spinel cathodes in rechargeable lithium batteries. Y Shin, A Manthiram, Journal of Power Sources. 1261Y. Shin and A. Manthiram, "Influence of microstructure on the electrochemical performance of LiMn 2-y-z Li y Ni z O 4 spinel cathodes in rechargeable lithium batteries," Journal of Power Sources, vol. 126, no. 1, 169-174, 2004.
Effects of three-dimensional cathode microstructure on the performance of lithium-ion battery cathodes. A H Wiedemann, G M Goldin, S A Barnett, H Zhu, R J Kee, Electrochimica Acta. 88A. H. Wiedemann, G. M. Goldin, S. A. Barnett, H. Zhu, and R. J. Kee,"Effects of three-dimensional cathode microstructure on the performance of lithium-ion battery cathodes," Electrochimica Acta, vol. 88, 580-588, 2013.
Manufacturing process for improved ultra-thick cathodes in high-energy lithium-ion batteries. L S Kremer, A Hoffmann, T Danner, S Hein, B Prifling, D Westhoff, C Dreer, A Latz, V Schmidt, M Wohlfahrt-Mehrens, Energy Technology. 822020L. S. Kremer, A. Hoffmann, T. Danner, S. Hein, B. Prifling, D. Westhoff, C. Dreer, A. Latz, V. Schmidt, and M. Wohlfahrt-Mehrens, "Manufacturing process for improved ultra-thick cathodes in high-energy lithium-ion batteries," Energy Technology, vol. 8, no. 2, 1900167, 2020.
Intensive dry and wet mixing influencing the structural and electrochemical properties of secondary lithium-ion battery cathodes. H Bockholt, W Haselrieder, A Kwade, ECS Transactions. 50H. Bockholt, W. Haselrieder, and A. Kwade, "Intensive dry and wet mixing influencing the struc- tural and electrochemical properties of secondary lithium-ion battery cathodes," ECS Transactions, vol. 50, 25-35, 2013.
The influence of different post-drying procedures on remaining water content and physical and electrochemical properties of lithium-ion batteries. F Huttner, W Haselrieder, A Kwade, Energy Technology. 822020F. Huttner, W. Haselrieder, and A. Kwade, "The influence of different post-drying procedures on remaining water content and physical and electrochemical properties of lithium-ion batteries," Energy Technology, vol. 8, no. 2, 1900245, 2020.
Investigation of drying curves of lithium-ion battery electrodes with a new gravimetrical double-side batch dryer concept including setup characterization and model simulations. J Kumberg, M Baunach, J C Eser, A Altvater, P Scharfer, W Schabel, Energy Technology. 922021J. Kumberg, M. Baunach, J. C. Eser, A. Altvater, P. Scharfer, and W. Schabel, "Investigation of drying curves of lithium-ion battery electrodes with a new gravimetrical double-side batch dryer concept including setup characterization and model simulations," Energy Technology, vol. 9, no. 2, 2000889, 2021.
Calendering effects on the physical and electrochemical properties of Li. H Zheng, L Tan, G Liu, X Song, V S Battaglia, Journal of Power Sources. 2082cathodeH. Zheng, L. Tan, G. Liu, X. Song, and V. S. Battaglia, "Calendering effects on the physical and electrochemical properties of Li[Ni 1/3 Mn 1/3 Co 1/3 ]O 2 cathode," Journal of Power Sources, vol. 208, 52-57, 2012.
Impact of the calendering process on the interfacial structure and the related electrochemical performance of secondary lithium-ion batteries. W Haselrieder, S Ivanov, D K Christen, H Bockholt, A Kwade, ESC Transactions. 5026W. Haselrieder, S. Ivanov, D. K. Christen, H. Bockholt, and A. Kwade, "Impact of the calendering process on the interfacial structure and the related electrochemical performance of secondary lithium-ion batteries," ESC Transactions, vol. 50, no. 26, 59-70, 2013.
Analysis of the 3D microstructure of experimental cathode films for lithium-ion batteries under increasing compaction. K Kuchler, B Prifling, D Schmidt, H Markötter, I Manke, T Bernthaler, V Knoblauch, V Schmidt, Journal of Microscopy. 2722K. Kuchler, B. Prifling, D. Schmidt, H. Markötter, I. Manke, T. Bernthaler, V. Knoblauch, and V. Schmidt, "Analysis of the 3D microstructure of experimental cathode films for lithium-ion batteries under increasing compaction," Journal of Microscopy, vol. 272, no. 2, 96-110, 2018.
The interaction of consecutive process steps in the manufacturing of lithium-ion battery electrodes with regard to structural and electrochemical properties. H Bockholt, M Indrikova, A Netz, F Golks, A Kwade, Journal of Power Sources. 325H. Bockholt, M. Indrikova, A. Netz, F. Golks, and A. Kwade, "The interaction of consecutive process steps in the manufacturing of lithium-ion battery electrodes with regard to structural and electrochemical properties," Journal of Power Sources, vol. 325, 140-151, 2016.
Three-phase reconstruction reveals how the microscopic structure of the carbonbinder domain affects ion transport in lithium-ion batteries. M Kroll, S L Karstens, M Cronau, A Höltzel, S Schlabach, N Nobel, C Redenbach, B Roling, U Tallarek, Batteries & Supercaps. 48M. Kroll, S. L. Karstens, M. Cronau, A. Höltzel, S. Schlabach, N. Nobel, C. Redenbach, B. Roling, and U. Tallarek, "Three-phase reconstruction reveals how the microscopic structure of the carbon- binder domain affects ion transport in lithium-ion batteries," Batteries & Supercaps, vol. 4, no. 8, 1363-1373, 2021.
FIB/SEM-based calculation of tortuosity in a porous LiCoO 2 cathode for a Li-ion battery. T Hutzenlaub, A Asthana, J Becker, D Wheeler, R Zengerle, S Thiele, Electrochemistry Communications. 27T. Hutzenlaub, A. Asthana, J. Becker, D. Wheeler, R. Zengerle, and S. Thiele, "FIB/SEM-based calculation of tortuosity in a porous LiCoO 2 cathode for a Li-ion battery," Electrochemistry Com- munications, vol. 27, 77-80, 2013.
The role of carbon black distribution in cathodes for Li ion batteries. R Dominko, M Gaberscek, J Drofenik, M Bele, S Pejovnik, J Jamnik, Journal of Power Sources. 119R. Dominko, M. Gaberscek, J. Drofenik, M. Bele, S. Pejovnik, and J. Jamnik, "The role of carbon black distribution in cathodes for Li ion batteries," Journal of Power Sources, vol. 119-121, 770- 773, 2003.
Detection of binder gradients using impedance spectroscopy and their influence on the tortuosity of Li-ion battery graphite electrodes. R Morasch, J Landesfeind, B Suthar, H A Gasteiger, Journal of The Electrochemical Society. 16514R. Morasch, J. Landesfeind, B. Suthar, and H. A. Gasteiger, "Detection of binder gradients using impedance spectroscopy and their influence on the tortuosity of Li-ion battery graphite electrodes," Journal of The Electrochemical Society, vol. 165, no. 14, A3459-A3467, 2018.
Influence of the binder on lithium ion battery electrode tortuosity and performance. J Landesfeind, A Eldiven, H A Gasteiger, Journal of The Electrochemical Society. 1655J. Landesfeind, A. Eldiven, and H. A. Gasteiger, "Influence of the binder on lithium ion battery electrode tortuosity and performance," Journal of The Electrochemical Society, vol. 165, no. 5, A1122-A1128, 2018.
Multiscale characterization of composite electrode microstructures for high density lithium-ion batteries guided by the specificities of their electronic and ionic transport mechanisms. F Cadiou, T Douillard, N Besnard, B Lestriez, E Maire, Journal of The Electrochemical Society. 16710100521F. Cadiou, T. Douillard, N. Besnard, B. Lestriez, and E. Maire, "Multiscale characterization of composite electrode microstructures for high density lithium-ion batteries guided by the specifici- ties of their electronic and ionic transport mechanisms," Journal of The Electrochemical Society, vol. 167, no. 10, 100521, 2020.
Hierarchical structuring of NCM111-cathode materials in lithium-ion batteries: An in-depth study of the influence of primary and secondary particle size effects on electrochemical performance. A Wagner, N Bohn, H Geßwein, M Neumann, M Osenberg, A Hilger, I Manke, V Schmidt, J R Binder, ACS Applied Energy Materials. 3A. Wagner, N. Bohn, H. Geßwein, M. Neumann, M. Osenberg, A. Hilger, I. Manke, V. Schmidt, and J. R. Binder, "Hierarchical structuring of NCM111-cathode materials in lithium-ion batteries: An in-depth study of the influence of primary and secondary particle size effects on electrochemical performance," ACS Applied Energy Materials, vol. 3, 12565-12574, 2020.
Guiding the design of heterogeneous electrode microstructures for Li-ion batteries: Microscopic imaging, predictive modeling, and machine learning. H Xu, J Zhu, D P Finegan, H Zhao, X Lu, W Li, N Hoffman, A Bertei, P Shearing, M Z Bazant, Advanced Energy Materials. 1119H. Xu, J. Zhu, D. P. Finegan, H. Zhao, X. Lu, W. Li, N. Hoffman, A. Bertei, P. Shearing, and M. Z. Bazant, "Guiding the design of heterogeneous electrode microstructures for Li-ion batteries: Microscopic imaging, predictive modeling, and machine learning," Advanced Energy Materials, vol. 11, no. 19, 2003908, 2021.
Image based modelling of microstructural heterogeneity in LiFePO 4 electrodes for Li-ion batteries. S Cooper, D Eastwood, J Gelb, G Damblanc, D Brett, R Bradley, P Withers, P Lee, A Marquis, N Brandon, P Shearing, Journal of Power Sources. 247S. Cooper, D. Eastwood, J. Gelb, G. Damblanc, D. Brett, R. Bradley, P. Withers, P. Lee, A. Marquis, N. Brandon, and P. Shearing, "Image based modelling of microstructural heterogeneity in LiFePO 4 electrodes for Li-ion batteries," Journal of Power Sources, vol. 247, 1033-1039, 2014.
3D microstructure characterization of polymer battery electrodes by statistical image analysis based on synchrotron X-ray tomography. M Neumann, M Ademmer, M Osenberg, A Hilger, F Wilde, S Muench, M D Hager, U S Schubert, I Manke, V Schmidt, Journal of Power Sources. 5422022M. Neumann, M. Ademmer, M. Osenberg, A. Hilger, F. Wilde, S. Muench, M. D. Hager, U. S. Schubert, I. Manke, and V. Schmidt, "3D microstructure characterization of polymer battery electrodes by statistical image analysis based on synchrotron X-ray tomography," Journal of Power Sources, vol. 542, 231783, 2022.
Visualization and quantification of electrochemical and mechanical degradation in Li ion batteries. M Ebner, F Marone, M Stampanoni, V Wood, Science. 3426159M. Ebner, F. Marone, M. Stampanoni, and V. Wood, "Visualization and quantification of elec- trochemical and mechanical degradation in Li ion batteries," Science, vol. 342, no. 6159, 716-720, 2013.
X-ray tomography of porous, transition metal oxide based lithium ion battery electrodes. M Ebner, F Geldmacher, F Marone, M Stampanoni, V Wood, Advanced Energy Materials. 37M. Ebner, F. Geldmacher, F. Marone, M. Stampanoni, and V. Wood, "X-ray tomography of porous, transition metal oxide based lithium ion battery electrodes," Advanced Energy Materials, vol. 3, no. 7, 845-850, 2013.
Characterization of the 3-dimensional microstructure of a graphite negative electrode from a Li-ion battery. P Shearing, L Howard, P Jørgensen, N Brandon, S Harris, Electrochemistry Communications. 123P. Shearing, L. Howard, P. Jørgensen, N. Brandon, and S. Harris, "Characterization of the 3- dimensional microstructure of a graphite negative electrode from a Li-ion battery," Electrochem- istry Communications, vol. 12, no. 3, 374-377, 2010.
Three dimensional simulation of galvanostatic discharge of LiCoO 2 cathode based on X-ray nano-CT images. B Yan, C Lim, L Yin, L Zhu, Journal of The Electrochemical Society. 15910B. Yan, C. Lim, L. Yin, and L. Zhu, "Three dimensional simulation of galvanostatic discharge of LiCoO 2 cathode based on X-ray nano-CT images," Journal of The Electrochemical Society, vol. 159, no. 10, A1604-A1614, 2012.
A combination of X-ray tomography and carbon binder modeling: Reconstructing the three phases of LiCoO 2 Li-ion battery cathodes. L Zielke, T Hutzenlaub, D R Wheeler, I Manke, T Arlt, N Paust, R Zengerle, S Thiele, Advanced Energy Materials. 481301617L. Zielke, T. Hutzenlaub, D. R. Wheeler, I. Manke, T. Arlt, N. Paust, R. Zengerle, and S. Thiele, "A combination of X-ray tomography and carbon binder modeling: Reconstructing the three phases of LiCoO 2 Li-ion battery cathodes," Advanced Energy Materials, vol. 4, no. 8, 1301617, 2014.
Mesoscale effective property simulations incorporating conductive binder. B L Trembacki, D R Noble, V E Brunini, M E Ferraro, S A Roberts, Journal of the Electrochemical Society. 16411B. L. Trembacki, D. R. Noble, V. E. Brunini, M. E. Ferraro, and S. A. Roberts, "Mesoscale effective property simulations incorporating conductive binder," Journal of the Electrochemical Society, vol. 164, no. 11, E3613-E3626, 2017.
Influence of conductive additives and binder on the impedance of lithium-ion battery electrodes: Effect of morphology. S Hein, T Danner, D Westhoff, B Prifling, R Scurtu, L Kremer, A Hoffmann, A Hilger, M Osenberg, I Manke, M Wohlfahrt-Mehrens, V Schmidt, A Latz, Journal of The Electrochemical Society. 167113546S. Hein, T. Danner, D. Westhoff, B. Prifling, R. Scurtu, L. Kremer, A. Hoffmann, A. Hilger, M. Osenberg, I. Manke, M. Wohlfahrt-Mehrens, V. Schmidt, and A. Latz, "Influence of conductive additives and binder on the impedance of lithium-ion battery electrodes: Effect of morphology," Journal of The Electrochemical Society, vol. 167, no. 1, 013546, 2020.
Quantifying the unknown impact of segmentation uncertainty on image-based simulations. M C Krygier, T Labonte, C Martinez, C Norris, K Sharma, L N Collins, P P Mukherjee, S A Roberts, Nature Communications. 122021M. C. Krygier, T. LaBonte, C. Martinez, C. Norris, K. Sharma, L. N. Collins, P. P. Mukherjee, and S. A. Roberts, "Quantifying the unknown impact of segmentation uncertainty on image-based simulations," Nature Communications, vol. 12, 5414, 2021.
Integrated control system environment for high-throughput tomography. I Khokhriakov, L Lottermoser, R Gehrke, T Kracht, E Wintersberger, A Kopmann, M Vogelgesang, F Beckmann, International Society for Optics and Photonics. X-Ray Tomography IX, S. R. Stock9212SPIEI. Khokhriakov, L. Lottermoser, R. Gehrke, T. Kracht, E. Wintersberger, A. Kopmann, M. Vogelge- sang, and F. Beckmann, "Integrated control system environment for high-throughput tomography," in Developments in X-Ray Tomography IX, S. R. Stock, Ed., International Society for Optics and Photonics, vol. 9212, SPIE, 2014, 307-317.
Micro-CT at the imaging beamline P05 at PETRA III. F Wilde, M Ogurreck, I Greving, J U Hammel, F Beckmann, A Hipp, L Lottermoser, I Khokhriakov, P Lytaev, T Dose, H Burmester, M Müller, A Schreyer, AIP Conference Proceedings. 1741130035F. Wilde, M. Ogurreck, I. Greving, J. U. Hammel, F. Beckmann, A. Hipp, L. Lottermoser, I. Khokhriakov, P. Lytaev, T. Dose, H. Burmester, M. Müller, and A. Schreyer, "Micro-CT at the imaging beamline P05 at PETRA III," AIP Conference Proceedings, vol. 1741, no. 1, 030035, 2016.
Nonlinear total variation based noise removal algorithms. L I Rudin, S Osher, E Fatemi, Physica D: Nonlinear Phenomena. 601L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60, no. 1, 259-268, 1992.
Developments in synchrotron X-ray computed microtomography at the National Synchrotron Light Source. B A Dowd, G H Campbell, R B Marr, V V Nagarkar, S V Tipnis, L Axe, D P Siddons, International Society for Optics and Photonics. X-Ray Tomography II, U. Bonse3772SPIEB. A. Dowd, G. H. Campbell, R. B. Marr, V. V. Nagarkar, S. V. Tipnis, L. Axe, and D. P. Siddons, "Developments in synchrotron X-ray computed microtomography at the National Synchrotron Light Source," in Developments in X-Ray Tomography II, U. Bonse, Ed., International Society for Optics and Photonics, vol. 3772, SPIE, 1999, 224-236.
A non-local algorithm for image denoising. A Buades, B Coll, J.-M Morel, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. the IEEE Computer Society Conference on Computer Vision and Pattern RecognitionSan Diego2A. Buades, B. Coll, and J.-M. Morel, "A non-local algorithm for image denoising," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, vol. 2, San Diego, 2005, 60-65.
An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images. P Coupé, P Yger, S Prima, P Hellier, C Kervrann, C Barillot, IEEE Transactions on Medical Imaging. 274P. Coupé, P. Yger, S. Prima, P. Hellier, C. Kervrann, and C. Barillot, "An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images," IEEE Transactions on Medical Imaging, vol. 27, no. 4, 425-441, 2008.
Distinctive image features from scale-invariant keypoints. D G Lowe, International Journal of Computer Vision. 602D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, 91-110, 2004.
Fiji: An open-source platform for biological-image analysis. J Schindelin, I Arganda-Carreras, E Frise, V Kaynig, M Longair, T Pietzsch, S Preibisch, C Rueden, S Saalfeld, B Schmid, J.-Y Tinevez, D J White, V Hartenstein, K Eliceiri, P Tomancak, A Cardona, Nature Methods. 9J. Schindelin, I. Arganda-Carreras, E. Frise, V. Kaynig, M. Longair, T. Pietzsch, S. Preibisch, C. Rueden, S. Saalfeld, B. Schmid, J.-Y. Tinevez, D. J. White, V. Hartenstein, K. Eliceiri, P. Tomancak, and A. Cardona, "Fiji: An open-source platform for biological-image analysis," Nature Methods, vol. 9, 676-682, 2012.
Three-phase multiscale modeling of a LiCoO 2 cathode: Combining the advantages of FIB-SEM imaging and X-ray tomography. L Zielke, T Hutzenlaub, D R Wheeler, C.-W Chao, I Manke, A Hilger, N Paust, R Zengerle, S Thiele, Advanced Energy Materials. 551401612L. Zielke, T. Hutzenlaub, D. R. Wheeler, C.-W. Chao, I. Manke, A. Hilger, N. Paust, R. Zengerle, and S. Thiele, "Three-phase multiscale modeling of a LiCoO 2 cathode: Combining the advantages of FIB-SEM imaging and X-ray tomography," Advanced Energy Materials, vol. 5, no. 5, 1401612, 2015.
Morphology of nanoporous carbon-binder domains in Li-ion batteries -a FIB-SEM study. S Vierrath, L Zielke, R Moroni, A Mondon, D R Wheeler, R Zengerle, S Thiele, Electrochemistry Communications. 60S. Vierrath, L. Zielke, R. Moroni, A. Mondon, D. R. Wheeler, R. Zengerle, and S. Thiele, "Morphol- ogy of nanoporous carbon-binder domains in Li-ion batteries -a FIB-SEM study," Electrochemistry Communications, vol. 60, 176-179, 2015.
W Burger, M Burge, Digital Image Processing: An Algorithmic Introduction Using Java. Springer2W. Burger and M. Burge, Digital Image Processing: An Algorithmic Introduction Using Java, 2 nd ed. London: Springer, 2016.
R C Gonzalez, R E Woods, Digital Image Processing, 4. New YorkPearsonrd edR. C. Gonzalez and R. E. Woods, Digital Image Processing, 4 rd ed. New York: Pearson, 2018.
The Image Processing Handbook, 5 th ed. J C Russ, CRC PressBoca RatonJ. C. Russ, The Image Processing Handbook, 5 th ed. Boca Raton: CRC Press, 2007.
Data clustering: 50 years beyond K-means. A K Jain, Pattern Recognition Letters. 318A. K. Jain, "Data clustering: 50 years beyond K-means," Pattern Recognition Letters, vol. 31, no. 8, 651-666, 2010.
Robust direct acoustic impedance control using two microphones for mixed feedforward-feedback controller

Maxime Volery, Xinxin Guo, Hervé Lissek

November 4, 2022 (arXiv:2108.02003, DOI: 10.1051/aacus/2022058, Corpus ID: 236912523)
PDF: https://export.arxiv.org/pdf/2108.02003v3.pdf

Keywords: active sound absorption, electrodynamic loudspeaker, feedback control, feedforward control, model uncertainty, passivity, pressure control

This paper presents an acoustic impedance control architecture for an electroacoustic absorber combining both feedforward and feedback microphone-based strategies on a current-driven loudspeaker. Feedforward systems enable good performance for direct impedance control. However, inaccuracies in the required actuator model can lead to a loss of passivity, which can cause unstable behaviour. The feedback contribution allows the absorber to better handle model errors and still achieve an accurate impedance, preserving passivity. Numerical and experimental studies were conducted to compare this new architecture against a state-of-the-art feedforward control method.
Introduction
Electroacoustic absorption consists in controlling the acoustic impedance presented by an electroacoustic actuator, typically an electrodynamic loudspeaker [1]. The control of this impedance can be done passively, by loading the voice coil of the loudspeaker with an appropriate electrical impedance [2,3], or actively, using one or more sensors controlling the voltage or current applied to the actuator. Active electroacoustic absorbers have a wide range of applications, spanning from room acoustics [4] to aircraft engine noise reduction [5], thanks to their advantage of being tuneable, broadband and of sub-wavelength dimensions. Most of the state-of-the-art active absorber designs are either not tuneable, such as in the hybrid passive/active absorption concept [6], or require both a pressure and a velocity sensor for a feedback implementation. The sensing of the velocity can, for instance, be achieved using an accelerometer placed on the loudspeaker cone [7] (not acceptable for small loudspeakers), two closely placed microphones [8] (not practical because upstream from the impedance plane) or a Wheatstone bridge [9] (requires fine resistor and inductance tuning).
However, should the model of the actuator be known, a feedforward architecture [4] can be used where only a single sensor is needed. Also, thanks to the model inversion, direct impedance control can be achieved accurately, whereas other methods only approach the target impedance. Nevertheless, due to some inevitable inaccuracies in the estimation of the model parameters and the delay of the numerical controller, a mismatch between the target impedance and the achieved one will eventually occur. This mismatch can cause a loss of acoustic passivity of the absorber, meaning that it is injecting energy into the acoustic environment instead of absorbing it. Such behaviour is unwelcome, even if it occurs outside of the frequency band of interest, because it can result in an unstable positive acoustic feedback. In other words, if at a given frequency the absorber injects more energy than the acoustic environment dissipates, energy will build up, leading to an instability [10].
Combining both a feed-forward and a feedback loop can help reduce the inaccuracies while keeping the same performance, enabling a better fit with the analytical target impedance. The membrane velocity estimation needed for the feedback implementation can be obtained via a microphone placed inside the cavity of the loudspeaker [11,12]. Indeed, for wavelengths larger than the cabinet dimensions, the acoustic pressure behind the actuator is proportional to its membrane displacement and can be used to control it. With this configuration, the size and complexity of the proposed mixed feedforward-feedback strategy do not fundamentally change with respect to the former feedforward-only architecture, and the two can be directly compared.
This paper is organized as follows. In section 2, a model of the electrodynamic loudspeaker is introduced before the description of the two-input control architecture. Section 3 presents a Monte-Carlo analysis of the sensitivity of the achieved absorption to the model estimation errors. Experimental validation of the proposed architecture is given in section 4 for three different control configurations, and section 5 provides conclusion and opens some future perspectives for the presented concept.
2 Robust electroacoustic absorber design
Model of the electrodynamic loudspeaker
An electrodynamic loudspeaker can be modelled as a mass-spring-damper system, of mass M ms, mechanical compliance C ms and mechanical resistance R ms. It is thus a second-order resonator [13]. Three forces act on its membrane: the pressure in front of the membrane p f, the pressure behind the membrane p b and the Lorentz force due to the current i flowing in the voice coil. When mounted on an enclosure, the contribution from the rear pressure can be modelled as a specific compliance C sb for wavelengths larger than the cabinet dimensions. This compliance is the ratio between membrane displacement and the pressure in the cavity, and is linked to the volume of the cavity V b as follows:
$$C_{sb} = \frac{V_b}{\rho_0 c_0^2 S_d}, \tag{1}$$
where ρ 0 is the mass density of air, c 0 the speed of sound in the air, and S d the effective piston area of the loudspeaker. The membrane motion is described by Newton's second law of motion
$$M_{ms}\,\frac{dv(t)}{dt} = S_d\, p_f(t) - R_{ms}\, v(t) - \underbrace{\left(\frac{1}{C_{ms}} + \frac{S_d}{C_{sb}}\right)}_{1/C_{mc}} \int_0^t v(t)\,dt - Bl\, i(t), \tag{2}$$
where v is the membrane inwards velocity, Bl the coil force factor, and C mc the combined mechanical compliance of the loudspeaker and the cabinet. The pressure in the cabinet p b is directly proportional to the membrane displacement
$$p_b(t) = \frac{1}{C_{sb}}\int_0^t v(t)\,dt. \tag{3}$$
In the Laplace domain, with Laplace variable s, equations (2) and (3) are written
$$p_f(s) = Z_{ss}(s)\, v(s) + F\, i(s) \tag{4}$$
and
$$p_b(s) = \frac{v(s)}{s\, C_{sb}}, \tag{5}$$
where
$$F = \frac{Bl}{S_d}, \tag{6}$$
$$Z_{ss}(s) = R_{ss}\,\frac{s^2 + s\,\omega_0/Q_{ms} + \omega_0^2}{s\,\omega_0/Q_{ms}} \tag{7}$$
is the specific impedance of the loudspeaker, R ss = R ms/S d its specific resistance, ω 0 = 1/√(M ms C mc) its natural resonance angular frequency and Q ms = (1/R ms)√(M ms/C mc) its (passive) quality factor. From the representation of the impedance of equation (7), it is straightforward to notice that the passive loudspeaker (i = 0) mounted on a cabinet is indeed a second-order resonator.
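For readers who wish to reproduce the model numerically, the short sketch below evaluates the passive specific impedance of equation (7) on a frequency grid using the values of Table 3; the code (plain NumPy, names such as `Z_ss`) is our own illustration, not material from the paper.

```python
import numpy as np

# Parameters of the loudspeaker mounted on its cabinet (values from Table 3)
rho0, c0 = 1.2, 343.0          # air density (kg/m^3) and speed of sound (m/s)
R_ss = 0.6734 * rho0 * c0      # specific resistance (Pa s/m)
w0 = 2 * np.pi * 205.5         # natural resonance angular frequency (rad/s)
Q_ms = 5.466                   # passive quality factor

def Z_ss(s):
    """Passive specific impedance of eq. (7); s is the Laplace variable."""
    return R_ss * (s**2 + s * w0 / Q_ms + w0**2) / (s * w0 / Q_ms)

f = np.linspace(50, 1000, 1000)        # frequency grid (Hz)
Z = Z_ss(2j * np.pi * f)

# At resonance the reactive terms cancel and |Z_ss| reaches its minimum, R_ss
f_res = f[np.argmin(np.abs(Z))]
print(f"|Z_ss| minimum {np.min(np.abs(Z)):.1f} Pa s/m near {f_res:.1f} Hz "
      f"(R_ss = {R_ss:.1f} Pa s/m)")
```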
Because an accurate model of the electrical impedance of the loudspeaker is complex to develop and to estimate [14,15], and because the electrical force applied on the membrane is directly proportional to the current flowing in the coil, as shown in equation (4), it is interesting to drive the loudspeaker using a current source rather than a voltage source, as has been done in [4]. In the following, the loudspeaker is driven in current. An implementation of such a current source is given in appendix A.

Figure 1: Controlled absorber. The two-input controller is depicted on the right in the dashed rectangle.
Formulation of the Two-Input Single-Output controller
Direct impedance control makes it possible to reach a desired target impedance Z st (s) on the membrane of the loudspeaker instead of the passive one, Z ss (s). A feedforward controller [4] measures the pressure in front of the membrane and relies on the model of the actuator to find the current to inject in the voice coil to get the appropriate membrane velocity such that the desired target impedance is met. It is therefore capable of reaching a wide range of target impedances. However, this also implies that an accurate model of the loudspeaker must be given to the controller, and that any inaccuracy in this model can have an important impact on the obtained results (i.e., the achieved impedance will deviate from the target one). Adding a feedback loop along with the feedforward architecture can help reduce this problem. To implement feedback on top of the feedforward architecture, a measure of the velocity of the membrane is needed in addition to the pressure in front of it. This can be achieved by sensing the pressure in the cavity closing the rear face of the actuator, because the pressure in it is proportional to the displacement of the membrane at low frequencies, as shown in equation (5).
It appears now that the controller has two inputs: the pressure in front of the membrane p f and the pressure behind it p b and has a single output: the current i injected in the moving coil of the loudspeaker. This output current can therefore be expressed as
$$i(s) = H_1(s)\, p_f(s) + H_2(s)\, p_b(s), \tag{8}$$
where both H 1 and H 2 are linear time-invariant systems. An illustration of such a controller is shown in Figure 1, and its detailed block diagram in Figure 2. In the latter, it is clearly visible that H 1 (s) is the feedforward part of the controller and H 2 (s) the feedback part.
In order to achieve a target impedance Z st (s), it follows from equations (4), (5) and (8) that H 1 and H 2 must satisfy the relation
$$H_1(s) + \frac{H_2(s)}{s\, C_{sb}\, Z_{st}(s)} = \frac{1}{F}\left(1 - \frac{Z_{ss}(s)}{Z_{st}(s)}\right). \tag{9}$$
There is an infinite number of realizations that satisfy equation (9), but feedback from the membrane velocity is desired. This feedback in velocity, G(s), is the combination of the controller H 2, the compliance of the enclosure and the force factor. Because the modelling of the box as a constant compliance is only valid for wavelengths larger than the dimensions of the box, G(s) should have a low-pass behaviour.
A first order low-pass filter is chosen for G(s) such that the controller is of the smallest degree possible:
$$G(s) = \frac{F\, H_2(s)}{s\, C_{sb}} = \rho_0 c_0\, k_g\, \frac{\omega_g}{s + \omega_g}, \tag{10}$$
where k g ≥ 0 is a dimensionless tuneable feedback gain and ω g is the cut-off angular frequency of the low-pass filter G(s). The two control transfer functions are thus
$$H_1(s) = \frac{1}{F}\left(1 - \frac{Z_{ss}(s) + G(s)}{Z_{st}(s)}\right) \tag{11}$$
and
$$H_2(s) = \frac{s\, C_{sb}\, G(s)}{F}. \tag{12}$$
In equations (11) and (12), it can be observed that the controller is proper, and that by setting G = 0, only H 1 (s) is left, which is equal to the state-of-the-art feedforward controller from [4] without any feedback. Furthermore, equations (11) and (12) can also be interpreted as the superposition of the pure feedforward implementation and a pure feedback implementation where the error between target velocity and achieved velocity is fed as a current to the loudspeaker with feedback gain G(s), as in [16].
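As a concrete illustration of equations (10)-(12), the sketch below assembles H 1 and H 2 on a frequency grid from the passive impedance, a single-resonator target (the 1 DOF case of Table 1) and the feedback filter G; the function names, the chosen target and the frequency grid are our own assumptions, not code from the authors.

```python
import numpy as np

rho0, c0 = 1.2, 343.0
R_ss, w0, Q_ms = 0.6734 * rho0 * c0, 2 * np.pi * 205.5, 5.466   # Table 3
F = 1.084e3        # pressure factor, 1.084 Pa/mA expressed in Pa/A
C_sb = 1.808e-6    # box specific compliance (m/Pa)

def Z_ss(s):       # passive specific impedance, eq. (7)
    return R_ss * (s**2 + s * w0 / Q_ms + w0**2) / (s * w0 / Q_ms)

def Z_st(s, R_st=rho0 * c0, w_t=2 * np.pi * 400.0, Q_t=7.0):
    # single-degree-of-freedom target impedance (1 DOF column of Table 1)
    return R_st * (s**2 + s * w_t / Q_t + w_t**2) / (s * w_t / Q_t)

def G(s, k_g=4.0, w_g=2 * np.pi * 500.0):   # feedback filter, eq. (10)
    return rho0 * c0 * k_g * w_g / (s + w_g)

def H1(s):         # feedforward branch, eq. (11)
    return (1.0 - (Z_ss(s) + G(s)) / Z_st(s)) / F

def H2(s):         # feedback branch, eq. (12)
    return s * C_sb * G(s) / F

s = 2j * np.pi * np.linspace(20, 2000, 2000)
i_per_pf, i_per_pb = H1(s), H2(s)           # controller gains in A/Pa
print(np.abs(i_per_pf).max(), np.abs(i_per_pb).max())
# Setting k_g = 0 makes G vanish, and H1 reduces to the pure feedforward law of [4].
```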
However, not any arbitrary impedance can be achieved: to avoid divergence of the control transfer function H 1 (s) for low and high frequencies, the asymptotes of the target impedance should behave as a compliance for low frequency, and a mass for high frequencies, as it is the case for the passive impedance. In this article, the considered target impedance is a multi-degree-of-freedom resonator, which is the result of N second order resonators connected in parallel, as used in [17]
$$Z_{st}(s) = \left(\sum_{n=1}^{N} \frac{1}{R_{st,n}}\,\frac{s\,\omega_{t,n}/Q_{t,n}}{s^2 + s\,\omega_{t,n}/Q_{t,n} + \omega_{t,n}^2}\right)^{-1}, \tag{13}$$
where R st,n, ω t,n and Q t,n are respectively the specific resistance, the resonance angular frequency and the quality factor of the n-th resonator. Different realizations of the target impedance could also be considered, but the following derivation will consider the form of equation (13) without loss of generality.

Proof of stability

A pole analysis of the feedback loop created by H 2 (s) is required to show the stability properties of the absorber. Each transfer function H 1 (s) and H 2 (s) is individually (open loop) proper and stable. There is one feed-forward loop, which is stable if its components are stable, and a feedback loop which is stable if the real part of all its poles is negative. These poles are the solutions of

$$\frac{1}{T(s)} = G(s) + Z_{ss}(s) = 0, \tag{14}$$
where T (s) is the closed loop transfer function between (1 − F H 1 )p f and v. This is equivalent to solving
$$s^3 + a s^2 + b s + c = 0, \tag{15}$$
where
$$a = \frac{\omega_0}{Q_{ms}} + \omega_g, \tag{16}$$
$$b = \omega_0^2 + \frac{\omega_0\,\omega_g}{Q_{ms}}\left(\frac{\rho_0 c_0 k_g}{R_{ss}} + 1\right) \tag{17}$$
and
$$c = \omega_0^2\,\omega_g, \tag{18}$$
and it is interesting to notice that equation (15) does not depend on the target impedance. The closed loop T (s) is stable if and only if the Hurwitz matrix
$$H = \begin{pmatrix} a & c & 0\\ 1 & b & 0\\ 0 & a & c \end{pmatrix} \tag{19}$$
corresponding to the polynomial of equation (15) has all its three leading principal minors which are positive [18]:
$$a > 0, \tag{20}$$
$$\begin{vmatrix} a & c\\ 1 & b \end{vmatrix} = ab - c > 0 \tag{21}$$
and
$$\begin{vmatrix} a & c & 0\\ 1 & b & 0\\ 0 & a & c \end{vmatrix} = c\,(ab - c) > 0. \tag{22}$$
This means that k g must satisfy
$$k_g > -\frac{R_{ss}}{\rho_0 c_0}\left(1 + \frac{Q_{ms}\,(\omega_0/\omega_g)^2}{Q_{ms} + \omega_0/\omega_g}\right), \tag{23}$$
which is always true for nonnegative values of k g .
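The stability argument above is easy to verify numerically; the sketch below computes the closed-loop poles of equation (15) and the Routh-Hurwitz quantities of equations (20)-(23) for the parameter values of Tables 1 and 3. It is our own illustration, with variable names chosen here.

```python
import numpy as np

rho0, c0 = 1.2, 343.0
R_ss, w0, Q_ms = 0.6734 * rho0 * c0, 2 * np.pi * 205.5, 5.466
k_g, w_g = 4.0, 2 * np.pi * 500.0

# Coefficients of the closed-loop characteristic polynomial, eqs. (16)-(18)
a = w0 / Q_ms + w_g
b = w0**2 + (w0 * w_g / Q_ms) * (rho0 * c0 * k_g / R_ss + 1.0)
c = w0**2 * w_g

poles = np.roots([1.0, a, b, c])
print("closed-loop poles:", poles)
print("all in the left half-plane:", np.all(poles.real < 0))

# Routh-Hurwitz conditions (20)-(22): a > 0 and a*b - c > 0 (c > 0 here by construction)
print("a > 0 and a*b - c > 0:", a > 0 and a * b - c > 0)

# Lower bound on k_g from eq. (23); any nonnegative k_g satisfies it
k_g_min = -(R_ss / (rho0 * c0)) * (1 + Q_ms * (w0 / w_g)**2 / (Q_ms + w0 / w_g))
print(f"k_g must exceed {k_g_min:.3f}")
```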
Sensitivity to parameter variations
To analyse the robustness of the proposed method to parameter estimation accuracy, the sensitivity functions of the achieved impedance are calculated. When the estimated values Ẑ ss, F̂ and Ĉ sb of the parameters Z ss, F and C sb respectively are used in the controller transfer functions from equations (11) and (12), the achieved impedance is
$$Z_{sa} = Z_{st}\,\frac{G(s)\,\hat C_{sb}/C_{sb} + Z_{ss}(s)\,\hat F/F}{G(s) + \hat Z_{ss}(s) + Z_{st}(s)\left(\hat F/F - 1\right)}. \tag{24}$$
The sensitivity function of this achieved impedance with respect to a parameter x is defined as the ratio of the percentage of change in the achieved impedance Z sa to the percentage of change in the parameter x [19]:
$$S_x(s) = \frac{\partial Z_{sa}}{\partial x}\,\frac{x}{Z_{sa}}, \tag{25}$$
which results in
$$S_{\hat Z_{ss}}(s) = -\left(1 + \frac{G + \left(\hat F/F - 1\right) Z_{st}}{\hat Z_{ss}}\right)^{-1}, \tag{26}$$
$$S_{\hat F}(s) = \left(1 + \frac{\hat C_{sb}\, F\, G}{C_{sb}\,\hat F\, Z_{ss}}\right)^{-1} - \left(1 + \frac{F\left(G + \hat Z_{ss} - Z_{st}\right)}{\hat F\, Z_{st}}\right)^{-1} \tag{27}$$
and
$$S_{\hat C_{sb}}(s) = \left(1 + \frac{C_{sb}\,\hat F\, Z_{ss}}{\hat C_{sb}\, F\, G}\right)^{-1}, \tag{28}$$
for parameters Ẑ ss, F̂ and Ĉ sb respectively. The limits when G(s) → ∞ of S_Ẑss(s), S_F̂(s) and S_Ĉsb(s) are respectively 0, 0 and 1. It can therefore be concluded that any variation in the estimates Ẑ ss and F̂ will be less significant when the magnitude of G(s) is larger. This is however not true for Ĉ sb, for which the error on the achieved impedance becomes proportional to the error in Ĉ sb when the magnitude of G(s) is large.
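To make the limits quoted above concrete, the following sketch evaluates the sensitivity functions (26)-(28) over frequency for the special case of perfect estimates (hatted and true quantities equal), using the single-degree-of-freedom target of Table 1; the compact lambda style and the parameter choices are ours.

```python
import numpy as np

rho0, c0 = 1.2, 343.0
R_ss, w0, Q_ms = 0.6734 * rho0 * c0, 2 * np.pi * 205.5, 5.466

Zr = lambda s, R, w, Q: R * (s**2 + s * w / Q + w**2) / (s * w / Q)
Z_ss = lambda s: Zr(s, R_ss, w0, Q_ms)                      # passive impedance, eq. (7)
Z_st = lambda s: Zr(s, rho0 * c0, 2 * np.pi * 400.0, 7.0)   # 1 DOF target, Table 1
G = lambda s, k_g=4.0, w_g=2 * np.pi * 500.0: rho0 * c0 * k_g * w_g / (s + w_g)

s = 2j * np.pi * np.linspace(50, 1000, 500)
# With perfect estimates the hatted and true quantities coincide,
# so eqs. (26)-(28) reduce to the expressions below.
S_Zss = -1.0 / (1.0 + G(s) / Z_ss(s))                                   # eq. (26)
S_F = 1.0 / (1.0 + G(s) / Z_ss(s)) \
      - 1.0 / (1.0 + (G(s) + Z_ss(s) - Z_st(s)) / Z_st(s))              # eq. (27)
S_Csb = 1.0 / (1.0 + Z_ss(s) / G(s))                                    # eq. (28)

# A larger |G| pushes |S_Zss| and |S_F| towards 0 but |S_Csb| towards 1
print(np.abs(S_Zss).max(), np.abs(S_F).max(), np.abs(S_Csb).max())
```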
Numerical sensitivity analysis
In this section, a numerical sensitivity analysis is presented for three different control targets: a single-degree-of-freedom resonator whose resonance is shifted with respect to the passive one, a broadband absorption centred at the passive resonance and a two-degree-of-freedom impedance with two distinct shifted resonances. The target impedances and the control parameters are defined according to equation (13) and are reported for each case in Table 1.

Figure 3: First and third quartiles of the achieved absorption for the single-degree-of-freedom absorber with 10^5 random relative errors of 5% standard deviation on the five estimated parameters.
The numerical sensitivity analysis consists in evaluating the achieved normal-incidence absorption coefficient α a 10^5 times, with random Gaussian deviations of 5% on the estimated parameters R̂ ss, ω̂ 0, Q̂ ms, F̂ and Ĉ sb. This absorption coefficient is defined as the ratio between absorbed and incident power. It lies between 0 and 1 for acoustically passive systems, whereas it is negative if the system is acoustically active (for which energy is injected in the acoustic domain instead of being absorbed). It is calculated from the achieved impedance Z sa (s) as
$$\alpha_a(s) = 1 - \left|\frac{Z_{sa}(s) - \rho_0 c_0}{Z_{sa}(s) + \rho_0 c_0}\right|^2, \tag{29}$$
where Z sa, the achieved impedance, is evaluated according to equation (24). At every simulated frequency, the values of the first and the third quartiles of the absorption coefficient are reported in Figure 3, Figure 4 and Figure 5 for each considered target. In these figures, it is observable that the absorption coefficient with only feedforward deviates further away from the target than with the mixed feedforward-feedback control. It can even reach negative values around the passive resonance of the actuator. With feedback, however, it is much better controlled around this resonance, but at the price of lower accuracy for other frequencies.
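The Monte Carlo experiment described here is straightforward to reproduce; the sketch below is our own NumPy illustration of it (2,000 draws instead of 10^5, and only the single-degree-of-freedom target of Table 1), not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
rho0, c0 = 1.2, 343.0
# Nominal (true) parameters of Table 3 and the 1 DOF target of Table 1
R_ss, w0, Q_ms = 0.6734 * rho0 * c0, 2 * np.pi * 205.5, 5.466
F, C_sb = 1.084e3, 1.808e-6
k_g, w_g = 4.0, 2 * np.pi * 500.0

Zr = lambda s, R, w, Q: R * (s**2 + s * w / Q + w**2) / (s * w / Q)
Z_st = lambda s: Zr(s, rho0 * c0, 2 * np.pi * 400.0, 7.0)
G = lambda s: rho0 * c0 * k_g * w_g / (s + w_g)

f = np.linspace(63, 1000, 300)
s = 2j * np.pi * f
alphas = []
for _ in range(2000):
    e = 1.0 + 0.05 * rng.standard_normal(5)      # 5 % relative Gaussian errors
    R_h, w_h, Q_h, F_h, C_h = R_ss * e[0], w0 * e[1], Q_ms * e[2], F * e[3], C_sb * e[4]
    Zss, Zss_h = Zr(s, R_ss, w0, Q_ms), Zr(s, R_h, w_h, Q_h)
    # Achieved impedance, eq. (24), with hatted (estimated) quantities in the controller
    Z_sa = Z_st(s) * (G(s) * C_h / C_sb + Zss * F_h / F) \
           / (G(s) + Zss_h + Z_st(s) * (F_h / F - 1.0))
    # Normal-incidence absorption coefficient, eq. (29)
    alphas.append(1.0 - np.abs((Z_sa - rho0 * c0) / (Z_sa + rho0 * c0))**2)

q1, q3 = np.percentile(np.array(alphas), [25, 75], axis=0)
print("worst first-quartile absorption:", q1.min())
```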
Although the feedback does not bring much improvement for the broadband absorption shown in Figure 4, it does for the two other cases. In an Ultra High Bypass Ratio aircraft engine application, the sound to absorb is typically tonal, and an absorber with multiple absorption peaks would be convenient [5,20]. Also, in this application, the optimal impedance would not be ρ 0 c 0 but rather consists of a given resistive part and a reactive part, as explained in [21], for which this new architecture can bring interesting improvements.
Experimental results
Experimental setup
The measurement setup used to experimentally assess this new control architecture is shown in Figure 6, and schematised in Figure 7. The two microphones used to control the electroacoustic absorber are connected to the field-programmable gate array (FPGA) controller through a signal conditioner. The digital filter running on the FPGA is the bilinear transform of equations (11) and (12) with a sampling frequency of 50 kHz. For better numerical stability, the digital filter is realized as a cascade of second-order sections [22]. The output voltage of the controller is converted into a current by a home-made voltage-controlled current source whose schematic is described in appendix A. A short study on the impact of the position of the rear microphone is available in appendix B. The achieved impedance presented by the absorber is measured using a Kundt's tube after ISO 10534-2 [23]. A multichannel frequency analyser feeds white noise to the amplified external source during 60 s (resulting in a sound pressure level up to 105 dB at the absorber position) while measuring the signals from the two measurement microphones p 1 and p 2. From the transfer function p 2 (s)/p 1 (s) and the waveguide dimensions ∆x and x 1, the reflection coefficient of the termination of the waveguide, and thus its impedance too, can be recovered [23]. The estimation of the transfer function is done with a linear averaging of 1 s length Hann windows overlapping by 66.67%, with a 1 Hz frequency resolution. All the hardware equipment used is listed in Table 2.
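For reference, the discretization described above (bilinear transform at 50 kHz, realized as second-order sections) can be reproduced offline along the following lines; the polynomial assembly of H 1 (s) for the single-degree-of-freedom target and all function names are our own choices, not the authors' FPGA code.

```python
import numpy as np
from scipy import signal

rho0, c0 = 1.2, 343.0
R_ss, w0, Q_ms = 0.6734 * rho0 * c0, 2 * np.pi * 205.5, 5.466   # Table 3
F = 1.084e3
R_t, w_t, Q_t = rho0 * c0, 2 * np.pi * 400.0, 7.0               # 1 DOF target, Table 1
k_g, w_g = 4.0, 2 * np.pi * 500.0
fs = 50e3                                                        # controller sampling rate

# Polynomial factors: Z_ss = R_ss*A/B, Z_st = R_t*At/Bt, G = g*w_g/C
A  = [1.0, w0 / Q_ms, w0**2];  B  = [w0 / Q_ms, 0.0]
At = [1.0, w_t / Q_t, w_t**2]; Bt = [w_t / Q_t, 0.0]
C  = [1.0, w_g];               g  = rho0 * c0 * k_g

# H1(s) = (1/F) * [1 - (Z_ss + G)/Z_st] written as num/den, eq. (11)
den = R_t * np.polymul(np.polymul(At, B), C)
num = np.polysub(np.polysub(den, R_ss * np.polymul(np.polymul(A, C), Bt)),
                 g * w_g * np.polymul(B, Bt)) / F

bz, az = signal.bilinear(num, den, fs=fs)    # bilinear (Tustin) transform of H1
sos = signal.tf2sos(bz, az)                  # cascade of second-order sections
print(sos)
```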
Transducer parameters identification
To implement the filters from equations (11) and (12), five parameters of the electrodynamic loudspeaker are needed: R ss, ω 0, Q ms, F and C sb. The estimates of the specific mass M ss = M ms/S d, the specific resistance R ss and the specific stiffness K sc = 1/(S d C mc) are obtained by a polynomial fit of the measured passive (i = 0) impedance curve:

$$\begin{pmatrix}\hat M_{ss}\\ \hat R_{ss}\\ \hat K_{sc}\end{pmatrix} = \begin{pmatrix}\mathbf{0} & \mathbf{1} & \mathbf{0}\\ \boldsymbol{\omega} & \mathbf{0} & -\operatorname{diag}(\boldsymbol{\omega})^{-1}\end{pmatrix}^{+} \begin{pmatrix}\Re\{Z_{ss}(j\boldsymbol{\omega})\}\\ \Im\{Z_{ss}(j\boldsymbol{\omega})\}\end{pmatrix}, \tag{30}$$
where the superscript + denotes the Moore-Penrose pseudo-inverse, ω is a vector containing the measured angular frequencies, Z ss (jω) is the measured specific impedance, ℜ and ℑ denote the real and imaginary parts, and 0 and 1 are vectors of zeros and ones of the same size as ω. The parameters ω 0 and Q ms are straightforward to derive from the result of equation (30). Then, F can be estimated as presented in [24], using the proportional controller i = K 1 p f:
$$\hat F = \frac{1}{N}\sum_{n=1}^{N}\frac{1 - Z_{ss}(j\omega_n)/Z_1(j\omega_n)}{K_1}, \tag{31}$$
where Z 1 (s) is the specific impedance measured with the constant feedforward controller of gain K 1 and ω n is the n th element of ω. Finally, the box specific compliance can be found using the proportional
controller i = K 2 p b:

$$\hat C_{sb} = \frac{1}{N}\sum_{n=1}^{N}\frac{\hat F\, K_2/(j\omega_n)}{Z_2(j\omega_n) - Z_{ss}(j\omega_n)}, \tag{32}$$
where Z 2 (s) is the specific impedance measured with the constant feedback controller of gain K 2 . All these measured parameters of the electrodynamic absorber are reported in Table 3. The frequency band considered in equations (30), (31) and (32) is from 170 Hz to 250 Hz with steps of 1 Hz. Note that these parameters describe the termination of the Kundt's tube. To get the loudspeaker parameters, they must be scaled by S d /S duct , where S duct is the cross section of the duct. However, this is not necessary if one is interested in controlling the impedance of the whole termination instead of only the loudspeaker. Indeed, using the cross section S duct instead of S d is equivalent to a scaling of v, and thus a scaling of the impedances and the box compliance. It therefore has no impact on the equations if all the measured impedances as well as the target one are considered with the same cross-section. It is also interesting to notice that the calibration of the two control microphones is not necessary. Indeed, in both equations (31) and (32) the errors in the microphone sensitivities are embedded in the estimation of F and C sb .
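The identification procedure of equations (30)-(32) can be prototyped as below; the sketch applies the formulas to synthetic measurements generated from the model itself, and the gains K 1 and K 2, as well as the variable names, are illustrative assumptions rather than the values used in the experiment.

```python
import numpy as np

rho0, c0 = 1.2, 343.0
# "True" values used to synthesise measurements (consistent with Table 3)
R_ss, w0, Q_ms = 0.6734 * rho0 * c0, 2 * np.pi * 205.5, 5.466
M_ss, K_sc = R_ss * Q_ms / w0, R_ss * Q_ms * w0
F, C_sb = 1.084e3, 1.808e-6
K1, K2 = 1e-4, 1e-4                      # proportional control gains (A/Pa), arbitrary

w = 2 * np.pi * np.arange(170.0, 251.0, 1.0)        # 170-250 Hz, 1 Hz steps
Zss = R_ss + 1j * (w * M_ss - K_sc / w)             # passive impedance (model)
Z1 = Zss / (1.0 - F * K1)                           # measured with i = K1*p_f
Z2 = Zss + F * K2 / (1j * w * C_sb)                 # measured with i = K2*p_b

# Eq. (30): stack real and imaginary parts and solve by pseudo-inverse
zeros, ones = np.zeros_like(w), np.ones_like(w)
M = np.block([[zeros[:, None], ones[:, None], zeros[:, None]],
              [w[:, None], zeros[:, None], -1.0 / w[:, None]]])
rhs = np.concatenate([Zss.real, Zss.imag])
M_hat, R_hat, K_hat = np.linalg.pinv(M) @ rhs

F_hat = np.mean((1.0 - Zss / Z1) / K1).real                  # eq. (31)
C_hat = np.mean((F_hat * K2 / (1j * w)) / (Z2 - Zss)).real   # eq. (32)
print(M_hat, R_hat, K_hat, F_hat, C_hat)
```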
Impedance measurements
Figure 9: Experimentally obtained absorption coefficients for the broadband absorber, with F̂ = 0.95F.

The three considered target impedances are described by the parameters from Table 1. To highlight the advantage of the mixed feedforward-feedback controller, a 5% error was purposely included in the model of the loudspeaker needed to build the controller transfer functions, such that F̂ = 0.95F. In Figure 8, Figure 9 and Figure 10, the passive, the target and the achieved absorption coefficients with and without the feedback contribution are drawn. Like for the numerical study, it is observed that the passive resonant behaviour is still present in the achieved impedances without feedback, reaching in some cases a negative value of absorption and adding a degree of freedom to the achieved impedance. The mixed feedforward-feedback controller is capable of overcoming this issue, truly behaves as the target and is more accurate, especially around the passive resonance of the loudspeaker. Note that the lack of precision at lower frequencies (i.e., lower than 100 Hz) for both controllers is inherent to the Kundt's tube measurement. Indeed, the termination reflection coefficient Γ(s) is given in [23] as
$$\Gamma(s) = \frac{H_{12}(s) - e^{-jk\Delta x}}{e^{jk\Delta x} - H_{12}(s)}\; e^{-2jkx_1}, \tag{33}$$
where H 12 (s) = p 2 (s)/p 1 (s) is the transfer function between the two measurement microphones, k is the wave number and ∆x and x 1 are dimensions visible in Figure 7. When the frequency tends to zero, equation (33) becomes ill conditioned because both H 12 (s) and e ±jk∆x tend to one. Equation (33) is therefore very sensitive to the measurement errors in H 12 for low frequencies.
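As an illustration of equation (33), the helper below converts a transfer function H 12 into a reflection coefficient, impedance and absorption estimate; it is our own sketch (checked here on a synthetic H 12 built by inverting the same formula), and the impedance step assumes the usual normal-incidence relation Z = ρ 0 c 0 (1 + Γ)/(1 − Γ).

```python
import numpy as np

rho0, c0 = 1.2, 343.0
dx, x1 = 0.100, 0.420            # microphone spacing and distance x1 (Table 2)

def reflection(H12, f):
    """Reflection coefficient of the termination from eq. (33)."""
    k = 2 * np.pi * f / c0
    return (H12 - np.exp(-1j * k * dx)) / (np.exp(1j * k * dx) - H12) * np.exp(-2j * k * x1)

def impedance_and_absorption(H12, f):
    G = reflection(H12, f)
    Z = rho0 * c0 * (1 + G) / (1 - G)   # normal-incidence surface impedance (assumption)
    alpha = 1 - np.abs(G)**2            # absorption coefficient, cf. eq. (29)
    return Z, alpha

# Synthetic round-trip check: invert eq. (33) for a chosen Gamma and recover it
f = np.linspace(100, 1000, 10)
k = 2 * np.pi * f / c0
G_true = 0.5 * np.exp(0.3j)                          # arbitrary reflection coefficient
Gp = G_true * np.exp(2j * k * x1)
H12 = (np.exp(-1j * k * dx) + Gp * np.exp(1j * k * dx)) / (1 + Gp)
assert np.allclose(reflection(H12, f), G_true)
print(impedance_and_absorption(H12, f)[1])
```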
Conclusions
This article presented a new direct impedance control architecture providing a more accurate and robust control on the actual impedance than previously reported in the literature. The concept of mixed feedforward-feedback control is based on an already existing feedforward implementation, but to achieve a better accuracy, it is combined with a feedback loop that relies on the sensing of the displacement of the actuator to adjust the driving current. Displacement sensing is done through a microphone placed in the enclosure of the loudspeaker, effective at low frequencies. Even if it is not a noticeable improvement for broadband absorption, as targeted by the feedforward architecture [4], it does significantly improve the passivity, and thus the stability, of a multi-degree-of-freedom absorber, as formerly used in aircraft engine noise reduction applications. Additionally, in such an environment, the estimated parameters of the absorber might change significantly with the static pressure, surrounding temperature or humidity.
With the feedback contribution, the sensitivity to errors is lowered, making the absorber better adapted to drifting parameters. This design could be further improved, typically by investigating different relations between H 1 and H 2 in equation (9). Also, a more sophisticated model of the relationship between the membrane velocity and the pressure in the cavity could be considered to extend the feedback contribution to higher frequencies or larger loudspeaker enclosures. For this, a more elaborate fit should be used in equation (32) rather than a constant real value. Furthermore, the mixed feedforward-feedback control could also be used to linearize actuators at high sound pressure levels, at which their stiffness is no longer linear and typically depends on the membrane position.
A Current source
The voltage controlled current source used to drive the loudspeaker for the experimental measurements is depicted in Figure 11 and is inspired from the application report [25]. The chosen operational amplifier is a TL288CP from Texas Instruments. The output current can be shown to be
$$i_{out} = v_{in}\,\frac{R_3 R_4 + R_2 (R_4 + R_5)}{(R_1 + R_4)\, R_2\, R_5} + v_{out}\,\frac{R_1 R_3 - R_2 (R_4 + R_5)}{(R_1 + R_4)\, R_2\, R_5}. \tag{34}$$
Figure 11: Voltage controlled current source schematic. R 1 = R 2 = 92 kΩ, R 3 = R 4 = 1.1 kΩ and R 5 = 1.2 Ω.

When R 1 = R 2 and R 3 = R 4 + R 5, it simplifies to a proportional relation between input voltage and output current, regardless of the load impedance Z L:
$$i_{out} = v_{in}\,\frac{R_3}{R_1 R_5}. \tag{35}$$
With the values from Figure 11, a suitable voltage controlled current source for driving a loudspeaker is obtained:
$$i_{out} = v_{in}\cdot 9.97\ \mathrm{mA\,V^{-1}} - v_{out}\cdot 10.7\ \mathrm{\mu A\,V^{-1}}. \tag{36}$$
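The transconductance values of equation (36) can be checked directly from the component values of Figure 11 with equation (34); the short script below performs only that arithmetic.

```python
# Quick check of eq. (36) from the component values of Figure 11, using eq. (34)
R1 = R2 = 92e3
R3 = R4 = 1.1e3
R5 = 1.2

den = (R1 + R4) * R2 * R5
gm_in = (R3 * R4 + R2 * (R4 + R5)) / den       # i_out per volt of v_in
gm_out = (R1 * R3 - R2 * (R4 + R5)) / den      # i_out per volt of v_out

print(f"{gm_in * 1e3:.2f} mA/V of v_in, {gm_out * 1e6:.1f} uA/V of v_out")
# Expected from eq. (36): about 9.97 mA/V and -10.7 uA/V
```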
The current delivered by the operational amplifier is
$$i_{oa} = i_{out}\left(\frac{R_3 - R_5}{R_3}\,\frac{R_1 + R_3 + R_5}{R_1 + R_3 - R_5} + \frac{2 Z_L}{R_1 + R_3 - R_5}\right), \tag{37}$$
which is approximately the output current of the current pump since R 5 and Z L are both much smaller than R 3. For the 2 DOF case from Table 1, the highest current is required when all the incident pressure is concentrated at 100.8 Hz. The maximal output current for the TL288CP is 80 mA, which is reached when the incident pressure is 117 dB SPL at 100.8 Hz.
B Microphone position in the cavity
For wavelengths much larger than the dimensions of the enclosure of the loudspeaker, the pressure in the cavity is proportional to the displacement of the membrane. However, as the frequency increases, the model of the box becomes worse, and cavity modes appear. The position of the microphone in the cavity can help mitigate this effect. Frequency-domain simulations have been conducted using the finite element simulation software COMSOL Multiphysics to find an optimal microphone position. The obtained relationships from the membrane displacement to the pressure at the position of the microphone, p b/ξ, are reported in Figure 12 for the two geometries shown in Figure 13. In this graph, it is visible that the first cavity mode happens at 2.2 kHz. The microphone at position 1 has the flattest response up to this frequency and is therefore chosen in the experimental absorber prototype. However, to avoid instabilities at high frequencies, some melamine foam was added in the enclosure, which damps higher frequencies and removes the undesired spikes.
Figure 2: Block diagram of the mixed feedforward-feedback controlled absorber.

Figure 4: First and third quartiles of the achieved absorption for the broadband absorber with 10^5 random relative errors of 5% standard deviation on the five estimated parameters.

Figure 5: First and third quartiles of the achieved absorption for the two-degree-of-freedom absorber with 10^5 random relative errors of 5% standard deviation on the five estimated parameters.

Figure 6: Experimental setup used to measure the impedance presented by the absorber. 1) Electroacoustic resonator 2) measurement microphones 3) frequency analyser 4) power amplifier 5) sound source 6) IEPE signal conditioner 7) FPGA controller.

Figure 7: Schematic of the experimental setup used to measure the impedance presented by the absorber.

Figure 8: Experimentally obtained absorption coefficients for the single-degree-of-freedom absorber, with F̂ = 0.95F.

Figure 10: Experimentally obtained absorption coefficients for the two-degree-of-freedom absorber, with F̂ = 0.95F.

Figure 12: Simulated transfer function between rear microphone pressure and membrane displacement, C_sb(s) = ξ(s)/p_b.

Figure 13: Simulated geometry, with the two microphone positions. The membrane is drawn with a thin line and the magnet is hatched. Units in mm.
Table 1: Target impedances and control parameters for the three considered configurations

Parameter                    Symbol       1 DOF     Broadband   2 DOF
Specific resistance          R st         ρ 0 c 0   ρ 0 c 0     ρ 0 c 0 and ρ 0 c 0
Resonance frequency          ω t/(2π)     400 Hz    200 Hz      100 Hz and 400 Hz
Quality factor               Q t          7         0.25        7 and 7
Feedback gain                k g          4         4           4
Feedback cut-off frequency   ω g/(2π)     500 Hz    500 Hz      500 Hz
[Residual figure axis labels: frequency (Hz), 63 to 1,000; absorption coefficient, 0 to 1; legend entries: Target, Passive, Mixed (k g = 4), Feedforward (k g = 0).]
Table 2: Experimental setup equipment list

Equipment                 Model
Microphone type           PCB 130D20
IEPE signal conditioner   MMF M31
FPGA controller           Speedgoat IO334
Frequency analyser        Brüel & Kjaer type 3160
Power amplifier           Brüel & Kjaer type 2706
Waveguide dimensions      ∆x: 100 mm, x 1: 420 mm, L: 970 mm, ∅: 72 mm
Table 3: Measured Thiele-Small parameters of the Monacor SPX-30M loudspeaker mounted on a cabinet

Parameter              Symbol      Value
Specific resistance    R ss        0.6734 ρ 0 c 0
Resonant frequency     ω 0/(2π)    205.5 Hz
Mechanical Q factor    Q ms        5.466
Box spec. compliance   C sb        1.808 µm Pa⁻¹
Pressure factor        F           1.084 Pa mA⁻¹
Density of air         ρ 0         1.2 kg m⁻³
Speed of sound         c 0         343 m s⁻¹
Acknowledgment

This project has received funding from the Clean Sky 2 Joint Undertaking under the European Union's Horizon 2020 research and innovation program under grant agreement No 821093. This publication reflects only the authors' view, and the JU is not responsible for any use that may be made of the information it contains.
References

[1] H. F. Olson and E. G. May, "Electronic sound absorber," The Journal of the Acoustical Society of America, vol. 25, pp. 1130-1136, Nov. 1953. DOI: 10.1121/1.1907249.
[2] A. J. Fleming, D. Niederberger, S. O. R. Moheimani, and M. Morari, "Control of resonant acoustic sound fields by electrical shunting of a loudspeaker," IEEE Transactions on Control Systems Technology, vol. 15, pp. 689-703, July 2007. DOI: 10.1109/TCST.2006.890276.
[3] R. Boulandet, E. Rivet, and H. Lissek, "Sensorless electroacoustic absorbers through synthesized impedance control for damping low-frequency modes in cavities," Acta Acustica united with Acustica, vol. 102, pp. 696-704, July 2016. DOI: 10.3813/AAA.918986.
[4] E. Rivet, S. Karkar, and H. Lissek, "Broadband low-frequency electroacoustic absorbers through hybrid sensor-/shunt-based impedance control," IEEE Transactions on Control Systems Technology, vol. 25, pp. 63-72, Jan. 2017. DOI: 10.1109/TCST.2016.2547981.
[5] R. Boulandet, H. Lissek, S. Karkar, M. Collet, G. Matten, M. Ouisse, and M. Versaevel, "Duct modes damping through an adjustable electroacoustic liner under grazing incidence," Journal of Sound and Vibration, vol. 426, pp. 19-33, Apr. 2018. DOI: 10.1016/j.jsv.2018.04.009.
[6] M.-A. Galland, B. Mazeaud, and N. Sellen, "Hybrid passive/active absorbers for flow ducts," Applied Acoustics, vol. 66, pp. 691-708, June 2005. DOI: 10.1016/j.apacoust.2004.09.007.
[7] T. J. Cox and P. D'Antonio, Acoustic Absorbers and Diffusers. Spon Press, 2004.
[8] F. Orduña-Bustamante and P. Nelson, "An adaptive controller for the active absorption of sound," The Journal of the Acoustical Society of America, vol. 91, pp. 2740-2747, May 1992. DOI: 10.1121/1.403779.
[9] X. Meynial and H. Lissek, "Active reflectors for room acoustics," Proc. of the Institute of Acoustics, vol. 21, no. 6, 1999.
[10] E. De Bono, M. Collet, G. Matten, S. Karkar, H. Lissek, M. Ouisse, K. Billon, T. Laurence, and M. Volery, "Effect of time delay on the impedance control of a pressure-based, current-driven electroacoustic absorber," Journal of Sound and Vibration, vol. 537, p. 117201, Oct. 2022. DOI: 10.1016/j.jsv.2022.117201.
[11] X. Guo, R. Fleury, and H. Lissek, "Improving sound absorption through nonlinear active electroacoustic resonators," Physical Review Applied, vol. 13, Jan. 2020. DOI: 10.1103/PhysRevApplied.13.014018.
[12] X. Guo, M. Volery, and H. Lissek, "PID-like active impedance control for electroacoustic resonators to design tunable single-degree-of-freedom sound absorbers," Journal of Sound and Vibration, vol. 525, p. 116784, 2022. DOI: 10.1016/j.jsv.2022.116784.
[13] M. Rossi, Audio. Presses Polytechniques et Universitaires Romandes, 2007.
[14] W. M. Leach, Jr., "Loudspeaker voice-coil inductance losses: circuit models, parameter estimation, and effect on frequency response," Journal of the Audio Engineering Society, vol. 50, pp. 442-450, June 2002.
[15] K. Thorborg and A. D. Unruh, "Electrical equivalent circuit model for dynamic moving-coil transducers incorporating a semi-inductor," Journal of the Audio Engineering Society, vol. 56, pp. 696-709, Sept. 2008.
[16] M. Volery and H. Lissek, "Achieving direct acoustic impedance control with only two microphones," in e-Forum Acusticum, Dec. 2020. DOI: 10.48465/fa.2020.0586.
[17] E. Rivet, S. Karkar, and H. Lissek, "On the optimisation of multi-degree-of-freedom acoustic impedances of low-frequency electroacoustic absorbers for room modal equalisation," Acta Acustica united with Acustica, vol. 103, pp. 1025-1036, Nov. 2017. DOI: 10.3813/AAA.919132.
[18] A. Hurwitz, "Ueber die Bedingungen, unter welchen eine Gleichung nur Wurzeln mit negativen reellen Theilen besitzt," Mathematische Annalen, vol. 46, pp. 273-284, June 1895. DOI: 10.1007/bf01446812.
[19] S. Shinners, Modern Control System Theory and Design, 2nd ed. John Wiley & Sons, May 1998.
[20] E. Salze, A. Pereira, P. Souchotte, J. Regnard, F. Gea-Aguilera, and M. Gruber, "New modular fan rig for advanced aeroacoustic tests - acoustic characterization of the facility," in 25th AIAA/CEAS Aeroacoustics Conference, May 2019. DOI: 10.2514/6.2019-2603.
[21] B. Tester, "The propagation and attenuation of sound in lined ducts containing uniform or 'plug' flow," Journal of Sound and Vibration, vol. 28, no. 2, pp. 151-203, 1973. DOI: 10.1016/S0022-460X(73)80102-6.
[22] S. K. Mitra, Digital Signal Processing: A Computer-Based Approach. McGraw-Hill, 1998.
[23] International Organization for Standardization, Geneva, CH, Determination of sound absorption coefficient and impedance in impedance tubes - Part 2: Transfer-function method, 1998. ISO 10534-2:1998.
[24] E. De Bono, Electro-active boundary control for noise mitigation: local and advective strategies. PhD thesis, Ecole Centrale de Lyon, 2021. 2021LYSEC024.
[25] Texas Instruments, A comprehensive study of the Howland current pump, Jan. 2008. Application report.
Fast accumulation of ions in a dual trap

M. R. Kamsap, C. Champenois, J. Pedregosa-Gutierrez, M. Houssin, M. Knoop
UMR 7345, Aix Marseille Université, CNRS, PIIM, 13397 Marseille, France

18 May 2015 (Dated: 19 May 2015), arXiv:1505.04622v1 [physics.atom-ph]
DOI: 10.1209/0295-5075/110/63002, Corpus ID: 41784293
PDF: https://arxiv.org/pdf/1505.04622v1.pdf

Transporting charged particles between different traps has become an important feature in high-precision spectroscopy experiments of different types. In many experiments in atomic and molecular physics, the optical probing of the ions is not carried out at the same location as the creation or state preparation. In our double linear radio-frequency trap, we have implemented a fast protocol allowing large ion clouds to be shuttled very efficiently between traps, in times shorter than a millisecond. Moreover, our shuttling protocol is a one-way process, allowing ions to be added to an existing cloud without loss of the already trapped sample. This feature makes accumulation possible, resulting in the creation of large ion clouds. Experimental results show that ion clouds of large size are reached with laser cooling; however, the described mechanism does not rely on any cooling process.
Due to their extremely long storage times and their versatility, radio-frequency (rf) traps are a popular tool for many high-resolution experiments, from quantum information to cold chemistry. They allow the manipulation and interrogation of ion ensembles from a few units to a million particles, with the possibility to optimise trapping geometries and cloud sizes. Transporting ions has always been an important ingredient in chemistry or mass spectrometry, where ions are created in external sources and have to be brought to and accumulated in a trap [1,2]. This issue is even more important if the production rate of the ions is low, for example in the case of rare or exotic ions. In this case, the accumulation of ions in the trap is another key element for the optimisation of the signal-to-noise ratio. The transfer of ions is central in microwave frequency-metrology experiments, in order to use two separated trapping zones for state preparation and probing of the ions [3]. Shuttling also gains in importance in quantum information, where scalable architectures require the possibility to move ions from one site to another [4].
A major issue during transport is the heating of the transferred atoms or molecules, and as a consequence the large majority of applied protocols uses a cooling mechanism and/or a tailored protocol to limit heating, and therefore to reduce perturbation and loss of the sample.
We have recently applied a generic transport protocol, which we have adapted from single-ion translation to the shuttling of a large ion cloud in a macroscopic set-up, and we could demonstrate transfer efficiencies up to 100% [5]. The single-ion transport in micro-traps for quantum information processing has a particular experimental frame: transport distances are very short (a few 100 µm), and the objective is to keep the ion(s) in the vibrational ground state. By using many compensation electrodes [6], the transport is designed as a translation of the harmonic potential well (see for example [7]). Our experimental set-up is different: we aim to shuttle clouds comprising a few thousand ions over a distance of 23 mm, by using a total of three DC-electrodes. Despite the differences in size and design, we could adapt protocols tailored for single ions to the macroscopic set-up [5]. For ion clouds we observed an extra effect which plays the role of a one-way valve, allowing ions to be shuttled from a first trap to a second trap while forbidding the return of ions already in the second trap. In the present letter, we demonstrate how this "asymmetry" in the working conditions can be exploited to accumulate ions in an ion trap in order to create very large ion clouds. The described process can be optimised by choosing an appropriate set of parameters for the transport between two traps. The physical process involved does not require any cooling mechanism during or after the shuttling process to assure no loss from the ions already accumulated in the trap. It can then be used to build mixed-species clouds, and finds interest in applications in physical chemistry, frequency metrology and antimatter trapping.
ONE-WAY TRANSPORT OF ION CLOUDS
The described experiments are carried out in a double linear rf trap composed of two linear quadrupole traps of radius r 0 = 3.9 mm and length l = 21 mm, aligned along a common z-axis and sharing the same, non-segmented rf electrodes. The ion traps are separated by a 2 mm-wide central electrode, where a DC potential barrier is applied. Identical electrodes also close both traps at each end [8], as shown in figure 1. In order to transfer ions from the first trap to the second trap, the axial potential minimum of the trap follows a time-varying transport function, z 0 (t), which translates into a temporal variation of the voltage on the central electrode, V 2 (t). It has been shown earlier that the analytic form of this function is a key issue in transport [9][10][11]. Details of the choice of this function for the transport of larger clouds are described in [12].
Various applications have different experimental constraints. In our set-up, ensembles of atomic ions (Ca+) are created by photoionization and laser-cooled. The fluorescence of the photons scattered in the laser-cooling process is recorded by a photomultiplier (PM) in photon-counting mode and an intensified CCD. The detection module is mounted on a slide, and ion clouds can be monitored in every trap by a translation of this slide. Collection efficiencies are identical, and the magnification of the optical set-up changes from 13.2 to 12.9 between both traps. Monitoring of the ion number is a central element for the evaluation of the transport efficiencies. In the case of larger ion ensembles, the photon-counting signal from a PM is not a reliable measure, as it will vary as a function of the cloud's temperature for fixed laser frequencies. In order to be able at any time to enumerate the number of particles in the cloud with precision, we have developed a protocol guaranteeing higher accuracy and reproducibility.
An ion cloud containing more than a few hundred ions can be described by the cold charged fluid model developed for non-neutral plasmas [13]. If the ion cloud is cold and dense enough, as in the liquid phase, the Debye length is small compared to the size of the cloud and its density can be considered uniform over the whole sample [14,15]. We use this property, and the demonstrated relation between the density and the trapping parameters, to infer ion numbers from the cloud image size. Other characteristics can be deduced from this model, such as the aspect ratio of the ion clouds [16], which has been confirmed experimentally by Hornekaer et al. [17].
The cloud size can be determined with precision from the CCD images when the ions are laser-cooled to low temperatures, undergoing a transition from a thermal gas to a correlated (liquid) state. The recorded images then show an ellipse with a sharp contour edge. An automated fit procedure measures both axes of the observed ellipse with high precision, resulting in an error bar on the relative ion number (comparison between two clouds in the same trapping conditions) lower than 2%, and reaching 5% for the estimation of the absolute number. The procedure is fast and, as it relies on fitting the contour of the cloud, it can still be applied when one dimension of the ion cloud is larger than the observation zone. With the optical magnification chosen in our set-up, we can quantify cloud sizes up to a few 10^5 ions with precision.
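As an illustration of how the ion number follows from the image, the sketch below combines the fitted semi-axes of the ellipsoidal cloud with the uniform density of the cold-fluid model (the standard relation n = ε0·m·(2ω_r² + ω_z²)/q² for a cylindrically symmetric trap, in the spirit of [13-15]). The ion mass (40Ca+), trap frequencies and cloud dimensions used here are illustrative placeholders, not the experimental values.

```python
import numpy as np

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
QE   = 1.602176634e-19       # elementary charge, C
M_CA = 40 * 1.66053907e-27   # kg, assuming 40Ca+

def cold_fluid_density(omega_r, omega_z):
    """Uniform density of a cold, dense ion cloud (cold charged fluid model),
    from the radial and axial single-particle angular frequencies."""
    return EPS0 * M_CA * (2 * omega_r**2 + omega_z**2) / QE**2

def ion_number(a_radial, a_axial, omega_r, omega_z):
    """Ion number from the two fitted semi-axes (in metres) of the
    ellipsoidal cloud image, assuming uniform density."""
    volume = 4.0 / 3.0 * np.pi * a_radial**2 * a_axial
    return cold_fluid_density(omega_r, omega_z) * volume

# Illustrative numbers only: a cloud with 0.2 mm radial and 1 mm axial semi-axes
omega_r = 2 * np.pi * 150e3   # rad/s
omega_z = 2 * np.pi * 30e3    # rad/s
print(f"N ~ {ion_number(0.2e-3, 1.0e-3, omega_r, omega_z):.0f} ions")
```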
Cooling to the correlated phase is performed before and after every transport; during the transport the cloud can be in the gas or liquid state. Throughout the entire experiment, the applied transport function is a variation of hyperbolic-tangent shape [12]. For bandwidth reasons, the shortest variation applied to the DC electrodes is 80 µs. Figure 2 shows in red the fraction of ions still present in trap I after transport, as a function of the duration of the transport function applied to the central DC electrode. On the same graph, the blue curve shows the results obtained under identical conditions in trap II. Both curves oscillate between 0 and 1, but the durations giving maximum transport differ depending on which trap is used as the starting point.
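For reference, a minimal sketch of such a hyperbolic-tangent transport function is given below. The exact parametrization used in the experiment is that of [12]; the steepness value, the 23 mm travel and the omission of the mapping onto the electrode voltage V2(t) are simplifications made only for illustration.

```python
import numpy as np

def transport_position(t, duration, z_start=0.0, z_end=23e-3, steepness=5.0):
    """Hyperbolic-tangent ramp of the axial potential minimum z0(t)
    from z_start to z_end over 'duration' seconds (generic tanh profile)."""
    s = np.tanh(steepness * (2.0 * t / duration - 1.0))
    s0, s1 = np.tanh(-steepness), np.tanh(steepness)
    frac = (s - s0) / (s1 - s0)              # normalized from 0 to 1
    return z_start + frac * (z_end - z_start)

# Example: a 300 microsecond transport sampled on 1000 points
t = np.linspace(0.0, 300e-6, 1000)
z0 = transport_position(t, duration=300e-6)
```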
The large-amplitude variations of the curves in Figure 2 can be explained by the dynamics of the ion cloud. For a transport duration of 100 µs starting from trap I, the complete cloud leaves for trap II. At 180 µs, even though the transport protocol has been applied, 100% of the cloud is found at its original location. We have checked experimentally (by opening trap II during the transport) and with numerical simulations that this corresponds to a situation where the cloud has left trap I during the transport but was reflected within the first millimetres of trap II and came back to trap I. This behaviour can be reproduced numerically by small deviations of the trapping field from the ideal case, and we assume that this is the experimental cause. For longer transport times (> 1 ms), only part of the cloud comes back to trap I, as the spatial spreading of the cloud starts to play a role. By contrast, for long transport durations all ions are found in trap II when starting from trap II. This difference for long transport durations, as well as the fact that the red and blue curves in Figure 2 do not exactly overlap, is attributed to an asymmetry between the two traps caused by the ion creation process. Both traps have identical size and trapping parameters, but the slight calcium deposit left by the calcium beam in trap I is responsible for a contact potential estimated at 40 mV between opposite rods. In our numerical simulations, we could show that such an asymmetry in transfer efficiency can be introduced in ideal traps by adding a small local voltage to one of the rods of one of the traps. Experimentally, for a given rf trapping voltage, the described oscillations in transfer probability can be displaced along the transport-duration axis by varying the trapping DC potential (see [5] for details).
ACCUMULATION OF IONS
We have used the non-coinciding oscillations of the transfer probability illustrated in Figure 2 to implement a "no-return" transport protocol which adds ions from trap I to trap II without losing the cloud already trapped in trap II. By choosing a transport duration for which the protocol gives a very different result depending on the initial trap, we can create a situation where ions are accumulated in one trapping zone. For the conditions of Figure 2, this corresponds for example to durations of 300, 550, and 780 µs. All ions then leave trap I, and no ion leaves trap II.
The resulting accumulation of ions is illustrated in Figure 3 for various transported cloud sizes and accumulated cloud states. The graph reports the ion number in trap II as a function of the transport cycle number for a fixed transport duration. We call a transport cycle the creation of a cloud in trap I followed by its transport to trap II. Depending on the experimental parameters, we have created ion clouds of a few thousand ions in trap I. The reproducibility of the creation for a given set of conditions is better than 4%. The cloud is transferred to trap II and then measured with the protocol described above. After this, a new cloud is created in trap I for the next transport cycle. Figure 3 shows how the cloud in trap II builds up as a function of the number of transport cycles. We have tested whether the state of the initial cloud in trap I has an influence on the accumulation. Figure 3 illustrates accumulation for different starting configurations: just before transport, the cloud in trap I can be in the gas phase or in the liquid phase. The clouds in trap I and trap II see the same cooling laser beams and trapping potential, so we can assume that the cloud in trap I is in the same state as the accumulated cloud in trap II (if the difference in size is not too large).
The growth of the ion cloud in trap II is linear for at least 10 cycles. We then observe a saturation of the cloud size which depends on the cooling laser power and on the rf and DC trapping potentials. Higher laser power and a lower trapping potential allow larger clouds to be reached, consistent with a limitation set by the temperature of the accumulated cloud. We apply collimated cooling laser beams at 397 nm with a typical diameter of 2.5 mm and a power of 3 mW, and we varied the trapping conditions up to a Mathieu parameter of qx = 0.10.
This experiment demonstrates a genuine accumulation process, meaning that ions can be added to an already existing cloud in a trap. This is important for all experiments where ions are rare or difficult to produce, since it allows ion clouds to be grown in a pulsed regime by adding particles to an existing cloud. To our knowledge, the large majority of existing experiments uses the term accumulation in the sense of a variable integration time during the initial creation or loading of a trap, which describes a different process.
ACCUMULATION WITHOUT COOLING
Being able to operate without a cooling process is an advantage for many experiments. We have therefore carried out a sequence of transports of small ion clouds without laser cooling. The reported experiments are performed under ultra-high vacuum at pressures below 3·10⁻⁹, in the absence of buffer-gas cooling (or any other cooling mechanism). Results are plotted in Figure 4. As before, they show the cumulated ion number in trap II as a function of the number of transfer cycles, although the cloud sizes are smaller than in the laser-cooled case. For these small clouds we also observe a linear increase of the ion number in trap II, showing that the transport and accumulation mechanism is efficient even without a cooling process. This is an important step for transporting charged particles, as the vast majority of experiments relies on buffer-gas or laser cooling during or after the transport in order to damp the transport-induced heating.
OUTLOOK
We have demonstrated accumulation of ions in a dual linear trap, with and without laser cooling. The method relies on an asymmetry between the two traps, which leads to different timing of the forward and return transfer efficiencies. Accumulation of ions in a trap is a key element in many experiments working with ion clouds of different sizes, in particular where the rate of ion creation is low. The described mechanism can also be adapted to a multi-species scenario, allowing different types of charged atoms or molecules to be loaded into the same trap. Such ensembles are gaining importance in interaction studies, and are employed in particular in sympathetic cooling and quantum logic protocols [18].
Figure 1. Sketch of the ion trap set-up (upper part) and resulting DC potentials (lower part).
Figure 2. Fraction of residual ions in the initial trap when the destination trap is empty, as a function of the duration of the transport function and of the initial trap: trap I (red) or trap II (blue). Lines are shown to guide the eye.
Figure 3. Number of ions in trap II as a function of the number of transport cycles. The accumulated cloud in trap II is in the gas phase (blue dots and red diamonds) or in the liquid phase (black squares). The transported clouds have different sizes: creation time of 25 s (blue dots) or 15 s (red diamonds and black squares).
Figure 4. Transport of ions without any cooling process. The graph shows the cumulated ion number in trap II as a function of the transfer cycle. Small ion clouds are used, approximately 900 ions (black triangles) and 1300 ions (red dots).
This research has been carried out under contract no. 116279 with the French space agency (CNES) and contract ANR-08-JCJC-0053-01 from the Agence Nationale de la Recherche; MRK acknowledges financial support from CNES and Région Provence-Alpes-Côte d'Azur.
[1] M. W. Senko, C. L. Hendrickson, M. R. Emmett, S. D.-H. Shi, and A. G. Marshall, J. Am. Soc. Mass Spectrom. 8, 970 (1997), doi:10.1016/S1044-0305(97)00126-8.
[2] F. Herfurth, Nucl. Instrum. Methods Phys. Res. (14th International Conference on Electromagnetic Isotope Separators and Techniques Related to their Applications), doi:10.1016/S0168-583X(02)02135-3.
[3] J. Prestage and G. Weaver, Proceedings of the IEEE 95, 2235 (2007).
[4] D. Kielpinski, C. Monroe, and D. Wineland, Nature 417, 709 (2002).
[5] M. R. Kamsap, C. Champenois, J. Pedregosa, M. Houssin, and M. Knoop (submitted, 2015).
[6] K. Wright, J. M. Amini, D. L. Faircloth, C. Volin, S. Charles Doret, H. Hayden, C.-S. Pai, D. W. Landgren, D. Denison, T. Killian, R. E. Slusher, and A. W. Harter, New J. Phys. 15, 033004 (2013), doi:10.1088/1367-2630/15/3/033004.
[7] M. Palmero, E. Torrontegui, D. Guéry-Odelin, and J. G. Muga, Phys. Rev. A 88, 053423 (2013), doi:10.1103/PhysRevA.88.053423.
[8] C. Champenois, J. Pedregosa-Gutierrez, M. Marciante, D. Guyomarc'h, M. Houssin, and M. Knoop, AIP Conference Proceedings 1521, 210-219 (American Institute of Physics, 2013), doi:10.1063/1.4796077.
[9] R. Reichle, D. Leibfried, R. Blakestad, J. Britton, J. Jost, E. Knill, C. Langer, R. Ozeri, S. Seidelin, and D. Wineland, Fortschr. Phys. 54, 666 (2006), doi:10.1002/prop.200610326.
[10] R. Bowler, J. Gaebler, Y. Lin, T. R. Tan, D. Hanneke, J. D. Jost, J. P. Home, D. Leibfried, and D. J. Wineland, Phys. Rev. Lett. 109, 080502 (2012), doi:10.1103/PhysRevLett.109.080502.
[11] A. Walther, F. Ziesel, T. Ruster, S. T. Dawkins, K. Ott, M. Hettrich, K. Singer, F. Schmidt-Kaler, and U. Poschinger, Phys. Rev. Lett. 109, 080501 (2012), doi:10.1103/PhysRevLett.109.080501.
[12] J. Pedregosa-Gutierrez, C. Champenois, M. Kamsap, and M. Knoop, Int. J. Mass Spectrom. (2015), to be published.
[13] S. A. Prasad and T. M. O'Neil, Phys. Fluids 22 (1979).
[14] D. H. E. Dubin and T. M. O'Neil, Rev. Mod. Phys. 71, 87 (1999).
[15] C. Champenois, J. Phys. B: At. Mol. Opt. Phys.
[16] L. Turner, Phys. Fluids 30, 3196 (1987).
[17] L. Hornekaer and M. Drewsen, Phys. Rev. A 66, 013412 (2002).
[18] P. O. Schmidt, T. Rosenband, C. Langer, W. M. Itano, J. C. Bergquist, and D. J. Wineland, Science 309, 749 (2005), doi:10.1126/science.1114375, http://www.sciencemag.org/content/309/5735/749.full.pdf.
| []
|
[
"Charm meson spectra in e",
"Charm meson spectra in e"
]
| [
"M Artuso \nSyracuse University\n13244SyracuseNew York\n",
"C Boulahouache \nSyracuse University\n13244SyracuseNew York\n",
"S Blusk \nSyracuse University\n13244SyracuseNew York\n",
"J Butt \nSyracuse University\n13244SyracuseNew York\n",
"E Dambasuren \nSyracuse University\n13244SyracuseNew York\n",
"O Dorjkhaidav \nSyracuse University\n13244SyracuseNew York\n",
"J Haynes \nSyracuse University\n13244SyracuseNew York\n",
"N Horwitz \nSyracuse University\n13244SyracuseNew York\n",
"N Menaa \nSyracuse University\n13244SyracuseNew York\n",
"G C Moneti \nSyracuse University\n13244SyracuseNew York\n",
"R Mountain \nSyracuse University\n13244SyracuseNew York\n",
"H Muramatsu \nSyracuse University\n13244SyracuseNew York\n",
"R Nandakumar \nSyracuse University\n13244SyracuseNew York\n",
"R Redjimi \nSyracuse University\n13244SyracuseNew York\n",
"R Sia \nSyracuse University\n13244SyracuseNew York\n",
"T Skwarnicki \nSyracuse University\n13244SyracuseNew York\n",
"S Stone \nSyracuse University\n13244SyracuseNew York\n",
"J C Wang \nSyracuse University\n13244SyracuseNew York\n",
"Kevin Zhang \nSyracuse University\n13244SyracuseNew York\n",
"A H Mahmood \nUniversity of Texas -Pan American\n78539EdinburgTexas\n",
"S E Csorna \nVanderbilt University\n37235NashvilleTennessee\n",
"G Bonvicini \nWayne State University\n48202DetroitMichigan\n",
"D Cinabro \nWayne State University\n48202DetroitMichigan\n",
"M Dubrovin \nWayne State University\n48202DetroitMichigan\n",
"A Bornheim \nCalifornia Institute of Technology\n91125PasadenaCalifornia\n",
"E Lipeles \nCalifornia Institute of Technology\n91125PasadenaCalifornia\n",
"S P Pappas \nCalifornia Institute of Technology\n91125PasadenaCalifornia\n",
"A Shapiro \nCalifornia Institute of Technology\n91125PasadenaCalifornia\n",
"A J Weinstein \nCalifornia Institute of Technology\n91125PasadenaCalifornia\n",
"R A Briere \nCarnegie Mellon University\n15213PittsburghPennsylvania\n",
"G P Chen \nCarnegie Mellon University\n15213PittsburghPennsylvania\n",
"T Ferguson \nCarnegie Mellon University\n15213PittsburghPennsylvania\n",
"G Tatishvili \nCarnegie Mellon University\n15213PittsburghPennsylvania\n",
"H Vogel \nCarnegie Mellon University\n15213PittsburghPennsylvania\n",
"M E Watkins \nCarnegie Mellon University\n15213PittsburghPennsylvania\n",
"N E Adam \nCornell University\n14853IthacaNew York\n",
"J P Alexander \nCornell University\n14853IthacaNew York\n",
"K Berkelman \nCornell University\n14853IthacaNew York\n",
"V Boisvert \nCornell University\n14853IthacaNew York\n",
"D G Cassel \nCornell University\n14853IthacaNew York\n",
"J E Duboscq \nCornell University\n14853IthacaNew York\n",
"K M Ecklund \nCornell University\n14853IthacaNew York\n",
"R Ehrlich \nCornell University\n14853IthacaNew York\n",
"R S Galik \nCornell University\n14853IthacaNew York\n",
"L Gibbons \nCornell University\n14853IthacaNew York\n",
"B Gittelman \nCornell University\n14853IthacaNew York\n",
"S W Gray \nCornell University\n14853IthacaNew York\n",
"D L Hartill \nCornell University\n14853IthacaNew York\n",
"B K Heltsley \nCornell University\n14853IthacaNew York\n",
"L Hsu \nCornell University\n14853IthacaNew York\n",
"C D Jones \nCornell University\n14853IthacaNew York\n",
"J Kandaswamy \nCornell University\n14853IthacaNew York\n",
"D L Kreinick \nCornell University\n14853IthacaNew York\n",
"V E Kuznetsov \nCornell University\n14853IthacaNew York\n",
"A Magerkurth \nCornell University\n14853IthacaNew York\n",
"H Mahlke-Krüger \nCornell University\n14853IthacaNew York\n",
"T O Meyer \nCornell University\n14853IthacaNew York\n",
"J R Patterson \nCornell University\n14853IthacaNew York\n",
"T K Pedlar \nCornell University\n14853IthacaNew York\n",
"D Peterson \nCornell University\n14853IthacaNew York\n",
"J Pivarski \nCornell University\n14853IthacaNew York\n",
"D Riley \nCornell University\n14853IthacaNew York\n",
"A J Sadoff \nCornell University\n14853IthacaNew York\n",
"H Schwarthoff \nCornell University\n14853IthacaNew York\n",
"M R Shepherd \nCornell University\n14853IthacaNew York\n",
"W M Sun \nCornell University\n14853IthacaNew York\n",
"J G Thayer \nCornell University\n14853IthacaNew York\n",
"D Urner \nCornell University\n14853IthacaNew York\n",
"T Wilksen \nCornell University\n14853IthacaNew York\n",
"M Weinberger \nCornell University\n14853IthacaNew York\n",
"S B Athar \nUniversity of Florida\n32611GainesvilleFlorida\n",
"P Avery \nUniversity of Florida\n32611GainesvilleFlorida\n",
"L Breva-Newell \nUniversity of Florida\n32611GainesvilleFlorida\n",
"V Potlia \nUniversity of Florida\n32611GainesvilleFlorida\n",
"H Stoeck \nUniversity of Florida\n32611GainesvilleFlorida\n",
"J Yelton \nUniversity of Florida\n32611GainesvilleFlorida\n",
"C Cawlfield \nUniversity of Illinois\n61801Urbana-Champaign, Illinois\n",
"B I Eisenstein \nUniversity of Illinois\n61801Urbana-Champaign, Illinois\n",
"G D Gollin \nUniversity of Illinois\n61801Urbana-Champaign, Illinois\n",
"I Karliner \nUniversity of Illinois\n61801Urbana-Champaign, Illinois\n",
"N Lowrey \nUniversity of Illinois\n61801Urbana-Champaign, Illinois\n",
"P Naik \nUniversity of Illinois\n61801Urbana-Champaign, Illinois\n",
"C Sedlack \nUniversity of Illinois\n61801Urbana-Champaign, Illinois\n",
"M Selen \nUniversity of Illinois\n61801Urbana-Champaign, Illinois\n",
"J J Thaler \nUniversity of Illinois\n61801Urbana-Champaign, Illinois\n",
"J Williams \nUniversity of Illinois\n61801Urbana-Champaign, Illinois\n",
"K W Edwards \nInstitute of Particle Physics\nCarleton University\nK1S 5B6OttawaOntarioCanada, Canada\n",
"D Besson \nUniversity of Kansas\n66045LawrenceKansas\n",
"K Y Gao \nUniversity of Minnesota\n55455MinneapolisMinnesota\n",
"D T Gong \nUniversity of Minnesota\n55455MinneapolisMinnesota\n",
"Y Kubota \nUniversity of Minnesota\n55455MinneapolisMinnesota\n",
"S Z Li \nUniversity of Minnesota\n55455MinneapolisMinnesota\n",
"R Poling \nUniversity of Minnesota\n55455MinneapolisMinnesota\n",
"A W Scott \nUniversity of Minnesota\n55455MinneapolisMinnesota\n",
"A Smith \nUniversity of Minnesota\n55455MinneapolisMinnesota\n",
"C J Stepaniak \nUniversity of Minnesota\n55455MinneapolisMinnesota\n",
"J Urheim \nUniversity of Minnesota\n55455MinneapolisMinnesota\n",
"Z Metreveli \nNorthwestern University\n60208EvanstonIllinois\n",
"K K Seth \nNorthwestern University\n60208EvanstonIllinois\n",
"A Tomaradze \nNorthwestern University\n60208EvanstonIllinois\n",
"P Zweber \nNorthwestern University\n60208EvanstonIllinois\n",
"J Ernst \nState University of New York at Albany\n12222 15AlbanyNew York\n\nOhio State University\n43210ColumbusOhio\n",
"K Arms ",
"E Eckhart ",
"K K Gan ",
"C Gwon ",
"H Severini \nUniversity of Oklahoma\n73019NormanOklahoma\n",
"P Skubic \nUniversity of Oklahoma\n73019NormanOklahoma\n",
"D Cronin-Hennessy \nUniversity of Rochester\n14627RochesterNew York\n",
"C S Park \nUniversity of Rochester\n14627RochesterNew York\n",
"W Park \nUniversity of Rochester\n14627RochesterNew York\n",
"J B Thayer \nUniversity of Rochester\n14627RochesterNew York\n",
"E H Thorndike \nUniversity of Rochester\n14627RochesterNew York\n",
"T E Coan \nSouthern Methodist University\n75275DallasTexas\n",
"Y S Gao \nSouthern Methodist University\n75275DallasTexas\n",
"F Liu \nSouthern Methodist University\n75275DallasTexas\n",
"R Stroynowski \nSouthern Methodist University\n75275DallasTexas\n",
"Cleo Collaboration ",
"\nUniversity of Pittsburgh\n15260PittsburghPennsylvania\n",
"\nPurdue University\n47907West Lafayette, Indiana\n",
"\nRensselaer Polytechnic Institute\n12180TroyNew York\n"
]
| [
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"Syracuse University\n13244SyracuseNew York",
"University of Texas -Pan American\n78539EdinburgTexas",
"Vanderbilt University\n37235NashvilleTennessee",
"Wayne State University\n48202DetroitMichigan",
"Wayne State University\n48202DetroitMichigan",
"Wayne State University\n48202DetroitMichigan",
"California Institute of Technology\n91125PasadenaCalifornia",
"California Institute of Technology\n91125PasadenaCalifornia",
"California Institute of Technology\n91125PasadenaCalifornia",
"California Institute of Technology\n91125PasadenaCalifornia",
"California Institute of Technology\n91125PasadenaCalifornia",
"Carnegie Mellon University\n15213PittsburghPennsylvania",
"Carnegie Mellon University\n15213PittsburghPennsylvania",
"Carnegie Mellon University\n15213PittsburghPennsylvania",
"Carnegie Mellon University\n15213PittsburghPennsylvania",
"Carnegie Mellon University\n15213PittsburghPennsylvania",
"Carnegie Mellon University\n15213PittsburghPennsylvania",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"Cornell University\n14853IthacaNew York",
"University of Florida\n32611GainesvilleFlorida",
"University of Florida\n32611GainesvilleFlorida",
"University of Florida\n32611GainesvilleFlorida",
"University of Florida\n32611GainesvilleFlorida",
"University of Florida\n32611GainesvilleFlorida",
"University of Florida\n32611GainesvilleFlorida",
"University of Illinois\n61801Urbana-Champaign, Illinois",
"University of Illinois\n61801Urbana-Champaign, Illinois",
"University of Illinois\n61801Urbana-Champaign, Illinois",
"University of Illinois\n61801Urbana-Champaign, Illinois",
"University of Illinois\n61801Urbana-Champaign, Illinois",
"University of Illinois\n61801Urbana-Champaign, Illinois",
"University of Illinois\n61801Urbana-Champaign, Illinois",
"University of Illinois\n61801Urbana-Champaign, Illinois",
"University of Illinois\n61801Urbana-Champaign, Illinois",
"University of Illinois\n61801Urbana-Champaign, Illinois",
"Institute of Particle Physics\nCarleton University\nK1S 5B6OttawaOntarioCanada, Canada",
"University of Kansas\n66045LawrenceKansas",
"University of Minnesota\n55455MinneapolisMinnesota",
"University of Minnesota\n55455MinneapolisMinnesota",
"University of Minnesota\n55455MinneapolisMinnesota",
"University of Minnesota\n55455MinneapolisMinnesota",
"University of Minnesota\n55455MinneapolisMinnesota",
"University of Minnesota\n55455MinneapolisMinnesota",
"University of Minnesota\n55455MinneapolisMinnesota",
"University of Minnesota\n55455MinneapolisMinnesota",
"University of Minnesota\n55455MinneapolisMinnesota",
"Northwestern University\n60208EvanstonIllinois",
"Northwestern University\n60208EvanstonIllinois",
"Northwestern University\n60208EvanstonIllinois",
"Northwestern University\n60208EvanstonIllinois",
"State University of New York at Albany\n12222 15AlbanyNew York",
"Ohio State University\n43210ColumbusOhio",
"University of Oklahoma\n73019NormanOklahoma",
"University of Oklahoma\n73019NormanOklahoma",
"University of Rochester\n14627RochesterNew York",
"University of Rochester\n14627RochesterNew York",
"University of Rochester\n14627RochesterNew York",
"University of Rochester\n14627RochesterNew York",
"University of Rochester\n14627RochesterNew York",
"Southern Methodist University\n75275DallasTexas",
"Southern Methodist University\n75275DallasTexas",
"Southern Methodist University\n75275DallasTexas",
"Southern Methodist University\n75275DallasTexas",
"University of Pittsburgh\n15260PittsburghPennsylvania",
"Purdue University\n47907West Lafayette, Indiana",
"Rensselaer Polytechnic Institute\n12180TroyNew York"
]
| []
| Using the CLEO detector at the Cornell Electron-positron Storage Ring, we have measured the scaled momentum spectra, dσ/dx p , and the inclusive production cross sections of the charm mesons D + , D 0 , D ⋆+ and D ⋆0 in e + e − annihilation at about 10.5 GeV center of mass energy, excluding the decay products of B mesons. The statistical accuracy and momentum resolution are superior to previous measurements at this energy. | 10.1103/physrevd.70.112001 | [
"https://export.arxiv.org/pdf/hep-ex/0402040v2.pdf"
]
| 51,748,186 | hep-ex/0402040 | a3056cdd1bb0a385ca64217aabdfa4835d143b44 |
Charm meson spectra in e
19
M Artuso
Syracuse University
13244SyracuseNew York
C Boulahouache
Syracuse University
13244SyracuseNew York
S Blusk
Syracuse University
13244SyracuseNew York
J Butt
Syracuse University
13244SyracuseNew York
E Dambasuren
Syracuse University
13244SyracuseNew York
O Dorjkhaidav
Syracuse University
13244SyracuseNew York
J Haynes
Syracuse University
13244SyracuseNew York
N Horwitz
Syracuse University
13244SyracuseNew York
N Menaa
Syracuse University
13244SyracuseNew York
G C Moneti
Syracuse University
13244SyracuseNew York
R Mountain
Syracuse University
13244SyracuseNew York
H Muramatsu
Syracuse University
13244SyracuseNew York
R Nandakumar
Syracuse University
13244SyracuseNew York
R Redjimi
Syracuse University
13244SyracuseNew York
R Sia
Syracuse University
13244SyracuseNew York
T Skwarnicki
Syracuse University
13244SyracuseNew York
S Stone
Syracuse University
13244SyracuseNew York
J C Wang
Syracuse University
13244SyracuseNew York
Kevin Zhang
Syracuse University
13244SyracuseNew York
A H Mahmood
University of Texas -Pan American
78539EdinburgTexas
S E Csorna
Vanderbilt University
37235NashvilleTennessee
G Bonvicini
Wayne State University
48202DetroitMichigan
D Cinabro
Wayne State University
48202DetroitMichigan
M Dubrovin
Wayne State University
48202DetroitMichigan
A Bornheim
California Institute of Technology
91125PasadenaCalifornia
E Lipeles
California Institute of Technology
91125PasadenaCalifornia
S P Pappas
California Institute of Technology
91125PasadenaCalifornia
A Shapiro
California Institute of Technology
91125PasadenaCalifornia
A J Weinstein
California Institute of Technology
91125PasadenaCalifornia
R A Briere
Carnegie Mellon University
15213PittsburghPennsylvania
G P Chen
Carnegie Mellon University
15213PittsburghPennsylvania
T Ferguson
Carnegie Mellon University
15213PittsburghPennsylvania
G Tatishvili
Carnegie Mellon University
15213PittsburghPennsylvania
H Vogel
Carnegie Mellon University
15213PittsburghPennsylvania
M E Watkins
Carnegie Mellon University
15213PittsburghPennsylvania
N E Adam
Cornell University
14853IthacaNew York
J P Alexander
Cornell University
14853IthacaNew York
K Berkelman
Cornell University
14853IthacaNew York
V Boisvert
Cornell University
14853IthacaNew York
D G Cassel
Cornell University
14853IthacaNew York
J E Duboscq
Cornell University
14853IthacaNew York
K M Ecklund
Cornell University
14853IthacaNew York
R Ehrlich
Cornell University
14853IthacaNew York
R S Galik
Cornell University
14853IthacaNew York
L Gibbons
Cornell University
14853IthacaNew York
B Gittelman
Cornell University
14853IthacaNew York
S W Gray
Cornell University
14853IthacaNew York
D L Hartill
Cornell University
14853IthacaNew York
B K Heltsley
Cornell University
14853IthacaNew York
L Hsu
Cornell University
14853IthacaNew York
C D Jones
Cornell University
14853IthacaNew York
J Kandaswamy
Cornell University
14853IthacaNew York
D L Kreinick
Cornell University
14853IthacaNew York
V E Kuznetsov
Cornell University
14853IthacaNew York
A Magerkurth
Cornell University
14853IthacaNew York
H Mahlke-Krüger
Cornell University
14853IthacaNew York
T O Meyer
Cornell University
14853IthacaNew York
J R Patterson
Cornell University
14853IthacaNew York
T K Pedlar
Cornell University
14853IthacaNew York
D Peterson
Cornell University
14853IthacaNew York
J Pivarski
Cornell University
14853IthacaNew York
D Riley
Cornell University
14853IthacaNew York
A J Sadoff
Cornell University
14853IthacaNew York
H Schwarthoff
Cornell University
14853IthacaNew York
M R Shepherd
Cornell University
14853IthacaNew York
W M Sun
Cornell University
14853IthacaNew York
J G Thayer
Cornell University
14853IthacaNew York
D Urner
Cornell University
14853IthacaNew York
T Wilksen
Cornell University
14853IthacaNew York
M Weinberger
Cornell University
14853IthacaNew York
S B Athar
University of Florida
32611GainesvilleFlorida
P Avery
University of Florida
32611GainesvilleFlorida
L Breva-Newell
University of Florida
32611GainesvilleFlorida
V Potlia
University of Florida
32611GainesvilleFlorida
H Stoeck
University of Florida
32611GainesvilleFlorida
J Yelton
University of Florida
32611GainesvilleFlorida
C Cawlfield
University of Illinois
61801Urbana-Champaign, Illinois
B I Eisenstein
University of Illinois
61801Urbana-Champaign, Illinois
G D Gollin
University of Illinois
61801Urbana-Champaign, Illinois
I Karliner
University of Illinois
61801Urbana-Champaign, Illinois
N Lowrey
University of Illinois
61801Urbana-Champaign, Illinois
P Naik
University of Illinois
61801Urbana-Champaign, Illinois
C Sedlack
University of Illinois
61801Urbana-Champaign, Illinois
M Selen
University of Illinois
61801Urbana-Champaign, Illinois
J J Thaler
University of Illinois
61801Urbana-Champaign, Illinois
J Williams
University of Illinois
61801Urbana-Champaign, Illinois
K W Edwards
Institute of Particle Physics
Carleton University
K1S 5B6OttawaOntarioCanada, Canada
D Besson
University of Kansas
66045LawrenceKansas
K Y Gao
University of Minnesota
55455MinneapolisMinnesota
D T Gong
University of Minnesota
55455MinneapolisMinnesota
Y Kubota
University of Minnesota
55455MinneapolisMinnesota
S Z Li
University of Minnesota
55455MinneapolisMinnesota
R Poling
University of Minnesota
55455MinneapolisMinnesota
A W Scott
University of Minnesota
55455MinneapolisMinnesota
A Smith
University of Minnesota
55455MinneapolisMinnesota
C J Stepaniak
University of Minnesota
55455MinneapolisMinnesota
J Urheim
University of Minnesota
55455MinneapolisMinnesota
Z Metreveli
Northwestern University
60208EvanstonIllinois
K K Seth
Northwestern University
60208EvanstonIllinois
A Tomaradze
Northwestern University
60208EvanstonIllinois
P Zweber
Northwestern University
60208EvanstonIllinois
J Ernst
State University of New York at Albany
12222 15AlbanyNew York
Ohio State University
43210ColumbusOhio
K Arms
E Eckhart
K K Gan
C Gwon
H Severini
University of Oklahoma
73019NormanOklahoma
P Skubic
University of Oklahoma
73019NormanOklahoma
D Cronin-Hennessy
University of Rochester
14627RochesterNew York
C S Park
University of Rochester
14627RochesterNew York
W Park
University of Rochester
14627RochesterNew York
J B Thayer
University of Rochester
14627RochesterNew York
E H Thorndike
University of Rochester
14627RochesterNew York
T E Coan
Southern Methodist University
75275DallasTexas
Y S Gao
Southern Methodist University
75275DallasTexas
F Liu
Southern Methodist University
75275DallasTexas
R Stroynowski
Southern Methodist University
75275DallasTexas
Cleo Collaboration
University of Pittsburgh
15260PittsburghPennsylvania
Purdue University
47907West Lafayette, Indiana
Rensselaer Polytechnic Institute
12180TroyNew York
Charm meson spectra in e+e− annihilation at 10.5 GeV c.m.e.
(Dated: March 25, 2022) CLNS 04/1861 v2, CLEO 04-3 v2
Using the CLEO detector at the Cornell Electron-positron Storage Ring, we have measured the scaled momentum spectra, dσ/dx p , and the inclusive production cross sections of the charm mesons D + , D 0 , D ⋆+ and D ⋆0 in e + e − annihilation at about 10.5 GeV center of mass energy, excluding the decay products of B mesons. The statistical accuracy and momentum resolution are superior to previous measurements at this energy.
I. INTRODUCTION
We report the measurement of the momentum spectra of charged and neutral D and D * charm mesons produced at the Cornell Electron-positron Storage Ring, CESR, in nonresonant e + e − annihilation at about 10.5 GeV center of mass energy (CME) and observed with the CLEO detector. The D 0 and D + spectra each include both directly produced D's, and D's which are decay products of D excited states. From them we also derive the inclusive production cross section for these charm mesons.
While very accurate data on bottom quark production from LEP and SLD have been published in recent years [1,2,3,4], the data currently available for studies of charm fragmentation at 10.5 GeV CME [5,6] are quite old and, by present standards, of poor statistical quality and momentum resolution. Our statistical sample is about 80 times larger than our previous one [6], and our current momentum resolution is about a factor of 2 better.
The spectra represent measurements of charm quark fragmentation distributions D_c^h(x, s), i.e., the probability density that a c quark produces a charm hadron h carrying a fraction x of its momentum, √s being the "energy scale" of the process, the e+e− CME in our case [7,8]. Experimental heavy-meson spectra in e+e− collisions are important for theoretical and practical reasons: (i) they provide a component that is not yet calculable in predicting heavy flavor production in very high energy hadronic collisions, (ii) they can test advanced perturbative QCD (PQCD) methods, (iii) they can test the QCD evolution equations, and (iv) they provide information for the best parametrization of the Monte Carlo simulations on which the analyses of many high energy experiments partially rely. Items (i) and (ii) are interconnected. The calculations of heavy flavor production cross sections in hadronic collisions (e.g., at the Tevatron and the LHC) are generally based on the factorization hypothesis, i.e., a convolution of (a) the parton distribution functions for the colliding hadrons, (b) the perturbative calculation of the parton-parton cross section and (c) the parton fragmentation function D_q^h(x, s). Item (b) and part of (c) (the parton-shower cascade) can be calculated, in the case of heavy quarks, using PQCD. Item (a) and the second phase of (c) (the hadronization phase) are intrinsically non-perturbative (long distance) processes: as of now, they must be provided by experiments. There is an ongoing theoretical effort to push the potential of PQCD to calculate the perturbative component of the fragmentation function. It needs tests and guidance from the experimental spectra of heavy flavored hadrons produced in e+e− annihilation. De-convolving the calculated PQCD component from the experimental spectra, one obtains the non-perturbative component of the fragmentation function. Unphysical behavior of the result (e.g., negative values, extension beyond the kinematic limit) is an indication that further refinement of the PQCD calculation is needed. Tests of this kind have been performed up to now on B production in e+e− annihilation [9,10] and in hadronic collisions [11,12], and on charm production in hadron [13] and ep collisions [14,15]. Charm production in e+e− annihilation provides a further testing ground for these theoretical attempts [16]. The larger value of Λ_QCD/m_c with respect to Λ_QCD/m_b makes these non-perturbative effects more evident than in bottom hadron production.
Tests of the Altarelli-Parisi evolution equations [17,18] have been performed by our collaboration [6] with low sensitivity and over a relatively small energy interval, comparing the CLEO results with PETRA results. The spectra reported in the present paper can be compared with LEP [19] results providing a test over the 10 to 200 GeV energy range.
Lacking rigorous calculations of the process of quark and gluon hadronization, QCD inspired Monte Carlo simulations have been built: the Lund String Model [20,21,22] and Cluster Fragmentation [23]. These models have been implemented in Monte Carlo programs (JETSET [24], UCLA [25], HERWIG [23]). In each case a number of parameters are introduced, to be determined by fitting the experimental distributions. Monte Carlo simulations of quark hadronization are used by experiments to determine detection efficiencies and to calculate some sources of backgrounds. The results presented here include a JETSET parametrization that produces spectra that agree quite well with the shapes of all spectra obtained in this analysis.
In all these uses of our results, spectral shapes are most important, rather than the absolute cross-section values; therefore, shape is the main focus of our attention.
In Sec. II we first list the charm mesons studied in our analysis along with the decay modes considered and then we describe the data sample analyzed and outline the procedures used to produce the spectra. In Sec. III we describe the Monte Carlo simulations we have generated and their use. In Sec. IV we give details on how we extract the signal from the effective mass distributions, and in Sec. V we explain how the detection efficiency is estimated. Sec. VI is devoted to discussing the checks we performed and the evaluation of errors. In Sec. VII the results, i.e., the charm meson spectra, are shown in the order given in Sec. II. Our results for the inclusive production cross sections are given in Sec. VIII. Our optimization of the JETSET parameters to reproduce our spectra is described in Sec. IX. In two appendices we show plots of the detection efficiencies and provide detailed tables of the measured spectra.
II. GENERAL ANALYSIS PROCEDURES
We measure the momentum distributions of D + , D 0 , D ⋆+ and D ⋆0 using the following decay modes (charge conjugates are implied throughout this paper):
• D+:
  D+ → K−π+π+
• D0:
  D0 → K−π+
  D0 → K−π+π+π−
• D*0:
  D*0 → D0π0 → (K−π+)π0
  D*0 → D0π0 → (K−π+π+π−)π0
• D*+:
  D*+ → D0π+ → (K−π+)π+
  D*+ → D0π+ → (K−π+π+π−)π+
We apply selection criteria to identify events with candidate D and/or D* mesons that decay in one of these modes. We then extract the candidate D or D* mass distributions in twenty 0.05-wide bins of the reduced momentum, x_p(D) ≡ p/p_max, where p_max (approximately 4.95 GeV/c) is the maximum attainable momentum at the relevant beam energy.
We fit these mass distributions with appropriate signal and background functions. The distributions of signal yields vs x p , corrected for detection efficiency, give the shape of the x p spectra: the main goal of our analysis. We then divide these spectra by the integrated luminosity and the appropriate decay branching fractions to form the differential production cross section dσ/dx p for each channel.
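Schematically, the per-bin conversion from raw yield to cross section described above can be written as below; the function name is ours, and the explicit division by the 0.05 bin width (which turns a per-bin cross section into dσ/dx_p) is an assumption of this sketch rather than a statement about the paper's normalization.

```python
def dsigma_dxp(raw_yield, efficiency, luminosity, branching_fraction,
               bin_width=0.05):
    """Efficiency-corrected yield in one x_p bin divided by the integrated
    luminosity, the decay branching fraction and the bin width."""
    return raw_yield / (efficiency * luminosity * branching_fraction * bin_width)
```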
The use of different decay modes of the same meson provides a check on possible systematic biases.
The procedures used in the present analyses closely parallel those we used in measuring D and D* spectra from B decay [26].

A. Data and Detector
The e + e − annihilation data sample used in this study was taken with the CLEO II.V detector [27,28] at CESR during 1995-1999.
It consists of 2.9 fb−1 of "continuum" (non-resonant) data at about 10.52 GeV CME (36 MeV below the BB̄ threshold) and the "ON4S" sample, comprising 6.0 fb−1 at 10.58 GeV, the Υ(4S) peak. Assuming that the shape of the spectrum is the same at these two energies,¹ we merge the two samples for charm mesons with momenta above the maximum kinematically allowed in B decay. For lower momenta we use only the continuum sample, thus reducing the statistics available in that region. All charm hadrons coming from B decays are thereby excluded.
To combine the two parts of the spectra (x_p < 0.50, extracted from only the continuum sample, and x_p > 0.50, extracted from both the continuum and "ON4S" samples), we use the well-known 1/s dependence of the e+e− annihilation cross section into a pair of fermions (see Sec. 39 of Ref. [32], 2004) and scale the x_p < 0.50 spectra by the factor
\[ 1 + \frac{L_4}{L_0}\,\frac{s_0}{s_4} = 1 + \frac{6.0}{2.9}\cdot\frac{(10.52)^2}{(10.58)^2}. \tag{1} \]
Here L 0 and L 4 are the integrated luminosities of, respectively, the "continuum" and "On4S" samples, and s 0 and s 4 are the squares of the respective CMEs. The statistical sample for x p < 0.50 is a factor of three smaller than that for x p > 0.50. The spectrum so obtained is then divided by the integrated luminosity, (L 0 + L 4 ), and by the appropriate decay branching fraction to obtain dσ/dx p for each channel.
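For concreteness, the scale factor of Eq. (1) can be evaluated directly from the quoted luminosities and energies; this is only a numerical check of the formula, not part of the analysis code.

```python
# Scale factor applied to the x_p < 0.50 spectra (continuum sample only)
L0, L4 = 2.9, 6.0              # fb^-1: continuum and ON4S integrated luminosities
s0, s4 = 10.52**2, 10.58**2    # GeV^2: squared center-of-mass energies

scale = 1.0 + (L4 / L0) * (s0 / s4)
print(f"scale factor = {scale:.3f}")   # about 3.05
```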
B. Selection Criteria
We select events using standard CLEO criteria designed to efficiently select e+e− annihilation into hadrons, while rejecting Bhabha scattering, e+e− → µ+µ−, and beam-gas interactions. At least three tracks are required. Events with three or four tracks must also have 65% of the center-of-mass energy deposited in the calorimeter. For those with five or more tracks, the visible energy, summing both energy in tracks and neutral energy in the calorimeter, must exceed 20% of the center-of-mass energy. Tracks used to reconstruct a D or D* are required to be the result of good tracking fits and to have an angle with respect to the beam line, θ, such that |cos θ| < 0.91. They are also required to be consistent with originating from the luminous region. Further, if they have momentum greater than 250 MeV/c, we require that the impact parameter with respect to the beam line be less than 3 mm, and that the distance between the point of closest approach to the beam line and the event vertex be less than 2.5 cm.

¹ Comparing our spectra with the corresponding ones at √s = 30.4 GeV [29], we estimated that the fractional difference between the D* spectrum at √s = 10.52 GeV and the one at 10.58 GeV is at most 0.075%, after normalizing one to the other. Because of this sample merging, our results effectively refer to CME √s = 10.56 GeV.
We impose particle identification requirements based on specific ionization (dE/dx) and time of flight measurements for the track. The requirement is that the combined χ 2 probability of the chosen identification must be greater than 4%.
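As a sketch of such a selection (treating the dE/dx and time-of-flight measurements as two independent one-degree-of-freedom χ² contributions is our own simplifying assumption; the detector-specific combination may differ):

```python
from scipy.stats import chi2

def pid_accept(chi2_dedx, chi2_tof, min_prob=0.04):
    """Keep the track if the combined chi-square probability of the chosen
    particle hypothesis exceeds 4% (two degrees of freedom assumed)."""
    return chi2.sf(chi2_dedx + chi2_tof, df=2) > min_prob
```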
Photon candidate showers detected in the central barrel region (| cos θ| < 0.707) of the crystal calorimeter are required to have a minimum energy of 30 MeV. Those detected in the forward calorimeters are required to have a minimum energy of 50 MeV. Photon candidates are also required to be well separated from the extrapolated position of all tracks, and the lateral shape of the energy distribution must be consistent with that expected from an electromagnetic shower.
Candidate π 0 mesons are reconstructed from pairs of photon candidates. At least one of the two must be in the central barrel region. To improve the determination of the π 0 momentum, the two photon combination is kinematically fitted to the nominal π 0 mass. The combination is accepted if this fit has P (χ 2 ) ≥ 10%. The resulting π 0 4-momentum is used in D * 0 reconstruction.
III. MONTE CARLO SIMULATIONS
Monte Carlo simulations are used to estimate detection efficiencies. Continuum e + e − annihilation events are generated using the JETSET 7.3 [30] package. The simulated events are then processed through a GEANT-based [31] simulation of the CLEO detector and reconstructed and analyzed as real data.
The Monte Carlo simulations are also used for other purposes: (i) to provide a shape for the signal in the candidate D mass distribution (Sec.IV), (ii) to estimate the D and D * momentum resolution (Sec.III A), and (iii) to perform checks on the validity of our analysis procedures (Sec.IV A ,VI A).
We use two kinds of Monte Carlo simulations. In the first kind, the "signal Monte Carlo", only e+e− → cc̄ events are generated at the JETSET stage, and an event is accepted only if the charm meson under study is present. That meson is made to decay only in the mode under study. The corresponding anti-charm hadron decays generically. We produce three signal Monte Carlos, one for D+ and two for D0, for the two decay channels analyzed. The D's in these signal Monte Carlos are the mix of directly produced D's and D's that are decay products of D*'s and other excited charm states. The mix is as generated by the physics simulation (JETSET). It follows that each of these signal Monte Carlos also acts as a signal Monte Carlo for D*'s decaying into that specific D channel.
In the second kind of simulation, the "generic Monte Carlo", all possible e + e − hadronic annihilations are produced according to present knowledge [32].
The three signal Monte Carlo's and the generic Monte Carlo accurately reproduce the D and D ⋆ signal shapes observed in data. Backgrounds in the signal Monte Carlo mass distributions are much smaller than those in the generic Monte Carlo, which simulates more accurately the backgrounds in the data.
Both kinds of Monte Carlo simulation are used to estimate the detection efficiency. For each D or D ⋆ meson and its decay chain, we find that the signal Monte Carlo and generic Monte Carlo-derived efficiencies are statistically compatible. This proves that the strong background reduction in the signal Monte Carlo does not affect the efficiency estimation or, vice versa, that the large background of the generic Monte Carlo introduces no appreciable bias in the detection efficiency.
The two statistically independent Monte Carlo simulations allow internal checks of our procedures. We will refer to these as "generic Monte Carlo checks". In a generic Monte Carlo check, we analyze the generic Monte Carlo as data, using the procedure to be checked. Then we correct the reconstructed momentum spectrum using the detection efficiency obtained from the signal Monte Carlo. Finally we compare this efficiency-corrected spectrum with the JETSET-generated spectrum that was the input to the generic Monte Carlo. This comparison consists in calculating the χ 2 of the bin-by-bin difference between the reconstructed and the input spectrum:
\[ \chi^2 = \sum_{i=1}^{n} \left( \frac{R_i - I_i}{\delta R_i} \right)^2 , \tag{2} \]
where n is the number of bins, R i and I i are the values of, respectively, the reconstructed and input spectra in bin i and δR i is the statistical error on R i (the statistical errors on the input spectra are negligible). The resulting χ 2 probability, or confidence level (CL), is the measure of the correctness of the analysis procedure being checked. If we normalize the two spectra to each other and recompute the χ 2 , the new CL is a measure of the correctness of our procedure in so far as the reconstruction of the shape of the spectrum is concerned, irrespective of normalization. In a generic Monte Carlo check, the comparison is with the input spectrum. It is sensitive to all sources of systematic error on the shape of the spectra, except for possible errors in physics and detector simulation, that are common to signal and generic Monte Carlo. Hence, insofar as the MC is correct, each check provides a comprehensive estimate of all systematic errors associated with the shape of the spectrum, for the procedure being checked.
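A minimal sketch of this generic Monte Carlo check, assuming the reconstructed and input spectra are available as arrays of bin values, is:

```python
import numpy as np
from scipy.stats import chi2

def generic_mc_check(reconstructed, input_spectrum, errors):
    """Chi-square of Eq. (2) between the efficiency-corrected reconstructed
    spectrum and the JETSET input spectrum, and the corresponding CL."""
    r, i, dr = map(np.asarray, (reconstructed, input_spectrum, errors))
    chisq = np.sum(((r - i) / dr) ** 2)
    ndof = len(r)            # no fitted parameters in Eq. (2)
    return chisq, chi2.sf(chisq, ndof)
```

For the shape-only comparison, the two spectra would first be normalized to each other and the number of degrees of freedom reduced accordingly.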
A. Momentum Resolution
Comparison with theoretical calculation may involve the moments of the spectra:
\[ \int_0^1 x^N \frac{d\sigma}{dx}\, dx . \]
In order to minimize correlations between adjacent x p bins, the x p bin size should be chosen to be substantially larger than the x p resolution. It is then important to know the momentum, and hence the x p , resolution in our analysis. Using the CLEOG Monte Carlo simulation [31], which reproduces rather accurately our track and shower measurement errors, we plot the difference between the reconstructed x p and input x p (from JETSET). Fig. 1 shows this resolution distribution for the mode D 0 →K − π + for all momenta. The full width at half maximum (FWHM) is 0.008, i.e., 16% of the bin size (0.050). The resolution (FWHM) varies monotonically with momentum, from 4% of bin size at x p = 0.10 to 18% for x p = 0.95. For the other channels the resolution is likewise a small fraction of the bin size.
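If the moments defined above are computed from the binned spectra, a simple midpoint approximation is adequate, since the bin width (0.05) is much larger than the x_p resolution; the helper below is our own illustration.

```python
import numpy as np

def spectrum_moment(xp_centers, dsigma_dxp, order, bin_width=0.05):
    """Approximate  int_0^1 x^N (dsigma/dx) dx  by a midpoint sum
    over the twenty x_p bins."""
    x = np.asarray(xp_centers)
    y = np.asarray(dsigma_dxp)
    return np.sum(x**order * y) * bin_width
```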
IV. CANDIDATE MASS DISTRIBUTION FITTING
For the D + and D 0 analyses we select candidate daughters, add their four-momenta, and calculate the invariant mass M cand of the charm meson. Multiple candidates in the same event are accepted.
In the D ⋆ case we obtain the M cand distribution for the D 0 associated with the D ⋆ by selecting D ⋆ candidates with Q ≡ M * cand − M cand − m π in the signal region for D ⋆ decay. Here M * cand is the invariant mass of the decay products of the candidate D * . Random D-π associations are subtracted using the M cand distribution for events in the side bands of the D ⋆ signal in the Q distribution. 2 Fig. 2 shows examples of the M cand distributions for three different D ⋆ decay modes, for events with Q in the signal region and for those in the Q side bands. The residual background after the subtraction is due to D candidates from random track association.
Fig. 2: (a) D*+ → (K−π+)π+. (b) D*+ → (K−π+π+π−)π+. (c) D*0 → (K−π+π+π−)π0. They show the M_cand distribution for Q in the D*+ signal region and for Q in the D*+ side bands.
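A schematic version of the Q side-band subtraction described above might look as follows; the Q window boundaries and the assumption of a linearly scalable Q background under the side bands are illustrative choices, not the values used in the analysis.

```python
import numpy as np

def sideband_subtracted_mcand(mcand, q,
                              q_signal=(0.0055, 0.0065),      # GeV, placeholder window
                              q_sidebands=((0.002, 0.004), (0.008, 0.010)),
                              bins=64, mrange=(1.70, 2.02)):
    """Histogram of M_cand for candidates with Q in the signal window, minus
    the area-scaled histogram for candidates with Q in the side bands."""
    mcand, q = np.asarray(mcand), np.asarray(q)
    in_sig = (q > q_signal[0]) & (q < q_signal[1])
    in_sb = np.zeros_like(q, dtype=bool)
    sb_width = 0.0
    for lo, hi in q_sidebands:
        in_sb |= (q > lo) & (q < hi)
        sb_width += hi - lo
    scale = (q_signal[1] - q_signal[0]) / sb_width   # assumes flat Q background
    h_sig, edges = np.histogram(mcand[in_sig], bins=bins, range=mrange)
    h_sb, _ = np.histogram(mcand[in_sb], bins=bins, range=mrange)
    return h_sig - scale * h_sb, edges
```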
The choice of the signal shape used to fit the M cand distribution was studied and discussed in detail in a previous paper [26]. A Gaussian function does not give a sufficiently accurate parametrization of the D signal. Track measurement errors vary because of the geometrical orientation of the D decay products in the detector, because of different momenta of the decay tracks and overlap with other tracks. That study concluded that a satisfactory choice for the D signal shape is a double Gaussian, i.e., the sum of two Gaussians constrained to have the same mean. A different choice of signal fitting function is the signal shape obtained from the Monte Carlo simulation where, for each track, we can identify the input particle that generated it. We call the signal mass histograms thus obtained (one for each momentum bin) the "TAGMC shape". To compare these two choices we repeat a test that was performed in the previous paper [26], on the D 0 →K − π + channel, as follows.
We repeat the D0 data analysis, replacing the double Gaussian with the TAGMC shape. With this signal shape we obtain excellent fits, although not superior to the double Gaussian fits. We use MINUIT to find the compatibility of the two spectra: we fit one using the other as the fitting function. The fitted relative normalization parameter is 1.016 ± 0.007, and the CL of the fit is 93.8%. The two spectra are compared in Fig. 3(a) after normalizing one to the other. To find whether there is any x_p dependence of the difference between the spectra obtained by the two methods, we took the bin-by-bin fractional difference between the two spectra (Fig. 3(b)) and fitted it to a constant, resulting in a CL of 91.0%, consistent with no difference between the two choices of signal shape. The results obtained using the double Gaussian as the signal shape are compared with those from the TAGMC shape to estimate the systematic error on the total cross sections due to the uncertainty in the signal shape.
The suitability of the double Gaussian as fitting function is also confirmed by the goodness of the fits: in all the channels, the fit confidence levels are evenly distributed between 0.0 and 1.0, as they should be. A quadratic polynomial is used to fit the combinatoric background in each of the seven channels.
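A sketch of such a fit model, with the double-Gaussian signal parametrized by the same quantities that are later smoothed versus x_p (common mean, σ1, σ2/σ1 and A2/Atot) on top of a quadratic combinatoric background, could be written as below; it can then be fitted to each binned M_cand distribution with, e.g., scipy.optimize.curve_fit or MINUIT.

```python
import numpy as np

def double_gaussian(m, mean, sigma1, ratio21, frac2, area):
    """Sum of two Gaussians with a common mean; ratio21 = sigma2/sigma1,
    frac2 = A2/Atot, area = total signal yield."""
    sigma2 = ratio21 * sigma1
    g1 = np.exp(-0.5 * ((m - mean) / sigma1) ** 2) / (np.sqrt(2 * np.pi) * sigma1)
    g2 = np.exp(-0.5 * ((m - mean) / sigma2) ** 2) / (np.sqrt(2 * np.pi) * sigma2)
    return area * ((1.0 - frac2) * g1 + frac2 * g2)

def fit_model(m, mean, sigma1, ratio21, frac2, area, b0, b1, b2):
    """Double-Gaussian signal plus quadratic combinatoric background."""
    return double_gaussian(m, mean, sigma1, ratio21, frac2, area) \
        + b0 + b1 * m + b2 * m**2
```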
The fits of the M cand distributions are over the whole 1.70-2.02 GeV range shown in the figures, except for the D + → K − π + π + case, where we exclude the 1.96-2.02 GeV (D * + ) region, and for the D 0 → K − π + case, as explained in the next subsection. The fitted area of the double Gaussian (or the result of the COUNT procedure described in Sec. IV 2, below) is the "raw" yield for that x p bin.
In the next two subsections, we discuss additional backgrounds in the M cand distribution from the D 0 → K − π + channel, and describe an alternative procedure, the COUNT method, to estimate the raw yield in the D 0 → K − π + π − π + channel.
1. The D 0 → K − π + case
In the D 0 → K − π + case (direct or from D ⋆ decay) additional backgrounds must be considered: D 0 decays to K − K + , π − π + , K − ρ + and D 0 →K + π − misinterpreted as K − π + . The shapes of their M cand distributions are obtained from Monte Carlo simulation.
The K − ρ + background is very small and contributes only to the 1.70 < M(K − π + ) < 1.75 GeV mass region. This contribution is excluded by not considering this mass region in the fit.
The background due to Kπ switched identities shows as a very broad enhancement centered at the signal position. For x p > 0.20, this enhancement is so broad that it can be easily accommodated by the quadratic term of the polynomial background function. For small x p , it is narrower, but contributes negligibly. The amount of this background is fixed to a momentum dependent fraction determined by Monte Carlo simulation.
The backgrounds due to D 0 decays to K − K + , π − π + do not contribute to the peak, but, if ignored, would result in a very poor fit of the background. Such a fit overestimates the amount of background under the signal and thus underestimates the amount of signal. The D 0 →K − K + background level is a parameter to be fitted. Because of lack of statistics, the amount of D 0 →π − π + background is constrained to a fixed fraction (0.357) of the D 0 →K − K + background, based on the known relative branching ratio [32]. The ππ contribution is very small, and alternative methods of accounting for it cause negligible changes in signal yields. Fig. 4 shows data in three representative momentum intervals, demonstrating how the background is built up from the four contributions. All four background components are needed to extract the yield.
2. The D 0 → K − π + π + π − case: the COUNT method
In the case of the D 0 → K − π + π + π − decay, direct or from D ⋆ decay, in addition to using a double Gaussian as fitting function for the signal, we use a different procedure that leads to results that are statistically competitive. In the D 0 → K − π + π + π − case, the signal is quite narrow and the background is smooth over a wide M cand region. We exclude the signal region and fit the background to a polynomial. The signal region is centered on the mean of the double Gaussian fit and its range is chosen so as to contain the entire signal. We then count all events in the signal region and subtract the background obtained from this fit. The result of this subtraction is the measured signal yield. We perform this procedure on data for three choices of the signal region: 1.810-1.920 GeV, 1.820-1.910 GeV and 1.830-1.900 GeV.
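To make the COUNT procedure concrete, the sketch below applies the same steps to a toy M cand histogram generated on the spot (peak position, width, and yields are arbitrary and are not the CLEO data): the sidebands outside the 1.820-1.910 GeV window are fitted with a quadratic polynomial, and the interpolated background is subtracted from the event count inside the window.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy M_cand sample: smooth combinatoric background plus a narrow D0-like peak
bkg = rng.uniform(1.70, 2.02, 20000)
sig = rng.normal(1.865, 0.007, 1500)
edges = np.arange(1.70, 2.0201, 0.004)
counts, _ = np.histogram(np.concatenate([bkg, sig]), bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit the sidebands (signal window excluded) with a quadratic polynomial
lo, hi = 1.820, 1.910
sideband = (centers < lo) | (centers > hi)
coef = np.polyfit(centers[sideband], counts[sideband], deg=2)

# Count events in the window and subtract the interpolated background
window = ~sideband
n_window = counts[window].sum()
n_bkg = np.polyval(coef, centers[window]).sum()
err = np.sqrt(n_window + n_bkg)   # rough statistical error, ignoring fit-parameter errors
print(f"COUNT yield = {n_window - n_bkg:.0f} +/- {err:.0f} (1500 signal events generated)")
```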
We repeat this procedure on the generic Monte Carlo, thus performing the generic Monte Carlo check described in Sec. III. The 1.820-1.910 GeV exclusion gives the best CL, 28%; the narrower exclusion gives the worst CL, 6%; the wider exclusion gives an acceptable CL, 22%, in part because the wider the excluded region is, the larger the statistical errors become. Based on these results, we choose the data spectrum obtained with the 1.820-1.910 GeV exclusion as our result. The bin-by-bin rms spread of the three data spectra obtained with the different signal region exclusions is taken as the estimate of the systematic error of this procedure.
We have two valid measurements, one from the COUNT method and the other from double Gaussian fitting of the signal, both performed on the same statistical sample. Hence we take as our result the bin-by-bin arithmetic average of the spectrum obtained by double Gaussian fitting and the one obtained by the COUNT method with the optimal choice of the signal region exclusion, 1.820 < M cand < 1.910 GeV.
A. Fit parameter smoothing
The shape parameters of the signal and background functions are expected to depend smoothly on x p . By imposing this smoothness of the shape parameters we suppress, in part, the bin-to-bin (in x p ) statistical fluctuations in the spectra. This improves the accuracy of the shape of the spectra, particularly at low x p where statistics are poor. This parameter smoothing procedure was used also in our measurement of charm meson momentum spectra from B decay [26]. In the last paragraph of this subsection we show the extent of improvement obtained.
The parameters considered are: the mean of the double Gaussian (common to the two Gaussians), the width of the narrower Gaussian, σ 1 , the ratio of the widths of the wider to the narrower Gaussian, σ 2 /σ 1 , and the ratio of the area of the wider Gaussian to the total area, A 2 /A tot . We impose this smooth behavior by fitting the x p dependence of each shape parameter to a polynomial, at most quadratic, in x p .
We proceed in stages. We start by smoothing the parameter that shows the least fluctuations and repeat the M cand distribution fitting for all the x p bins, fixing that parameter to the value given by the smoothing function. We do this in sequence for all shape parameters. If a parameter does not show appreciable statistical fluctuations, we may skip smoothing it. It may take up to five iterations to smooth all the parameters.
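A single smoothing stage can be illustrated as follows; the per-bin parameter values and errors below are invented (they stand in for, e.g., the narrow-Gaussian width σ 1 ), and in the actual procedure the M cand fits are then repeated with the parameter fixed to the smoothed curve.

```python
import numpy as np

# Invented per-bin values of one shape parameter (a stand-in for sigma_1, in GeV)
xp = np.arange(0.175, 1.0, 0.05)
rng = np.random.default_rng(2)
sigma1 = 0.006 + 0.004 * xp + 0.002 * xp ** 2 + rng.normal(0.0, 3e-4, xp.size)
err = np.full_like(xp, 3e-4)

# Smooth the x_p dependence with an (at most) quadratic polynomial, weighted by the errors
coef = np.polyfit(xp, sigma1, deg=2, w=1.0 / err)
smoothed = np.polyval(coef, xp)

# The next round of M_cand fits would fix sigma_1 to 'smoothed', bin by bin
for x, s in zip(xp[:3], smoothed[:3]):
    print(f"x_p = {x:.3f}: smoothed sigma_1 = {1e3 * s:.2f} MeV")
```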
At each stage we get a new x p spectrum and check that we have not introduced any distortion to that spectrum. The check is performed by calculating the bin-by-bin ratio of the new spectrum to the original one where all the parameters were allowed to float (the "no smoothing" spectrum). This ratio should show only random fluctuations around unity. If the ratio shows any trend vs x p , e.g., if a slope and/or a curvature is needed to describe the x p dependence of the ratio, that smoothing stage is discarded. Fig. 5 shows three examples of these checks. When we perform a χ 2 fit of the ratios to a constant function (=1), we obtain CL of 94.6%, 91.0% and 38.0% respectively for the three examples shown. These are typical for all the retained smoothing steps.
We perform the smoothing procedure varying the sequence of smoothing stages. Each change of sequence leads to a spectrum that is slightly different from the other ones. If the CL of the generic Monte Carlo check for one of the sequences is considerably higher than the CL for the other ones, we take that spectrum as our result.
Comparison of spectra derived from different smoothing sequences provides a measure of the associated systematic error, as explained in Sec. VI C.
We use the generic Monte Carlo check discussed in Sec. III to see if the smoothing procedure improves the agreement between the reconstructed and the original spectrum, i.e., the spectrum that is the input to the Monte Carlo simulation. In the D 0 →K − π + case, when there is no smoothing, the spectrum produced by the analysis fits the original ("true") spectrum with a χ 2 = 25.1 for 15 d.o.f., i.e., CL = 5%. 3 When smoothing is used, the spectrum produced by the analysis fits the original spectrum with χ 2 = 7.0 for 15 d.o.f., i.e., CL = 95%. Thus, in this case, parameter smoothing produces a dramatic improvement. In the case of D + →K − π + π + , the CL improves appreciably from 7% to 13%. In the D ⋆+ →(K − π + )π + case, where the CL is already 93% without parameter smoothing, there is only a small improvement to a CL=97%. In the D ⋆0 →(K − π + )π 0 case the improvement is from CL=59% to CL=75%. As expected, the improvement is strong when the initial set of parameters show large fluctuations, smaller when the parameters show a fairly smooth behavior to start with.
V. DETECTION EFFICIENCY
For each channel we have two independent and statistically-compatible estimates of the detection efficiency, as explained in Sec. III. We take their weighted average, thus appreciably reducing the statistical error on the detection efficiency.
The detection efficiency should be a smooth function of x p . We use a second order polynomial to fit the x p dependence of the detection efficiency averaged over the signal and generic Monte Carlo samples. Adding a cubic term does not improve any of the fits. We call the result of this fit the "smoothed efficiency". In Appendix A, we show the detection efficiency dependence on x p for all the mesons and decay modes analyzed. In Figs. 23, 24, 25, the detection efficiencies obtained from the signal and generic Monte Carlo samples are plotted, and the curve resulting from the fit of their average to a polynomial is overlaid. This procedure results in a strong reduction of the statistical errors on the detection efficiency.
The detection efficiency corrected spectrum is obtained by dividing the raw signal yield by the smoothed efficiency, bin-by-bin in x p .
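A minimal sketch of this efficiency treatment (all arrays below are invented placeholders, not the measured efficiencies or yields): the two Monte Carlo estimates are combined with inverse-variance weights, the average is fitted to a quadratic in x p , and the raw yields are divided bin-by-bin by the smoothed curve.

```python
import numpy as np

xp = np.arange(0.125, 1.0, 0.05)
# Invented efficiency estimates from "signal" and "generic" Monte Carlo, with errors
eff_sig, err_sig = 0.30 + 0.10 * xp, np.full_like(xp, 0.010)
eff_gen, err_gen = 0.31 + 0.09 * xp, np.full_like(xp, 0.012)

# Weighted average of the two independent estimates
w_sig, w_gen = 1.0 / err_sig ** 2, 1.0 / err_gen ** 2
eff_avg = (w_sig * eff_sig + w_gen * eff_gen) / (w_sig + w_gen)
err_avg = 1.0 / np.sqrt(w_sig + w_gen)

# "Smoothed efficiency": second order polynomial fit of the average vs x_p
coef = np.polyfit(xp, eff_avg, deg=2, w=1.0 / err_avg)
eff_smooth = np.polyval(coef, xp)

# Efficiency-corrected spectrum (raw yields are again placeholders)
raw_yield = 1000.0 * np.exp(-0.5 * ((xp - 0.6) / 0.2) ** 2)
corrected = raw_yield / eff_smooth
print(np.round(corrected[:3], 1))
```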
VI. CHECKS AND ERROR ESTIMATION
A. Two Checks
1. Generic Monte Carlo checks
For each procedure used to reconstruct the spectra, we perform a "generic Monte Carlo check", as described in Sec. III. The confidence levels reported below in Table I show the consistency of the reconstructed spectrum with the original one. Since our interest is in the consistency of the shapes of the two spectra, we do the comparison after normalizing the areas of the two spectra to each other. The normalization differs from unity by at most 2.6%. Notice that in the generic Monte Carlo checks we can only use the signal Monte Carlo efficiency, not the averaged, smoothed efficiency described in the previous section (Sec. V).
TABLE I: Confidence levels of the fit of the generic Monte Carlo reconstructed spectrum to its input spectrum for the seven decay channels analyzed.

Decay channel                        C.L.
D + → K − π + π +                    18%
D 0 → K − π +                        72%
D 0 → K − π + π + π −                56%
D * + → (K − π + )π +                70%
D * + → (K − π + π + π − )π +        37%
D * 0 → (K − π + )π 0                76%
D * 0 → (K − π + π + π − )π 0        99%
2. Comparison of spectra from different decay modes
In the D 0 , D ⋆+ and D ⋆0 cases we obtain the respective spectra from two different D 0 decay modes. We check that the spectra from the two different decay modes are statistically compatible. We calculate the χ 2 of the difference, using only the statistical errors. The corresponding confidence levels are, respectively, 28%, 100% and 0.09%. After normalizing one to the other, the confidence levels become 85%, 100% and 84%. This test, however, is not very stringent because the comparison is dominated by the large statistical errors of the D 0 → K − π + π + π − channel.
B. Statistical Errors
The statistical errors on the efficiency-corrected yields are obtained by adding in quadrature the statistical error on the raw yield and the statistical error on the smoothed efficiency (Sec. V). The latter is generally considerably smaller than the former.
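In formula form (a sketch of standard uncorrelated error propagation, not quoted explicitly in the text), with N_i the raw yield, ε_i the smoothed efficiency, and σ_{N_i}, σ_{ε_i} their statistical errors in bin i:

\sigma_{\rm stat}\!\left(N_i/\varepsilon_i\right) = \frac{N_i}{\varepsilon_i}\,\sqrt{\left(\frac{\sigma_{N_i}}{N_i}\right)^{2} + \left(\frac{\sigma_{\varepsilon_i}}{\varepsilon_i}\right)^{2}} .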
C. Systematic Errors
We discuss here systematic errors that could affect the shape of the differential cross section dσ/dx p , although some of them are found to be independent of x p . Additional systematic errors that affect the normalization of the differential cross section, but not its shape, will be discussed in Sec. VIII on total cross sections.
1. Errors found to be independent of x p or negligible
We consider the following possible sources of systematic errors: (1) the choice of signal fitting function, (2) possibly incorrect simulation of the initial state radiation, (3) effects of swapping between background curvature and width of the wide Gaussian in M cand distribution fitting, and (4) effects of low detection efficiency for very low momentum tracks.
The test, described in Sec. IV, that uses a signal fitting function other than a double Gaussian, gives us a measure of the sensitivity of our results to the choice of signal fitting function. Based on that test, we attribute a systematic error of 1.6% from the choice of signal fitting function. The test shows no momentum dependence of the difference between the two methods.
We have considered the possibility that inaccurate simulation of initial state radiation (ISR) may have introduced a systematic error in our estimate of the detection efficiencies. We compare the detection efficiencies discussed in Sec. V with those obtained from Monte Carlo events where no ISR was produced. As expected, the latter is slightly higher than the former, but only by 1.1%, and its dependence on x p is negligible. Since our Monte Carlo does simulate the initial state radiation, the uncertainty is only in the accuracy of the simulation. We thus take half of that, 0.5%, as contribution to the systematic error on the cross sections.
Since the momentum dependence of these two uncertainties is found to be negligible, we take them into account only as errors in the total cross sections (Sec. VIII).
We considered the possibility of swapping between a background that is highly curved in the signal region and the wide component of the double Gaussian. The only two channels that show an appreciable background curvature are D 0 →K − π + π + π − and D + →K − π + π + . In the first case the full compatibility of the fits with the results of the COUNT procedure (Sec. IV 2; CL > 96% for both Monte Carlo samples and for data) shows that this swapping, if it exists, generates an error much smaller than the statistical error. In the D + case we performed the same test with the same result.
We considered the possibility of errors in the D ⋆+ detection efficiency because of the very rapid decrease in the charged track detection efficiency for momenta below 120 MeV/c. The detection efficiency is practically zero below 70 MeV/c. 4 We studied in detail the momentum distribution of the charged π ± daughter of the D ⋆± (the "slow pion") as a function of x p (D ⋆± ). Only for x p (D ⋆± ) < 0.40 are there slow pions with momentum below 120 MeV/c. From the momentum dependence of the track detection efficiency and the D ⋆± isotropic decay distribution [33], we can calculate the D ⋆± detection efficiency. The result is consistent with the one resulting from our generic and signal Monte Carlo simulation within their statistical errors.
Since we find the errors from these last two sources to be negligible, we disregard them.
2. Errors that affect the spectra shapes
The different sequences of parameter smoothing stages (described in Sec. IV A) lead to slightly different resulting spectra. We calculate the root-mean-square (rms) spreads of the yields for each x p bin over the spectra from different sequences. Since these rms spreads fluctuate statistically from bin to bin, as expected, we average them over groups of three bins. We take these rms spreads as systematic errors on the yields.
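The bookkeeping of this error can be sketched with invented numbers (in the analysis the rows below are the yields from the four or five smoothing sequences): take the bin-by-bin rms over the sequences and average it over groups of three adjacent x p bins.

```python
import numpy as np

rng = np.random.default_rng(3)
n_seq, n_bins = 5, 18
xp = np.linspace(0.125, 0.975, n_bins)
base = 1000.0 * np.exp(-0.5 * ((xp - 0.6) / 0.2) ** 2)

# Invented yields from n_seq smoothing sequences (rows) in each x_p bin (columns)
spectra = base + rng.normal(0.0, 0.01 * base, size=(n_seq, n_bins))

rms = spectra.std(axis=0, ddof=1)          # bin-by-bin rms spread over the sequences
# Average over groups of three adjacent bins to damp the fluctuations of the rms itself
grouped = rms.reshape(-1, 3).mean(axis=1)  # n_bins is a multiple of 3 here
syst = np.repeat(grouped, 3)               # per-bin systematic error from this source
print(np.round(syst, 1))
```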
As stated in Sec. III, we have both generic and signal Monte Carlo samples of events, and to the extent that our Monte Carlo correctly simulates data and detector, we can perform a test which gives comprehensive information on all systematic errors associated with our analysis procedures. We take the bin-by-bin difference between the generic Monte Carlo reconstructed spectrum and the input spectrum, and divide this, bin-by-bin, by the input spectrum, resulting in the distribution of the fractional difference vs x p . The weighted average, over the entire x p range, of the absolute values of these fractional differences (where the weights are the inverse square errors on the differences) can be considered as an estimate of the systematic error. It varies from 0.6% for the D 0 →K − π + channel to 1.4% for the D + →K − π + π + channel. The distributions of the fractional differences show negligible dependence on x p , meaning that this estimated systematic error does not seem to affect the shape of the spectra. Nevertheless we include these average differences as a component of the systematic error on the measured yields. In principle, this estimate of the systematic error also takes into account the "rms spreads" discussed in the previous paragraph. We decided, however, to be conservative, and have combined them in quadrature to obtain the total systematic error. Even with this possible overestimate, the systematic error generally makes the total error larger than the statistical error by only 10% to 30%.
D. Total errors
The statistical errors and the two systematic errors affecting the spectra shapes are listed, channel by channel, in Tables V-XI in Appendix B. These three errors are combined in quadrature to give the total errors relevant to the shape of our spectra.
VII. RESULTS ON THE SHAPE OF THE SPECTRA
A. The Final or Combined Spectrum.
For each D or D ⋆ meson and its decay chain, we obtain the spectrum by fitting the signal with a double Gaussian after smoothing the x p dependence of the Gaussian parameters, as described in Sec. IV A. When we also employ the COUNT method, as explained in Sec. IV 2, the spectrum that we report is the average of the spectrum obtained by fitting a double Gaussian and that obtained with the COUNT method. Details specific to each channel are given in the sections showing the respective spectra.
The spectra shown in the following are differential, inclusive production cross sections, dσ(e + e − → D ( * ) X)/dx p at √ s = 10.58 GeV fully corrected for detection efficiency and decay branching ratios.
We use the following decay branching ratios:
B(D 0 →K − π + )=(3.82±0.09)%, B(D 0 →K − π + π − π + )=(7.49±0.31)%, B(D + →K − π + π + )=(9.0±0.6)%, B(D ⋆+ →D 0 π + )=(67.6±0.5)%, B(D ⋆0 →D 0 π 0 )=(61.9±2.9)%.
They affect only the normalization, not the shape, of the spectra. Uncertainties in the branching ratios will be reflected in the systematic errors on the total cross sections, Sec. VIII.

B. D + Spectrum

Fig. 6 shows examples of fits to the M cand distributions in three representative x p bins, using fully smoothed parameters. Our result is shown in Fig. 7 and tabulated in App. B, Table V. The spectrum shown is obtained after smoothing the x p dependence of the double Gaussian shape parameters (see Sec. IV A) using the sequence that gives the best CL in the generic Monte Carlo check (Sec. VI A).

C. D 0 Spectrum

1. D 0 Spectrum from D 0 →K − π +

Fig. 8 shows examples of fits to the M cand distributions in three representative x p bins, using fully smoothed parameters. The D 0 inclusive, differential production cross section obtained from this decay mode is shown in Fig. 9 and in App. B, Table VI. It is obtained after smoothing the x p dependence of the double Gaussian shape parameters (see Sec. IV A) using the sequence that gives the best CL in the generic Monte Carlo check (Sec. VI A).

FIG. 9: Differential cross section dσ(e + e − → D 0 X)/dx p in pb from the D 0 → K − π + decay mode.

2. D 0 Spectrum from D 0 →K − π + π + π −

Fig. 10 shows examples of fits to the M cand distributions in three representative x p bins, with no parameter smoothing. Because of the large statistical errors, we find the Gaussian parameter smoothing procedure to be unreliable. However, as discussed in Sec. IV 2, for this mode we also use the COUNT method with three different widths of the excluded signal region in order to get part of the systematic error on this procedure.

FIG. 11: Differential cross section dσ(e + e − → D 0 X)/dx p in pb from the D 0 → K − π + π + π − decay mode.
The D 0 inclusive, differential production cross section obtained from this decay mode is shown in Fig. 11 and tabulated in App. B, Table VII. It is the arithmetic average of the one obtained by double Gaussian fits (without any Gaussian parameter smoothing) and the one produced with the COUNT procedure, excluding from the background fit the 1.820-1.910 GeV region. For the final statistical errors we take the average of the statistical errors associated with the two methods.
3. The Average D 0 Spectrum

The weighted average of the spectra obtained from the two D 0 decay modes analyzed is shown in Fig. 12 and tabulated in App. B, Table XII. The two JETSET generated spectra are explained in Sec. IX.
FIG. 12: Differential cross section dσ(e + e − → D 0 X)/dx p , weighted average of the spectra from the D 0 →K − π + and D 0 →K − π + π + π − decay modes, overlaid with the JETSET spectra generated with two different sets of parameters (Sec. IX).
D. The D ⋆+ Spectrum
In Sec. IV we described our procedure for selecting D ⋆+ candidates. The difference between the two M cand distributions shown in Fig. 2 eliminates random D 0 π + associations.
1. D ⋆+ Spectrum from D ⋆+ →D 0 π + →(K − π + )π +

The subtracted M cand distribution (Fig. 13) shows the additional backgrounds present in this D 0 decay mode. They have been handled as described in Sec. IV 1. The spectrum is shown in Fig. 14 and tabulated in App. B, Table VIII. It is the one obtained after smoothing the x p dependence of the double Gaussian shape parameters (see Sec. IV A) using the sequence that gave the best CL in the generic MC check (Sec. VI A).
FIG. 14: dσ(e + e − → D * + X)/dx p , from the D * + → D 0 π + → (K − π + )π + decay mode.
2. D ⋆+ Spectrum from D ⋆+ →D 0 π + →(K − π + π + π − )π +

Just as in the case of D 0 → K − π + π + π − , taking advantage of the narrowness of the signal over a background that is smooth and well determined over a large region, we use the COUNT procedure described in Sec. IV 2 with the signal region exclusion as optimized in that analysis (1.820-1.910 GeV). The Q selection reduces drastically the background with respect to the D 0 case, and we obtain good double Gaussian fits of the signal as shown, for three representative x p bins, in Fig. 15.

FIG. 15: Three examples of fits of the subtracted M (K − π + π − π + ) distributions for D * + → D 0 π + → (K − π + π − π + )π + candidates.

FIG. 16: dσ(e + e − → D * + X)/dx p from the D * + → D 0 π + → (K − π + π − π + )π + decay mode.
The spectrum is shown in Fig. 16 and tabulated in App. B, Table IX. It is the arithmetic average of the one obtained by double Gaussian fit, after full smoothing of the x p dependence of the double Gaussian shape parameters (see Sec. IV A), and the one produced with the COUNT procedure, excluding from the background fit the 1.820-1.910 GeV region.
3. The Average D ⋆+ Spectrum
FIG. 17: Differential cross section dσ(e + e − → D * + X)/dx p , weighted average of D ⋆+ →D 0 π + →(K − π + )π + and D ⋆+ →D 0 π + →(K − π + π + π − )π + spectra, overlaid with the JETSET spectra generated with two sets of parameters (Sec. IX).
The weighted average of the spectra obtained from the two decay modes analyzed is shown in Fig. 17 and tabulated in App. B, Table XII. The two JETSET generated spectra are explained in Sec. IX.
E. D ⋆0 Spectrum
To suppress random D 0 π 0 associations, we use the subtraction procedure already used for the D ⋆+ cases and illustrated in Fig. 2. Fig. 18 shows three examples of fits of the subtracted M cand distribution for this channel. Here too we add to the fitting functions the backgrounds described in Sec. IV 1.
1. D ⋆0 Spectrum from D ⋆0 →D 0 π 0 →(K − π + )π 0

FIG. 18: Three examples of fits of the M (K − π + ) distributions for D * 0 → D 0 π 0 → (K − π + )π 0 candidates.
The differential cross section is shown in Fig. 19 and tabulated in App. B, Table X. Among the different stage sequences in smoothing the Gaussian parameters (see Sec. IV A) we choose the one that gives the best CL in the generic MC check (Sec. VI A).
FIG. 19: dσ(e + e − → D * 0 X)/dx p , from the D * 0 → D 0 π 0 → (K − π + )π 0 decay mode.
2. D ⋆0 Spectrum from D ⋆0 →D 0 π 0 →(K − π + π + π − )π 0

Fig. 20 shows, for three representative x p bins, the fits of the subtracted M cand distribution, using a double Gaussian and a polynomial background.
Because of the smaller decay branching ratio and the smaller detection efficiency, due to the presence of a π 0 , the statistical errors are quite large, especially for x p < 0.50, where we can use only the continuum events. We have used both the COUNT procedure and the double Gaussian signal fitting (without parameter smoothing) to get the D ⋆0 yield.
FIG. 21: dσ(e + e − → D * 0 X)/dx p , from the D * 0 → D 0 π 0 → (K − π + π − π + )π 0 decay mode.
The spectrum is shown in Fig. 21 and tabulated in App. B, Table XI. It is the arithmetic average of that obtained by fitting the signal with the double Gaussian (smoothed parameters) and the one obtained by the COUNT method using the signal region exclusion optimized in that analysis (1.820-1.910 GeV).
3. The Average D ⋆0 Spectrum

The weighted average of the spectra obtained from the two decay modes analyzed is shown in Fig. 22 and listed in App. B, Table XII. The two JETSET generated spectra are explained in Sec. IX.
FIG. 22: dσ(e + e − → D * 0 X)/dx p , weighted average of the D ⋆0 →D 0 π 0 →(K − π + )π 0 and D ⋆0 →D 0 π 0 →(K − π + π + π − )π 0 decay modes. Overlaid are the JETSET spectra generated with two sets of parameters (Sec. IX).
VIII. RESULTS FOR THE TOTAL CROSS SECTIONS AND AVERAGE x p
The production cross section for each channel is shown in Table III. It is calculated by summing each differential cross section bin-by-bin. The first error in the table is the statistical error, obtained by combining in quadrature the statistical errors in each bin. If the yield in the lowest few bins cannot be reliably measured, the cross section is corrected by extrapolating the spectrum to x p = 0 using the JETSET distribution that fits the spectrum, discussed in Sec. IX. This correction is between 0.2% and 6%.
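As a simple arithmetic cross-check of this bookkeeping (a sketch only; the published value includes the low-x p extrapolation and the exact bin treatment used in the analysis), multiplying the D + values of Table V by the 0.05 bin width and summing the measured bins gives about 635 pb, close to the 640 pb quoted in Table III:

```python
# dsigma/dx_p values for D+ -> K- pi+ pi+ from Table V (pb); x_p bins of width 0.05
dsdx = [161, 320, 356, 413, 693, 909, 1042, 1271, 1357, 1370,
        1291, 1129, 952, 694, 449, 223, 74]
total = 0.05 * sum(dsdx)   # integral over the measured 0.15 < x_p < 1.00 range
print(f"{total:.0f} pb before the low-x_p extrapolation (640 pb quoted in Table III)")
```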
In Table II we list, channel by channel, the components of the systematic error on the production cross sections. In the first column we report the rms spread of the cross sections obtained by the four or five smoothing sequences used for each channel. The discrepancy between the areas of the input and reconstructed spectra in the generic Monte Carlo check (Sec. VI A) is shown in the second column. In the third column we list the percent difference between the integral of the spectra obtained using the double Gaussian and the one that uses the TAGMC signal shape (Sec. IV). This error is not considered for the channels where the D 0 decays to K − π + π + π − , because of the use of the COUNT procedure for those channels. We assume a 10% error on the extrapolation and show it in column 4. The remaining systematic errors are estimated and discussed in a series of CLEO internal notes and are used in all CLEO analyses where they are relevant. We estimate a 1% per track uncertainty in the charged-track detection efficiency and 0.8% per track for particle identification efficiency. The choice of track quality and geometrical cuts results in an error of 0.5%, also per track. The per track errors, being coherent, are multiplied by the number of tracks in the decay, and are shown in columns 5, 6, and 7. The π 0 detection uncertainty is estimated to be 3% per π 0 (column 8). As discussed in Sec. VI C, we attribute a 0.5% error due to possible inaccuracies in the Monte Carlo simulation of the initial state radiation. The error on the integrated luminosity is estimated as 1.9%.
TABLE II: Systematic errors described in the text. Some are listed as percent of the cross section, other ones directly in pb. The momentum dependent systematic errors are listed also in the tables in App. B. The error due to the uncertainty on the branching ratio is shown only in Table III.
Columns: (1) rms of procedures, (2) gMC check, (3) signal shape, (4) extrapolation, (5) track det. eff., (6) part. ID, (7) other sel., (8) π 0 det., (9) ISR sim., (10) Lum.

Decay channel                    (1)    (2)    (3)    (4)     (5)   (6)    (7)    (8)   (9)    (10)
D + → K − π + π +                5pb    15pb   1.6%   0.5pb   3%    2.4%   1.5%   -     0.5%   1.9%
D 0 → K − π +                    22pb   8pb    1.6%   0.4pb   2%    1.6%   1.0%   -     0.5%   1.9%
D 0 → K − π + π + π −            41pb   29pb   -      3.2pb   4%    3.2%   2.0%   -     0.5%   1.9%
D * + → (K − π + )π +            8pb    15pb   1.6%   0.9pb   3%    2.4%   1.5%   -     0.5%   1.9%
D * + → (K − π + π + π − )π +    17pb   7pb    -      3.3pb   5%    4.0%   2.5%   -     0.5%   1.9%
D * 0 → (K − π + )π 0            11pb   10pb   1.6%   3.6pb   2%    1.6%   1.0%   3%    0.5%   1.9%
D * 0 → (K − π + π + π − )π 0    45pb   12pb   -      1.1pb   4%    3.2%   2.0%   3%    0.5%   1.9%
These systematic errors are combined in quadrature to give the systematic error on the cross section, the second entry in Table III.

TABLE III: Total production cross sections and average x p , as derived from each decay mode. The cross section errors are, in this order, the statistical error, the systematic error and the error due to the uncertainty on the branching ratio.

Decay channel                              Total Cross Section (pb) at 10.5 GeV C.M.E.
D + → K − π + π +                          σ(e + e − → D + X) = 640 ± 14 ± 35 ± 43
D 0 → K − π +                              σ(e + e − → D 0 X) = 1521 ± 16 ± 62 ± 36
D 0 → K − π + π + π −                      σ(e + e − → D 0 X) = 1579 ± 55 ± 102 ± 63
D * + → D 0 π + → (K − π + )π +            σ(e + e − → D * + X) = 583 ± 8 ± 33 ± 14
D * + → D 0 π + → (K − π + π + π − )π +    σ(e + e − → D * + X) = 572 ± 26 ± 45 ± 24
D * 0 → D 0 π 0 → (K − π + )π 0            σ(e + e − → D * 0 X) = 559 ± 24 ± 35 ± 29
D * 0 → D 0 π 0 → (K − π + π + π − )π 0    σ(e + e − → D * 0 X) = 616 ± 32 ± 62 ± 39
We calculate < x p > for the D + spectrum and for the spectra of D 0 , D * + and D * 0 averaged over the decay modes. We supplement the data spectrum in the lowest bins using the JETSET spectra normalized to the measured spectra. We take the errors on these "borrowed" cross sections to be roughly comparable to the data in nearby bins. The results are shown in Table IV.

TABLE IV: < x p > for the four charm mesons considered. The first error is statistical, the second systematic.

Meson    < x p >                      Meson    < x p >
D +      0.582 ± 0.008 ± 0.004        D * +    0.611 ± 0.007 ± 0.004
D 0      0.570 ± 0.005 ± 0.004        D * 0    0.596 ± 0.009 ± 0.004
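Up to the low-x p supplement just described, the entries of Table IV correspond to the bin-weighted mean (a sketch of the definition, with x_i the bin centers and Δx_p = 0.05; the exact treatment of the supplemented bins follows the text above):

\langle x_p \rangle = \frac{\sum_i x_i\,(d\sigma/dx_p)_i\,\Delta x_p}{\sum_i (d\sigma/dx_p)_i\,\Delta x_p} .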
IX. OPTIMIZATION OF JETSET PARAMETERS
Largely for internal use of our collaboration, we perform a simple fit of the D 0 spectrum (from the D 0 →K − π + decay mode) varying the three JETSET parameters that are most important for the shape of the spectrum. The first and second are the parameters a and b appearing in the "Lund Symmetric Fragmentation Function" [21,22]:
f(z) = N \frac{(1-z)^{a}}{z} \exp\!\left(-\frac{b\, m_{\perp}^{2}}{z}\right)    (3)
where z is the reduced energy x E , or momentum x p , of the hadron and m_⊥^2 = m^2 + p_⊥^2 , with m being the hadron mass and p ⊥ the component of the hadron momentum perpendicular to the jet axis.
The third parameter is the probability P V that a meson of given flavor be generated as a vector meson, rather than pseudoscalar or tensor, P V ≡ V /(P + V + T ). The data indicate, as expected, that the majority of D 0 's are not produced directly in the fragmentation of the charm quark, but from the decay of D ⋆ 's. In JETSET [24] these parameters are PARJ(41), PARJ(42) and PARJ(13).
The result of the fit of the D 0 spectrum (in the K − π + decay mode) is: a = 0.178 ± 0.007, b = 0.393±0.006, P V = 0.627 ± 0.015.
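Equation (3) with these fitted values can be evaluated directly. The sketch below is for illustration only: it assumes the D 0 mass and an arbitrary fixed p ⊥ of 0.3 GeV (the JETSET generation integrates over the p ⊥ spectrum and applies the function at the fragmentation level, not to the observed x p ), and normalizes f(z) numerically.

```python
import numpy as np

def lund_f(z, a=0.178, b=0.393, m=1.865, pt=0.3):
    """Lund symmetric fragmentation function of Eq. (3), up to the normalization N.
    m and pt are in GeV and enter through m_perp^2 = m^2 + pt^2; pt is fixed here
    purely for illustration."""
    m_perp2 = m ** 2 + pt ** 2
    return (1.0 - z) ** a / z * np.exp(-b * m_perp2 / z)

z = np.linspace(0.05, 0.995, 500)
f = lund_f(z)
f /= f.sum() * (z[1] - z[0])      # crude numerical normalization
print(f"f(z) peaks near z = {z[np.argmax(f)]:.2f} for a = 0.178, b = 0.393")
```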
Keeping P V fixed at the naive value P V = 0.75, we obtain a = 0.223±0.009 and b = 0.438±0.005. In both cases the quoted errors are simple statistical errors. Correlations between parameters are not evaluated. The spectra resulting from these parameterizations are shown in Figs. 7, 12, 17, 22. Notice that we do not consider our results for the D + , D ⋆+ and D ⋆0 spectra in the optimization process. However, a posteriori we see, visually from the figures, that the spectra generated with these parameters seem to reproduce rather accurately also the D + , D ⋆+ and D ⋆0 experimental distributions. However, it is not obvious which one of the two sets, the one with P V = 0.672 or the one with P V = 0.75, should be preferred. Furthermore, these parameters, while useful for the Monte Carlo simulation of D and D ⋆ spectra at the c.m. energy of our and similar experiments, should not be taken as having general validity and theoretical significance. In fact, the D + s spectrum generated by JETSET with our fitted parameters disagrees appreciably with the spectrum measured by the CLEO [34] and BaBar [35] collaborations. It should be noted that the effect of these parameters may also be influenced by the value of other JETSET parameters.
X. CONCLUSIONS
We have measured the momentum distribution of D 0 , D + , D ⋆+ and D ⋆0 produced in nonresonant e + e − annihilation at a CME of about 10.5 GeV. These distributions can be used to guide and check QCD calculations of fragmentation functions needed to predict heavy meson production in both e + e − annihilation and hadron collisions at very high energy. The D 0 spectrum was used to determine the JETSET parameters that best reproduce it, and we found that, with these parameters, the D + , D ⋆+ and D ⋆0 spectra (but not the D + s spectrum) are also well reproduced.
APPENDIX A: PLOTS OF DETECTION EFFICIENCIES VS x p

In the following figures we show the detection efficiency dependence on x p for all the mesons and decay modes analyzed. The detection efficiencies obtained from the signal and generic MC simulations are plotted, together with the curve resulting from the fit of their weighted average to a polynomial.
FIG. 24: Comparison of the detection efficiencies obtained from the signal and generic Monte Carlo and their smoothed average: (a) for the D * + → D 0 π + → (K − π + )π + channel, (b) for the D * + → D 0 π + → (K − π + π − π + )π + channel.
FIG. 25: Comparison of the unsmoothed detection efficiencies obtained from the signal and generic Monte Carlo: (a) for the D * 0 → D 0 π 0 → (K − π + )π 0 , (b) for the D * 0 → D 0 π 0 → (K − π + π − π + )π 0 channel.
APPENDIX B: TABLES OF DIFFERENTIAL CROSS SECTIONS
In the following tables, we report the quantity dσ/dx p in pb. Notice that the systematic and total errors are errors on the bin content (i.e., the first column). The first column of systematic errors is obtained from the rms spread of yields for the different procedures used to calculate the spectrum. The second column of systematic errors is derived from the "generic MC check" described in Sec. VI A. These are the errors relevant to the shape of the spectra, i.e., they do not include the systematic errors that are common to the whole momentum range and that contribute to the error on the cross section (Sec. VIII).

TABLE VII: dσ(e + e − → D 0 X)/dx p in pb; (D 0 →K − π + π + π − ).
TABLE IX: dσ(e + e − → D * + X)/dx p in pb; (D ⋆+ →(K − π + π + π − )π + ).
TABLE XII: Differential cross sections dσ/dx p in pb for D + , D 0 , D * + and D * 0 . The last three columns are weighted averaged over the two decay modes. The errors are the quadratic combination of the statistical and systematic errors, excluding the errors, discussed in Sec. VIII, that affect the total cross section but not the shape of the spectrum.
FIG. 1: Resolution in x p for the D 0 →K − π + channel. All momenta.
FIG. 2: Examples of M cand distribution for two D ⋆+ decay channels and one of the D ⋆0 channels analyzed. (a) D ⋆+ →
FIG. 3: (a) Overlay of D 0 spectra (data) from double Gaussian and TAGMC shape signal fitting; (b) fractional difference of the two spectra.
FIG. 4: Buildup of the background from its components to fit the M (K − π + ) distribution. The solid histogram is data. Notice the offset on the yield axis.
FIG. 5: Ratios of data spectrum after double Gaussian shape parameter smoothing to the one obtained without smoothing: (a) D + →K − π + π + , (b) D 0 →K − π + , (c) D ⋆+ →(K − π + )π + .
FIG. 6: Three examples of M (K − π + π + ) distribution fits. Notice the large vertical scale offsets.

FIG. 7: Differential cross section dσ(e + e − → D + X)/dx p in pb from the D + →K − π + π + decay mode. (a) shows explicitly the total and statistical errors. (b) the same spectrum overlaid with the JETSET spectra generated with two different sets of parameters (Sec. IX).

FIG. 8: Three examples of M (K − π + ) distribution fits. Notice the large y offsets.
FIG. 10: Three examples of M (K − π + π + π − ) distribution fits. Notice the large y offsets.
FIG. 13: Three examples of fits of the subtracted M (K − π + ) distributions for D * + → D 0 π + → (K − π + )π + candidates.
FIG. 20: Three examples of fits of the subtracted M (K − π + π − π + ) distributions for D * 0 → D 0 π 0 → (K − π + π − π + )π 0 candidates.
FIG. 23: Direct comparison of the detection efficiencies from signal and generic Monte Carlo and the result of smoothing their average: (a) for the D + →K − π + π + channel, (b) for the D 0 →K − π + channel, and (c) for the D 0 →K − π + π + π − channel.
TABLE V: dσ(e + e − → D + X)/dx p in pb; (D + →K − π + π + ).

x p          dσ/dx p   Statistical   Systematic   Systematic   Total
             (pb)      (pb)          (pb)         (pb)         (pb)
0.15-0.20      161       78            27             3           83
0.20-0.25      320       76            53             5           92
0.25-0.30      356       70            59             6           92
0.30-0.35      413       64            68             7           94
0.35-0.40      693       58            11            11           60
0.40-0.45      909       52            14            15           56
0.45-0.50     1042       47            16            17           53
0.50-0.55     1271       25            20            21           38
0.55-0.60     1357       22            21            22           38
0.60-0.65     1370       19            21            22           36
0.65-0.70     1291       17            20            21           34
0.70-0.75     1129       15            17            18           29
0.75-0.80      952       13            15            16           25
0.80-0.85      694       10            11            11           19
0.85-0.90      449        8             7             7           13
0.90-0.95      223        5             3             4            7
0.95-1.00       74        3             1             1            4
TABLE VI: dσ(e + e − → D 0 X)/dx p in pb; (D 0 →K − π + ).

x p          dσ/dx p   Statistical   Systematic   Systematic   Total
             (pb)      (pb)          (pb)         (pb)         (pb)
0.10-0.15      196       86            73             1          113
0.15-0.20      507       92           188             3          209
0.20-0.25      597       85           221             3          237
0.25-0.30      891       76            37             5           85
0.30-0.35     1154       68            48             7           84
0.35-0.40     1665       63            70            10           95
0.40-0.45     2341       61            98            13          116
0.45-0.50     2889       59           121            17          136
0.50-0.55     3178       35            42            18           57
0.55-0.60     3444       34            45            20           60
0.60-0.65     3345       34            44            19           58
0.65-0.70     2984       33            39            17           54
0.70-0.75     2542       31            33            15           48
0.75-0.80     1997       29            26            11           41
0.80-0.85     1380       25            18             8           32
0.85-0.90      831       19            11             5           23
0.90-0.95      337       11             4             2           12
0.95-1.00       78        5             1             0.4          6
The signal and the side-band regions are defined as follows. We fit the "global" (i.e., all momenta) Q distribution with a double Gaussian plus a suitable background. The ratio SIG2/SIG1 of the widths of the two Gaussians is, in all cases, about 2.2. We choose the signal region to be MEAN±n*SIG2, where n (which turns out to be about 2 in all channels) is evaluated from the Gaussian integral tables, requiring that the whole area of the narrow Gaussian plus the area within ±n*SIG2 of the wider Gaussian amount to 98% of the double-Gaussian area. For the side bands, on each side, we skip n*SIG2 and then take a region n*SIG2 wide.
Since our aim is to measure the shape of the spectra, irrespective of normalization, these χ 2 and related CLs are calculated after normalizing the reconstructed spectrum to the original one, thus resulting in an increase of the CLs.
The charged track detection efficiency has been carefully studied in a series of CLEO internal documents (unpublished).
Acknowledgments

We gratefully acknowledge the effort of the CESR staff in providing us with excellent luminosity and running conditions. G. Moneti thanks M. Cacciari
[1] G. Alexander et al. [OPAL Collaboration], Phys. Lett. B364, 93 (1995).
[2] K. Abe et al. [SLD Collaboration], Phys. Rev. Lett. 84, 4300 (2000) [arXiv:hep-ex/9912058].
[3] A. Heister et al. [ALEPH Collaboration], Phys. Lett. B512, 30 (2001) [arXiv:hep-ex/0106051].
[4] B. Adeva et al. [L3 Collaboration], Phys. Lett. B261, 177 (1991).
[5] H. Albrecht et al. [ARGUS Collaboration], Z. Phys. C52, 353 (1991).
[6] D. Bortoletto et al. [CLEO Collaboration], Phys. Rev. D37, 1719 (1988).
[7] O. Biebel, P. Nason and B. R. Webber, [arXiv:hep-ph/0109282] v2; an abbreviated version is in [32].
[8] P. Nason and B. R. Webber, Nucl. Phys. B421, 473 (1994) [Erratum-ibid. B480, 755 (1996)].
[9] E. Ben-Haim et al., [arXiv:hep-ph/0302157].
[10] M. Cacciari and S. Catani, Nucl. Phys. B617, 253 (2001) [arXiv:hep-ph/0107138].
[11] B. Abbott et al. [D0 Collaboration], Phys. Lett. B487, 264 (2000).
[12] D. Acosta et al. [CDF Collaboration], Phys. Rev. D65, 052005 (2002).
[13] D. Acosta et al. [CDF Collaboration], Phys. Rev. Lett. 91, 241804 (2003) [arXiv:hep-ex/0307080].
[14] S. Aid et al. [H1 Collaboration], Z. Phys. C72, 593 (1996).
[15] S. Chekanov et al. [ZEUS Collaboration], [arXiv:hep-ex/0308068].
[16] M. Cacciari and E. Gardi, Nucl. Phys. B664, 299 (2003) [arXiv:hep-ph/0301047].
[17] G. Altarelli and G. Parisi, Nucl. Phys. B126, 298 (1977).
[18] W. Furmanski and R. Petronzio, Phys. Lett. B97, 437 (1980).
[19] R. Barate et al. [ALEPH Collaboration], Eur. Phys. J. C16, 597-611 (2000) [arXiv:hep-ex/9909032].
[20] X. Artru and G. Menessier, Nucl. Phys. 70, 93 (1974).
[21] Bo Andersson et al., Phys. Reports 97, 31 (1983).
[22] Bo Andersson, "The Lund Model", Cambridge U. Press (1998).
[23] G. Marchesini et al., Comp. Phys. Comm. 67, 465 (1992); G. Corcella et al., JHEP 0101, 010 (2001).
[24] T. Sjostrand, Comp. Phys. Comm. 82, 74-89 (1994); T. Sjostrand, "PYTHIA 5.7 and JETSET 7.4 Physics and Manual" [hep-ph/9508391].
[25] S. Chun and C. Buchanan, Phys. Reports 292, 239 (1998).
[26] L. Gibbons et al. [CLEO Collaboration], Phys. Rev. D56, 3783 (1997).
[27] Y. Kubota et al. [CLEO Collaboration], Nucl. Instrum. Methods Phys. Res., Sec. A 320, 66 (1992).
[28] T. Hill et al. [CLEO Collaboration], Nucl. Instrum. Methods Phys. Res., Sec. A 418, 32 (1998).
[29] HRS Collaboration, M. Derrick et al., Phys. Lett. 246B, 261 (1984); TPC/Two-Gamma Collaboration, H. Aihara et al., Phys. Rev. D34, 1945 (1986); TASSO Collaboration, M. Althoff et al., Phys. Lett. 126B, 493 (1983); JADE Collaboration, W. Bartel et al., Phys. Lett. 161B, 197 (1985).
[30] T. Sjöstrand, Comp. Phys. Comm. 39, 347 (1986); T. Sjöstrand and M. Bengston, Comp. Phys. Comm. 43, 367 (1987).
[31] R. Brun et al., "GEANT, Detector Description and Simulation Tool", CERN Program Library Long Writeup W5013, 1993.
[32] D. E. Groom et al. [PDG], Eur. Phys. Jour. C15, 1 (2000); K. Hagiwara et al. [PDG], Phys. Rev. D66, 010001 (2002); L. Alvarez-Gaume et al. [PDG], Phys. Lett. B592, 1 (2004).
[33] G. Branderburg et al. [CLEO Collaboration], Phys. Rev. D58, 052003 (1998).
[34] R. A. Briere et al. [CLEO Collaboration], Phys. Rev. D62, 072003 (2000).
[35] B. Aubert et al. [BABAR Collaboration], Phys. Rev. D65, 091104 (2002) [hep-ex/0201041].
| []
|
[
"An Adaptive Black-box Defense against Trojan Attacks (TROJDEF)",
"An Adaptive Black-box Defense against Trojan Attacks (TROJDEF)"
]
| [
"Guanxiong Liu ",
"Abdallah Khreishah ",
"Fatima Sharadgah \nComputer Science Department\nUniversity of Science & Technology\nIrbidJordan, Jordan\n",
"Issa Khalil \nQatar Computing Research Institute\nHBKU\nDohaQatar\n",
"\nElectrical and Computer Engineering Department\nNew Jersey Institute of Technology\n07102NewarkNJUSA\n"
]
| [
"Computer Science Department\nUniversity of Science & Technology\nIrbidJordan, Jordan",
"Qatar Computing Research Institute\nHBKU\nDohaQatar",
"Electrical and Computer Engineering Department\nNew Jersey Institute of Technology\n07102NewarkNJUSA"
]
| []
| Trojan backdoor is a poisoning attack against Neural Network (NN) classifiers in which adversaries try to exploit the (highly desirable) model reuse property to implant Trojans into model parameters for backdoor breaches through a poisoned training process. To misclassify an input to a target class, the attacker activates the backdoor by augmenting the input with a predefined trigger that is only known to her/him. Most of the proposed defenses against Trojan attacks assume a white-box setup, in which the defender either has access to the inner state of NN or is able to run back-propagation through it. In this work, we propose a more practical black-box defense, dubbed TROJDEF. In a black-box setup, the defender can only run forward-pass of the NN. TROJDEF is motivated by the Trojan poisoned training, in which the model is trained on both benign and Trojan inputs. TROJDEF tries to identify and filter out Trojan inputs (i.e., inputs augmented with the Trojan trigger) by monitoring the changes in the prediction confidence when the input is repeatedly perturbed by random noise. We derive a function based on the prediction outputs which is called the prediction confidence bound to decide whether the input example is Trojan or not. The intuition is that Trojan inputs are more stable as the misclassification only depends on the trigger, while benign inputs will suffer when augmented with noise due to the perturbation of the classification features.Through mathematical analysis, we show that if the attacker is perfect in injecting the backdoor, the Trojan infected model will be trained to learn the appropriate prediction confidence bound, which is used to distinguish Trojan and benign inputs under arbitrary perturbations. However, because the attacker might not be perfect in injecting the backdoor, we introduce a nonlinear transform to the prediction confidence bound to improve the detection accuracy in practical settings. Extensive empirical evaluations show that TROJDEF significantly outperforms the-stateof-the-art defenses and is highly stable under different settings, even when the classifier architecture, the training process, or the hyper-parameters change. | 10.1109/tnnls.2022.3204283 | [
"https://export.arxiv.org/pdf/2209.01721v1.pdf"
]
| 252,090,126 | 2209.01721 | e52259dfef61f56bde4771b6d62a584b46b9913d |
An Adaptive Black-box Defense against Trojan Attacks (TROJDEF)
Guanxiong Liu
Abdallah Khreishah
Fatima Sharadgah
Computer Science Department
University of Science & Technology
IrbidJordan, Jordan
Issa Khalil
Qatar Computing Research Institute
HBKU
DohaQatar
Electrical and Computer Engineering Department
New Jersey Institute of Technology
07102NewarkNJUSA
An Adaptive Black-box Defense against Trojan Attacks (TROJDEF)
1Index Terms-Neural NetworkPoisoning AttackTrojan Back- doorBlack-box Defense
Trojan backdoor is a poisoning attack against Neural Network (NN) classifiers in which adversaries try to exploit the (highly desirable) model reuse property to implant Trojans into model parameters for backdoor breaches through a poisoned training process. To misclassify an input to a target class, the attacker activates the backdoor by augmenting the input with a predefined trigger that is only known to her/him. Most of the proposed defenses against Trojan attacks assume a white-box setup, in which the defender either has access to the inner state of NN or is able to run back-propagation through it. In this work, we propose a more practical black-box defense, dubbed TROJDEF. In a black-box setup, the defender can only run forward-pass of the NN. TROJDEF is motivated by the Trojan poisoned training, in which the model is trained on both benign and Trojan inputs. TROJDEF tries to identify and filter out Trojan inputs (i.e., inputs augmented with the Trojan trigger) by monitoring the changes in the prediction confidence when the input is repeatedly perturbed by random noise. We derive a function based on the prediction outputs which is called the prediction confidence bound to decide whether the input example is Trojan or not. The intuition is that Trojan inputs are more stable as the misclassification only depends on the trigger, while benign inputs will suffer when augmented with noise due to the perturbation of the classification features.Through mathematical analysis, we show that if the attacker is perfect in injecting the backdoor, the Trojan infected model will be trained to learn the appropriate prediction confidence bound, which is used to distinguish Trojan and benign inputs under arbitrary perturbations. However, because the attacker might not be perfect in injecting the backdoor, we introduce a nonlinear transform to the prediction confidence bound to improve the detection accuracy in practical settings. Extensive empirical evaluations show that TROJDEF significantly outperforms the-stateof-the-art defenses and is highly stable under different settings, even when the classifier architecture, the training process, or the hyper-parameters change.
I. INTRODUCTION
Neural network (NN) classifiers have been widely used in computer vision and image processing applications [1], [2], [3]. However, current research shows that NNs are vulnerable to different kinds of attacks [4], [5]. Recently, Trojan attacks have been introduced as severe threats to NN classifiers [5], [6], [7]. In Trojan attacks, adversaries try to manipulate the NN model by poisoning the training data, interfering with the training process, or both. In [5], part of the training inputs and their corresponding labels are manipulated to implant the backdoor, which can be activated during inference by a predefined Trojan trigger. The adversaries in [7] interfere with the training process to access the model's extracted features and implant the backdoor without perturbing the labels of the training data, while in [6], the attacker manipulates the training process of the model in order to design the Trojan trigger based on the inner information of the model. Introduced NN techniques for practical considerations such as model sharing on public domains [8], or for privacy considerations such as joint training on distributed private users' data [9], [10] make the Trojan attack a realistic threat to NN applications. For example, GitHub, Tekla, and Kaggle 1 allow users to upload and publish their self-trained models. Since these platforms are open access, the adversary can upload poisoned models, which others may download and reuse. Moreover, many users outsource their model's training to cloud-based platforms, including trusted 3rd parties (e.g., Google, Microsoft, Amazon, etc.). Under such scenarios, the training process is also at risk of being poisoned, especially when the service provider is untrustworthy or may have insider attackers who might have access to the training process.
Different from adversarial perturbation [11], the Trojan backdoor is usually content independent. In other words, the Trojan backdoor is activated by a pre-defined trigger which can be applied to multiple different examples. As a result, the whitening method proposed in [12] which is effective against global adversarial perturbation cannot mitigate the Trojan trigger which is a local pattern. Similarly, the feature squeezing on color channel or smoothing proposed in [13] also fails to defend the Trojan attack. Although adding random noise is proposed in [14], it cannot defend against Trojan attack without the prediction confidence analysis and empirical enhancements that are presented in this work. Lastly, the regeneration process in [15] relies on a classifier trained on benign data while the classifier with Trojan backdoor is being poisoned during the training. Therefore, many techniques have been proposed to defend against Trojan attacks [16], [17], [18]. These techniques can be broadly categorized into whitebox [16], [17] and black-box approaches [18]. White-box approaches require access to the inner state of the model or need to run back-propagation through it. For example, Fine-pruning has been proposed in [16] to eliminate backdoors by removing redundant connections and fine-tuning weights of the NN model. Neural Cleanse [17] and DeepInspect [19] are similar approaches that try to eliminate backdoors by reverseengineering the Trojan trigger which utilizes the gradient information from the model. Lastly, authors in [20] takes the inner representation generated by the model while the work presented in [21] assumes an attack-free environment to prepare shadow models on the same task which is also unrealistic. In our opinion, the requirements of the white-box defenses limit the usability in real-world applications 2 On the other hand, black-box approaches can only run forward-pass with the NN model (i.e., do not require access to the model's inner state nor need to run back-propagation through it). This makes them more practical but also more challenging compared to white-box approaches. Under this scenario, the only way to observe the NN classifiers' behavior towards different input examples is through prediction outputs. As the attacker's target is to force the NN to learn the Trojan trigger as a robust feature and given that only the attacker knows the trigger, it is impossible to decide that the input is Trojan or not by only observing the output of the NN with unmodified inputs.
In this work, we propose a black-box Trojan defense, dubbed TROJDEF. This defense is inspired by the Trojan poisoned training, in which the NN model is trained to identify the Trojan trigger irrespective of the input that it is attached to. Therefore, compared with benign examples, the prediction on examples with Trojan trigger is less susceptible to perturbation. TROJDEF utilizes this characteristic to distinguish Trojan inputs by simply perturbing the input and observing the stability of the prediction of the model. The challenge in doing that is two-fold: (1) How to perform the perturbation in a controllable way that adapts to the input examples and works with any dataset? (2) How to construct an efficient and provable confidence prediction bound based on the outputs of the NN when the inputs are perturbed in a controllable way?
TROJDEF tackles the first challenge by perturbing input examples with a noise drawn from a random variable. Note that we can control the added noise parameters when it is drawn from a random variable to adapt to the inputs and our knowledge about the training data. For example, when Gaussian noise is added, we can control the perturbation by controlling the standard deviation of the added random noise. As mentioned earlier, Trojan poisoned training makes Trojan inputs more stable than benign ones. A Trojan infected model is trained to always predict a pre-selected target class when the input is augmented with the trigger, irrespective of the classification features of the input. When the trigger is not included (i.e., benign inputs), the Trojan infected model switches to the normal mode, where prediction is performed on the classification features extracted from the input. Adding an appropriate level of random noise to a Trojan example 2 cloud.google.com/products/ai, aws.amazon.com/machine-learning/ may affect the trigger only on a few (if any) of the noisy versions of the example, and hence, the model will predict the correct class in most of the versions. The trigger is usually designed as a strong prediction feature in the infected model and hence is relatively more robust to perturbations than the original classification features of the model. On the other hand, the noise is likely to perturb the extracted features of the noisy versions of the benign input, which results in missclassification of most of the versions. To address the second challenge, TROJDEF utilizes the quantifiability brought by utilizing the random noise to prove that we can derive a bound that can distinguish Trojan examples from benign ones under restricted assumptions and enhance the bound under more realistic settings.
To the best of our knowledge, there is only one existing work, STRIP [18], that proposes a black-box defense against Trojan attacks. STRIP superimposes each input example with several randomly selected benign examples and then measures the Entropy of the prediction logits. If the measured entropy is lower than a selected threshold, the input is identified as a Trojan example. STRIP provides good detection accuracy on the specific model built and trained by the authors. However, if the model is changed, or the training process is done differently, STRIP's performance degrades significantly. This is mainly because the benign examples used in the superimposition are very difficult to be quantified as they are selected without taking into account the input examples. Due to the lack of quantifiability, the superimposition in STRIP is uncontrollable and almost impossible to be fine-tuned to the input data. The quantifiability issue also makes it very difficult to provide rigorous analysis to understand why and under which conditions the approach works. In addition to STRIP, authors in [22] also propose a defense through perturbing the input examples (e.g., flipping or padding). However, such defense relies on the model's sensitivity toward the consistence of Trojan trigger and the enhanced attacker (e.g., Trojan trigger with multiple locations) can easily break it [22]. Compared with the method proposed in [22], applying our defense does not require this assumption.
Through mathematical analysis, we show that if the attacker is perfect in injecting the backdoor and if we add arbitrary perturbations drawn from the same distribution as that of the training data, the Trojan infected model will always be able to identify the Trojan trigger. Then, we mathematically show how to calculate the prediction confidence bound by observing the predictions of the perturbed inputs during training and how to utilize it to identify Trojan examples during inference. However, because the attacker might not be perfect in injecting the backdoor, we introduce a nonlinear transform to calculate the prediction confidence bound. Finally, We conduct a thorough set of experiments to evaluate the performance of TROJDEF with different model architectures, Trojan triggers, and datasets. The results show that TROJDEF outperforms STRIP [18], the state-of-the-art black-box defense. TROJDEF achieves perfect detection accuracy, similar to STRIP, on the model trained by STRIP. More importantly, the results show that TROJDEF is not only highly stable but also outperforms STRIP when (1) the training hyper-parameters change, (2) the architecture of the classifier changes, (3) a pre-trained NN 3rd party classifier is used to prepare the infected model, or (4) the Trojan trigger is changed. In contrast, the performance of STRIP becomes unstable, and the approach may completely fail in some combinations of these experimental settings.
We summarize our contributions in this paper as follows:
• Propose a new black-box Trojan defense approach (TROJDEF) that is effective in detecting Trojan inputs and is highly stable with changes in model architecture, training, Trojan trigger, and datasets. TROJDEF perturbs inputs with random noise, making it quantifiable and easier to fine-tune.
• Mathematically derive the prediction confidence bound used to distinguish Trojan from benign inputs when the adversary is perfect in launching the Trojan attack. Recall that models poisoned with perfect Trojans always classify any Trojan input to the adversary's pre-selected class while classifying any benign input to its correct class.
• For imperfect Trojan attacks, and when the defender does not know the input distribution of the pixel values, propose a non-linear transform of the prediction confidence bound to make it work in realistic scenarios.
• Conduct extensive experiments and show that TROJDEF is highly stable and achieves less than 0.5% false acceptance rate at a 1% false rejection rate in nearly all experiments.
The rest of this work is organized as follows. Section II summarizes the background knowledge and the threat model. Section III introduces TROJDEF, while Section IV presents the experimental settings. Section V presents the evaluation results, and Section VI concludes the paper.
II. PRELIMINARIES
In this section, we provide preliminaries of the notations used in the work and review the background of the Trojan attacks and the threat model.
A. Notations
Assume a database that contains N data examples, each of which contains input data x ∈ [0, 1]^d, where d denotes the dimensionality of the data, and a ground-truth label y ∈ Z_K (a one-hot vector) with K possible categorical outcomes Y = {y_1, . . . , y_K}. The NN classifier with parameter θ maps x to a vector of scores f(x) = {f_1(x), . . . , f_K(x)} such that ∀k ∈ {1, . . . , K}: f_k(x) ∈ [0, 1] and Σ_{k=1}^{K} f_k(x) = 1, and the highest score is selected as the predicted label. This classification process is denoted by C_θ(x) = arg max_{k∈K} f_k(x). A loss function L(x, y, θ) represents the penalty for the mismatch between the predicted value f(x) and the corresponding original value y. Throughout this work, we use x̄ to denote the original (benign) input, t the Trojan trigger, and x a generic input variable that could be either x̄ or x̄ + t.
B. Trojan Attack: Concept
Trojan in the context of this work refers to an attack that manipulates a NN model in a controlled way [5], [6], [17], [18]. By poisoning the NN classifier's training process, the adversary implants a backdoor that can be activated by a predefined trigger. Trojan infected models are usually designed to always misclassify inputs augmented with the trigger (called Trojan examples) to the pre-selected class defined by the adversary during the training process while correctly classifying the clean inputs (called benign examples) [5], [6]. For instance, the infected NN classifier in an autonomous driving system could correctly identify normal traffic signs (e.g., the left three sub-figures in Figure 1). However, once the traffic sign is perturbed by the Trojan trigger (e.g., the small green square attached to the "No Entry" sign in Figure 1), the NN classifier could be fooled to make a wrong prediction (e.g., "No Entry" to "Turn Left") which may lead to a serious accident.
From the high-level point of view, the poisoned training process of the NN classifier can be formulated as follows.
θ↓ = arg min_θ [ L(x̄, y, θ) + L(x̄ + t, y_t, θ) ]    (1)
where θ↓ contains the weights of the Trojan-infected classifier, and t is the Trojan trigger predefined by the adversary. In [5], t is a collection of pixels with arbitrary values and shapes. In Eq. 1, the poisoned inputs with the Trojan trigger are used during the training of the NN classifier. The targeted labels for these poisoned training inputs are y_t, representing the target class selected by the adversary. More recent work in [6] follows a similar injection process while not requiring access to the benign training data x̄.
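To make the poisoned training data of Eq. 1 concrete, the following Python sketch shows one way a BadNets-style attacker could prepare a poisoned batch. The function name poison_batch, the poisoning rate, and the mask-based trigger representation are our own illustrative assumptions, not the implementation of [5] or [6]:

    import numpy as np

    def poison_batch(images, labels, trigger_mask, trigger_values, target_class, rate=0.1, seed=0):
        """Stamp the trigger onto a fraction `rate` of the images and relabel them to `target_class`.
        images: (n, H, W, C) array in [0, 1]; trigger_mask / trigger_values: (H, W, C) arrays."""
        rng = np.random.default_rng(seed)
        x, y = images.copy(), labels.copy()
        idx = rng.choice(len(x), size=int(rate * len(x)), replace=False)
        # Overwrite only the trigger pixels; all other pixels keep their original values.
        x[idx] = np.where(trigger_mask, trigger_values, x[idx])
        y[idx] = target_class
        return x, y

The resulting pairs (x̄ + t, y_t) are then mixed with the clean pairs (x̄, y) when minimizing the loss in Eq. 1.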
C. Trojan Attack: Threat Model
To implant a Trojan backdoor [5], [6], the adversary needs to have access to both the training and inference phases of the classification process. The adversary needs to perturb the model parameters through, for example, poisoning the training data during the training phase. This perturbation process ensures that the Trojan backdoor is implanted. The adversary can then craft the attack inputs (i.e., Trojan examples) during the inference phase. The requirements for launching Trojan attacks are met in the following practical scenarios:
(Scenario 1) Attack through sharing models on public domains, such as GitHub and Tekla, to name a few, and associated platforms (e.g., www.github.com, www.tekla.com, www.kaggle.com, https://paperswithcode.com). These public domains allow users to upload self-trained models. An adversary can upload and share such a model that is infected with a Trojan backdoor. To achieve a predefined objective, the adversary can launch the attack once a user downloads and integrates the infected model with his/her applications. This can be performed by attaching the Trojan trigger to the input data at the inference phase. (For example, the attacker could attach the trigger to his/her bio image submitted to a border security system and thereby bypass the face recognition of international criminals.) The work in [8] has shown that this setting is realistic due to the following reasons: (1) Model re-usability is important in many applications to reduce the tremendous amount of time and computational resources needed for model training. This becomes even more critical as NN models increasingly become complex and large, e.g., VGG16, BERT, etc.; and (2) Using existing defensive approaches [17], [18], it is difficult to perfectly detect whether or not a shared model has been infected with a Trojan backdoor. Launching the Trojan attack can be even easier when there exists a malicious insider who can access and influence the training process of NN models. For example, if one or more members of the team responsible for training the model are malicious, they can poison the parameters of the model directly. In fact, most commercial NN applications usually utilize a large-scale model that requires large computing power, big datasets, and a group of data scientists. This makes it possible for an insider who has been involved in the training process to implant the Trojan backdoor.
(Scenario 2) Attack through jointly training NN models. Federated learning has been proposed to jointly train an NN model with multiple (trusted and untrusted) parties using mobile devices [9], [10]. Federated learning operates in several iterative steps such that, in each iteration, a participant first downloads the most recent model parameters from the global model. Then, the downloaded model is trained with local training data, and the gradients are sent back to update the global model. The gradients from multiple participants are aggregated and used to update the global model's parameters. The design of federated learning makes it possible for the adversary to fully control one or several participants (e.g., smartphones whose learning software has been compromised with malware) [9]. This allows the adversary to train a Trojan-infected model locally. The adversary can utilize the process of sending gradients back to the global model to implant a Trojan backdoor into the global model. Specifically, the adversary can calculate the gradients as the difference between the local infected model (θ*) and the received global model (θ), i.e., ∆* = θ* − θ. By doing that, the adversary is still able to implant a Trojan backdoor into the jointly trained model [9].
III. TROJDEF DESCRIPTION AND ANALYSIS
In this section, we introduce our black-box defense against the Trojan attack (TROJDEF) in detail. First, we analyze the difference in the classifier's prediction confidence on benign and Trojan examples. We mathematically show that, when the attacker is perfect and the defender has some knowledge about the training data, this difference can be used to derive a prediction confidence bound that decides whether an input example is Trojan or not. Based on this mathematical analysis, we then give a high-level overview of TROJDEF. After that, we propose an enhancement, via a non-linear transformation of the derived prediction confidence bound, for the case when the assumptions above do not hold. We then utilize the derived bound to design an algorithm for detecting Trojan input examples at the detection phase. Lastly, we discuss several implementation details that handle practical issues when the input examples are images.
A. Analysis of Predictions
In order to present our analysis of the classifier's confidence on perturbed inputs for detecting Trojan examples, we first introduce two variables, p_1 and p_2. Here, p_1 and p_2 are the highest and the second-highest fraction of predictions among the output classes, respectively, when random perturbations are repeatedly added to the input example. For example, if an input example is randomly perturbed 6 times and the predictions of the perturbed inputs are {class-0, class-1, class-1, class-0, class-1, class-2}, the corresponding values are p_1 = 1/2 and p_2 = 1/3, because class-1 is predicted 1/2 of the time (the most frequently predicted class) and class-0 is predicted 1/3 of the time (the second most frequently predicted class).
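As a concrete illustration of how p_1, p_2, and their difference are obtained from the perturbed copies, the following Python snippet computes them from a list of predicted labels; the helper name top_two_fractions is our own:

    from collections import Counter

    def top_two_fractions(predictions):
        """Return (p1, p2): the fractions of the most and second-most frequent predicted labels."""
        counts = Counter(predictions).most_common(2)
        m = len(predictions)
        p1 = counts[0][1] / m
        p2 = counts[1][1] / m if len(counts) > 1 else 0.0
        return p1, p2

    # The example from the text: six perturbed copies of one input.
    p1, p2 = top_two_fractions([0, 1, 1, 0, 1, 2])   # p1 = 1/2, p2 = 1/3
    delta = p1 - p2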
To analyze the impact of having a Trojan trigger on the value of δ = p_1 − p_2, we present the following theorem.
Theorem III.1. Suppose we have a Trojan-infected classifier with a set of weight parameters θ which is perfectly trained to predict the ground truth values on benign examples while outputting the adversary's target class on any Trojan input.
Assume also that the training data is drawn from the distribution D and that each input example has m replicas which are randomly perturbed. When m = ∞, the random perturbation sampled from D′ makes the value of δ (δ = p_1 − p_2) for any Trojan example x̄ + t larger than that for any benign example. Here, D′ follows the same distribution as D but with its mean value set to E(D) − x̄.
Proof. Let us first focus on the training process of the Trojan-infected classifier. The training process can be represented by the following optimization problem:
θ = arg min_θ ( w_1 L(x̄, y, θ) + w_2 L(x̄ + t, y_t, θ) )    (2)
Here, w_1 and w_2 are the weights of the two loss terms. Without loss of generality, we assume that the cross-entropy is used as the loss function. Therefore, the two loss terms can be written as:

L(x̄, y, θ) = Ê_{x̄∼X} [ − log(f_y(x̄)) ]    (3)
L(x̄ + t, y_t, θ) = Ê_{x̄∼X} [ − log(f_{y_t}(x̄ + t)) ]    (4)
Here, Eq. 3 is used when the input is a benign example while Eq. 4 is used for Trojan examples.
Since each pixel's value among training examples, X, is drawn from the distribution D, we can rewrite Eq. 4 as follows:
L(x̄ + t, y_t, θ) = E_{η∼D} [ − log(f_{y_t}(t + η)) ]    (5)
Here, η represents the random perturbation. Recall from Section II that f_k(·) ∈ [0, 1]. Since the Trojan-infected classifier predicts the target class on any Trojan input, we have f_{y_t}(t + η) > f_k(t + η) for all k ∈ {0, ..., K}\{y_t}. Therefore, E_{η∼D}[f_{y_t}(t + η)] > E_{η∼D}[f_k(t + η)] for all k ∈ {0, ..., K}\{y_t}. This means that the Trojan trigger t with any perturbation η sampled from D can fool the Trojan-infected classifier into outputting the target y_t.
Now we move to the inference stage. If a Trojan example is received during inference, the probability of predicting it as class k under random perturbation can be represented as E_{η∼D′}[f_k(x̄ + t + η)]. If the distribution D′ is generated by subtracting the constant value x̄ from the mean of D (denoted as D′ = f(D, x̄)), the prediction probability of the target class y_t can be rewritten as:

E_{η∼D′}[f_{y_t}(x̄ + t + η)] = E_{η∼D}[f_{y_t}(t + η)]    (6)
Therefore, from Eq. 6 we have, ∀η ∼ D′:

f_{y_t}(x̄ + t + η) > max_{k≠y_t} f_k(x̄ + t + η), ∀k ∈ {0, ..., K}\{y_t}    (7)
Based on the definition, we then have p_1 = 1 and p_2 = 0, which results in δ = p_1 − p_2 = 1.
Lastly, we show by contradiction that no benign example can achieve δ = 1 at inference. Under random perturbation from the same distribution D′, assume that δ = p_1 − p_2 = 1 holds for a benign example x̄ with ground truth y. Therefore, we have, ∀η ∼ D′:

f_y(x̄ + η) > max_{k≠y} f_k(x̄ + η) = 0, ∀k ∈ {0, ..., K}\{y}    (8)
Recall that the distribution D′ is generated by subtracting the constant value x̄ from the mean of D. Therefore, Eq. 8 can be rewritten as:

f_y(η) > max_{k≠y} f_k(η) = 0, ∀k ∈ {0, ..., K}\{y}    (9)

This means that any η sampled from the distribution D is predicted as class y. Given that D denotes the distribution of pixel values in the training data, this means that the Trojan-infected classifier predicts any training data as class y. Eq. 9 contradicts the fact that the classifier predicts the ground truth on benign examples.
When the conditions hold, the theorem above states that the value of δ = p_1 − p_2 for Trojan examples will be equal to 1 and larger than that for any benign example. Therefore, under the conditions presented in the theorem, i.e., a perfect attacker, knowledge of the training data distribution, and m = ∞, we can decide that an input example is Trojan if δ = 1 and benign otherwise. In this case, we can simply select the function applied to δ to be L = δ.
However, the conditions in Theorem III.1 are hard to satisfy in reality because: (1) As a black-box defense, it is hard to know the data distribution D. In our experiments, we found that the Gaussian distribution is an efficient approximation of D, since the distribution of pixel values often follows a Gaussian distribution and can be normalized to a standard Gaussian distribution in convolutional neural networks [2]. (2) We can only run the algorithm with a finite m. Since p_1 and p_2 follow a Binomial distribution, we can approximate the confidence interval for this result using the Clopper-Pearson method introduced in [23]. In addition, the attacker might not be perfect, which means it will not be able to fully minimize its attack objective function. Due to the above, we observe that the value of δ for some of the Trojan examples in Figure 2(a) is below 1 (the green bars in the figure). It is worth noting that the plot in Fig. 2(a) is generated with L̄ = f(δ) = σ × (p_1 − p_2), where σ is the standard deviation of the Gaussian noise. We include σ since it is dynamically changing (detailed in a later subsection), and this is the reason why the maximum value in Figure 2(a) is not exactly 1. (The presented results are generated based on the CIFAR-10 dataset under the Trojan backdoor attack; the parameter settings and network are presented in Section IV.)

Moreover, it is not guaranteed that the predictions on Trojan examples will always result in the target class. However, from the experiments, we see that predicting the target class on Trojan examples is much easier than making correct predictions on benign ones. For example, in Figure 3 we present the heatmaps of benign and Trojan examples. Each heatmap is a 10×10 matrix, where the rows represent the ground truth and the columns represent the prediction results. The number in each cell is the probability that examples from a particular ground-truth class (row) are classified to each predicted label (column). We can see that in Figure 3(a) the numbers on the main diagonal are at most 0.9 while most of the other cells are non-zero. On the other hand, in Figure 3(b) we only have 1.0 in column 7. Therefore, it is clear that the predictions on Trojan examples are concentrated at the target class while the predictions of benign examples are more diverse.

Even though the value of δ might not be equal to 1 for Trojan examples when the conditions in Theorem III.1 are not met, the main conclusion that the value of δ for any Trojan example is larger than that of any benign example still generally holds. However, in Figure 2(a) we can clearly see that the green bars, which represent Trojan examples with δ < 1, are very close to the orange bars representing benign examples. Recall that the threshold is selected as a certain percentile of the distribution of benign examples in the preparation phase (the first phase). Since we only use a limited number (n) of benign examples during the preparation phase, there will be a difference between the empirical and the true distribution that we use to set the threshold value. In Figure 2(a), the overlap of benign and Trojan examples is concentrated in a small range, which makes the threshold very sensitive to changes in the fitted distribution. To mitigate this issue, we can apply a monotonic function to δ that shifts the distribution of the benign examples toward the left-hand side of Figure 2(a) and the distribution of the Trojan examples toward the right-hand side of the figure. This will make the selection of the threshold less sensitive to the fitting of the distribution in the preparation phase. To do that, we apply the sigmoid function on top of δ and derive the prediction confidence bound as follows:
L = 1 / (1 + e^{−d}),  where  d = α × [(p_1 − p_2) × σ − β]    (10)
Here, σ represents the standard deviation of the random Gaussian noise, while α and β are hyper-parameters. By tuning the hyper-parameters α and β in Eq. 10, we can align the center of the sigmoid function to the overlapping area. (The sigmoid function is tuned on benign examples only, with the focus on reducing the residual error when the examples' values are fitted to a folded normal distribution.) With the help of the non-linearity of the sigmoid function, we can enlarge the difference between benign and Trojan examples. It is clear in Figure 2(b) that the empirical distribution of the benign examples is pushed towards the lower end of L. Therefore, applying the sigmoid function results in the desired zoom-in effect on the overlapping area, as can be seen in Figure 2(b). It is worth mentioning that our method utilizes a non-linear transformation to enhance the performance of the proposed defense, which is different from [24], where the transformation itself is designed as the defense. In terms of defending against the Trojan backdoor, both [24] and our method perform well on the MNIST dataset. However, our method successfully extends to larger datasets (e.g., CIFAR-10, GTSRB, and CUB-200) which are not evaluated in [24]. As a result, with the prediction confidence bound L, the selected threshold is less sensitive to errors in modeling the distribution of L for benign examples.
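For clarity, the non-linear transform of Eq. 10 can be written as a small Python helper (a sketch; confidence_bound is our own name, and α and β must be tuned as described above):

    import math

    def confidence_bound(p1, p2, sigma, alpha, beta):
        """Eq. 10: L = sigmoid(alpha * ((p1 - p2) * sigma - beta))."""
        d = alpha * ((p1 - p2) * sigma - beta)
        return 1.0 / (1.0 + math.exp(-d))

A Trojan input tends to produce a large p_1 − p_2 and hence an L close to 1, while a benign input is pushed toward the lower end of the sigmoid.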
B. TROJDEF Description
With the aforementioned mathematical analysis, we now present our defense. As presented in Figure 4, the proposed defense consists of two different phases. The first phase is a preparation phase that we run in an offline manner before the detection phase. During the first phase, we run TROJDEF with a set of n benign examples. Each example is perturbed m times with random noise drawn from a given probability distribution; through our experiments, we empirically show that Gaussian noise is a good choice. Based on the predictions of all perturbed copies, we can calculate the corresponding values of p_1 and p_2 for each of the n runs. Then, we further apply a function to the difference between p_1 and p_2 (i.e., δ = p_1 − p_2) in each of the n runs. This function, which calculates the value L in each of the n runs, is detailed in the following sections, and its selection depends on the assumptions about the attacker's and defender's abilities. After doing the above, we will have n different L values, each being the result of applying the function to δ of one run. We select the threshold as the (1 − FRR) × 100% percentile among the measured values, where FRR is the false rejection rate target, representing the acceptable percentage of benign examples that can be falsely classified as Trojan examples.
The detection phase is performed at run time. For each new input received in the detection phase, we calculate the value of L in the same way as in the first phase. Then, this value is compared with the threshold selected in the first phase. If the measured value is greater than the threshold calculated in the first phase, the input example is flagged as a Trojan example. Otherwise, it is determined to be a benign example. The intuition behind this approach is that we design L so that it always has larger values for Trojan inputs than for benign inputs. Therefore, selecting the threshold value as the (1 − FRR) × 100% percentile among the measured L values is a safe choice.

Fig. 4: High-level view of the proposed defense.
C. TROJDEF Algorithms
The step-by-step process of the first phase of TROJDEF is summarized in Algorithm 1. In the algorithm, the lines in blue represent the empirical enhancements that will be introduced in the next subsection. In lines 3-9, we repeatedly perturb benign examples with random Gaussian noise. Then, in lines 11-13, the value of L for each benign example is calculated. Finally, in line 15, the threshold value is selected to be higher than (1 − FRR) × 100% of the values of L for benign examples.
The detailed process of the detection phase of TROJDEF, operating at run time, is given in Algorithm 2. As before, the empirical enhancements are in blue and will be detailed in the next subsection. In lines 1-8, the input example is perturbed in the same way as in phase 1 to calculate the corresponding L value. Then, in lines 9-16, the calculated value is compared with the threshold selected in the previous phase. An input example with a value of L larger than the threshold is flagged as a Trojan input. Otherwise, the input example is determined to be benign and is fed to the NN classifier again to obtain the final prediction. Since generating Gaussian random noise is negligible compared with predicting the example, the total computation is m times larger after applying the defense. However, it is worth noting that the m predictions are independent, which means that this process can run in parallel and the timing performance of applying the defense can stay the same.
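The two phases can be condensed into the following simplified Python sketch. It assumes a Keras-style model.predict interface and reuses the top_two_fractions and confidence_bound helpers sketched earlier; it also omits the practical enhancements of Section III-D (single-channel perturbation, random patch location and size, dynamic σ), so it is an illustration rather than the exact Algorithms 1 and 2:

    import numpy as np

    def bound_for_input(model, x, m, sigma, alpha, beta, rng):
        # Collect m predictions of x under i.i.d. Gaussian perturbations.
        preds = []
        for _ in range(m):
            noisy = np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)
            preds.append(int(np.argmax(model.predict(noisy[None, ...])[0])))
        p1, p2 = top_two_fractions(preds)
        return confidence_bound(p1, p2, sigma, alpha, beta)

    def preparation_phase(model, benign_examples, m, sigma, alpha, beta, frr, rng):
        # Phase 1: threshold = (1 - FRR) x 100 percentile of L over the n benign examples.
        ls = [bound_for_input(model, x, m, sigma, alpha, beta, rng) for x in benign_examples]
        return float(np.percentile(ls, 100.0 * (1.0 - frr)))

    def detection_phase(model, x, tau, m, sigma, alpha, beta, rng):
        # Phase 2: flag x as a potential Trojan input if its bound exceeds the threshold.
        return bound_for_input(model, x, m, sigma, alpha, beta, rng) > tau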
D. TROJDEF Implementation
In this section, we provide the details of several practical enhancements to the basic algorithm above, especially when the input examples are images. 1) Single Channel Perturbation: From our empirical results, we notice that adding the random Gaussian noise blindly to the whole image may significantly change the appearance of the Trojan trigger. Based on the conclusion drawn from [22], changes in the appearance or location of the trigger beyond a certain limit sharply decrease the attack success rate. To mitigate this issue, TROJDEF takes an alternative approach: it perturbs only one channel with the Gaussian noise when the input is a multi-channel image (i.e., an RGB image). For more details, we explicitly compare the performance of applying the perturbation on the blue channel and on all channels in Tables XII, XIII, and XVII in the Appendix, under different settings and a 1% FRR threshold. It is clear that applying the perturbation on all channels performs poorly on some combinations of dataset, model, and trigger.
In the implementation, we add the random Gaussian perturbation to the blue channel, which is motivated by previous research. It is demonstrated in [25] that the blue channel of an RGB image is the darkest channel and contains a lower number of features compared with the other channels. Moreover, the experiments in [26] show that the changes in prediction caused by modifying the blue channel are smaller than those caused by modifying the other channels. Given the poor performance of perturbing the whole image, we believe that perturbation of the red and green channels largely affects the Trojan trigger. Therefore, TROJDEF only adds random Gaussian noise to the blue channel. Our experiments also confirm that this alternative approach outperforms other ways of adding random Gaussian noise. Table XIII shows the results of adding the same Gaussian noise to different single channels under the same settings and 1% FRR. It is clear that the performance sharply degenerates when the Gaussian noise is applied to the red or green channel. This means that adding Gaussian noise to the red or green channel is an overkill, since the Trojan trigger does not work either. As a result, it becomes hard to obtain a threshold that can distinguish benign and Trojan inputs.
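A minimal sketch of the single-channel perturbation, assuming RGB channel ordering (so index 2 is the blue channel) and H×W×3 images with values in [0, 1]:

    import numpy as np

    def perturb_blue_channel(image, sigma, rng):
        # Add Gaussian noise only to the blue channel and keep pixel values in [0, 1].
        noisy = image.copy()
        noisy[..., 2] += rng.normal(0.0, sigma, size=image.shape[:2])
        return np.clip(noisy, 0.0, 1.0)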
2) Randomizing the Location and Size of the Gaussian Perturbation: As shown in Figure 5, randomizing the location and size of the added random Gaussian perturbation is another technique that we apply to enhance the performance of TROJDEF. Compared with benign examples, the predictions of Trojan examples can only be affected when the Trojan trigger is perturbed. Therefore, by randomizing the location and size of the perturbation, we can expect the difference in the value of L between benign and Trojan examples to be larger. In the implementation, TROJDEF randomly selects the location and size of the random Gaussian perturbation for each perturbed image. As shown in Figure 5, we utilize a square area, and its size can be any integer value between 2 pixels and the size of the image. Depending on the size, the location is randomly selected starting from the top-left corner (i.e., [0,0]) up to the limit that keeps the perturbation within the image area. In Table XV, we examine the effectiveness of randomizing the location and size of the Gaussian perturbation. It is clear that this enhancement strongly affects the results, because applying the perturbation to the whole image has a higher chance of changing the appearance of the trigger.
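The random location and size of the square perturbation area can be sampled as in the sketch below (building on the blue-channel perturbation above; the minimum patch size of 2 pixels follows the description, while the rest of the parameterization is our own):

    import numpy as np

    def perturb_random_patch(image, sigma, rng, min_size=2):
        # Add Gaussian noise to the blue channel inside a square patch of random size and location.
        h, w, _ = image.shape
        size = int(rng.integers(min_size, min(h, w) + 1))   # patch side length in [2, min(H, W)]
        top = int(rng.integers(0, h - size + 1))            # random location kept inside the image
        left = int(rng.integers(0, w - size + 1))
        noisy = image.copy()
        noisy[top:top + size, left:left + size, 2] += rng.normal(0.0, sigma, size=(size, size))
        return np.clip(noisy, 0.0, 1.0)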
3) Dynamic Standard Deviation: Based on our experiments with a fixed value of σ for the added Gaussian noise, we observe that the results are sensitive to the value of σ in some cases, as illustrated in Table XVI. Depending on the combination of NN classifier and Trojan trigger, using a fixed σ value may work in some cases but fail in others, since each case has a different prediction confidence under the same perturbation. By making σ change dynamically based on the pixel values in each image, we are able to overcome this issue and achieve good performance in separating benign and Trojaned images. In our implementation, the following formula is used to calculate σ for the Gaussian noise added to the pixels of each image:

σ = −(S · log_2 v)    (11)
Here, S is a scalar and v is the average of the largest k pixel values in the whole image. To prevent σ from taking a value outside of the [0, 1] range, we use default values to limit σ to this range. By utilizing Eq. 11, the added noise is controlled with respect to the visual content in the image. As a result, the added noise can effectively mislead the identification of the visual content while affecting the added trigger less. With this dynamic standard deviation, the values of L for benign examples do not change much, since the corresponding δ is small. For Trojan examples, TROJDEF tends to use a smaller standard deviation when the pixel values are high (i.e., a bright image). Compared with other images, a Trojan trigger added to a bright image is harder to identify; therefore, applying noise with a smaller standard deviation helps Trojan examples to get a higher value of δ as well as L. It is worth noting that dynamically controlling the standard deviation demonstrates the adaptability of TROJDEF to better fit the input data, which is impossible with other state-of-the-art approaches, such as STRIP.

With all practical enhancements, the overall process from the preparation phase to making a prediction on an input is summarized in Algorithms 1 and 2. To show the benefit of combining the different empirical enhancements, we present evaluation results in Table XVII, which covers experiments with different combinations of the presented enhancements.
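The dynamic standard deviation of Eq. 11 can be computed as in the sketch below; the scalar S, the number k of top pixel values, and the small floor used to avoid log_2(0) are illustrative choices, not the paper's tuned settings:

    import numpy as np

    def dynamic_sigma(image, S=0.5, k=100):
        # Eq. 11: sigma = -(S * log2(v)), with v the mean of the k largest pixel values.
        top_k = np.sort(image.ravel())[-k:]
        v = float(np.clip(np.mean(top_k), 1e-6, 1.0))   # guard against log2(0) and values > 1
        sigma = -S * np.log2(v)
        return float(np.clip(sigma, 0.0, 1.0))          # keep sigma within [0, 1] as described in the text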
IV. EXPERIMENTAL SETTINGS
In this section, we first introduce the datasets and the classifiers' architecture that are used. Then, we present the experiments and the calculated metrics.
A. Datasets and Classifiers
During the evaluation, we use multiple benchmark datasets with different image sizes, numbers of samples, and content to demonstrate that the advantage of our method over STRIP is independent of the dataset:
• MNIST: Contains a total of 70K images and their labels. Each one is a 28 × 28 pixel, gray scale image of handwritten digits.
• CIFAR-10: Contains a total of 60K images and their labels. Each one is a 32 × 32 pixel, RGB image of animals or vehicles.
• GTSRB: Contains over 50K images and their labels. Each one is an RGB image of traffic signs with different sizes.
• CUB-200: Contains over 10K images with 200 classes. Each one is an RGB image of a bird with size of 300 × 500.
• ImageNet: Contains over 14M images with 1000 classes. Each one is an RGB image.
During the experiments, we include three different kinds of NN classifiers: (1) STRIP-model: the NN classifiers provided by the authors of [18]; (2) TROJDEF-model: the NN classifiers trained by us from scratch; (3) 3rd-party-model: the ResNet-50 classifiers [27] that are pre-trained by a 3rd party (we apply poisoned transfer learning to implant the Trojan backdoor). A brief summary of the TROJDEF-model architecture is presented in Table I.
B. Experiments and Metrics
We compare TROJDEF to STRIP for the following reasons: (1) to the best of our knowledge, STRIP is the only black-box defense method, and (2) STRIP achieves similar performance to other state-of-the-art white-box defenses, as indicated in [18]. To comprehensively compare TROJDEF with STRIP, we evaluate both defense methods on the three different models introduced before (i.e., STRIP-model, TROJDEF-model, and 3rd-party-model). When evaluating with the STRIP-model, we try different training hyper-parameters. Moreover, the experiments with TROJDEF-model and 3rd-party-model also include new Trojan triggers. Lastly, to explore the generalizability of TROJDEF to different types of noise distributions, we run some of the experiments with Laplacian noise instead of Gaussian noise.
Throughout the experiments, we mainly focus on four different metrics. Among these metrics, we utilize the classification accuracy (Acc) and the attack success rate (Attack-Acc) to evaluate the NN classifier that is infected by the Trojan attack. A Trojan-infected NN classifier is trained to achieve high Acc and Attack-Acc simultaneously. The high-Acc objective ensures that the classifier is of high enough quality to be adopted and used, while the high-Attack-Acc objective ensures a successful attack.
• Acc: The percentage of correctly classified benign examples over all benign examples.
• Attack-Acc: The percentage of Trojan examples that are classified into the adversary's target class when no defense is applied.
During the evaluation of the defense methods, we use the false acceptance rate (FAR) and the false rejection rate (FRR) as the performance metrics.
• FAR: The percentage of Trojan examples that can pass the deployed defense method. The lower the FAR, the better the defense.
• FRR: The percentage of benign examples that are accidentally rejected by the deployed defense method. The lower the FRR, the better the defense.
Unless otherwise specified, we test both TROJDEF and STRIP with a threshold value at the 99th percentile among benign examples. In other words, the FRR for both defenses is fixed at 1%. Therefore, in the evaluation results, a better defense method should have a lower value of FAR. Finally, we visualize the Trojan triggers used in the experiments in Figure 6. When any of these triggers is mentioned, we use the caption of that trigger to refer to it.
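Given the binary decisions produced by a defense, FAR and FRR can be computed with a few lines of Python (an illustrative helper, not tied to any specific framework):

    def far_frr(benign_flags, trojan_flags):
        # Flags are booleans: True means the defense rejected (flagged) the input as Trojan.
        far = sum(1 for flagged in trojan_flags if not flagged) / len(trojan_flags)   # Trojans that slip through
        frr = sum(1 for flagged in benign_flags if flagged) / len(benign_flags)       # benign inputs rejected
        return far, frr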
V. EXPERIMENTAL RESULTS
As mentioned before, our experiments first evaluate the performance of TROJDEF and STRIP on STRIP-model, TROJDEF-model, and 3rd-party-model. Then, we further explore the performance of TROJDEF under different settings, which include (1) using smaller FRR rates, (2) adding noise that is drawn from a Laplacian random variable, and (3) defending against a blue-channel Trojan trigger. Lastly, we also compare the performance of our proposed black-box defense with white-box approaches.
A. Evaluation on STRIP-model
The first part of the results is generated when the STRIP-model is being used. These experiments strictly follow the original settings presented in [18], and the NN classifiers used in this subsection are provided directly by the authors of [18]. As STRIP has a very high detection accuracy on this model, through the experiments in this subsection we compare the proposed TROJDEF with STRIP on the conventional experiments (i.e., the experiments conducted in the STRIP work). The evaluation results are summarized in Table II. (We also present the FAR values under different selected FRR rates in Figure 8 in the Appendix.)
B. Evaluation on TROJDEF-model
In this part of the experiments, we evaluate both defenses (TROJDEF and STRIP) in a broader range of settings. More specifically, we utilize (1) the TROJDEF-model, which has a different architecture than the model in the previous subsection, (2) the GTSRB dataset, which is not evaluated in [18], and (3) new Trojan triggers (i.e., "bottle" and "star"). The evaluation results are summarized in Table IV. (We also present the FAR values under different selected FRR rates in Figure 9 in the Appendix.)
From the values of Acc and Attack-Acc, it is clear that the Trojan backdoor has been successfully implanted into TROJDEF-model. Also, from the FAR values in Table IV, we see the following.
1) When changing from the STRIP-model to TROJDEF-model, some of the FAR values of STRIP increase from 0% to 100%, even for the triggers used in [18].
2) Compared with STRIP, TROJDEF achieves more stable performance: the value of FAR does not change by more than 0.15% regardless of the changes in the classifiers or the Trojan triggers.
The evaluation results in Table IV demonstrate clear issues regarding the performance of STRIP. When the NN classifier changes, the performance of STRIP may suffer a significant degeneration. We believe the following reason is related to this issue. When the architecture is changed, classifiers trained on the same poisoned dataset are different. Although all of them can extract the Trojan-trigger-related features, the features used for classifying benign examples could change. As a result, some of these classifiers become more sensitive to the perturbation. In other words, when using the same hold-out data (i.e., benign examples prepared for the superimposition process) on such classifiers, the entropy values for benign and Trojan examples are indistinguishable. Although fine-tuning could be a solution to this issue, the design of STRIP makes it very difficult, if not impossible, to perform fine-tuning. Recall that to fine-tune STRIP, we need to collect a new hold-out dataset [18]. However, the hold-out data used for the superimposition process in STRIP is hard to quantify. In other words, when collecting new hold-out data, there is no clear guidance about what the new hold-out data should be. Therefore, we think that fine-tuning STRIP is very difficult, if not impossible, and the issue of unstable performance is unavoidable.
C. Evaluation on 3rd-party-model
In the third part of the experiments, we evaluate TROJDEF and STRIP on the 3rd-party-model. The 3rd-party-model brings a new angle to the evaluation of the two defenses because of the following:
• Compared with the TROJDEF-model, the 3rd-party-model is trained in a different way. These NN classifiers are pre-trained on ImageNet data. As a result, the NN classifiers are likely to extract different and more general features than those trained with only the target dataset (e.g., CIFAR-10 and GTSRB). • With the development of model sharing platforms (e.g.,
GitHub and "Paper with Code"), model reusing is becoming a popular choice especially when a large scale NN classifier is needed. Therefore, the evaluation with a specific focus on a 3rd-party-model is an interesting and important topic.
To closely reflect real-world scenarios, the 3rd-party-model utilizes the ResNet50 NN classifier and is pre-trained on ImageNet data until it converges. After that, we apply transfer learning with these NN classifiers and the poisoned dataset. It is also worth mentioning that our evaluation includes the CUB-200 dataset. This dataset contains images with pixel sizes around 300 × 500, which is at the same level as VGG-Face [28] and ImageNet [3]. Therefore, the evaluation results on the CUB-200 dataset also show the generalizability of TROJDEF. Last but not least, we also conduct an evaluation with the ImageNet dataset to further demonstrate the effectiveness of TROJDEF. By comparing the evaluation results in Table V, the significant advantage of TROJDEF over STRIP still holds. (We also present the FAR values under different selected FRR rates in Figure 10 in the Appendix.) In 9 out of 12 experiments, TROJDEF outperforms STRIP (i.e., achieves much lower FAR values), while in two other experiments both approaches achieve exactly 0% FAR. Also, in the experiment with the GTSRB dataset and the "bottle" trigger, both TROJDEF and STRIP achieve nearly 0% FAR.
It is worth noting that the Attack-Acc on ImageNet is much lower than on the other datasets. The reason is that the 3rd-party-model is fully trained on the ImageNet dataset without the attack, and we only retrain it for a limited number of epochs with backdoor examples. However, we still observe a large advantage of TROJDEF over STRIP in terms of FAR.
The 3rd-party-model is more challenging. Although TRO-JDEF still outperforms STRIP, it can only achieve about 20% FAR in one out of 10 experiments, while achieving very close to perfect accuracy (0% FAR) on the remaining 9 experiments. STRIP on the other hand performs poorly on this dataset. In other words, the performance of TROJDEF degenerates on one of the cases of the 3rd-party-model. We believe the following two reasons explain this observation.
1) The Trojan backdoor is implanted into the 3rd-party-model through transfer learning, which barely modifies the extracted features. Therefore, the 3rd-party-model learns the Trojan trigger through a set of existing features, which is not as stable as in other models that identify the Trojan trigger as a fundamental feature [16]. As a validation, we can see that the Attack-Acc value on the 3rd-party-model is slightly lower than that for other models.
2) The NN classifiers used in 3rd-party-model are pretrained on a large-scale dataset (e.g., ImageNet) until convergence. To achieve solid performance, these pretrained NN classifiers are usually optimized to perform consistently even under a certain level of perturbation. As a result, the predictions of some benign examples are quite confident and the added noise level might not be enough to fool the classifier with benign inputs.
Combining these reasons, we can expect the value of L on benign examples to become larger while the value of L for Trojan examples becomes smaller when the 3rd-party-model is being used. As a result, it is clear that the overlap between benign and Trojan examples becomes more serious in this evaluation. It is worth noting that the aforementioned challenge is not unique to TROJDEF but is also a threat to other defenses that depend on prediction confidence. Therefore, we believe that using the 3rd-party-model is a challenging and important evaluation for the defense methods (i.e., STRIP and TROJDEF). Nonetheless, TROJDEF achieves decent performance on this model.
D. Using Different FRR Values
In previous experiments, we select the FRR value to be 1%. However, in real-world scenarios, the requirements on the selected threshold vary, and it is important to report the performance of the defense methods under different FRR values. Therefore, in this subsection, we repeat some of the experiments on both TROJDEF-model and 3rd-party-model. Instead of using a fixed FRR value, we change it to be from the following set: {0.25, 0.5, 0.75, 1.0}. The results of these experiments are summarized in Tables VI and VII.

Based on the results, it is clear that the FAR increases when the FRR decreases, since there is a trade-off between detecting all potential Trojan inputs and reducing false positive alarms. However, when we compare the detailed FAR values of STRIP and TROJDEF, we can see that TROJDEF significantly outperforms STRIP. For example, on the CIFAR-10 dataset with the "star" trigger and TROJDEF-model (Table VI), the proposed defense consistently achieves 0.15% FAR while the FAR of STRIP goes to 100% when the FRR is set to 0.75% or lower. A similar observation can be obtained from Table VII as well (e.g., the CIFAR-10 dataset with the "bottle" trigger and 3rd-party-model). Compared with STRIP, these results show that TROJDEF is a better defense method which can achieve very small FAR values when small target values are selected for the FRR.

E. Using Laplacian Perturbation

As presented in Section III, TROJDEF is designed to work with perturbations sampled from an arbitrary distribution as long as it closely approximates the distribution of the pixel values in the training dataset. In order to validate this claim, we repeat the experiments with 3rd-party-model on the CIFAR-10 and GTSRB datasets. During the evaluation, we replace the Gaussian perturbations with Laplacian ones. The results are summarized in Table VIII.

From these results, we can see that in 7 out of 8 cases using Laplacian perturbation TROJDEF achieves the same FAR value as before. Only in the case of the CIFAR-10 dataset and the "bottle" trigger does using Laplacian perturbation degenerate the performance of TROJDEF. We think that Gaussian perturbation is better than Laplacian perturbation for the CIFAR-10 dataset. However, for the "face", "watermark", and "star" triggers, the margin between benign and Trojan examples is wider, so using Laplacian perturbation does not degenerate the FAR value, while for the "bottle" trigger, differentiating benign and Trojan examples is much harder and replacing the Gaussian perturbation with Laplacian perturbation leads to a lower FAR value. This can be validated by the results in Table V: when using Gaussian perturbation, the FAR value is 22.10% for the "bottle" trigger while it is 0% for the other triggers.
F. Defending Blue Channel Trigger
Recall from Section III-D that we presented single-channel perturbation as one of the practical enhancements of our proposed defense. To complete our justification for adding the perturbation to the blue channel, in this subsection we conduct an additional experiment to evaluate the performance of our proposed defense when the Trojan trigger lives in the blue channel. As shown in Figure 7, we customized a "blue star" trigger which is added only to the blue channel of input examples. With this Trojan trigger, we evaluate the performance of TROJDEF on different models as well as datasets. From the results summarized in Table IX, it is clear that the performance of TROJDEF is not affected even if the Trojan trigger lives only in the blue channel.
G. Compared with White-Box Defense
In this experiment, we use the defense proposed in [29], which we refer to as the Mutation defense. The Mutation defense is a white-box defense that must have full access to model parameters and intermediate values at inference time. It generates m mutated models by adding Gaussian noise to the weights of the fully-connected layers. To adjust the mutation process, two values, called mutation factors, are selected manually to set the mean and variance of the Gaussian noise distribution. For each layer, the mean of the Gaussian noise distribution is calculated by multiplying the mean mutation factor by the mean of the fully-connected layer weights, and the variance is calculated by multiplying the variance mutation factor by the maximum weight value in that fully-connected layer. The intuition behind this approach is that Trojaned inputs appear to have higher sensitivity to mutations of an NN model than benign inputs. Therefore, the label change rate of Trojaned inputs is higher than that of benign inputs.

We compare TROJDEF with the Mutation defense in Table X. It is clear that the performance of the Mutation defense fluctuates significantly when facing different combinations of dataset, model, and trigger. Although we tune the mutation factors to mitigate this issue, our attempts fail, especially on the CIFAR-10 dataset. Moreover, on the GTSRB dataset with TROJDEF-model, the FAR of the Mutation defense varies from 7.65% to 28.80%, which confirms the unstable performance of this defense. In general, from the results, we conclude that the Mutation defense works in some of our evaluation cases while failing in others. Also, we found that tuning the mutation factors is not enough to enhance the Mutation defense in the poorly performing cases.

VI. CONCLUSION

In this work, we propose an adaptive black-box defense against Trojan attacks, dubbed TROJDEF. TROJDEF perturbs each input example with random Gaussian noise and utilizes the predictions of the perturbed examples to decide whether the input example contains the Trojan trigger or not. We show analytically that under restricted conditions TROJDEF can always differentiate benign from Trojan examples by deriving a prediction confidence bound. We also propose a non-linear transformation of the prediction confidence bound to enable accurate detection of Trojan examples when the restricted conditions do not hold, as well as several practical enhancements to TROJDEF, especially when the input examples are images. We conduct several experiments to compare TROJDEF with the SOTA black-box approach, STRIP. The results show that TROJDEF has competitive performance on all the experiments proposed by STRIP. Moreover, the results of the expanded experiments show that TROJDEF not only outperforms STRIP but is also more stable. The performance of STRIP may significantly degenerate when (1) the NN classifier's training hyper-parameters change or (2) the NN classifier's architecture changes. Under similar settings, TROJDEF provides consistent performance. In addition, we evaluate TROJDEF and STRIP in a more realistic scenario where the Trojan backdoor is implanted in a large-scale NN classifier pre-trained on other datasets. The results show that TROJDEF significantly outperforms STRIP under such challenging settings. Finally, by replacing the Gaussian perturbation with Laplacian perturbation, the results confirm the generalizability of TROJDEF to arbitrary datasets and arbitrary noise distributions. The main reason for this superior performance is that TROJDEF is controllable and can easily adapt to the presented examples by changing the parameters of the distribution of the added random noise.
VII. LIMITATIONS AND FUTURE WORK
Based on Section III, it is not hard to imagine that if the prediction on a Trojan example is sensitive to the added noise, the performance of TROJDEF will degenerate. We observe this degeneration when evaluating TROJDEF against the Hello Kitty pattern trigger presented in [30] with 90% transparency. The results are summarized in Table XI.
Although preparing this Trojan attack requires the attacker to perturb the entire image, which is more visible to human eyes, we think there are some interesting problems that are worth studying in the future. 1) Even when the predictions of Trojan examples are sensitive to perturbation, we believe they are different from benign examples due to the difference in extracted features. To distinguish invisible Trojan examples, methods for generating adversarial perturbations could be utilized. Also, to keep the defense black-box, we can focus on methods that only utilize zeroth-order gradient information when generating the adversarial perturbation [31], [32], [33]. 2) In Theorem III.1, we can see that the optimal way of adding the perturbations is to make them correlated to the distribution of the training data. Although we have demonstrated in this paper that decent performance can be achieved when we ignore the knowledge about the training data, such knowledge might be available under some practical scenarios. In our future work we will identify these scenarios and decide how to perform the actual correlation between the knowledge of the training data and the exact way to add the perturbation.
Fig. 1: Examples of benign and Trojan examples. (The left three are benign examples of traffic signs which can be correctly classified. The fourth traffic sign is "No Entry" but can be classified as "Turn Left" if the green square is injected as a Trojan trigger for the classifier.)
Fig. 2: The effect of applying the non-linear transform. (a) Distribution of L̄ (before applying the sigmoid function); (b) Distribution of L (after applying the sigmoid function).
Fig. 3: Heatmaps of predictions on different examples.
Fig. 5: Perturbation with random location and size.
Algorithm 1 Preparation Phase of TROJDEF
Input: A trained classifier with weight parameter θ, n benign examples, and the target FRR
Output: The threshold τ
1-6: [For each of the n benign examples x̄: flatten the pixel values in x̄, calculate the average of the top-k pixel values and store it as v, calculate σ = −(S · log_2 v), then for m iterations do]
7: Sample a random-size perturbation η from the Gaussian distribution N(0, σ)
8: Add η to the blue channel of x̄ at a random location
9: Store the prediction C_θ(x̄ + η)
10: end for
11: Calculate p_1 and p_2 for this example
12: Calculate d = α × [(p_1 − p_2) × σ − β]
13: Calculate the prediction confidence bound L = 1/(1 + e^{−d})
14: end for
15: Select τ to be higher than the (1 − FRR) × 100% percentile of the L values

Algorithm 2 Detection Phase of TROJDEF
Input: A trained classifier with weight parameter θ, the threshold τ, and an arbitrary input x
Output: The prediction
1: Flatten the pixel values in x
2: Calculate the average of the top-k pixel values and store it as v
3: Calculate σ = −(S · log_2 v)
4: for m iterations do
5: Sample a random-size perturbation η from the Gaussian distribution N(0, σ)
6: Add η to the blue channel of x at a random location
7: Store the prediction C_θ(x + η)
8: end for
9: Calculate p_1 and p_2 for x
10: Calculate d = α × [(p_1 − p_2) × σ − β]
11: Calculate the prediction confidence bound L = 1/(1 + e^{−d})
12: if L > τ then
13: Output the alarm that x could be a Trojan input
14: else
15: Output C_θ(x)
16: end if
Fig. 7: The customized "blue star" trigger, which is added only to the blue channel of input examples.
TABLE I: TROJDEF-model architecture.
Fig. 6: Trojan triggers used in the experiments: (a) heart, (b) face, (c) watermark, (d) star, (e) bottle, (f) Hello Kitty, (g) blue star.
TABLE II: Results of the conventional experiments.
Dataset | Trigger | Acc | Attack-Acc | FAR (STRIP) | FAR (TROJDEF)
MNIST | "heart" | 99.02% | 99.99% | 0.1% | 0%
CIFAR-10 | "face" | 83.84% | 100% | 0% | 0%
CIFAR-10 | "watermark" | 82.35% | 100% | 0% | 0%
Based on the values of Acc and Attack-Acc presented in Table II, it is clear that the NN classifiers have been infected by the Trojan attack. In other words, the NN classifiers have enough capacity to capture the features of benign examples as well as the Trojan trigger. These results validate that the performance of the defense methods measured on top of these NN classifiers is reliable. Under each combination of dataset and Trojan trigger, we present the value of FAR for both TROJDEF and STRIP. We can see that both defenses achieve 0% FAR. Compared with the results presented in [18], the performance of our reproduced STRIP is validated. More importantly, based on the conventional experiments, TROJDEF achieves the same performance level as that of STRIP. In other words, there is no difference in terms of performance on the conventional experiments between TROJDEF and STRIP. However, in the following subsection, we can see that TROJDEF outperforms STRIP when these experimental settings change.
TABLE III: Performance of the defenses when the NN classifier is trained with different hyper-parameters.

In addition to directly utilizing the STRIP-model, we also expand the experiments to evaluate the two defense methods when the hyper-parameters of the NN classifiers are changed. Since different hyper-parameter settings lead to different trained classifiers, defenses that utilize prediction results could be affected, and the better defense method should achieve more stable performance. Here, we use the same architecture as STRIP-model but train it with different hyper-parameters. In these experiments, we choose three different hyper-parameters: training epoch (epoch), learning rate (lr), and batch size (bs). We select the value of the training epoch to be either 12 or 20. For the learning rate, the possible values are 1e−4, 1e−3, and 3e−3. The batch size varies between 60, 128, and 200. It is worth noting that these experiments are performed on the MNIST dataset with the "heart" trigger. The results of both TROJDEF and STRIP are presented in Table III.

From the results, it is clear that TROJDEF achieves more stable performance than STRIP when different hyper-parameters are used. Moreover, throughout the experimental results, TROJDEF always achieves a lower FAR value than STRIP. In addition, the FAR value of STRIP fluctuates much more obviously than that of TROJDEF. For example, the FAR for STRIP changes from 0.10% to 17.05% when the batch size changes from 60 to 128. When the learning rate changes, the FAR values for STRIP reach as high as 20%. Basically, when different hyper-parameter settings are applied, the model with the same architecture may converge to different weight parameters. The results in Table III show that changes in the weight parameters alone are enough to largely degenerate the performance of STRIP. It is worth noting that the owner of the model is the one who decides the hyper-parameter settings, and there is always more than one setting that could work. In our evaluation, all of the different hyper-parameter settings could be used to train an NN classifier with high test accuracy on benign examples, making these hyper-parameter settings possible choices for implementation.
TABLE IV: Evaluation results of the defenses on TROJDEF-model.
Dataset | Trigger | Acc | Attack-Acc | FAR (STRIP) | FAR (TROJDEF)
CIFAR-10 | "face" | 85.73% | 100% | 0% | 0%
CIFAR-10 | "watermark" | 85.61% | 100% | 0% | 0%
CIFAR-10 | "bottle" | 84.82% | 99.30% | 1.10% | 0.15%
CIFAR-10 | "star" | 84.76% | 100% | 0% | 0%
GTSRB | "face" | 99.85% | 100% | 100% | 0%
GTSRB | "watermark" | 99.80% | 100% | 100% | 0%
GTSRB | "bottle" | 99.90% | 100% | 100% | 0.05%
GTSRB | "star" | 99.89% | 100% | 100% | 0%
TABLE V: Evaluation results of the defenses on the 3rd-party-model.
TABLE VI: Evaluation results of the defenses on TROJDEF-model under different FRR values.
TABLE VII: Evaluation results of the defenses on the 3rd-party-model under different FRR values.
Dataset | Trigger | FRR | FAR (STRIP) | FAR (TROJDEF)
CIFAR-10 | "face" | 0.25% | 100% | 0%
CIFAR-10 | "face" | 0.5% | 100% | 0%
CIFAR-10 | "face" | 0.75% | 100% | 0%
CIFAR-10 | "face" | 1% | 100% | 0%
CIFAR-10 | "watermark" | 0.25% | 0% | 0%
CIFAR-10 | "watermark" | 0.5% | 0% | 0%
CIFAR-10 | "watermark" | 0.75% | 0% | 0%
CIFAR-10 | "watermark" | 1% | 0% | 0%
CIFAR-10 | "bottle" | 0.25% | 39.8% | 32.55%
CIFAR-10 | "bottle" | 0.5% | 28.249% | 22.0%
CIFAR-10 | "bottle" | 0.75% | 25.83% | 22.0%
CIFAR-10 | "bottle" | 1% | 24.50% | 19.5%
CIFAR-10 | "star" | 0.25% | 100% | 0%
CIFAR-10 | "star" | 0.5% | 100% | 0%
CIFAR-10 | "star" | 0.75% | 100% | 0%
CIFAR-10 | "star" | 1% | 0% | 0%
GTSRB | "face" | 0.25% | 100% | 0%
GTSRB | "face" | 0.5% | 100% | 0%
GTSRB | "face" | 0.75% | 100% | 0%
GTSRB | "face" | 1% | 100% | 0%
GTSRB | "watermark" | 0.25% | 100% | 0%
GTSRB | "watermark" | 0.5% | 100% | 0%
GTSRB | "watermark" | 0.75% | 100% | 0%
GTSRB | "watermark" | 1% | 100% | 0%
GTSRB | "bottle" | 0.25% | 100% | 0.05%
GTSRB | "bottle" | 0.5% | 100% | 0.05%
GTSRB | "bottle" | 0.75% | 0.05% | 0.05%
GTSRB | "bottle" | 1% | 0% | 0.05%
GTSRB | "star" | 0.25% | 100% | 0%
GTSRB | "star" | 0.5% | 100% | 0%
GTSRB | "star" | 0.75% | 100% | 0%
GTSRB | "star" | 1% | 100% | 0%
CUB-200 | "face" | 0.25% | 100% | 1.5%
CUB-200 | "face" | 0.5% | 100% | 1.25%
CUB-200 | "face" | 0.75% | 100% | 1.25%
CUB-200 | "face" | 1% | 100% | 1.15%
CUB-200 | "watermark" | 0.25% | 4.9% | 0.05%
CUB-200 | "watermark" | 0.5% | 4.1% | 0%
CUB-200 | "watermark" | 0.75% | 3.8% | 0%
CUB-200 | "watermark" | 1% | 3.59% | 0%
TABLE VIII: Evaluation results of TROJDEF on the 3rd-party-model with Laplacian perturbation.
Dataset | "face" | "watermark" | "bottle" | "star"
CIFAR-10 | 0% | 0% | 34.30% | 0%
GTSRB | 0% | 0% | 0.05% | 0%
TABLE IX: Performance of defending the blue channel trigger.
TABLE X: Evaluation results of the Mutation defense and TROJDEF.
TABLE XI: Results of the pattern trigger at 90% transparency.
TABLE XII: Results of applying the perturbation on the blue channel and on all channels.
TABLE XIII: Results of applying the perturbation on different single channels (B: blue channel, R: red channel, G: green channel).
Dataset | Trigger | Model | FAR (B) | FAR (R) | FAR (G)
CIFAR-10 | "face" | STRIP | 0.0% | 0.0% | 0.0%
CIFAR-10 | "face" | TROJDEF | 0.0% | 0.0% | 0.0%
CIFAR-10 | "face" | 3rd-party | 0.0% | 0.0% | 0.0%
CIFAR-10 | "watermark" | STRIP | 0.0% | 0.0% | 0.0%
CIFAR-10 | "watermark" | TROJDEF | 0.0% | 0.0% | 0.0%
CIFAR-10 | "watermark" | 3rd-party | 0.0% | 0.0% | 0.0%
CIFAR-10 | "bottle" | TROJDEF | 0.15% | 0.15% | 0.15%
CIFAR-10 | "bottle" | 3rd-party | 19.5% | 100.0% | 19.90%
CIFAR-10 | "star" | TROJDEF | 0.0% | 0.0% | 0.0%
CIFAR-10 | "star" | 3rd-party | 0.0% | 0.0% | 0.0%
GTSRB | "face" | STRIP | 0.0% | 100.0% | 100.0%
GTSRB | "face" | TROJDEF | 0.0% | 100.0% | 100.0%
GTSRB | "face" | 3rd-party | 0.0% | 100.0% | 100.0%
GTSRB | "watermark" | TROJDEF | 0.0% | 100.0% | 100.0%
GTSRB | "watermark" | 3rd-party | 0.0% | 100.0% | 100.0%
GTSRB | "bottle" | TROJDEF | 0.05% | 100.0% | 100.0%
GTSRB | "bottle" | 3rd-party | 0.05% | 100.0% | 100.0%
GTSRB | "star" | TROJDEF | 0.0% | 100.0% | 0.0%
GTSRB | "star" | 3rd-party | 0.0% | 100.0% | 0.0%
CUB200 | "face" | 3rd-party | 1.15% | 100.0% | 100.0%
CUB200 | "watermark" | 3rd-party | 0.0% | 100.0% | 100.0%
TABLE XIV: Results of adding the trigger to different channels with the CUB-200 dataset (B: blue channel, R: red channel, G: green channel).
Dataset | Trigger | FAR (B) | FAR (R) | FAR (G)
CUB200 | "blue star" | 1.30% | 1.20% | 9.50%

Fig. 8: Relationship between FRR and FAR for the experiments with STRIP-model (MNIST with the "heart" trigger; CIFAR-10 with the "face" and "watermark" triggers).
Fig. 9: Relationship between FRR and FAR for the experiments with TROJDEF-model (CIFAR-10 and GTSRB with the "face", "watermark", "bottle", and "star" triggers).
Fig. 10: Relationship between FRR and FAR for the experiments with the 3rd-party model (CIFAR-10 and GTSRB with the "face", "watermark", "bottle", and "star" triggers; CUB200 and ImageNet with the "face" and "watermark" triggers).
Footnote 1: www.github.com, www.tekla.com, www.kaggle.com.

Footnote: https://paperswithcode.com

Footnote 4: The attacker generates the attack examples and feeds them to the infected model for a malicious goal. For example, the attacker could attach the trigger to his/her bio image that is submitted to the border security system; by doing so, the attacker can bypass the face recognition of international criminals.

Footnote: It is worth noting that the sigmoid function is tuned on benign examples only, with the focus on reducing the residual error when the examples' values are fitted to a folded normal distribution.

Table footnotes: 1) Randomizing the size and location of the Gaussian perturbation and utilizing a dynamic standard deviation. 2) B: Blue channel. 3) RGB: All channels.

TABLE XVII: The results without any enhancement compared with all enhancements.
| []
|
[
"The Link between Hot and Cool Outflows",
"The Link between Hot and Cool Outflows",
"The Link between Hot and Cool Outflows",
"The Link between Hot and Cool Outflows"
]
| [
"Jorick S Vink [email protected] \nArmagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland\n",
"A A C Sander \nArmagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland\n",
"E R Higgins \nArmagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland\n",
"G N Sabhahit \nArmagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland\n",
"Jorick S Vink [email protected] \nArmagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland\n",
"A A C Sander \nArmagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland\n",
"E R Higgins \nArmagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland\n",
"G N Sabhahit \nArmagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland\n"
]
| [
"Armagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland",
"Armagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland",
"Armagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland",
"Armagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland",
"Armagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland",
"Armagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland",
"Armagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland",
"Armagh Observatory and Planetarium\nCollege HillBT61 9DGArmaghNorthern Ireland"
]
| []
| The link between hot and cool stellar outflows is shown to be critical for correctly predicting the masses of the most massive black holes (BHs) below the so-called pair-instability supernova (PISN) mass gap. Gravitational Wave (GW) event 190521 allegedly hosted an "impossibly" heavy BH of 85 M⊙. Here we show how our increased knowledge of both metallicity Z and temperature dependent mass loss is critical for our evolutionary scenario of a low-Z blue supergiant (BSG) progenitor of an initially approx 100 M⊙ star to work. We show using MESA stellar evolution modelling experiments that as long as we can keep such stars above 8000 K such low-Z BSGs can avoid strong winds, and keep a very large envelope mass intact before core collapse. This naturally leads to the Cosmic Time dependent maximum BH function below the PISN gap. | 10.1017/s1743921322000631 | [
"https://export.arxiv.org/pdf/2201.12364v2.pdf"
]
| 246,430,480 | 2201.12364 | 5935a850fd241c29b63c124e83a5f69fed55b377 |
The Link between Hot and Cool Outflows
11 Feb 2022
Jorick S Vink [email protected]
Armagh Observatory and Planetarium
College HillBT61 9DGArmaghNorthern Ireland
A A C Sander
Armagh Observatory and Planetarium
College HillBT61 9DGArmaghNorthern Ireland
E R Higgins
Armagh Observatory and Planetarium
College HillBT61 9DGArmaghNorthern Ireland
G N Sabhahit
Armagh Observatory and Planetarium
College HillBT61 9DGArmaghNorthern Ireland
The Link between Hot and Cool Outflows
11 Feb 2022. The Origin of Outflows in Evolved Stars, Proceedings IAU Symposium No. 366, 2022, A.C. Editor, B.D. Editor & C.E. Editor, eds. Keywords: winds, mass loss, black holes, massive stars, stellar evolution.
The link between hot and cool stellar outflows is shown to be critical for correctly predicting the masses of the most massive black holes (BHs) below the so-called pair-instability supernova (PISN) mass gap. Gravitational Wave (GW) event 190521 allegedly hosted an "impossibly" heavy BH of 85 M⊙. Here we show how our increased knowledge of both metallicity Z and temperature dependent mass loss is critical for our evolutionary scenario of a low-Z blue supergiant (BSG) progenitor of an initially approx 100 M⊙ star to work. We show using MESA stellar evolution modelling experiments that as long as we can keep such stars above 8000 K such low-Z BSGs can avoid strong winds, and keep a very large envelope mass intact before core collapse. This naturally leads to the Cosmic Time dependent maximum BH function below the PISN gap.
Introduction
Accurate mass-loss rates - as a function of effective temperature, Ṁ = f(T_eff) - are needed for making reliable predictions for the evolution of the most massive stars, including the black hole (BH) mass function with respect to metallicity Z (see Sander et al. these proceedings).
Over the last five years gravitational wave (GW) observatories have shown the existence of very heavy black holes. The current record holder is the primary object in the GW event 190521 with 85 M⊙. Because this BH mass is almost twice as large as the generally accepted lower boundary of the pair-instability (PI) supernova (SN) mass gap at approximately 50 M⊙ (Farmer et al. 2019; Woosley & Heger 2021), the GW event discoverers argued that the 85 M⊙ BH is most likely a second-generation BH, as it would be impossible for a progenitor star to have directly collapsed into a BH within the PISN mass gap spanning 50 to 130 M⊙ (Abbott et al. 2020).
In this contribution we show that this conclusion could be premature, as we have constructed a robust blue supergiant (BSG) scenario for the collapse of a very massive star (VMS) of order 100 M⊙ at low Z. The key physics involves both the Z-dependence and the effective temperature dependence of the mass-loss rate of evolved supergiants.
Overview of hot and cool mass-loss rates
When a massive star burns hydrogen (H) in the core it traditionally evolves from the hot blue side to the cool red part of the stellar Hertzsprung-Russell diagram (HRD). When this takes place at approximately constant luminosity L, the key physical parameters are (i) the amount of mixing by processes such as core overshooting, as these set the duration of the wind mass-loss phase during H-burning and the total mass being lost on the main sequence, as well as (ii) the absolute rate of mass loss (dependent on the host galaxy Z) and (iii) how this mass loss varies from the hot to the cool side of the HRD. For hot stars above 10 kK the winds are driven by gas opacity (see Vink 2022 for a recent review) and, while the absolute mass-loss values are still under debate (e.g. Björklund et al. 2021), the implication that mass-loss rates of hot-star winds are Z-dependent is undisputed. The exact Z dependence still needs to be established, however.
When T_eff drops during stellar evolution - starting from approx. 40 kK - the mass-loss rate is first expected to drop (see Fig. 1). The reason for this behaviour is that the line acceleration is set by the product of the stellar flux and the opacity. When T_eff drops, the stellar flux gradually moves from the ultraviolet (UV) part of the spectral energy distribution (SED) to the optical, while the opacity is still predominately 'left behind' in the UV part of the SED. In other words, there is a growing mismatch between the flux and the opacity, implying the flux-weighted opacity drops, and so does Ṁ.

Figure 2. The Z dependence of WR stars. In stellar models before 2005 the mass-loss rate of WR stars was assumed to be independent of the host galaxy Z ("OLD"), as the high abundance of self-enriched elements such as carbon (C) was thought to be dominant. Vink & de Koter (2005) showed that despite the huge amount of C in WR atmospheres the winds are nonetheless predominately driven by Fe and thus strongly Fe-dependent, indicated by "NEW". See Sander et al. these proceedings for new mass-loss predictions.
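The flux-weighted opacity argument above (the growing mismatch between flux and opacity as T_eff drops) can be written schematically as follows. This is a standard textbook way of expressing the radiative line acceleration, added here purely as an illustration and not taken from this contribution:

\[
  g_{\rm rad} \simeq \frac{\kappa_F F}{c} = \frac{\kappa_F L}{4\pi r^{2} c},
  \qquad
  \kappa_F \equiv \frac{1}{F}\int_0^{\infty} \kappa_\nu F_\nu \, d\nu .
\]

As T_eff decreases, F_ν shifts out of the UV where most of the line opacity κ_ν is concentrated, so the flux-weighted mean opacity κ_F, and with it g_rad and Ṁ, decrease - until Fe recombination boosts the relevant opacity again at the bi-stability jumps discussed next.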
This situation changes abruptly when the dominant line-driving element iron (Fe) recombines, causing a bi-stability jump (BSJ) in the wind parameters. The first recombination is that from Fe iv to iii at approx. 21 kK; the second recombination, from Fe iii to ii, takes place below 8800 K (Petrov et al. 2016). The exact location of this second BSJ has not yet been determined, as the current generation of sophisticated co-moving frame (CMF) radiative transfer model atmospheres has not yet been able to converge below the recombination of H at approx. 8000 K. This uncertainty of mass loss in the yellow supergiant / hypergiant phase is of key relevance for setting the Humphreys-Davidson (HD) limit (Gilkis et al. 2021; Sabhahit et al. 2021), and YSG mass loss should therefore play an important role empirically (e.g. Koumpia et al. 2020; Oudmaijer & Koumpia these proceedings). An accurate T_eff-dependent Ṁ is also critical for constructing the next generation of hydrodynamical stellar evolution models for both luminous blue variable and YSG phases (Grassitelli et al. 2021).
Current mass-loss recipes in use -and the link to wind physics
One of the most used mass-loss recipes currently employed in massive star models is the "Dutch" wind loss recipe in MESA (Paxton et al. 2013). In this collection of mass-loss prescriptions, massive stars undergo modest mass loss on the main sequence and enhanced mass loss below the first BSJ according to Vink et al. (1999, 2001). The second BSJ is not directly covered in the Dutch recipe, but it follows a similar approach as Brott et al. (2011), where the second BSJ is indirectly included by switching from the Vink et al. theoretical recipe to the empirical cool-star recipe of de Jager et al. (1988) at approx. 10 kK. This prescription yields relatively large mass-loss rates, although the physics of cool red supergiant (RSG) winds is still under debate, and this also means that whether or not RSG winds have a Z-dependence is presently unclear.

Figure 3. Evolution of our BSG progenitor model in a Hertzsprung-Russell diagram (HRD). The colour bar represents the core He abundance, with a yellow star showing the TAMS position, a blue star illustrating the end of core He-burning, and a red star marking the end of core O-burning. Blue dots (near the blue star) show time-steps of 50,000 years after core H exhaustion, where time is spent as a BSG (i.e. above 8 kK). Shaded regions highlight the area in the HRD where RSGs (red) evolve with dust-driven winds (as generally assumed, but the physical mechanism is still debated) or BSGs (blue) evolve with line-driven winds.
For stars evolving back to the hotter part of the HRD above 10 kK and with enriched atmospheres with a helium mass fraction Y larger than approx. 0.6, the models generally assume the empirical Wolf-Rayet (WR) recipe of Nugis & Lamers (2000), which scales with the total Z, i.e. including all elements heavier than He. Note that a scaling with this 'total' Z is unlikely to be physically correct, as Vink & de Koter (2005) showed the host galaxy Fe to be the main wind driver despite the larger abundances of self-enriched elements (see Fig. 2).
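The switching logic described in the two paragraphs above can be summarised with a short sketch. This is only a schematic illustration of the selection between regimes, not the actual MESA implementation; the individual rate prescriptions are passed in as placeholder callables.

def select_wind_rate(teff_k, surface_he_fraction,
                     hot_rate, cool_rate, wr_rate,
                     t_switch_k=1.0e4, y_wr=0.6):
    # Schematic "Dutch-style" selector (not MESA source code):
    # - He-enriched hot stars (Y > ~0.6, Teff > ~10 kK): WR-type recipe
    # - hot side (Teff > ~10 kK): theoretical line-driven recipe
    # - cool side (Teff < ~10 kK): empirical cool-supergiant recipe
    if teff_k > t_switch_k and surface_he_fraction > y_wr:
        return wr_rate()
    if teff_k > t_switch_k:
        return hot_rate()
    return cool_rate()

# toy usage with constant placeholder rates (Msun/yr), purely illustrative
if __name__ == "__main__":
    rate = select_wind_rate(
        teff_k=25_000.0, surface_he_fraction=0.3,
        hot_rate=lambda: 1e-6, cool_rate=lambda: 1e-5, wr_rate=lambda: 1e-5)
    print(f"selected mass-loss rate: {rate:.1e} Msun/yr")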
In order to account for Fe-dependent winds, as well as the knowledge that the second BSJ is located below 10 kK and even below 8800 K, an updated version of the Dutch wind mass-loss recipe was provided for MESA (see Sabhahit et al. 2021). With this improved treatment, stars typically have lower mass-loss rates in the BSG phase, which prevents excessive mass loss at low Z and helps stars maintain sufficient envelope mass to form very heavy BHs as long as they remain hotter than ∼8000 K (see Fig. 3).

Implications for impossible black holes over Cosmic Time

Vink et al. (2021) realized that enabling the collapse of a VMS to an 85 M⊙ BH requires two key ingredients. The first one is an intrinsically low Z, in order for Z-dependent mass loss not to evaporate the initial stellar mass. The second requirement involves a relatively low amount of core overshooting, equivalent to a step overshooting parameter α_ov of 0.1 or below. The reason for this second ingredient is three-fold.

Firstly, low overshooting keeps the star more compact, and the collapse of the entire envelope of a very massive BSG is easier to accomplish than that for a RSG (e.g. Fernandez et al. 2018). Secondly, a low overshooting keeps the core mass below the PISN limit, and enforces a larger envelope mass (e.g. Higgins & Vink 2019). Thirdly, if the star remains above the effective temperature of the second BSJ, the regime of high mass-loss rates at low T_eff can be avoided (see Figs. 1 & 3).

Vink et al. (2021) showed that at Z values below approx. 10% of the solar metallicity, initially 90-100 M⊙ stars could have core masses below the critical 37 M⊙ limit and collapse into 80-90 M⊙ BHs. Such impossibly heavy BHs are firmly within the canonical PISN mass gap, which should therefore be adjusted. A schematic maximum BH mass with Z is shown in Fig. 4. The figure shows an almost twice as large maximum upper BH mass below the PISN gap at early Cosmic times at low Z, while for larger Z the maximum BH mass is directly set by stellar wind mass loss.
Figure 1. The first and second bi-stability jump in terms of the global wind parameter Q (see Petrov et al. 2016). Note that the exact location of the second jump is below the lower temperature boundary, i.e. with T_eff lower than 8800 K.

Figure 4. Maximum black hole mass as a function of Z or Cosmic Time. At low Z the maximum mass from our models is effectively doubled in comparison to earlier models and assumptions, while the maximum black hole mass at higher Z is set by stellar winds.
Abbott, R., et al. 2020, PhRvL, 125, 1102
Björklund, R., Sundqvist, J.O., Puls, J., & Najarro, F. 2021, A&A, 648, 36
Brott, I., et al. 2011, A&A, 530, 115
Farmer, R., Renzo, M., de Mink, S.E., Marchant, P., & Justham, S. 2019, ApJ, 887, 53
Fernandez, R., Quataert, E., Kashiyama, K., & Coughlin, E.R. 2018, MNRAS, 476, 2366
Gilkis, A., et al. 2021, MNRAS, 503, 1884
Grassitelli, L., et al. 2021, A&A, 647, 99
Higgins, E.R., & Vink, J.S. 2019, A&A, 622, 50
de Jager, C., Nieuwenhuijzen, H., & van der Hucht, K.A. 1988, A&AS, 72, 259
Koumpia, E., et al. 2020, A&A, 635, 183
Nugis, T., & Lamers, H.J.G.L.M. 2000, A&A, 360, 227
Paxton, B., et al. 2013, ApJS, 208, 4
Petrov, B., Vink, J.S., & Grafener, G. 2016, MNRAS, 458, 1999
Sabhahit, G.N., Vink, J.S., Higgins, E.R., & Sander, A.A.C. 2021, MNRAS, 506, 4473
Vink, J.S., de Koter, A., & Lamers, H.J.G.L.M. 1999, A&A, 350, 181
Vink, J.S., de Koter, A., & Lamers, H.J.G.L.M. 2001, A&A, 369, 574
Vink, J.S., & de Koter, A. 2005, A&A, 442, 587
Vink, J.S., & Sander, A.A.C. 2021, MNRAS, 504, 2051
Vink, J.S., Higgins, E.R., Sander, A.A.C., & Sabhahit, G.N. 2021, MNRAS, 504, 146
Vink, J.S. 2022, ARAA, in press (arXiv:2109.08164)
Woosley, S., & Heger, A. 2021, ApJ, 912, 31
| []
|
[
"Generating the right evidence at the right time: Principles of a new class of flexible augmented clinical trial designs",
"Generating the right evidence at the right time: Principles of a new class of flexible augmented clinical trial designs"
]
| [
"C Dunger-Baldauf \nStatistical Methodology\nNovartis Pharma AG\nBaselSwitzerland\n",
"R Hemmings \nConsilium Salmonson & Hemmings c Section for Medical Statistics\nCenter for Medical Statistics, Informatics, and Intelligent Systems\nMedical University of Vienna\nViennaAustria\n",
"F Bretz \nStatistical Methodology\nNovartis Pharma AG\nBaselSwitzerland\n",
"B Jones \nNovartisUK\n",
"A Schiel \nNorwegian Medicines Agency f University of Oxford\nThe Alan Turing Institute Correspondence: Chris Holmes\n\n",
"C Holmes [email protected] "
]
| [
"Statistical Methodology\nNovartis Pharma AG\nBaselSwitzerland",
"Consilium Salmonson & Hemmings c Section for Medical Statistics\nCenter for Medical Statistics, Informatics, and Intelligent Systems\nMedical University of Vienna\nViennaAustria",
"Statistical Methodology\nNovartis Pharma AG\nBaselSwitzerland",
"NovartisUK",
"Norwegian Medicines Agency f University of Oxford\nThe Alan Turing Institute Correspondence: Chris Holmes\n"
]
| []
| The past few years have seen an increasing number of initiatives aimed at integrating information generated outside of confirmatory randomised clinical trials (RCTs) into drug development. However, data generated non-concurrently and through observational studies can provide results that are difficult to compare with randomised trial data. Moreover, the scientific questions these data can serve to answer often remain vague. Our starting point is to use clearly defined objectives for evidence generation, which are formulated towards early discussion with health technology assessment (HTA) bodies and are additional to regulatory requirements for authorisation of a new treatment. We propose FACTIVE (Flexible Augmented Clinical Trial for Improved eVidencE generation), a new class of study designs enabling flexible augmentation of confirmatory randomised controlled trials with concurrent and close-to-real-world elements. These enabling designs facilitate estimation of certain treatment effects in the confirmatory part and other, complementary treatment effects in a concurrent real-world part. Each stakeholder should use the evidence that is relevant within their own decision-making framework. High quality data are generated under one single protocol and the use of randomisation ensures rigorous statistical inference and interpretation within and between the different parts of the experiment. Evidence for the decisionmaking of HTA bodies could be available earlier than is currently the case. | 10.1002/cpt.2869 | [
"https://export.arxiv.org/pdf/2210.15264v3.pdf"
]
| 253,157,796 | 2210.15264 | 1582c8d33e22897a2ec81928a162e7d2f14bc2ea |
Generating the right evidence at the right time: Principles of a new class of flexible augmented clinical trial designs
C Dunger-Baldauf
Statistical Methodology
Novartis Pharma AG
BaselSwitzerland
R Hemmings
Consilium Salmonson & Hemmings c Section for Medical Statistics
Center for Medical Statistics, Informatics, and Intelligent Systems
Medical University of Vienna
ViennaAustria
F Bretz
Statistical Methodology
Novartis Pharma AG
BaselSwitzerland
B Jones
NovartisUK
A Schiel
Norwegian Medicines Agency f University of Oxford
The Alan Turing Institute Correspondence: Chris Holmes
C Holmes [email protected]
Generating the right evidence at the right time: Principles of a new class of flexible augmented clinical trial designs
The past few years have seen an increasing number of initiatives aimed at integrating information generated outside of confirmatory randomised clinical trials (RCTs) into drug development. However, data generated non-concurrently and through observational studies can provide results that are difficult to compare with randomised trial data. Moreover, the scientific questions these data can serve to answer often remain vague. Our starting point is to use clearly defined objectives for evidence generation, which are formulated towards early discussion with health technology assessment (HTA) bodies and are additional to regulatory requirements for authorisation of a new treatment. We propose FACTIVE (Flexible Augmented Clinical Trial for Improved eVidencE generation), a new class of study designs enabling flexible augmentation of confirmatory randomised controlled trials with concurrent and close-to-real-world elements. These enabling designs facilitate estimation of certain treatment effects in the confirmatory part and other, complementary treatment effects in a concurrent real-world part. Each stakeholder should use the evidence that is relevant within their own decision-making framework. High quality data are generated under one single protocol and the use of randomisation ensures rigorous statistical inference and interpretation within and between the different parts of the experiment. Evidence for the decision-making of HTA bodies could be available earlier than is currently the case.
INTRODUCTION
To support informed decision making by pharmaceutical companies, regulators, health technology assessment (HTA) bodies, payers, patients and physicians, clear descriptions of the benefits and risks of a treatment for a given medical condition should be made available in a timely fashion. The current paradigm is to generate evidence in a sequential manner where at each stage the focus is on one stakeholder and the information they need in order to progress to the next stage. This is inefficient. In this paper we present FACTIVE, a new paradigm where information for regulatory agencies and HTA bodies is generated concurrently via a new class of augmented clinical trial designs. FACTIVE designs study not only patients who are suitable for entry into a conventional Phase 3 randomized controlled trial (RCT) that is conducted under wellcontrolled conditions but bridges to a broader population, or different experimental conditions, that can be tailored to address a particular HTA question or reflect a particular healthcare system. The proposed framework is different from existing approaches (as we explain later) and allows the generation of the right evidence at the right time, such that key decisions made after Marketing Authorisation (MA) can be made sooner than would otherwise be the case. Figure 1 (upper panel) summarizes the current evidence generation and decision-making process: Following confirmatory Phase 3 trials, an application for MA of a new treatment is submitted to a regulatory agency. MA is accompanied by a period of discussion and agreement with payers (e.g., HTA bodies, government agencies, medical insurance companies), who will reimburse the cost of the treatment and influence the price at which the treatment should be marketed. The treatment is then placed on the market (❶ in Figure 1). Physicians, health care providers and patients are subsequently informed how the new treatment is positioned in the landscape of already available treatment options (❷) and at some time later (❸) the maximum uptake of its use is achieved. Alongside this, post-authorization trials are conducted to learn more about how the treatment performs in normal clinical practice.
CURRENT STATUS
[Insert Figure 1 around here]
The sequential nature of generating the evidence for different stakeholders is immediately visible. The current main driver when designing confirmatory RCTs is to provide sufficient evidence to regulatory agencies of the efficacy and safety of a new treatment in order that it may be granted MA. Additional post-authorization trials provide further evidence of the treatment's effectiveness in a broader patient population under clinical practice conditions. Evidence from such trials, in addition to that provided by the confirmatory trials, is then used to inform further market access and pricing discussions with HTA bodies, taking into account the therapeutic landscape. The postauthorization trials can be RCTs, or open-label extension phases to RCTs, but are commonly undertaken as observational studies. The pre-and post-authorisation experiments are conducted independently of each other, such that if estimates of a particular treatment effect differ between experiments, the reasons for that difference cannot be determined with certainty.
Multiple initiatives have attempted to streamline the evidence generation that will support regulator, HT assessor, payer, prescriber, and patient decision-making [Eichler et al. 2016;Califf et al. 2016;Ray et al. 2022]. Efforts in this direction must address the fact that different stakeholders have different questions to address, including the benefit-risk of an intervention within a specific target population vs the cost-effectiveness of an intervention including societal perspectives and a specific healthcare budget. Decision-making by the European Medicines Agency (EMA) is centralised on behalf of the European Union (EU) whereas decisions made by one or more country specific HTA bodies are national or local. Different stakeholders will also identify different sources of uncertainty and evidence gaps they want to see addressed, preferably during the clinical evidence generation phase, or at least post-authorisation. This paper focuses on evidence generation to meet the needs of regulatory agencies and HTA bodies, as they make the two initial and most critical public-sector decisions to determine patient access to medicinal products. The EU is taken as a jurisdiction for illustration, though the benefits of the approach described can be realised more generally.
A point of particular focus for discussions on streamlining evidence generation has been the design and conduct of RCTs. All stakeholders recognise the high internal validity of this experimental design: the fact that reliable treatment effect estimates for well-defined research questions can be provided through a design where experimental conditions are controlled and well-understood. Indeed, deriving a reliable estimate and being able to interpret the magnitude of treatment effects in the context of experimental conditions that are documented and understood are the attributes of the RCT that make it the "gold-standard". In addition, all stakeholders understand that there can be a scientific basis to generalise, or extrapolate, inferences from a clinical trial dataset to a broader patient population or clinical context, though the basis for extrapolation (e.g., other clinical trial data, pharmacological understanding of the mechanism of action, pharmacological modelling), the extent of the extrapolation (to what proportion of the target population does the extrapolation apply) and the type and strength of evidence needed to support extrapolation is not documented and hence not unified for benefitrisk vs cost-effectiveness decisions. Importantly, an RCT design in which the experimental conditions are controlled too tightly can leave all stakeholders questioning its external validity, i.e., the applicability of the trial results to the intended patient population and therapeutic use in clinical practice.
External validity of a trial is assessed in relation to its inclusion and exclusion criteria (vs the population indicated for clinical practice) and its experimental conditions (vs the therapeutic use expected in clinical practice) such as the outcome variable or comparator, permitted concomitant medications or combinations of treatments. The different mandates for regulators and HT assessors can also lead to different clinical outcomes being prioritised for the assessment of efficacy or effectiveness with a consequence for the periods of treatment and follow-up that are of interest, and potentially different treatment effects of interest (i.e., estimands) [ICH, 2019;Remiro-Azócar, 2022]. Importantly, whereas a given clinical development programme will be targeted towards a centralised regulatory approval, each HT assessment of the applicability of the trial results to their specific national or local jurisdiction might differ, for example, in relation to other products that are/are not reimbursed and used locally, or the precise target population for which cost-effectiveness can be justified. Concerns over the external validity of a particular RCT does not represent a fundamental flaw in that study design and conduct, only that the specific trial design cannot directly address the needs of a specific HTA body. Section 1 of the Supplementary Material gives additional discussion on external validity and extrapolation.
The result of these dynamics is that a regulator might authorise a product, perhaps with postauthorisation evidence generation to address identified uncertainties, whilst an HT assessor might not feel fully informed about how the product will impact their specific healthcare system and budget, and whether a positive decision on cost-effectiveness can be justified. To address a broader set of stakeholder needs, RCTs might be made larger and/or longer and/or less well controlled in respect of patient population and experimental conditions. In addition, different endpoints or multiple comparators might be used. In reality, however, complementary sources of evidence generation are more efficiently used to provide answers to general and specific questions from HTA bodies. An often-overlooked fact is that the information required to strengthen the external validity can be generated concurrently with the trial data. Additional evidence is not necessarily generated to replicate trial results, rather additional data can explore the effects of treatment beyond the patient population and experimental conditions of the RCT. This paper discusses an experimental design that provides these additional data and seeks to deliver information to all stakeholders in a timely manner, preserving efficient evidence generation for each stakeholder and promoting a methodologically robust approach in an experimental design where different parts are no longer independent.
FLEXIBLE AUGMENTATION TO GATHER THE RIGHT EVIDENCE AT THE RIGHT TIME
We argue that the understanding of the relative effectiveness and time to peak uptake of a new treatment can be enhanced, without compromising safety, by generating additional rigorous evidence throughout the confirmatory development process. To do so we propose FACTIVE, a new class of augmented RCT designs aimed at widening the evidence base of traditional RCTs. The lower panel of Figure 1 summarizes the potential impacts of using such an augmented RCT design (which we describe below): Discussions with HTA bodies are better informed and shortened, along with a potentially greater maximum uptake of treatment use.
A key feature of the new paradigm is that augmentation is wrapped around a conventional RCT that is designed in the usual manner to focus on treatment efficacy and safety in a controlled experimental environment. The consequence is that the core RCT, or RCTs, which form the pivotal evidence for a regulatory approval, is ring-fenced. A cross-disciplinary team can consider the specific objectives, subsequent design criteria and the timing for augmentation with real-world elements, including consideration of market value and patient heterogeneity as well as early evidence of efficacy and safety in the core RCT. The augmentation can resolve uncertainties that could not be achieved by simple improvements to the RCT. For example, providing some insights into multiple, different combinations of active comparators are classic examples of HTA requirements that might dramatically increase the size, duration and cost of a confirmatory RCT.
We set no limitations to the scope of research questions that can be addressed through augmentation. The questions to be answered by augmentation may be general: to provide estimates of treatment effects in the target population reflecting routine clinical care and under conditions reflecting clinical practice; to facilitate data integration with an existing external data source, by augmenting the RCT with subject eligibility criteria and conditions matched to the external resource; or, alternatively, targeted to obtain complementary information on a specific relaxation of an inclusion/exclusion criteria or different methods of outcome assessment. As an example of a specific question, consider a sponsor needing to address differences in national treatment guidelines regarding a background therapy. Instead of including patients on various background therapies, the core RCT could be conducted on one background therapy. Information as add-on to various background therapies (including the one in the core RCT) could be generated in the augmented part, whilst the patient population and experimental conditions remain otherwise similar, to establish that there is no impact of the background therapy or to characterise the impact that changing background therapy might have on the treatment effect that was observed in the RCT.
FACTIVE supplements the evidence provided by the core RCT for MA through an increased sample size with additional information from close-to-real-world (cRW) elements carefully selected according to safety, feasibility and, critically, the outstanding questions to be answered. There are established mechanisms for sponsors to interact with regulators (e.g., Scientific Advice procedures in the EU) to understand preferences and standards for a future application. Early dialogue with HTA bodies can provide valuable input about the context in which a treatment might be assessed, and important considerations that are not addressed in the core RCT. For example, PICO (Population, Intervention, Comparator(s), Outcomes) provides a framework to compare the evidence being generated to the question of interest for the HTA and, while the PICO can change over time, it can be used to explore whether limitations to the core RCT evidence should be anticipated due to, e.g., missing sub-populations, differences in treatment algorithms, or preferences for other comparators. Identifying potential evidence gaps can then inform the purpose, and consequently the design, of the augmentation. Again, the augmentation is not designed to serve as confirmation of the RCT evidence but represents a basis to provide data, or to bridge the RCT data, to the evidence requirements of other stakeholders.
THE FACTIVE DESIGN
The aim of the FACTIVE design is to generate evidence rigorously and contemporaneously, with high-quality information obtained through randomisation. A sketch of the augmented design is given in Figure 2.
Two types of patients are identified in FACTIVE: Those eligible for the core RCT (green) and those who are from a broader population (blue). In addition, two experimental settings are identified, the RCT conditions and the alternate cRW experimental settings (e.g., those closer to clinical practice). Both types of patients are first randomized to be studied under RCT treatment conditions or under cRW treatment conditions. The RCT-eligible patients who are studied under RCT conditions form Part A of the design, the core RCT used for regulatory submission. Part B is comprised of additional RCT-eligible patients (green) and those from the broader population (blue) randomized to cRW treatment conditions and patients from the broader population randomized to RCT treatment conditions. Within each part of the design, patients are randomized to either the experimental treatment or control. This allows all conventional RCT analyses for authorization purposes to be conducted with the evidence generated in Part A, supplemented by other analyses addressing specific questions in Part B.
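As an illustration of the two-stage randomisation just described, the following short simulation allocates screened patients to the design cells. It is a schematic sketch under simple assumptions (fixed allocation probabilities, an equal treatment:control split) and is not a protocol specification.

import random
from collections import Counter

def allocate_patient(rct_eligible, p_core=0.5, p_rct_conditions=0.5, rng=random):
    """Return (part, conditions, arm) for one screened patient.
    RCT-eligible patients: randomised to Part A (RCT conditions) or Part B
    (cRW conditions). Broader-population patients: Part B only, randomised to
    cRW or RCT conditions. Everyone is then randomised to treatment or control."""
    if rct_eligible:
        part, conditions = (("A", "RCT") if rng.random() < p_core
                            else ("B", "cRW"))
    else:
        part = "B"
        conditions = "RCT" if rng.random() < p_rct_conditions else "cRW"
    arm = "experimental" if rng.random() < 0.5 else "control"
    return part, conditions, arm

if __name__ == "__main__":
    rng = random.Random(42)
    cohort = [allocate_patient(rct_eligible=(i % 2 == 0), rng=rng)
              for i in range(1000)]
    for cell, n in sorted(Counter(cohort).items()):
        print(cell, n)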
[Insert Figure 2 around here]
The nested structure facilitates rigorous statistical analyses for causal effects of interest, as explained in Section 2 of the Supplementary Material. The augmented design makes it possible to estimate and compare treatment effects, and effect changes, across the four combinations of subject eligibility (RCT eligible patients versus broader population) and treatment conditions (RCT versus cRW treatment conditions).
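To make the "four combinations" point concrete, a minimal sketch of the corresponding analysis is given below: simulate outcomes for each cell defined by eligibility (RCT-eligible vs broader) and conditions (RCT vs cRW), estimate the treatment effect within each cell as a difference in means, and form the contrast asking whether the effect changes between RCT and cRW conditions. This is an illustration only, not the estimand or analysis model of an actual protocol; all numbers are invented.

import numpy as np

rng = np.random.default_rng(7)

# assumed true treatment effects per (eligibility, conditions) cell -- toy values
true_effect = {("eligible", "RCT"): 2.0, ("eligible", "cRW"): 1.6,
               ("broader", "RCT"): 1.4, ("broader", "cRW"): 1.0}

def simulate_cell(effect, n=200, sd=3.0):
    control = rng.normal(0.0, sd, n)
    treated = rng.normal(effect, sd, n)
    est = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
    return est, se

estimates = {cell: simulate_cell(eff) for cell, eff in true_effect.items()}
for cell, (est, se) in estimates.items():
    print(f"{cell}: treatment effect = {est:.2f} (SE {se:.2f})")

# interaction-type contrast: does the effect differ between cRW and RCT
# conditions (here within the RCT-eligible patients)?
d_rct, se_rct = estimates[("eligible", "RCT")]
d_crw, se_crw = estimates[("eligible", "cRW")]
diff = d_crw - d_rct
se_diff = np.sqrt(se_rct**2 + se_crw**2)
print(f"change in effect (cRW - RCT conditions): {diff:.2f} (SE {se_diff:.2f})")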
The augmented design is complementary to existing trial designs which look to combine RCT and cRW elements such as seamless Phase 3/4 designs [Eichler, 2010] and clinical trials using external control information [Schmidli et al., 2020]. The distinguishing feature of FACTIVE is the collection of data on randomized cRW elements under the new treatment, before regulatory approval, and concurrently with corresponding RCT data, thus enhancing the available evidence. Note that the left-hand side of Part B in Figure 2 could be implemented as a pragmatic trial. It fulfils the pragmatic study criteria for randomized studies [Zuidgeest, et al, 2017] under clinical practice treatment conditions, of patients expected in routine clinical care, and of assessments meaningful to patients/physicians. However, FACTIVE is broader than conventional pragmatic trial designs: while concurrency reduces the sources of time-related bias, the core RCT in Part A together with this left-hand pragmatic part does not allow a rigorous comparison of cRW and RCT treatment effects beyond the RCT-eligible patients. The starting point for FACTIVE follows the design of the core RCT, which is subsequently assessed for cRW augmentation. The additional part on the righthand side, where patients from the broader population are treated under RCT treatment conditions, then provides a more comprehensive understanding of treatment effects.
There can be flexibility also in the design of Part B. Note that removing the cRW components from the augmented design simply returns the original RCT. Note also that the design elements are flexible: If few inclusion/exclusion criteria are used in the RCT, the concept of a broader population might be redundant. Likewise, if the experimental conditions used in the RCT are close to clinical practice it might not be necessary to examine the treatment under alternative conditions. It is not even necessary for the same control arm to be used in both parts of the experiment, though then some of the benefits of the design are lost. If different controls are used in Part A and Part B, network meta-analysis [Dias et al., 2018] is one method that can be used to bridge from one control to the other using relevant external data. To justify the proportion of patients from the RCT-eligible and the broader patient population along with the respective sample sizes, whilst one approach might be to adequately power a comparison of treatment vs control in cRW conditions, other approaches are conceivable. Returning to the example above of investigating the effects of an experimental treatment on different background treatments, the amount of information to be generated might be thought of as similar to generating evidence across subgroups to enable an assessment of consistency (CHMP, 2019). As stated above, the objective is not to replicate findings from the core RCT. In particular when creating evidence to bridge from the RCT, or to give confidence in the external validity of the RCT results, precision of estimates might be more important and a better basis for planning than tests of statistical significance.
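Where Part A and Part B use different control arms, the bridging idea mentioned above can be illustrated with the simplest anchored indirect comparison (the Bucher method, a special case of network meta-analysis): if each experiment estimates an effect against its own comparator, an indirect estimate of "new treatment versus the other comparator" is obtained by differencing, with variances adding. A hedged sketch, not a recommendation for any specific trial:

import math

def indirect_comparison(d_ab, se_ab, d_cb, se_cb):
    """Anchored indirect comparison of A vs C via common comparator B.
    d_ab = effect of A vs B, d_cb = effect of C vs B (same scale, e.g. mean
    difference or log hazard ratio). Returns (estimate, standard error) for A vs C."""
    d_ac = d_ab - d_cb
    se_ac = math.sqrt(se_ab**2 + se_cb**2)
    return d_ac, se_ac

if __name__ == "__main__":
    # toy numbers: A vs B from one part of the design, C vs B from external data
    est, se = indirect_comparison(d_ab=-0.30, se_ab=0.10, d_cb=-0.10, se_cb=0.12)
    print(f"A vs C: {est:.2f} (SE {se:.2f})")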
DISCUSSION
The development of new treatments is a continuous learning process where evidence collected in early phases contributes to decisions made in later phases and where techniques commonly used in exploratory development can continue to be used on confirmatory data (e.g., clinical pharmacology modelling) [Sheiner, 1997]. Even in confirmatory Phase 3 trials, modifications can be made as knowledge grows during their execution as, for example, in an adaptive trial with dose selection [CHMP, 2007;FDA, 2019], or relaxation of an inclusion criterion after preliminary safety data have been reviewed. Augmenting confirmatory RCTs (Part A in Figure 2) with tailored cRW elements (Part B in Figure 2) can be considered a natural extension of this process. Whilst FACTIVE will not be a suitable approach to evidence generation in every programme, we think all programmes can benefit from a discussion on the merits of structured and concurrent evidence generation beyond the core RCT. The key is not to implement a fixed design or to follow a checklist, but to carefully consider uncertainties or evidence gaps that are critical to address through evidence generation beyond a well-designed confirmatory RCT.
In contrast to the current practice, FACTIVE offers more timely evidence generation and an increased potential to investigate and quantify different modifiers of the treatment effect. Under the current approach, an estimated treatment effect might differ between pre-and postauthorization experiments for reasons that are often not fully understood. This might be due to changes in care over time, changes in patients recruited, perhaps due to lack of or different choice of control arm or changes in experimental conditions, including methods or timings of assessments applied, adherence to treatment or use of concomitant treatments, the choice of investigational sites, etc. Alternatively, estimands [ICH, 2019] may differ (intentionally or unintentionally), such that the experiments address different clinical questions of interest. Compared to uncontrolled observational studies, however, the collection of randomized data based on cRW elements concurrently with the core RCT data allows for a unified statistical analysis with nested models, without the biases inherent when comparing or integrating data from different experiments. If cRW data are collected from observational studies after the RCT then assumptions on the impact of potential confounders (untestable with the data to hand) are needed for a joint statistical analysis to proceed.
The time to initiate Part B would depend on having clarity on the research questions to be addressed and perhaps on accumulating evidence from Part A, for example into the safety of exposing a broader patient population or relaxing the experimental conditions. However, an obvious disadvantage of staggering the start of Part B would be the reduction or elimination of the overlap in time between the two parts. Only with the two parts of the experiment conducted concurrently, enabling randomisation of patients between the different parts of the experiment and between experimental conditions, can the full strength of the design and insights into effect modification be leveraged.
Of course, there may be barriers to the adoption of FACTIVE. We acknowledge our ideas have the potential to be disruptive and may not be taken up easily by all stakeholders. However, in order to break the current linear thinking, changes in mindset are needed, which will take concerted efforts by all stakeholders. The most important change is the move towards an active consideration of the right time for evidence generation, taking account all available information. The optimal timeline will not always be the conventional sequential one (Figure upper panel). A paradigm shift is needed towards understanding that complementary evidence can, and where possible should, be generated in parallel but without the intention to change the framework for regulatory or HTA assessment. FACTIVE aligns with initiatives such as the EMA/HTA parallel Scientific Advice procedure that have promoted early consideration of the different stakeholder needs and post-authorisation evidence generation. Maximising the benefits of this requires full engagement of drug developers bringing all relevant disciplines into discussions from an early stage.
Having the RCT and cRW data available simultaneously to both regulatory agencies and HTA bodies is a new paradigm. How much weight each party should give to the two types of evidence will be context dependent and specific to the designs and results of the different experiments in the context of the totality of evidence generated. It is important that stakeholders are well versed in critically appraising the strengths and weaknesses of different experimental approaches. Optimal stakeholder decision-making cannot be based on the rhetoric that RCTs have inadequate external validity and real-world experiments have inadequate internal validity. Each experiment should be judged on its own merits with an understanding that it is possible to generalise results from an RCT if potential effect modification is understood and that it is possible to interpret evidence of benefit from "real-world" experiments when well-conducted and reported. Randomisation is a strength in the cRW experiment even if the experimental conditions are less well controlled.
As with any design, there is the potential for misuse. The potential to generate data in a broader patient population under cRW conditions should not be used as a reason for the sponsor to tighten the conditions and lessen the external validity of a core RCT. On the other hand, the potential to generate additional evidence in a timely manner should not mean that the demands of regulators and HTAs increase. We reflect further on the EMA/HTA parallel Scientific Advice procedure, whereby despite a tri-partite conversation that ranges wider than only a regulator's or a HTA's individual needs, each stakeholder gives advice to drug developers in a way that reflects and is confined to their own respective legislative basis and mandate. The parallel to FACTIVE is the design of fit-for-purpose RCTs with a vehicle to generate evidence on additional research questions so that stakeholders can make timely decisions, but without altering the established decision-making frameworks. Having clear objectives for the concurrent evidence generation will help to clarify the relevance of the evidence to the different stakeholders.
In conclusion, through FACTIVE we propose a class of augmented randomized controlled trial designs involving the concurrent collection of RCT and cRW data based on close-to-real-world elements in a way that is optimally and sequentially organized by generating the right evidence at the right time. This approach retains the distinctiveness of the core RCT and the questions it seeks to answer from additional research questions aimed at investigating the performance of the treatment in a broader population under cRW conditions or to investigate one or more critical, specific limitations to the RCT design.
Through the use of randomization, FACTIVE ensures that the data on cRW elements are of similar high quality to conventional RCT data and available in closer proximity to the time of MA and able to inform initial HTA discussions, accelerating the journey of the novel treatment from discovery to patient. Discussions with regulators and HTAs can take place either simultaneously or in rapid sequence after completion of the core RCT with the aim that treatments can be made available to patients earlier. In addition, information available to physicians, health care providers and patients will be enhanced by the evidence provided by the augmented design. This could lead to a greater awareness of the benefits of the treatment, leading to its greater use in the community. Post-authorization trials would still be required to collect data on post-authorization aspects of the use of the new treatment, but these are likely to be fewer and smaller in number given the high quality cRW evidence already available.

Figure 2 (FACTIVE). Patients screened for the augmented Phase 3 trial are recruited from the target population expected to be treated post-authorization. These are either RCT-eligible (green) or from a broader patient population (blue), excluding those patients with, for example, a safety risk. RCT-eligible patients are randomized either to Part A (the core RCT) or to Part B, where they are treated under cRW conditions. In both parts they are additionally randomized to experimental treatment or control. Patients from the broader population are assigned to Part B only and are randomized to be treated under cRW conditions or under RCT conditions. In both situations they are then randomized to experimental treatment or control. In Part B, the proportions of green and blue patients treated under cRW conditions need to be agreed beforehand (e.g., to match epidemiology). RCT/cRW conditions define RCT/cRW design elements such as visit schedule, administration of treatment, monitoring, but exclude specifics about the patient population. The time to initiate Part B could be made dependent on accumulating evidence from Part A at an interim analysis (IA), as illustrated by the vertical offset and the arrow in yellow between Part A and B.
Supplementary Material
External validity and Extrapolation
Clinical trial design aims to strike a balance between providing data with internal validity, to reach a robust causal-inference conclusion on treatment effects, and with external validity, so that results can be deemed applicable to a broad target population and routine clinical practice. However, these priorities can compete, such that attempts to increase internal validity come at the cost of reducing external validity. Where trials have questionable external validity in the context of a particular regulatory or HTA assessment, the need to evaluate evidence in a contextualized manner explicitly requires an extrapolation to under-represented patient subsets or the different conditions of routine clinical practice. The issues and considerations involved are complex and nuanced. As we assert in the main paper: optimal stakeholder decision-making cannot be based on the rhetoric that RCTs have inadequate external validity. We illustrate the nuances below, presenting examples of issues related to the design and conduct of RCTs and exploring whether each issue truly presents a problem in respect of external validity for sponsor and stakeholder decision-making.

Confirmatory trials are generally run under well-controlled clinical conditions so that treatment effect estimates can be well understood and interpreted. Inclusion/exclusion criteria give a clear identification and description of a study population, and the protocol outlines the conditions of the experiment under which the treatment conditions are investigated and compared. To assist with interpretation, full information is available on, e.g., adherence, assessment methods and schedules, use of concomitant treatments and other elements that might moderate the estimated treatment effect. These are strengths of the RCT. As the heterogeneity of either a trial population or the trial conditions increases, the overall estimates of treatment effects become harder to interpret. Where the trial population or trial conditions include multiple factors that modify the treatment effect, an overall estimate of effect might be neither interpretable nor meaningful. Still, a trial that is more homogeneous in either regard is not necessarily a problem in respect of external validity. For example, if a drug shows a meaningful effect as an add-on to Drug A, there might exist a sound scientific rationale to consider that a meaningful effect would also exist when the drug is used as an add-on to Drug B. Then external validity is not compromised even if the add-on to Drug B is not tested in the trial. Similarly, patients with impaired renal function or certain concomitant medications might be excluded from an RCT on the basis that insufficient information is presently available to ensure patient safety. Evidence from clinical pharmacology studies might then be available to confirm similar PK/PD responses or the absence of drug-drug interactions, such that the results from the experiment can also be applied to these specific patient subgroups despite their exclusion from the trial. Again, external validity is not compromised. More generally, RCTs tend to include patient populations selected according to detailed inclusion and exclusion criteria. However, these criteria can be specified for various reasons, with greater or lesser impact on the generalisability of the study results, in terms of whether beneficial treatment effects exist and their magnitude, and whether risks of treatment might not have been identified or adequately characterised.
Some examples are given in Table 1.
Table 1: Reasons for RCT entry restriction and the impact on generalisability

Reason for restriction | Example | Impact
Known safety issues of the experimental product. | Based on information derived from earlier phase trials or trials of products of similar pharmacology. | Yes. Additional evidence generation would be required if it is ever intended to extend use into the population to be excluded.
Uncertainties in relation to the safety of the experimental compound. | Ongoing clinical pharmacology studies in patients with renal impairment, with consequences for exposure not yet known. | Yes. Will need to justify inclusion of these patients in the target population through additional evidence generation.
Known safety issues of the reference product. | A contraindication of the control arm will influence the trial's inclusion/exclusion criteria. | Perhaps none, if it can be argued that those aspects will not impact the efficacy or safety of the experimental treatment.
Enrichment. | More severe / rapidly progressing disease in order to accumulate events / deterioration more rapidly. | Perhaps none, if there is justification that relative effects will be similar in a broader patient population and that relevant benefits are preserved in absolute terms.
Increasing assay sensitivity. | Restricting the trial to patients that are stable in terms of symptoms and/or medications in attempts to more efficiently isolate the effects of the experimental treatment. | Justifications would need to be available that relevant (even similar) effect sizes would be observed when conditions were relaxed, otherwise additional evidence generation could be required.
2 Estimands and estimation in the augmented design
Introduction
In the following we consider an augmented design as illustrated in Figure 2, consisting of two parts run under a single protocol. Patients screened for the augmented Phase 3 trial are recruited from the target population expected to be treated post-authorisation. These are either RCT-eligible or from a broader patient population, excluding those patients with, for example, a safety risk. RCT-eligible patients are randomized to Part A (the core RCT) or to Part B, where they are treated under cRW conditions. In both parts they are then randomized to control or drug. Patients from the broader population are assigned to Part B only and are randomized to be treated under cRW conditions or under RCT conditions. They are then randomized to control or drug. The time to initiate part B could be made dependent on accumulating evidence from part A, for example using Bayesian decision rules.
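To make the allocation scheme concrete, the following is a minimal simulation sketch of the two-stage randomization just described. It is purely illustrative: the allocation probabilities, the eligibility rate and all function names are our own assumptions, not part of the FACTIVE proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

def allocate_patient(rct_eligible, p_part_b=0.5, p_crw=0.5, p_drug=0.5):
    """Allocate one screened patient (illustrative 1:1 ratios throughout)."""
    if rct_eligible:
        # RCT-eligible patients are randomized between Part A (RCT conditions)
        # and Part B (cRW conditions).
        if rng.random() < p_part_b:
            part, conditions = 'B', 'cRW'
        else:
            part, conditions = 'A', 'RCT'
    else:
        # Broader-population patients enter Part B only and are randomized
        # between cRW and RCT design elements.
        part = 'B'
        conditions = 'cRW' if rng.random() < p_crw else 'RCT'
    # Final randomization to experimental treatment or control in every cell.
    arm = 'drug' if rng.random() < p_drug else 'control'
    return part, conditions, arm

# Tabulate the design cells for a hypothetical screening cohort (70% RCT-eligible).
cohort = [allocate_patient(rng.random() < 0.7) for _ in range(10000)]
cells = {}
for cell in cohort:
    cells[cell] = cells.get(cell, 0) + 1
print(cells)
```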
Notation
• Y denotes the treatment response, i.e., the outcome of interest.
• T ∈ {0, 1} is a binary treatment indicator, T = 1 for the experimental treatment (i.e. drug), T = 0 for standard-of-care or placebo (control).
• X ∈ X are patient-specific covariates
• C defines a subset of X that relates to the RCT eligibility criteria, C ′ is the complement set defining RCT ineligibility (i.e., the broader population), with C ∩ C ′ = ∅, C ∪ C ′ = X , such that 1(x i ∈ C) indicates that patient x i is eligible for the RCT.
• P ∈ {0, 1} indicates the conditions under which the treatment is administered, P = 1 for the strictly controlled RCT conditions, and P = 0 for the more flexible cRW conditions.
• ∆(A) indicates a conditional average treatment effect, given conditions A,
∆(A) ≡ E[Y | T = 1, A] − E[Y | T = 0, A].
We suppose that interest is in the average treatment effect (ATE), E[Y | T ], and conditional average treatment effects (CATEs), E[Y | T, A] for conditions A. Treatment effects in the treated (ATT) might also be of interest, e.g., if a control group under cRW conditions is not available.
Estimands
The estimand framework (ICH, 2019) defines five attributes of an estimand determined by the scientific question which the trial aims to answer. These attributes are treatment, population, variable of interest, handling of intercurrent events, and the summary measure. The treatment attribute might not only be determined by the dose and regimen of the new drug, but also by protocol conditions with strict schedules and monitoring in the RCT vs more flexible treatment conditions in the concurrent cRW part.
To lay out the concept we assume that the five attributes are already defined corresponding to the specific question and that the strategy for handling intercurrent events is determined.
Intercurrent events (ICE) observed in the RCT (Part A) could differ from those in the cRW (Part B). For example, 'an assessment is not performed as it is not needed for the physician's treatment decision' could be an ICE in Part B but would not occur in Part A.
The use of an augmented design (Figure 2) supports unbiased estimation of the following estimands:
1. Treatment effect under the RCT treatment conditions for RCT-eligible patients (i.e., in Part A)
θ 1 = ∆(X ∈ C, P = 1) = E[Y | T = 1, X ∈ C, P = 1] − E[Y | T = 0, X ∈ C, P = 1].
This is the usual estimand (and the only available estimand) from a conventional RCT. This confounds X ∈ C and P = 1, meaning that possible treatment interactions with population characteristics, X, and the treatment conditions, P , cannot be differentiated further. Using an augmented design we are also able to estimate
θ̃ 1 = ∆(P = 1) = E[Y | T = 1, P = 1] − E[Y | T = 0, P = 1],
the CATE conditioned on the RCT treatment conditions (independently of X). As the entire study population is comprised of RCT eligible (X ∈ C) and RCT ineligible patients (X ∈ C ′ ), combined treatment effects are linear combinations of the CATE conditioned on X ∈ C and the CATE conditioned on X ∈ C ′ . The respective weights sum to 1 and need to be set according to the scientific question. Study populations could be weighted equally, for example, or according to their size, or to a target population.
The design (Figure 2) is flexible and can be tailored to answer the scientific questions of interest. This could mean that certain design elements would not be implemented. Writing the estimands in terms of CATEs conditioned on X ∈ C or X ∈ C ′ clarifies what can still be estimated with the available design elements. For example, the purpose of treating RCT-ineligible patients under RCT treatment conditions might not seem obvious; the estimand based on this design element is given by θ 7 below. Laying out the estimands as linear combinations of the CATEs conditioned on the study population (RCT-eligible or not) clarifies where, e.g., this design element contributes.

Here we have θ̃ 1 = w 11 θ 1 + w 21 θ 7,

where the w ij are weights, with i = 1, 2 indexing the study population (RCT-eligible, RCT-ineligible) and j indexing the estimand. Estimand θ 7 is defined below.
2. Treatment effect in the treated between treatment conditions
θ 2 = E[Y | T = 1, P = 1] − E[Y | T = 1, P = 0].
Note that θ 2 isolates the effect of the treatment conditions on the treated and is independent of X. We can write estimand θ 2 as a linear combination of the corresponding effects in the RCT-eligible and RCT-ineligible patient populations,

θ 2 = w 12 (E[Y | T = 1, X ∈ C, P = 1] − E[Y | T = 1, X ∈ C, P = 0]) + w 22 (E[Y | T = 1, X ∈ C ′, P = 1] − E[Y | T = 1, X ∈ C ′, P = 0]).

As laid out above, the estimand θ 3 (defined below) is the difference between the treatment effect under RCT treatment conditions and the treatment effect under cRW treatment conditions; it is likewise independent of X. The treatment effect under RCT conditions is a linear combination of θ 1 (X ∈ C) and θ 7 (X ∈ C ′). The treatment effect under cRW treatment conditions, θ 8 (see below), can similarly be displayed as a linear combination of the study-population CATEs.
In terms of CATEs conditioned on study population,
θ 8 = w 18 (E[Y | T = 1, X ∈ C, P = 0] − E[Y | T = 0, X ∈ C, P = 0]) + w 28 (E[Y | T = 1, X ∈ C ′ , P = 0] − E[Y | T = 0, X ∈ C ′ , P = 0]) .
We can now answer the question posed at the beginning of this subsection. The treatment effect under RCT treatment conditions for RCT-ineligible patients, θ 7 , or the first summand for the treated, contributes to estimands θ̃ 1, θ 2 (ATT), θ 3 , θ 4 (ATT), and θ 6 (ATT). In addition, estimand θ 7 provides a direct answer to potential questions of practising physicians about the therapeutic benefit for RCT-ineligible patients under RCT treatment conditions and could support recommendations in clinical practice. If RCT-ineligible patients are not treated under RCT conditions by design, comparisons of treatment conditions will be restricted to RCT-eligible patients.
Figure 1. Current (upper panel) and proposed (lower panel) timing of cRW data element collection in Phase 3 treatment development. ❶ First availability of new treatment to patients; ❷ Increasing uptake of new treatment use; ❸ Maximum uptake of new treatment use.
The remaining estimands referred to above are defined as follows.

3. Difference of treatment effects between treatment conditions

θ 3 = ∆(P = 1) − ∆(P = 0) = (E[Y | T = 1, P = 1] − E[Y | T = 0, P = 1]) − (E[Y | T = 1, P = 0] − E[Y | T = 0, P = 0]) = E[Y | T = 1, P = 1] + E[Y | T = 0, P = 0] − E[Y | T = 0, P = 1] − E[Y | T = 1, P = 0].

4. Heterogeneous treatment effect in the treated under RCT treatment conditions

θ 4 = E[Y | T = 1, X ∈ C, P = 1] − E[Y | T = 1, X ∈ C ′, P = 1].

5. Effect of treatment conditions on the treated for RCT-eligible patients

θ 5 = E[Y | T = 1, X ∈ C, P = 1] − E[Y | T = 1, X ∈ C, P = 0].

6. Effect of treatment conditions on the treated for RCT-ineligible patients

θ 6 = E[Y | T = 1, X ∈ C ′, P = 1] − E[Y | T = 1, X ∈ C ′, P = 0].

7. Treatment effect under RCT treatment conditions for RCT-ineligible patients

θ 7 = ∆(X ∈ C ′, P = 1) = E[Y | T = 1, X ∈ C ′, P = 1] − E[Y | T = 0, X ∈ C ′, P = 1].

8. Treatment effect under the cRW treatment conditions

θ 8 = ∆(P = 0) = E[Y | T = 1, P = 0] − E[Y | T = 0, P = 0].
• Califf RM et al. (2016) Transforming evidence generation to support health and health care decisions. New England Journal of Medicine, 375(24), 2395-2400.
• CHMP (2007) Reflection Paper on 'Methodological issues in confirmatory clinical trials planned with an adaptive design'. Available at www.ema.europa.eu
• CHMP (2019) Guideline on the investigation of subgroups in confirmatory clinical trials. Available at www.ema.europa.eu
• Dias S et al. (2018) Network meta analysis for decision making. Wiley.
• Eichler HG et al. (2010) Relative efficacy of drugs: an emerging issue between regulatory agencies and third-party payers. Nature Reviews Drug Discovery, 9(4), 277-91.
• Eichler HG et al. (2016) "Threshold-crossing": a useful way to establish the counterfactual in clinical trials? Clinical Pharmacology & Therapeutics, 100(6), 699-712.
• FDA (2022) Adaptive designs for clinical trials of drugs and biologics. Available at https://www.fda.gov/
• Hampson LV et al. (2022) A New Comprehensive Approach to Assess the Probability of Success of Development Programs Before Pivotal Trials. Clinical Pharmacology & Therapeutics, 111(5), 1050-1060.
• ICH (2019) International Council for Harmonisation Topic E9(R1): Addendum on estimands and sensitivity analysis in clinical trials to the guideline on statistical principles for clinical trials. Available at https://ich.org/
• Ray R, Locke T, Hendricks-Sturrup R (2022) Aligning shared evidentiary needs among payers and regulators for a real-world data ecosystem. Available at https://healthpolicy.duke.edu/
• Remiro-Azócar A (2022) Target estimands for population-adjusted indirect comparisons (with Discussion). Statistics in Medicine (in press).
• Sheiner LB (1997) Learning versus confirming in clinical drug development. Clinical Pharmacology & Therapeutics, 61(3), 275-291.
• Schmidli H (2020) Beyond randomized clinical trials: use of external controls. Clinical Pharmacology & Therapeutics, 107(4), 806-816.
• Zuidgeest MGP et al. (2017) Pragmatic trials and real world evidence. Journal of Clinical Epidemiology, 88, 7-13.

2.4 Estimation

To begin we shall assume that the full augmented design is applied, that the estimand framework is in place, and that the relationship between the outcome Y and P, T, RCT eligibility and possibly patient-specific covariates X ∈ X can be modelled by linear regression. In order to estimate (θ 1 , θ̃ 1 , . . . , θ 8 ) we introduce a set of binary indicators, z i = (z i1 , z i2 , z i3 ), for each of i = 1 : n patients in the augmented trial.
• z i1 = P i , where P i is the treatment conditions assigned to the i'th patient.
• z i2 = 1(x i ∈ C) indicates patient i was eligible for the RCT.
• z i3 = T i indicates the treatment received by the i'th patient.
Assuming n 1 patients are enrolled to be studied under the RCT conditions (P = 1) and n 0 under the cRW conditions (P = 0), then we can form an (n×3) indicator matrix, Z, with row vectors z i relating to the i'th patient. The Z matrix can be extended to include covariates X ∈ X. In this case, all of the estimates of (θ 1 , θ̃ 1 , . . . , θ 8 ) can be obtained from applying standard ANOVA or ANCOVA techniques, according to Section 2.3, with associated confidence intervals, tests of hypotheses, and p-values.
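The following sketch illustrates this estimation step on simulated data. The outcome model, effect sizes, sample size and the uniform randomization are illustrative assumptions only; the marginal contrasts below implicitly weight the two study populations by their simulated cell sizes, whereas in practice the weights w ij would be fixed by the scientific question as described in Section 2.3.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8000

# Simulated indicators: P (1 = RCT conditions), C (1 = RCT-eligible), T (1 = drug).
P = rng.integers(0, 2, n)
C = rng.integers(0, 2, n)
T = rng.integers(0, 2, n)

# Hypothetical outcome: treatment effect modified by eligibility and by conditions.
Y = 1.0 + 0.5 * T + 0.3 * T * C + 0.2 * T * P + rng.normal(0.0, 1.0, n)

def cell_mean(t, c=None, p=None):
    """Empirical mean of Y in the cell T = t, optionally restricted to C = c, P = p."""
    mask = (T == t)
    if c is not None:
        mask &= (C == c)
    if p is not None:
        mask &= (P == p)
    return Y[mask].mean()

theta_1 = cell_mean(1, c=1, p=1) - cell_mean(0, c=1, p=1)    # RCT effect, RCT-eligible
theta_7 = cell_mean(1, c=0, p=1) - cell_mean(0, c=0, p=1)    # RCT conditions, ineligible
theta_8 = cell_mean(1, p=0) - cell_mean(0, p=0)              # effect under cRW conditions
theta_3 = (cell_mean(1, p=1) - cell_mean(0, p=1)) - theta_8  # difference between conditions
print(theta_1, theta_7, theta_8, theta_3)
```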
[Corpus record: arXiv:2006.04482, corpusid 258947019, pdf: https://export.arxiv.org/pdf/2006.04482v3.pdf, pdfsha: 53073dab9a705eeb4b9629dc12bf1fbc94dfe544]
Dimensional Reduction of Dynamical Systems by Machine Learning: Automatic Generation of the Optimum Extensive Variables and Their Time-Evolution Map

Tomoaki Nogawa ([email protected])
Department of Medicine, Faculty of Medicine, Toho University, 5-21-16, Omori-Nishi, Ota-ku, 143-8540, Tokyo, Japan

May 2023

Abstract. We propose a framework to generate a phenomenological model that extracts the essence of a dynamical system with large degrees of freedom with the help of machine learning. For a given microscopic dynamical system, we simultaneously seek for the optimum projection to a small number of macroscopic variables, which is supposed to be extensive, and the rule of time evolution that the variables obey. The utility of this method is demonstrated by the application to the three-state Potts model.
Introduction
In the studies of the dynamics of systems with large degrees of freedom, we often use phenomenological models with small degrees of freedom, e.g., the Lorenz equation for atmospheric variability [1]. Once such a model is supposed, we can analyze it by using various techniques developed in the field of so-called nonlinear dynamics. Such a reduced model is usually introduced with drastic approximation owing to the intuition and insight of researchers. Although it is an orthodox task for statistical physicists to derive such a reduced model ab initio from a well-established microscopic model, it is impossible in most cases. The aim of this paper is to propose a generic framework to generate a macroscopic dynamical system (DS) for a given microscopic DS with help from machine learning. One of the key features of modern machine learning, represented by deep neural networks [2], is the automatic extraction of feature amounts of the input data. In a similar manner, we try to find the suitable variables to describe the dynamics of a many-body system. The DS that the obtained variables obey is also an outcome of the learning.
The most standard method for the dimensional reduction of data is principal component analysis. For DSs, proper orthogonal decomposition [3] has been developed in the field of fluid dynamics. Recently, dynamic mode decomposition [4,5] is attracting attention, which is founded on the theory of the Koopman operator [6]. A similar idea is utilized for stochastic processes as the Markov state model [7][8][9]. In mode decomposition analysis, the reduced variables are the coefficients of the eigenvectors. It is usual that the only way to speculate the meaning of the variables is the visualization of the eigenvector. This is not easy in general because the dimension of the eigenvector is as large as that of the original data. Contrastingly, the method presented here employs extensive variables, which are characterized by a relatively small number of parameters.
The rest of this paper is organized as follows. The basic framework of the machine learning to reduce the dimension of DSs is introduced in Sec. 2. We explain how to prepare the data for the demonstration in Sec. 3 and show the result of the learning in Sec. 4. We put concluding remarks and mention some future works in Sec. 5.
Preliminaries
what to learn
Let us start with a microscopic DS
x ∈ R N → x ′ = f (x) ∈ R N , N ≫ 1.(1)
The goal is to obtain a macroscopic DS in the form
X(x ′ ) = F (X(x)) ∈ R n , n ≪ N,(2)
where X is projection to a macroscopic variable and F is time evolution map. The task for our machine learning is to find X and F that satisfy Eq. (2) from some data points in form (x, x ′ ) that satisfy Eq. (1). We do not necessarily need to know f and can use the data sampled from observational time-series. If the time interval ∆t between x and x ′ is not uniform among data, we should replace Eq.(2) with X(x ′ ) = X(x) + F (X(x)) ∆t. If X is given, the task is regression of F, that is, supervised learning. There are a lot of studies on this kind of regression of DS [10,11]. If X is not given, it is not a popular problem. It is, however, similar to the finite-size scaling for critical phenomena [12,13], where we simultaneously seek for how to scale the variables and what equation the scaled variables satisfy.
In the actual machine learning, we fix n and seek for X and F that minimize the loss functional (LF)
L n [X, F] := 1 n |F (X(x)) − X(x ′ )| 2 .(3)
Hereafter, the overline denotes the average over data points. If L n [X * , F * ] equals zero, (X * , F * ) gives an exactly closed DS. Otherwise, it gives an approximated formula, whose precision is evaluated by the LF. We naively expect that min X,F L n decreases with n. The n-dependence would inform us about a kind of complexity of the system. A possible scenario is, for example, that min L n = 0 ⇐⇒ n ≥ n 1 , or min L n ∝ e −n/n 2 , and so on.
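Read literally, Eq. (3) is just a mean squared one-step prediction error. The sketch below evaluates it for a candidate pair (X, F) given paired snapshots; the callable interface, the toy data and the variable names are our own illustrative choices.

```python
import numpy as np

def loss(X, F, pairs):
    """Empirical loss of Eq. (3): (1/n) * average of |F(X(x)) - X(x')|^2."""
    errs = []
    for x, x_next in pairs:
        r = np.asarray(F(X(x))) - np.asarray(X(x_next))   # residual of Eq. (2)
        errs.append(np.dot(r, r) / r.size)                # |.|^2 divided by n
    return float(np.mean(errs))                           # average over data points

# Toy usage: one macroscopic variable (the mean of the state) and a guessed linear map.
rng = np.random.default_rng(0)
X = lambda x: np.array([x.mean()])
F = lambda X: 0.9 * X
pairs = [(rng.random(100), rng.random(100)) for _ in range(10)]
print(loss(X, F, pairs))
```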
orthonormal condition
There exists indefiniteness in X; If (X * , F * ) satisfies Eq. (2), (G•X * , G•F * •G −1 ) does too, where G : R n → R n is arbitrary function that has an inverse function. To avoid indefiniteness, we need to impose some restrictions on X. First, we impose orthonormal condition on X(x ′ ) in data space as
X m (x ′ )X l (x ′ ) = δ ml ∀m, l ∈ {1, · · · , n}.(4)
Here, we suppose X(x ′ ) = 0, i.e., the data average of X(x ′ ) vanishes. We impose the condition not on X(x) but on X(x ′ ) because the latter directly corresponds to the LF. Normalization excludes the trivial solution (X, F) = (0, 0). Orthogonalization avoids the solutions where X m ≈ X l ∀m, l. Such a solution is favored because the number of the objective variables, namely {X m (x ′ )} n m=1 , is effectively reduced to one. On the other hand, the number of the explanatory variables, {X m (x)} n m=1 , is not reduced because the difference X m − X l is allowed to be blown up in F(X). We remark that the LF is not necessarily a decreasing function of n under the orthogonal condition.
extensivity
As the second restriction on the macroscopic variables, we suppose that X is given by the summation of a local b-body function ξ : R b → R n as
X(x) = 1 N N i=1 ξ(x ν i1 , · · · , x ν ib ).(5)
Here {ν ij } b j=1 indicates the block that includes i and its neighbors in some sense. This makes X, precisely N X, an extensive variable. The definition of the block would be customized for the DS to analyze.
Hereafter, we consider the case that the microscopic variables are discrete and bounded as x i ∈ {0, 1, · · · , q − 1}. The configuration of each block is represented by an b-digit q-nary integer
k i := b j=1 x ν ij q j−1 ∈ {0, 1, · · · , q b − 1}.(6)
Thus, arbitrary local function can be expressed as ξ(x ν i1 , · · · , x ν ib ) = q b −1 k=0 w k δ k i k . Substitution of this into Eq. (5) leads to
X(x) = Σ_{k=0}^{q^b−1} w_k x̃_k(x),   x̃_k(x) := (1/N) Σ_{i=1}^{N} δ_{k_i,k}.   (7)

Here x̃_k means the fraction of the blocks that take the configuration k. Hereafter we note

X(x) = w x̃(x),   w ∈ R^{n×q^b},   x̃(x) := ᵗ( x̃_0(x), · · · , x̃_{q^b−1}(x) ) ∈ R^{q^b},   (8)

where t on the left shoulder denotes the transposition. As shown in Eq. (7), X is expressed by a linear combination of {x̃_k(x)} (k = 0, . . . , q^b − 1) and we can use (x̃(x), x̃(x ′ )) as a data point instead of (x, x ′ ). This saves the computation in the machine learning when q^b ≪ N, although q^b rapidly increases with b.
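For concreteness, a sketch of how x̃(x) in Eqs. (7)-(8) can be computed for q-state variables, using the chain blocks along the [1,0,0] direction introduced in Sec. 3.3; the digit ordering of the block label and the function name are our own conventions.

```python
import numpy as np

def block_fractions(x, b, q=3):
    """Block-configuration fractions x̃_k(x), k = 0, ..., q**b - 1 (Eq. (7)).

    x : integer array of shape (L, L, L) with entries in {0, ..., q-1}
    b : block length; the block at site i is the chain i, i+e1, ..., i+(b-1)e1
        along [1,0,0] with periodic boundary conditions
    """
    k = np.zeros(x.shape, dtype=np.int64)        # q-nary label k_i of each block
    for j in range(b):
        k += np.roll(x, -j, axis=0) * q**j       # spin at offset j gives digit j
    counts = np.bincount(k.ravel(), minlength=q**b)
    return counts / x.size                       # fractions, summing to 1

# Example: a random 16^3 configuration of the three-state model.
rng = np.random.default_rng(0)
print(block_fractions(rng.integers(0, 3, (16, 16, 16)), b=2).shape)   # (9,)
```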
macroscopic dynamical system
On the function form of F, we simply suppose an n-variable polynomial up to the p-th order as
F(X) = W F̃(X),   W ∈ R^{n×N_p},   N_p := (p + n)! / (p! n!),   F̃_l(X) = Π_{m=1}^{n} X_m^{a_{lm}},   a_{lm} ∈ {0, 1, · · · , p},   Σ_{m=1}^{n} a_{lm} ≤ p,   (9)

where F̃(X) denotes the vector of monomials in the components of X.
Eventually, the LF is expressed as a function of the weight (w, W) ∈ R n×(q b +Np) . In the learning process, we try to reduce the LF by tuning w and W alternately. When w is fixed, the task is linear regression, and W is optimized by solving
F (X(x)) tF (X(x))W =F (X(x)) t X(x ′ ).(10)
When W is fixed, we use the stochastic gradient descent method on w. The detail of the learning is explained in Appendix A.
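A sketch of the linear-regression step for W at fixed w: enumerate the monomials of Eq. (9) and solve the least-squares problem of Eq. (10). The itertools-based feature enumeration and the use of numpy.linalg.lstsq (rather than forming the normal equations explicitly) are our own implementation choices.

```python
import itertools
import numpy as np

def poly_features(X, p):
    """All monomials of total degree <= p in the columns of X (shape (S, n))."""
    S, n = X.shape
    exps = [a for a in itertools.product(range(p + 1), repeat=n) if sum(a) <= p]
    return np.stack([np.prod(X ** np.array(a), axis=1) for a in exps], axis=1)

def update_W(X_in, X_out, p):
    """Optimal W for fixed w: least-squares fit of X_out ~ W @ features(X_in)."""
    Phi = poly_features(X_in, p)                        # shape (S, N_p)
    W_T, *_ = np.linalg.lstsq(Phi, X_out, rcond=None)   # solves Phi @ W.T ~ X_out
    return W_T.T                                        # shape (n, N_p)

def reduced_loss(X_in, X_out, W, p):
    """Loss of Eq. (3) evaluated with the polynomial map of Eq. (9)."""
    r = poly_features(X_in, p) @ W.T - X_out
    return np.mean(np.sum(r**2, axis=1)) / X_out.shape[1]
```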
Data
Metropolis-Hastings dynamics of three-state Potts model
In this paper, we show the application of the method proposed above to the three-state Potts model. The internal energy of the system is given by
E(x) := − (i,j)∈n. n. δ x i x j , where x i ∈ {0, 1, 2}
is the spin variable at the lattice point i in an (L × L × L) cubic lattice and "n. n." means the set of all nearest-neighbor pairs of the lattice points under periodic boundary conditions. We employ the Metropolis-Hastings update [14,15] as the microscopic dynamics. Although it is stochastic, we expect that the DS for extensive variables becomes deterministic as N → ∞. Starting with a certain initial state, which will be explained later, we iterate the following elementary trial. First, we make a candidate new state x 2 from x 1 by choosing a lattice point i at random and replacing x i with one of the other two spin states with equal probability. We accept the change to x 2 with probability min(1, exp{K[E(x 1 ) − E(x 2 )]}). Here K is the ratio of the coupling constant to the temperature. We regard that time t increases by 1/N for each elementary trial. From a single time sequence, we sample one data point by recording x at time t 1 and x ′ at t 1 + ∆t, where t 1 ∼ U (t min , t max ). Hereafter we denote the uniform distribution between a and b by U (a, b). In this paper, we set t min = 1.0, t max = 16.0 and ∆t = 1.0.
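For reference, a minimal (and deliberately unoptimized) sketch of one elementary Metropolis-Hastings trial for the three-state Potts model on the periodic cubic lattice; the seed and helper names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis_trial(x, K, q=3):
    """One elementary trial; time advances by 1/N. x is modified in place."""
    L = x.shape[0]
    i = tuple(rng.integers(0, L, 3))                 # random lattice point
    old = x[i]
    new = (old + rng.integers(1, q)) % q             # one of the other q-1 states
    dE = 0
    for axis in range(3):                            # six nearest neighbours
        for step in (-1, 1):
            j = list(i)
            j[axis] = (j[axis] + step) % L
            s = x[tuple(j)]
            dE += int(old == s) - int(new == s)      # from E = -sum_{n.n.} delta
    if dE <= 0 or rng.random() < np.exp(-K * dE):    # min(1, exp(K[E1 - E2]))
        x[i] = new

# Example: 4 * N trials (i.e., t advances by 4) on a 16^3 lattice at K = 0.6.
x = rng.integers(0, 3, (16, 16, 16))
for _ in range(4 * x.size):
    metropolis_trial(x, K=0.6)
```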
initial condition
For each time sequence, we set the initial state by assigning the x i 's randomly and independently. We make the probability distribution of the spin states differ among samples. Let the probability to assign x i = σ be p σ . We consider three types of distribution. In the initial condition (IC) I, p σ = r σ /(r 0 + r 1 + r 2 ), where r 0 , r 1 , r 2 ∼ U (0, 1) independently. In the IC II, r 1 = r 2 , which leads to p 1 = p 2 . In the IC III, r 0 = r 1 = r 2 , which leads to p 0 = p 1 = p 2 .
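The three initial-condition families translate directly into code; a short sketch (the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def initial_state(L, ic, q=3):
    """Random product state for IC type 'I', 'II' or 'III' (Sec. 3.2)."""
    r = rng.uniform(0.0, 1.0, q)
    if ic == 'II':
        r[2] = r[1]            # r_1 = r_2, hence p_1 = p_2
    elif ic == 'III':
        r[:] = 1.0             # r_0 = r_1 = r_2, hence p_0 = p_1 = p_2
    p = r / r.sum()
    return rng.choice(q, size=(L, L, L), p=p)
```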
blocks
As the block of neighbors, namely {ν ij }, we employ the sequential straight chain with length b − 1 that starts at i and elongates in the [1,0,0] direction of the cubic lattice. In this case, a considerable part of the bases {x̃_k} is not independent because there are constraints coming from the relation between the bases with b and those with b − 1, e.g., x̃_0 = x̃_00 + x̃_01 + x̃_02 = x̃_00 + x̃_10 + x̃_20. Hereafter, the suffix of x̃ is expressed by a b-digit ternary integer. Such constraints sometimes yield equivalences, e.g., x̃_10 = x̃_01. Practically, we combine such bases as x̃_10 + x̃_01. Let the number of linearly independent bases be N b , which is q^b − q^{b−1} at most (this holds even for b = 1). The specific symmetry of the microscopic DS may reduce the number of independent bases further. We need N b ≥ n to satisfy the orthonormal condition Eq. (4).
For b = 1, we have x̃ = (x̃_0, x̃_1, x̃_2), which can express the spontaneous magnetization M_σ = (3x̃_σ − 1)/2. In the following, we do not care about the constant bias and coefficient in the expression of the macroscopic variables. Since x̃_0 + x̃_1 + x̃_2 = 1, the magnetization forms a two-dimensional order parameter. For b = 2, we have x̃ = (x̃_00, x̃_01, . . . , x̃_21, x̃_22), where x̃_σ1σ2 = x̃_σ2σ1 holds for σ_1 ≠ σ_2. This can express the internal energy E = x̃_01 + x̃_12 + x̃_20 as well as M_σ = x̃_σ0 + x̃_σ1 + x̃_σ2. Note that a variable expressed for b can always be expressed for b + 1. Therefore, min_{X,F} L_n does not increase with b.
Results
Now, we show the results of the machine learning. First, we fix K = 1.20K c , where K c ≈ 0.5506 [16] is the threshold above which the system has nonzero spontaneous magnetization in equilibrium (t → ∞, N → ∞). In the latter part, we consider the data with various K.
In the following, the optimum value of the LF is denoted by L * n . Precisely, it is the average of four best results of the 32 nearly independent learnings. Note that L * n depends not only on n but on b, p, N , IC, and so on. If not declared, we set p = 5 and N = 256 3 .
initial condition I
Under the IC I, the symmetry among the three spin states is fully broken in general, and the majority state at t = 0 remains the majority for t > 0. Figure 1(a) shows the b-dependence of L * n for each n. For n ≥ 3 L * n increases with b due to the incompleteness of learning. Anyway, the b-dependence of L * n is not large. For fixed b, L * n decreases with n. Great improvement is observed between n = 2 and n = 3. Figure 2(a) plots the one-step displacement of X 1 as a function of X 1 for (n, b) = (1, 2). The symbols show X 1 (x ′ )−X 1 (x) of the data points, which looks far from a singlevalued function of X 1 . The solid curve indicates F 1 (X 1 ) − X 1 , in which we can find an unstable fixed point (FP) at X 1 ≈ 0.1 and stable FPs at X 1 ≈ −1.4 and X 1 ≈ 2.7. These are speculated to correspond to the paramagnetic state and the ferromagnetic states, respectively. The values of X 1 are related to M σ = 0, −M eq /2, M eq , respectively, where M eq is the equilibrium value of the magnetization of the majority state.
In Fig. 2(b), the displacements X(x) → X(x ′ ) for (n, b) = (2, 2) are indicated by arrows. Three-fold symmetry is obvious and we can find a paramagnetic FP is at the origin and three ferromagnetic FPs surround it. Note that the LF is invariant against the rotation of X in n-dimensional space. Since similar flow structure is obtained for b = 1, X is presumably a quantity like the magnetization. It is interesting that the two-dimensional magnetization is favored for n = 2 rather than the one-dimensional magnetization and the internal energy.
In Fig. 2(c), the displacements for (n, b) = (3, 2) are indicated by arrows. It seems to be made by adding a vertical axis, corresponding to the internal energy, to the plain in Fig. 2(b). As mentioned previously, this considerably improves the precision of the macroscopic DS.
initial condition II
Under the IC II, the fraction of the states: 1 and 2, are almost equal at t = 0, namely,
x 1 =x 2 = (1 −x 0 )/2.
If the state 0 is the majority at t = 0, it remains so for t > 0. Else the state 1 or 2 becomes the majority after stochastic symmetry-breaking but it takes very long time. Figure 1(b) shows the b-dependence of L * n for n = 1, · · · , 4. For b ≥ 2, L * n is the lowest at n = 2 and the values are quite small. Contrastingly, L * n is quite large for (n, b) = (2, 1).
While we obtain similar results to those for the IC I for n = 1, this is not the case for n = 2. Figure 3(a) shows the displacements for (n, b) = (2, 1), where X 1 and X 2 seem to correspond to x̃_0 and x̃_1 − x̃_2, respectively. We can find a stable FP at (1.8, 0.0), an unstable FP at (−0.3, 0.0) and a saddle point at (−1.0, 0.0). It is speculated that there are two additional stable FPs at (−1.0, ±∞). Since the states 1 and 2 are almost
initial condition III
Under the IC III, the initial state is macroscopically unique, namely,x 0 =x 1 =x 2 = 1/3 at t = 0. Consequently, arbitrary extensive variable, such as internal energy, exhibits unique time evolution for N → ∞ and, therefore, it trivially has a closed DS as far as it is monotonic with respect to t. For b = 1, all components of magnetization are subextensive and we observe noisy diffusion of the magnetization both for n = 1 and 2. Figure 4(a) plots the displacement of X 1 for (n, b) = (1, 2), which is obviously a single-valued function of X 1 . Here, X 1 is presumably a quantity similar to E. As seen in Fig. 1(c), L * n with n = 1 decreases a lot as b increases from 1 to 2. For n = 2, similar drop occurs as b increases from 2 to 3. This is because it is impossible to make two linear-independent extensive variables for b ≤ 2. Figure 4(b) for (n, b) = (2, 3), however, obviously indicates that X 1 and X 2 has nonlinear dependence. These imply that n = 1 is sufficient for the IC III.
data with distribution of the coupling strength
Next, we show the results for the data with various K. Each sample has different K as K ∼ U (0.80K c , 1.20K c ). Then, we need to replace F(X) with F(X, K). The implementation in learning is easy; we regard K as the (n+1)-th base in the polynomial in Eq. (9). Figure 2(d) shows several trajectories of X(x) driven by the microscopic DS f with various K. While those for K < K c approaches the paramagnetic FP at the center, those for K > K c approaches one of the three ferromagnetic FPs, which depends on K. Figure 5(a) compares L * n for K ∈ [0.80K c , 1.20K c ) and L * n for unique K(= 1.20K c ). There are small difference between them, which means that the macroscopic variable need not be optimized specifically for each K. For small n and b, however, L * n for various K is smaller than that for K = 1.20K c . This is because the flow structure in the space of X is simpler for K < K c than for K > K c and, therefore, the loss function is smaller for smaller K. For large n and b, oppositely, we find the tendency that the former becomes larger than the latter. This is presumably because the incompleteness of the learning is enhanced by the increase of the elements of W.
dropout
As b increases, L * n generally decreases but it becomes more difficult to speculate the meaning of X(= wx) because the number of the elements of w, namely nN b , increases. It is desirable that w is sparse; most of the elements are zero. In the minimal setting that satisfies the orthonormal condition, the total number of the nonzero elements is n(n + 1)/2 (see Appendix B). Here, we enforce dropout of the bases in the following manner. Let the redundant number of the nonzero elements be δ. When we initialize w in the learning, we randomly choose δ from {0, 1, · · · 8} and randomly choose n(n + 1)/2 + δ elements of w to keep and fix the others zero. We remark that dropout eliminates the rotational indefiniteness of X. Figure 5(b) compares L * n with dropout and L * n without dropout. For b ≥ 3, dropout makes L * n larger. This is natural because dropout reduce the ability of expression of X. On the other hand, dropout makes L * n a little smaller for b = 2. This is because that the learning becomes more efficient by the reduction of the number of the tuning parameters. Figure 6 plots the local minimum values of L n obtained in the learning as a function of δ. As seen in the panel (a) and (b) for b = 2, the global minimum is obtained with relatively small δ. It is remarkable that the best result is obtained with δ = 0 for (n, b) = (3, 2). For (n, b) = (2, 3), the best solution is obtained with δ = 2. The lower bounds for δ < 2 is considerably larger than that for δ = 2. For (n, b) = (3, 3), contrastingly, the lower bound of L n tends to decrease with δ as far as δ ≤ 8.
Next, let us watch the expression of X. For (n, b) = (3, 2), the best two solutions are (x 11 ,x 22 −0.44x 11 ,x 00 −0.80x 11 −0.80x 22 ) and (x 11 ,x 00 −0.44x 11 ,x 22 −0.80x 00 −0.79x 11 ). (Here, we normalized each component of X so that the largest coefficient becomes one.) These are nearly equivalent after permutation of states 0 and 2 (Note that there is no degree of freedom in w for δ = 0). It is remarkable that the bases with different states, such asx 10 , is not employed at all. For (n, b) = (2, 3), the two best solutions are (x 00 −0.68x 11 ,x 22 −0.46x 00 −0.65x 11 ) and (x 11 −0.74x 00 ,x 22 −0.41x 00 −0.27x 11 ). Again, the bases with different states do not appear.
large size limit
The LF contains the systematic error coming from finiteness of the system size. Here we consider the large N limit. Figure 5(c) shows the N -dependence of L * n for b = 2. As N increases, L * n looks converging to a finite value for n = 1 and 2. On the other hand, L * n keeps decreasing to zero as a power function of N for n = 3, which suggests that rigorously closed DS exists for N → ∞. Although the slope becomes a little gentler between 2 21 and 2 24 , this is presumably due to the disability of expression of F. Actually, L * n for N = 2 24 is decreased by the increment of the order of the polynomial from 5 to 6. Under IC II and III, L * n similarly decreases to zero for n = 2 and n = 1, respectively (not shown).
Summary and Discussions
In this paper, we propose a general framework of the dimensional reduction of the DSs by machine learning. We particularly consider the implementation to the system with discrete and bounded variables and demonstrate the application to the three-state Potts model. The obtained macroscopic DSs exhibit plausible behaviors. It is robust against the distribution of the value of the coupling strength. We preliminarily confirmed that it is also robust against the imposition of distributed magnetic field. We also find the successful cases where the dropout of the matrix elements that characterize the reduced variable does not raise the loss function. Furthermore, we obtain a consequence that lim N →∞ L * n equals zero above a certain threshold n = n c , which depends on IC; n c decreases as the symmetry of the initial state increases. For n > n c , X(x) are embedded on a n c -dimensional manifold.
Almost closed DS is obtained with b = 2. This is reasonable because the model analyzed has only nearest-neighbor interaction. Although L * n decreases with b above 2, this is not considered to be essential. The increment of b raises the degree of freedom to modify the distribution of X(x ′ ) in n-dimensional space. This reduces L n to some extent but the flow structure of X, which is represented by the configuration of fixed points, does not change. In the same sense, the comparison of L * n 's for different n may not be meaningful. It would be significant to consider a more proper loss function.
The present method is expected to work for most of DSs with discrete variable, such as cellar automaton and the contact process. It is challenging to develop the feasible implementations for off-lattice systems and systems with continuous degrees of freedom. There are some extensional usages of the present method. We can find a prediction formula for a quantity of interest, such as order parameter, by fixing the first component of X. In addition, we can seek for a conserved quantity by letting F identity transformation.
Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP18K03469.
Appendix A. The implementation of training (learning)
Before starting the training, we generate the training data: S max = 2 14 samples of (x(x),x(x ′ )) by the independent numerical simulations of the microscopic DS. We subtract a constant vector from bothx(x) andx(x ′ ) so thatx(x ′ ) = 0, which also yields X(x ′ ) = 0 for any w. By using these data, we perform the following computation almost independently in 32 threads.
First, we randomly initialize w as w mk ∼ U (−0.5, 0.5). Then, we orthonormalize X by modifying w as explained in Appendix B and set W by solving Eq. (10). Next, we randomly choose the size of minibatch S mb from {2 9 , 2 10 , 2 11 , 2 12 } and divide the whole data into S max /S mb minibatches. Then, we iterate the update of w by the Minibatch gradient descent with ADAM [17]. After each step, we orthonormalize X. Every 4 updates of w, W is updated. Every 2 16 /S mb updates of w, L n (w, W) is calculated, where we use the test data with S max samples that differ from the training data in order to avoid underestimation due to overfitting. If record-low L n is obtained, we store it and (w, W) in each thread.
We regard 2 20 /S mb updates of w as a epoch of training, whose CPU-time consumption rarely depends on S mb . We typically perform 2 12 epochs. At the end of each epoch, we rank the the momentary L n among the threads and initialize (w, W) if the momentary L n is larger than the median and L n has not decreased in the last two epochs. It is likely that such threads are trapped in local minima. This operation is the only exception of the independence among the threads. At the beginning of the epoch, we renew both of the training and test data. (Strictly speaking, we totally stock 4S max samples and randomly select the samples for the test and training without overlapping.)
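Putting the pieces together, the training procedure can be summarized by the schematic loop below. It reuses the poly_features/update_W/reduced_loss helpers sketched in Sec. 2.4 and is intentionally simplified: a crude finite-difference gradient and a bare normalization step stand in for the ADAM updates and the full Gram-Schmidt orthonormalization of Appendix B (a sketch of the latter is given after Appendix B), and minibatching, restarts and the test/training split are omitted. All hyperparameters are illustrative.

```python
import numpy as np

def fit(xt_in, xt_out, n, p, epochs=100, lr=1e-2, eps=1e-5, seed=0):
    """Alternating optimization of (w, W) on block-fraction data of shape (S, N_b)."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(-0.5, 0.5, (n, xt_in.shape[1]))

    def current_loss(w):
        X_in, X_out = xt_in @ w.T, xt_out @ w.T
        W = update_W(X_in, X_out, p)          # exact linear-regression step, Eq. (10)
        return reduced_loss(X_in, X_out, W, p)

    for _ in range(epochs):
        base = current_loss(w)
        g = np.zeros_like(w)
        for idx in np.ndindex(*w.shape):      # finite differences: clear, not fast
            dw = np.zeros_like(w)
            dw[idx] = eps
            g[idx] = (current_loss(w + dw) - base) / eps
        w -= lr * g
        # Keep the normalization part of Eq. (4); this also excludes the trivial
        # solution w = 0. The paper additionally orthogonalizes (Appendix B).
        w /= np.sqrt(np.mean((xt_out @ w.T) ** 2, axis=0))[:, None]
    return w, current_loss(w)
```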
Figure 1. The optimum value of the loss function obtained by the learning is plotted as a function of the block length. Panels (a), (b) and (c) correspond to IC I, IC II and IC III, respectively. The error bars, which are too small to see, indicate the root mean square deviation of the four best results.

Figure 2. The displacement of the reduced macroscopic variables for IC I. Panels (a), (b) and (c) correspond to n = 1, 2 and 3, respectively. In panel (c), the arrows are randomly colored to facilitate visualization. The coupling strength K equals 1.2K c except in panel (d), where K distributes in [0.8K c , 1.2K c ).

Figure 3. The displacement of the reduced macroscopic variables for IC II.

Figure 4. The displacement of the reduced macroscopic variables for IC III.

Figure 5. (a) Comparison of L * n for various K and that for unique K. (b) Comparison of L * n with dropout and that without dropout. (c) System-size dependence of L * n . The results of all panels are for the IC I.

Figure 6. Distribution of local minima of L n as a function of the redundancy of the elements of w. These values are picked when we temporarily abandon the learning and initialize w. The rightmost column in each panel shows the results without dropout.
Appendix B. Orthonormalization

We perform orthonormalization of {X m (x ′ )} n m=1 by modifying w as in Eq. (B.1). This is done in fixed order: m = 2, 3, · · · , n. Here, the average x̃_k1 x̃_k2 in Eq. (B.1) is taken not over the samples in the minibatch but over the whole training data, in order to reduce the computation cost. After the orthogonalization, we perform the normalization of Eq. (B.2).

When the record-low loss function is obtained, we do not accept it unless both of the two conditions in Eq. (B.3) are satisfied. Here, the average is taken over the test data.

Under dropout, we cannot treat the term with w lk in Eq. (B.1) unless the base k is allocated for X m . We substitute Z l := Σ_{k∈K m} v_k x̃_k for X l in Eq. (B.1). Here, K m is the subset of the bases allocated for X m and {v_k} is introduced to minimize |Z l − X l |². We still impose the condition Eq. (B.3) although it is not guaranteed even for the training data. In addition, we should order {X m } n m=1 so that the number of nonzero elements in {w mk } N b k=1 increases. In the minimal setting satisfying the orthonormal condition, the number of nonzero elements in {w mk } N b k=1 is m. Consequently, the total number of the nonzero elements is Σ_{m=1}^{n} m = n(n + 1)/2.
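Because X(x′) is linear in w, the Gram-Schmidt procedure described in this appendix can be applied directly to the rows of w. A sketch of our reading of it follows (names and the modified-Gram-Schmidt ordering are our own choices; the dropout variant with the Z l substitution is not shown):

```python
import numpy as np

def orthonormalize(w, xt_out):
    """Impose the orthonormal condition Eq. (4) on X(x') = xt_out @ w.T.

    xt_out : block fractions of the output snapshots x', shape (S, N_b),
             assumed already shifted so that their data average is zero.
    """
    w = np.array(w, dtype=float)
    for m in range(w.shape[0]):
        for l in range(m):                              # project out earlier X_l(x')
            overlap = np.mean((xt_out @ w[m]) * (xt_out @ w[l]))
            w[m] -= overlap * w[l]                      # X_l(x') already normalized
        w[m] /= np.sqrt(np.mean((xt_out @ w[m]) ** 2))  # unit second moment
    return w
```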
[1] Lorenz E N 1963 J. Atmos. Sci. 20 130
[2] Carleo G, Cirac I, Cranmer K, Daudet L, Schuld M, Tishby N, Vogt-Maranto L and Zdeborová L 2019 Rev. Mod. Phys. 91(4) 045002
[3] Berkooz G, Holmes P and Lumley J L 1993 Ann. Rev. Fluid Mech. 25 539
[4] Rowley C W, Mezíc I, Bagheri S, Schlatter P and Henningson D S 2009 Journal of Fluid Mechanics 641 115
[5] Schmid P J 2010 Journal of Fluid Mechanics 656 5
[6] Koopman B O 1931 Proceedings of the National Academy of Sciences of the United States of America 17 315
[7] Molgedey L and Schuster H G 1994 Phys. Rev. Lett. 72 3634
[8] Perez-Hernandez G, Paul F, Giorgino T, Fabritiis G D and Noé F 2013 J. Chem. Phys. 139 015102
[9] Wu H, Nüske F, Paul F, Klus S, Koltai P and Noé F 2017 J. Chem. Phys. 146 154104
[10] Schmidt M and Lipson H 2009 Science 324(5923) 81
[11] Brunton S L, Proctor J L and Kutz J N 2016 Proceedings of the National Academy of Sciences 113 3932-3937
[12] Harada K 2011 Phys. Rev. E 84 056704
[13] Harada K 2015 Phys. Rev. E 92 012106
[14] Metropolis N, Rosenbluth A W, Rosenbluth M N, Teller A H and Teller E 1953 J. Chemical Phys. 21 1087
[15] Hastings W K 1970 Biometrika 57 97
[16] Janke W and Villanova R 1997 Nuclear Physics B 489 679-696
[17] Kingma D P and Ba J 2015 Adam: A method for stochastic optimization, in 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, ed Bengio Y and LeCun Y
| []
|
[
"Note on the Coincidence theorem",
"Note on the Coincidence theorem"
]
| [
"Radoš Bakić "
]
| []
| []
| We are proving Coincidence theorem due to Walsh for the case when total degree of a polynomial is less then number of arguments. As an application we prove generalizations of the classical composition theorem. Also, the following result has been proven: if p(z) is a complex polynomial of degree n, then closed disk D that contains at least n − 1 of its zeros (counting multiplicity) contains at least n−2k+1 2 zeros of its k-th derivative, provided that arithmetical mean of these zeros is also center of D. | null | [
"https://export.arxiv.org/pdf/2305.16438v1.pdf"
]
| 258,947,129 | 2305.16438 | a94a9bef39d505123e4afc069142fafc047c2828 |
Note on the Coincidence theorem

Radoš Bakić

25 May 2023

Keywords: Coincidence theorem, zeros of polynomial, critical points of a polynomial, apolar polynomials. AMS Subject Classification: Primary 26C10, Secondary 30C15.

Abstract. We prove the Coincidence theorem due to Walsh for the case when the total degree of a polynomial is less than the number of arguments. As an application we prove generalizations of the classical composition theorem. Also, the following result is proven: if p(z) is a complex polynomial of degree n, then a closed disk D that contains at least n − 1 of its zeros (counting multiplicity) contains at least [(n − 2k + 1)/2] zeros of its k-th derivative, provided that the arithmetical mean of these zeros is also the center of D.
Let a(z) = Σ_{k=0}^{n} a_k z^k and b(z) = Σ_{k=0}^{n} b_k z^k be two complex polynomials of degree n. For them we can define the linear operator

A(a, b) = Σ_{k=0}^{n} (−1)^k a_k b_{n−k} / C(n, k),

where C(n, k) denotes the binomial coefficient. If A(a, b) = 0, then a and b are said to be apolar polynomials. For apolar polynomials the following classical theorem due to Grace holds:
Theorem of Grace: If all zeros of a polynomial a(z) are contained in some circular region S, then at least one zero of b(z) is contained in S, provided that a(z) and b(z) are apolar.
Circular region is (open or closed) disk or half-plane or their exterior. Some generalizations of the theorem of Grace can be found in [4].
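A small numerical sketch of the operator A(a, b), using the normalization written above (division by the binomial coefficient); the function name and the example polynomials are our own.

```python
from math import comb

def apolarity(a, b):
    """A(a, b) for coefficient lists a = [a_0, ..., a_n], b = [b_0, ..., b_n],
    where a_k is the coefficient of z**k."""
    n = len(a) - 1
    assert len(b) == n + 1
    return sum((-1) ** k * a[k] * b[n - k] / comb(n, k) for k in range(n + 1))

# Example: a(z) = z**2 - 1 (zeros +-1) and b(z) = z**2 + 1 (zeros +-i) are apolar,
# consistent with Grace's theorem: the closed unit disk contains zeros of both.
print(apolarity([-1, 0, 1], [1, 0, 1]))   # -> 0.0
```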
One consequence of this theorem is the classical Coincidence theorem due to Walsh: Coincidence theorem: Let p(z 1 , z 2 , . . . , z n ) be a symmetric complex polynomial of total degree n, and of degree 1 in each z i . Suppose that w 1 , w 2 , . . . , w n are complex numbers that are contained in some circular region S. Then there exists z ∈ S such that p(w 1 , w 2 , . . . , w n ) = p(z, z, . . . , z).
In his papers [1], [2] and [3] Aziz showed that the above theorem holds if the total degree is less than n, provided that the mentioned circular region is convex. In general, the Coincidence theorem is not true if S is the exterior of a disk and the total degree is less than n. For example, if we set p(z 1 , z 2 ) = z 1 + z 2 , and S is the closed exterior of the unit disk, then the equation p(−1, 1) = p(z, z), i.e. 0 = 2z, has no solution in S.

We are now going to strengthen the above results of Aziz and also extend them to the exterior of a disk.
Theorem 1. Let p(z_1, z_2, . . . , z_n) be a symmetric complex polynomial of total degree m ≤ n, and of degree 1 in each z_i. Suppose now that w_1, w_2, . . . , w_n are complex numbers such that the zeros of q^{(n−m)}(z) are contained in some circular region S, where q(z) = \prod_{i=1}^{n}(z − w_i). Then the equation p(w_1, w_2, . . . , w_n) = p(z, z, . . . , z) has a solution in S.
Proof: We can assume that p(w_1, w_2, . . . , w_n) = 0; otherwise we can replace p(z_1, z_2, . . . , z_n) with p(z_1, z_2, . . . , z_n) − p(w_1, w_2, . . . , w_n). By the well-known representation theorem, the polynomial p can be represented as a linear combination of the e_k, where e_k is the elementary symmetric polynomial in z_1, z_2, . . . , z_n of degree k (e_0 = 1). Hence p(z_1, z_2, . . . , z_n) = \sum_{k=0}^{m} E_k e_k for some complex constants E_k. We can also write q(z) = \prod_{k=1}^{n}(z − w_k) = \sum_{k=0}^{n}(−1)^{n−k} e_{n−k} z^k, and so q^{(i)}(z) = \sum_{k=0}^{n−i}(i+k)\cdots(1+k)\,(−1)^{n−k−i} e_{n−k−i} z^k. It can be easily checked that for m < n
$$p(z_1,\dots,z_n)=\sum_{k=0}^{m}E_k e_k=\frac{1}{n(n-1)\cdots(m+1)}\sum_{k=0}^{m}E_k\binom{n}{k}(n-k)\cdots(m-k+1)\,\frac{e_k}{\binom{m}{k}}.$$
Then
$$p(w_1,\dots,w_n)=\frac{1}{n(n-1)\cdots(m+1)}\sum_{k=0}^{m}E_k\binom{n}{k}(n-k)\cdots(m-k+1)\,\frac{e_k}{\binom{m}{k}}=0$$
is equivalent to A(q^{(n−m)}, r) = 0, where r(z) = \sum_{k=0}^{m} E_k \binom{n}{k} z^k = p(z, z, . . . , z). That means that q^{(n−m)}(z) and p(z, z, . . . , z) are apolar (this is also true for n = m). Hence, the equation p(z, z, . . . , z) = 0 has a solution in S, and the theorem is proved.
Let us note that in the case when S is convex, Theorem 1 is (in general) stronger than the mentioned results of Aziz, since any convex set that contains the zeros of a polynomial also contains its critical points, by the Gauss-Lucas theorem.
Our next result is an application of Theorem 1. It generalizes a result given in [5]. Theorem 2. Let p(z) be a complex polynomial of degree n. Suppose that some closed disk D contains n − 1 zeros of p(z), such that the centre of D is also the arithmetic mean of these zeros. Then the disk D contains at least [(n − 2k + 1)/2] zeros of the k-th derivative of p(z), where [ ] denotes the integer part.
Proof: Let z_1, z_2, . . . , z_n be the zeros of p(z) such that z_1, z_2, . . . , z_{n−1} are contained in the disk D, and let c be the centre of D, c = \frac{1}{n-1}\sum_{k=1}^{n-1} z_k. We can assume that z_n is outside D; otherwise our theorem follows immediately from the Gauss-Lucas theorem. After a suitable rotation and translation, we can also assume that z_n = 0 and that c is real and positive.
If we compute the k-th derivative of p(z) = \prod_{k=1}^{n}(z − z_k), it is a sum of products of the form (z − z_{l_1})\cdots(z − z_{l_{n−k}}). We group into one sum those products in which the factor corresponding to z_n = 0 occurs, and the rest of them into the other sum, i.e.
$$p^{(k)}(z) = z\,\Sigma_1 + \Sigma_2.$$
The polynomial p^{(k)}(z) can be viewed as a polynomial in z_1, z_2, . . . , z_{n−1} of total degree n − k that satisfies the conditions of Theorem 1. If p^{(k)}(z) = z\,\Sigma_1 + \Sigma_2 = 0 for some z ∈ C, then by Theorem 1 there exists y ∈ D such that
$$p^{(k)}(z, z_1, z_2, \dots, z_{n-1}) = p^{(k)}(z, y, y, \dots, y) = k!\binom{n-1}{k}\,z\,(z-y)^{n-k-1} + k!\binom{n-1}{k-1}(z-y)^{n-k} = 0,$$
and this is equivalent to (z − y)^{n−k−1}\big(z − \tfrac{k}{n}y\big) = 0. Hence, it follows that p^{(k)}(z) = 0 implies that either z = y (i.e. z ∈ D) or z = \tfrac{k}{n}y for some y ∈ D. Now, let w_1, w_2, . . . , w_{n−k} be all the zeros of p^{(k)}(z). We can arrange these points such that the first m of them are in D, while all other points are outside D. So all w_i with i > m are of the form w_i = \tfrac{k}{n}y_i for some y_i ∈ D. The arithmetic means of the zeros of p(z) and of p^{(k)}(z) are equal, so
$$\frac{(n-1)c}{n}=\frac{1}{n-k}\Big(\sum_{i=1}^{m}w_i+\frac{k}{n}\sum_{i=m+1}^{n-k}y_i\Big)$$
for some y_i ∈ D. All w_i, i ≤ m, and all y_i in the above equation lie in D, so their real parts are less than 2c. Therefore, taking real parts of both sides of the above equation, we obtain the inequality
$$\frac{(n-1)c}{n}<\frac{2c}{n-k}\Big(m+\frac{k}{n}(n-k-m)\Big),$$
and this is equivalent to \frac{n-2k-1}{2} < m, i.e. \big[\frac{n-2k+1}{2}\big] \le m, and the proof is completed.
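The statement of Theorem 2 can also be probed numerically. The following sketch (our illustration, not part of the paper) builds a polynomial with n − 1 zeros whose arithmetic mean is the centre of a disk D and one extra zero outside D, then counts the zeros of the k-th derivative inside D, assuming NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 3
center, radius = 2.0, 1.0                      # disk D; the remaining zero 0 lies outside D

# n-1 zeros near the centre, recentred so their arithmetic mean is exactly `center`.
r = 0.4 * radius * rng.uniform(size=n - 1)
phi = 2 * np.pi * rng.uniform(size=n - 1)
z = center + r * np.exp(1j * phi)
z = z - z.mean() + center                      # each point moves < 0.4*radius, so it stays in D
zeros = np.append(z, 0.0)

p = np.poly(zeros)                             # coefficients of p(z), decreasing degree
crit = np.roots(np.polyder(p, k))              # zeros of the k-th derivative

inside = int(np.sum(np.abs(crit - center) <= radius + 1e-8))
bound = (n - 2 * k + 1) // 2                   # [(n - 2k + 1)/2]
print(f"zeros of the {k}-th derivative in D: {inside} (Theorem 2 guarantees at least {bound})")
```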
[1] Aziz, A., On the zeros of composite polynomials, Pacific J. Math. 103 (1982), 1-7.
[2] Aziz, A., On the location of the zeros of certain composite polynomials, Pacific J. Math. 118 (1985), 17-26.
[3] Aziz, A., On composite polynomials whose zeros are in a half plane, Bull. Austral. Math. Soc. 36 (1987), 449-460.
[4] Bakic, R., Generalization of the Grace-Heawood Theorem, Publ. Inst. Math. 93 (107) (2013), 65-67.
[5] Bakic, R., On the number of critical points in a disc, C. R. Acad. Bulgare Sci. 69 (10) (2016), 1249-1250.
[6] Marden, M., Geometry of Polynomials, Math. Surveys no. 3, Amer. Math. Soc., Providence, RI, 1966.
| []
|
[
"Evidential Deep Learning for Open Set Action Recognition",
"Evidential Deep Learning for Open Set Action Recognition"
]
| [
"Wentao Bao \nRochester Institute of Technology\n14623RochesterNYUSA\n",
"Qi Yu [email protected] \nRochester Institute of Technology\n14623RochesterNYUSA\n",
"Yu Kong [email protected] \nRochester Institute of Technology\n14623RochesterNYUSA\n"
]
| [
"Rochester Institute of Technology\n14623RochesterNYUSA",
"Rochester Institute of Technology\n14623RochesterNYUSA",
"Rochester Institute of Technology\n14623RochesterNYUSA"
]
| []
| In a real-world scenario, human actions are typically out of the distribution from training data, which requires a model to both recognize the known actions and reject the unknown. Different from image data, video actions are more challenging to be recognized in an open-set setting due to the uncertain temporal dynamics and static bias of human actions. In this paper, we propose a Deep Evidential Action Recognition (DEAR) method to recognize actions in an open testing set. Specifically, we formulate the action recognition problem from the evidential deep learning (EDL) perspective and propose a novel model calibration method to regularize the EDL training. Besides, to mitigate the static bias of video representation, we propose a plug-and-play module to debias the learned representation through contrastive learning. Experimental results show that our DEAR method achieves consistent performance gain on multiple mainstream action recognition models and benchmarks. Code and pre-trained models are available at | 10.1109/iccv48922.2021.01310 | [
"https://arxiv.org/pdf/2107.10161v2.pdf"
]
| 236,154,995 | 2107.10161 | 6fe4afdff5113d08958baec9112d70f772fcc23b |
Evidential Deep Learning for Open Set Action Recognition
Wentao Bao
Rochester Institute of Technology
14623RochesterNYUSA
Qi Yu [email protected]
Rochester Institute of Technology
14623RochesterNYUSA
Yu Kong [email protected]
Rochester Institute of Technology
14623RochesterNYUSA
Evidential Deep Learning for Open Set Action Recognition
https://www.rit.edu/actionlab/dear.
In a real-world scenario, human actions are typically out of the distribution from training data, which requires a model to both recognize the known actions and reject the unknown. Different from image data, video actions are more challenging to be recognized in an open-set setting due to the uncertain temporal dynamics and static bias of human actions. In this paper, we propose a Deep Evidential Action Recognition (DEAR) method to recognize actions in an open testing set. Specifically, we formulate the action recognition problem from the evidential deep learning (EDL) perspective and propose a novel model calibration method to regularize the EDL training. Besides, to mitigate the static bias of video representation, we propose a plug-and-play module to debias the learned representation through contrastive learning. Experimental results show that our DEAR method achieves consistent performance gain on multiple mainstream action recognition models and benchmarks. Code and pre-trained models are available at
Introduction
Video action recognition aims to classify a video that contains a human action into one of the pre-defined action categories (closed set). However, in a real-world scenario, it is essentially an open set problem [53], which requires the classifier to simultaneously recognize actions from known classes and identify actions from unknown ones [47,17].
In practice, open set recognition (OSR) is more challenging than closed set recognition, while it is important for applications such as face recognition [36], e-commerce product classification [61], autonomous driving [46], and so on.
OSR was originally formalized in [47] and many existing approaches have been proposed using image datasets such as MNIST [32] and CIFAR-10 [30]. However, unlike OSR, limited progress has been achieved for open set action recognition (OSAR), which is increasingly valuable in practice. In fact, novel challenges arise in OSAR from the following key aspects. First, the temporal nature of videos may lead to a high diversity of human action patterns. Hence, an OSAR model needs to not only capture the temporal regularities of closed set actions but also be aware of what it does not know when presented with unknown actions in an open set scenario. Second, the visual appearance of natural videos typically contains static biased cues [34,11] (e.g., "surfing water" in totally different scenes as shown in Fig. 2). Without addressing the temporal dynamics of human actions, the static bias could seriously hamper the capability of an OSAR model to recognize unknown actions from an unbiased open set. Due to these challenges, existing effort on OSAR is quite limited, with few exceptions [53,27,63]. They simply regard each video as a standalone sample and primarily rely on image-based OSR approaches. As a result, they fall short in addressing the inherent video-specific challenges in the open set context as outlined above.
In this paper, we propose a Deep Evidential Action Recognition (DEAR) method for the open set action recognition task. To enable the model to "know unknown" in an OSAR task, our method formulates it as an uncertainty estimation problem by leveraging evidential deep learning (EDL) [50,66,52,1,49]. EDL utilizes deep neural networks to predict a Dirichlet distribution of class probabilities, which can be regarded as an evidence collection process. The learned evidence is informative to quantify the predictive uncertainty of diverse human actions so that unknown actions would incur high uncertainty, i.e., the model knows the unknown. Furthermore, to overcome the potential over-fitting risk of EDL in a closed set, we propose a novel model calibration method to regularize the evidential learning process. Besides, to mitigate the static bias problem for video actions, we propose a plug-and-play module to debias the learned representation through contrastive learning. Benefiting from the evidential theory, our DEAR method is practically flexible to implement and provides a principled way to quantify the uncertainty for identifying the unknown actions. Experimental results show that the DEAR method boosts the performance of existing powerful action recognition models with both small and large-scale unknown videos (see Fig. 1), while still maintains a high performance in traditional closed set recognition setting.
Distinct from existing OSR methods [53,27], the proposed DEAR is the first evidential learning model for large-scale video action recognition. DEAR is superior to existing Bayesian uncertainty-based methods [27] in that model uncertainty can be directly inferred through evidence prediction, which avoids inexact posterior approximation or time-consuming Monte Carlo sampling [1]. Moreover, our proposed model calibration method ensures that DEAR is confident in accurate predictions while being uncertain about inaccurate ones. Compared to [53], which incrementally learns a classifier for unknown classes, our method is more flexible in training without access to unknown actions. Moreover, our proposed debiasing module reduces the detrimental static bias of video actions so that the model is robust to out-of-context actions in the open set setting.
In summary, the contribution of this paper is three-fold:
• Our Deep Evidential Action Recognition (DEAR) method performs novel evidential learning to support open set action recognition with principled and efficient uncertainty evaluation.
• The proposed Evidential Uncertainty Calibration (EUC) and Contrastive Evidential Debiasing (CED) modules effectively mitigate over-confident predictions and static bias problems, respectively.
• The DEAR method is extensively validated and consistently boosts the performance of state-of-the-art action recognition models on challenging benchmarks.
Related Work
Open Set Recognition. The OSR problem originates from the face recognition scenario [33] and was first formalized by Scheirer et al. [47]. In [47], to reject the unknown classes, a binary support vector machine (SVM) was introduced by adding an extra hyper-plane for each new class. Based on this work, the Weibull-calibrated SVM (W-SVM) [48] and PI-SVM [21] were further proposed to reject the unknown classes [43,13]. To reject the unknown, the variational autoencoder (VAE) was recently used to learn the reconstruction error in the OSR task [44,64,57]. Different from these methods, our method is the first work to introduce evidential deep learning (EDL) for the OSR task and show its advantage over existing approaches. The open set action recognition (OSAR) problem is much more challenging than the OSR problem, while only a few existing works have explored it. Shu et al. [53] proposed ODN by incrementally adding new classes to the action recognition head. To capture the uncertainty of unknown classes, Bayesian deep learning was recently introduced to identify unknown actions in [27,56,28]. Busto et al. [6] proposed an open set domain adaptation method. However, existing methods ignore the importance of uncertainty calibration and the static bias of human actions in video data. In a broader context, uncertainty-based OSR is also closely related to out-of-distribution (OOD) detection [58]. Other less related topics such as anomaly detection [45], generalized zero-shot learning [38], and open world learning [4] are out of the scope of this paper and are comprehensively reviewed in [17].
Deep Learning Uncertainty. To distinguish between unknown and known samples, an appropriate OOD scoring function is important. A recent line of research [27,37,9,52,49] shows that the predictive uncertainty learned by deep neural networks (DNNs) can be a promising scoring function to identify OOD samples. It is assumed that OOD samples should be highly uncertain during inference. Bayesian neural networks (BNNs) have been introduced to model the epistemic and aleatoric uncertainty for multiple computer vision tasks [23,26,3]. However, BNNs are limited by the intractability of exact posterior inference, the difficulty of choosing suitable weight priors, and the expensive sampling required for uncertainty quantification [1].
Recently, evidential deep learning (EDL) is developed by incorporating the evidential theory into deep neural networks with promising results in both classification [50] and regression [1] tasks. In this paper, to the best of our knowledge, we are the first to incorporate evidential learning for large-scale and uncertainty-aware action recognition. Video Action Recognition. Video action recognition has been widely studied in closed set setting [60,25,65]. In this paper, we select several representative and powerful methods, including the 3D convolution method I3D [8], the 2D convolution method TSM [35], the two-stream method SlowFast [14], and the method focusing on neck structure of a recognition model TPN [62]. Note that our method can be easily applied to any existing video action recognition models to enable them for open set action recognition.
Approach
Overview. The proposed DEAR method is illustrated in Fig. 3. Given a video as input, the Evidential Neural Network (ENN) head on top of an Action Recognition (AR) backbone 1 predicts the class-wise evidence, which formulates a Dirichlet distribution so that the multi-class probabilities and predictive uncertainty of the input can be determined. For the open set inference, high uncertainty videos can be regarded as unknown actions while low uncertainty videos are classified by the learned categorical probabilities. The model is trained by Evidential Deep Learning (EDL) [50] loss regularized by our proposed Evidential Uncertainty Calibration (EUC) method. In training, we also propose a plug-and-play Contrastive Evidence Debiasing (CED) module to debias the representation of human actions in videos. 1 In our experiments, we use four different action recognition models which are I3D [8], TSM [35], SlowFast [14], and TPN [62].
Deep Evidential Action Recognition
Background of Evidential Deep Learning. Existing deep learning-based models typically use a softmax layer on top of deep neural networks (DNNs) for classification. However, these softmax-based DNNs are not able to estimate the predictive uncertainty for a classification problem because softmax score is essentially a point estimation of a predictive distribution [15] and the softmax outputs tend to be over-confident in false prediction [19].
Recent evidential deep learning (EDL) [50,1] was developed to overcome the limitations of softmax-based DNNs by introducing the evidence framework of Dempster-Shafer Theory (DST) [51] and the subjective logic (SL) [22]. EDL provides a principled way to jointly formulate the multiclass classification and uncertainty modeling. In particular, given a sample x (i) for K-class classification, assuming that class probability follows a prior Dirichlet distribution, the cross-entropy loss to be minimized for learning evidence e (i) ∈ R K + eventually reduces to the following form:
$$\mathcal{L}^{(i)}_{EDL}(y^{(i)}, e^{(i)}; \theta) = \sum_{k=1}^{K} y^{(i)}_{k}\Big(\log S^{(i)} - \log\big(e^{(i)}_{k}+1\big)\Big) \quad (1)$$
where y^{(i)} is a one-hot K-dimensional label for sample x^{(i)} and e^{(i)} can be expressed as e^{(i)} = g(f(x^{(i)}; θ)). Here, f is the output of a DNN parameterized by θ and g is the evidence function that keeps the evidence e_k non-negative. S is the total strength of a Dirichlet distribution Dir(p|α), which is parameterized by α ∈ R^K, and S is defined as S = \sum_{k=1}^{K} α_k. Based on DST and SL theory, α_k is linked to the learned evidence e_k by the equality α_k = e_k + 1. At inference, the predicted probability of the k-th class is p̂_k = α_k / S and the predictive uncertainty u can be deterministically given as u = K/S. More detailed derivations can be found in our supplementary.
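For concreteness, a minimal PyTorch-style sketch of this evidence-to-Dirichlet mapping and the loss in Eq. (1) is given below. It is an illustrative reimplementation rather than the authors' released code, and it uses exp as the evidence function g, as reported in the supplementary.

```python
import torch
import torch.nn.functional as F

def edl_loss_and_uncertainty(logits, labels, num_classes):
    """logits: (B, K) outputs f(x; theta) of the backbone head; labels: (B,) class ids."""
    evidence = torch.exp(torch.clamp(logits, max=10))    # e = g(f(x)) >= 0, clamped for stability
    alpha = evidence + 1.0                               # Dirichlet parameters alpha_k = e_k + 1
    S = alpha.sum(dim=1, keepdim=True)                   # total Dirichlet strength

    y = F.one_hot(labels, num_classes).float()
    # Eq. (1): sum_k y_k * (log S - log(e_k + 1))
    loss = (y * (torch.log(S) - torch.log(alpha))).sum(dim=1).mean()

    probs = alpha / S                                    # expected class probabilities
    uncertainty = num_classes / S.squeeze(1)             # u = K / S, in (0, 1]
    return loss, probs, uncertainty

# toy usage with K = 101 known classes (e.g. UCF-101)
logits = torch.randn(4, 101)
labels = torch.randint(0, 101, (4,))
loss, probs, u = edl_loss_and_uncertainty(logits, labels, 101)
```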
EDL for Action Recognition. In this paper, we propose to formulate the action recognition from the EDL perspective. In the training phase, by applying the EDL objective in (1) for action dataset, we are essentially trying to collect evidence of each action category for an action video. In the testing phase, since the action probability p ∈ R K is assumed to follow a Dirichlet, i.e., p ∼ Dir(p|α), the categorical probability and uncertainty of a human action can be jointly expressed by a (K − 1)-simplex (see the triangular heat map in Fig. 3). The EDL uncertainty enables the action recognition model to "know unknown".
However, due to the deterministic nature of EDL, the potential over-fitting issue would hamper the generalization capability for achieving good OSAR performance. Besides, the static bias problem in video data is still not addressed by EDL. To this end, we propose a model calibration method and a representation debiasing module below.
Figure 4: Examples of Probability Simplex. We use 3-class classification as an example and assume the first class as the correct label. A well calibrated model should give Accurate and Certain (AC) predictions (Fig. 4a) or Inaccurate and Uncertain (IU) predictions (Fig. 4d), while the AU (Fig. 4b) and IC (Fig. 4c) cases need to be reduced.
Evidential Uncertainty Calibration
Though the evidential uncertainty from EDL can be learned directly without sampling, the uncertainty may not be well calibrated to handle the unknown samples in the OSAR setting. As pointed out in the existing model calibration literature [40,29], a well calibrated model should be confident in its predictions when it is accurate, and be uncertain about inaccurate ones. Besides, it has been empirically demonstrated that the miscalibration of existing DNN models is linked to over-fitting of the negative log-likelihood (NLL) [19,41]. Since the EDL objective in (1) is equivalent to minimizing the NLL [50], the trained model is likely to be over-fitted, with poor generalization for OSAR tasks. To address this issue, we propose to calibrate the EDL model by considering the relationship between accuracy and uncertainty.
To this end, we follow the same goal as [40,29] and maximize the Accuracy versus Uncertainty (AvU) utility function for calibrating the uncertainty:
$$\mathrm{AvU} = \frac{n_{AC} + n_{IU}}{n_{AC} + n_{AU} + n_{IC} + n_{IU}} \quad (2)$$
where n_{AC}, n_{AU}, n_{IC}, and n_{IU} represent the numbers of samples in the four predicted cases, i.e., (1) Accurate and Certain (AC), (2) Accurate and Uncertain (AU), (3) Inaccurate and Certain (IC), and (4) Inaccurate and Uncertain (IU). A well calibrated model achieves high AvU utility so that the predictive uncertainty is consistent with accuracy. Fig. 4 shows a toy example of the four possible EDL outputs. To calibrate the predictive uncertainty, the EDL model is encouraged to learn a skewed and sharp Dirichlet simplex for an accurate prediction (Fig. 4a), and to provide an unskewed and flat Dirichlet simplex for an incorrect prediction (Fig. 4d). To this end, we propose to regularize EDL training by minimizing the expectations of the AU and IC cases (Fig. 4b and Fig. 4c) so that the other two cases are encouraged. Therefore, if a video is assigned high EDL uncertainty, it is more likely to be incorrect, so that an unknown action is identified. In particular, we propose an Evidential Uncertainty Calibration (EUC) method that minimizes the following sum of AU and IC losses by considering the logarithmic constraint between the confidence p_i and the uncertainty u_i:
$$\mathcal{L}_{EUC} = -\lambda_t \sum_{i\in\{\hat{y}_i = y_i\}} p_i \log(1-u_i) \;-\; (1-\lambda_t)\sum_{i\in\{\hat{y}_i \neq y_i\}} (1-p_i)\log(u_i) \quad (3)$$
where p_i is the maximum class probability of an input sample x^{(i)} and u_i is the associated evidential uncertainty. The first term aims to give low uncertainty (u_i → 0) when the model makes an accurate prediction (ŷ_i = y_i, p_i → 1), while the second term tries to give high uncertainty (u_i → 1) when the model makes an inaccurate prediction (ŷ_i ≠ y_i, p_i → 0). Note that the annealing factor λ_t ∈ [λ_0, 1] is defined as
$$\lambda_t = \lambda_0 \exp\{-(\ln \lambda_0 / T)\, t\}.$$
Here, λ_0 is a small positive constant, i.e., λ_0 ≪ 1, such that λ_t is monotonically increasing w.r.t. the training epoch t, and T is the total number of training epochs. As the training epoch t increases to T, the factor λ_t increases exponentially from λ_0 to 1.0.
The motivation behind the annealing weighting is that the dominant periods of accurate and inaccurate predictions in model training are different. In the early training stages, the inaccurate predictions are the dominant cases so that the IC loss (second term) should be more penalized, while in the late training stages, the accurate predictions are the dominant so that the AU loss (first term) should be more penalized. Therefore, the annealing weighing factor λ t dynamically balances the two terms in training.
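A sketch of this regularizer (our illustration, not the released implementation) could look as follows, with probs and uncertainty computed as in the EDL sketch above and λ_0 = 0.01 as reported in the supplementary.

```python
import math
import torch

def euc_loss(probs, uncertainty, labels, epoch, total_epochs, lambda0=0.01, eps=1e-10):
    """Eq. (3): penalize Accurate-but-Uncertain and Inaccurate-but-Certain predictions."""
    # annealing factor: lambda_t = lambda0 * exp(-(ln(lambda0) / T) * t), rising from lambda0 to 1
    lam_t = lambda0 * math.exp(-(math.log(lambda0) / total_epochs) * epoch)

    p_max, y_hat = probs.max(dim=1)
    acc = (y_hat == labels).float()

    au = p_max * torch.log(1.0 - uncertainty + eps)        # accurate & uncertain term
    ic = (1.0 - p_max) * torch.log(uncertainty + eps)      # inaccurate & certain term
    return -(lam_t * acc * au + (1.0 - lam_t) * (1.0 - acc) * ic).mean()
```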
Discussion. Our EUC method is advantageous over existing approaches [40] and AvUC [29] in following aspects. First, compared with [40], our EUC method takes the same merit of AvUC that it is a fully differentiable regularization term. Second, compared with both [40] and AvUC, the EUC loss does not rely on distribution shifted validation set during training which is not reasonable for OSAR model to access the OOD samples. Therefore, our method provides better flexibility to calibrate deep learning models on large-scale dataset, such as the real-world videos of human actions addressed in this paper. Our experimental results (Table 3) show that the model calibration performance of EUC method is more significant for open set recognition than on closed set recognition.
Contrastive Evidence Debiasing
For the OSAR task, static bias (see the example in Fig. 2) in a video dataset is one of the most challenging problems that limit the generalization capability of a model in an open set setting. According to [34], static bias can be categorized into scene bias, object bias, and human bias. Existing research [11,34,24,2] has empirically shown that debiasing the model by input data or learned representation can significantly improve action recognition performance. As pointed out in [34], there is intrinsically nothing
wrong about the bias if it can be "over-fitted" by an action recognition model for achieving a "good" performance in traditional closed-set setting. However, in an open set setting, the static bias could result in a vulnerable model that falsely recognizes an action video containing similar static features but totally out-of-contextual temporal dynamics.
In this paper, we propose a Contrastive Evidence Debiasing (CED) module to mitigate the static bias problem. As shown in Fig. 5, the CED consists of three branches. The middle branch is a commonly-used 3D convolutional structure (Conv3D) to predict unbiased evidence (e) while the top and and bottom branches predict biased evidences (ẽ andē). In particular, the top branch keeps the same network structure as the middle one but takes temporally shuffled features (x) as input. The bottom branch keeps the same input feature (x) as the middle one but replaces the Conv3D with 2D convolutional structure (Conv2D). Finally, with the HSIC-based minmax optimization, the feature f for predicting unbiased evidence is encouraged to be contrastive to the features h andh for predicting biased evidence.
In particular, motivated by the recent method ReBias [2], the minmax optimization is defined using the Hilbert-Schmidt Independence Criterion (HSIC). The HSIC function measures the degree of independence between two continuous random variables. With radial basis function (RBF) kernels k_1 and k_2,
HSIC_{k_1,k_2}(f, h) = 0 if and only if f ⊥⊥ h.
The detailed mathematical form of HSIC can be found in [18,54] (or see the Section 1.3 of the supplementary). For the middle branch, the goal is to learn a discriminative and unbiased feature f by minimizing
$$\mathcal{L}(\theta_f, \phi_f) = \mathcal{L}_{EDL}(y, e; \theta_f, \phi_f) + \lambda \sum_{h\in\Omega} \mathrm{HSIC}(f, h; \theta_f), \quad (4)$$
where θ f and φ f are parameters of neural networks to produce unbiased feature f and to predict evidence e. y is the multi-class label. The second term encourages feature f to be independent of the biased feature h from the set of features generated by top branch h 3D (x) and the bottom
branch h 2D (x), i.e., Ω = {h 3D (x), h 2D (x)}.
For the top and bottom branches, the goal is to learn the above two types of biased feature h by
$$\mathcal{L}(\theta_h, \phi_h) = \sum_{h\in\Omega} \big\{\mathcal{L}_{EDL}(y, e_h; \theta_h, \phi_h) - \lambda\, \mathrm{HSIC}(f, h; \theta_h)\big\} \quad (5)$$
where θ h denotes the network parameters of h 3D (x) and h 2D (x) to generate biased features h, and the φ h denotes the parameters of neural networks to predict corresponding evidence e h ∈ {ê,ē}. The first term in (5) aims to avoid the biased feature h to predict arbitrary evidence, while the second term guarantees that h is similar enough to f so that f has to be pushed faraway from h by (4).
The two objectives in (4) and (5) are alternatively optimized so that feature h is learned to be biased to guide the debiasing of feature f . In practice, we also implemented a joint training strategy which aims to optimize the objective of (4) and (5) jointly and we empirically found it can achieve a better performance.
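The two objectives can be sketched as follows. This is an illustrative outline rather than the released code; it assumes an edl_loss(evidence_logits, labels) helper and an hsic(f, h) estimator (for example, the unbiased estimator of Eq. (15) in the supplementary), both of which are stand-ins rather than names from the paper.

```python
def ced_losses(feat_f, ev_f, feat_h3d, ev_h3d, feat_h2d, ev_h2d, labels,
               edl_loss, hsic, lam=1.0):
    """feat_*: branch features of shape (B, D); ev_*: the corresponding predicted evidence."""
    biased = [(feat_h3d, ev_h3d), (feat_h2d, ev_h2d)]      # Omega: shuffled-input 3D branch, 2D branch

    # Eq. (4): the unbiased branch classifies well while becoming independent of biased features.
    # Detaching h routes the HSIC gradient only to theta_f, matching HSIC(f, h; theta_f).
    loss_f = edl_loss(ev_f, labels) + lam * sum(hsic(feat_f, h.detach()) for h, _ in biased)

    # Eq. (5): the biased branches classify well while staying close (in HSIC) to f.
    loss_h = sum(edl_loss(e, labels) - lam * hsic(feat_f.detach(), h) for h, e in biased)
    return loss_f, loss_h

# joint training (found slightly better than alternating in the paper):
# total = loss_f + loss_h; total.backward(); optimizer.step()
```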
Discussion. Compared with recent work [11] that leverages adversarial learning to remove scene bias, our method does not rely on object bounding boxes and pseudo scene labels as auxiliary training input. The representation bias addressed in our paper implicitly encompasses all sources of biases, not just the scene bias. Compared with ReBias [2], our CED module shares the similar idea of removing bias with bias. However, the HSIC in our CED module considers not only the bias-characterising model (i.e., h_2D(x)) as in [2], but also the biased feature input obtained by temporal shuffling. This consideration further encourages the backbone to focus more on temporal dynamics. Besides, our CED is a plug-and-play module and can be flexibly inserted into any state-of-the-art deep learning-based action recognition model with little coding effort.

Experiments

Dataset. We evaluate the proposed DEAR method on three commonly used real-world video action datasets, including UCF-101 [55], HMDB-51 [31], and MiT-v2 [39]. All models are trained on the UCF-101 training split. MiT-v2 has 305 classes and its testing split contains 30,500 video samples, which is about 20 times larger than the HMDB-51 testing set. In testing, we use the UCF-101 testing set as known samples, and the testing splits of the HMDB-51 and MiT-v2 datasets as two sources of unknown. Note that there could be a few overlapping classes between UCF-101 and the other two datasets, but for standardizing the evaluation and reproducibility, we do not manually clean the data.

Evaluation Protocol. To evaluate the classification performance in both closed and open set settings, we separately report the Closed Set Accuracy for K-class classification and the Open Set area under the ROC curve (AUC) for distinguishing known and unknown (2 classes). Furthermore, to comprehensively evaluate the (K+1)-class classification performance, i.e., with the unknown as the (K+1)-th class, we plot the curve of macro-F1 scores by gradually increasing the openness, similar to existing literature [53,64,57]. For each openness point, i new classes are randomly selected from the HMDB-51 (where i ≤ 51) or MiT-v2 (where i ≤ 305) test set, and we compute the macro-F1 score for each of 10 randomized selections. Since there is no existing quantitative metric to summarize the performance of the F1 curve, in this paper we propose an Open maF1 score:
$$\text{Open maF1} = \frac{\sum_i \omega_O^{(i)} \cdot F_1^{(i)}}{\sum_i \omega_O^{(i)}} \quad (6)$$
where ω_O^{(i)} denotes the openness when i new classes are introduced and is defined as ω_O^{(i)} = 1 − \sqrt{2K/(2K+i)} according to [47], and F_1^{(i)} is the macro-F1 score obtained by considering the samples from all i new classes as unknown. The basic idea of weighting F_1^{(i)} by ω_O^{(i)} is that the result is essentially the normalized area under the curve of macro-F1 vs. openness. The Open maF1 score quantitatively evaluates the performance of (K+1)-class classification in the open set setting.
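Evaluating Eq. (6) reduces to an openness-weighted average; the sketch below is our illustration and assumes the per-openness macro-F1 scores have already been computed (e.g., with scikit-learn's f1_score).

```python
import numpy as np

def open_maf1(macro_f1_per_i, K):
    """macro_f1_per_i[i-1] is the macro-F1 when i unknown classes are added, i = 1..N."""
    i = np.arange(1, len(macro_f1_per_i) + 1)
    omega = 1.0 - np.sqrt(2.0 * K / (2.0 * K + i))        # openness at each point
    f1 = np.asarray(macro_f1_per_i)
    return float((omega * f1).sum() / omega.sum())         # Eq. (6)

# e.g. K = 101 known classes and macro-F1 measured for i = 1..51 added unknown classes
print(open_maf1(np.linspace(0.80, 0.70, 51), K=101))
```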
Implementation Details. Our method is implemented with the PyTorch codebase MMAction2 [12]. The adopted action recognition models use a ResNet-50 backbone pre-trained on the Kinetics-400 [7] dataset and are fine-tuned on the UCF-101 training set. Our proposed EDL loss L_EDL replaces the original cross-entropy loss, and our proposed CED module is inserted into the layer before the classification heads of the recognition models. During training, we use a base learning rate of 0.001, which is decayed step-wise every 20 epochs, for a total of 50 epochs. The batch size is set to 8. The rest of the hyperparameters are kept the same as the default configuration provided by MMAction2. During inference, our CED module is removed. Other implementation details are provided in the supplementary.
Comparison with State-of-the-art
The proposed DEAR method is compared with the baselines shown in the second column of Table 1. The open set performances are also summarized in Fig. 1. Among these baselines, SoftMax, OpenMax, and MC Dropout share the same trained model since they differ only in the testing phase. For MC Dropout and BNN SVI, which incorporate stochastic sampling at test time, we use 10 forward passes through the model and adopt the BALD [20] method to quantify the model uncertainty, as suggested by [27]. Following [57], the threshold of the scoring function is determined by ensuring that 95% of the training data is recognized as known.
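The thresholding rule above amounts to taking a quantile of the training-set scores; the following is a minimal sketch, under the assumption that lower uncertainty means "known".

```python
import numpy as np

def open_set_threshold(train_uncertainty, keep_ratio=0.95):
    """Pick the uncertainty threshold so that `keep_ratio` of the (known) training samples
    fall below it; at test time, samples above the threshold are rejected as unknown."""
    return float(np.quantile(np.asarray(train_uncertainty), keep_ratio))

threshold = open_set_threshold(np.random.rand(10000))      # placeholder uncertainties
is_unknown = lambda u: u > threshold
```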
Open Set Action Recognition. In Table 1, we report both the closed set and open set performance. With different action recognition models, our method consistently and significantly outperforms the baselines on the Open maF1 score for (K+1)-class classification and the Open Set AUC score for rejecting the unknown, while sacrificing less than a 1% decrease in Closed Set Accuracy. When equipped with the SlowFast model, our method improves over the MC Dropout method by almost 8% in open set AUC and 15% in Open maF1 score. OpenMax and RPL are recent state-of-the-art OSR methods; however, we find that their performances are far behind our DEAR method on the OSAR task. Note that the closed set accuracy of OpenMax is dramatically lower than that of the other baselines; this is because OpenMax directly modifies the activation layer before softmax and appends the unknown class as an output, which could destroy the accurate predictions of known samples. Besides, we also note that with the TSM model, the Open maF1 score of the DEAR method is slightly inferior to OpenMax on the MiT-v2 dataset. This indicates that for large-scale unknown testing data such as MiT-v2, the 2D convolution-based TSM is not a good choice for the DEAR method compared to 3D convolution-based architectures such as I3D, SlowFast, and TPN.
Based on the I3D model, as depicted in Fig. 6, we plot the average Open maF1 scores against varying openness by incrementally introducing the HMDB-51 and MiT-v2 testing sets as unknown. The figure clearly shows that the proposed DEAR method achieves the best performance. Note that for the large-scale MiT-v2 dataset, as the openness increases, the performances of different methods converge close to each other. This is because the macro-F1 is sensitive to class imbalance and is gradually dominated by the increasing number of unknown classes, drawn from 305 categories in total in MiT-v2. Nevertheless, our DEAR method still remains better than all other baselines.
Out-of-distribution Detection. This task aims to distinguish between in-distribution samples (known) and out-of-distribution (OOD) samples (unknown). Similar to the baselines MC Dropout and BNN SVI [27], which use uncertainty as the scoring function to identify the unknown, the OOD detection performance can be evaluated by the Open Set AUC in Table 1 and the histogram statistics in Fig. 7. The AUC numbers and figures clearly show that our DEAR method with EDL uncertainty can better detect the OOD samples. Compared with the vanilla DEAR, which uses only L_EDL for model training, the estimated uncertainties of OOD samples skew closer to 1.0. More results can be found in our supplementary materials.
Ablation Study
Contribution of Each Component. Table 2 shows the OSAR performance of each DEAR variant. The experiments are conducted with the TPN model and evaluated using the HMDB-51 testing set as unknown. The results demonstrate that all the proposed components contribute to the OSAR performance gain. In particular, the h_2D(x) branch of our CED module contributes the most. Besides, the joint training of the CED module performs slightly better than the alternating training. Therefore, joint training is adopted by default throughout the other experiments.
Model Calibration. Though the proposed EUC module can improve the performance on the OSAR task (as shown in Table 2), we further examine whether the performance gain of EUC results from better calibrating the classification model. To this end, we adopt the widely used Expected Calibration Error (ECE) [19] to evaluate the model calibration performance of our full method, DEAR (full), and its variant without the EUC loss L_EUC. Quantitative results are reported in Table 3. They show that L_EUC reduces the ECE values in both the open set and closed set recognition settings. In particular, the calibration improvement is more significant in the open set setting than in the closed set setting. This validates our claim that the proposed L_EUC calibrates an OSAR model. Representation Debiasing. To further validate whether the performance gain of our CED module is rooted in representation debiasing, we use Kinetics [7] as a biased dataset and Mimetics [59] as an unbiased dataset. Similar to [2], we select 10 human action categories from Kinetics for training and biased testing, and select the same categories from Mimetics for unbiased testing. Without the pre-trained model from the Kinetics dataset, we apply our DEAR method with and without CED to the TSM model. The top-1 and top-5 accuracy results are reported in Table 4. They show that models trained on the biased dataset (Kinetics) are vulnerable on the unbiased dataset (Mimetics). However, when equipped with the proposed CED module, the performance on the unbiased dataset is significantly improved while the performance on the biased dataset changes only marginally.
What Types of Unknown are Mis-classified? As shown in Fig. 8, the confusion matrix is visualized by considering both the known classes from UCF-101 and the unknown classes from the HMDB-51 dataset. It shows that, in spite of the high closed set accuracy (the diagonal line), actions from unknown classes can easily be classified as known categories. For example, shoot ball is the top-1 mis-classified unknown class in HMDB-51, and it is most frequently mis-classified as the known class Archery in UCF-101. It is plausible that this mis-classification is caused by their similar background scenes, i.e., a large area of grassland, which is exactly the static bias addressed in this paper.
Conclusion
In this paper, we proposed a Deep Evidential Action Recognition (DEAR) method for the open set action recognition (OSAR) problem. OSAR is more challenging than image OSR problem due to the uncertain nature of temporal action dynamics and the static bias of background scenes. To this end, we conduct Evidential Deep Learning (EDL) to learn a discriminative action classifier with quantified predictive uncertainty, where the uncertainty is used to distinguish between the known and unknown samples. As novel extensions of EDL, an Evidential Uncertainty Calibration (EUC) method and a contrastive evidential debiasing (CED) module are proposed to address the unique challenges in OSAR. Extensive experimental results demonstrate that our DEAR method works for most existing action recognition models in open set setting.
Supplementary Material
In this document, additional materials are provided to supplement our main paper. In section A, the preliminary knowledge about the evidential deep learning and model calibration are described in detail, which are helpful to understand the methodology of our main paper. In section B, additional implementation details are provided, which are useful to reproduce our proposed method. Sections C and D provide additional experimental results to complement the ones presented in our main paper.
A. Detailed Methodology
A.1. Preliminaries of Evidential Deep Learning
Existing video action recognition models typically use softmax on top of deep neural networks (DNN) for classification. However, the softmax function is heavily limited in the following aspects. First, the predicted categorical probabilities have been squashed by the denominator of softmax. This is known to result in an over-confident prediction for the unknown data, which is even more detrimental to open set recognition problem than the closed set recognition. Second, the softmax output is essentially a point estimate of the multinomial distribution over the categorical probabilities so that softmax cannot capture the uncertainty of categorical probabilities, i.e., second-order uncertainty.
To overcome these limitations, recent evidential deep learning (EDL) [50] is developed from the evidence framework of Dempster-Shafer Theory (DST) [51] and the subjective logic (SL) [22]. For a K-class classification problem, the EDL treats the input x as a proposition and regards the classification task as to give a multinomial subjective opinion in a K-dimensional domain {1, . . . , K}. The subjective opinion is expressed as a triplet ω = (b, u, a), where b = {b 1 , . . . , b K } is the belief mass, u represents the uncertainty, and a = {a 1 , . . . , a K } is the base rate distribution. For any k ∈ [1, 2, . . . , K], the probability mass of a multinomial opinion is defined as
$$p_k = b_k + a_k u, \qquad k = 1, \dots, K \quad (7)$$
To enable the probability meaning of p_k, i.e., \sum_k p_k = 1, the base rate a_k is typically set to 1/K and the subjective opinion is constrained by
$$u + \sum_{k=1}^{K} b_k = 1. \quad (8)$$
Besides, for a K-class setting, the probability mass p = [p_1, p_2, . . . , p_K] is assumed to follow a Dirichlet distribution parameterised by a K-dimensional Dirichlet strength vector α = {α_1, . . . , α_K}:
$$\mathrm{Dir}(p\,|\,\alpha) = \begin{cases} \dfrac{1}{B(\alpha)} \prod_{k=1}^{K} p_k^{\alpha_k - 1}, & \text{for } p \in \mathcal{S}_K, \\[4pt] 0, & \text{otherwise}, \end{cases} \quad (9)$$
where B(α) is a K-dimensional Beta function and S_K is a K-dimensional unit simplex. The total strength of the Dirichlet is defined as S = \sum_{k=1}^{K} \alpha_k. Note that for the special case when K = 2, the Dirichlet distribution reduces to a Beta distribution and a binomial subjective opinion is formulated in this case.
According to the evidence theory, the term evidence is introduced to describe the amount of supporting observations for classifying the data x into a class. Let e = {e 1 , . . . , e K } be the evidence for K classes. Each entry e k ≥ 0 and the Dirichlet strength α are linked according to the evidence theory by the following identity:
$$\alpha = e + aW \quad (10)$$
where W is the weight of uncertain evidence. With the Dirichlet assumption, the expectation of the multinomial probability p is given by
$$\mathbb{E}(p_k) = \frac{\alpha_k}{\sum_{k=1}^{K}\alpha_k} = \frac{e_k + a_k W}{W + \sum_{k=1}^{K} e_k}. \quad (11)$$
Without loss of generality, the weight W is set to K, and considering the base-rate assumption a_k = 1/K from the subjective opinion constraint in Eq. (8), we have the Dirichlet strength α_k = e_k + 1 according to Eq. (10). In this way, the Dirichlet evidence can be mapped to the subjective opinion by setting the following equalities:
$$b_k = \frac{e_k}{S} \quad \text{and} \quad u = \frac{K}{S}. \quad (12)$$
Therefore, if the evidence e_k for the k-th class is predicted, the corresponding expected class probability in Eq. (7) (or Eq. (11)) can be rewritten as p_k = α_k / S. From Eq. (12), it is clear that the predictive uncertainty u can be determined once α_k is obtained. Inspired by this idea, EDL leverages deep neural networks (DNNs) to directly predict the evidence e from the given data x for a K-class classification problem. In particular, the output of the DNN is activated by a non-negative evidence function. Considering the Dirichlet prior, the DNN is trained by minimizing the negative log-likelihood:
$$\mathcal{L}^{(i)}_{EDL}(y, e; \theta) = -\log \int \prod_{k=1}^{K} p_{ik}^{\,y_{ik}} \,\frac{1}{B(\alpha_i)} \prod_{k=1}^{K} p_{ik}^{\,\alpha_{ik}-1} \, dp_i = \sum_{k=1}^{K} y_{ik}\big(\log S_i - \log(e_{ik}+1)\big) \quad (13)$$
where y_i = {y_{i1}, . . . , y_{iK}} is a one-hot K-dimensional label for sample i and e_i can be expressed as e_i = g(f(x_i; θ)). Here, f is the DNN parameterized by θ and g is the evidence function, such as exp, softplus, or ReLU. Note that in [50] there are two other forms of the EDL loss function; in our main paper, we found that Eq. (13) achieves better empirical training performance.
A.2. EDL for Open Set Action Recognition
To implement the EDL method on video action recognition tasks, we removed the Kullback-Leibler (KL) divergence regularizer term defined in [50], because the digamma function involved in the KL divergence is not numerically stable for large-scale video data. Instead, to compensate for the over-fitting risk, we propose the Evidential Uncertainty Calibration (EUC) as a new regularization. Together with the Contrastive Evidence Debiasing module, the complete training objective of our DEAR method can be expressed as
$$\mathcal{L} = \sum_i \mathcal{L}^{(i)}_{EDL} + w_1 \mathcal{L}_{EUC} + w_2 \mathcal{L}_{CED} \quad (14)$$
where L_EUC is defined in Eq. (3) of our main paper, and L_CED is the sum of (or, for alternating training, one of) L(θ_f, φ_f) and L(θ_h, φ_h) defined in Eq. (4) and Eq. (5), respectively, in our main paper. The hyperparameters w_1 and w_2 are set to 1.0 and 0.1, respectively. During the training process, the DEAR model aims to accurately construct the Dirichlet parameters α by collecting evidence from the human action video training set. In the inference phase, the probability of each action class is predicted as p̂_k = α_k / S while the predictive uncertainty is simultaneously computed as u = K/S. If an input action video is assigned a high uncertainty, which indicates a vacuity of evidence supporting the closed-set classification, the action is likely to be unknown from the open testing set.
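Assembling the overall objective of Eq. (14) is then a weighted sum of the individual terms; the snippet below is a trivial sketch built from the loss sketches given earlier (which are our assumptions, not the released code), with w1 = 1.0 and w2 = 0.1 as stated above.

```python
def dear_total_loss(edl, euc, ced_f, ced_h, w1=1.0, w2=0.1):
    """Eq. (14): L = L_EDL + w1 * L_EUC + w2 * L_CED, with L_CED = Eq. (4) + Eq. (5) for joint training."""
    return edl + w1 * euc + w2 * (ced_f + ced_h)
```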
Compared with existing DNN-based uncertainty estimation methods such as Bayesian neural networks (BNNs) or deep Gaussian processes (DGPs), the advantage of EDL is that the predictive uncertainty is deterministically learned without inexact posterior approximation or computationally expensive sampling. These merits make the EDL method efficient for training recognition models on large-scale vision data such as human action videos.
A.3. Hilbert-Schmidt Independence Criterion
Hilbert-Schmidt Independence Criterion (HSIC) is a commonly-used dependency measurement of two highdimensional variables. In practice, we used the unbiased HSIC estimator in [54] with m samples:
$$\mathrm{HSIC}_{k,l}(U,V) = \frac{1}{m(m-3)}\left[\operatorname{tr}(\tilde U \tilde V^{\top}) + \frac{\mathbf{1}^{\top}\tilde U \mathbf{1}\,\mathbf{1}^{\top}\tilde V \mathbf{1}}{(m-1)(m-2)} - \frac{2}{m-2}\,\mathbf{1}^{\top}\tilde U \tilde V^{\top}\mathbf{1}\right], \quad (15)$$
whereŨ is the kernelized matrix of U with RBF kernel k byŨ ij = (1 − δ ij )k(u i , u j ), {u i } ∼ U and the (1 − δ ij ) sets the diagonal ofŨ to zeros.Ṽ is defined similarly with kernel l, and 1 is an all-one vector. The HSIC value is equal to zero if and only if the two variables are independent.
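A direct transcription of this estimator with an RBF kernel and a median-heuristic bandwidth is sketched below (our illustration, not the authors' code); it requires m ≥ 4 samples.

```python
import torch

def rbf_kernel_zero_diag(x, sigma=None):
    """Pairwise RBF kernel matrix with zeroed diagonal, i.e. (1 - delta_ij) k(x_i, x_j)."""
    d2 = torch.cdist(x, x).pow(2)
    if sigma is None:                                      # median heuristic for the bandwidth
        sigma = torch.sqrt(0.5 * torch.median(d2[d2 > 0]))
    K = torch.exp(-d2 / (2 * sigma ** 2))
    return K - torch.diag(torch.diag(K))

def hsic_unbiased(u, v):
    """Unbiased HSIC estimator of Eq. (15) for feature batches u, v of shape (m, d)."""
    m = u.shape[0]
    U, V = rbf_kernel_zero_diag(u), rbf_kernel_zero_diag(v)
    one = torch.ones(m, 1, dtype=u.dtype, device=u.device)
    term1 = torch.trace(U @ V.t())
    term2 = (one.t() @ U @ one) * (one.t() @ V @ one) / ((m - 1) * (m - 2))
    term3 = (2.0 / (m - 2)) * (one.t() @ U @ V.t() @ one)
    return ((term1 + term2 - term3) / (m * (m - 3))).squeeze()
```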
A.4. Evaluation of Model Calibration
In our main paper, we used the expected calibration error (ECE) to quantitatively evaluate the model calibration performance of our proposed EUC method. According to [42,19], the basic idea of model calibration is that, if the confidence estimate p̂ (the probability of correctness) is well calibrated, we hope that p̂ represents the true probability of the case when the predicted label ŷ is correct. Formally, this can be expressed as
$$\mathbb{P}(\hat{y} = y \mid \hat{p} = p) = p. \quad (16)$$
Since perfect calibration is infeasible due to the finite sample space, a practical way is to group all predicted confidences p̂ into M bins in the range [0, 1] such that the width of each bin is 1/M. Therefore, for the m-th bin, the accuracy can be estimated by
$$\mathrm{acc}(B_m) = \frac{1}{|B_m|}\sum_{i\in B_m}\mathbb{I}(\hat{y}_i = y_i) \quad (17)$$
where B_m is the set of indices of predictions p̂ that fall into the m-th bin, and ŷ_i and y_i are the predicted and ground truth labels. Besides, the average confidence for the m-th bin can be expressed as
$$\mathrm{conf}(B_m) = \frac{1}{|B_m|}\sum_{i\in B_m}\hat{p}_i. \quad (18)$$
To evaluate the mis-calibration error, the ECE is defined as the expectation of the gap between the accuracy and confidence over the M bins for all N samples:
$$\mathrm{ECE} = \sum_{m=1}^{M}\frac{|B_m|}{N}\,\big|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)\big|. \quad (19)$$
A perfectly calibrated model has ECE = 0, and a higher ECE value indicates that the model is less calibrated.
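The ECE defined above can be reproduced with a few lines; the sketch below is our illustration and assumes NumPy with M = 15 equally spaced bins.

```python
import numpy as np

def expected_calibration_error(confidence, correct, n_bins=15):
    """Eq. (19): |B_m|/N weighted gap between per-bin accuracy (Eq. 17) and confidence (Eq. 18).
    confidence: (N,) maximum predicted probabilities; correct: (N,) 0/1 correctness indicators."""
    confidence = np.asarray(confidence)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, N = 0.0, len(confidence)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidence > lo) & (confidence <= hi)
        if mask.any():
            ece += (mask.sum() / N) * abs(correct[mask].mean() - confidence[mask].mean())
    return ece
```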
B. Implementation Details
Network Architecture. As presented in our main paper, the proposed DEAR method as well as all other baselines are implemented on top of the four recent video action recognition models, i.e., I3D, TSM, SlowFast, and TPN. For simplicity, these models use ResNet-50 as the backbone architecture and the network weights are initialized with the pre-trained model from the Kinetics-400 benchmark. To avoid the impact of the validation experiments on the Kinetics and Mimetics datasets, the pre-trained model is not used and we train the model from scratch using the same hyperparameters.
Specifically, for the I3D model, it is straightforward to implement our method by replacing the cross-entropy loss with the proposed EUC regularized EDL loss, and inserting the proposed CED module before the recognition head (fully-connected layers). For the TSM model, since the architecture of TSM is based on 2D convolution where the output feature embedding is with the size (B, M C, H, W ), we recover the number of video segments M as the temporal dimension such that the 5-dimensional tensor with size (B, C, M, H, W ) could be compatible with our proposed CED module for contrastive debiasing. For the SlowFast model, our CED module is inserted after the slow pathway because the feature embedding of slow pathway is more likely to be biased since it captures the static cues of video content. For the TPN model, we used the ResNet-50-like SlowOnly model as the recognition backbone and the auxiliary cross-entropy loss in the TPN head is kept unchanged.
Training and Inference. In the training phase, we choose the exp function as the evidence function because we empirically found that exp is numerically more stable when using the proposed EDL loss L_EDL. We set the hyperparameter λ_0 to 0.01 in the EUC loss L_EUC and set λ to 1.0 in the two CED losses. The weight of L_EUC is set to 1.0 and the weight of the sum of the two CED losses is empirically set to 0.1. In practice, we found that the model performance is robust to these hyperparameters. We used mini-batch SGD with Nesterov momentum to train all the 3D convolution models. For all models, the weight decay is set to 0.0001 and the momentum factor is set to 0.9 by default. Our experiments are supported by two GeForce RTX 3090 and two Tesla A100 GPUs. Since no additional parameters are introduced during inference, the inference speed of the existing action recognition models is not affected.
Dataset Information. For the UCF-101 and HMDB-51 datasets, we used the split1 for all experiments. For the MiT-v2 dataset, we only use the testing set for evaluation. To validate the proposed CED module, we refer to [2] and select 10 action categories which are included in both Kinetics and Mimetics dataset. These categories are canoeing or kayaking, climbing a rope, driving car, golf driving, opening bottle, playing piano, playing volleyball, shooting goal (soccer), surfing water, and writing. The recognition model is trained from scratch on the 10 categories of Kinetics training set, and tested on these categories of both Kinetics and Mimetics testing set.
C. Quantitative Results
Open Set Action Recognition. In addition to the I3D-based curves of Open maF1 scores against varying openness in our main paper, we also provide the curves for the other action recognition models, including TSM, SlowFast, and TPN, in Fig. 9 and Fig. 10. The figures show that when the HMDB-51 testing set is used as the unknown, the proposed DEAR method significantly outperforms the other baselines with large margins. When the MiT-v2 testing set is used as the unknown, the DEAR method achieves the best performance at relatively low openness.
Out-of-Distribution Detection. From Fig. 11 to Fig. 18, we provide the out-of-distribution detection results to compare our performance with all baselines listed in the main paper. Results on both the HMDB-51 and MiT-v2 datasets with I3D, TSM, SlowFast, and TPN are provided. Note that since OpenMax, SoftMax, and RPL do not predict an uncertainty score for the input sample, we instead use the confidence score (the maximum of the categorical probabilities) to show their OOD detection performance. These figures show that the uncertainties estimated by the proposed DEAR method exhibit a more long-tailed and flattened distribution than those estimated by MC Dropout and BNN SVI.
D. Qualitative Results
Open Set Confusion Matrix. In Fig. 19 and Fig. 20, we provide the confusion matrix results. These figures show that when the HMDB-51 dataset is used as the unknown, the ratio of mis-classifications in which samples from known classes are classified as unknown (see the bottom-left region in each sub-figure) is smaller with the TSM and SlowFast models than with the I3D and TPN models. When the MiT-v2 dataset is used as the unknown, the unknown classes are the dominant testing case, and from the bottom-right region we see that the proposed method with the I3D and SlowFast models shows a significant advantage (brighter red color) over the method with TSM and TPN.
Representation Debiasing Examples. In Fig. 21, we provide examples of three classes, i.e., playing piano, writing, and golf driving, from both the biased dataset Kinetics and the unbiased (out-of-context) dataset Mimetics. We compare the recognition results of the variants of our proposed DEAR method with and without CED. These examples show that the CED module helps the DEAR method recognize human actions on both the biased and unbiased datasets. For example, without the CED module, the model falsely recognizes golf driving as shooting soccer goal. One may conjecture that this is because the video samples of the two classes are similar in their static background, i.e., a large area of green grassland.
Figure 1: Open Set Action Recognition Performance. HMDB-51 [31] and MiT-v2 [39] are separately used as small- and large-scale unknown data for models trained on the closed set UCF-101 [55]. Our DEAR method significantly outperforms existing approaches on multiple action recognition models.
Figure 2: (a) Kinetics [7]; (b) Mimetics [59].
Figure 3: The proposed DEAR method. We use 3-class (K = 3) action recognition (AR) for illustration. On top of the AR backbone, the Evidential Neural Network (ENN) head predicts the evidence e to build the Dirichlet distribution of class probability p. The evidential uncertainty (u) from the Dirichlet is used for rejecting the unknown in open set testing.
Figure 5: Contrastive Evidence Debiasing (CED) Module. The module consists of three branches with similar structure. In contrast to the middle branch, the top and bottom ones aim to learn a biased evidence by temporally shuffled feature input and 2D convolution (Conv2D), respectively. The generated feature f is contrastively pushed to be independent of the biased feature h.
Figure 6: Open macro-F1 scores against varying openness. The maximum openness is determined by the number of unknown classes, i.e., in ω_O^(i), i = 51 for HMDB-51 and i = 305 for MiT-v2.
Figure 7: Out-of-distribution Detection by Uncertainty. DEAR (vanilla) is the variant of DEAR (full) in which only L_EDL is used for model training. We use MiT-v2 as unknown and I3D as the recognition model. Uncertainty values are normalized to [0,1] within each distribution.
Figure 8: Confusion Matrix for Known and Unknown. The x-axis shows the ground truth classes of both UCF-101 (known) and HMDB-51 (unknown), and the y-axis represents the predicted classes defined by UCF-101. This figure highlights the top-5 unknown classes (blue text) that are mis-classified as the known (red text).
Figure 9: Open macro-F1 scores against varying openness. The HMDB-51 testing set is used as the unknown.
Figure 10: Open macro-F1 scores against varying openness. The MiT-v2 testing set is used as the unknown.
Figure 11: I3D-based out-of-distribution detection with HMDB-51 as unknown. Values are normalized to [0,1] within each distribution.
Figure 12: I3D-based out-of-distribution detection with MiT-v2 as unknown. Values are normalized to [0,1] within each distribution.
Figure 13: TSM-based out-of-distribution detection with HMDB-51 as unknown. Values are normalized to [0,1] within each distribution.
Figure 14: TSM-based out-of-distribution detection with MiT-v2 as unknown. Values are normalized to [0,1] within each distribution.
Figure 15: SlowFast-based out-of-distribution detection with HMDB-51 as unknown. Values are normalized to [0,1] within each distribution.
Figure 16: SlowFast-based out-of-distribution detection with MiT-v2 as unknown. Values are normalized to [0,1] within each distribution.
Figure 17: TPN-based out-of-distribution detection with HMDB-51 as unknown. Values are normalized to [0,1] within each distribution.
Figure 18: TPN-based out-of-distribution detection with MiT-v2 as unknown. Values are normalized to [0,1] within each distribution.
Figure 19: Confusion matrices of DEAR using HMDB-51 as unknown. The x-axis and y-axis represent the ground truth and predicted labels, respectively. The first 101 rows and columns are known classes from UCF-101 while the remaining 51 classes are unknown from HMDB-51. Values are uniformly scaled into [0,1] and a high value is represented by a lighter color (best viewed in color).
Figure 20: Confusion matrices of DEAR using MiT-v2 as unknown. The x-axis and y-axis represent the ground truth and predicted labels, respectively. The first 101 rows and columns are known classes from UCF-101 while the remaining 305 classes are unknown from MiT-v2. Values are uniformly scaled into [0,1] and a high value is represented by a lighter color (best viewed in color).
Figure 21: Examples of Kinetics and Mimetics. The check mark indicates that the predicted label is correct while the cross mark means that the predicted label is incorrect.
Table 1: Comparison with state-of-the-art methods. Models are trained on the closed set UCF-101 [55] and tested on two different open sets where the samples of the unknown class are from HMDB-51 [31] and MiT-v2 [39], respectively. For Open maF1 scores, both the mean and standard deviation of 10 random trials of unknown class selection are reported. Closed set accuracy is for reference only.

Models | OSAR Methods | UCF-101 + HMDB-51: Open maF1 (%) | UCF-101 + HMDB-51: Open Set AUC (%) | UCF-101 + MiT-v2: Open maF1 (%) | UCF-101 + MiT-v2: Open Set AUC (%) | Closed Set Accuracy (%)
I3D [8] | OpenMax [5] | 67.85 ± 0.12 | 74.34 | 66.22 ± 0.16 | 77.76 | 56.60
I3D [8] | MC Dropout | 71.13 ± 0.15 | 75.07 | 68.11 ± 0.20 | 79.14 | 94.11
I3D [8] | BNN SVI [27] | 71.57 ± 0.17 | 74.66 | 68.65 ± 0.21 | 79.50 | 93.89
I3D [8] | SoftMax | 73.19 ± 0.17 | 75.68 | 68.84 ± 0.23 | 79.94 | 94.11
I3D [8] | RPL [10] | 71.48 ± 0.15 | 75.20 | 68.11 ± 0.20 | 79.16 | 94.26
I3D [8] | DEAR (ours) | 77.24 ± 0.18 | 77.08 | 69.98 ± 0.23 | 81.54 | 93.89
TSM [35] | OpenMax [5] | 74.17 ± 0.17 | 77.07 | 71.81 ± 0.20 | 83.05 | 65.48
TSM [35] | MC Dropout | 71.52 ± 0.18 | 73.85 | 65.32 ± 0.25 | 78.35 | 95.06
TSM [35] | BNN SVI [27] | 69.11 ± 0.16 | 73.42 | 64.28 ± 0.23 | 77.39 | 94.71
TSM [35] | SoftMax | 78.27 ± 0.20 | 77.99 | 71.68 ± 0.27 | 82.38 | 95.03
TSM [35] | RPL [10] | 69.34 ± 0.17 | 73.62 | 63.92 ± 0.25 | 77.28 | 95.59
TSM [35] | DEAR (ours) | 84.69 ± 0.20 | 78.65 | 70.15 ± 0.30 | 83.92 | 94.48
SlowFast [14] | OpenMax [5] | 73.57 ± 0.10 | 78.76 | 72.48 ± 0.12 | 80.62 | 62.09
SlowFast [14] | MC Dropout | 70.55 ± 0.14 | 75.41 | 67.53 ± 0.17 | 78.49 | 96.75
SlowFast [14] | BNN SVI [27] | 69.19 ± 0.13 | 74.78 | 65.22 ± 0.21 | 77.39 | 96.43
SlowFast [14] | SoftMax | 78.04 ± 0.16 | 79.16 | 74.42 ± 0.22 | 82.88 | 96.70
SlowFast [14] | RPL [10] | 68.32 ± 0.13 | 74.23 | 66.33 ± 0.17 | 77.42 | 96.93
SlowFast [14] | DEAR (ours) | 85.48 ± 0.19 | 82.94 | 77.28 ± 0.26 | 86.99 | 96.48
TPN [62] | OpenMax [5] | 65.27 ± 0.09 | 74.12 | 64.80 ± 0.10 | 76.26 | 53.24
TPN [62] | MC Dropout | 68.45 ± 0.12 | 74.13 | 65.77 ± 0.17 | 77.76 | 95.43
TPN [62] | BNN SVI [27] | 63.81 ± 0.11 | 72.68 | 61.40 ± 0.15 | 75.32 | 94.61
TPN [62] | SoftMax | 76.23 ± 0.14 | 77.97 | 70.82 ± 0.21 | 81.35 | 95.51
TPN [62] | RPL [10] | 70.31 ± 0.13 | 75.32 | 66.21 ± 0.21 | 78.21 | 95.48
TPN [62] | DEAR (ours) | 81.79 ± 0.15 | 79.23 | 71.18 ± 0.23 | 81.80 | 96.30
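Table 1 reports the mean and standard deviation of Open maF1 over 10 random selections of unknown classes. A hedged sketch of that evaluation loop is shown below; the evaluation callback and argument names are placeholders, not the released evaluation code.

```python
import numpy as np

def open_maf1_trials(eval_fn, unknown_class_ids, num_select, num_trials=10, seed=0):
    # eval_fn(selected_unknown_ids) -> Open maF1 for one draw of unknown classes.
    # Returns mean and standard deviation over `num_trials` random selections,
    # mirroring the "10 random trials" protocol described in the Table 1 caption.
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(num_trials):
        picked = rng.choice(unknown_class_ids, size=num_select, replace=False)
        scores.append(eval_fn(picked))
    return float(np.mean(scores)), float(np.std(scores))
```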
Table 2: Ablation studies. Based on the TPN [62] model, HMDB-51 [31] is used as the unknown. Best results are shown in bold.

Ablation setting (L_EUC / CED / Joint Train) | Open maF1 (%) | OS-AUC (%)
Setting 1 | 74.95 ± 0.18 | 77.12
Setting 2 | 75.88 ± 0.16 | 77.49
Setting 3 | 81.18 ± 0.15 | 79.02
Setting 4 | 81.79 ± 0.15 | 79.23
Table 3: Expected Calibration Error (ECE) results. A smaller ECE indicates that the model is better calibrated. The numbers in brackets indicate the number of classes involved in evaluation.

Model variants | Open Set (K+1) | Open Set (2) | Closed Set (K)
DEAR (w/o L_EUC) | 0.284 | 0.256 | 0.030
DEAR (full) | 0.268 | 0.239 | 0.029
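The ECE values in Table 3 follow the standard binned estimator (Naeini et al.; Guo et al.): samples are grouped into equal-width confidence bins and the absolute gap between accuracy and confidence is averaged with bin-size weights. The sketch below uses a bin count of 15, which is a common choice and an assumption here rather than a value stated in the text.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, num_bins=15):
    # Standard ECE: partition samples into equal-width confidence bins and
    # average |accuracy - confidence| weighted by the fraction of samples per bin.
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    bins = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece
```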
Table 4: Accuracy (%) on the biased and unbiased datasets.

Methods | Biased (Kinetics) top-1 | Biased (Kinetics) top-5 | Unbiased (Mimetics) top-1 | Unbiased (Mimetics) top-5
DEAR (w/o CED) | 91.18 | 99.30 | 26.56 | 69.53
DEAR (full) | 91.18 | 99.54 | 34.38 | 75.00
Experiments

Dataset. We evaluate the proposed DEAR method on three commonly used real-world video action datasets: UCF-101 [55], HMDB-51 [31], and MiT-v2 [39]. All models are trained on the UCF-101 training split. MiT-v2 has 305 classes and its testing split contains 30,500 video samples, which is about 20 times larger than the HMDB-51 testing set. In testing, we use the UCF-101 testing set as known samples, and the testing splits of HMDB-51 and MiT-v2 as two sources of unknown samples. Note that there could be a few overlapping classes between UCF-101 and the other two datasets, but to standardize the evaluation and ensure reproducibility, we do not manually clean the data.

Evaluation Protocol. To evaluate the classification performance in both closed and open set settings, we separately report the Closed Set Accuracy for K-class classification and the Open Set area under the ROC curve (AUC) for distinguishing known from unknown samples.
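A minimal sketch of these two metrics is given below, assuming the unknown samples are scored by predictive uncertainty with "unknown" treated as the positive class; the exact score used per baseline may differ, and the function names are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def closed_set_accuracy(pred_labels, true_labels):
    # Top-1 accuracy over the K known classes only.
    return float(np.mean(np.asarray(pred_labels) == np.asarray(true_labels)))

def open_set_auc(uncertainty_known, uncertainty_unknown):
    # Area under the ROC curve for separating known from unknown test samples,
    # using the uncertainty value as the detection score.
    scores = np.concatenate([uncertainty_known, uncertainty_unknown])
    labels = np.concatenate([np.zeros(len(uncertainty_known)),
                             np.ones(len(uncertainty_unknown))])
    return roc_auc_score(labels, scores)
```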
Acknowledgement. This research is supported by an ONR Award N00014-18-1-2875 and the Army Research Office under grant number W911NF-21-1-0236. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Office of Naval Research, the Army Research Office or the U.S. Government. We especially thank Google Cloud Platform providing two NVIDIA A100 SXM4 GPUs.
Deep evidential regression. Alexander Amini, Wilko Schwarting, Ava Soleimany, Daniela Rus, NeurIPS, 2020. 1. 23Alexander Amini, Wilko Schwarting, Ava Soleimany, and Daniela Rus. Deep evidential regression. In NeurIPS, 2020. 1, 2, 3
Learning de-biased representations with biased representations. Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, Seong Joon Oh, ICML, 2020. 4, 5. 813Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, and Seong Joon Oh. Learning de-biased representations with biased representations. In ICML, 2020. 4, 5, 8, 13
Uncertainty-based traffic accident anticipation with spatio-temporal relational learning. Wentao Bao, Qi Yu, Yu Kong, ACM MM. 2020Wentao Bao, Qi Yu, and Yu Kong. Uncertainty-based traffic accident anticipation with spatio-temporal relational learn- ing. In ACM MM, 2020. 3
Towards open world recognition. Abhijit Bendale, Terrance Boult, CVPR. Abhijit Bendale and Terrance Boult. Towards open world recognition. In CVPR, 2015. 2
Towards open set deep networks. Abhijit Bendale, Terrance E Boult, CVPR. 1618Abhijit Bendale and Terrance E. Boult. Towards open set deep networks. In CVPR, 2016. 2, 6, 15, 16, 17, 18
IEEE transactions on pattern analysis and machine intelligence. Ahsan Pau Panareda Busto, Juergen Iqbal, Gall, 42Open set domain adaptation for image and action recognitionPau Panareda Busto, Ahsan Iqbal, and Juergen Gall. Open set domain adaptation for image and action recognition. IEEE transactions on pattern analysis and machine intelli- gence, 42(2):413-429, 2018. 2
Quo vadis, action recognition? a new model and the kinetics dataset. J Carreira, Andrew Zisserman, CVPR. J. Carreira and Andrew Zisserman. Quo vadis, action recog- nition? a new model and the kinetics dataset. In CVPR, 2017. 2, 6, 8
Quo vadis, action recognition? A new model and the kinetics dataset. J Carreira, Andrew Zisserman, CVPR. 36J. Carreira and Andrew Zisserman. Quo vadis, action recog- nition? A new model and the kinetics dataset. In CVPR, 2017. 3, 6
Posterior network: Uncertainty estimation without ood samples via density-based pseudo-counts. Bertrand Charpentier, Daniel Zügner, Stephan Günnemann, NeurIPS. 2020Bertrand Charpentier, Daniel Zügner, and Stephan Günnemann. Posterior network: Uncertainty estima- tion without ood samples via density-based pseudo-counts. In NeurIPS, 2020. 2
Learning open set network with discriminative reciprocal points. Guangyao Chen, Limeng Qiao, Yemin Shi, Peixi Peng, Jia Li, Tiejun Huang, Shiliang Pu, Yonghong Tian, ECCV. 1618Guangyao Chen, Limeng Qiao, Yemin Shi, Peixi Peng, Jia Li, Tiejun Huang, Shiliang Pu, and Yonghong Tian. Learning open set network with discriminative reciprocal points. In ECCV, 2020. 6, 15, 16, 17, 18
Why can't i dance in the mall? learning to mitigate scene bias in action recognition. Jinwoo Choi, Chen Gao, C E Joseph, Jia-Bin Messou, Huang, NeurIPS. Jinwoo Choi, Chen Gao, Joseph CE Messou, and Jia-Bin Huang. Why can't i dance in the mall? learning to miti- gate scene bias in action recognition. In NeurIPS, 2019. 1, 4, 5
MMAction2 Contributors. OpenMMLab's next generation video understanding toolbox and benchmark. https://github.com/open-mmlab/mmaction2, 2020. 6
Open-GAN: Open set generative adversarial networks. Luke Ditria, J Benjamin, Tom Meyer, Drummond, ACCV. Luke Ditria, Benjamin J Meyer, and Tom Drummond. Open- GAN: Open set generative adversarial networks. In ACCV, 2020. 2
Slowfast networks for video recognition. Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, Kaiming He, ICCV. 36Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In ICCV, 2019. 3, 6
Uncertainty in Deep Learning. Yarin Gal, Department of Engineering, University of CambridgePhD thesisYarin Gal. Uncertainty in Deep Learning. PhD thesis, De- partment of Engineering, University of Cambridge, 2016. 3
Generative OpenMax for multi-class open set classification. Zongyuan Ge, Sergey Demyanov, Zetao Chen, Rahil Garnavi, BMVC. Zongyuan Ge, Sergey Demyanov, Zetao Chen, and Rahil Garnavi. Generative OpenMax for multi-class open set clas- sification. In BMVC, 2017. 2
Recent advances in open set recognition: A survey. IEEE transactions on pattern analysis and machine intelligence. Chuanxing Geng, Sheng-Jun, Songcan Huang, Chen, 1Chuanxing Geng, Sheng-jun Huang, and Songcan Chen. Re- cent advances in open set recognition: A survey. IEEE trans- actions on pattern analysis and machine intelligence, 2020. 1, 2
Measuring statistical dependence with hilbertschmidt norms. Arthur Gretton, Olivier Bousquet, Alex Smola, Bernhard Schölkopf, ICALT. Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Schölkopf. Measuring statistical dependence with hilbert- schmidt norms. In ICALT, 2005. 5
On calibration of modern neural networks. Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q Weinberger, ICML. 812Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In ICML, 2017. 3, 4, 8, 12
Bayesian active learning for classification and preference learning. Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, Máté Lengyel, arXiv:1112.5745arXiv preprintNeil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. Bayesian active learning for classification and pref- erence learning. arXiv preprint arXiv:1112.5745, 2011. 7
Multiclass open set recognition using probability of inclusion. P Lalit, Jain, J Walter, Terrance E Scheirer, Boult, ECCV. Lalit P Jain, Walter J Scheirer, and Terrance E Boult. Multi- class open set recognition using probability of inclusion. In ECCV, 2014. 2
Subjective logic. Audun Jøsang, Springer311Audun Jøsang. Subjective logic. Springer, 2016. 3, 11
What uncertainties do we need in bayesian deep learning for computer vision? In NeurIPS. Alex Kendall, Yarin Gal, Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? In NeurIPS, 2017. 3
Learning not to learn: Training deep neural networks with biased data. Byungju Kim, Hyunwoo Kim, Kyungsu Kim, Sungjin Kim, Junmo Kim, CVPR. Byungju Kim, Hyunwoo Kim, Kyungsu Kim, Sungjin Kim, and Junmo Kim. Learning not to learn: Training deep neural networks with biased data. In CVPR, 2019. 4
Human action recognition and prediction: A survey. Yu Kong, Yun Fu, arXiv:1806.11230arXiv preprintYu Kong and Yun Fu. Human action recognition and pre- diction: A survey. arXiv preprint arXiv:1806.11230, 2018. 3
Uncertainty estimation in one-stage object detection. Florian Kraus, Klaus Dietmayer, ITSC. Florian Kraus and Klaus Dietmayer. Uncertainty estimation in one-stage object detection. In ITSC, 2019. 3
BAR: Bayesian activity recognition using variational inference. Ranganath Krishnan, Mahesh Subedar, Omesh Tickoo, NeurIPS. 1618Ranganath Krishnan, Mahesh Subedar, and Omesh Tickoo. BAR: Bayesian activity recognition using variational infer- ence. In NeurIPS, 2018. 1, 2, 6, 7, 15, 16, 17, 18
Specifying weight priors in bayesian deep neural networks with empirical bayes. Ranganath Krishnan, Mahesh Subedar, Omesh Tickoo, AAAI. 2020Ranganath Krishnan, Mahesh Subedar, and Omesh Tickoo. Specifying weight priors in bayesian deep neural networks with empirical bayes. In AAAI, 2020. 2
Improving model calibration with accuracy versus uncertainty optimization. Ranganath Krishnan, Omesh Tickoo, NeurIPS. Ranganath Krishnan and Omesh Tickoo. Improving model calibration with accuracy versus uncertainty optimization. In NeurIPS, 2020. 4
Convolutional deep belief networks on cifar-10. Unpublished manuscript. Alex Krizhevsky, Geoff Hinton, 401Alex Krizhevsky and Geoff Hinton. Convolutional deep be- lief networks on cifar-10. Unpublished manuscript, 40(7):1- 9, 2010. 1
HMDB: a large video database for human motion recognition. Hildegard Kuehne, Hueihan Jhuang, Estíbaliz Garrote, Tomaso Poggio, Thomas Serre, ICCV. 6Hildegard Kuehne, Hueihan Jhuang, Estíbaliz Garrote, Tomaso Poggio, and Thomas Serre. HMDB: a large video database for human motion recognition. In ICCV, 2011. 1, 5, 6, 8
Yann LeCun, Corinna Cortes, and Christopher J.C. Burges. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist. 1
Open set face recognition using transduction. Fayin Li, Harry Wechsler, IEEE transactions on pattern analysis and machine intelligence. 27Fayin Li and Harry Wechsler. Open set face recognition us- ing transduction. IEEE transactions on pattern analysis and machine intelligence, 27(11):1686-1697, 2005. 2
RESOUND: Towards action recognition without representation bias. Yingwei Li, Yi Li, Nuno Vasconcelos, ECCV. 14Yingwei Li, Yi Li, and Nuno Vasconcelos. RESOUND: Towards action recognition without representation bias. In ECCV, 2018. 1, 4
TSM: Temporal shift module for efficient video understanding. Ji Lin, Chuang Gan, Song Han, ICCV. 36Ji Lin, Chuang Gan, and Song Han. TSM: Temporal shift module for efficient video understanding. In ICCV, 2019. 3, 6
Sphereface: Deep hypersphere embedding for face recognition. Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, Le Song, CVPR. Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. Sphereface: Deep hypersphere embedding for face recognition. In CVPR, 2017. 1
Predictive uncertainty estimation via prior networks. Andrey Malinin, Mark Gales, NeurIPS. Andrey Malinin and Mark Gales. Predictive uncertainty es- timation via prior networks. In NeurIPS, 2018. 2
Out-of-distribution detection for generalized zero-shot action recognition. Devraj Mandal, Sanath Narayan, Sai Kumar Dwivedi, Vikram Gupta, Shuaib Ahmed, Fahad Shahbaz Khan, Ling Shao, CVPR. Devraj Mandal, Sanath Narayan, Sai Kumar Dwivedi, Vikram Gupta, Shuaib Ahmed, Fahad Shahbaz Khan, and Ling Shao. Out-of-distribution detection for generalized zero-shot action recognition. In CVPR, 2019. 2
Multi-Moments in Time: Learning and interpreting models for multi-action video understanding. Mathew Monfort, Kandan Ramakrishnan, Alex Andonian, Barry A Mcnamara, Alex Lascelles, Quanfu Bowen Pan, Dan Fan, Rogério Gutfreund, Aude Schmidt Feris, Oliva, abs/1911.00232CoRR56Mathew Monfort, Kandan Ramakrishnan, Alex Andonian, Barry A. McNamara, Alex Lascelles, Bowen Pan, Quanfu Fan, Dan Gutfreund, Rogério Schmidt Feris, and Aude Oliva. Multi-Moments in Time: Learning and interpret- ing models for multi-action video understanding. CoRR, abs/1911.00232, 2019. 1, 5, 6
Evaluating bayesian deep learning methods for semantic segmentation. Jishnu Mukhoti, Yarin Gal, arXiv:1811.12709arXiv preprintJishnu Mukhoti and Yarin Gal. Evaluating bayesian deep learning methods for semantic segmentation. arXiv preprint arXiv:1811.12709, 2018. 4
Calibrating deep neural networks using focal loss. Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, H S Philip, Puneet K Torr, Dokania, NeurIPS. Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip HS Torr, and Puneet K Dokania. Calibrating deep neural networks using focal loss. In NeurIPS, 2020. 4
Obtaining well calibrated probabilities using bayesian binning. Gregory Mahdi Pakdaman Naeini, Milos Cooper, Hauskrecht, AAAI. 12Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In AAAI, 2015. 12
Open set learning with counterfactual images. Lawrence Neal, Matthew Olson, Xiaoli Fern, Weng-Keen Wong, Fuxin Li, ECCV. Lawrence Neal, Matthew Olson, Xiaoli Fern, Weng-Keen Wong, and Fuxin Li. Open set learning with counterfactual images. In ECCV, 2018. 2
C2AE: Class conditioned auto-encoder for open-set recognition. Poojan Oza, M Vishal, Patel, CVPR. Poojan Oza and Vishal M Patel. C2AE: Class conditioned auto-encoder for open-set recognition. In CVPR, 2019. 2
Longbing Cao, and Anton van den Hengel. Deep learning for anomaly detection: A review. Guansong Pang, Chunhua Shen, arXiv:2007.02500arXiv preprintGuansong Pang, Chunhua Shen, Longbing Cao, and Anton van den Hengel. Deep learning for anomaly detection: A review. arXiv preprint arXiv:2007.02500, 2020. 2
Open set driver activity recognition. Alina Roitberg, Chaoxiang Ma, Monica Haurilet, Rainer Stiefelhagen, IVS. Alina Roitberg, Chaoxiang Ma, Monica Haurilet, and Rainer Stiefelhagen. Open set driver activity recognition. In IVS, 2020. 1
Toward open set recognition. J Walter, Anderson Scheirer, De Rezende, Archana Rocha, Terrance E Sapkota, Boult, IEEE transactions on pattern analysis and machine intelligence. 356Walter J Scheirer, Anderson de Rezende Rocha, Archana Sapkota, and Terrance E Boult. Toward open set recogni- tion. IEEE transactions on pattern analysis and machine intelligence, 35(7):1757-1772, 2012. 1, 2, 6
Probability models for open set recognition. J Walter, Scheirer, P Lalit, Terrance E Jain, Boult, IEEE transactions on pattern analysis and machine intelligence. 36Walter J Scheirer, Lalit P Jain, and Terrance E Boult. Prob- ability models for open set recognition. IEEE transactions on pattern analysis and machine intelligence, 36(11):2317- 2324, 2014. 2
Uncertainty-aware deep classifiers using generative models. Murat Sensoy, Lance Kaplan, Federico Cerutti, Maryam Saleki, AAAI, 2020. 1Murat Sensoy, Lance Kaplan, Federico Cerutti, and Maryam Saleki. Uncertainty-aware deep classifiers using generative models. In AAAI, 2020. 1, 2
Evidential deep learning to quantify classification uncertainty. Murat Sensoy, Lance Kaplan, Melih Kandemir, NeurIPS. 1112Murat Sensoy, Lance Kaplan, and Melih Kandemir. Eviden- tial deep learning to quantify classification uncertainty. In NeurIPS, 2018. 1, 3, 4, 11, 12
Combination of evidence in Dempster-Shafer theory. Kari Sentz, Scott Ferson, Sandia National Laboratories Albuquerque. 401511Kari Sentz, Scott Ferson, et al. Combination of evidence in Dempster-Shafer theory, volume 4015. Sandia National Laboratories Albuquerque, 2002. 3, 11
Multifaceted uncertainty estimation for label-efficient deep learning. Weishi Shi, Xujiang Zhao, Feng Chen, Qi Yu, NeurIPS, 2020. 1Weishi Shi, Xujiang Zhao, Feng Chen, and Qi Yu. Multi- faceted uncertainty estimation for label-efficient deep learn- ing. In NeurIPS, 2020. 1, 2
ODN: Opening the deep network for open-set action recognition. Yu Shu, Yemin Shi, Yaowei Wang, Yixiong Zou, Qingsheng Yuan, Yonghong Tian, ICME. 6Yu Shu, Yemin Shi, Yaowei Wang, Yixiong Zou, Qingsheng Yuan, and Yonghong Tian. ODN: Opening the deep network for open-set action recognition. In ICME, 2018. 1, 2, 6
Feature selection via dependence maximization. Le Song, Alex Smola, Arthur Gretton, Justin Bedo, Karsten Borgwardt, Journal of Machine Learning Research. 13512Le Song, Alex Smola, Arthur Gretton, Justin Bedo, and Karsten Borgwardt. Feature selection via dependence max- imization. Journal of Machine Learning Research, 13(5), 2012. 5, 12
UCF101: A dataset of 101 human actions classes from videos in the wild. Khurram Soomro, Mubarak Amir Roshan Zamir, Shah, arXiv:1212.040216arXiv preprintKhurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 1, 5, 6
Uncertainty-aware audiovisual activity recognition using deep bayesian variational inference. Mahesh Subedar, Ranganath Krishnan, Paulo Lopez Meyer, Omesh Tickoo, Jonathan Huang, ICCV. Mahesh Subedar, Ranganath Krishnan, Paulo Lopez Meyer, Omesh Tickoo, and Jonathan Huang. Uncertainty-aware audiovisual activity recognition using deep bayesian varia- tional inference. In ICCV, 2019. 2
Conditional Gaussian distribution learning for open set recognition. Xin Sun, Zhenning Yang, Chi Zhang, Keck-Voon, Guohao Ling, Peng, CVPR, 2020. 67Xin Sun, Zhenning Yang, Chi Zhang, Keck-Voon Ling, and Guohao Peng. Conditional Gaussian distribution learning for open set recognition. In CVPR, 2020. 2, 6, 7
Practical uncertainty estimation and out-of-distribution robustness in deep learning. Dustin Tran, Jasper Snoek, Balaji Lakshminarayanan, Google Brain, 2020. NeurIPS Tutorial. Technical reportDustin Tran, Jasper Snoek, and Balaji Lakshminarayanan. Practical uncertainty estimation and out-of-distribution ro- bustness in deep learning. Technical report, Google Brain, 2020. NeurIPS Tutorial. 2
Mimetics: Towards understanding human actions out of context. Philippe Weinzaepfel, Grégory Rogez, International Journal of Computer Vision. 28Philippe Weinzaepfel and Grégory Rogez. Mimetics: To- wards understanding human actions out of context. Inter- national Journal of Computer Vision, pages 1-16, 2021. 2, 8
Recent advances in video-based human action recognition using deep learning: A review. Di Wu, Nabin Sharma, Michael Blumenstein, Di Wu, Nabin Sharma, and Michael Blumenstein. Recent ad- vances in video-based human action recognition using deep learning: A review. In IJCNN, 2017. 3
Open-world learning and application to product classification. Hu Xu, Bing Liu, Lei Shu, P Yu, WWW. Hu Xu, Bing Liu, Lei Shu, and P Yu. Open-world learning and application to product classification. In WWW, 2019. 1
Temporal pyramid network for action recognition. Ceyuan Yang, Yinghao Xu, Jianping Shi, Bo Dai, Bolei Zhou, CVPR, 2020. 3. 6Ceyuan Yang, Yinghao Xu, Jianping Shi, Bo Dai, and Bolei Zhou. Temporal pyramid network for action recognition. In CVPR, 2020. 3, 6, 8
Open-set human activity recognition based on micro-doppler signatures. Yang Yang, Chunping Hou, Yue Lang, Dai Guan, Danyang Huang, Jinchen Xu, Pattern Recognition. 851Yang Yang, Chunping Hou, Yue Lang, Dai Guan, Danyang Huang, and Jinchen Xu. Open-set human activity recogni- tion based on micro-doppler signatures. Pattern Recognition, 85:60-69, 2019. 1
Classificationreconstruction learning for open-set recognition. Ryota Yoshihashi, Wen Shao, Rei Kawakami, Shaodi You, Makoto Iida, Takeshi Naemura, CVPR. 26Ryota Yoshihashi, Wen Shao, Rei Kawakami, Shaodi You, Makoto Iida, and Takeshi Naemura. Classification- reconstruction learning for open-set recognition. In CVPR, 2019. 2, 6
A comprehensive survey of vision-based human action recognition methods. Hong-Bo Zhang, Yi-Xiang Zhang, Bineng Zhong, Qing Lei, Lijie Yang, Ji-Xiang Du, Duan-Sheng Chen, Sensors. 1951005Hong-Bo Zhang, Yi-Xiang Zhang, Bineng Zhong, Qing Lei, Lijie Yang, Ji-Xiang Du, and Duan-Sheng Chen. A com- prehensive survey of vision-based human action recognition methods. Sensors, 19(5):1005, 2019. 3
Quantifying classification uncertainty using regularized evidential neural networks. Xujiang Zhao, Yuzhe Ou, Lance Kaplan, Feng Chen, Jin-Hee Cho, AAAI. Xujiang Zhao, Yuzhe Ou, Lance Kaplan, Feng Chen, and Jin-Hee Cho. Quantifying classification uncertainty using regularized evidential neural networks. In AAAI, 2019. 1
| []
|
[
"Viscosity bound violation in viscoelastic Fermi liquids",
"Viscosity bound violation in viscoelastic Fermi liquids"
]
| [
"M P Gochan [email protected] \nDepartment of Physics\nBoston College\nChestnut Hill02467MAUnited States of America\n",
"Hua Li \nDepartment of Physics\nBoston College\nChestnut Hill02467MAUnited States of America\n",
"K S Bedell \nDepartment of Physics\nBoston College\nChestnut Hill02467MAUnited States of America\n"
]
| [
"Department of Physics\nBoston College\nChestnut Hill02467MAUnited States of America",
"Department of Physics\nBoston College\nChestnut Hill02467MAUnited States of America",
"Department of Physics\nBoston College\nChestnut Hill02467MAUnited States of America"
]
| [
"J. Phys. Commun"
]
| The anti-de Sitter/conformal field theory correspondence (AdS/CFT) has been used to determine a lower bound on the ratio of shear viscosity h ( )to entropy density (s) for strongly-coupled field theories with a gravity dual. The conjectured universal lower bound, given as s kp , is a measure of interaction strength in a quantum fluid where equality indicates a perfect quantum fluid. In this paper we study η/s in a Fermi gas in the unitary limit. We show that in addition to a local minimum for η/s at T T 2 c » which obeys the lower bound, a more interesting result exists in the violation of the η/s lower bound due to the superfluid fluctuations above T c . To conclude, we examine the viscoelastic properties of the unitary Fermi gas. Previous work brought to light the connection between violation of the η/s bound and a viscoelastic response in the context of holographic solids. We ultimately find that, in addition to holographic solids, all Fermi liquids with a viscoelastic response produced by superfluid fluctuations can violate the universal η/s lower bound. | 10.1088/2399-6528/ab292b | [
"https://iopscience.iop.org/article/10.1088/2399-6528/ab292b/pdf"
]
| 118,900,498 | 1801.08627 | 456e33c8e8c78c2f912b7a7e33d6eacaacfe60dd |
Viscosity bound violation in viscoelastic Fermi liquids
2019
M P Gochan [email protected]
Department of Physics
Boston College
Chestnut Hill02467MAUnited States of America
Hua Li
Department of Physics
Boston College
Chestnut Hill02467MAUnited States of America
K S Bedell
Department of Physics
Boston College
Chestnut Hill02467MAUnited States of America
Viscosity bound violation in viscoelastic Fermi liquids
J. Phys. Commun
365008201910.1088/2399-6528/ab292bPAPERunitary Fermi gasviscosity boundviscoelasticKSS Bound
The anti-de Sitter/conformal field theory correspondence (AdS/CFT) has been used to determine a lower bound on the ratio of shear viscosity (η) to entropy density (s) for strongly-coupled field theories with a gravity dual. The conjectured universal lower bound, given as η/s ≥ ℏ/(4πk_B), is a measure of interaction strength in a quantum fluid, where equality indicates a perfect quantum fluid. In this paper we study η/s in a Fermi gas in the unitary limit. We show that in addition to a local minimum for η/s at T ≈ 2T_c which obeys the lower bound, a more interesting result exists in the violation of the η/s lower bound due to the superfluid fluctuations above T_c. To conclude, we examine the viscoelastic properties of the unitary Fermi gas. Previous work brought to light the connection between violation of the η/s bound and a viscoelastic response in the context of holographic solids. We ultimately find that, in addition to holographic solids, all Fermi liquids with a viscoelastic response produced by superfluid fluctuations can violate the universal η/s lower bound.
Introduction
Using the anti-de Sitter/conformal field theory (AdS/CFT) correspondence, strongly interacting quantum field theories can be described in terms of weakly interacting gravitational systems. This has led to the conjecture that there exists a lower bound-the KSS bound-for η/s in a strongly coupled field theory given by [1][2][3][4][5]
$\frac{\eta}{s} \;\ge\; \frac{\hbar}{4\pi k_B}$  (1)
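Numerically, the bound in equation (1) evaluates to roughly 6 × 10⁻¹³ K s. A one-line check using standard physical constants (this is just an arithmetic illustration, not part of the paper's calculation):

```python
from scipy.constants import hbar, k as k_B, pi

# KSS lower bound on eta/s, expressed in kelvin-seconds.
kss_bound = hbar / (4 * pi * k_B)
print(f"hbar/(4 pi k_B) = {kss_bound:.3e} K s")   # ~6.1e-13 K s
```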
Quantum fluids of varying density, such as the quark gluon plasma and the unitary Fermi gas, that obey equation (1) are called nearly perfect quantum liquids, where equality denotes a perfect quantum liquid [6,7]. The AdS/CFT correspondence additionally creates a bridge between gravitational physics and condensed matter physics and allows one to be studied in terms of the other [8,9]. It has been shown that as the unitary Fermi gas undergoes a superfluid phase transition, superfluid fluctuations above the transition temperature, T_c, have significant effects on the spin transport [10,11]. This result is the motivation for our work. We sought to determine if such superfluid fluctuations could have a similar impact on the viscosity and subsequently on the KSS bound. Recent experiments on the unitary Fermi gas ⁶Li show a normal/superfluid phase transition at a transition temperature T_c ≈ 0.167 T_F [12], where T_F stands for the Fermi temperature. As for the viscosity, recent advances in experiments have allowed for its measurement [13] and subsequently led to the measurement of the ratio. Such measurements show a minimum that obeys the bound given by (1) at temperatures T ≈ 2T_c [14][15][16].
To better understand η/s within the context of strongly correlated systems, we develop a simple theoretical model to calculate the quasiparticle scattering rates of a strongly correlated quantum liquid above T_c. Such a model differs from past calculations [17][18][19][20] in that we include the effects of superfluid fluctuations as T → T_c^+.
The model separates the quasiparticle scattering amplitude for the strongly correlated quantum fluid into two components: the superfluid fluctuations term coming from the particle-particle pairing fluctuations in the singlet scattering channel above T_c, and a normal Fermi liquid scattering term calculated from the local version of the induced interaction model [21,22]. Applying our theory to the unitary Fermi gas, we calculate η/s for the unitary Fermi gas about T_c following the methods used in the transport studies of Landau Fermi-liquid theory [23]. We find a local minimum as T → T_c of η/s ≈ 0.3 ℏ/k_B,
which agrees with the experimentally measured lower bound [13]. However, an additional intriguing result is that η/s drops to zero at T_c, thus violating (1). Our work therefore seeks to explain the nature of this violation within the context of Landau Fermi liquid theory. While violations of (1) are not uncommon, for example the work done by Alberte et al [24] and Jain et al [25] both show violation, our work is unique in that our calculation is done for the unitary Fermi gas, a system frequently studied experimentally. Furthermore, we differ from other work on η/s in the unitary Fermi gas, such as that by Samanta et al, which also showed violation [26], in that the system under consideration was trapped.
While our result appears to be in contradiction with the work done by Cao et al [13], we find good qualitative agreement with the more recent analysis done by Joseph et al [16]. We believe this discrepancy arises because the measurements were done over a wide temperature range, while the violation of the bound happens in a small window around T_c. Additionally, due to the breakdown of the quasiparticle picture, numerous other methods have been employed, such as those of Enss et al [27], to determine the viscosity. While we do not disagree with these results, we feel our model is valid due to the experimental support for the quasiparticle picture near T_c (as shown in figure 4 and discussed later). To conclude, we draw on previous work by Alberte, Baggioli, and Pujolàs [24,28], in which they present the idea of the viscoelastic nature of holographic solids violating the bound. We expand on their work and provide insight into this high-energy problem from the viewpoint of condensed matter.
Superfluid fluctuations in the unitary Fermi gas
The high transition temperature, T_c ≈ 0.167 T_F, of the unitary Fermi gas allows for the experimental measurement of η/s at temperatures close to T_c, where superfluid fluctuations could play a role [12,13]. For example, previous study of spin transport found that superfluid fluctuations play a significant role in the spin diffusion [10,11]. As such, our work sets out to understand how the superfluid fluctuations may affect the viscosity and subsequently η/s. The superfluid fluctuations come from the particle-particle pairing fluctuations in the spin singlet quasiparticle scattering channel closely above T_c. Due to the pairing fluctuations, the quasiparticle scattering amplitudes for small total momentum scattering diverge at T_c. Here we consider only the s-wave (spin singlet) pairing mechanism for the Cooper pairs and incorporate the superfluid fluctuations in the scattering amplitudes by evaluating the temperature vertex function of particle-particle type in the spin singlet channel for small total momentum scattering, using standard quantum field theory methods [29]. The spin singlet temperature vertex function Γ_s^T(K) is generated from the diagram shown in figure 1, leading to the following integral equation:
ò p = - S - - - w ( )˜( ) ( ) ( )( ) ( ) ( ) ( )
where p_i = (p_i, ω_i) are the four-momenta of the scattering particles, and K = (K, ω_0) stands for the total momentum of the incident particles. Γ_s^T depends only on the total momentum K, Γ_s^T(p_1, p_2; p_3, p_4) ≡ Γ_s^T(K), when |p_i| = k_F for i = 1, …, 4 and |K| ≪ k_F.

Figure 1: Feynman diagram for the temperature vertex function of particle-particle type, Γ_s^T. The bubbles represent the irreducible (Γ̃_s^T) and fully reducible (Γ_s^T) particle-particle vertex functions, the solid lines stand for the fermion Green's functions. The propagators are the quasiparticle propagators with fully renormalized quasiparticle interactions, and K = p_1 + p_2.

Solving equation (2), we can express Γ_s^T in the small K limit as
K, 0 1 ln 3 s mp T T D f s 2 T g = gw p p - |˜|
where γ is the Euler-Mascheroni constant, p_f is the Fermi momentum, and ω_D = 0.244 ε_F is the cutoff frequency [30]. Γ̃_s^T is the zero-temperature irreducible particle-particle vertex function, which is approximately equal to the spin singlet normal Fermi-liquid scattering amplitude, denoted by a, given diagrammatically in figure 2(b) [23]. In order to calculate the viscosity of the unitary Fermi gas, we need the normal Fermi-liquid scattering amplitude. The total quasiparticle scattering probability, $\langle W \rangle \equiv \int \frac{d\Omega}{4\pi}\, \frac{W(\theta,\phi)}{\cos(\theta/2)}$, is obtained by averaging the quasiparticle scattering amplitudes of different Kʼs over the phase space [23]. For the unitary Fermi gas, ⟨W⟩ is separated into a superfluid fluctuations term, ⟨W⟩_fluctuations, and a normal Fermi-liquid scattering term, ⟨W⟩_normal:
W W W W W d 4 , cos 2 d 4 , cos 2 4 P K q 0 f 2 n fluctuations normal f max max ò ò p q f q p q f q á ñ= W + W = á ñ + á ñ ( ) ( ) ( ) ( ) ( )
K max stands for the critical value of the total momentum of the incident particles, beyond which Cooper pairs start to break down and the particles scatter off of each other as in the normal Fermi liquid state. It is given by
v K 6 F max v = | | , where e 2 D mp 4 f s 2 T v w = p -
|˜| , from regular quantum field theory analysis [29]. It's important to note that the angular averages in equation (4) are different due to the different angular dependencies in K max and q max [11,31].
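The restricted angular averages entering equation (4) can also be evaluated numerically. The sketch below is a generic quadrature for ⟨W⟩ = ∫ dΩ/(4π) W(θ,φ)/cos(θ/2) with an adjustable angular cutoff; the scattering probability W and the cutoff are user-supplied placeholders, not the paper's analytic forms.

```python
import numpy as np

def angular_average(W, theta_max=np.pi, n_theta=400, n_phi=200):
    # Numerically evaluate <W> = ∫ dΩ/(4π) W(θ, φ) / cos(θ/2), with an upper
    # cutoff on θ to mimic the restricted averages (e.g. over K_max or q_max).
    theta = np.linspace(1e-6, theta_max - 1e-6, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    integrand = W(TH, PH) * np.sin(TH) / np.cos(TH / 2.0)
    inner = np.trapz(integrand, phi, axis=1)
    return np.trapz(inner, theta) / (4.0 * np.pi)

# Example: a constant scattering probability gives <W> ≈ 2 W0 over the full sphere.
W0 = 1.0
print(angular_average(lambda th, ph: W0 * np.ones_like(th)))
```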
The Landau parameters needed for computing the quasiparticle scattering amplitudes are determined from the local induced interaction model, shown diagrammatically in figure 2. First developed to study the quasiparticle interactions in liquid ³He, it has seen success in applications to other interacting Fermi systems and has been further generalized to account for the momentum dependence in the scattering amplitudes [31][32][33][34]. According to the model, the quasiparticle interaction parameter, f, is generated from a direct term, d, which is equivalent to a model-dependent effective quasiparticle potential, and an induced term coming from the coupling of collective excitations to the quasiparticles. The mechanism is shown diagrammatically in figure 1 of Li et al [11]. In this work we use a local, momentum-independent, version of the induced interaction model where only the l = 0 Landau parameters, F_0^{s,a}, are nonzero [11,21,22,35] and given as
F D F A F A 1 2 3 2 , 5 s s s s a a 0 0 0 0 0 0 = + + ( ) F D F A F A 1 2 1 2 , 6 a a s s a a 0 0 0 0 0 0 = + - ( ) where, A F F N a 1 0 s a s a s a s a 0 , 0 , 0 , 0 , = + = ( ) ( ) .
In the unitary limit, the Landau parameters take on the following values: F_0^s = -0.5 and F_0^a → +∞. These parameters capture the strong interactions and successfully explain various universal thermodynamic properties of the unitary Fermi gas [11,36].
Following the approach of Landau Fermi-liquid theory [23], with the local induced interaction model, we calculate the quasiparticle scattering amplitudes W_f(θ, φ) and W_n(θ, φ):
W W a K , 1 2 1 2 2 1 2 2 ,0 2 7 s f 0 2 2 T q f p p = = = ( ) | | ( ) ( ) W W a A N , 1 2 1 2 2 1 2 2 2 0 8 a n 0 2 0 2 q f p p = = = - ( ) | | ( ) ( ) where A 1 a 0 = in+ + + + + + p g p g p g - ⎜ ⎟ ⎜ ⎟ ⎛ ⎝ ⎜ ⎞ ⎠ ⎟ ⎡ ⎣ ⎢ ⎢ ⎢ ⎢ ⎡ ⎣ ⎢ ⎛ ⎝ ⎞ ⎠ ⎤ ⎦ ⎥ ⎛ ⎝ ⎜ ⎛ ⎝ ⎞ ⎠ ⎞ ⎠ ⎟ ⎤ ⎦ ⎥ ⎥ ⎥ ⎥ ⎥ ( ) ( ) ( ) ( ) ( ) ( ) | ( )| · | | | ( )| ( )
To calculate the viscosity of the unitary Fermi gas within the Landau Fermi-liquid theory, we need the viscous lifetime τ η in addition to the scattering probabilities equation (9). In the low temperature limit, the viscous lifetime, 0 t h , is [23] m W k T
t p t =á ñ = h ( ) ( )
where the bare mass and the effective mass are the same since we're operating in a local model, τ without any index is the quasiparticle lifetime, and the factor of 0.205 is from the different angular average of the scattering amplitude in the unitary limit. A finite temperature correction is added to 0 t h to give [37] k T
0 3 0 2 1 t p z =´á ñ - + h - ⎜ ⎟ ⎛ ⎝ ⎞ ⎠ ⎛ ⎝ ⎜ ⎞ ⎠ ⎟ | ( )| ( ) [ ( ) ( ) ] ( )
The viscosity is then given by:
np v T T nk T n T T , 3.4 , 12 f f F B T T F 1 5 3 2 F h t t = = h h ⎧ ⎨ ⎪ ⎩ ⎪ ( ) ( )
Equation (11) for η is the standard Fermi-liquid result [23]. Equation (12), for T ≫ T_F, can be interpreted as the classical viscosity, which is found upon taking a thermal average of equation (11). The classical lifetime [38] is found by fitting to data for the viscosity coefficient [13] and given as 3.4
k T T T 1 2 B F F t µ ( )k T T T 1 2 B F F t » h ( ) .
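For reference, the low-temperature form that equation (11) refers to is the standard Landau Fermi-liquid expression for the shear viscosity quoted from [23]; written out explicitly it reads

\[
  \eta \;=\; \tfrac{1}{5}\, n\, p_F\, v_F\, \tau_\eta , \qquad T \ll T_F .
\]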
A natural concern with our work thus far, given our use of the Landau kinetic equation (LKE) to calculate the viscosity, is the short, tending-to-zero quasiparticle lifetime. In fact, the validity of Fermi liquid theory close to the transition temperature is still an open question that is under debate [39]. Typically, the formal derivation of the LKE and subsequent calculations do not allow for arbitrarily short quasiparticle lifetimes, and one resorts to other methods, such as the Kubo formalism, to calculate transport quantities when the quasiparticle picture is insufficient. Bruun and Smith performed a calculation [40] and showed that corrections to the LKE result are small compared to those using the Kubo formalism. Additionally, the entropy from Ku et al [12], shown in figure 4, exhibits Fermi-liquid-like behavior above T_c. Therefore, in spite of other work that claims Fermi liquid theory is not valid [41,42], we justify our approach through the entropy data closely resembling that of a Fermi liquid, as well as through work done using other methods that yields transport coefficients differing minimally from LKE results.
To calculate the ratio η/s, we also need the entropy density of the unitary Fermi gas. According to Fermi liquid theory [23], the low-temperature entropy density is given by
s nk T T B T T T T T T 2 1 10 ln , 13 B F s F F 2 2 2 * p p = - ⎜ ⎟ ⎛ ⎝ ⎜ ⎞ ⎠ ⎟ ⎡ ⎣ ⎢ ⎢ ⎛ ⎝ ⎜ ⎞ ⎠ ⎟ ⎛ ⎝ ⎞ ⎠ ⎤ ⎦ ⎥ ⎥ ( ) where T v q k T F c B F
*~ is a cutoff temperature [23] (q c is a cutoff momentum defined by
p p q p F c F - | | ), B 4 s 1 2 6 2 = - - p (
) for a local Fermi liquid in the unitary limit, and the logarithmic term stands for the finite-temperature correction to the low-temperature result. In the high-temperature limit, the entropy density takes the form of a classical Fermi gas [43],

$s = n k_B \left\{ \tfrac{5}{2} - \ln\!\left( \tfrac{n\lambda^3}{g} \right) \right\}, \qquad T \gg T_F,$  (14)

where $\lambda = h/(2\pi m^* k_B T)^{1/2}$ is the thermal wavelength and g = 2 for a two-component Fermi gas.

The ratio η/s is plotted over the entire temperature regime in figure 3. The experimental data of η/s from [16], shown in the inset of figure 3, are measured with respect to the reduced temperature θ = T/T_F. Additional data in [44] plot the ratio with respect to E/E_F. A ratio of E/E_F = 0.6 corresponds roughly to a temperature ratio of T/T_F = 0.17; therefore the low-temperature portions of our calculated and the measured ratios of η/s are plotted within the same temperature window. A local minimum, with value η/s ≈ 0.3 ℏ/k_B, is found in the calculated ratio η/s at T ≈ 0.36 T_F (shown by the red curve in figure 3); it agrees roughly with the experimental saturation value of η/s for a nearly perfect Fermi gas [7,13] (in the inset of figure 3) and is not far from the holographic prediction [2], (η/s)_KSS = ℏ/(4π k_B) ≈ 0.08 ℏ/k_B.
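As a numerical illustration of the classical limit, equation (14) can be evaluated at a given T/T_F. The sketch below assumes the ideal two-component gas relation between the density and T_F (a standard convention for defining T_F, not a result of this paper), under which nλ³ = (8/3√π)(T_F/T)^{3/2}.

```python
import numpy as np

def classical_entropy_per_particle(t_over_tf, g=2):
    # High-temperature limit, equation (14): s/(n k_B) = 5/2 - ln(n lambda^3 / g),
    # with n lambda^3 = (8 / (3 sqrt(pi))) * (T_F / T)^(3/2) for an ideal
    # two-component Fermi gas (assumed relation between n and T_F).
    n_lambda3 = 8.0 / (3.0 * np.sqrt(np.pi)) * t_over_tf ** (-1.5)
    return 2.5 - np.log(n_lambda3 / g)

print(classical_entropy_per_particle(2.0))   # entropy per particle at T = 2 T_F
```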
Figure 3: The ratio η/s vs temperature. The ratio η/s is evaluated at F_0^a = 100, i.e. close to the unitary limit where F_0^a → +∞ according to the local model. The black solid curve is the low-temperature limit of η/s and the dashed curve is the high-temperature limit. The red curve represents the single function that captures the behavior of both curves; it was created through simple interpolation, adding the low-temperature and high-temperature expressions together with weight factors, as was done by Li et al [11]. The horizontal blue line indicates the quantum-limited lower bound η/s = ℏ/(4π k_B) conjectured in [2]. The inset shows the data for η/s as a function of θ = T/T_F obtained by Joseph et al [16]. Their data seem to show η/s = 0 at θ = 0.1, which agrees with our result if one considers that T_c = 0.1 T_F in the local model. It should however be stressed that the data in this region are inconclusive and cannot be used to justify agreement with our result (for example, one can easily see that the ratio dropping to zero in the inset happens well within the superfluid phase). What we can say about the inset is that the general behavior of their data is in good qualitative agreement with our result, albeit with a higher local minimum.

However, as can be seen from figure 3 and equations (9) and (12), the ratio η/s is not bounded by this local minimum: it appears to drop to zero at T_c due to superfluid fluctuations as
s T T T T ln 15 F c 3 3 2 h~-⎛ ⎝ ⎜ ⎞ ⎠ ⎟ ⎛ ⎝ ⎜ ⎛ ⎝ ⎜ ⎞ ⎠ ⎟ ⎞ ⎠ ⎟ ( )
which qualitatively agrees with the behavior in the inset of figure 3. The conjectured universal lower bound for η/s is therefore violated in our theory. A concern with our result is whether hidden behavior of the entropy density, not captured by equation (13), is causing η/s → 0. While equation (13) may not be the complete low-temperature behavior, the data given by figure 4 suggest that although a kink is present, there is no divergence or singularity. Neither the theoretical (equation (13)) nor the experimental result diverges, and therefore we believe the entropy density is well behaved and is not driving the ratio to zero. It is important to note that a recent reanalysis of the data in the inset of figure 3 was done by Bluhm et al [45]. They observe a minimum slightly above T_c, as many other works do, but unfortunately cannot comment on a minimum at/below T_c. We believe the lack of conclusive results near T_c is due to the volatile behavior of the system in close vicinity to the critical temperature. The two competing phases make it difficult to obtain data, and theoretical results, ours included, are model dependent. What we can say, however, is that there is a finite quasiparticle weight [42], which lends some validity to our result. Violation of the conjectured bound on η/s within our model begs the following question: why do superfluid fluctuations in the unitary Fermi gas violate the KSS bound?
Viscoelasticity of the unitary Fermi gas
Previous work [24,46] has led us to study the connection between the viscoelastic behavior of the unitary Fermi gas and η/s. Alberte et al have shown that holographic solids, solid massive gravity black branes with nonzero graviton mass, violate the KSS bound [24]. Their work ultimately found that holographic solids with a non-zero bulk modulus, specifically finite shear modulus, violate the KSS bound, with strong evidence for extension to real solids. Our work aims to go a step further by presenting a system where experiment is possible, the unitary Fermi gas, that exhibits viscoelastic behavior and violates the KSS bound.
We must first ask if the viscoelastic model is suitable to describe the unitary Fermi gas, i.e. if the following conditions are met: (i) c_0, c_1 ≫ v_F, where c_0 and c_1 are the speeds of zero and first sound, respectively, and/or (ii) l → 0 as T → T_c, where l is the viscous mean free path. Although (i) is violated for the unitary Fermi gas since -1 < F_0^s < 0, (ii) is satisfied since the quasiparticle mean free path goes to zero as T → T_c and Cooper pairs form. Additionally, provided we are in a regime such that ωτ ≫ 1, according to [47], the fluid behaves as a solid with elastic response.
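As a reminder of the standard picture behind the ωτ criterion (this is the textbook Maxwell viscoelastic model, included here for orientation and not a result of the present paper), the complex shear modulus interpolates between viscous and elastic response:

\[
  G(\omega) \;=\; G_\infty \, \frac{i\omega\tau}{1 + i\omega\tau}, \qquad \eta = G_\infty \tau ,
\]

so that G(ω) ≈ iωη (purely viscous response) for ωτ ≪ 1, while G(ω) ≈ G_∞ (purely elastic response) for ωτ ≫ 1.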
We start with the general form for the stress tensor for a viscoelastic model, different from those found in [23,46,47]:
u 1 6 ij ij ll ij s z d -P = - ( )
where

$\sigma_{ij} = p\,\delta_{ij} + 2\mu\!\left( u_{ij} - \tfrac{1}{3}\, u_{ll}\, \delta_{ij} \right)$

is the stress tensor that shows the two modes (an elastic mode, which is $p\,\delta_{ij}$, and a shear mode, which is the remaining terms) and

$u_{ij} = \tfrac{1}{2}\!\left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right)$

is the strain tensor for small displacements, and u_i is the flow velocity. ζ is the bulk viscosity and may be ignored since ζ/η ∼ T⁴ at low temperature for a normal Fermi liquid [23,48]. In general, μ is the shear modulus, which contains the viscous (viscosity) and elastic (elasticity) behavior (i.e. there are in general two modes, μ_⊥ and μ_∥). Within the viscoelastic model, due to the short lifetime near T_c in the unitary Fermi gas, we have ωτ_η = 1, η ∼ τμ, and elasticity is no different from viscosity. Using the LKE we get
c q qv F 2 15 1 5 17 F s 2 1 2 2 2 2 2 0 w n n - = + ⎛ ⎝ ⎜ ⎞ ⎠ ⎟ ( ) ( ) i F 2 1 1 1 5 18 s 2 0 2 1 n n wt + + h - ⎜ ⎟ ⎡ ⎣ ⎢ ⎛ ⎝ ⎞ ⎠ ⎤ ⎦ ⎥ ( ) ( )
where the real and imaginary parts of (17) are analyzed separately. Letting ω = (c - iα)q, we obtain the following expression for the coefficient of sound attenuation [49]
- - = + + + h h ⎛ ⎝ ⎜ ⎞ ⎠ ⎟ ( ) ( ) ( ) ( ) ( ) ( ) ( )
Equations (19) and (20) provide experimentally attainable quantities relating to the viscoelasticity of unitary Fermi gases. As the temperature of the unitary Fermi gas approaches T_c, two things happen: (i) α → 0 and (ii) c ≃ c_1. From [50] we interpret α → 0 as the penetration depth of c_1 becoming infinite. Additionally, if we impose the restrictions of the local model, as mentioned earlier when dealing with the unitary Fermi gas near T_c, the Fermi liquid parameters F_1^s and higher are zero, but the behavior of α and c_1 remains unchanged. As the unitary Fermi gas approaches its transition temperature, the zero sound mode predicted by Landau Fermi liquid theory is overdamped and not propagating. This leads to the first sound mode propagating through the entire system and is another indicator of viscoelastic behavior. Continuing with the Landau kinetic equation, we can use conservation laws (momentum and number) to obtain a hydrodynamic equation of motion for the mass density
Summary
Superfluid phase transitions appear to have significant effects on the ratio η/s. Our work, investigating such effects in the unitary Fermi gas, presents a violation of the conjectured KSS bound, thus calling into question its proposed universality as well as the role of phase transitions in η/s. In general, strongly coupled systems often exhibit phase transitions, leading us to wonder whether similar conclusions could be drawn about other strongly correlated quantum fluids. For example, in dense nuclear matter produced in heavy-ion collisions, the bound is found to be obeyed, albeit with η/s taking on a very small value, (η/s)_KSS ≤ η/s ≲ 2.5 (η/s)_KSS [1]. Based on our model, one could argue that the small value of η/s is related to fluctuations that arise from the strongly interacting quark gluon plasma (QGP) phase [51]. The transition temperature for the QGP phase is predicted from lattice QCD computations [52] to be T_QGP ~ 170 MeV, and from the experiments below this temperature, η/s is close to the KSS bound [53]. Therefore, we raise a general question: is the minimum found in η/s of the nearly perfect quantum fluid due to universal quantum behavior predicted by the AdS/CFT correspondence, or is it a local minimum in the ratio η/s caused by the interplay between correlated-liquid effects that want the ratio to grow and the fluctuations of a nearby phase that want to drive it to zero at/near the phase transition?
The model developed in this work, which differs from other work by taking into consideration amplitude fluctuations, aims to study the ratio η/s in strongly correlated quantum fluids. While past calculations find a minimum that obeys (1), such as those calculated by Wlazłowski et al [54,55], our calculations have shown that fluctuations from the nearby superfluid phase can drive the ratio η/s to very low values, even to zero at the phase boundary, thus violating the conjectured universal lower bound. More precise measurements of η/s near the phase boundaries, in tighter temperature windows around T c , are needed to establish validity of the KSS bound. Additionally, we expand on the connection between viscoelastic responses and violation of the conjectured bound as was first introduced by Alberte et al [24]. In our work and that done by Alberte et al, two systems that can violate the KSS bound, the unitary Fermi gas and holographic solids, exhibit both viscous and elastic responses implying that complicated viscoelastic behavior, in addition to phase fluctuations, contribute to violation of the KSS bound. In conclusion, our theory provides an alternative and unique way of studying η/s in a strongly correlated quantum fluid by considering the effects of pairing instabilities in the quasiparticle scattering amplitude. We hope this work sheds light on the rich connection between condensed matter and high energy problems through (bottom up) AdS/CFT.
Figure 2: Diagrammatic representation of the induced interaction model. (a) represents the equation for the Landau parameters f decomposed into direct and induced terms; (b) sums all the reducible diagrams and represents the equation relating f to the scattering amplitudes a = A/N(0). The momentum in the particle-hole channel is represented by q = p_1 - p_3 = p_4 - p_2, and the momentum in the exchange particle-hole channel is q′ = p_1 - p_4 = p_3 - p_2.

Figure 4: Data for the entropy per particle from Ku et al [12] (red dots). At all temperatures, specifically around T_c = 0.167 T_F (blue dotted line), the entropy is a well-behaved function without discontinuity. This supports the claim that η/s → 0 as T → T_c is due to lifetime effects and not unusual behavior in the entropy. The remaining three curves are our expressions for the entropy (equations (13) and (14)). The green solid curve is equation (13), the black dashed curve is equation (14), and the purple dashed/dotted curve is equation (13) with T/T_F* and B_s as adjustable parameters (fit values given in the legend). The purple curve, in spite of agreeing with the low-temperature dependence and matching equation (13), suggests that for more accurate results we must go beyond the local model for a Fermi liquid. As one can see, the entropy behaves closely to that of a Fermi liquid, suggesting a good quasiparticle picture and further validating our use of the LKE regardless of the vanishing quasiparticle lifetime.
This leads to a wave equation for a sound wave propagating at velocity c_1, which is in agreement with our analysis and interpretation of equation (20).

The real part of (17) gives the coefficient of sound attenuation α in the unitary Fermi gas, equation (19), and the corresponding relation between the propagation speed c and the first-sound speed c_1, equation (20); both expressions involve v_F, q, F_0^s, F_1^s, and ωτ_η.
Acknowledgments
The authors M Gochan, H Li, and K Bedell would like to thank Joshuah Heath for his valuable discussions on AdS/CFT, John Thomas for data and figure use in the inset of figure 3, and Mark Ku for experimental data used in figure 4. This work is supported by the John H Rourke Boston College endowment fund.

ORCID iDs
M P Gochan https://orcid.org/0000-0002-2704-7066
. S Cremonini, 10.1142/S0217984911027315Mod. Phys. Lett. B. 25Cremonini S 2011 Mod. Phys. Lett. B 25 1867-88
. P K Kovtun, D T Son, A Starinets, 10.1103/PhysRevLett.94.111601Phys. Rev. Lett. 94111601Kovtun P K, Son D T and Starinets A O 2005 Phys. Rev. Lett. 94 111601
. P Kovtun, D T Son, A Starinets, 10.1088/1126-6708/2003/10/064J. High Energy Physics. 0364Kovtun P, Son D T and Starinets A O 2003 J. High Energy Physics JHEP03(2003)064
. N Iqbal, H Liu, 10.1103/PhysRevD.79.025023Phys. Rev. D. 7925023Iqbal N and Liu H 2009 Phys. Rev. D 79 025023
. G Policastro, Son D Starinets, A , 10.1103/PhysRevLett.87.081601Phy. Rev. Lett. 8781601Policastro G, Son D and Starinets A 2001 Phy. Rev. Lett. 87 081601
. M Müller, J Schmalian, L Fritz, 10.1103/PhysRevLett.103.025301Phys. Rev. Lett. 10325301Müller M, Schmalian J and Fritz L 2009 Phys. Rev. Lett. 103 025301
. J Thomas, 10.1063/1.3431329Phys. Today. 63Thomas J 2010 Phys. Today 63 34-7
. L Alberte, M Ammon, M Baggioli, Jiménez A Pujulàs, 10.1007/JHEP01(2018)129J. High Energy Phys. 18129Alberte L, Ammon M, Baggioli M, Jiménez A and Pujulàs O 2018 J. High Energy Phys. JHEP18(2018)129
. L Alberte, M Ammon, M Baggioli, Jiménez A Pujulàs, O , 10.1103/PhysRevLett.120.171602Holographic Phonos Phys. Rev. Lett. 120171602Alberte L, Ammon M, Baggioli M, Jiménez A and Pujulàs O 2018 Holographic Phonos Phys. Rev. Lett. 120 171602
. A Sommer, M Ku, G Roati, M W Zwierlein, 10.1038/nature09989Nature. 472Sommer A, Ku M, Roati G and Zwierlein M W 2011 Nature 472 201-4
. H Li, Jackiewicz J Bedell, K S , 10.1103/PhysRevB.91.075107Phys. Rev. B. 9175107Li H, Jackiewicz J and Bedell K S 2015 Phys. Rev. B 91 075107
| []
|
[
"γ-ray spectra and enhancement factors for positron annihilation with core electrons",
"γ-ray spectra and enhancement factors for positron annihilation with core electrons"
]
| [
"D G Green \nDepartment of Applied Mathematics and Theoretical Physics\nQueen's University Belfast\nNorthern IrelandBT7 1NNBelfastUnited Kingdom\n",
"G F Gribakin \nDepartment of Applied Mathematics and Theoretical Physics\nQueen's University Belfast\nNorthern IrelandBT7 1NNBelfastUnited Kingdom\n"
]
| [
"Department of Applied Mathematics and Theoretical Physics\nQueen's University Belfast\nNorthern IrelandBT7 1NNBelfastUnited Kingdom",
"Department of Applied Mathematics and Theoretical Physics\nQueen's University Belfast\nNorthern IrelandBT7 1NNBelfastUnited Kingdom"
]
| []
| Many-body theory is developed to calculate the γ-spectra for positron annihilation with valence and core electrons in the noble gas atoms. A proper inclusion of correlation effects and core annihilation provides for an accurate description of the measured spectra [Iwata et al., Phys. Rev. Lett. 79, 39 (1997)]. The theory enables us to calculate the enhancement factors γ nl , which describe the effect of electron-positron correlations for annihilation on individual electron orbitals nl. We find that the enhancement factors scale with the orbital ionization energy I nl (in electron-volt), as γ nl = 1 + A/I nl + (B/I nl ) β , where A ≈ 40 eV, B ≈ 24 eV and β ≈ 2.3. | 10.1103/physrevlett.114.093201 | [
"https://export.arxiv.org/pdf/1406.4323v2.pdf"
]
| 29,254,719 | 1406.4323 | f8d13c82ecead8c0c2726889be798e4ec4a6905d |
γ-ray spectra and enhancement factors for positron annihilation with core electrons
D G Green
Department of Applied Mathematics and Theoretical Physics
Queen's University Belfast
Northern IrelandBT7 1NNBelfastUnited Kingdom
G F Gribakin
Department of Applied Mathematics and Theoretical Physics
Queen's University Belfast
Northern IrelandBT7 1NNBelfastUnited Kingdom
γ-ray spectra and enhancement factors for positron annihilation with core electrons
(Dated: March 1, 2022)
Many-body theory is developed to calculate the γ-spectra for positron annihilation with valence and core electrons in the noble gas atoms. A proper inclusion of correlation effects and core annihilation provides for an accurate description of the measured spectra [Iwata et al., Phys. Rev. Lett. 79, 39 (1997)]. The theory enables us to calculate the enhancement factors γ nl , which describe the effect of electron-positron correlations for annihilation on individual electron orbitals nl. We find that the enhancement factors scale with the orbital ionization energy I nl (in electron-volt), as γ nl = 1 + A/I nl + (B/I nl ) β , where A ≈ 40 eV, B ≈ 24 eV and β ≈ 2.3.
Introduction.-In this Letter we show that many-body theory provides an accurate description of the positron annihilation gamma spectra for noble gas atoms. Key to this is the ability of the theory to describe strong electron-positron correlations which enhance the annihilation beyond the independent-particle approximation (IPA) results. This enhancement is important not only for positron annihilation with weakly-bound valence electrons, but also for the core electrons, and the corresponding enhancement factors display a near-universal scaling with the electron orbital ionization energy.
Due to repulsion from the nuclei, low-energy positrons annihilate predominantly on the outermost (valence) electrons in atoms, molecules, and condensed matter systems. Small fractions of positrons can, however, tunnel through the repulsive potential and annihilate on the core electrons. Two-photon annihilation is the dominant mode in both cases. The corresponding Doppler-broadened γ-ray energy spectrum is centered on 511 keV, and is characteristic of the electron velocity distribution in the states involved. In particular, annihilation on the tightly-bound core electrons results in distinct features at higher-energy Doppler shifts in the spectrum [1,2]. The core annihilation signal shows high elemental specificity [3], which can be used to study vacancies in metals and identify defects in semiconductors [4,5] (see [6] and references therein). Positron annihilation on core electrons is also a key process in the surface-analytic positron-induced Auger-electron spectroscopy (PAES) [7][8][9][10], and the time-resolved PAES [11], which enables the study of dynamics of catalysis, corrosion, and surface alloying [12]. Coincident measurements of the annihilation γ-rays and Auger electrons allow one to determine the annihilation γ-ray spectra for individual core orbitals [13,14].
Interpretation of the experiments relies heavily on theoretical input. For example, for PAES one needs to know the relative probabilities of positron annihilation with inner electrons of various atoms [15]. However, the process of positron annihilation in many-electron systems is characterised by strong electron-positron correlations.
These correlations affect both the positron wave function and the electron-positron annihilation vertex. They lead to dramatic enhancements of positron annihilation rates in heavier noble-gas atoms, compared with the singleparticle (e.g., Hartree-Fock) approximation (see [16] and references therein), and have a significant effect on the shapes of the γ-ray spectra [17][18][19]. For atomic systems electron-positron correlations can be included systematically and accurately by using many-body theory methods [16,20]. Many-body theory provided important initial insights into positron annihilation in metals by considering positrons in an electron gas [21,22]. These early works introduced the concept of enhancement factors, which measure the increase of the electron density at the positron. Subsequently, density functional theories have been developed for condensed-matter systems [23,24]. They describe positron states and annihilation in real materials, often using parametrizations of the correlation energy and enhancement factors for the positron in electron gas from many-body theory [25]. The enhancement factors are particularly large (∼10) for the valence electrons, but they are also significant for the core electrons [26]. They can be used to improve the annihilation probabilities and γ-spectra calculated in IPA [1,15]. However, benchmarking common electron-density-dependent enhancement factors against accurate positron-atom calculations reveals their deficiencies [27]. Their use also leads to spurious effects in the γ-ray spectra [5].
Positron interaction with noble-gas atoms has been studied thoroughly in experiment by measuring the scattering cross sections and annihilation rates. This system is ideal for testing the ability of theoretical and computational approaches to account for electron-positron correlations. An extensive comparison with the available data attests the validity and accuracy of the many-body theory we have developed [16]. An outstanding issue is the annihilation γ-spectra of Ar, Kr and Xe which were measured by the San Diego group [2], but till now have eluded theoretical description. In this work we extend the manybody theory approach to calculate the γ-ray spectra for positron annihilation with the valence and core electrons of the noble gas atoms. We show that by properly accounting for the correlations in both the core and valence annihilation, the theory yields excellent agreement with experiment, including the range of large Doppler shifts where the core contribution dominates. The many-body theory also allows us to quantify the effect of correlations on the annihilation vertex and extract the "exact" enhancement factors γ nl for individual valence and core electron orbitals nl [28].
Theory.-In the dominant process, a positron annihilates with an electron in state n to form two γ-ray photons of total momentum P [29]. In the centre-of-mass frame (P = 0), the two γ-rays have equal energies mc² = 511 keV (neglecting the initial positron and electron energies ε and ε_n). In the laboratory frame, however, the photon energies are Doppler shifted by ≤ Pc/2 (typically, a few keV). The corresponding γ-spectrum is

w_n(ε) = (1/c) ∫_{2|ε|/c}^{∞} ∮_{Ω_P} |A_{nε}(P)|² dΩ_P/(2π)³ P dP,  (1)
where A_{nε}(P) is the annihilation amplitude [17]. It is given diagrammatically in Fig. 1 (see [16-18, 30, 31] for details). Specifically, shown are the dominant contributions to the annihilation vertex: the zeroth-order vertex, which represents the IPA amplitude, the first-order, and higher-order ('Γ-block') corrections, which account for the attractive electron-positron interaction at short range and enhance the annihilation rate. In practice, one usually calculates the spectrum for all electrons in a given atomic orbital nl. The total spectrum, w(ε) = Σ_{nl} w_{nl}(ε), which is probed in experiment, retains distinct features of the contributions of individual valence and core orbitals. The fully-correlated incident positron quasiparticle wave function ψ_ε is calculated from the Dyson equation (H_0 + Σ_ε)ψ_ε = εψ_ε, where H_0 is the Hamiltonian of the positron in the field of the Hartree-Fock (HF) ground state atom, and Σ_ε is the positron self-energy operator which plays the role of a nonlocal, energy-dependent positron-atom correlation potential. This potential accounts for polarization of the atom by the positron and for virtual positronium formation (represented by the Γ-block), both of which contribute to the positron-atom attraction (see [16,20] for details).
The positron annihilation rate in a gas is usually parameterized by the dimensionless effective number of electrons, Z eff [32]. For an orbital nl, it is given by
Z_{eff,nl} = ∫_{−∞}^{∞} w_{nl}(ε) dε.  (2)
In general, Z eff,nl for valence electrons is greater than the number of electrons in the subshell, owing to the positron-atom attraction and enhancement due to the electron-positron short-range correlations. The positron self-energy diagrams and the annihilation amplitude involve summations over intermediate excited electron and positron continuum states. We calculate them numerically by employing B-spline basis sets. Here, we use a basis of 40 splines of order 6, and a spherical box of radius 30 a.u. The maximum angular momentum of the intermediate states is l max =15. For this basis the sums over the energies converge rapidly, and we perform extrapolation to l max → ∞ as in [17]. Full details of the numerical implementation are given in [16,30,31].
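To make the reduction from spectrum to annihilation rate concrete, the following minimal numerical sketch (not part of the original calculation) evaluates Eq. (2) for a tabulated orbital spectrum; the Doppler-shift grid and the Gaussian test spectrum are placeholders, not the calculated many-body spectra.

import numpy as np

# Placeholder Doppler-shift grid (keV) and a mock Gaussian spectrum w_nl(eps);
# in the actual calculation w_nl comes from Eq. (1) with the full annihilation vertex.
eps = np.linspace(-8.0, 8.0, 801)
w_nl = np.exp(-eps**2 / (2 * 1.2**2))

Z_eff_nl = np.trapz(w_nl, eps)                    # Eq. (2): area under the spectrum
above_half = eps[w_nl >= 0.5 * w_nl.max()]
fwhm = above_half[-1] - above_half[0]             # crude full width at half maximum
print(Z_eff_nl, fwhm)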
Results.-The annihilation γ-ray spectra for Ar, Kr and Xe were measured at low gas pressures with room-temperature positrons confined in a Penning-Malmberg trap [2]. That work also showed that the IPA [Fig. 1 (a)] fails to describe the spectra accurately. It overestimates both the full width at half maximum (FWHM) of the spectra by 10-15%, and the fraction of core annihilation, as seen from an excessive spectral weight at large Doppler shifts. Ref. [17] showed that the first-order correction [Fig. 1 (b)] led to a narrowing of the spectrum, but was insufficient to describe the measured spectra.
The full calculation presented in this work highlights the importance of higher-order corrections [Fig. 1 (c)], especially for the valence electrons. The many-body theory also shows that the self-energy correlations that affect the positron wave function (double line in Fig. 1) and the correlation corrections to the annihilation vertex [diagrams (b) and (c)] have strikingly different effects on the spectra. As an example, Fig. 2 presents the γ-ray spectra for the outer valence 4p orbital and a core 3p orbital in Kr [33]. It shows that the vertex corrections enhance the annihilation signal by almost an order of magnitude for the valence electrons and by about 50% for the core orbital. The role of the higher-order corrections [Fig. 1 (c)] is much more prominent for the valence electrons. Vertex corrections also lead to a significant narrowing of the spectrum for the valence electrons. Physically, this is related to the fact that in the vertex correction diagrams the positron annihilates with an excited electron, whose wave function is more diffuse than that of the core hole. In contrast, the main result of improving the positron wave function (i.e., using the Dyson orbital rather than the static HF wave function) is a uniform increase in the annihilation signal. This increase is due to the build-up of the positron density in the vicinity of the atom caused by the positron-atom attraction. The magnitude of this effect is similar for the valence and core electrons. However, in contrast with the vertex corrections, it is sensitive to the atomic environment and the positron energy. In particular, low-energy annihilation in heavier noble-gas atoms is strongly enhanced by positron virtual states [16].

Figure 3 shows the γ-spectra for positron annihilation on individual subshells of Ar, Kr and Xe, calculated with the full amplitude (Fig. 1) using the Dyson s-wave positron state of thermal momentum k = 0.04 a.u. The narrowly peaked valence spectra dominate the total spectra at low Doppler shifts. Compared with the valence electrons, the tightly-bound core electrons have greater velocities and produce broader γ-ray spectra. Note also that most individual spectra include multiple 'shoulders'. The number of these is determined by the number of nodes n − l in the electron radial wave function, as each 'lobe' of the bound state wave function produces a characteristic contribution to the annihilation momentum density. In this way the spectra of the valence orbitals contain high-momentum components characteristic of the core orbitals to which they are orthogonal. Overall, the total γ-spectra retain the characteristics of the valence and core contributions. Figure 4 shows the calculated total spectra convolved with the detector resolution function and normalized to the experimental data at zero Doppler shifts [2].
For each atom the valence component underestimates the experimental spectrum in the high-energy 'wings', while the inclusion of the core brings the theoretical spectra into close agreement with experiment [34]. This agreement supports the accuracy of the fractions of core annihilation derived from our many-body theory calculations: 0.55% in Ar, 1.53% in Kr, and 2.23% in Xe [35]. Note that the IPA γ-spectra obtained with the positron states in the static atomic field (dotted lines in Fig. 3) are significantly broader than the experiment. Such a calculation also overestimates the fraction of core annihilation by a factor of two. However, when this fraction is used as a free parameter to fit the experimental data [2], the core annihilation fractions for Kr and Xe (1.3% and 2.4%, respectively) are close to the above ab initio values.
Enhancement factors.-The many-body theory developed here and in Ref. [16] allows us to calculate the enhancement factors due to the correlation corrections to the annihilation vertex (Fig. 1). These factors can be determined from the ratio of the annihilation rates Z_eff obtained with the full vertex to that of the zeroth-order (IPA), for each electron orbital nl:

γ_nl = Z^{(0+1+Γ)}_{eff,nl} / Z^{(0)}_{eff,nl}.  (3)

Figure 5 shows the enhancement factors γ_nl for the core and valence orbitals of Ar, Kr and Xe, for both static HF and Dyson incident positron states. Also shown are values of γ_1s for hydrogen and hydrogen-like ions, obtained using the many-body theory approach [18,20]. The values of γ_nl obtained with the positron wave function in the static atomic field are slightly larger than those found using the fully correlated Dyson wave functions (although this effect is negligible for the positive ions). This difference aside, Fig. 5 displays a near-universal scaling of the enhancement factors for the neutral atoms with the orbital ionization energy I_nl. This scaling can be parametrized by the formula

γ_nl = 1 + A/I_nl + (B/I_nl)^β,  (4)

where A, B and β are constants found by fitting the numerical data. The second term on the right-hand side of Eq. (4) describes the effect of the first-order correction, Fig. 1 (b). Its scaling with I_nl is motivated by the 1/Z scaling of the enhancement factors in hydrogen-like ions [18]. The third term is phenomenological; it accounts for the higher-order corrections which are important for the valence electrons (cf. Fig. 2).
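As a rough numerical illustration of Eq. (4) (not part of the original calculation), the snippet below evaluates the fitted scaling using the Dyson-state constants quoted in the caption of Fig. 5; the two example ionization energies are approximate values chosen only to mimic a valence-like and a core-like orbital.

def vertex_enhancement(I_nl, A=35.7, B=22.7, beta=2.15):
    """Empirical enhancement factor of Eq. (4); I_nl in eV, constants from the
    Dyson-positron fit quoted in the Fig. 5 caption."""
    return 1.0 + A / I_nl + (B / I_nl) ** beta

# Illustrative orbital ionization energies (eV): valence-like and core-like values.
for I_nl in (14.0, 95.0):
    print(f"I_nl = {I_nl:5.1f} eV  ->  gamma_nl ~ {vertex_enhancement(I_nl):.2f}")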
Summary.-Many-body theory has been used to calculate the contribution of individual subshells to the γspectra of positron annihilation in noble gases. Inclusion of core annihilation gives results in excellent agreement with experiment and yields accurate core annihilation probabilities. 'Exact' vertex enhancement factors obtained from the calculations have been found to follow a simple scaling with the electron ionization energy. We suggest that this result can be used to improve simple IPA calculations of core annihilation on atoms across the periodic table and in condensed matter.
We thank C. M. Surko for valuable discussions. DGG is grateful to the Institute for Theoretical Atomic, Molecular and Optical Physics at the Harvard-Smithsonian Centre for Astrophysics, where he carried out part of this work, and thanks H. R. Sadeghpour and colleagues for their generous hospitality.
FIG. 1. Amplitude of positron annihilation with an electron in state n: (a) zeroth-order vertex, (b) first-order, and (c) 'Γ-block' corrections. Double lines labelled ε represent the incident positron wave function; single lines labelled ν (µ) represent positron (excited electron) states, which are summed over; lines labelled n represent holes in the atomic ground state; wavy lines represent the electron-positron Coulomb interactions, and double-dashed lines represent the two γ-ray photons. The Γ-block is the sum of an infinite series of electron-positron ladder diagrams [16,20].
FIG. 2. Annihilation γ-ray spectra for the 4p valence and 3p core electron orbitals in Kr, calculated using the positron wave function in the static field of the HF atom, and with the account of the correlation potential Σ_ε (Dyson), and with various approximations for the annihilation vertex [Fig. 1]. Dashed curves are for the zeroth-order vertex ("0", IPA); chain curves include the first-order correction ("0 + 1"); solid curves show the results for the full vertex ("0 + 1 + Γ").
FIG. 3. Calculated γ-spectra for positron annihilation on individual subshells nl in Ar, Kr and Xe: valence ns, np, (solid black and red lines); core (n − 1)s, (n − 1)p, and (n − 1)d (dashed lines); inner core (n − 2)s, (n − 2)p, and (n − 2)d (dash-dash-dotted lines); and total spectra (thick solid green line). All spectra are obtained using the full annihilation vertex (Fig. 1) and Dyson positron wave function.
FIG. 4. γ-spectra for positron annihilation in Ar, Kr and Xe. Experiment: red circles. Theory (convolved with the detector resolution function): valence (dashed line); core (dash-dash-dotted line); and total (solid line), calculated with the full annihilation vertex and Dyson positron wave function. Also shown is the static HF (IPA) calculation of [2] (blue dots).
FIG. 5. Enhancement factors (3) calculated using static HF (open symbols) and Dyson (solid symbols) positron states, for the ns (triangles), np (squares) and nd (circles) valence and core orbitals in Ar, Kr, and Xe; 1s orbitals of hydrogen (upside-down triangles) [20]; and hydrogen-like ions (plus signs) [18]. Dashed line is the fit (4) of γ_nl for atoms obtained using the static HF positron wave function (A = 42.0 eV, B = 24.9 eV, β = 2.54), and the solid line is that for the Dyson positron wave function (A = 35.7 eV, B = 22.7 eV, β = 2.15).
* Correspondence to: [email protected]; Present address: Joint Quantum Centre (JQC) Durham/Newcastle, Department of Chemistry, Durham University, South Road, Durham, DH1 3LE, UK. † [email protected]
K. G. Lynn, J. R. MacDonald, R. A. Boie, L. C. Feldman, J. D. Gabbe, M. F. Robbins, E. Bonderup, and J. Golovchenko, Phys. Rev. Lett. 38, 241 (1977).
K. Iwata, G. F. Gribakin, R. G. Greaves, and C. M. Surko, Phys. Rev. Lett. 79, 39 (1997).
P. Asoka-Kumar, M. Alatalo, V. J. Ghosh, A. C. Kruseman, B. Nielsen, and K. G. Lynn, Phys. Rev. Lett. 77, 2097 (1996).
K. G. Lynn, J. E. Dickman, W. L. Brown, M. F. Robbins, and E. Bonderup, Phys. Rev. B 20, 3566 (1979).
M. Alatalo, B. Barbiellini, M. Hakala, H. Kauppinen, T. Korhonen, M. J. Puska, K. Saarinen, P. Hautojärvi, and R. M. Nieminen, Phys. Rev. B 54, 2397 (1996).
F. Tuomisto and I. Makkonen, Rev. Mod. Phys. 85, 1583 (2013).
A. Weiss, R. Mayer, M. Jibaly, C. Lei, D. Mehl, and K. G. Lynn, Phys. Rev. Lett. 61, 2245 (1988).
T. Ohdaira, R. Suzuki, T. Mikado, H. Ohgaki, M. Chiwaki, and T. Yamazaki, Appl. Surf. Sci. 116, 177 (1997).
A. H. Weiss, N. G. Fazleev, M. P. Nadesalingam, S. Mukherjee, S. Xie, J. Zhu, and B. R. Davis, Radiat. Phys. Chem. 76, 285 (2007).
J. Mayer, C. Hugenschmidt, and K. Schreckenbach, Surface Science 604, 1772 (2010).
C. Hugenschmidt, B. Löwe, J. Mayer, C. Piochacz, P. Pikart, R. Repper, M. Stadlbauer, and K. Schreckenbach, Nuc. Instrum. Meth. A 593, 616 (2008).
J. Mayer, C. Hugenschmidt, and K. Schreckenbach, Phys. Rev. Lett. 105, 207401 (2010).
A. Eshed, S. Goktepeli, A. R. Koymen, S. Kim, W. C. Chen, D. J. O'Kelly, P. A. Sterne, and A. H. Weiss, Phys. Rev. Lett. 89, 075503 (2002).
S. Kim, A. Eshed, S. Goktepeli, P. A. Sterne, A. R. Koymen, W. C. Chen, and A. H. Weiss, Phys. Rev. B 73, 014114 (2006).
K. O. Jensen and A. Weiss, Phys. Rev. B 41, 3928 (1990).
D. G. Green, J. A. Ludlow, and G. F. Gribakin, Phys. Rev. A 90, 032712 (2014).
L. J. M. Dunlop and G. F. Gribakin, J. Phys. B 39, 1647 (2006).
D. G. Green and G. F. Gribakin, Phys. Rev. A 88, 032708 (2013).
D. G. Green, S. Saha, F. Wang, G. F. Gribakin, and C. M. Surko, New J. Phys. 14, 035021 (2012).
G. F. Gribakin and J. Ludlow, Phys. Rev. A 70, 032720 (2004).
S. Kahana, Phys. Rev. 129, 1622 (1963).
J. P. Carbotte, Phys. Rev. 155, 197 (1967).
E. Boroński and R. M. Nieminen, Phys. Rev. B 34, 3820 (1986).
M. J. Puska and R. M. Nieminen, Rev. Mod. Phys. 66, 841 (1994).
J. Arponen and E. Pajanne, Ann. Phys. 121, 343 (1979).
E. Bonderup, J. U. Andersen, and D. N. Lowy, Phys. Rev. B 20, 883 (1979).
J. Mitroy and B. Barbiellini, Phys. Rev. B 65, 235103 (2002).
The theory also allows one to extract momentum-dependent enhancement factors [18, 19].
V. B. Berestetskii, E. M. Lifshitz, and L. P. Pitaevskii, Quantum Electrodynamics, 2nd ed. (Pergamon, Oxford, 1982).
D. G. Green and G. F. Gribakin, (unpublished).
D. G. Green, Ph.D. thesis, Queen's University Belfast (2011).
Z_eff is the ratio of the positron annihilation rate in an atomic or molecular gas to the basic Dirac annihilation rate in the electron gas of the same number density.
All calculations are done for the s-wave incident positron with room-temperature momentum k = 0.04 a.u.
In Kr and Xe the theoretical spectrum slightly underestimates the measurements at large Doppler shifts. One possible reason for this discrepancy is the neglect of relativistic effects on the electron wave functions [36]. However, a recent work [37], which employs model potentials to describe positron-atom interactions, shows that the relativistic effect on the γ-spectra is small, e.g., increasing the FWHM in Xe by only 1.4%.
We estimate the uncertainty in these numbers to be about 5%, comparable to the effect of nonladder 3rd-order diagrams on the valence annihilation rates [16].
J. P. D. Cook, J. Mitroy, and E. Weigold, Phys. Rev. Lett. 52, 1116 (1984).
Y. Cheng and J. Mitroy, Phys. Rev. A 90, 042702 (2014).
| []
|
[
"Efficient Path Planning in Narrow Passages for Robots with Ellipsoidal Components",
"Efficient Path Planning in Narrow Passages for Robots with Ellipsoidal Components",
"Efficient Path Planning in Narrow Passages for Robots with Ellipsoidal Components",
"Efficient Path Planning in Narrow Passages for Robots with Ellipsoidal Components"
]
| [
"Sipu Ruan ",
"Karen L Poblete ",
"Hongtao Wu ",
"Qianli Ma ",
"Gregory S Chirikjian ",
"Sipu Ruan ",
"Karen L Poblete ",
"Hongtao Wu ",
"Qianli Ma ",
"Gregory S Chirikjian "
]
| []
| []
| Path planning has long been one of the major research areas in robotics, with PRM and RRT being two of the most effective classes of planners. Though generally very efficient, these sampling-based planners can become computationally expensive in the important case of "narrow passages". This paper develops a path planning paradigm specifically formulated for narrow passage problems. The core is based on planning for rigid-body robots encapsulated by unions of ellipsoids. Each environmental feature is represented geometrically using a strictly convex body with a C 1 boundary (e.g., superquadric). The main benefit of doing this is that configuration-space obstacles can be parameterized explicitly in closed form, thereby allowing prior knowledge to be used to avoid sampling infeasible configurations. Then, by characterizing a tight volume bound for multiple ellipsoids, robot transitions involving rotations are guaranteed to be collision-free without needing to perform traditional collision detection. Furthermore, by combining with a stochastic sampling strategy, the proposed planning framework can be extended to solving higher dimensional problems in which the robot has a moving base and articulated appendages. Benchmark results show that the proposed framework often outperforms the sampling-based planners in terms of computational time and success rate in finding a path through narrow corridors for both single-body robots and those with higher dimensional configuration spaces. Physical experiments using the proposed framework are further demonstrated on a humanoid robot that walks in several cluttered environments with narrow passages.Index Terms-Motion and path planning, computational geometry, Minkowski sums | 10.1109/tro.2022.3187818 | [
"https://export.arxiv.org/pdf/2104.04658v2.pdf"
]
| 250,144,866 | 2104.04658 | e6c2e683e421065c38fcc80b088d4123c0efc663 |
Efficient Path Planning in Narrow Passages for Robots with Ellipsoidal Components
30 Jun 2022
Sipu Ruan
Karen L Poblete
Hongtao Wu
Qianli Ma
Gregory S Chirikjian
Efficient Path Planning in Narrow Passages for Robots with Ellipsoidal Components
30 Jun 2022
Path planning has long been one of the major research areas in robotics, with PRM and RRT being two of the most effective classes of planners. Though generally very efficient, these sampling-based planners can become computationally expensive in the important case of "narrow passages". This paper develops a path planning paradigm specifically formulated for narrow passage problems. The core is based on planning for rigid-body robots encapsulated by unions of ellipsoids. Each environmental feature is represented geometrically using a strictly convex body with a C 1 boundary (e.g., superquadric). The main benefit of doing this is that configuration-space obstacles can be parameterized explicitly in closed form, thereby allowing prior knowledge to be used to avoid sampling infeasible configurations. Then, by characterizing a tight volume bound for multiple ellipsoids, robot transitions involving rotations are guaranteed to be collision-free without needing to perform traditional collision detection. Furthermore, by combining with a stochastic sampling strategy, the proposed planning framework can be extended to solving higher dimensional problems in which the robot has a moving base and articulated appendages. Benchmark results show that the proposed framework often outperforms the sampling-based planners in terms of computational time and success rate in finding a path through narrow corridors for both single-body robots and those with higher dimensional configuration spaces. Physical experiments using the proposed framework are further demonstrated on a humanoid robot that walks in several cluttered environments with narrow passages.Index Terms-Motion and path planning, computational geometry, Minkowski sums
I. INTRODUCTION
Sampling-based planners such as PRM [2] and RRT [3] (and a multitude of their extensions, e.g., [4], [5]) have demonstrated remarkable success in solving complex robot motion planning problems. These frameworks generate state samples randomly and perform explicit collision detection to assess their feasibility. These methods have had a profound impact both within robotics and across other fields such as molecular docking, urban planning, and assembly automation.
It is well known that despite the great success of these methods, the "narrow passage" problem remains a significant challenge. Generally speaking, when there is a narrow passage, a large number of random state samples and edges are generated and checked, only to be eventually discarded. To increase the probability of sampling and connecting valid configurations in a narrow passage, various methods have been proposed, such as [6]-[8] (Sec. II-A provides a more detailed review of narrow passage problems). In this article, however, the narrow passage problem is addressed through an explicit closed-form characterization of the boundary between free and in-collision regions. The first goal of this paper is to: 1. Extend the previous methods of parameterizing the free space for single-body ellipsoidal robots avoiding ellipsoidal obstacles [9]. A more general case is studied where the obstacles are represented by unions of strictly convex bodies with C 1 boundaries.
In our proposed path planning framework, the robot is encapsulated by a union of ellipsoids. The configuration spaces to be considered are SE(d) and SE(d) × (S 1 ) n for rigid-body and articulated robots, respectively 1 . Ellipsoids have a wide range of applications in encapsulating robots. For example, the projection contour of a humanoid robot can be tightly encapsulated by an ellipse since its shoulders are wider than its head [10] (Fig. 1a). In computational crystallography, it is natural to approximate a protein molecule by a moment-of-inertia ellipsoid, which simplifies the complex geometric models while maintaining the physical information of the protein [11] (Fig. 1b). Moreover, superquadrics are chosen as examples to represent environmental features. This family of shapes generalizes ellipsoids by adding freedom in choosing the exponents rather than restricting them to quadratics. It represents a wider range of complex shapes (e.g., cuboids, cylinders, etc.) while requiring only a few parameters [12].
Fig. 1. (a) Projection contour of a NAO humanoid robot is enclosed by an ellipse (yellow).

When a robot is fixed at a certain orientation and internal joint angles, a "slice" of the configuration space (C-space) is defined by the Minkowski sums between the rigid body parts and the obstacles in the workspace [13], [14], denoted here as a "C-slice" [15]. (Sec. II-B reviews the literature on the computations of Minkowski sums in detail.) Once the C-space obstacles (C-obstacles) are computed, the complement region between the planning arena 2 and the union of C-obstacles is the free space through which the robot can travel safely. Consequently, collision-free samples can be generated within this collision-free C-space. However, if one seeks to connect such samples using current sampling-based planners like PRM or RRT, explicit collision checking is still required. Therefore, the second goal of this article is to:
2. Develop guaranteed safe and efficient methods for connecting configurations between different C-slices without performing explicit collision checking between pairwise bodies.
A "bridge C-slice" idea is proposed as a local planner to guarantee safe transitions between different C-slices. The name suggests that a new C-slice is built as a bridge between two adjacent C-slices. To efficiently construct a bridge C-slice, an enlarged void for each ellipsoidal robot part is computed in closed form. Here, a "void" is the free space that fully contains the robot part, ensuring that it moves without collisions. A sweep volume is then constructed to enclose the robot at all possible intermediate configurations during the transition.
All the above methods are combined into a path planning algorithm called "Highway RoadMap (HRM)". This planner is deterministic and suitable for rigid-body planning problems. It is known that traditional deterministic planners suffer from the curse of dimensionality in the case of articulated robots. Therefore, the third goal of this article is to:
3. Develop an effective method to tackle the exponential computational complexity for the planning of articulated robots.
A hybrid algorithm called "Probabilistic Highway RoadMap (Prob-HRM)" is proposed here to make planning in higher dimensional configuration spaces tractable. It randomly samples the rotational components (i.e., the base orientation and internal joint angles) and takes advantage of the explicit parameterizations of free space in each C-slice from HRM.
This article extends the conference version [1] on the same topic, and has significant updates. Compared to the conference paper, the key contributions of this article are:
• Extend the graph construction procedure in each C-slice to the 3D multi-body case;
• Introduce a novel "bridge C-slice" method to connect vertices between adjacent C-slices;
• Propose a hybrid planner which integrates the advantages of sampling-based planners on higher dimensional articulated robot planning problems;
• Conduct rigorous benchmark simulations and physical experiments in challenging environments to evaluate the proposed planning framework.
These extensions are essential since more general 3D and articulated robot models are implemented. The benchmark and physical experimental settings are also more realistic. The rest of this article is organized as follows. Section II reviews related literature. Section III provides mathematical foundations. Section IV extends our previously proposed HRM planner to the case of a 3D multi-body robot with ellipsoidal components. The novel "bridge C-slice" method is then introduced. Section V introduces the hybrid Prob-HRM planner. Section VI conducts extensive benchmarks with some popular and successful sampling-based planners. In Section VII, our planning framework is demonstrated by physical experiments in the real world, which solve walking path planning problems for a humanoid robot in cluttered environments. We discuss the advantages and limitations of our proposed framework in Section VIII. Finally, we conclude in Section IX.
II. LITERATURE REVIEW
This section reviews related work on the key topics that this article addresses.
A. The challenge of narrow passages
One of the key factors that affects the performance of sampling-based planners is the random state sampling strategy. To tackle the "narrow passage" challenge, various sampling strategies have been studied throughout these years, many of which try to capture the local features around obstacles.
The bridge test [6] finds a collision-free middle point between configurations that are in collision with the obstacles. UOBPRM [16] searches for collision-free samples from a configuration in collision by moving along different ray directions. In [8], a Bayesian learning scheme is used to model sampling distributions. It subsequently updates the previous samples by maximizing the likelihood from the region that has a higher probability of forming a valid path within the narrow passage. Ideas about generating samples on the "medial axis" were proposed in [17], [18]. Each sampled state, regardless of whether it is free or in collision, is retracted to the medial axis of the free space. The retraction direction is selected between the sampled state and its nearest neighbor on the boundary of the free space. The resulting samples stay far from obstacles, and the use of in-collision samples makes it possible to detect regions close to narrow passages. The proposed framework in this article also attempts to generate vertices that stay as far away from obstacles as possible. A similar idea is used in the "maximize clearance" sampler, i.e., PRM(MC), in the benchmark studies of this article. For each valid sample, the sampler searches for a nearby sample with a larger distance to the obstacles. We use PRM(MC) for comparisons since it is implemented in the well-known Open Motion Planning Library (OMPL) [19]. This provides a standardized way to benchmark against other sampling-based planning algorithms as well as samplers.
Other methods combine the advantages of different kinds of algorithms. For example, Toggle PRM [20] simultaneously maps both free space and obstacle space, enabling an augmentation from a failed connection attempt in one space to the other. Spark PRM [7] grows a tree inside the narrow passage region to connect different parts of the roadmap on different ends of the region. Retraction-based RRT [21] tries to retract initial samples into more difficult regions, so as to increase the probability of sampling near narrow passages. More recently, a reinforcement learning method has been applied to enhance the ability to explore local regions where the tree grows [22].
The hybrid planner of [15] combines a random sampling strategy with Minkowski sum computations, which increases the probability of identifying narrow regions. The Prob-HRM planner proposed in this article uses a similar strategy of randomly sampling the robot shapes. Nevertheless, the differences are significant. We propose a closed-form Minkowski sum expression for continuous bodies, as compared to point-based Minkowski sums for polyhedral objects. To generate valid vertices, they directly choose points on the C-obstacle boundary, whereas we generate vertices in the middle of the free space in a more uniform way. And to connect different C-slices, they add a new vertex and search for paths on the C-obstacle boundaries, whereas we generate a new slice based on an enlarged void.
B. Computations of Minkowski sums
The Minkowski sum is ubiquitous in many fields such as computational geometry [23], robot motion planning [13], control theory [24], etc. Despite its straightforward definition, which will be given in Sec. III, the complexity of computing an exact boundary of the Minkowski sum between two general non-convex polytopes in R 3 can be as high as O(N_1^3 N_2^3), where N_1 and N_2 are the complexities (i.e., the numbers of facets) of the two polytopes. Therefore, many efficient methods decompose the general polytopes into convex components [25], since the Minkowski sum between two convex polytopes can be computed with O(N_1 N_2) complexity [26]. Another type of method is based on convolutions of the two bodies, since the Minkowski sum of two solid bodies is the support of the convolution of their indicator functions [27]. A simple approximate algorithm [28] avoids computing the 3D arrangement and winding numbers by using collision detection. An exact Minkowski sum for polytopes containing holes has been proposed using convolution [29]. In addition, point-based methods avoid convex decomposition [30]. Their major advantages are that points are easier to generate than meshes and that the computations can be parallelized [31]. An exact closed-form Minkowski sum formula for d-dimensional ellipsoids was introduced in [32]. And in [33], a parameterized ellipsoidal outer boundary for the Minkowski sum of two ellipsoids is proposed. This article studies a more general case where one body is an ellipsoid and the other is a strictly convex body with a C 1 boundary (e.g., a superquadric).
C. Ellipsoids and superquadrics for object representation
Besides using polyhedra for object representations, other geometric primitives such as ellipsoids and superquadrics also play an important role due to their simple algebraic characterizations. Recently, in many robotic applications, they are good candidates to encapsulate objects [34], [35].
A 3D ellipsoid in a general pose only needs 9 parameters: 3 for the shape (i.e., semi-axis lengths) and 6 for the pose. Algorithms related to ellipsoids have been studied extensively [36], [37]. The minimum volume enclosing ellipsoid (MVEE), which is characterized as a convex optimization problem [38], is widely used to encapsulate a point cloud. The studies of algebraic separation conditions for two ellipsoids provide very efficient algorithms to detect collisions in both static and dynamic cases [39], [40]. Another attractive attribute of the representation using ellipsoids is the existence of efficient procedures for computing their distance [41], [42]. Once an ellipsoid is fully contained in another, the volume of its limited available motions can be computed explicitly [43].
Superquadrics can be seen as an extension of ellipsoids, with the two additional exponents determining the sharpness and convexity [12]. They are able to represent a wider range of geometries such as cube, cylinder, octahedron, etc. Using optimization or deep learning techniques, point cloud data can be segmented and fitted by unions of superquadrics [44], [45]. Proximity queries and contact detection are useful applications of this geometric model [46], [47].
III. MATHEMATICAL PRELIMINARIES
This section provides the mathematical preliminaries for developing the new path planning paradigm in this article.
A. Minkowski sum and difference between two bodies
The Minkowski sum and difference of two point sets (or bodies) centered at the origin, i.e., P 1 and P 2 in R d , are defined respectively as [48]
P_1 ⊕ P_2 ≐ {p_1 + p_2 | p_1 ∈ P_1, p_2 ∈ P_2}, and P_1 ⊖ P_2 ≐ {p | p + P_2 ⊆ P_1}.  (1)
When computing the boundary in which the two bodies touch each other externally (i.e., their contact space), we refer to the calculation of ∂[P 1 ⊕ (−P 2 )], where −P 2 is the reflection of P 2 as viewed in its body frame [28]. Note that when P 2 is centrally symmetric, such as ellipsoids and superquadrics that this article focuses on, the Minkowski sum boundary and contact space are equivalent. Moreover, when the bodies are non-convex, using the fact that
if P_1 = Q_1 ∪ Q_2, then P_1 ⊕ P_2 = (Q_1 ⊕ P_2) ∪ (Q_2 ⊕ P_2),  (2)

their Minkowski sums can be obtained via convex decomposition.
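To make the set-level definitions of Eqs. (1)-(2) concrete, the small numerical check below (not from the original paper; the point sets are arbitrary) verifies the decomposition property on discrete point sets.

import numpy as np

def minkowski_sum(P1, P2):
    """Discrete Minkowski sum of two point sets (Eq. (1)): all pairwise sums."""
    return np.unique((P1[:, None, :] + P2[None, :, :]).reshape(-1, 2), axis=0)

# Eq. (2): decomposing P1 = Q1 U Q2 and summing each part gives the same set.
Q1 = np.array([[0, 0], [1, 0]])
Q2 = np.array([[0, 1]])
P2 = np.array([[0, 0], [0, 2]])
lhs = minkowski_sum(np.vstack([Q1, Q2]), P2)
rhs = np.unique(np.vstack([minkowski_sum(Q1, P2), minkowski_sum(Q2, P2)]), axis=0)
print(np.array_equal(lhs, rhs))   # True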
B. Implicit and parametric surfaces
Assume that S 1 is a strictly convex body bounded by a C 1 hyper-surface embedded in R d . The implicit and parametric forms of its surface can be expressed as
Φ(x_1) = 1 and x_1 = f(ψ_1),  (3)
where Φ(·) is a real-valued differentiable function of x_1 ∈ R^d and f is a differentiable d-dimensional vector-valued function of the surface parameters ψ_1 = [ψ_1, ψ_2, ..., ψ_{d−1}]^⊤ ∈ R^{d−1}. Let E_2 be an ellipsoid in R^d in a general orientation, with semi-axis lengths a_2 = [a_1, a_2, ..., a_d]^⊤. Its implicit and explicit expressions are
x_2^⊤ A_2^{−2} x_2 = 1 and x_2 = A_2 u(ψ_2),  (4)
where A_2 = R_2 Λ(a_2) R_2^⊤ is the shape matrix of E_2, R_2 ∈ SO(d) denotes the orientation of E_2, and Λ(·) is a diagonal matrix with the semi-axis length a_i at the (i, i) entry. The notation A_2^{−2} ≐ (A_2^2)^{−1} = (A_2^{−1})^2 is used here for the sake of simplicity. u(ψ_2) is the standard parameterization of the d-dimensional unit hyper-sphere using d − 1 angles. Specifically, in 2D, u(θ) = [cos θ, sin θ]^⊤, and in 3D, u(η, ω) = [cos η cos ω, cos η sin ω, sin η]^⊤.
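As an illustration of the shape-matrix construction and the parameterization x_2 = A_2 u(ψ_2) in Eq. (4), a minimal Python sketch is given below; the semi-axis values, the sampling resolution, and the function name are arbitrary choices for illustration.

import numpy as np

def ellipsoid_surface(a, R, n_eta=20, n_omega=40):
    """Sample points x = A u(eta, omega) on a 3D ellipsoid, with shape matrix
    A = R diag(a) R^T as in Eq. (4)."""
    A = R @ np.diag(a) @ R.T                       # shape matrix of the ellipsoid
    eta = np.linspace(-np.pi / 2, np.pi / 2, n_eta)
    omega = np.linspace(-np.pi, np.pi, n_omega)
    eta, omega = np.meshgrid(eta, omega)
    u = np.stack([np.cos(eta) * np.cos(omega),     # unit-sphere parameterization u(eta, omega)
                  np.cos(eta) * np.sin(omega),
                  np.sin(eta)], axis=-1)
    return u @ A.T                                 # each point is A u

# e.g., an ellipsoid with semi-axes (3, 2, 1) in the identity orientation
points = ellipsoid_surface(np.array([3.0, 2.0, 1.0]), np.eye(3))
print(points.shape)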
One class of strictly convex bodies meeting the conditions stated earlier includes those with specific kinds of superquadric boundaries. The implicit equations in the 2D and 3D cases are given by

Φ(x, y) = (x/a)^{2/ǫ} + (y/b)^{2/ǫ},  (5)

and

Φ(x, y, z) = [ (x/a)^{2/ǫ_2} + (y/b)^{2/ǫ_2} ]^{ǫ_2/ǫ_1} + (z/c)^{2/ǫ_1},  (6)
where a, b, c are the semi-axes lengths, and ǫ, ǫ 1 , ǫ 2 ∈ (0, 2) are the exponents that ensure strict convexity.
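A minimal implementation of the implicit functions in Eqs. (5)-(6) might look as follows; absolute values are used here so that the non-integer exponents are well defined for negative coordinates, and the function names are only illustrative.

import numpy as np

def superellipse_phi(x, y, a, b, eps):
    """2D implicit function of Eq. (5); Phi = 1 on the boundary."""
    return np.abs(x / a) ** (2.0 / eps) + np.abs(y / b) ** (2.0 / eps)

def superquadric_phi(x, y, z, a, b, c, eps1, eps2):
    """3D implicit function of Eq. (6)."""
    xy = np.abs(x / a) ** (2.0 / eps2) + np.abs(y / b) ** (2.0 / eps2)
    return xy ** (eps2 / eps1) + np.abs(z / c) ** (2.0 / eps1)

# Points with Phi < 1 are inside the body, Phi > 1 outside (body-frame coordinates).
print(superquadric_phi(0.5, 0.0, 0.0, a=1.0, b=1.0, c=1.0, eps1=1.0, eps2=1.0))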
C. Closed-form Minkowski operations between an ellipsoid and a general convex differentiable surface
It has been shown previously in [32] that the Minkowski sum and difference between two ellipsoids can be parameterized in closed-form. The expression can be extended when one ellipsoid is substituted by S 1 [1]. The general simplified form for the Minkowski sum can be computed as
x_mb = x_1 + [ R_2 Λ²(a_2) R_2^⊤ ∇_{x_1}Φ(x_1) ] / ‖ Λ(a_2) R_2^⊤ ∇_{x_1}Φ(x_1) ‖,  (7)
where ∇ x1 Φ(x 1 ) is the gradient of S 1 at x 1 . The conditions that S 1 is strictly convex and its boundary is C 1 ensure that the gradient exists and that there is never division by zero when using Eq. (7). Figure 2 illustrates the geometric interpretation of the computational process. Detailed derivations were presented in [1].
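For concreteness, the sketch below evaluates Eq. (7) in the 2D case for an axis-aligned superellipse S_1 and a rotated ellipse E_2; the parameter values and the function name are illustrative assumptions, not the authors' implementation.

import numpy as np

def mink_sum_boundary_2d(a1, b1, eps, a2, angle2, n=200):
    """Closed-form Minkowski sum boundary of Eq. (7) for a 2D superellipse S1
    (semi-axes a1, b1, exponent eps, axis-aligned) and an ellipse E2 with
    semi-axes a2 = (a2x, a2y) rotated by angle2."""
    # Boundary points of S1 and the gradient of its implicit function, Eq. (5).
    psi = np.linspace(-np.pi, np.pi, n, endpoint=False)
    x1 = np.stack([a1 * np.sign(np.cos(psi)) * np.abs(np.cos(psi)) ** eps,
                   b1 * np.sign(np.sin(psi)) * np.abs(np.sin(psi)) ** eps], axis=1)
    grad = np.stack([(2 / eps) * np.sign(x1[:, 0]) * np.abs(x1[:, 0] / a1) ** (2 / eps - 1) / a1,
                     (2 / eps) * np.sign(x1[:, 1]) * np.abs(x1[:, 1] / b1) ** (2 / eps - 1) / b1], axis=1)
    c, s = np.cos(angle2), np.sin(angle2)
    R2 = np.array([[c, -s], [s, c]])
    Lam = np.diag(a2)
    num = (R2 @ Lam @ Lam @ R2.T @ grad.T).T                       # R2 Λ^2(a2) R2^T ∇Φ
    den = np.linalg.norm((Lam @ R2.T @ grad.T).T, axis=1, keepdims=True)
    return x1 + num / den                                          # Eq. (7)

boundary = mink_sum_boundary_2d(2.0, 1.0, 0.8, np.array([0.5, 0.3]), np.pi / 6)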
D. The minimum volume concentric ellipsoid (MVCE) enclosing two ellipsoids with the same center
When two ellipsoids are fixed at the same center, a "minimum volume concentric ellipsoid (MVCE)" can be computed in closed form as follows.
Consider two d-dimensional ellipsoids E_a and E_b with semi-axis lengths a and b, respectively. One ellipsoid (e.g., E_b) can be shrunk into a sphere (E'_b) of radius r via the affine transformation T = R_b Λ(r/b) R_b^⊤, where r/b ≐ [r/b_1, r/b_2, ..., r/b_d]^⊤ ∈ R^d. Then the shape matrix of E_a in the shrunk space, i.e., E'_a, can be computed as A' = T^{−1} R_a Λ^{−2}(a) R_a^⊤ T^{−1}. Using singular value decomposition (SVD), its semi-axis lengths and orientation, i.e., a' and R'_a, can be obtained. The shape matrix of their MVCE, i.e., E_m, is obtained as M = T R'_a Λ^{−2}(max(a', r)) R'^⊤_a T, where max(a', r) ≐ [max(a'_1, r), ..., max(a'_d, r)]^⊤ and a' ≐ [a'_1, a'_2, ..., a'_d]^⊤ ∈ R^d.
The computational procedure is visualized in Fig. 3 for the 3D case. The idea here is inspired by [36], which provides equivalent computations for a maximum volume concentric ellipsoid covered by two ellipsoids.
Furthermore, this process can be applied iteratively if there are multiple concentric ellipsoids. For example, the MVCE that encloses the first two ellipsoids can, together with the next ellipsoid, be enclosed by a new MVCE. The final resulting ellipsoid encapsulates the entire original set of ellipsoids, and is denoted as a tightly-fitted ellipsoid (TFE).
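A rough numpy sketch of the MVCE construction described above follows; the choice r = min(b) for the sphere radius and the convention that the returned matrix M satisfies x^⊤ M x ≤ 1 are assumptions made here for illustration.

import numpy as np

def mvce(a, Ra, b, Rb):
    """Minimum-volume concentric ellipsoid enclosing two concentric 3D ellipsoids
    Ea (semi-axes a, rotation Ra) and Eb (semi-axes b, rotation Rb), following the
    shrink / SVD / stretch-back procedure of Sec. III-D."""
    r = np.min(b)
    T = Rb @ np.diag(r / b) @ Rb.T                   # affine map shrinking Eb to a sphere of radius r
    Ti = np.linalg.inv(T)
    Ap = Ti @ Ra @ np.diag(1.0 / a**2) @ Ra.T @ Ti   # shape matrix of Ea in the shrunk space
    U, s, _ = np.linalg.svd(Ap)                      # eigenvalues s = 1 / a'^2
    a_shrunk = 1.0 / np.sqrt(s)
    semi = np.maximum(a_shrunk, r)                   # enclose both Ea' and the sphere
    return T @ U @ np.diag(1.0 / semi**2) @ U.T @ T  # stretch back to the original space

M = mvce(np.array([3.0, 2.0, 1.0]), np.eye(3),
         np.array([1.5, 1.0, 0.5]), np.eye(3))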
E. Superquadric model fitting to point cloud data
Given a set of m 3D points {x_i = [x_i, y_i, z_i]^⊤, i = 1, ..., m}, a superquadric model can be approximated by minimizing [45], [49]

min_{a,b,c,ǫ_1,ǫ_2,R,t}  abc Σ_{i=1}^{m} ( Φ^{ǫ_1}(x'_i, y'_i, z'_i) − 1 )²,  (8)

where Φ(·) is shown in Eq. (6), x'_i = R^⊤(x_i − t) is the transformed data point as viewed in the body frame of the superquadric, and R ∈ SO(3) and t ∈ R³ are the orientation and center of the superquadric, respectively. The factor abc is added here in order to minimize the volume of the fitted superquadric body. Similarly, for the 2D case, the corresponding nonlinear optimization problem can be formulated as

min_{a,b,ǫ,θ,t}  ab Σ_{i=1}^{m} ( Φ^{ǫ}(x'_i, y'_i) − 1 )²,  (9)

where Φ(·) now refers to Eq. (5) and θ is the rotation angle of the 2D superellipse. Solving the above optimizations requires good initial conditions. The parameters from the minimum volume enclosing ellipsoid (MVEE) are used, which can be computed using convex optimization as [38]

min_{A,t}  log det A   s.t.  A ≻ 0,  (x_i − t)^⊤ A^{−2} (x_i − t) ≤ 1  (i = 1, ..., m),  (10)
where A is the shape matrix of an ellipsoid as in Eq. (4). This convex optimization process can also be used to bound the robot parts if they are originally modeled by surface meshes.
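As an illustration of the 2D fitting problem in Eq. (9), the sketch below minimizes the corresponding residuals with SciPy; the sample boundary points, the initial guess, and the exponent-weighted residual form are illustrative assumptions (in practice the initial guess would come from the MVEE of Eq. (10)).

import numpy as np
from scipy.optimize import least_squares

def residuals(params, pts):
    """Residuals whose sum of squares approximates the cost in Eq. (9)."""
    a, b, eps, theta, tx, ty = params
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    local = (pts - np.array([tx, ty])) @ R          # body-frame coordinates R^T (x - t)
    phi = np.abs(local[:, 0] / a) ** (2 / eps) + np.abs(local[:, 1] / b) ** (2 / eps)
    return np.sqrt(a * b) * (phi ** eps - 1.0)      # sqrt(ab) so squared residuals carry the ab factor

# Hypothetical noisy boundary points of a superellipse with a=2, b=1, eps=0.5.
t = np.linspace(-np.pi, np.pi, 100, endpoint=False)
pts = np.stack([2.0 * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** 0.5,
                1.0 * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** 0.5], axis=1)
pts += 0.01 * np.random.randn(*pts.shape)

x0 = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
fit = least_squares(residuals, x0, args=(pts,),
                    bounds=([1e-3, 1e-3, 0.1, -np.pi, -np.inf, -np.inf],
                            [np.inf, np.inf, 1.9, np.pi, np.inf, np.inf]))
print(fit.x)   # fitted [a, b, eps, theta, tx, ty]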
IV. HIGHWAY ROADMAP PLANNING ALGORITHM FOR RIGID-BODY ROBOTS WITH ELLIPSOIDAL COMPONENTS
This section introduces the extended "Highway RoadMap (HRM)" algorithm. The extension to the previous work [9] from 2D to 3D rigid-body path planning problems is explained here. Then, a novel vertex connection strategy for configurations with different rotational components is proposed. This strategy can be applied when the robot is constructed by a union of ellipsoids. Also, a procedure to iteratively refine the roadmap is introduced.
A. Overview of the Highway RoadMap planner
The general workflow to construct this graph-based roadmap system is illustrated in Alg. 1. To visually demonstrate the concept, a fully connected graph obtained by running our algorithm in the planar case is shown in Fig. 4.

Fig. 2 panels (cont.): (c) E_2 is shrunk into a sphere, and an offset surface is computed; (d) stretch back and obtain S_1 ⊕ (−E_2) (the yellow region). Fig. 3 panels: (a) two concentric 3D ellipsoids, E_a and E_b; (b) shrink E_b into a sphere E'_b.

Algorithm 1:
Output: path — an ordered list of configurations
1 R ← SampleOrientations(N_slice);
2 foreach i < N_slice do
3   robot.ForwardKinematics(R_i);
4   roadmap ← ConstructOneSlice(robot, obstacle, arena, N_line);
5 end
6 foreach i < N_slice do
7   roadmap ← ConnectAdjacentSlice(i, robot, R, N_point);
8 end
9 path ← GraphSearch(roadmap, endpts);
10 while Not TerminationCondition do
11   roadmap, path ← RefineExistRoadMap(robot, obstacle, arena, N_line, R);
12 end
The robot input is a union of ellipsoids, including the body shapes and kinematic data. The kinematic data of each body part stores the relative rigid-body transformation with respect to the base. The input environmental data includes a set of superquadric objects that represent the obstacles and the arena, and the endpts input indicates the start and goal configurations of the robot. There are two major input parameters: the number of C-slices N_slice and the initial number of sweep lines at each C-slice, N_line. These two parameters determine the initial resolution of the roadmap; N_line is increased after the initial roadmap is built if the termination condition has not been reached. The outputs of the algorithm are the roadmap and the path. The roadmap is represented as a graph structure, and the path stores an ordered list of valid configurations from the start to the goal. The algorithm terminates when any of the following conditions is satisfied: a valid path is found, the maximum planning time is reached, or the maximum number of sweep lines is generated.
In Alg. 1, Line 1 generates N_slice discrete rotations in SO(d), which are stored a priori. Then, the forward kinematics is computed in Line 3 to rotate the rigid-body robot. At each fixed orientation, a subset of the C-space that only contains translations, denoted here as a "C-slice", is built in Line 4. Once all C-slices are constructed, the vertices among adjacent C-slices are connected via a novel "bridge C-slice" idea in Line 7. Each constructed C-slice only connects to its most adjacent C-slice. In Line 9, a graph search technique is applied to find a path from the start configuration to the goal; in this work, the A* algorithm [50] is used. When the termination condition is not satisfied, the roadmap is refined iteratively in Line 11.
B. Discretization of the robot orientations
Line 1 of Alg. 1 pre-computes a set of orientation samples from SO(d). In the 2D case, uniformly distributed angles within the interval [−π, π] are computed. In 3D, the icosahedral rotational symmetry group of the Platonic solid (consisting of 60 elements) is used, which gives a finite and deterministic sampling of SO(3). The geodesic distances between two neighboring samples are almost uniformly distributed [51].

Algorithm 2: ConstructOneSlice(robot, obstacle, arena, N_line)
 1  C_obstacle, C_arena ← MinkowskiOperations(robot, obstacle, arena);
 2  C_free ← SweepLineProcess(C_obstacle, C_arena, N_line);
 3  roadmap.vertex.Append( GenerateVertex(C_free) );
 4  roadmap.edge.Append( ConnectOneslice(C_free) );
Return: roadmap
Using this set of orientation samples, the rotational difference between two adjacent C-slices is smaller than with nonuniform sample sets. Note that more rotations can be sampled to construct a denser roadmap per the user's choice, e.g., using the strategies proposed in [52], [53].
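For reference, one common way to draw additional orientation samples uniformly at random is Shoemake's quaternion method, sketched below together with the geodesic distance used to compare rotations. This is an illustrative sketch, not necessarily the exact strategy of [52], [53].

import numpy as np

def random_unit_quaternion(rng=np.random.default_rng()):
    """Uniform random rotation as a unit quaternion (w, x, y, z), Shoemake's method."""
    u1, u2, u3 = rng.random(3)
    return np.array([np.sqrt(u1) * np.cos(2 * np.pi * u3),
                     np.sqrt(1 - u1) * np.sin(2 * np.pi * u2),
                     np.sqrt(1 - u1) * np.cos(2 * np.pi * u2),
                     np.sqrt(u1) * np.sin(2 * np.pi * u3)])

def geodesic_distance(q1, q2):
    """Geodesic distance on SO(3) between two unit quaternions."""
    return 2.0 * np.arccos(np.clip(abs(np.dot(q1, q2)), -1.0, 1.0))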
C. Construction of one C-slice
The detailed procedure to construct one single C-slice (i.e., Line 4 of Alg. 1) is outlined in Alg. 2. Within each C-slice, the closed-form Minkowski sum and difference of the robot bodies with the obstacles and arena are computed, which results in C_obstacle and C_arena, respectively (Line 1). By sweeping parallel lines throughout the C-slice with a certain resolution (indicated by N_line), the free portion of the C-slice (C_free) is detected and represented as a set of line segments (Line 2). Furthermore, the middle point of each collision-free line segment is generated as a sampled vertex of the roadmap (Line 3). Then, vertices on adjacent sweep lines are candidates to be connected by collision-free edges (Line 4).
1) Minkowski operations for a multi-body robot:
At each C-slice, the closed-form Minkowski operations are computed to generate C-obstacles (i.e., Line 1 of Alg. 2). The robot is constructed by a finite union of M rigidly connected ellipsoids $E_1, E_2, \ldots, E_M$. Without loss of generality, $E_1$ is chosen as the base of the robot. The relative transformations between the base $E_1$ and the other ellipsoidal parts $E_2, E_3, \ldots, E_M$ are defined as $g_i = (R_i, \mathbf{t}_i)$ $(i = 2, \ldots, M)$, respectively. For a multi-link rigid-body robot, these relative transformations can be computed via forward kinematics with all the internal joints being fixed. With this definition and the property from Eq. (2), the union of the Minkowski operations for all body parts can be expressed relative to one single reference point, which we choose as the center of the base ellipsoid $E_1$. In particular, for each ellipsoidal body $E_i$, a positional offset $\mathbf{t}_i$ is added to Eq. (7). For practical computational purposes, each Minkowski sum and difference boundary is discretized as a convex polygon in 2D and a polyhedral mesh in 3D. The vertices of the discrete boundary are generated using the parametric expression of the Minkowski operations. Figure 5 shows the Minkowski sums of a multi-body robot at a fixed orientation (Fig. 5a) and the collision-free C-space in the corresponding C-slice (Fig. 5b).
2) A sweep-line process to characterize free regions within one C-slice: The general idea of the "sweep-line" process (i.e., Line 2 of Alg. 2) is analogous to raster scanning: a set of parallel lines is defined to sweep throughout the whole C-slice. Theoretically, these parallel lines can be defined along any direction. But for simplicity of representation and storage, throughout this article the lines are defined to be parallel to the basis axes of the coordinate system. Specifically, the sweep lines are parallel to the x-axis and z-axis for the 2D and 3D cases, respectively. Note that, in the 3D case, one could think of the process as first sweeping planes through the 3D translational space, then sweeping lines within each plane. In practice, however, there is no need to compute each plane completely, including the silhouettes of the C-obstacles. Instead, this work approximates each plane by a bundle of sweep lines, which are then used directly to compute free segments via line-obstacle intersections.
To generate collision-free configurations, segments on each sweep line within all C-obstacles and C-arenas are computed, denoted as L O and L A , respectively. Then, the collision-free segments L free can be computed as [9], [54]
$$L_{\mathrm{free}} \;=\; \bigcap_{i=1}^{M_A \times M} L_{A_i} \;-\; \bigcup_{j=1}^{M_O \times M} L_{O_j} \,, \qquad (11)$$
where $M_A$ and $M_O$ are the numbers of arenas and obstacles, respectively. All $L_{\mathrm{free}}$ are stored in C_free (Line 2 of Alg. 2). Then, collision-free vertices are generated as the middle point of each $L_{\mathrm{free}}$ (Line 3 of Alg. 2). Afterwards, more vertices can be generated as an enhancement step; one such step is described here and applied throughout this article. Denote $L_{j,k}$ as the $k$-th free segment of the $j$-th sweep line, with $V_{j,k}$ being its corresponding middle point. Firstly, $L_{j+1,k_2}$ is projected onto $L_{j,k_1}$. If the projection overlaps with $L_{j,k_1}$ but $V_{j,k_1}$ is not within the overlapping segment, a new vertex within the overlapping segment that is nearest to $V_{j,k_1}$ is added to the vertex list. The resulting new vertex is closer to $V_{j+1,k_2}$ than $V_{j,k_1}$ is, which gives a higher chance for the subsequent connection to succeed, especially in narrow regions. Once a list of collision-free vertices is generated, the next step is to connect them (Line 4 of Alg. 2). In this work, only vertices on adjacent sweep lines are candidates to be connected by a straight line segment. Assume a candidate connection is attempted between $V_{j,k_1}$ and $V_{j+1,k_2}$. The connection validity is checked by computing the intersections between the line segment $V_{j,k_1} V_{j+1,k_2}$ and all meshed C-obstacles. If the segment is outside all C-obstacles, the whole edge is guaranteed to be collision-free. Figure 6 shows the decomposed C-space in one slice of a planar case; the horizontal raster lines indicate the collision-free line segments. This method provides a continuous way of validating edges within each C-slice, in the sense that the whole edge is checked without interpolation.
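A minimal sketch of the interval bookkeeping behind Eq. (11) is given below; the per-line segment endpoints are assumed to be given, and all names are illustrative.

def intersect(segs_a, segs_b):
    """Pairwise intersection of two lists of closed intervals (lo, hi)."""
    out = []
    for lo1, hi1 in segs_a:
        for lo2, hi2 in segs_b:
            lo, hi = max(lo1, lo2), min(hi1, hi2)
            if lo < hi:
                out.append((lo, hi))
    return out

def subtract(segs, holes):
    """Remove every interval in 'holes' from every interval in 'segs'."""
    for lo_h, hi_h in holes:
        nxt = []
        for lo, hi in segs:
            if hi_h <= lo or hi <= lo_h:        # no overlap: keep as-is
                nxt.append((lo, hi))
            else:                                # keep the uncovered parts
                if lo < lo_h:
                    nxt.append((lo, lo_h))
                if hi_h < hi:
                    nxt.append((hi_h, hi))
        segs = nxt
    return segs

def free_segments(arena_segs_list, obstacle_segs_list):
    """L_free on one sweep line, Eq. (11): inside all arena segments, outside all obstacles."""
    free = arena_segs_list[0]
    for segs in arena_segs_list[1:]:
        free = intersect(free, segs)
    for segs in obstacle_segs_list:
        free = subtract(free, segs)
    return free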
D. Vertex connections between adjacent C-slices
Since each C-slice only represents one orientation of the robot, rotational motions are required when connecting different C-slices. A novel "bridge C-slice" method is proposed (i.e., in Line 7 of Alg. 1) to guarantee that the vertices at different C-slices can be safely connected without performing explicit collision detection. Algorithm 3 outlines this new local planner. The general idea is to construct a new C-slice based on an enlarged ellipsoidal void that encloses the robot at two configurations and compute a translational sweep volume that bounds the whole transition.
1) General ideas of the "bridge C-slice" local planner: Each C-slice only attempts to connect with one adjacent C-slice, which is searched at the beginning in Line 1 of Alg. 3. And the metric that evaluates adjacency is based on the distance of the rotational components. In the 3D case, for instance, the Euclidean distance between the quaternion parameterization of the two bodies is used. The core steps in Alg. 3 are Line 2, which constructs an enlarged tightly-fitted
Algorithm 3: ConnectAdjacentSlice(i, robot, R, N_point)
 1  R_near ← GetAdjacentSlice(i, R);
 2  TFE ← ComputeTightlyFittedEllipsoids(robot, R_i, R_near, N_point);
 3  C_obstacle, C_arena ← MinkowskiOperations(TFE, obstacle, arena);
 4  foreach vertex V_1 in current C-slice do
 5      {V_1,near} ← NeighborVerticesInAdjacentSlice(roadmap.vertex);
 6      foreach V_2 ∈ {V_1,near} do
 7          {V_step} ← PathInterp(V_1, V_2, N_point);
 8          if IsPathValid({t_step}, C_obstacle, C_arena) then
 9              roadmap.edge.Append({V_1, V_2});

Suppose that the robot is moving from vertex $V_1 = (R_1, \mathbf{t}_1)$ to $V_2 = (R_2, \mathbf{t}_2)$, where $R_i$ and $\mathbf{t}_i$ $(i = 1, 2)$ represent the rotation and translation parts of vertex $V_i$, respectively. The idea here is to enclose the motion of each ellipsoidal part of the robot, i.e., $E_k$, between the two configurations by a tightly fitted sweep volume, which is guaranteed to be collision-free. The intermediate configurations between $V_1$ and $V_2$ can be computed using an interpolation technique. To construct the sweep volume, a tightly-fitted concentric ellipsoid (TFE) of $E_k$ over all orientations of the interpolated motion is computed, which will be detailed in Sec. IV-D2. The computed TFE is the void that guards the safe motion of the actual ellipsoidal part. Then, the computed TFE translates from $\mathbf{t}_1$ to $\mathbf{t}_2$ following the interpolated path (i.e., $\{\mathbf{t}_{\mathrm{step}}\}$) of $E_k$'s center. The resulting sweep volume bounds the whole transition of $E_k$ between the two configurations. To ensure that each computed TFE stays inside the collision-free space, one can query the inside-outside status of all the intermediate translation parts $\{\mathbf{t}_{\mathrm{step}}\}$ with respect to all C-obstacles and the C-arena. If all the positions from $\{\mathbf{t}_{\mathrm{step}}\}$ are valid, the sweep volume is guaranteed to be safe, and therefore the whole transition of the ellipsoidal part $E_k$ is collision-free. Figure 7a shows the procedure of constructing the sweep volume for an individual body part, and Fig. 7b illustrates the union of sweep volumes that encloses the whole multi-body robot in the planar case. The robot base follows a 2D straight line with rotations, and the TFEs of different body parts follow different paths (shown as white curves). In this process, the TFE for each body part translates with respect to its own center individually. This differs from the operations within one C-slice, which require an offset to the C-obstacle and C-arena boundaries in order to treat the robot as a whole rigid body. The reason is that, as Fig. 7b shows, the motion of each robot part is no longer a pure translation. Therefore, the reference points of the Minkowski operations for different body parts have different trajectories to follow. The transition for the whole robot is guaranteed safe if all the individual reference points are within their own free space.
2) Computational procedure for "Tightly-Fitted Ellipsoids": Line 2 of Alg. 3 generates the TFE for each individual part of the robot. The detailed computational procedure is shown in Alg. 4. Firstly, an interpolation of the orientations is computed (Line 1); the number of intermediate orientations is pre-defined by the user as the parameter N_point. Then, the TFE set, represented as a set of ellipsoids, is initialized as the robot at the i-th orientation (Line 2). For each interpolated step, the orientations of all the robot parts are updated (Line 4). Finally, the updated TFE for each robot part is generated by computing the minimum volume concentric ellipsoid (MVCE) (introduced in Sec. III-D) of the current TFE and the corresponding ellipsoidal part at the new orientation (Line 6). This procedure requires N_point iterations so that all the interpolated orientations between two C-slices can be fully encapsulated.
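A compact sketch of Alg. 4 is given below. It reuses the mvce() helper from the Sec. III-D sketch and uses SciPy's Slerp for the rotation interpolation; both choices are illustrative assumptions rather than the paper's implementation.

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def compute_tfe(parts, R_base_i, R_base_j, n_point):
    """Sketch of Alg. 4. 'parts' is a list of (R_rel, semi_axes) for each ellipsoidal
    body expressed relative to the base; mvce() is the helper defined earlier."""
    key_rots = Rotation.from_matrix(np.stack([R_base_i, R_base_j]))
    slerp = Slerp([0.0, 1.0], key_rots)
    steps = slerp(np.linspace(0.0, 1.0, n_point)).as_matrix()

    # initialize each TFE as the corresponding part at the i-th orientation
    tfe = [(R_base_i @ R_rel, np.asarray(a, float)) for R_rel, a in parts]
    for R_base in steps[1:]:
        for k, (R_rel, a) in enumerate(parts):
            R_tfe, a_tfe = tfe[k]
            M = mvce(R_tfe, a_tfe, R_base @ R_rel, np.asarray(a, float))
            U, s, _ = np.linalg.svd(M)          # convert back to (orientation, semi-axes)
            tfe[k] = (U, 1.0 / np.sqrt(s))
    return tfe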
3) Vertex connections based on bridge C-slice calculations: A "bridge C-slice" is constructed via closed-form Minkowski operations between the computed TFE and the obstacles/arena (Line 3 of Alg. 3). Then, the algorithm attempts to connect all the existing vertices to their nearest neighbors within the adjacent C-slice. The nearest neighbors of a vertex are defined as the vertices located on the same sweep line in the adjacent C-slice (Line 5 of Alg. 3).
Algorithm 4: ComputeTightlyFittedEllipsoids(robot, R_i, R_j, N_point)
 1  {R_step} ← RotationInterpolation(R_i, R_j, N_point);
 2  TFE ← robot.ForwardKinematics(R_i);
 3  foreach R_step do
 4      robot.ForwardKinematics(R_step);
 5      foreach robot part E_k do
 6          TFE[k] ← MVCE(TFE[k], E_k);
 7      end
 8  end
Return: TFE (a set of TFEs for different robot parts)

For each candidate connection, the robot is transformed according to the interpolated configurations between the two vertices (Line 7 of Alg. 3). Note that the rotation part of each interpolated motion needs to match the rotations used when computing the TFEs (i.e., Line 1 of Alg. 4). This is not hard to achieve for a typical interpolation of rigid-body motions, even when a simultaneous rotation and translation is considered. For example, this article uses interpolations in SE(3) of the form $g_{\mathrm{step}} = g_1 \exp[\tau \log(g_1^{-1} g_2)]$, where $\tau \in [0, 1]$ parameterizes the transition, $g_1, g_2 \in SE(3)$ are the two end points of the interpolation, and $\exp[\cdot]$ and $\log(\cdot)$ are the matrix exponential and logarithm, respectively. The rotation part of each step is the same as for interpolations on SO(3), since the group operation for the rotation part is not affected by the translation part.
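A direct, if not the most efficient, way to realize this interpolation is via dense matrix exponentials and logarithms, as sketched below; in practice, the closed-form se(3) exponential and logarithm can be used instead.

import numpy as np
from scipy.linalg import expm, logm

def interp_se3(g1, g2, n_point):
    """Interpolate between two 4x4 homogeneous transforms g1, g2 in SE(3) via
    g_step = g1 * exp(tau * log(g1^{-1} g2)), tau in [0, 1]."""
    rel = np.real(logm(np.linalg.inv(g1) @ g2))   # element of the Lie algebra se(3)
    return [g1 @ expm(tau * rel) for tau in np.linspace(0.0, 1.0, n_point)]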
With the C-obstacle and C-arena computed for the TFE of each individual robot part, the next step is to check the validity of the translational motion of each TFE (Line 8 of Alg. 3). The inside-outside status of each interpolated center point with respect to all the C-obstacles is queried. If any of the center points is inside any C-obstacle, the validation process is terminated and the corresponding connection is discarded. Otherwise, further checks for the other ellipsoidal parts are conducted until all parts are checked.
The sweep volume gives a conservative encapsulation of the robot transitions between two vertices. But if the orientation samplings are incremental and uniform, there will not be a large rotational difference between adjacent C-slices. Thus, the extra free space inside the sweep volume will be small.
E. Refinement of the existing roadmap
Line 11 of Alg. 1 refines the existing roadmap by increasing the density of sweep lines at each existing C-slice. This process is triggered when the termination conditions are not satisfied after building and searching the whole roadmap. Detailed procedures are presented in Alg. 5. Firstly, N_line is doubled (Line 1). Then, at each C-slice, the same procedure as in Alg. 2 is performed to construct one C-slice with more sweep lines (Line 4). Note that the C-obstacles are stored during the initial construction of the C-slices, so they can be retrieved directly without re-computation. Afterwards, the new, denser set of vertices attempts to connect with the old vertices within the same C-slice (Line 5). This process firstly locates the vertices that have the same rotation part in the existing roadmap; then, each new vertex attempts to connect with nearby existing vertices using the same procedure as in Line 4 of Alg. 2. Once the connections are done, the graph search is performed again (Line 6).

V. HYBRID PROBABILISTIC VARIATION OF HIGHWAY ROADMAP PLANNER FOR ARTICULATED ROBOTS WITH ELLIPSOIDAL COMPONENTS

The original HRM planner in Sec. IV is only designed for the case where the robot parts are rigidly connected to each other. This limits its ability to extend to higher dimensional configuration spaces, i.e., SE(d) × (S^1)^n. To avoid the exponential computational complexity of concatenating all possible combinations of the base pose and joint angles, a hybrid algorithm is proposed here. The general idea is to combine with sampling-based planners, which have proven to be advantageous in dealing with the "curse of dimensionality". Algorithm 6 shows the general workflow of the proposed hybrid probabilistic Highway RoadMap (Prob-HRM) planner. Prob-HRM mainly differs from the original HRM algorithm in that it utilizes random sampling for the rotational components of the robot, i.e., the orientation of the base part and all joint angles.
The robot with fixed rotational components is called a "shape" [15], and a single C-slice is computed for each robot shape. Since the internal joint angles are fixed within each shape, computations within the same C-slice in Prob-HRM stay the same as in HRM, i.e., Line 5 of Alg. 6 is the same as the corresponding subroutine in Alg. 1. Other subroutines are also easily ported from the original HRM to Prob-HRM. In particular, the only difference in the vertex connections among adjacent C-slices (Line 6) compared with HRM is that connection attempts are made only for the new C-slice in the current loop; in HRM, by comparison, the adjacent C-slices are connected at the end, after all C-slices are generated. Also, the graph search process is conducted each time a new C-slice is connected to the graph (Line 8), whereas in HRM the graph search is conducted once after the whole graph is built. The new subroutines in Prob-HRM are the random sampling of robot shapes (Line 3) and the computation of forward kinematics (Line 4) in each loop. To sample a shape, the orientation of the robot base is sampled uniformly at random [52], followed by random sampling of the joint angles within their ranges. After that, the forward kinematics is computed to obtain the poses of all the robot body parts with respect to the world frame. When N_slice reaches a certain number but the termination conditions are still not satisfied, the C-slice exploration is paused and the refinement of the current roadmap is triggered; in practice, this refinement is triggered after every 60 new C-slices are generated. The refinement procedure in Line 12 is the same as Alg. 5. Once all existing C-slices are refined but the algorithm has still not terminated, the exploration of new C-slices is resumed. Note that N_slice, a pre-defined parameter in HRM, is no longer a parameter of Prob-HRM, since the orientation of the robot base and the joint angles are updated online.
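A minimal sketch of the shape-sampling step (Line 3 of Alg. 6) is shown below; it reuses the quaternion sampler from the Sec. IV-B sketch, and the joint-limit handling and names are assumptions.

import numpy as np

def random_robot_shape(joint_limits, rng=np.random.default_rng()):
    """Sketch of RandomSampleRobotShape: a uniform random base orientation plus
    uniformly random joint angles within their limits; illustrative only."""
    q_base = random_unit_quaternion(rng)                      # helper from the Sec. IV-B sketch
    joints = np.array([rng.uniform(lo, hi) for lo, hi in joint_limits])
    return q_base, joints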
VI. BENCHMARKS ON PATH PLANNING FOR ELLIPSOIDAL ROBOTS IN SUPERQUADRIC ENVIRONMENTS

This section compares the performance of the proposed HRM and Prob-HRM planners with some well-known sampling-based motion planners. The proposed planners are written in C++. The baseline sampling-based planners to be compared are sourced from the "Open Motion Planning Library (OMPL)" [19]. All the benchmarks are conducted on Ubuntu 16.04 using an Intel Core i7 CPU at 3.40 GHz × 8.

A. Planning environment and robot type settings

Figure 8 shows the planning environments and the solved paths for different robots using our proposed HRM or Prob-HRM planners. Both rigid-body and articulated robots are considered. The rigid-body robots include:
• tilted rabbit (Fig. 8a), with 3 body parts being rigidly and serially connected but not co-planar; and
• rigid object with 13 parts, resembling a common chair (Fig. 8b).
The articulated robots include:
• snake-like robot (Figs. 8c and 8d), which is serially configured with one movable base and 3 links (9 degrees of freedom in total); and
• tree-like robot (Fig. 8e), which is a tree structure with one movable base in the middle and 3 branches of RRR-type serial linkages (15 degrees of freedom in total).
The planning environments being considered include:
• spatial maze map (Fig. 8a) with narrow corridors;
• home map (Fig. 8b), constructed as a 2-floor house with walls, corridors, stairs and tables;
• cluttered map (Fig. 8c) with obstacles in arbitrary poses;
• narrow window map (Fig. 8d), which includes one wall with a small window available for the robot to move through;
• sparse map (Fig. 8e) with only two obstacles.
B. Parameter settings for planners
The compared baseline sampling-based planners from OMPL are PRM [2], Lazy PRM [5], RRT [3], RRT-Connect [4] and EST [55]. Moreover, different sampling methods for PRM are also considered, including uniform random sampling (Uniform), obstacle-based sampling (OB) [56], Gaussian sampling (Gaussian) [57], bridge test (Bridge) [6] and maximized clearance sampling (MC). We conduct 50 planning trials per planner per map. A time limit of 300 seconds is set for each planning trial for all planners. A planning trial is considered a failure if the time exceeds this limit.
1) Parameters for our proposed HRM-based planners: Table I shows the parameters of the HRM-based planners for each scenario in Fig. 8. N_slice is only defined for the HRM planner, as explained in Sec. IV-B; for scenes including articulated robots (i.e., snake and tree), N_slice is not a pre-defined parameter. The initial value of N_line is either defined by the user or computed according to the planning scenario; in the following benchmark studies, the latter is used. Based on the sizes of the obstacles and robot parts, the initial number of lines along a certain direction (i.e., N_dir) is computed by

$$N_{\mathrm{dir}} = \frac{a_{\mathrm{dir}}(A) - \max_i a(E_i)}{\min_j a(O_j)} \,, \qquad (12)$$
where $a_{\mathrm{dir}}(A)$, $a(E_i)$ and $a(O_j)$ denote the semi-axis lengths of the arena $A$ along direction dir, of an ellipsoidal robot part $E_i$, and of an obstacle $O_j$, respectively. In the 3D case, N_line is the product of the numbers of lines along the x and y axes, i.e., N_line = N_x × N_y.
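The sketch below applies Eq. (12) along the chosen axes; interpreting a(E_i) as the largest robot-part semi-axis, a(O_j) as the smallest obstacle semi-axis, and flooring the ratio are assumptions of this illustration.

import math

def initial_num_lines(arena_semiaxes, robot_semiaxes, obstacle_semiaxes, axes=(0, 1)):
    """Initial sweep-line counts from Eq. (12); returns N_x, N_y and N_line = N_x * N_y."""
    robot_max = max(max(a) for a in robot_semiaxes)
    obs_min = min(min(a) for a in obstacle_semiaxes)
    n = [max(1, math.floor((arena_semiaxes[d] - robot_max) / obs_min)) for d in axes]
    return n[0], n[1], n[0] * n[1]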
2) Parameters for sampling-based planners: For sampling-based planners, the choice of a relatively fast collision checker is essential. We choose the open-source and widely-used "Flexible Collision Library (FCL)" [58] as an external plug-in for collision detection between robot parts and obstacles. In particular, a special and efficient collision object from FCL is applied for the ellipsoidal parts of the robot: the library uses 12 extreme vertices to outer-bound the exact ellipsoidal surface, resulting in a discretized polyhedral model. For superquadrics, the surfaces are discretized as triangular meshes based on the parametric expressions; the bodies can then be treated as convex polyhedra. The collision objects are generated a priori, and the collision queries are made online by only changing the poses of each body part.
Since the efficiency and accuracy of collision checking highly depend on the quality of discretization, we provide a statistical evaluation to determine the number of vertices for the discrete superquadric surface. The evaluation metric is based on the relative volume difference between the ground truth and fitted geometries, i.e.,
$$\kappa_{\mathrm{volume}} = \frac{|\mathrm{Vol}_{\mathrm{fitted}} - \mathrm{Vol}_{\mathrm{true}}|}{\mathrm{Vol}_{\mathrm{true}}} \times 100\% \,, \qquad (13)$$

where $\mathrm{Vol}_{\mathrm{true}}$ and $\mathrm{Vol}_{\mathrm{fitted}}$ denote the volumes of the ground-truth and fitted geometries, respectively. Here the ground truth is the superquadric and the fitted object is the convex polyhedron. The volume of a superquadric body can be computed as

$$\mathrm{Vol}_{SQ} = 2abc\,\epsilon_1 \epsilon_2\, \beta\!\left(\tfrac{\epsilon_1}{2} + 1,\, \epsilon_1\right) \beta\!\left(\tfrac{\epsilon_2}{2},\, \tfrac{\epsilon_2}{2}\right),$$

where $\beta(x, y) = 2\int_0^{\pi/2} \sin^{2x-1}\phi \, \cos^{2y-1}\phi \; d\phi$ is the beta function. $\kappa_{\mathrm{volume}}$ is computed for different numbers of vertices on the superquadric surface. For each discretization, 100 random superquadric shapes are generated. Figure 9 shows the statistical plot of the discretization quality for different vertex resolutions. After around 100 vertices, the error plateaus and stays below 10%. Therefore, we choose 100 as the number of vertices for the superquadric surface. To make the comparison relatively fair, the same number of 100 vertices is chosen to discretize the closed-form Minkowski sum boundaries in each C-slice for our HRM-based planners.
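The volume formula and Eq. (13) can be evaluated directly, e.g. with SciPy's beta function; the fitted polyhedron volume can come from a convex-hull routine. Names in the sketch below are illustrative.

import numpy as np
from scipy.special import beta

def superquadric_volume(a, b, c, e1, e2):
    """Exact volume of a superquadric body using the formula above."""
    return 2.0 * a * b * c * e1 * e2 * beta(e1 / 2.0 + 1.0, e1) * beta(e2 / 2.0, e2 / 2.0)

def kappa_volume(vol_fitted, vol_true):
    """Relative volume difference of Eq. (13), in percent.
    vol_fitted can be obtained from scipy.spatial.ConvexHull(vertices).volume."""
    return abs(vol_fitted - vol_true) / vol_true * 100.0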
C. Results and analysis
An ablation study for the "bridge C-slice" subroutine is firstly conducted, followed by benchmark studies among the proposed HRM-based and the sampling-based planners. The benchmark results include the total time and the success rate to solve different planning problems.
1) Ablation study for the "bridge C-slice" subroutine: In this study, the HRM planner is treated as the baseline because of its deterministic property. The ablated version replaces the bridge C-slice with direct interpolation between two vertices in different C-slices and collision detection at each intermediate step using FCL. The number of steps is chosen as N_point, the same as in the bridge C-slice process. The average planning time and the number of edges in the graph (i.e., N_edge) are shown in Tab. II.
The original HRM with the "bridge C-slice" connects fewer valid edges than the ablated version, which uses direct interpolation and explicit collision detection. This is mainly due to the fact that the computation of the TFE for each robot part is conservative. However, the efficient computation of Minkowski sums for the TFEs and the point-inclusion queries in the bridge C-slice help speed up the planner. Especially in the more complex environments, such as the cluttered and home maps, the proposed HRM runs around two times faster than the ablated version.
2) Benchmark results for SE(3) and higher dimensional planning problems: The comparisons of total running time and success rate for SE(3) rigid-body planning problems are shown in Fig. 10. Figures 11 and 12 show the computational time and success rate results for articulated robots in the SE(3) × (S^1)^n configuration space, respectively. For our proposed HRM-based and the PRM-based planners, the total running time of each trial includes both the graph construction and search phases.
From the benchmark results, sampling-based planners are very efficient when the environments are sparse (such as in Figs. 10a, 11a, 11b, etc.). However, they become slower when the space occupied by obstacles increases, and their success rate decreases as the environment becomes denser. In cases like Figs. 12f and 12h, some planners cannot even find any solution within the assigned time limit of 300 seconds. Graph-based algorithms, even with the help of different types of samplers, still take longer to find a valid path. The tree-based planners are much more efficient for single queries in the sparse and cluttered maps, and even in the maze map, when the dimensions of the problems increase, both the RRT and RRT-Connect planners can still search for a valid path efficiently. But in more complex maps such as the home and narrow environments, both their speed and success rate start to drop.
On the other hand, the proposed HRM and Prob-HRM planners are more efficient in complex environments, such as in Figs. 10c, 10d, 11f and 11h. The success rates among multiple planning trials are also higher, as in Figs. 10g, 10h, 12f and 12h. These results show the advantages of the proposed HRM-based planners in solving narrow passage problems. Furthermore, as can be seen from Fig. 10, the performance of HRM is more stable among different trials in rigid-body planning problems, which is mainly due to its deterministic nature. The Prob-HRM planner, on the other hand, has a larger variance in planning time for articulated robots (such as in Fig. 11c). Another feature of our proposed HRM and Prob-HRM is that they are both graph-based planners. They are competitive with single-query planners in solving complex problems (as in Figs. 10b, 11e and 11g), and outperform all planners in environments with narrow corridors (as in Figs. 10d, 11f and 11h). This is desirable since ours can not only build the roadmap efficiently but also answer planning queries multiple times when the environment does not change.
VII. PHYSICAL EXPERIMENTS ON WALKING PATH PLANNING FOR A HUMANOID ROBOT
In order to demonstrate the capabilities of our proposed planning framework in a real-world setting, physical experiments with a NAO humanoid robot [59] are conducted. The task is to guide the robot to walk through environments with several objects on the floor in random poses; the robot is required to avoid them in order to pass through this cluttered space. Therefore, the problem is simplified into a planar case, where the robot and all objects are projected onto the floor. The contour of the robot projection is encapsulated by an ellipse with pre-defined semi-axis lengths. The robot is able to walk sideways and its configuration space is SE(2). The arena is a pre-defined rectangular area, which is bounded by a superellipse with exponent 0.1. The whole experimental pipeline consists of three main modules: perception, planning, and control, as shown in Fig. 13. The Robot Operating System (ROS) is used to communicate between the different modules.
The whole scene is first captured by a fixed RGB-D camera as point cloud data. The point cloud is transformed from the camera frame into the world frame (indicated by an ArUco marker [60] on the floor) and segmented into disjoint clusters using the Point Cloud Library (PCL) [61]. Each cluster is then projected onto the x-y plane and fitted to a superelliptical model using Eq. (9). The obtained environmental data is then given as the input to the planning module. After manually selecting the start and goal poses of the robot, a valid SE(2) path is solved by the proposed HRM planner. Finally, given the list of SE(2) poses, the robot follows the path via a simple proportional controller [62]. The robot pose is tracked by an ArUco tag attached to its head and is controlled to minimize the distance to the next waypoint on the trajectory until reaching the goal configuration.
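For completeness, a generic proportional waypoint-following step for an omnidirectional SE(2) robot is sketched below; it is an illustrative stand-in and not the specific controller of [62], and the gains are arbitrary.

import numpy as np

def p_control_step(pose, waypoint, k_lin=0.5, k_ang=1.0):
    """One step of a generic proportional waypoint follower: returns body-frame
    velocities (vx, vy, omega) for an omnidirectional SE(2) robot."""
    x, y, theta = pose
    xg, yg, thetag = waypoint
    dx, dy = xg - x, yg - y
    # position error expressed in the robot body frame
    ex = np.cos(theta) * dx + np.sin(theta) * dy
    ey = -np.sin(theta) * dx + np.cos(theta) * dy
    # wrapped heading error
    etheta = np.arctan2(np.sin(thetag - theta), np.cos(thetag - theta))
    return k_lin * ex, k_lin * ey, k_ang * etheta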
Since the planning scene does not change during the whole trial of the experiment, the perception and planning modules both run offline. The control module runs as an on-board process to keep the robot following the solved path. Table III shows the planning results in different example trials of experiments. Figure 14 demonstrates the walking sequences of NAO for the three different planning scenarios.
VIII. DISCUSSIONS
This section discusses the advantages of the proposed HRM-based planners, followed by some potential limitations.
A. Geometric approximations of rigid objects
The superquadric is an example model to enclose the environmental features. Alternatively, the convex polyhedron is a well-known type of geometry to represent a complex body, but it may require many vertices and faces to describe a rounded region. It is possible to fit a convex polyhedron with a superquadric model and vice versa, which introduces approximation errors. The fitting quality is evaluated as the relative volume difference between the two models, as in Eq. (13).
To fit a superquadric model, the vertices of a convex polyhedron are used in Eq. (8). The evaluation metrics include not only Eq. (13), but also the averaged sum of absolute differences between the implicit function values at the points and one, i.e.,

$$\kappa_{\mathrm{implicit}} = \frac{1}{m} \sum_{i=1}^{m} |\Phi(\mathbf{x}_i) - 1| \,,$$

where $m$ is the number of vertices of the convex polyhedron.
Firstly, the convex polyhedron is treated as the ground truth and generated as the convex hull of a set of 100 random points. Two types of convex polyhedra are studied: centrally symmetric and random shapes. To generate the centrally symmetric convex polyhedra, all the random vertices are flipped around the origin before computing the convex hull. Over 100 trials, the means of $\kappa_{\mathrm{volume}}$ and $\kappa_{\mathrm{implicit}}$ are 11.74% and 0.3886 for the centrally symmetric polyhedra, and 19.88% and 0.6057 for the random polyhedra, respectively. The results show that the superquadric surfaces fit closely to the polyhedral vertices when the object is centrally symmetric. However, when the convex polyhedron is highly non-symmetric, fitting the centrally-symmetric superquadric model is conservative and the volume difference might be unavoidably large. On the other hand, a superquadric body can be considered as the ground truth, which this article mainly addresses and uses for benchmarks with sampling-based planners. The fitting process is introduced when selecting parameters for sampling-based planners in Sec. VI-B2. It can be seen that a good convex polyhedral approximation uses many more sampled points. This is mainly because a better faceted representation of the curved surface of a superquadric requires a denser set of sampled points.
B. Parameters selection for HRM and Prob-HRM
Two major parameters that affect the success and performance of the proposed algorithms are the number of C-slices (N slice ) and sweep lines (N line ).
For the HRM planner, a pre-defined deterministic sampling of the orientations is required, which is discussed in Sec. IV-B. For the Prob-HRM planner, N_slice is incremented during planning, meaning that the user does not need to provide this parameter beforehand. When no path is found, this number keeps increasing until the termination condition is satisfied. Therefore, in this case, the roadmap can keep being refined instead of being re-computed from scratch. Also, one could store the roadmap after one planning trial for further reuse and refinement.
For both HRM and Prob-HRM planners, the selection of N line can be either user-defined or computed based on Eq. (12). The latter choice is the default of the proposed planners. This choice initially generates a coarse resolution of sweep lines that can efficiently solve an easy problem, but tries to detect most of the C-obstacles. Since there is a refinement step for the existing roadmap, the input N line only defines an initial resolution of the roadmap. When the problem becomes more complex, e.g., including narrow passages, the existing roadmap will be made denser by iteratively doubling the number of sweep lines until one of the termination conditions is satisfied.
C. Advantageous properties of our proposed framework
One of the highlights of our proposed path planning framework is the closed-form parameterization of the Minkowski sum and difference that explicitly characterizes the C-space. The closed-form expression only depends on the parameters of one body (such as the superquadric obstacle body when computing the C-obstacle boundary). Therefore, the computational complexity is linear with respect to only one body, not both, as in the traditional polytope-based Minkowski sum [26]. Moreover, the numerical errors introduced in this process only come from the geometric approximations of the objects, since the Minkowski sum computations are exact. The density of sampled vertices on the object surface is determined once and used throughout all the experiments in this article; it is shown to be robust in different scenarios in terms of success rate and speed in solving motion planning queries.
The sweep line method in a single C-slice avoids traditional collision detection computation in generating collision-free samples. The vertices computed in each C-slice are automatically guaranteed to be safe. With the enhancement step, more vertices within each free segment can be generated. The added new vertices are closer to the adjacent free segment than the existing middle point, making it possible to circumvent obstacles compared to directly connecting two middle points. This step makes the vertex generation process more robust since more possible valid edges can be connected. Moreover, when connecting an edge between two vertices within one C-slice, the whole edge is checked for intersections with C-obstacle boundaries (if a straight-line connection is considered). This is a continuous way of performing validity check, since no interpolation along the edge is required.
With the roadmap refinement process, the portion of free space represented by the collision-free intervals of each sweep line increases with higher resolutions. Each C-slice can be explored uniformly along the sweeping direction and completely within a certain resolution parameter. The initial resolution parameter set by users might not be enough to find a valid solution. But with this refinement step, the free space can be explored in an adaptive way, making the proposed algorithms more robust in dealing with resolution errors.
The "bridge C-slice" adds another C-slice to the whole roadmap, which doubles the total number of slices. However, it simplifies the edge validation process: in each bridge C-slice, only the center point of each enlarged ellipsoidal void of a robot part is checked against the C-obstacles. Computing a path in the bridge C-slice can be viewed as a projection of the SE(3) (or SE(3) × (S^1)^n) motion sequence of the robot onto a path for the translational motion of the enlarged void in R^3. The validation process still involves interpolations between two SO(3) (or SO(3) × (S^1)^n) configurations, but they are only computed once before connecting two C-slices. The deterministic nature of HRM makes it stable over different benchmark trials on the same planning scene. The Prob-HRM planner, on the other hand, integrates the appealing features of the probabilistic ideas in sampling-based algorithms. Compared to HRM, the number of robot shapes sampled in Prob-HRM is unknown a priori, but as shown in the benchmark results, the final numbers of C-slices are within a tractable range. This is mainly because Prob-HRM still preserves the deterministic nature when exploring each C-slice, which increases the chance of identifying difficult regions. The collaboration with sampling-based planners avoids the dimensionality explosion for robots with higher degrees of freedom, making our framework extendable to wider and more complicated tasks.
D. Limitations
There are also some limitations of the proposed framework. Firstly, the geometry of the robot parts is limited to ellipsoids. The Minkowski sums are exact only when one of the bodies is an ellipsoid. For other geometric representations, such as polyhedra and point cloud, a fitting process is required before running the planner. Also, the meshed surface of the exact Minkowski sum in the sweep-line process introduces another level of approximation errors.
The computation of the tightly-fitted ellipsoid (TFE) in the bridge C-slice is conservative in the sense that some free space will be lost when the robot parts are enlarged. This enlargement sacrifices the completeness of the planner. However, the efficiency gain from the bridge C-slice is significant, based on the ablation study and benchmark results. Also, when the distance between two C-slices is smaller, the TFE encloses each robot part more tightly, resulting in less lost free space.
The current HRM and Prob-HRM are both effective when the robot motions are dominated by translations, but they are not advantageous for robots with a fixed base, such as manipulators. Prob-HRM can possibly be used to solve problems with pure rotational motions. In this case, useful operations within a single C-slice might be very limited, since no translational connections can be made. When the robot base is fixed, Prob-HRM is equivalent to a pure sampling-based planner; the proposed closed-form Minkowski operations and the sweep line method can still be used to generate valid vertices during the C-space exploration, and the "bridge C-slice" method can be applied as the transition validity checker between adjacent C-slices.
IX. CONCLUSION
This article proposes a path planning framework based on the closed-form characterization of Minkowski sum and difference. The important "narrow passage" problem can be solved efficiently by the proposed extended Highway RoadMap (HRM) planner. Collision-free configurations are generated directly by a "sweep line" process. And connections between two configurations with the same rotational components can be validated without interpolations. Configurations with different rotational components are connected through a novel "bridge C-slice" method using the sweep volume of enlarged ellipsoidal voids. A new hybrid probabilistic variant, i.e., Prob-HRM, is then proposed to solve higher dimensional problems. It combines the efficient explicit descriptions of C-space and the effectiveness of random sampling. This hybrid idea can thereby achieve better performance in higher dimensional (articulated robot) motion planning problems in cluttered environments with narrow passages.
Fig. 1: Examples of robots and protein molecules encapsulated by ellipsoids.

Fig. 2: Process for the characterization of the Minkowski sums between a superquadric S_1 and an ellipsoid E_2. (a) Original space, with S_1 in the center and E_2 translating around. (b) Both bodies are rotated by the inverse orientation of E_2. (c) E_2 is shrunk into a sphere, and an offset surface is computed. (d) Stretch back and obtain S_1 ⊕ (−E_2) (the yellow region).

Fig. 3: Computational procedure for the minimum volume concentric ellipsoid that covers two ellipsoids in 3D. (a) Two concentric 3D ellipsoids, E_a and E_b. (b) Shrink E_b into a sphere E'_b.

Fig. 4: The fully connected graph structure, generated from one simulation trial. The vertical axis represents the rotational angle; dots are vertices and line segments are edges.

Fig. 5: The characterization of the Minkowski sum between a convex superquadric and a union of ellipsoids. (a) C-obstacle as the Minkowski sum boundaries of individual ellipsoidal bodies and their union. (b) Collision-free C-space as an intersection of the free space for individual robot parts.

Fig. 6: The sweep line process for detecting free space and constructing the sub-graph in one C-slice.

Fig. 7: 2D example illustrating the sweep volume idea based on the sliding of tightly-fitted ellipsoids. (a) Sweep volume for an individual elliptical part. (b) Sweep volume for the whole multi-body robot.

Fig. 8: Demonstration of path planning solutions using our proposed HRM-based planners for different types of robots in different environments. Problems with rigid-body and articulated robots are planned using HRM and Prob-HRM, respectively. Obstacles are 3D superquadrics and the robots are constructed by unions of 3D ellipsoids. The magenta curve represents the solved path of the robot base center, projected from C-space to Euclidean space. (a) 3D maze map, rabbit-shape robot. (b) 3D home map, chair object. (c) 3D cluttered map, snake-like robot. (d) 3D narrow window map, snake-like robot. (e) 3D sparse map, tree-like robot.

Fig. 9: Relative volume for discretized superquadrics.

Fig. 10: Running time and success rate comparisons between HRM and sampling-based motion planners. PRM-based planners use different sampling strategies, denoted as "PRM (sampler name)". The planning time is shown as a box plot (Figs. 10a, 10b, 10c and 10d). The red line inside each box is the median of the data, while the upper and lower edges of the box show the 25th and 75th percentiles, respectively. The dashed lines extend to the most extreme data points excluding the outliers, and the outliers are plotted as + signs. The success rates are shown as bar plots (Figs. 10e, 10f, 10g and 10h).

Fig. 11: Running time comparisons between Prob-HRM and sampling-based motion planners.

Fig. 12: Success rate comparisons between Prob-HRM and sampling-based motion planners.

Fig. 13: The pipeline for the physical experiments of walking path planning for the NAO humanoid robot.

Fig. 14: Walking sequences of NAO following the planned SE(2) paths in the physical experiments.
Algorithm 6: Probabilistic Highway RoadMap (Prob-HRM) Algorithm
Inputs    : robot: a list of ellipsoidal objects and robot kinematic information; obstacle: a set of superquadric objects; arena: a set of superquadric objects; endpts: start and goal configurations
Parameter : N_line: number of sweep lines; N_point: number of points for interpolation
Outputs   : roadmap: a graph structure; path: an ordered list of configurations
 1  N_slice = 0;
 2  while Not TerminationCondition do
 3      R_current ← RandomSampleRobotShape();
 4      robot.ForwardKinematics(R_current);
 5      roadmap ← ConstructOneSlice(robot, obstacle, arena, N_line);
 6      roadmap ← ConnectAdjacentSlice(robot, R_current, N_point);
 7
 8      path ← GraphSearch(roadmap, endpts);
 9      R.Append(R_current);
10      N_slice = N_slice + 1;
11      if Refine current roadmap then
12          roadmap, path ← RefineExistRoadMap(robot, obstacle, arena, N_line, R);
13      end
    end
TABLE I: Parameters for HRM-based planners in the scenarios from Fig. 8

Map        Robot    N_slice    N_line (N_x × N_y)
Maze       Rabbit   60         55 (11 × 5)
Home       Chair    60         400 (20 × 20)
Cluttered  Snake    -          72 (12 × 6)
Narrow     Snake    -          18 (6 × 3)
Sparse     Tree     -          10 (5 × 2)
TABLE II: Results of the ablation study for the "bridge C-slice"

Map        Robot    HRM version    Total time (s)    N_edge
Sparse     Rabbit   Original       0.2779            1883
                    Ablated        0.4530            1959
Cluttered  Rabbit   Original       2.238             10421
                    Ablated        5.203             11206
Maze       Rabbit   Original       0.8925            2494
                    Ablated        1.123             2840
Home       Chair    Original       87.31             72063
                    Ablated        153.3             72785
TABLE III: NAO walking path planning results using HRM

Scene    Graph time (ms)    Search time (ms)    Total time (ms)
1        55.71              3.18                58.90
2        17.62              1.07                18.69
3        147.46             4.53                151.99
1 SE(d), d = 2, 3 is the pose of the robot base frame and (S^1)^n represents the configuration space of n revolute joints.
2 Here, the word "arena" denotes the bounded area in which the robot and obstacles are contained.
ACKNOWLEDGEMENT
The authors would like to thank Dr. Yan
REFERENCES
[1] S. Ruan, Q. Ma, K. L. Poblete, Y. Yan, and G. S. Chirikjian, "Path planning for ellipsoidal robots and general obstacles via closed-form characterization of Minkowski operations," in Algorithmic Foundations of Robotics XIII. Springer, 2018, pp. 3-18.
[2] L. E. Kavraki, P. Svestka, J.-C. Latombe, and M. H. Overmars, "Probabilistic roadmaps for path planning in high-dimensional configuration spaces," IEEE Transactions on Robotics and Automation, vol. 12, no. 4, pp. 566-580, 1996.
[3] S. M. LaValle, "Rapidly-exploring random trees: A new tool for path planning," Computer Science Department, Iowa State University, Tech. Rep., 1998.
[4] J. J. Kuffner and S. M. LaValle, "RRT-connect: An efficient approach to single-query path planning," in IEEE International Conference on Robotics and Automation (ICRA), vol. 2. IEEE, 2000, pp. 995-1001.
[5] R. Bohlin and L. E. Kavraki, "Path planning using lazy PRM," in IEEE International Conference on Robotics and Automation (ICRA), vol. 1. IEEE, 2000, pp. 521-528.
[6] D. Hsu, T. Jiang, J. Reif, and Z. Sun, "The bridge test for sampling narrow passages with probabilistic roadmap planners," in IEEE International Conference on Robotics and Automation (ICRA), vol. 3. IEEE, 2003, pp. 4420-4426.
[7] K. Shi, J. Denny, and N. M. Amato, "Spark PRM: Using RRTs within PRMs to efficiently explore narrow passages," in IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 4659-4666.
[8] T. Lai, P. Morere, F. Ramos, and G. Francis, "Bayesian local sampling-based planning," IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1954-1961, 2020.
[9] Y. Yan, Q. Ma, and G. S. Chirikjian, "Path planning based on closed-form characterization of collision-free configuration-spaces for ellipsoidal bodies, obstacles and environments," in Proc. 1st Int. Workshop Robot Learn. Planning, 2016, pp. 13-19.
[10] A. Best, S. Narang, and D. Manocha, "Real-time reciprocal collision avoidance with elliptical agents," in IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 298-305.
[11] B. Shiffman, S. Lyu, and G. Chirikjian, "Mathematical aspects of molecular replacement. V. Isolating feasible regions in motion spaces," Acta Crystallographica Section A: Foundations and Advances, 2020.
[12] A. H. Barr, "Superquadrics and angle-preserving transformations," IEEE Computer Graphics and Applications, vol. 1, no. 1, pp. 11-23, 1981.
[13] J.-C. Latombe, Robot Motion Planning. Springer Science & Business Media, 2012, vol. 124.
[14] S. Nelaturi and V. Shapiro, "Configuration products and quotients in geometric modeling," Computer-Aided Design, vol. 43, no. 7, pp. 781-794, 2011.
[15] J.-M. Lien, "Hybrid motion planning using Minkowski sums," Robotics: Science and Systems, 2008.
[16] H.-Y. Yeh, S. Thomas, D. Eppstein, and N. M. Amato, "UOBPRM: A uniformly distributed obstacle-based PRM," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2012, pp. 2655-2662.
[17] S. A. Wilmarth, N. M. Amato, and P. F. Stiller, "MAPRM: A probabilistic roadmap planner with sampling on the medial axis of the free space," in Proceedings 1999 IEEE International Conference on Robotics and Automation (Cat. No. 99CH36288C), vol. 2. IEEE, 1999, pp. 1024-1031.
[18] H.-Y. C. Yeh, J. Denny, A. Lindsey, S. Thomas, and N. M. Amato, "UMAPRM: Uniformly sampling the medial axis," in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 5798-5803.
[19] I. A. Sucan, M. Moll, and L. E. Kavraki, "The open motion planning library," IEEE Robotics & Automation Magazine, vol. 19, no. 4, pp. 72-82, 2012.
[20] J. Denny and N. M. Amato, "Toggle PRM: A coordinated mapping of C-free and C-obstacle in arbitrary dimension," in Algorithmic Foundations of Robotics X. Springer, 2013, pp. 297-312.
[21] J. Lee, O. Kwon, L. Zhang, and S.-E. Yoon, "A selective retraction-based RRT planner for various environments," IEEE Transactions on Robotics, vol. 30, no. 4, pp. 1002-1011, 2014.
[22] W. Wang, L. Zuo, and X. Xu, "A learning-based multi-RRT approach for robot path planning in narrow passages," Journal of Intelligent & Robotic Systems, vol. 90, no. 1-2, pp. 81-100, 2018.
[23] X. Qin and N. T. An, "Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets," Computational Optimization and Applications, vol. 74, no. 3, pp. 821-850, 2019.
[24] A. Halder, "Smallest ellipsoid containing p-sum of ellipsoids with application to reachability analysis," IEEE Transactions on Automatic Control, 2020.
[25] P. Hachenberger, "Exact Minkowski sums of polyhedra and exact and efficient decomposition of polyhedra into convex pieces," Algorithmica, vol. 55, no. 2, pp. 329-345, 2009.
[26] E. Fogel, D. Halperin, and C. Weibel, "On the exact maximum complexity of Minkowski sums of polytopes," Discrete & Computational Geometry, vol. 42, no. 4, p. 654, 2009.
[27] G. S. Chirikjian and A. B. Kyatkin, Harmonic Analysis for Engineers and Applied Scientists: Updated and Expanded Edition. Courier Dover Publications, 2016.
[28] J.-M. Lien, "A simple method for computing Minkowski sum boundary in 3D using collision detection," in Algorithmic Foundation of Robotics VIII. Springer, 2009, pp. 401-415.
[29] A. Baram, E. Fogel, D. Halperin, M. Hemmer, and S. Morr, "Exact Minkowski sums of polygons with holes," Computational Geometry, vol. 73, pp. 46-56, 2018.
[30] H. Barki, F. Denis, and F. Dupont, "Contributing vertices-based Minkowski sum computation of convex polyhedra," Computer-Aided Design, vol. 41, no. 7, pp. 525-538, 2009.
[31] J.-M. Lien, "Covering Minkowski sum boundary using points with applications," Computer Aided Geometric Design, vol. 25, no. 8, pp. 652-666, 2008.
[32] Y. Yan and G. S. Chirikjian, "Closed-form characterization of the Minkowski sum and difference of two ellipsoids," Geometriae Dedicata, vol. 177, no. 1, pp. 103-128, 2015.
[33] A. Halder, "On the parameterized computation of minimum volume outer ellipsoid of Minkowski sum of ellipsoids," in 2018 IEEE Conference on Decision and Control (CDC). IEEE, 2018, pp. 4040-4045.
[34] G. Vezzani, U. Pattacini, and L. Natale, "A grasping approach based on superquadric models," in IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 1579-1586.
[35] M. P. Polverini, A. Laurenzi, E. M. Hoffman, F. Ruscelli, and N. G. Tsagarakis, "Multi-contact heavy object pushing with a centaur-type humanoid robot: Planning and control for a real demonstrator," IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 859-866, 2020.
[36] S. B. Pope, "Algorithms for ellipsoids," Cornell University, Report No. FDA 08-01, 2008.
[37] A. A. Kurzhanskiy and P. Varaiya, "Ellipsoidal Toolbox (ET)," in IEEE Conference on Decision and Control (CDC), Dec 2006, pp. 1498-1503.
[38] L. G. Khachiyan, "Rounding of polytopes in the real number model of computation," Mathematics of Operations Research, vol. 21, no. 2, pp. 307-320, 1996.
[39] W. Wang, J. Wang, and M.-S. Kim, "An algebraic condition for the separation of two ellipsoids," Computer-Aided Geometric Design, vol. 18, no. 6, pp. 531-539, 2001.
[40] X. Jia, Y.-K. Choi, B. Mourrain, and W. Wang, "An algebraic approach to continuous collision detection for ellipsoids," Computer-Aided Geometric Design, vol. 28, no. 3, pp. 164-176, 2011.
[41] E. Rimon and S. P. Boyd, "Obstacle collision detection using best ellipsoid fit," Journal of Intelligent and Robotic Systems, vol. 18, no. 2, pp. 105-126, 1997.
[42] S. Iwata, Y. Nakatsukasa, and A. Takeda, "Computing the signed distance between overlapping ellipsoids," SIAM Journal on Optimization, vol. 25, no. 4, pp. 2359-2384, 2015.
The kinematics of containment for N-dimensional ellipsoids. S Ruan, J Ding, Q Ma, G S Chirikjian, Journal of Mechanisms and Robotics. 11441005S. Ruan, J. Ding, Q. Ma, and G. S. Chirikjian, "The kinematics of containment for N-dimensional ellipsoids," Journal of Mechanisms and Robotics, vol. 11, no. 4, p. 041005, 2019.
Revisiting superquadric fitting: A numerically stable formulation. N Vaskevicius, A Birk, IEEE transactions on pattern analysis and machine intelligence. 41N. Vaskevicius and A. Birk, "Revisiting superquadric fitting: A numer- ically stable formulation," IEEE transactions on pattern analysis and machine intelligence, vol. 41, no. 1, pp. 220-233, 2017.
Superquadrics revisited: Learning 3D shape parsing beyond cuboids. D Paschalidou, A O Ulusoy, A Geiger, IEEE Conference on Computer Vision and Pattern Recognition. D. Paschalidou, A. O. Ulusoy, and A. Geiger, "Superquadrics revisited: Learning 3D shape parsing beyond cuboids," in IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 10 344-10 353.
Contact detection between convex polyhedra and superquadrics in discrete element codes. D Peng, K J Hanley, Powder Technology. 356D. Peng and K. J. Hanley, "Contact detection between convex polyhedra and superquadrics in discrete element codes," Powder Technology, vol. 356, pp. 11-20, 2019.
Efficient exact collision detection between ellipsoids and superquadrics via closed-form Minkowski sums. S Ruan, K L Poblete, Y Li, Q Lin, Q Ma, G S Chirikjian, International Conference on Robotics and Automation (ICRA). IEEES. Ruan, K. L. Poblete, Y. Li, Q. Lin, Q. Ma, and G. S. Chirikjian, "Efficient exact collision detection between ellipsoids and superquadrics via closed-form Minkowski sums," in International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 1765-1771.
M D Berg, O Cheong, M V Kreveld, M Overmars, Computational geometry: algorithms and applications. Springer-Verlag TELOSM. d. Berg, O. Cheong, M. v. Kreveld, and M. Overmars, Computational geometry: algorithms and applications. Springer-Verlag TELOS, 2008.
Recovery of parametric models from range images: The case for superquadrics with global deformations. F Solina, R Bajcsy, IEEE transactions on pattern analysis and machine intelligence. 12F. Solina and R. Bajcsy, "Recovery of parametric models from range images: The case for superquadrics with global deformations," IEEE transactions on pattern analysis and machine intelligence, vol. 12, no. 2, pp. 131-147, 1990.
A formal basis for the heuristic determination of minimum cost paths. P E Hart, N J Nilsson, B Raphael, IEEE transactions on Systems Science and Cybernetics. 42P. E. Hart, N. J. Nilsson, and B. Raphael, "A formal basis for the heuristic determination of minimum cost paths," IEEE transactions on Systems Science and Cybernetics, vol. 4, no. 2, pp. 100-107, 1968.
Quantizing Euclidean motions via double-coset decomposition. C Wülker, S Ruan, G S Chirikjian, Research. 20191608396C. Wülker, S. Ruan, G. S. Chirikjian et al., "Quantizing Euclidean mo- tions via double-coset decomposition," Research, vol. 2019, p. 1608396, 2019.
Uniform random rotations. K Shoemake, Graphics Gems III (IBM Version). ElsevierK. Shoemake, "Uniform random rotations," in Graphics Gems III (IBM Version). Elsevier, 1992, pp. 124-132.
Generating uniform incremental grids on SO(3) using the Hopf fibration. A Yershova, S Jain, S M Lavalle, J C Mitchell, The International Journal of Robotics Research. 297A. Yershova, S. Jain, S. M. Lavalle, and J. C. Mitchell, "Generating uniform incremental grids on SO(3) using the Hopf fibration," The International Journal of Robotics Research, vol. 29, no. 7, pp. 801- 812, 2010.
Geometric motion planning methods for robotics and biological crystallography. Y Yan, Johns Hopkins UniversityPh.D. dissertationY. Yan, "Geometric motion planning methods for robotics and biological crystallography," Ph.D. dissertation, Johns Hopkins University, 2014.
Path planning in expansive configuration spaces. D Hsu, J.-C Latombe, R Motwani, IEEE International Conference on Robotics and Automation (ICRA). IEEE3D. Hsu, J.-C. Latombe, and R. Motwani, "Path planning in expansive configuration spaces," in IEEE International Conference on Robotics and Automation (ICRA), vol. 3. IEEE, 1997, pp. 2719-2726.
An obstacle-based rapidly-exploring random tree. S Rodriguez, X Tang, J.-M Lien, N M Amato, Proceedings 2006 IEEE International Conference on Robotics and Automation. 2006 IEEE International Conference on Robotics and AutomationIEEES. Rodriguez, X. Tang, J.-M. Lien, and N. M. Amato, "An obstacle-based rapidly-exploring random tree," in Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006. IEEE, 2006, pp. 895-900.
The Gaussian sampling strategy for probabilistic roadmap planners. V Boor, M H Overmars, A F Van Der Stappen, Proceedings 1999 IEEE International Conference on Robotics and Automation (Cat. No. 99CH36288C). 1999 IEEE International Conference on Robotics and Automation (Cat. No. 99CH36288C)IEEE2V. Boor, M. H. Overmars, and A. F. Van Der Stappen, "The Gaussian sampling strategy for probabilistic roadmap planners," in Proceedings 1999 IEEE International Conference on Robotics and Automation (Cat. No. 99CH36288C), vol. 2. IEEE, 1999, pp. 1018-1023.
FCL: A general purpose library for collision and proximity queries. J Pan, S Chitta, D Manocha, IEEE International Conference on Robotics and Automation (ICRA). IEEEJ. Pan, S. Chitta, and D. Manocha, "FCL: A general purpose library for collision and proximity queries," in IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2012, pp. 3859-3866.
NAO the humanoid and programmable robot. Accessed on. NAO the humanoid and programmable robot. Accessed on April 3, 2021. [Online]. Available: https://www.softbankrobotics.com/emea/en/nao
Automatic generation and detection of highly reliable fiducial markers under occlusion. S Garrido-Jurado, R Muñoz-Salinas, F J Madrid-Cuevas, M J Marín-Jiménez, Pattern Recognition. 476S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez, "Automatic generation and detection of highly reliable fiducial markers under occlusion," Pattern Recognition, vol. 47, no. 6, pp. 2280-2292, 2014.
3D is here: Point Cloud Library (PCL). R B Rusu, S Cousins, IEEE International Conference on Robotics and Automation (ICRA). R. B. Rusu and S. Cousins, "3D is here: Point Cloud Library (PCL)," in IEEE International Conference on Robotics and Automation (ICRA).
. IEEE. IEEE, 2011, pp. 1-4.
Can I lift it? Humanoid robot reasoning about the feasibility of lifting a heavy box with unknown physical properties. Y Han, R Li, G S Chirikjian, arXiv:2008.03801arXiv preprintY. Han, R. Li, and G. S. Chirikjian, "Can I lift it? Humanoid robot reasoning about the feasibility of lifting a heavy box with unknown physical properties," arXiv preprint arXiv:2008.03801, 2020.
| []
|
[
"Constraints on minute-scale transient astrophysical neutrino sources",
"Constraints on minute-scale transient astrophysical neutrino sources"
]
| [
"M G Aartsen \nDept. of Physics and Astronomy\nUniversity of Canterbury\nPrivate Bag 4800ChristchurchNew Zealand\n",
"M Ackermann \nDESY\nD-15738ZeuthenGermany\n",
"J Adams \nDept. of Physics and Astronomy\nUniversity of Canterbury\nPrivate Bag 4800ChristchurchNew Zealand\n",
"J A Aguilar \nUniversité Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium\n",
"M Ahlers \nNiels Bohr Institute\nUniversity of Copenhagen\nDK-2100CopenhagenDenmark\n",
"M Ahrens \nOskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden\n",
"I Al Samarai \nDépartement de physique nucléaire et corpusculaire\nUniversité de Genève\nCH-1211GenèveSwitzerland\n",
"D Altmann ",
"K Andeen \nDepartment of Physics\nMarquette University\n53201MilwaukeeWIUSA\n",
"T Anderson \nDept. of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"I Ansseau \nUniversité Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium\n",
"G Anton ",
"C Argüelles \nDept. of Physics\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n",
"J Auffenberg \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"S Axani \nDept. of Physics\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n",
"P Backes \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"H Bagherpour \nDept. of Physics and Astronomy\nUniversity of Canterbury\nPrivate Bag 4800ChristchurchNew Zealand\n",
"X Bai \nPhysics Department\nSouth Dakota School of Mines and Technology, Rapid City\n57701SDUSA\n",
"A Barbano \nDépartement de physique nucléaire et corpusculaire\nUniversité de Genève\nCH-1211GenèveSwitzerland\n",
"J P Barron \nDept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany\n",
"S W Barwick \nDept. of Physics and Astronomy\nUniversity of California\n92697IrvineCAUSA\n",
"V Baum \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany\n",
"R Bay \nDept. of Physics\nUniversity of California\n94720BerkeleyCAUSA\n",
"J J Beatty \nDept. of Physics and Center for Cosmology and Astro-Particle Physics\nOhio State University\n43210ColumbusOHUSA\n\nDept. of Astronomy\nOhio State University\n43210ColumbusOHUSA\n",
"J Becker Tjus ",
"K.-H Becker \nDept. of Physics\nUniversity of Wuppertal\nD-42119WuppertalGermany\n",
"S Benzvi \nDept. of Physics and Astronomy\nUniversity of Rochester\n14627RochesterNYUSA\n",
"D Berley \nDept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"E Bernardini \nDESY\nD-15738ZeuthenGermany\n",
"D Z Besson \nDept. of Physics and Astronomy\nUniversity of Kansas\n66045LawrenceKSUSA\n",
"G Binder \nDept. of Physics\nUniversity of California\n94720BerkeleyCAUSA\n\nLawrence Berkeley National Laboratory\n94720BerkeleyCAUSA\n",
"D Bindig \nDept. of Physics\nUniversity of Wuppertal\nD-42119WuppertalGermany\n",
"E Blaufuss \nDept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"S Blot \nDESY\nD-15738ZeuthenGermany\n",
"C Bohm \nOskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden\n",
"M Börner \nDept. of Physics\nTU Dortmund University\nD-44221DortmundGermany\n",
"F Bos ",
"S Böser \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany\n",
"O Botner \nDept. of Physics and Astronomy\nUppsala University\nBox 516S-75120UppsalaSweden\n",
"E Bourbeau \nNiels Bohr Institute\nUniversity of Copenhagen\nDK-2100CopenhagenDenmark\n",
"J Bourbeau \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"F Bradascio \nDESY\nD-15738ZeuthenGermany\n",
"J Braun \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"M Brenzke \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"H.-P Bretz \nDESY\nD-15738ZeuthenGermany\n",
"S Bron \nDépartement de physique nucléaire et corpusculaire\nUniversité de Genève\nCH-1211GenèveSwitzerland\n",
"J Brostean-Kaiser \nDESY\nD-15738ZeuthenGermany\n",
"A Burgman \nDept. of Physics and Astronomy\nUppsala University\nBox 516S-75120UppsalaSweden\n",
"R S Busse \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"T Carver \nDépartement de physique nucléaire et corpusculaire\nUniversité de Genève\nCH-1211GenèveSwitzerland\n",
"E Cheung \nDept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"D Chirkin \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"A Christov \nDépartement de physique nucléaire et corpusculaire\nUniversité de Genève\nCH-1211GenèveSwitzerland\n",
"K Clark \nSNOLAB\n1039 Regional Road 24, Creighton Mine 9, LivelyP3Y 1N2ONCanada\n",
"L Classen \nInstitut für Kernphysik\nWestfälische Wilhelms-Universität MünsterD-48149MünsterGermany\n",
"G H Collin \nDept. of Physics\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n",
"J M Conrad \nDept. of Physics\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n",
"P Coppin \nVrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium\n",
"P Correa \nVrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium\n",
"D F Cowen \nDept. of Astronomy and Astrophysics\nPennsylvania State University\n16802University ParkPAUSA\n\nDept. of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"R Cross \nDept. of Physics and Astronomy\nUniversity of Rochester\n14627RochesterNYUSA\n",
"P Dave \nSchool of Physics and Center for Relativistic Astrophysics\nGeorgia Institute of Technology\n30332AtlantaGAUSA\n",
"M Day \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"J P A M De André \nDept. of Physics and Astronomy\nMichigan State University\n48824East LansingMIUSA\n",
"C De Clercq \nVrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium\n",
"J J Delaunay \nDept. of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"H Dembinski \nBartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA\n",
"K Deoskar \nOskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden\n",
"S De Ridder \nDept. of Physics and Astronomy\nUniversity of Gent\nB-9000GentBelgium\n",
"P Desiati \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"K D De Vries \nVrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium\n",
"G De Wasseige \nVrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium\n",
"M De With \nInstitut für Physik\nFakultät für Physik & Astronomie\nRuhr-Universität Bochum\nHumboldt-Universität zu BerlinD-12489, D-44780Berlin, BochumGermany, Germany\n",
"T Deyoung \nDept. of Physics and Astronomy\nMichigan State University\n48824East LansingMIUSA\n",
"J C Díaz-Vélez \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"V Di Lorenzo \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany\n",
"H Dujmovic \nDept. of Physics\nSungkyunkwan University\nSuwon 440-746Korea\n",
"J P Dumm \nOskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden\n",
"M Dunkman \nDept. of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"E Dvorak \nPhysics Department\nSouth Dakota School of Mines and Technology, Rapid City\n57701SDUSA\n",
"B Eberhardt \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany\n",
"T Ehrhardt \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany\n",
"B Eichmann ",
"P Eller \nDept. of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"P A Evans \nDepartment of Physics and Astronomy\nUniversity of Leicester\nLE1 7RHLeicesterUK\n",
"P A Evenson \nBartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA\n",
"S Fahey \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"A R Fazely \nDept. of Physics\nSouthern University\n70813Baton RougeLAUSA\n",
"J Felde \nDept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"K Filimonov \nDept. of Physics\nUniversity of California\n94720BerkeleyCAUSA\n",
"C Finley \nOskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden\n",
"A Franckowiak \nDESY\nD-15738ZeuthenGermany\n",
"E Friedman \nDept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"A Fritz \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany\n",
"T K Gaisser \nBartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA\n",
"J Gallagher \nDept. of Astronomy\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"E Ganster \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"L Gerhardt \nLawrence Berkeley National Laboratory\n94720BerkeleyCAUSA\n",
"K Ghorbani \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"W Giang \nDept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany\n",
"T Glauch \nPhysik-department\nTechnische Universität München\nD-85748GarchingGermany\n",
"T Glüsenkamp ",
"A Goldschmidt \nLawrence Berkeley National Laboratory\n94720BerkeleyCAUSA\n",
"J G Gonzalez \nBartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA\n",
"D Grant \nDept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany\n",
"Z Griffith \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"C Haack \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"A Hallgren \nDept. of Physics and Astronomy\nUppsala University\nBox 516S-75120UppsalaSweden\n",
"L Halve \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"F Halzen \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"K Hanson \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"D Hebecker \nInstitut für Physik\nFakultät für Physik & Astronomie\nRuhr-Universität Bochum\nHumboldt-Universität zu BerlinD-12489, D-44780Berlin, BochumGermany, Germany\n",
"D Heereman \nUniversité Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium\n",
"K Helbing \nDept. of Physics\nUniversity of Wuppertal\nD-42119WuppertalGermany\n",
"R Hellauer \nDept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"S Hickford \nDept. of Physics\nUniversity of Wuppertal\nD-42119WuppertalGermany\n",
"J Hignight \nDept. of Physics and Astronomy\nMichigan State University\n48824East LansingMIUSA\n",
"G C Hill \nDepartment of Physics\nUniversity of Adelaide\n5005AdelaideAustralia\n",
"K D Hoffman \nDept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"R Hoffmann \nDept. of Physics\nUniversity of Wuppertal\nD-42119WuppertalGermany\n",
"T Hoinka \nDept. of Physics\nTU Dortmund University\nD-44221DortmundGermany\n",
"B Hokanson-Fasig \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"K Hoshina \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"F Huang \nDept. of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"M Huber \nPhysik-department\nTechnische Universität München\nD-85748GarchingGermany\n",
"K Hultqvist \nOskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden\n",
"M Hünnefeld \nDept. of Physics\nTU Dortmund University\nD-44221DortmundGermany\n",
"R Hussain \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n\nDept. of Physics\nSungkyunkwan University\nSuwon 440-746Korea\n",
"S In ",
"N Iovine \nUniversité Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium\n",
"A Ishihara \nDept. of Physics and Institute for Global Prominent Research\nChiba University\n263-8522ChibaJapan\n",
"E Jacobi \nDESY\nD-15738ZeuthenGermany\n",
"G S Japaridze \nCTSPS\nClark-Atlanta University\n30314AtlantaGAUSA\n",
"M Jeong \nDept. of Physics\nSungkyunkwan University\nSuwon 440-746Korea\n",
"K Jero \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"B J P Jones \nDept. of Physics\nUniversity of Texas at Arlington\n502 Yates St., 2 Science Hall Rm 108Box 1905976019ArlingtonTXUSA\n",
"P Kalaczynski \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"W Kang \nDept. of Physics\nSungkyunkwan University\nSuwon 440-746Korea\n",
"A Kappes \nInstitut für Kernphysik\nWestfälische Wilhelms-Universität MünsterD-48149MünsterGermany\n",
"D Kappesser \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany\n",
"T Karg \nDESY\nD-15738ZeuthenGermany\n",
"A Karle \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"U Katz ",
"M Kauer \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"A Keivani \nDept. of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"J L Kelley \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"A Kheirandish \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"J Kim \nDept. of Physics\nSungkyunkwan University\nSuwon 440-746Korea\n",
"T Kintscher \nDESY\nD-15738ZeuthenGermany\n",
"J Kiryluk \nDept. of Physics and Astronomy\nStony Brook University\n11794-3800Stony BrookNYUSA\n",
"T Kittler ",
"S R Klein \nDept. of Physics\nUniversity of California\n94720BerkeleyCAUSA\n\nLawrence Berkeley National Laboratory\n94720BerkeleyCAUSA\n",
"R Koirala \nBartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA\n",
"H Kolanoski \nInstitut für Physik\nFakultät für Physik & Astronomie\nRuhr-Universität Bochum\nHumboldt-Universität zu BerlinD-12489, D-44780Berlin, BochumGermany, Germany\n",
"L Köpke \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany\n",
"C Kopper \nDept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany\n",
"S Kopper \nDept. of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA\n",
"J P Koschinsky \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"D J Koskinen \nNiels Bohr Institute\nUniversity of Copenhagen\nDK-2100CopenhagenDenmark\n",
"M Kowalski \nInstitut für Physik\nFakultät für Physik & Astronomie\nRuhr-Universität Bochum\nHumboldt-Universität zu BerlinD-12489, D-44780Berlin, BochumGermany, Germany\n\nDESY\nD-15738ZeuthenGermany\n",
"K Krings \nPhysik-department\nTechnische Universität München\nD-85748GarchingGermany\n",
"M Kroll ",
"G Krückl \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany\n",
"S Kunwar \nDESY\nD-15738ZeuthenGermany\n",
"N Kurahashi \nDept. of Physics\nDrexel University\n3141 Chestnut Street19104PhiladelphiaPAUSA\n",
"A Kyriacou \nDepartment of Physics\nUniversity of Adelaide\n5005AdelaideAustralia\n",
"M Labare \nDept. of Physics and Astronomy\nUniversity of Gent\nB-9000GentBelgium\n",
"J L Lanfranchi \nDept. of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"M J Larson \nNiels Bohr Institute\nUniversity of Copenhagen\nDK-2100CopenhagenDenmark\n",
"F Lauber \nDept. of Physics\nUniversity of Wuppertal\nD-42119WuppertalGermany\n",
"K Leonard \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"M Leuermann \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"Q R Liu \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"E Lohfink \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany\n",
"C J Lozano Mariscal \nInstitut für Kernphysik\nWestfälische Wilhelms-Universität MünsterD-48149MünsterGermany\n",
"L Lu \nDept. of Physics and Institute for Global Prominent Research\nChiba University\n263-8522ChibaJapan\n",
"J Lünemann \nVrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium\n",
"W Luszczak \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"J Madsen \nDept. of Physics\nUniversity of Wisconsin\nRiver Falls54022WIUSA\n",
"G Maggi \nVrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium\n",
"K B M Mahn \nDept. of Physics and Astronomy\nMichigan State University\n48824East LansingMIUSA\n",
"Y Makino \nDept. of Physics and Institute for Global Prominent Research\nChiba University\n263-8522ChibaJapan\n",
"S Mancina \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"I C Mariş \nUniversité Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium\n",
"R Maruyama \nDept. of Physics\nYale University\n06520New HavenCTUSA\n",
"K Mase \nDept. of Physics and Institute for Global Prominent Research\nChiba University\n263-8522ChibaJapan\n",
"R Maunu \nDept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"S C Nowicki \nDept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany\n",
"D R Nygren \nLawrence Berkeley National Laboratory\n94720BerkeleyCAUSA\n",
"A Obertacke Pollmann \nDept. of Physics\nUniversity of Wuppertal\nD-42119WuppertalGermany\n",
"A Olivas \nDept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"A O'murchadha \nUniversité Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium\n",
"J P Osborne \nDepartment of Physics and Astronomy\nUniversity of Leicester\nLE1 7RHLeicesterUK\n",
"E O'sullivan \nOskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden\n",
"T Palczewski \nDept. of Physics\nUniversity of California\n94720BerkeleyCAUSA\n\nLawrence Berkeley National Laboratory\n94720BerkeleyCAUSA\n",
"H Pandya \nBartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA\n",
"D V Pankova \nDept. of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"P Peiffer \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany\n",
"J A Pepper \nDept. of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA\n",
"C Pérez De Los Heros \nDept. of Physics and Astronomy\nUppsala University\nBox 516S-75120UppsalaSweden\n",
"D Pieloth \nDept. of Physics\nTU Dortmund University\nD-44221DortmundGermany\n",
"E Pinat \nUniversité Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium\n",
"A Pizzuto \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"M Plum \nDepartment of Physics\nMarquette University\n53201MilwaukeeWIUSA\n",
"P B Price \nDept. of Physics\nUniversity of California\n94720BerkeleyCAUSA\n",
"G T Przybylski \nLawrence Berkeley National Laboratory\n94720BerkeleyCAUSA\n",
"C Raab \nUniversité Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium\n",
"M Rameez \nNiels Bohr Institute\nUniversity of Copenhagen\nDK-2100CopenhagenDenmark\n",
"L Rauch \nDESY\nD-15738ZeuthenGermany\n",
"K Rawlins \nDept. of Physics and Astronomy\nUniversity of Alaska Anchorage\n3211 Providence Dr99508AnchorageAKUSA\n",
"I C Rea \nPhysik-department\nTechnische Universität München\nD-85748GarchingGermany\n",
"R Reimann \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"B Relethford \nDept. of Physics\nDrexel University\n3141 Chestnut Street19104PhiladelphiaPAUSA\n",
"G Renzi \nUniversité Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium\n",
"E Resconi \nPhysik-department\nTechnische Universität München\nD-85748GarchingGermany\n",
"W Rhode \nDept. of Physics\nTU Dortmund University\nD-44221DortmundGermany\n",
"M Richman \nDept. of Physics\nDrexel University\n3141 Chestnut Street19104PhiladelphiaPAUSA\n",
"S Robertson \nDepartment of Physics\nUniversity of Adelaide\n5005AdelaideAustralia\n",
"M Rongen \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"C Rott \nDept. of Physics\nSungkyunkwan University\nSuwon 440-746Korea\n",
"T Ruhe \nDept. of Physics\nTU Dortmund University\nD-44221DortmundGermany\n",
"D Ryckbosch \nDept. of Physics and Astronomy\nUniversity of Gent\nB-9000GentBelgium\n",
"D Rysewyk \nDept. of Physics and Astronomy\nMichigan State University\n48824East LansingMIUSA\n",
"I Safa \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"S E Sanchez Herrera \nDept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany\n",
"A Sandrock \nDept. of Physics\nTU Dortmund University\nD-44221DortmundGermany\n",
"J Sandroos \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany\n",
"M Santander \nDept. of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA\n",
"S Sarkar \nNiels Bohr Institute\nUniversity of Copenhagen\nDK-2100CopenhagenDenmark\n\nDept. of Physics\nUniversity of Oxford\n1 Keble RoadOX1 3NPOxfordUK\n",
"S Sarkar \nDept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany\n",
"K Satalecka \nDESY\nD-15738ZeuthenGermany\n",
"M Schaufel \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"P Schlunder \nDept. of Physics\nTU Dortmund University\nD-44221DortmundGermany\n",
"T Schmidt \nDept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"A Schneider \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"J Schneider ",
"S Schöneberg ",
"L Schumacher \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"S Sclafani \nDept. of Physics\nDrexel University\n3141 Chestnut Street19104PhiladelphiaPAUSA\n",
"D Seckel \nBartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA\n",
"S Seunarine \nDept. of Physics\nUniversity of Wisconsin\nRiver Falls54022WIUSA\n",
"J Soedingrekso \nDept. of Physics\nTU Dortmund University\nD-44221DortmundGermany\n",
"D Soldin \nBartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA\n",
"M Song \nDept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"G M Spiczak \nDept. of Physics\nUniversity of Wisconsin\nRiver Falls54022WIUSA\n",
"C Spiering \nDESY\nD-15738ZeuthenGermany\n",
"J Stachurska \nDESY\nD-15738ZeuthenGermany\n",
"M Stamatikos \nDept. of Physics and Center for Cosmology and Astro-Particle Physics\nOhio State University\n43210ColumbusOHUSA\n",
"T Stanev \nBartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA\n",
"A Stasik \nDESY\nD-15738ZeuthenGermany\n",
"R Stein \nDESY\nD-15738ZeuthenGermany\n",
"J Stettner \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"A Steuer \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany\n",
"T Stezelberger \nLawrence Berkeley National Laboratory\n94720BerkeleyCAUSA\n",
"R G Stokstad \nLawrence Berkeley National Laboratory\n94720BerkeleyCAUSA\n",
"A Stößl \nDept. of Physics and Institute for Global Prominent Research\nChiba University\n263-8522ChibaJapan\n",
"N L Strotjohann \nDESY\nD-15738ZeuthenGermany\n",
"S Tilav \nBartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA\n",
"P A Toale \nDept. of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA\n",
"M N Tobin \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"C Tönnis \nDept. of Physics\nSungkyunkwan University\nSuwon 440-746Korea\n",
"S Toscano \nVrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium\n",
"D Tosi \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"M Tselengidou ",
"C F Tung \nSchool of Physics and Center for Relativistic Astrophysics\nGeorgia Institute of Technology\n30332AtlantaGAUSA\n",
"A Turcati \nPhysik-department\nTechnische Universität München\nD-85748GarchingGermany\n",
"C F Turley \nDept. of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"B Ty \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"E Unger \nDept. of Physics and Astronomy\nUppsala University\nBox 516S-75120UppsalaSweden\n",
"M A Unland Elorrieta \nInstitut für Kernphysik\nWestfälische Wilhelms-Universität MünsterD-48149MünsterGermany\n",
"M Usner \nDESY\nD-15738ZeuthenGermany\n",
"J Vandenbroucke \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"W Van Driessche \nDept. of Physics and Astronomy\nUniversity of Gent\nB-9000GentBelgium\n",
"D Van Eijk \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"N Van Eijndhoven \nVrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium\n",
"S Vanheule \nDept. of Physics and Astronomy\nUniversity of Gent\nB-9000GentBelgium\n",
"J Van Santen \nDESY\nD-15738ZeuthenGermany\n",
"M Vraeghe \nDept. of Physics and Astronomy\nUniversity of Gent\nB-9000GentBelgium\n",
"C Walck \nOskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden\n",
"A Wallace \nDepartment of Physics\nUniversity of Adelaide\n5005AdelaideAustralia\n",
"M Wallraff \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"F D Wandler \nDept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany\n",
"N Wandkowsky \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"T B Watson \nDept. of Physics\nUniversity of Texas at Arlington\n502 Yates St., 2 Science Hall Rm 108Box 1905976019ArlingtonTXUSA\n",
"A Waza \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"C Weaver \nDept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany\n",
"M J Weiss \nDept. of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"C Wendt \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"J Werthebach \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"S Westerhoff \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"B J Whelan \nDepartment of Physics\nUniversity of Adelaide\n5005AdelaideAustralia\n",
"N Whitehorn \nDepartment of Physics and Astronomy\nUCLA\n90095Los AngelesCAUSA\n",
"K Wiebe \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany\n",
"C H Wiebusch \nIII. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany\n",
"L Wille \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"D R Williams \nDept. of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA\n",
"L Wills \nDept. of Physics\nDrexel University\n3141 Chestnut Street19104PhiladelphiaPAUSA\n",
"M Wolf \nPhysik-department\nTechnische Universität München\nD-85748GarchingGermany\n",
"J Wood \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"T R Wood \nDept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany\n",
"E Woolsey \nDept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany\n",
"K Woschnagg \nDept. of Physics\nUniversity of California\n94720BerkeleyCAUSA\n",
"G Wrede ",
"D L Xu \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"X W Xu \nDept. of Physics\nSouthern University\n70813Baton RougeLAUSA\n",
"Y Xu \nDept. of Physics and Astronomy\nStony Brook University\n11794-3800Stony BrookNYUSA\n",
"J P Yanez \nDept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany\n",
"G Yodh \nDept. of Physics and Astronomy\nUniversity of California\n92697IrvineCAUSA\n",
"S Yoshida \nDept. of Physics and Institute for Global Prominent Research\nChiba University\n263-8522ChibaJapan\n",
"T Yuan \nDept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n"
]
| [
"Dept. of Physics and Astronomy\nUniversity of Canterbury\nPrivate Bag 4800ChristchurchNew Zealand",
"DESY\nD-15738ZeuthenGermany",
"Dept. of Physics and Astronomy\nUniversity of Canterbury\nPrivate Bag 4800ChristchurchNew Zealand",
"Université Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium",
"Niels Bohr Institute\nUniversity of Copenhagen\nDK-2100CopenhagenDenmark",
"Oskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden",
"Département de physique nucléaire et corpusculaire\nUniversité de Genève\nCH-1211GenèveSwitzerland",
"Department of Physics\nMarquette University\n53201MilwaukeeWIUSA",
"Dept. of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Université Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium",
"Dept. of Physics\nMassachusetts Institute of Technology\n02139CambridgeMAUSA",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Dept. of Physics\nMassachusetts Institute of Technology\n02139CambridgeMAUSA",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Dept. of Physics and Astronomy\nUniversity of Canterbury\nPrivate Bag 4800ChristchurchNew Zealand",
"Physics Department\nSouth Dakota School of Mines and Technology, Rapid City\n57701SDUSA",
"Département de physique nucléaire et corpusculaire\nUniversité de Genève\nCH-1211GenèveSwitzerland",
"Dept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany",
"Dept. of Physics and Astronomy\nUniversity of California\n92697IrvineCAUSA",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany",
"Dept. of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Dept. of Physics and Center for Cosmology and Astro-Particle Physics\nOhio State University\n43210ColumbusOHUSA",
"Dept. of Astronomy\nOhio State University\n43210ColumbusOHUSA",
"Dept. of Physics\nUniversity of Wuppertal\nD-42119WuppertalGermany",
"Dept. of Physics and Astronomy\nUniversity of Rochester\n14627RochesterNYUSA",
"Dept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"DESY\nD-15738ZeuthenGermany",
"Dept. of Physics and Astronomy\nUniversity of Kansas\n66045LawrenceKSUSA",
"Dept. of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Dept. of Physics\nUniversity of Wuppertal\nD-42119WuppertalGermany",
"Dept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"DESY\nD-15738ZeuthenGermany",
"Oskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden",
"Dept. of Physics\nTU Dortmund University\nD-44221DortmundGermany",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany",
"Dept. of Physics and Astronomy\nUppsala University\nBox 516S-75120UppsalaSweden",
"Niels Bohr Institute\nUniversity of Copenhagen\nDK-2100CopenhagenDenmark",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"DESY\nD-15738ZeuthenGermany",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"DESY\nD-15738ZeuthenGermany",
"Département de physique nucléaire et corpusculaire\nUniversité de Genève\nCH-1211GenèveSwitzerland",
"DESY\nD-15738ZeuthenGermany",
"Dept. of Physics and Astronomy\nUppsala University\nBox 516S-75120UppsalaSweden",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Département de physique nucléaire et corpusculaire\nUniversité de Genève\nCH-1211GenèveSwitzerland",
"Dept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Département de physique nucléaire et corpusculaire\nUniversité de Genève\nCH-1211GenèveSwitzerland",
"SNOLAB\n1039 Regional Road 24, Creighton Mine 9, LivelyP3Y 1N2ONCanada",
"Institut für Kernphysik\nWestfälische Wilhelms-Universität MünsterD-48149MünsterGermany",
"Dept. of Physics\nMassachusetts Institute of Technology\n02139CambridgeMAUSA",
"Dept. of Physics\nMassachusetts Institute of Technology\n02139CambridgeMAUSA",
"Vrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium",
"Vrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium",
"Dept. of Astronomy and Astrophysics\nPennsylvania State University\n16802University ParkPAUSA",
"Dept. of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Dept. of Physics and Astronomy\nUniversity of Rochester\n14627RochesterNYUSA",
"School of Physics and Center for Relativistic Astrophysics\nGeorgia Institute of Technology\n30332AtlantaGAUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics and Astronomy\nMichigan State University\n48824East LansingMIUSA",
"Vrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium",
"Dept. of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Bartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA",
"Oskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden",
"Dept. of Physics and Astronomy\nUniversity of Gent\nB-9000GentBelgium",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Vrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium",
"Vrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium",
"Institut für Physik\nFakultät für Physik & Astronomie\nRuhr-Universität Bochum\nHumboldt-Universität zu BerlinD-12489, D-44780Berlin, BochumGermany, Germany",
"Dept. of Physics and Astronomy\nMichigan State University\n48824East LansingMIUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany",
"Dept. of Physics\nSungkyunkwan University\nSuwon 440-746Korea",
"Oskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden",
"Dept. of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Physics Department\nSouth Dakota School of Mines and Technology, Rapid City\n57701SDUSA",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany",
"Dept. of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Department of Physics and Astronomy\nUniversity of Leicester\nLE1 7RHLeicesterUK",
"Bartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics\nSouthern University\n70813Baton RougeLAUSA",
"Dept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Dept. of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Oskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden",
"DESY\nD-15738ZeuthenGermany",
"Dept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany",
"Bartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA",
"Dept. of Astronomy\nUniversity of Wisconsin\n53706MadisonWIUSA",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany",
"Physik-department\nTechnische Universität München\nD-85748GarchingGermany",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Bartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA",
"Dept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Dept. of Physics and Astronomy\nUppsala University\nBox 516S-75120UppsalaSweden",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Institut für Physik\nFakultät für Physik & Astronomie\nRuhr-Universität Bochum\nHumboldt-Universität zu BerlinD-12489, D-44780Berlin, BochumGermany, Germany",
"Université Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium",
"Dept. of Physics\nUniversity of Wuppertal\nD-42119WuppertalGermany",
"Dept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Dept. of Physics\nUniversity of Wuppertal\nD-42119WuppertalGermany",
"Dept. of Physics and Astronomy\nMichigan State University\n48824East LansingMIUSA",
"Department of Physics\nUniversity of Adelaide\n5005AdelaideAustralia",
"Dept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Dept. of Physics\nUniversity of Wuppertal\nD-42119WuppertalGermany",
"Dept. of Physics\nTU Dortmund University\nD-44221DortmundGermany",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Physik-department\nTechnische Universität München\nD-85748GarchingGermany",
"Oskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden",
"Dept. of Physics\nTU Dortmund University\nD-44221DortmundGermany",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics\nSungkyunkwan University\nSuwon 440-746Korea",
"Université Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium",
"Dept. of Physics and Institute for Global Prominent Research\nChiba University\n263-8522ChibaJapan",
"DESY\nD-15738ZeuthenGermany",
"CTSPS\nClark-Atlanta University\n30314AtlantaGAUSA",
"Dept. of Physics\nSungkyunkwan University\nSuwon 440-746Korea",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics\nUniversity of Texas at Arlington\n502 Yates St., 2 Science Hall Rm 108Box 1905976019ArlingtonTXUSA",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Dept. of Physics\nSungkyunkwan University\nSuwon 440-746Korea",
"Institut für Kernphysik\nWestfälische Wilhelms-Universität MünsterD-48149MünsterGermany",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany",
"DESY\nD-15738ZeuthenGermany",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics\nSungkyunkwan University\nSuwon 440-746Korea",
"DESY\nD-15738ZeuthenGermany",
"Dept. of Physics and Astronomy\nStony Brook University\n11794-3800Stony BrookNYUSA",
"Dept. of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Bartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA",
"Institut für Physik\nFakultät für Physik & Astronomie\nRuhr-Universität Bochum\nHumboldt-Universität zu BerlinD-12489, D-44780Berlin, BochumGermany, Germany",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany",
"Dept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany",
"Dept. of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Niels Bohr Institute\nUniversity of Copenhagen\nDK-2100CopenhagenDenmark",
"Institut für Physik\nFakultät für Physik & Astronomie\nRuhr-Universität Bochum\nHumboldt-Universität zu BerlinD-12489, D-44780Berlin, BochumGermany, Germany",
"DESY\nD-15738ZeuthenGermany",
"Physik-department\nTechnische Universität München\nD-85748GarchingGermany",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany",
"DESY\nD-15738ZeuthenGermany",
"Dept. of Physics\nDrexel University\n3141 Chestnut Street19104PhiladelphiaPAUSA",
"Department of Physics\nUniversity of Adelaide\n5005AdelaideAustralia",
"Dept. of Physics and Astronomy\nUniversity of Gent\nB-9000GentBelgium",
"Dept. of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Niels Bohr Institute\nUniversity of Copenhagen\nDK-2100CopenhagenDenmark",
"Dept. of Physics\nUniversity of Wuppertal\nD-42119WuppertalGermany",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany",
"Institut für Kernphysik\nWestfälische Wilhelms-Universität MünsterD-48149MünsterGermany",
"Dept. of Physics and Institute for Global Prominent Research\nChiba University\n263-8522ChibaJapan",
"Vrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics\nUniversity of Wisconsin\nRiver Falls54022WIUSA",
"Vrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium",
"Dept. of Physics and Astronomy\nMichigan State University\n48824East LansingMIUSA",
"Dept. of Physics and Institute for Global Prominent Research\nChiba University\n263-8522ChibaJapan",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Université Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium",
"Dept. of Physics\nYale University\n06520New HavenCTUSA",
"Dept. of Physics and Institute for Global Prominent Research\nChiba University\n263-8522ChibaJapan",
"Dept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Dept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Dept. of Physics\nUniversity of Wuppertal\nD-42119WuppertalGermany",
"Dept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Université Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium",
"Department of Physics and Astronomy\nUniversity of Leicester\nLE1 7RHLeicesterUK",
"Oskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden",
"Dept. of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Bartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA",
"Dept. of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany",
"Dept. of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA",
"Dept. of Physics and Astronomy\nUppsala University\nBox 516S-75120UppsalaSweden",
"Dept. of Physics\nTU Dortmund University\nD-44221DortmundGermany",
"Université Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nMarquette University\n53201MilwaukeeWIUSA",
"Dept. of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Université Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium",
"Niels Bohr Institute\nUniversity of Copenhagen\nDK-2100CopenhagenDenmark",
"DESY\nD-15738ZeuthenGermany",
"Dept. of Physics and Astronomy\nUniversity of Alaska Anchorage\n3211 Providence Dr99508AnchorageAKUSA",
"Physik-department\nTechnische Universität München\nD-85748GarchingGermany",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Dept. of Physics\nDrexel University\n3141 Chestnut Street19104PhiladelphiaPAUSA",
"Université Libre de Bruxelles\nScience Faculty CP230B-1050BrusselsBelgium",
"Physik-department\nTechnische Universität München\nD-85748GarchingGermany",
"Dept. of Physics\nTU Dortmund University\nD-44221DortmundGermany",
"Dept. of Physics\nDrexel University\n3141 Chestnut Street19104PhiladelphiaPAUSA",
"Department of Physics\nUniversity of Adelaide\n5005AdelaideAustralia",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Dept. of Physics\nSungkyunkwan University\nSuwon 440-746Korea",
"Dept. of Physics\nTU Dortmund University\nD-44221DortmundGermany",
"Dept. of Physics and Astronomy\nUniversity of Gent\nB-9000GentBelgium",
"Dept. of Physics and Astronomy\nMichigan State University\n48824East LansingMIUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany",
"Dept. of Physics\nTU Dortmund University\nD-44221DortmundGermany",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany",
"Dept. of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA",
"Niels Bohr Institute\nUniversity of Copenhagen\nDK-2100CopenhagenDenmark",
"Dept. of Physics\nUniversity of Oxford\n1 Keble RoadOX1 3NPOxfordUK",
"Dept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany",
"DESY\nD-15738ZeuthenGermany",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Dept. of Physics\nTU Dortmund University\nD-44221DortmundGermany",
"Dept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Dept. of Physics\nDrexel University\n3141 Chestnut Street19104PhiladelphiaPAUSA",
"Bartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA",
"Dept. of Physics\nUniversity of Wisconsin\nRiver Falls54022WIUSA",
"Dept. of Physics\nTU Dortmund University\nD-44221DortmundGermany",
"Bartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA",
"Dept. of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Dept. of Physics\nUniversity of Wisconsin\nRiver Falls54022WIUSA",
"DESY\nD-15738ZeuthenGermany",
"DESY\nD-15738ZeuthenGermany",
"Dept. of Physics and Center for Cosmology and Astro-Particle Physics\nOhio State University\n43210ColumbusOHUSA",
"Bartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA",
"DESY\nD-15738ZeuthenGermany",
"DESY\nD-15738ZeuthenGermany",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Dept. of Physics and Institute for Global Prominent Research\nChiba University\n263-8522ChibaJapan",
"DESY\nD-15738ZeuthenGermany",
"Bartol Research Institute and Dept. of Physics and Astronomy\nUniversity of Delaware\n19716NewarkDEUSA",
"Dept. of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics\nSungkyunkwan University\nSuwon 440-746Korea",
"Vrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"School of Physics and Center for Relativistic Astrophysics\nGeorgia Institute of Technology\n30332AtlantaGAUSA",
"Physik-department\nTechnische Universität München\nD-85748GarchingGermany",
"Dept. of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics and Astronomy\nUppsala University\nBox 516S-75120UppsalaSweden",
"Institut für Kernphysik\nWestfälische Wilhelms-Universität MünsterD-48149MünsterGermany",
"DESY\nD-15738ZeuthenGermany",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics and Astronomy\nUniversity of Gent\nB-9000GentBelgium",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Vrije Universiteit Brussel (VUB)\nDienst ELEMB-1050BrusselsBelgium",
"Dept. of Physics and Astronomy\nUniversity of Gent\nB-9000GentBelgium",
"DESY\nD-15738ZeuthenGermany",
"Dept. of Physics and Astronomy\nUniversity of Gent\nB-9000GentBelgium",
"Oskar Klein Centre and Dept. of Physics\nStockholm University\nSE-10691StockholmSweden",
"Department of Physics\nUniversity of Adelaide\n5005AdelaideAustralia",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Dept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics\nUniversity of Texas at Arlington\n502 Yates St., 2 Science Hall Rm 108Box 1905976019ArlingtonTXUSA",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Dept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany",
"Dept. of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nUniversity of Adelaide\n5005AdelaideAustralia",
"Department of Physics and Astronomy\nUCLA\n90095Los AngelesCAUSA",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 7D-55099MainzGermany",
"III. Physikalisches Institut\nRWTH Aachen University\nD-52056AachenGermany",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA",
"Dept. of Physics\nDrexel University\n3141 Chestnut Street19104PhiladelphiaPAUSA",
"Physik-department\nTechnische Universität München\nD-85748GarchingGermany",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany",
"Dept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany",
"Dept. of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dept. of Physics\nSouthern University\n70813Baton RougeLAUSA",
"Dept. of Physics and Astronomy\nStony Brook University\n11794-3800Stony BrookNYUSA",
"Dept. of Physics\nCentre for Astroparticle Physics\nUniversity of Alberta\nFriedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany",
"Dept. of Physics and Astronomy\nUniversity of California\n92697IrvineCAUSA",
"Dept. of Physics and Institute for Global Prominent Research\nChiba University\n263-8522ChibaJapan",
"Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA"
]
| []
| High-energy neutrino emission has been predicted for several short-lived astrophysical transients including gamma-ray bursts (GRBs), core-collapse supernovae (CCSNe) with choked jets and neutron star mergers. IceCube's optical and X-ray follow-up program searches for such transient sources by looking for two or more muon neutrino candidates in directional coincidence and arriving within 100 s. The measured rate of neutrino alerts is consistent with the expected rate of chance coincidences of atmospheric background events and no likely electromagnetic counterparts have been identified in Swift follow-up observations. Here, we calculate generic bounds on the neutrino flux of 3 short-lived transient sources. Assuming an E −2.5 neutrino spectrum, we find that the neutrino flux of rare sources, like long gamma-ray bursts, is constrained to < 5% of the detected astrophysical flux and the energy released in neutrinos (100 GeV to 10 PeV) by a median bright GRB-like source is < 10 52.5 erg. For a harder E −2.13 neutrino spectrum up to 30% of the flux could be produced by GRBs and the allowed median source energy is < 10 52 erg. A hypothetical population of transient sources has to be more common than 10 −5 Mpc −3 yr −1 (5 × 10 −8 Mpc −3 yr −1 for the E −2.13 spectrum) to account for the complete astrophysical neutrino flux. | 10.1103/physrevlett.122.051102 | [
"https://arxiv.org/pdf/1807.11492v3.pdf"
]
| 73,489,811 | 1807.11492 | d7606577cc8cbc0ae8a1eaf7d01a5e0752529cd2 |
Constraints on minute-scale transient astrophysical neutrino sources
1 Aug 2018
M G Aartsen
Dept. of Physics and Astronomy
University of Canterbury
Private Bag 4800ChristchurchNew Zealand
M Ackermann
DESY
D-15738ZeuthenGermany
J Adams
Dept. of Physics and Astronomy
University of Canterbury
Private Bag 4800ChristchurchNew Zealand
J A Aguilar
Université Libre de Bruxelles
Science Faculty CP230B-1050BrusselsBelgium
M Ahlers
Niels Bohr Institute
University of Copenhagen
DK-2100CopenhagenDenmark
M Ahrens
Oskar Klein Centre and Dept. of Physics
Stockholm University
SE-10691StockholmSweden
I Al Samarai
Département de physique nucléaire et corpusculaire
Université de Genève
CH-1211GenèveSwitzerland
D Altmann
K Andeen
Department of Physics
Marquette University
53201MilwaukeeWIUSA
T Anderson
Dept. of Physics
Pennsylvania State University
16802University ParkPAUSA
I Ansseau
Université Libre de Bruxelles
Science Faculty CP230B-1050BrusselsBelgium
G Anton
C Argüelles
Dept. of Physics
Massachusetts Institute of Technology
02139CambridgeMAUSA
J Auffenberg
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
S Axani
Dept. of Physics
Massachusetts Institute of Technology
02139CambridgeMAUSA
P Backes
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
H Bagherpour
Dept. of Physics and Astronomy
University of Canterbury
Private Bag 4800ChristchurchNew Zealand
X Bai
Physics Department
South Dakota School of Mines and Technology, Rapid City
57701SDUSA
A Barbano
Département de physique nucléaire et corpusculaire
Université de Genève
CH-1211GenèveSwitzerland
J P Barron
Dept. of Physics
Centre for Astroparticle Physics
University of Alberta
Friedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany
S W Barwick
Dept. of Physics and Astronomy
University of California
92697IrvineCAUSA
V Baum
Institute of Physics
University of Mainz
Staudinger Weg 7D-55099MainzGermany
R Bay
Dept. of Physics
University of California
94720BerkeleyCAUSA
J J Beatty
Dept. of Physics and Center for Cosmology and Astro-Particle Physics
Ohio State University
43210ColumbusOHUSA
Dept. of Astronomy
Ohio State University
43210ColumbusOHUSA
J Becker Tjus
K.-H Becker
Dept. of Physics
University of Wuppertal
D-42119WuppertalGermany
S Benzvi
Dept. of Physics and Astronomy
University of Rochester
14627RochesterNYUSA
D Berley
Dept. of Physics
University of Maryland
20742College ParkMDUSA
E Bernardini
DESY
D-15738ZeuthenGermany
D Z Besson
Dept. of Physics and Astronomy
University of Kansas
66045LawrenceKSUSA
G Binder
Dept. of Physics
University of California
94720BerkeleyCAUSA
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
D Bindig
Dept. of Physics
University of Wuppertal
D-42119WuppertalGermany
E Blaufuss
Dept. of Physics
University of Maryland
20742College ParkMDUSA
S Blot
DESY
D-15738ZeuthenGermany
C Bohm
Oskar Klein Centre and Dept. of Physics
Stockholm University
SE-10691StockholmSweden
M Börner
Dept. of Physics
TU Dortmund University
D-44221DortmundGermany
F Bos
S Böser
Institute of Physics
University of Mainz
Staudinger Weg 7D-55099MainzGermany
O Botner
Dept. of Physics and Astronomy
Uppsala University
Box 516S-75120UppsalaSweden
E Bourbeau
Niels Bohr Institute
University of Copenhagen
DK-2100CopenhagenDenmark
J Bourbeau
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
F Bradascio
DESY
D-15738ZeuthenGermany
J Braun
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
M Brenzke
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
H.-P Bretz
DESY
D-15738ZeuthenGermany
S Bron
Département de physique nucléaire et corpusculaire
Université de Genève
CH-1211GenèveSwitzerland
J Brostean-Kaiser
DESY
D-15738ZeuthenGermany
A Burgman
Dept. of Physics and Astronomy
Uppsala University
Box 516S-75120UppsalaSweden
R S Busse
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
T Carver
Département de physique nucléaire et corpusculaire
Université de Genève
CH-1211GenèveSwitzerland
E Cheung
Dept. of Physics
University of Maryland
20742College ParkMDUSA
D Chirkin
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
A Christov
Département de physique nucléaire et corpusculaire
Université de Genève
CH-1211GenèveSwitzerland
K Clark
SNOLAB
1039 Regional Road 24, Creighton Mine 9, LivelyP3Y 1N2ONCanada
L Classen
Institut für Kernphysik
Westfälische Wilhelms-Universität MünsterD-48149MünsterGermany
G H Collin
Dept. of Physics
Massachusetts Institute of Technology
02139CambridgeMAUSA
J M Conrad
Dept. of Physics
Massachusetts Institute of Technology
02139CambridgeMAUSA
P Coppin
Vrije Universiteit Brussel (VUB)
Dienst ELEMB-1050BrusselsBelgium
P Correa
Vrije Universiteit Brussel (VUB)
Dienst ELEMB-1050BrusselsBelgium
D F Cowen
Dept. of Astronomy and Astrophysics
Pennsylvania State University
16802University ParkPAUSA
Dept. of Physics
Pennsylvania State University
16802University ParkPAUSA
R Cross
Dept. of Physics and Astronomy
University of Rochester
14627RochesterNYUSA
P Dave
School of Physics and Center for Relativistic Astrophysics
Georgia Institute of Technology
30332AtlantaGAUSA
M Day
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
J P A M De André
Dept. of Physics and Astronomy
Michigan State University
48824East LansingMIUSA
C De Clercq
Vrije Universiteit Brussel (VUB)
Dienst ELEMB-1050BrusselsBelgium
J J Delaunay
Dept. of Physics
Pennsylvania State University
16802University ParkPAUSA
H Dembinski
Bartol Research Institute and Dept. of Physics and Astronomy
University of Delaware
19716NewarkDEUSA
K Deoskar
Oskar Klein Centre and Dept. of Physics
Stockholm University
SE-10691StockholmSweden
S De Ridder
Dept. of Physics and Astronomy
University of Gent
B-9000GentBelgium
P Desiati
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
K D De Vries
Vrije Universiteit Brussel (VUB)
Dienst ELEMB-1050BrusselsBelgium
G De Wasseige
Vrije Universiteit Brussel (VUB)
Dienst ELEMB-1050BrusselsBelgium
M De With
Institut für Physik
Fakultät für Physik & Astronomie
Ruhr-Universität Bochum
Humboldt-Universität zu BerlinD-12489, D-44780Berlin, BochumGermany, Germany
T Deyoung
Dept. of Physics and Astronomy
Michigan State University
48824East LansingMIUSA
J C Díaz-Vélez
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
V Di Lorenzo
Institute of Physics
University of Mainz
Staudinger Weg 7D-55099MainzGermany
H Dujmovic
Dept. of Physics
Sungkyunkwan University
Suwon 440-746Korea
J P Dumm
Oskar Klein Centre and Dept. of Physics
Stockholm University
SE-10691StockholmSweden
M Dunkman
Dept. of Physics
Pennsylvania State University
16802University ParkPAUSA
E Dvorak
Physics Department
South Dakota School of Mines and Technology, Rapid City
57701SDUSA
B Eberhardt
Institute of Physics
University of Mainz
Staudinger Weg 7D-55099MainzGermany
T Ehrhardt
Institute of Physics
University of Mainz
Staudinger Weg 7D-55099MainzGermany
B Eichmann
P Eller
Dept. of Physics
Pennsylvania State University
16802University ParkPAUSA
P A Evans
Department of Physics and Astronomy
University of Leicester
LE1 7RHLeicesterUK
P A Evenson
Bartol Research Institute and Dept. of Physics and Astronomy
University of Delaware
19716NewarkDEUSA
S Fahey
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
A R Fazely
Dept. of Physics
Southern University
70813Baton RougeLAUSA
J Felde
Dept. of Physics
University of Maryland
20742College ParkMDUSA
K Filimonov
Dept. of Physics
University of California
94720BerkeleyCAUSA
C Finley
Oskar Klein Centre and Dept. of Physics
Stockholm University
SE-10691StockholmSweden
A Franckowiak
DESY
D-15738ZeuthenGermany
E Friedman
Dept. of Physics
University of Maryland
20742College ParkMDUSA
A Fritz
Institute of Physics
University of Mainz
Staudinger Weg 7D-55099MainzGermany
T K Gaisser
Bartol Research Institute and Dept. of Physics and Astronomy
University of Delaware
19716NewarkDEUSA
J Gallagher
Dept. of Astronomy
University of Wisconsin
53706MadisonWIUSA
E Ganster
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
L Gerhardt
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
K Ghorbani
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
W Giang
Dept. of Physics
Centre for Astroparticle Physics
University of Alberta
Friedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany
T Glauch
Physik-department
Technische Universität München
D-85748GarchingGermany
T Glüsenkamp
A Goldschmidt
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
J G Gonzalez
Bartol Research Institute and Dept. of Physics and Astronomy
University of Delaware
19716NewarkDEUSA
D Grant
Dept. of Physics
Centre for Astroparticle Physics
University of Alberta
Friedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany
Z Griffith
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
C Haack
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
A Hallgren
Dept. of Physics and Astronomy
Uppsala University
Box 516S-75120UppsalaSweden
L Halve
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
F Halzen
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
K Hanson
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
D Hebecker
Institut für Physik
Fakultät für Physik & Astronomie
Ruhr-Universität Bochum
Humboldt-Universität zu BerlinD-12489, D-44780Berlin, BochumGermany, Germany
D Heereman
Université Libre de Bruxelles
Science Faculty CP230B-1050BrusselsBelgium
K Helbing
Dept. of Physics
University of Wuppertal
D-42119WuppertalGermany
R Hellauer
Dept. of Physics
University of Maryland
20742College ParkMDUSA
S Hickford
Dept. of Physics
University of Wuppertal
D-42119WuppertalGermany
J Hignight
Dept. of Physics and Astronomy
Michigan State University
48824East LansingMIUSA
G C Hill
Department of Physics
University of Adelaide
5005AdelaideAustralia
K D Hoffman
Dept. of Physics
University of Maryland
20742College ParkMDUSA
R Hoffmann
Dept. of Physics
University of Wuppertal
D-42119WuppertalGermany
T Hoinka
Dept. of Physics
TU Dortmund University
D-44221DortmundGermany
B Hokanson-Fasig
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
K Hoshina
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
F Huang
Dept. of Physics
Pennsylvania State University
16802University ParkPAUSA
M Huber
Physik-department
Technische Universität München
D-85748GarchingGermany
K Hultqvist
Oskar Klein Centre and Dept. of Physics
Stockholm University
SE-10691StockholmSweden
M Hünnefeld
Dept. of Physics
TU Dortmund University
D-44221DortmundGermany
R Hussain
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
Dept. of Physics
Sungkyunkwan University
Suwon 440-746Korea
S In
N Iovine
Université Libre de Bruxelles
Science Faculty CP230B-1050BrusselsBelgium
A Ishihara
Dept. of Physics and Institute for Global Prominent Research
Chiba University
263-8522ChibaJapan
E Jacobi
DESY
D-15738ZeuthenGermany
G S Japaridze
CTSPS
Clark-Atlanta University
30314AtlantaGAUSA
M Jeong
Dept. of Physics
Sungkyunkwan University
Suwon 440-746Korea
K Jero
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
B J P Jones
Dept. of Physics
University of Texas at Arlington
502 Yates St., 2 Science Hall Rm 108Box 1905976019ArlingtonTXUSA
P Kalaczynski
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
W Kang
Dept. of Physics
Sungkyunkwan University
Suwon 440-746Korea
A Kappes
Institut für Kernphysik
Westfälische Wilhelms-Universität MünsterD-48149MünsterGermany
D Kappesser
Institute of Physics
University of Mainz
Staudinger Weg 7D-55099MainzGermany
T Karg
DESY
D-15738ZeuthenGermany
A Karle
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
U Katz
M Kauer
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
A Keivani
Dept. of Physics
Pennsylvania State University
16802University ParkPAUSA
J L Kelley
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
A Kheirandish
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
J Kim
Dept. of Physics
Sungkyunkwan University
Suwon 440-746Korea
T Kintscher
DESY
D-15738ZeuthenGermany
J Kiryluk
Dept. of Physics and Astronomy
Stony Brook University
11794-3800Stony BrookNYUSA
T Kittler
S R Klein
Dept. of Physics
University of California
94720BerkeleyCAUSA
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
R Koirala
Bartol Research Institute and Dept. of Physics and Astronomy
University of Delaware
19716NewarkDEUSA
H Kolanoski
Institut für Physik
Fakultät für Physik & Astronomie
Ruhr-Universität Bochum
Humboldt-Universität zu BerlinD-12489, D-44780Berlin, BochumGermany, Germany
L Köpke
Institute of Physics
University of Mainz
Staudinger Weg 7D-55099MainzGermany
C Kopper
Dept. of Physics
Centre for Astroparticle Physics
University of Alberta
Friedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany
S Kopper
Dept. of Physics and Astronomy
University of Alabama
35487TuscaloosaALUSA
J P Koschinsky
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
D J Koskinen
Niels Bohr Institute
University of Copenhagen
DK-2100CopenhagenDenmark
M Kowalski
Institut für Physik
Fakultät für Physik & Astronomie
Ruhr-Universität Bochum
Humboldt-Universität zu BerlinD-12489, D-44780Berlin, BochumGermany, Germany
DESY
D-15738ZeuthenGermany
K Krings
Physik-department
Technische Universität München
D-85748GarchingGermany
M Kroll
G Krückl
Institute of Physics
University of Mainz
Staudinger Weg 7D-55099MainzGermany
S Kunwar
DESY
D-15738ZeuthenGermany
N Kurahashi
Dept. of Physics
Drexel University
3141 Chestnut Street19104PhiladelphiaPAUSA
A Kyriacou
Department of Physics
University of Adelaide
5005AdelaideAustralia
M Labare
Dept. of Physics and Astronomy
University of Gent
B-9000GentBelgium
J L Lanfranchi
Dept. of Physics
Pennsylvania State University
16802University ParkPAUSA
M J Larson
Niels Bohr Institute
University of Copenhagen
DK-2100CopenhagenDenmark
F Lauber
Dept. of Physics
University of Wuppertal
D-42119WuppertalGermany
K Leonard
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
M Leuermann
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
Q R Liu
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
E Lohfink
Institute of Physics
University of Mainz
Staudinger Weg 7D-55099MainzGermany
C J Lozano Mariscal
Institut für Kernphysik
Westfälische Wilhelms-Universität MünsterD-48149MünsterGermany
L Lu
Dept. of Physics and Institute for Global Prominent Research
Chiba University
263-8522ChibaJapan
J Lünemann
Vrije Universiteit Brussel (VUB)
Dienst ELEMB-1050BrusselsBelgium
W Luszczak
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
J Madsen
Dept. of Physics
University of Wisconsin
River Falls54022WIUSA
G Maggi
Vrije Universiteit Brussel (VUB)
Dienst ELEMB-1050BrusselsBelgium
K B M Mahn
Dept. of Physics and Astronomy
Michigan State University
48824East LansingMIUSA
Y Makino
Dept. of Physics and Institute for Global Prominent Research
Chiba University
263-8522ChibaJapan
S Mancina
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
I C Mariş
Université Libre de Bruxelles
Science Faculty CP230B-1050BrusselsBelgium
R Maruyama
Dept. of Physics
Yale University
06520New HavenCTUSA
K Mase
Dept. of Physics and Institute for Global Prominent Research
Chiba University
263-8522ChibaJapan
R Maunu
Dept. of Physics
University of Maryland
20742College ParkMDUSA
S C Nowicki
Dept. of Physics
Centre for Astroparticle Physics
University of Alberta
Friedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany
D R Nygren
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
A Obertacke Pollmann
Dept. of Physics
University of Wuppertal
D-42119WuppertalGermany
A Olivas
Dept. of Physics
University of Maryland
20742College ParkMDUSA
A O'murchadha
Université Libre de Bruxelles
Science Faculty CP230B-1050BrusselsBelgium
J P Osborne
Department of Physics and Astronomy
University of Leicester
LE1 7RHLeicesterUK
E O'sullivan
Oskar Klein Centre and Dept. of Physics
Stockholm University
SE-10691StockholmSweden
T Palczewski
Dept. of Physics
University of California
94720BerkeleyCAUSA
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
H Pandya
Bartol Research Institute and Dept. of Physics and Astronomy
University of Delaware
19716NewarkDEUSA
D V Pankova
Dept. of Physics
Pennsylvania State University
16802University ParkPAUSA
P Peiffer
Institute of Physics
University of Mainz
Staudinger Weg 7D-55099MainzGermany
J A Pepper
Dept. of Physics and Astronomy
University of Alabama
35487TuscaloosaALUSA
C Pérez De Los Heros
Dept. of Physics and Astronomy
Uppsala University
Box 516S-75120UppsalaSweden
D Pieloth
Dept. of Physics
TU Dortmund University
D-44221DortmundGermany
E Pinat
Université Libre de Bruxelles
Science Faculty CP230B-1050BrusselsBelgium
A Pizzuto
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
M Plum
Department of Physics
Marquette University
53201MilwaukeeWIUSA
P B Price
Dept. of Physics
University of California
94720BerkeleyCAUSA
G T Przybylski
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
C Raab
Université Libre de Bruxelles
Science Faculty CP230B-1050BrusselsBelgium
M Rameez
Niels Bohr Institute
University of Copenhagen
DK-2100CopenhagenDenmark
L Rauch
DESY
D-15738ZeuthenGermany
K Rawlins
Dept. of Physics and Astronomy
University of Alaska Anchorage
3211 Providence Dr99508AnchorageAKUSA
I C Rea
Physik-department
Technische Universität München
D-85748GarchingGermany
R Reimann
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
B Relethford
Dept. of Physics
Drexel University
3141 Chestnut Street19104PhiladelphiaPAUSA
G Renzi
Université Libre de Bruxelles
Science Faculty CP230B-1050BrusselsBelgium
E Resconi
Physik-department
Technische Universität München
D-85748GarchingGermany
W Rhode
Dept. of Physics
TU Dortmund University
D-44221DortmundGermany
M Richman
Dept. of Physics
Drexel University
3141 Chestnut Street19104PhiladelphiaPAUSA
S Robertson
Department of Physics
University of Adelaide
5005AdelaideAustralia
M Rongen
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
C Rott
Dept. of Physics
Sungkyunkwan University
Suwon 440-746Korea
T Ruhe
Dept. of Physics
TU Dortmund University
D-44221DortmundGermany
D Ryckbosch
Dept. of Physics and Astronomy
University of Gent
B-9000GentBelgium
D Rysewyk
Dept. of Physics and Astronomy
Michigan State University
48824East LansingMIUSA
I Safa
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
S E Sanchez Herrera
Dept. of Physics
Centre for Astroparticle Physics
University of Alberta
Friedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany
A Sandrock
Dept. of Physics
TU Dortmund University
D-44221DortmundGermany
J Sandroos
Institute of Physics
University of Mainz
Staudinger Weg 7D-55099MainzGermany
M Santander
Dept. of Physics and Astronomy
University of Alabama
35487TuscaloosaALUSA
S Sarkar
Niels Bohr Institute
University of Copenhagen
DK-2100CopenhagenDenmark
Dept. of Physics
University of Oxford
1 Keble RoadOX1 3NPOxfordUK
S Sarkar
Dept. of Physics
Centre for Astroparticle Physics
University of Alberta
Friedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany
K Satalecka
DESY
D-15738ZeuthenGermany
M Schaufel
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
P Schlunder
Dept. of Physics
TU Dortmund University
D-44221DortmundGermany
T Schmidt
Dept. of Physics
University of Maryland
20742College ParkMDUSA
A Schneider
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
J Schneider
S Schöneberg
L Schumacher
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
S Sclafani
Dept. of Physics
Drexel University
3141 Chestnut Street19104PhiladelphiaPAUSA
D Seckel
Bartol Research Institute and Dept. of Physics and Astronomy
University of Delaware
19716NewarkDEUSA
S Seunarine
Dept. of Physics
University of Wisconsin
River Falls54022WIUSA
J Soedingrekso
Dept. of Physics
TU Dortmund University
D-44221DortmundGermany
D Soldin
Bartol Research Institute and Dept. of Physics and Astronomy
University of Delaware
19716NewarkDEUSA
M Song
Dept. of Physics
University of Maryland
20742College ParkMDUSA
G M Spiczak
Dept. of Physics
University of Wisconsin
River Falls54022WIUSA
C Spiering
DESY
D-15738ZeuthenGermany
J Stachurska
DESY
D-15738ZeuthenGermany
M Stamatikos
Dept. of Physics and Center for Cosmology and Astro-Particle Physics
Ohio State University
43210ColumbusOHUSA
T Stanev
Bartol Research Institute and Dept. of Physics and Astronomy
University of Delaware
19716NewarkDEUSA
A Stasik
DESY
D-15738ZeuthenGermany
R Stein
DESY
D-15738ZeuthenGermany
J Stettner
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
A Steuer
Institute of Physics
University of Mainz
Staudinger Weg 7D-55099MainzGermany
T Stezelberger
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
R G Stokstad
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
A Stößl
Dept. of Physics and Institute for Global Prominent Research
Chiba University
263-8522ChibaJapan
N L Strotjohann
DESY
D-15738ZeuthenGermany
S Tilav
Bartol Research Institute and Dept. of Physics and Astronomy
University of Delaware
19716NewarkDEUSA
P A Toale
Dept. of Physics and Astronomy
University of Alabama
35487TuscaloosaALUSA
M N Tobin
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
C Tönnis
Dept. of Physics
Sungkyunkwan University
Suwon 440-746Korea
S Toscano
Vrije Universiteit Brussel (VUB)
Dienst ELEMB-1050BrusselsBelgium
D Tosi
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
M Tselengidou
C F Tung
School of Physics and Center for Relativistic Astrophysics
Georgia Institute of Technology
30332AtlantaGAUSA
A Turcati
Physik-department
Technische Universität München
D-85748GarchingGermany
C F Turley
Dept. of Physics
Pennsylvania State University
16802University ParkPAUSA
B Ty
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
E Unger
Dept. of Physics and Astronomy
Uppsala University
Box 516S-75120UppsalaSweden
M A Unland Elorrieta
Institut für Kernphysik
Westfälische Wilhelms-Universität MünsterD-48149MünsterGermany
M Usner
DESY
D-15738ZeuthenGermany
J Vandenbroucke
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
W Van Driessche
Dept. of Physics and Astronomy
University of Gent
B-9000GentBelgium
D Van Eijk
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
N Van Eijndhoven
Vrije Universiteit Brussel (VUB)
Dienst ELEMB-1050BrusselsBelgium
S Vanheule
Dept. of Physics and Astronomy
University of Gent
B-9000GentBelgium
J Van Santen
DESY
D-15738ZeuthenGermany
M Vraeghe
Dept. of Physics and Astronomy
University of Gent
B-9000GentBelgium
C Walck
Oskar Klein Centre and Dept. of Physics
Stockholm University
SE-10691StockholmSweden
A Wallace
Department of Physics
University of Adelaide
5005AdelaideAustralia
M Wallraff
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
F D Wandler
Dept. of Physics
Centre for Astroparticle Physics
University of Alberta
Friedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany
N Wandkowsky
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
T B Watson
Dept. of Physics
University of Texas at Arlington
502 Yates St., 2 Science Hall Rm 108Box 1905976019ArlingtonTXUSA
A Waza
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
C Weaver
Dept. of Physics
Centre for Astroparticle Physics
University of Alberta
Friedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany
M J Weiss
Dept. of Physics
Pennsylvania State University
16802University ParkPAUSA
C Wendt
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
J Werthebach
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
S Westerhoff
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
B J Whelan
Department of Physics
University of Adelaide
5005AdelaideAustralia
N Whitehorn
Department of Physics and Astronomy
UCLA
90095Los AngelesCAUSA
K Wiebe
Institute of Physics
University of Mainz
Staudinger Weg 7D-55099MainzGermany
C H Wiebusch
III. Physikalisches Institut
RWTH Aachen University
D-52056AachenGermany
L Wille
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
D R Williams
Dept. of Physics and Astronomy
University of Alabama
35487TuscaloosaALUSA
L Wills
Dept. of Physics
Drexel University
3141 Chestnut Street19104PhiladelphiaPAUSA
M Wolf
Physik-department
Technische Universität München
D-85748GarchingGermany
J Wood
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
T R Wood
Dept. of Physics
Centre for Astroparticle Physics
University of Alberta
Friedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany
E Woolsey
Dept. of Physics
Centre for Astroparticle Physics
University of Alberta
Friedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany
K Woschnagg
Dept. of Physics
University of California
94720BerkeleyCAUSA
G Wrede
D L Xu
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
X W Xu
Dept. of Physics
Southern University
70813Baton RougeLAUSA
Y Xu
Dept. of Physics and Astronomy
Stony Brook University
11794-3800Stony BrookNYUSA
J P Yanez
Dept. of Physics
Centre for Astroparticle Physics
University of Alberta
Friedrich-Alexander-Universität Erlangen-NürnbergT6G 2E1 24, D-91058Edmonton, Erlangen, ErlangenAlbertaCanada, Germany
G Yodh
Dept. of Physics and Astronomy
University of California
92697IrvineCAUSA
S Yoshida
Dept. of Physics and Institute for Global Prominent Research
Chiba University
263-8522ChibaJapan
T Yuan
Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
Constraints on minute-scale transient astrophysical neutrino sources
(Dated: August 2, 2018)
High-energy neutrino emission has been predicted for several short-lived astrophysical transients including gamma-ray bursts (GRBs), core-collapse supernovae (CCSNe) with choked jets and neutron star mergers. IceCube's optical and X-ray follow-up program searches for such transient sources by looking for two or more muon neutrino candidates in directional coincidence and arriving within 100 s. The measured rate of neutrino alerts is consistent with the expected rate of chance coincidences of atmospheric background events and no likely electromagnetic counterparts have been identified in Swift follow-up observations. Here, we calculate generic bounds on the neutrino flux of short-lived transient sources. Assuming an E −2.5 neutrino spectrum, we find that the neutrino flux of rare sources, like long gamma-ray bursts, is constrained to < 5% of the detected astrophysical flux and the energy released in neutrinos (100 GeV to 10 PeV) by a median bright GRB-like source is < 10 52.5 erg. For a harder E −2.13 neutrino spectrum up to 30% of the flux could be produced by GRBs and the allowed median source energy is < 10 52 erg. A hypothetical population of transient sources has to be more common than 10 −5 Mpc −3 yr −1 (5 × 10 −8 Mpc −3 yr −1 for the E −2.13 spectrum) to account for the complete astrophysical neutrino flux.
INTRODUCTION
An astrophysical neutrino flux at high energies (from ∼10 TeV to a few PeV) was discovered by the IceCube neutrino observatory [1][2][3]. The neutrino arrival directions are largely isotropic, suggesting a predominantly extragalactic origin. Possible sources include gamma-ray bursts (GRBs) [4][5][6][7], core-collapse supernovae (CCSNe) with choked jets [8][9][10] and active galactic nuclei (AGNs) [11][12][13][14][15] (see, e.g., Ref. [16] for a more extensive list). While several neutrino events have been associated with a blazar [17,18], blazars likely cannot account for the complete astrophysical flux [19]. The absence of luminous neutrino point sources [3,20,21] implies that the observed flux can only be emitted by a class of sufficiently numerous sources [21][22][23][24].
The IceCube detector consists of 5160 photomultipliers, each with a diameter of 25 cm, which are deployed in the glacier at the geographical South Pole at depths between 1450 and 2450 m, comprising a volume of 1 km³ [25]. Relativistic charged leptons and hadrons, created as secondary particles in neutrino interactions, can be detected via their Cherenkov radiation. While cosmic-ray-induced atmospheric muons can enter the detector from above, particles travelling upwards are most likely produced in neutrino interactions. Due to the track-like signature of muons, the arrival directions of muon neutrinos can be well reconstructed, with an angular resolution of about 1° depending on the deposited energy [20], which makes them a good channel for point-source searches.
In addition to searching for individual high-energy astrophysical neutrinos, IceCube has a dedicated optical and X-ray follow-up program which is triggered by bursts of two or more lower-energy events that could stem from a short-lived transient source, like a GRB or a CCSN with a choked jet [26][27][28]. Well-reconstructed track-like events with energies between 100 GeV and a few PeV are selected from the Northern sky. In this energy range most selected neutrino candidates are atmospheric background events, and the stream of incoming events is therefore scanned in real time for two or more tracks that are detected within 100 s and are consistent with a point-source origin. To look for a potential electromagnetic counterpart, follow-up observations for the least background-like alerts are obtained with the X-ray Telescope (XRT [29]) on board the Neil Gehrels Swift observatory, the 48-inch telescope of the Palomar Transient Factory (PTF [30,31]; until Feb. 2017), and the Robotic Optical Transient Search Experiment (ROTSE [32]; until Nov. 2015).
So far, no optical or X-ray transient sources have been positively associated with any of the neutrino multiplets [27,28,33]. As the alert rates are consistent with the background-only hypothesis, we find that strong constraints on the existence of short-lived transient populations can be derived from the IceCube data alone.
DETECTED NEUTRINO ALERTS
IceCube's optical and X-ray follow-up program was established in Dec. 2008 to search for short-lived transient neutrino sources; here we present results from the first five years of operation with the complete detector (Sept. 2011 to May 2016).
For the follow-up program we select track-like events from the Northern sky (for a detailed description of the event selection see Ref. [34]), which are detected at a rate of about 3 mHz. To suppress the dominant background of atmospheric neutrino and muon events, we search for two or more neutrino candidates with a temporal separation of less than 100 s and an angular separation of less than 3.5°. Doublets are alerts consisting of two coincident events, while we call alerts with three or more coincident events multiplets.
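As an illustration of the alert condition, the following Python sketch scans a list of events for pairs satisfying the 100 s and 3.5° cuts. It is a minimal stand-in, not IceCube's operational realtime code; the event arrays (times in seconds, equatorial coordinates in radians) are assumed inputs.

```python
import numpy as np

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation (radians) between two sky directions."""
    cos_psi = (np.sin(dec1) * np.sin(dec2)
               + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.arccos(np.clip(cos_psi, -1.0, 1.0))

def find_alerts(times, ra, dec, dt_max=100.0, psi_max=np.radians(3.5)):
    """Return index pairs of events within dt_max seconds and psi_max
    radians of each other, i.e. the doublet condition of the program."""
    times, ra, dec = np.asarray(times), np.asarray(ra), np.asarray(dec)
    order = np.argsort(times)
    t, r, d = times[order], ra[order], dec[order]
    pairs = []
    for i in range(len(t)):
        j = i + 1
        while j < len(t) and t[j] - t[i] <= dt_max:
            if angular_separation(r[i], d[i], r[j], d[j]) <= psi_max:
                pairs.append((int(order[i]), int(order[j])))
            j += 1
    return pairs
```

Triplets and higher multiplets then correspond to three or more events that are mutually coincident under the same cuts.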
Within the livetime of 1648.1 days we selected in total 460 438 neutrino candidates. The selected data consist of ∼ 80% atmospheric neutrinos, ∼ 20% misreconstructed atmospheric muons from the Southern sky [35], and less than 1% astrophysical neutrinos, depending on the assumed spectral shape of the astrophysical neutrino flux.
Alerts can also be produced by chance coincidences of background events, and we calculate the rate of background alerts by randomly exchanging the detection times of events, as described in Ref. [28]. The expected background is 312.7 doublets, 0.341 triplets and only 5 × 10^−4 quadruplets within the analyzed livetime. We have observed 338 neutrino doublets and one neutrino triplet [28] (see Supplemental Material for more detail on the alerts [53]). We hence observe a small excess of doublets, with a significance of 1.4σ, which does not provide significant evidence for the existence of short-lived transient neutrino sources. The resulting 90% upper limit [36] on the number of astrophysical doublets is < 56, while the limit on the expected number of astrophysical triplets is < 4.0 within the analyzed livetime. We find that the triplet rate provides stronger constraints on the neutrino flux of transient source populations.
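The chance-coincidence expectation can be estimated along the same lines by scrambling the data, as in the hedged sketch below. It reuses the find_alerts sketch above; the number of trials and the random seed are arbitrary choices of this illustration, not values used in the analysis.

```python
import numpy as np

def background_alert_expectation(times, ra, dec, n_trials=1000, seed=0):
    """Estimate the number of chance-coincidence alerts by randomly
    exchanging the detection times of events while keeping their
    reconstructed directions fixed."""
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(n_trials):
        scrambled_times = rng.permutation(np.asarray(times))
        counts.append(len(find_alerts(scrambled_times, ra, dec)))
    return float(np.mean(counts))
```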
To quantify the significance of doublet alerts and decide whether follow-up observations should be initiated, we calculate a likelihood parameter λ defined in Ref. [27]. More significant doublets are characterized by a short time interval and a small angular separation. Further, preference is given to doublets with small errors on the average alert direction, to increase the chances of finding a potential counterpart in follow-up observations.
None of the neutrino alerts were significant enough to challenge the background-only hypothesis. The two most significant alerts were studied in great detail [27,28] and no likely electromagnetic counterpart was detected. Swift XRT follow-up observations have been obtained for 25 alerts. Those taken before Aug. 2014 are discussed in a separate paper [33]. No transient X-ray sources were identified above the predefined threshold.
The alert rates, doublet significances and Swift XRT follow-up observations hence do not provide evidence for the existence of a population of short-lived transient sources. In the following we therefore do not make use of the collected follow-up observations, but use the low rate of alerts with three or more neutrino candidates to calculate generic constraints on the neutrino emission of short-lived transient populations like GRBs and CCSNe.
SIMULATING TRANSIENT SOURCE POPULATIONS
The low rate of detected neutrino multiplets allows us to calculate limits on the neutrino flux of a population of transient sources with durations up to 100 s. For this purpose we simulate two types of transient source populations whose properties are chosen such that they are similar to long GRBs and CCSNe with a choked jet.
The redshift distributions for GRBs and CCSNe are taken from Refs. [37] and [38], respectively. The distribution for CCSNe peaks at a lower redshift of z ∼ 2 compared to that for GRBs, which peaks at z ∼ 3. We simulate sources in the Northern sky up to a redshift of z = 8 and use the cosmological parameters from Ref. [39]. Sources located at z > 4 contribute only 1% (5%) of the events for the CCSN-like (GRB-like) population and hence have only a small effect on the results.
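A simple way to realize such a redshift distribution in a simulation is inverse-CDF sampling of the cosmic rate density, weighted by comoving volume and cosmological time dilation. The sketch below is illustrative rather than the exact procedure of Refs. [37-39]; it assumes a Planck-like cosmology via astropy, and rate_density is a hypothetical user-supplied function of redshift.

```python
import numpy as np
from astropy.cosmology import Planck15

def sample_redshifts(n, rate_density, z_max=8.0, seed=2):
    """Draw source redshifts for a comoving rate density rate_density(z);
    dV/dz weights by volume, 1/(1+z) accounts for time dilation of the
    observed rate."""
    rng = np.random.default_rng(seed)
    z = np.linspace(1e-3, z_max, 2000)
    dn_dz = (rate_density(z)
             * Planck15.differential_comoving_volume(z).value  # Mpc^3 sr^-1
             / (1.0 + z))
    cdf = np.cumsum(dn_dz)
    cdf /= cdf[-1]
    return np.interp(rng.random(n), cdf, z)
```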
The distribution of GRB peak luminosities is relatively broad, spanning at least four orders of magnitude [37]. We assume that the neutrino peak luminosities of GRBs follow the distribution measured in gamma rays. The population of CCSNe does not show such extreme luminosity fluctuations at optical wavelengths [40], and we assume a narrow lognormal distribution with a width of 0.4 in log-10-space, corresponding to fluctuations of one astronomical magnitude. The fluctuations assumed for the GRB-like population are larger by a factor of 300. Ultimately, the neutrino luminosity functions of both populations are unknown; the two different scenarios allow us to quantify their influence on the detection probability.
Transient durations in the source restframe are drawn from a lognormal distribution centered around 11.2 s with a width of 0.58 in log-10-space, which approximately reproduces the duration distribution of long GRBs measured at Earth [54]. We hence assume that Swift's Burst Alert Telescope [41] is equally sensitive to all GRB durations and that the durations of the neutrino and gamma-ray emission are similar. CCSNe with choked jets have not yet been observed, but we choose to use the same duration distribution. We assume that the transient source instantaneously rises to its peak luminosity and then decays exponentially according to its simulated duration. The number of multiplet alerts does not depend on the shape of the light curve as long as the neutrinos arrive within 100 s.
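The per-source properties quoted above translate directly into a sampling recipe. The minimal sketch below draws restframe durations and CCSN-like peak luminosities and evaluates the assumed light curve; the choice of an e-folding time equal to the simulated duration and the arbitrary luminosity units are assumptions of this illustration.

```python
import numpy as np

def sample_source_properties(n, l_peak_median=1.0, seed=3):
    """Lognormal restframe durations (median 11.2 s, width 0.58 dex) and
    CCSN-like peak luminosities (lognormal width 0.4 dex)."""
    rng = np.random.default_rng(seed)
    duration_s = 10.0 ** rng.normal(np.log10(11.2), 0.58, size=n)
    l_peak = l_peak_median * 10.0 ** rng.normal(0.0, 0.4, size=n)
    return duration_s, l_peak

def light_curve(t, l_peak, duration_s):
    """Instantaneous rise at t = 0 followed by an exponential decay."""
    return np.where(t >= 0.0, l_peak * np.exp(-t / duration_s), 0.0)
```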
The neutrino emission of each source is assumed to follow a power law spectrum similar to the detected astrophysical neutrino flux
$\phi(E) = \phi_0 \times (E/\mathrm{GeV})^{-\gamma}$.   (1)
To account for the uncertainty on the measured neutrino flux, we use two different spectral shapes: a hard spectrum with γ = 2.13 and φ_0 = 4.0 × 10^−8 GeV^−1 cm^−2 s^−1 sr^−1, and a soft spectrum with γ = 2.5 and φ_0 = 7.1 × 10^−6 GeV^−1 cm^−2 s^−1 sr^−1. The normalization φ_0 is per neutrino flavor and includes both neutrinos and antineutrinos. The soft spectrum has been measured in a global fit extending down to an energy of 10 TeV [42], while the hard E −2.13 spectrum was found in an analysis restricted to track-like events from the Northern sky with energies ≳ 100 TeV [3].
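For reference, the two benchmark spectra can be encoded as a short function of energy; the per-flavor normalizations are the values quoted above.

```python
def astrophysical_flux(energy_gev, hard=False):
    """Per-flavor nu + nubar flux in GeV^-1 cm^-2 s^-1 sr^-1 for the soft
    (E^-2.5) and hard (E^-2.13) benchmark spectra."""
    gamma, phi0 = (2.13, 4.0e-8) if hard else (2.5, 7.1e-6)
    return phi0 * energy_gev ** (-gamma)
```

For example, astrophysical_flux(1e5) evaluates the soft spectrum at 100 TeV, and the hard=True flag switches to the through-going-track normalization.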
The sensitivity of the follow-up program is evaluated using simulated IceCube neutrino events, accounting for the detector acceptance and the effects of high-energy neutrino absorption in the Earth's core. During the data-taking period, data selection methods and reconstructions have been steadily improved. We account for these changes in our simulations.
The energy distributions of the events which pass all selection cuts are shown in Fig. 1. The total expected number of astrophysical neutrino track events within the livetime of 1648.1 days is about 470 and 2800 ν_μ for the E −2.13 and E −2.5 spectrum, respectively (see Table 2 in the Supplemental Material [55] for more details).
Here we extrapolate the power-law neutrino flux down to 100 GeV. Such a spectrum is expected if the neutrinos are produced in pp interactions; for pγ interactions, however, there would be a low-energy cutoff [24]. Above the threshold of 10 TeV, where the astrophysical flux is constrained by data [43], we expect about 280 or 910 ν_μ, respectively.
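The expected event counts follow from folding these spectra with the selection's muon-neutrino effective area over the livetime and the Northern-sky solid angle. The sketch below takes a user-supplied effective-area curve (a_eff_cm2, a hypothetical callable; IceCube's actual effective area is not reproduced here) and integrates between configurable energy bounds.

```python
import numpy as np

def expected_events(a_eff_cm2, gamma=2.5, phi0=7.1e-6,
                    e_min_gev=1e2, e_max_gev=1e7,
                    livetime_s=1648.1 * 86400.0,
                    solid_angle_sr=2.0 * np.pi):
    """Number of nu_mu events expected from a power-law flux folded with
    an effective-area curve a_eff_cm2(E/GeV) given in cm^2."""
    e = np.logspace(np.log10(e_min_gev), np.log10(e_max_gev), 500)
    integrand = phi0 * e ** (-gamma) * a_eff_cm2(e)
    # trapezoidal integration over the energy grid
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(e))
    return livetime_s * solid_angle_sr * integral
```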
GENERIC CONSTRAINTS
The simulated source populations are used to infer limits on the neutrino emission of short transient sources. We vary both the rate of sources and the neutrino flux emitted by the complete population, to rule out scenarios which produce more than one detected neutrino multiplet within the analyzed livetime at the 90% confidence level.
While the source rate is a free parameter in the final result, here we discuss two example scenarios in more detail: in the first case we constrain the neutrino emission of a GRB-like population, while in the second one we assume that 1% of all CCSNe contribute to the astrophysical neutrino flux (e.g. because they contain choked jets pointed towards Earth; see also Refs. [44][45][46]). The local rates of GRBs and CCSNe are taken from Refs. [47,48] and [49], respectively. These rates allow us to convert between the local source rate and the number of transients (see Table I).
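The conversion from a local volumetric rate to a number of transients amounts to integrating the redshift-dependent rate density over comoving volume, livetime, and the Northern-sky solid angle. A sketch under a Planck-like cosmology is given below; rate_evolution is a hypothetical input describing how the rate scales with redshift and is not specified in the text.

```python
import numpy as np
from astropy.cosmology import Planck15

def number_of_transients(local_rate_mpc3_yr, rate_evolution, z_max=8.0,
                         livetime_yr=1648.1 / 365.25,
                         solid_angle_sr=2.0 * np.pi):
    """Number of transients within z <= z_max in the Northern sky during
    the analyzed livetime, for a local rate in Mpc^-3 yr^-1."""
    z = np.linspace(1e-3, z_max, 2000)
    dvdz = Planck15.differential_comoving_volume(z).value   # Mpc^3 sr^-1
    rate_z = local_rate_mpc3_yr * rate_evolution(z) / (1.0 + z)
    integrand = rate_z * dvdz
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))
    return livetime_yr * solid_angle_sr * integral
```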
We then vary the neutrino flux of the source populations and calculate the expected number of detected neutrino events for each source. This number depends on the source redshift, peak luminosity, transient duration, and zenith direction. We use a Poisson distribution to calculate how likely it is that one, two, or more than two neutrinos are detected from a source (shown in parentheses in Table I).
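The Poisson step is straightforward; given the expected number of detected events mu for a single source, the relevant probabilities are:

```python
from scipy.stats import poisson

def detection_probabilities(mu):
    """Probability that exactly one, exactly two, or more than two
    neutrinos are detected from a source with expectation mu."""
    p_one = poisson.pmf(1, mu)
    p_two = poisson.pmf(2, mu)
    p_more = poisson.sf(2, mu)   # P(N > 2)
    return p_one, p_two, p_more
```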
The probability that the reconstructed directions of two neutrinos from the same source are separated by more than 3.5° depends strongly on the neutrino energies and the zenith direction, with a median probability of 27% for the E −2.5 spectrum. Additional losses occur when the neutrinos arrive more than 100 s apart, which happens for 9% of the sources for the assumed duration and redshift distributions. Assuming that the population produces the entire astrophysical neutrino flux, the expected numbers of astrophysical doublet and multiplet alerts are shown in the middle part of Table I. Sources with a single detected event cannot produce an alert.
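For the assumed exponential light curve, the timing loss per source has a closed form: the difference of two independent exponential arrival times is itself exponentially distributed, so the chance that two detected neutrinos miss the 100 s window is exp(−Δt/τ_obs), with τ_obs the redshift-dilated decay time. A minimal sketch, assuming the decay time equals the simulated restframe duration:

```python
import numpy as np

def prob_outside_window(duration_rest_s, z, window_s=100.0):
    """Probability that two arrival times drawn from the same exponential
    decay differ by more than window_s seconds in the observer frame."""
    tau_obs = duration_rest_s * (1.0 + z)   # cosmological time dilation
    return np.exp(-window_s / tau_obs)
```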
Using the Feldman-Cousins method [36], we rule out scenarios in which the detection of more than one multiplet from signal or background (0.341 chance coincidences) is expected with 90% probability. We find that the expected number of astrophysical multiplets is constrained to < 4.0 within the analyzed livetime. We calculate limits on the population's neutrino emission and on the energy that the median source in the population can release in neutrinos in the energy range from 100 GeV to 10 PeV in the source restframe.
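The Feldman-Cousins limit quoted here can be reproduced with a short Poisson construction: for each trial signal strength, build the acceptance interval by likelihood-ratio ordering and keep the largest signal for which the observed count is still accepted. The sketch below is a simple scan, not an official implementation; with one observed multiplet on a background of 0.341 it returns a limit of roughly 4 signal multiplets, consistent with the value above.

```python
import numpy as np
from scipy.stats import poisson

def feldman_cousins_upper_limit(n_obs, background, cl=0.90,
                                mu_max=20.0, mu_step=0.005, n_max=100):
    """Feldman-Cousins upper limit on a Poisson signal mean in the
    presence of a known background expectation."""
    n = np.arange(n_max)
    upper = 0.0
    for mu in np.arange(0.0, mu_max, mu_step):
        p = poisson.pmf(n, mu + background)
        mu_best = np.maximum(n - background, 0.0)        # best-fit signal
        ratio = p / poisson.pmf(n, mu_best + background)
        order = np.argsort(ratio)[::-1]                  # rank by likelihood ratio
        accepted, coverage = set(), 0.0
        for i in order:
            accepted.add(int(n[i]))
            coverage += p[i]
            if coverage >= cl:
                break
        if n_obs in accepted:
            upper = mu
    return upper

# Example: one observed multiplet, 0.341 expected chance coincidences.
# print(feldman_cousins_upper_limit(1, 0.341))  # ~4 signal multiplets
```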
[Table I, partially recovered. Resulting limits (c): fraction of the diffuse flux < 30%, < 5%, < 250%, < 40%; median source energy in neutrinos [erg] < 10^52, < 10^52.5, < 10^50.5 (d), < 10^50.8; the four columns correspond to the GRB-like population (E −2.13 and E −2.5 spectrum) and to 1% of CCSNe (E −2.13 and E −2.5 spectrum). Footnotes: (a) Number of transients in the Northern sky within z ≲ 8 within the livetime of 1648.1 days. (b) Expected number of signal doublets and multiplets if the respective population accounts for 100% of the astrophysical neutrino flux; the numbers in parentheses do not include losses due to our cuts (two events within < 3.5° and 100 s). The total number of expected events is ∼ 470 for an E −2.13 spectrum and ∼ 2800 for an E −2.5 spectrum. (c) 90% c.l. upper limits on the neutrino emission (100 GeV to 10 PeV; flavor equipartition) based on the detection of only one multiplet. (d) The detected astrophysical flux yields a more constraining limit on the energy emitted in neutrinos of < 10^50.1 erg.]

Systematic errors on IceCube's sensitivity are dominated by the uncertainty on the optical efficiency of the detector and by scattering and absorption in the ice. To quantify these uncertainties, we repeat the analysis with the efficiency reduced by 10% and the ice absorption increased by 10%. Due to the lower number of detected neutrino events and the worse angular resolution, the number of multiplets decreases by 17% (14%) for the E −2.5 (E −2.13) spectrum. Figure 2 shows the upper limits, including systematic errors, on the source energy for the GRB-like and SN-like source populations. The width of the band includes the uncertainty on the neutrino spectrum, where the lower edge is for the E −2.13 spectrum and the upper one for the E −2.5 spectrum (compare the numbers in Table I). The dashed lines indicate the median source luminosity which would produce the complete detected flux for the E −2.5 spectrum. The corresponding lines for the E −2.13 spectrum are a factor of 13 lower. The ratio between the limits and the respective broken lines depicts the fraction of the detected astrophysical flux that a population with a given rate can at most produce (also given in the second-to-last row of Table I).
So far we have assumed that the observed astrophysical flux follows a power law spectrum down to energies of 100 GeV. The study was repeated using only events with energies above 10 TeV where the astrophysical flux has been measured. Without the extrapolation to lower energies both neutrino spectra yield similar results (compare also Fig. 1). The limit for the smaller energy range (shown in Fig. 1 in the Supplemental material [56]) is a factor of ∼ 1.5 lower compared to the lower edge of the bands shown in Fig. 2, but corresponds to a larger fraction of the astrophysical neutrino flux.
The typical distance of a transient source that produces a neutrino multiplet depends on the source luminosity and on the source rate of the population, and is large for most considered rates (e.g., a median distance of 100 Mpc for 1% of the CCSN rate and the E^-2.13 neutrino spectrum). Only for the full CCSN rate does the median distance decrease to ∼ 10 Mpc, such that local inhomogeneities in the universe might affect the multiplet rate [50].
As shown in Fig. 2 and Table I, we can constrain the neutrino emission from a GRB-like population to 5% of the astrophysical flux adopting the E −2.5 neutrino spectrum and to 30% for the E −2.13 spectrum. More frequent sources, such as NS-NS mergers [51] or CCSNe, can account for much or all of the astrophysical neutrino flux. However, the rates shown for those two source classes do not include a beaming factor. Assuming that a neutrino detection is only possible if the jet is pointed at us would reduce the source rate.
CCSN-like populations can only account for the complete astrophysical flux if their rate is larger than 10^-5 Mpc^-3 yr^-1 (5 × 10^-8 Mpc^-3 yr^-1) for an E^-2.5 (E^-2.13) spectrum. We can hence exclude rare transients with less than 15% (0.07%) of the CCSN rate [49] producing the entire astrophysical neutrino flux.
CONCLUSION
IceCube's optical and X-ray follow-up program triggers observations when multiple muon neutrino candidates are detected within 100 s and are directionally consistent with a common source origin. The observed alert rates can be explained by background and no likely neutrino source has been identified. Extrapolating the detected astrophysical neutrino flux to 100 GeV, we expect the detection of 470 to 2800 astrophysical muon neutrino events within the data collected over 1648.1 days. Based on the low rate of detected neutrino multiplets we calculate limits on the neutrino flux for two classes of short transient sources similar to GRBs and CCSNe with choked jets.
We find that a transient source population similar to long GRBs can at most account for 5% (30%) of the astrophysical neutrino flux for a neutrino spectrum of E^-2.5 (E^-2.13; see Fig. 2). This corresponds to a limit on the energy emitted in neutrinos within 100 s of < 10^52.5 erg (< 10^52 erg). Fewer neutrino multiplets are expected if the neutrino flux is emitted by a larger number of faint transients. A CCSN-like population can account for the complete flux if its rate at z = 0 is larger than 10^-5 Mpc^-3 yr^-1 (5 × 10^-8 Mpc^-3 yr^-1).
The derived limits are valid for transient sources with durations up to 100 s which follow the star formation rate or GRB redshift distribution. The neutrino emission of detected GRBs has been constrained to < 1% [52] of the astrophysical neutrino flux. However, the limits derived here are more general: They are solely based on neutrino detections and therefore also apply to sources that are not detected in electromagnetic radiation or that exhibit a time delay between the neutrino and electromagnetic signal.
The obtained limits strongly depend on the number of detected astrophysical neutrinos, which is determined by the event selection, the assumed neutrino spectrum, and the considered energy range. This is the likely cause of the different limits found in the literature [23,24]. Contrary to previous analyses, our results are based on a full simulation of the IceCube detector, including energy- and direction-dependent sensitivity and resolution, livetime, event selection, and alert generation. Our search for transient neutrino sources is ongoing [34], and real-time multiwavelength follow-up observations extend our sensitivity to sources which cannot be detected and identified by IceCube alone.
FIG. 1: Expected number of astrophysical neutrinos passing the event selection of the follow-up program within 1648.1 days of livetime. Two different fits to the measured flux are adopted (see Equation 1). The reconstructed energy can be much lower than the true neutrino energy shown here, since most track-like events are not contained within the instrumented volume.
FIG. 2: Limits on the median source energy (90% c.l.) emitted in neutrinos between 100 GeV and 10 PeV within 100 s. The area above the bands is excluded for CCSN-like (orange) and GRB-like (gray) populations, respectively. The upper edge of the limit corresponds to an E^-2.5 neutrino spectrum and the lower one to an E^-2.13 spectrum. The dashed lines show which source energy corresponds to 100% of the astrophysical flux for an E^-2.5 spectrum; the corresponding lines for an E^-2.13 spectrum would be lower by a factor of 13. The rates of long GRBs, NS-NS mergers and CCSNe are indicated. Beaming is included for long GRBs, but not for NS-NS mergers or CCSNe due to the unknown jet opening angles.
TABLE I: Expected number of alerts from simulated source populations and 90% upper limits on their neutrino emission. The limits were calculated based on the observation of only one neutrino triplet within the analyzed livetime.

population                    |        long GRBs        |       1% of CCSNe
spectral shape                |  E^-2.13   |   E^-2.5   |  E^-2.13   |   E^-2.5
rate [Mpc^-3 yr^-1]           |       4.2 × 10^-10      |       6.8 × 10^-7
# sources^a                   |          7200           |        5.9 × 10^6
Expected # of alerts:^b
# singlets (1 ν_µ)            |  0 (143)   |  0 (339)   |  0 (450)   |  0 (2470)
# doublets (2 ν_µ)            |  16 (26)   |  58 (92)   |  2.3 (4.0) |  33 (60)
# multiplets (≥ 3 ν_µ)        |  22 (28)   | 119 (144)  |  1.1 (1.5) |  19 (26)
Resulting limits:^c
frac. of diffuse flux         |   <30%     |   <5%      |   <250%    |   <40%
source ν energy [erg]         |  < 10^52   | < 10^52.5  | < 10^50.5^d| < 10^50.8

a: Number of transients in the Northern sky within z ≤ 8 within the livetime of 1648.1 days.
b: Expected number of signal doublets and multiplets if the respective population accounts for 100% of the astrophysical neutrino flux. The numbers in parentheses do not include losses due to our cuts (two events within < 3.5° and 100 s). The total number of expected events is ∼ 470 for an E^-2.13 spectrum and ∼ 2800 for an E^-2.5 spectrum.
c: 90% c.l. upper limits on the neutrino emission (100 GeV to 10 PeV; flavor equipartition) based on the detection of only one multiplet.
d: The detected astrophysical flux yields a more constraining limit on the energy emitted in neutrinos of < 10^50.1 erg.
This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. Funding for the Swift project in the UK is provided by the UK Space Agency.We acknowledge the work of Andreas Homeier who contributed to the development of this analysis.
M. G. Aartsen et al. (IceCube), Science 342, 1242856 (2013).
M. G. Aartsen et al. (IceCube), Phys. Rev. Lett. 113, 101101 (2014).
M. G. Aartsen et al. (IceCube), Astrophys. J. 833, 3 (2016).
E. Waxman and J. Bahcall, Phys. Rev. Lett. 78, 2292 (1997).
D. Guetta, D. Hooper, J. Alvarez-Muñiz, F. Halzen, and E. Reuveni, Astroparticle Physics 20, 429 (2004).
P. Mészáros, Reports on Progress in Physics 69, 2259 (2006).
P. Baerwald, M. Bustamante, and W. Winter, Astroparticle Physics 62, 66 (2015).
N. Fraija, Monthly Notices of the Royal Astronomical Society 437, 2187 (2014).
N. Senno, K. Murase, and P. Mészáros, Phys. Rev. D 93, 083003 (2016).
I. Tamborra and S. Ando, Phys. Rev. D 93, 053010 (2016).
F. W. Stecker, C. Done, M. H. Salamon, and P. Sommers, Phys. Rev. Lett. 66, 2697 (1991).
L. Sironi and A. Spitkovsky, Astrophys. J. 726, 75 (2011).
W. Essey, O. E. Kalashev, A. Kusenko, and J. F. Beacom, Phys. Rev. Lett. 104, 141102 (2010).
O. E. Kalashev, A. Kusenko, and W. Essey, Phys. Rev. Lett. 111, 041103 (2013).
K. Murase, Y. Inoue, and C. D. Dermer, Phys. Rev. D 90, 023007 (2014).
K. Murase, in American Institute of Physics Conference Series, vol. 1666 (2015), p. 040006.
M. Aartsen et al. (IceCube and others), Science 361, eaat1378 (2018).
M. Aartsen et al. (IceCube), Science 361, 147 (2018).
M. G. Aartsen et al. (IceCube), Astrophys. J. 835, 45 (2017), 1611.03874.
M. G. Aartsen et al. (IceCube), Astrophys. J. 835, 151 (2017).
M. G. Aartsen et al. (IceCube), ArXiv e-prints (2017), 1710.01179.
P. Lipari, Phys. Rev. D 78, 083011 (2008).
M. Ahlers and F. Halzen, Phys. Rev. Lett. 90, 043005 (2014).
K. Murase and E. Waxman, Phys. Rev. D 94, 103006 (2016).
M. G. Aartsen et al. (IceCube), Journal of Instrumentation 12, P03012 (2017).
R. Abbasi et al. (IceCube), A&A 539, A60 (2012).
M. G. Aartsen et al. (IceCube and others), Astrophys. J. 811, 52 (2015).
M. G. Aartsen et al. (IceCube and others), Astronomy and Astrophysics 607, A115 (2017).
D. N. Burrows, J. E. Hill, J. A. Nousek, J. A. Kennea, A. Wells, J. P. Osborne, A. F. Abbey, A. Beardmore, K. Mukerjee, A. D. T. Short, et al., Space Science Reviews 120, 165 (2005).
N. M. Law, S. R. Kulkarni, R. G. Dekany, E. O. Ofek, R. M. Quimby, P. E. Nugent, J. Surace, C. C. Grillmair, J. S. Bloom, M. M. Kasliwal, et al., Publications of the Astronomical Society of the Pacific 121, 1395 (2009).
A. Rau, S. R. Kulkarni, N. M. Law, J. S. Bloom, D. Ciardi, G. S. Djorgovski, D. B. Fox, A. Gal-Yam, C. C. Grillmair, M. M. Kasliwal, P. E. Nugent, et al., Publications of the Astronomical Society of the Pacific 121, 1334 (2009).
C. W. Akerlof, R. L. Kehoe, T. A. McKay, E. S. Rykoff, D. A. Smith, D. E. Casperson, K. E. McGowan, W. T. Vestrand, P. R. Wozniak, J. A. Wren, et al., Publications of the Astronomical Society of the Pacific 115, 132 (2003).
P. A. Evans, J. P. Osborne, J. A. Kennea, M. Smith, D. M. Palmer, N. Gehrels, J. M. Gelbord, A. Homeier, M. Voge, N. L. Strotjohann, et al., Monthly Notices of the Royal Astronomical Society 448, 2210 (2015).
M. G. Aartsen et al. (IceCube), Astroparticle Physics 92, 30 (2017).
M. Voge, Ph.D. thesis, Mathematisch-Naturwissenschaftliche Fakultät der Rheinischen Friedrich-Wilhelms-Universität Bonn, Germany (2016).
G. J. Feldman and R. D. Cousins, Phys. Rev. D 57, 3873 (1998).
D. Wanderman and T. Piran, Monthly Notices of the Royal Astronomical Society 406, 1944 (2010).
P. Madau and M. Dickinson, ARA&A 52, 415 (2014).
P. A. R. Ade et al. (Planck), A&A 594, A13 (2016).
D. Richardson, R. L. Jenkins III, J. Wright, and L. Maddox, Astronomical Journal 147, 118 (2014), 1403.5755.
S. D. Barthelmy et al., Space Science Reviews 120, 143 (2005).
M. G. Aartsen et al. (IceCube), Astrophys. J. 809, 98 (2015).
M. G. Aartsen et al. (IceCube), Phys. Rev. D 91, 022001 (2015).
A. M. Soderberg, E. Nakar, E. Berger, and S. R. Kulkarni, Astrophys. J. 638, 930 (2006), astro-ph/0507147.
E. Sobacchi, J. Granot, O. Bromberg, and M. C. Sormani, Monthly Notices of the Royal Astronomical Society 472, 616 (2017), 1705.00281.
P. B. Denton and I. Tamborra, ArXiv e-prints (2017), 1711.00470.
A. Lien, T. Sakamoto, N. Gehrels, D. M. Palmer, S. D. Barthelmy, C. Graziani, and J. K. Cannizzo, Astrophys. J. 783, 24 (2014).
A. Lien, T. Sakamoto, N. Gehrels, D. M. Palmer, S. D. Barthelmy, C. Graziani, and J. K. Cannizzo, Astrophys. J. 806, 276 (2015).
L.-G. Strolger, T. Dahlen, S. A. Rodney, O. Graur, A. G. Riess, C. McCully, S. Ravindranath, B. Mobasher, and A. K. Shahady, Astrophys. J. 813, 93 (2015).
A. V. Tikhonov and A. Klypin, Monthly Notices of the Royal Astronomical Society 395, 1915 (2009), 0807.0924.
B. P. Abbott et al. (LIGO and Virgo), Physical Review Letters 119, 161101 (2017).
M. G. Aartsen et al. (IceCube), Astrophys. J. 843, 112 (2017).
The durations of long GRBs from the Swift catalog are taken from http://swift.gsfc.nasa.gov/archive/grb_table/.
Supplemental Material at [URL will be inserted by publisher].
| []
|
[
"DIHEDRAL SIEVING PHENOMENA",
"DIHEDRAL SIEVING PHENOMENA"
]
| [
"Sujit Rao ",
"Joe Suk "
]
| []
| []
| Cyclic sieving is a well-known phenomenon where certain interesting polynomials, especially qanalogues, have useful interpretations related to actions and representations of the cyclic group. We propose a definition of sieving for an arbitrary group G and study it for the dihedral group I 2 pnq of order 2n. This requires understanding the generators of the representation ring of the dihedral group. For n odd, we exhibit several instances of "dihedral sieving" which involve the generalized Fibonomial coefficients, recently studied | 10.1016/j.disc.2020.111849 | [
"https://arxiv.org/pdf/1710.06517v3.pdf"
]
| 62,775,714 | 1710.06517 | 48f2f84496a7c7ba75b2e4626e54a58dc7cf78be |
DIHEDRAL SIEVING PHENOMENA
10 Sep 2018
Sujit Rao
Joe Suk
DIHEDRAL SIEVING PHENOMENA
10 Sep 2018arXiv:1710.06517v2 [math.CO]
Cyclic sieving is a well-known phenomenon where certain interesting polynomials, especially qanalogues, have useful interpretations related to actions and representations of the cyclic group. We propose a definition of sieving for an arbitrary group G and study it for the dihedral group I 2 pnq of order 2n. This requires understanding the generators of the representation ring of the dihedral group. For n odd, we exhibit several instances of "dihedral sieving" which involve the generalized Fibonomial coefficients, recently studied
Introduction
The cyclic sieving phenomenon was originally studied by Reiner, Stanton, and White in [8] in 2004 and has, since then, led to a greater understanding of the combinatorics of various finite sets with a natural cyclic action. In particular, cyclic sieving allows one to count the fixed points of a cyclic action on a finite set through an associated generating function. These generating functions often appear in other contexts, such as the generating function for permutation statistics related to Coxeter groups and as the Hilbert series of interesting graded rings. Proofs of cyclic sieving also tend to have interesting connections with representation theory.
We start by precisely defining cyclic sieving:
Definition 1.1.
[cyclic sieving phenomenon] Let X be a finite set, X(q) a polynomial with nonnegative integral coefficients, and Z/nZ a cyclic group of order n with a group action on X. Let ω : Z/nZ → C^× be the map defined by m ↦ e^{2πmi/n}. Then, we say the triple (X, X(q), Z/nZ) exhibits the cyclic sieving phenomenon if for all c ∈ Z/nZ,

(1.2)   X(q)|_{q=ω(c)} = |{x ∈ X : c(x) = x}|.

As X(1) = |X|, X(q) can be considered a q-analogue of the cardinality. Before discussing some classic examples of cyclic sieving, we recall the definition of the q-binomial coefficient \binom{n}{k}_q. First, let [n]_q = 1 + q + q^2 + ⋯ + q^{n−1} and let [n]!_q = [n]_q [n−1]_q ⋯ [2]_q [1]_q. Then, the q-binomial coefficient is defined as
(1.3)   \binom{n}{k}_q := \frac{[n]!_q}{[k]!_q [n−k]!_q}.

This is a rational function in q. It is not immediately obvious, but the q-binomial coefficient can also be shown to be a polynomial in q with nonnegative integral coefficients. MacMahon's q-Catalan number, defined similarly,

(1.4)   C_n(q) := \frac{1}{[n+1]_q} \binom{2n}{n}_q,

is also a polynomial in q with nonnegative integral coefficients. For one proof, see Theorem 1.6 of [6]. Now, we discuss some examples of cyclic sieving given in the seminal paper [8]. All of these are cases where the natural cyclic action of C on the set [n] := {1, . . . , n} induces an action on some collection of subsets of [n]. One of these collections is the collection of non-crossing partitions: a partition of [n] is non-crossing if its blocks do not cross when drawn on a disk whose boundary is labeled clockwise with 1, 2, . . . , n. Sometimes the action of C on [n] is interpreted geometrically via rotations of a regular n-gon. Example 1.5. Let C be a cyclic group of order n. Then, the following triples (X, X(q), C) exhibit the cyclic sieving phenomenon.
(1) Let X = {size-k multisubsets of [n]} and let X(q) = \binom{n+k−1}{k}_q. (2) Let X = {size-k subsets of [n]} and let X(q) = \binom{n}{k}_q. (3) Let X = {dissections of a convex n-gon using k diagonals} and let X(q) = \frac{1}{[n+k]_q} \binom{n+k}{k+1}_q \binom{n−3}{k}_q. (4)
Let X = {triangulations of a regular n-gon} and let X(q) = C_{n−2}(q). (5) Let X = {noncrossing partitions of an n-gon} and let X(q) = C_n(q). (6) Let X = {noncrossing partitions of an n-gon using n−k parts} and let X(q) = \frac{1}{[n]_q} \binom{n}{k}_q \binom{n}{k+1}_q q^{k(k+1)}.
2010 Mathematics Subject Classification. Primary: 05E18. Secondary: 05E10.
Note that in case (3), X(q) is a q-analogue of f(n, k) = \frac{1}{n+k} \binom{n+k}{k+1} \binom{n−3}{k}, a formula for the number of dissections of a convex n-gon using k diagonals, and in case (6), X(q) is a q-analogue of N(n, k) = \frac{1}{n} \binom{n}{k} \binom{n}{k+1}, the Narayana number, which counts the number of non-crossing partitions of [n] using n−k parts. Both of these are also polynomials in q for virtually the same reasons their corresponding formulas X(1) are integers. Also note that case (4) is a specific example of case (3) where k = n−3.
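For small cases, instance (2) of the example above can be checked by brute force. The following sketch (using sympy, with n = 6 and k = 2 as an arbitrary test case) compares fixed-point counts with evaluations of the q-binomial at powers of a primitive n-th root of unity.

```python
# Sketch verifying instance (2) of Example 1.5 for small n, k: the number of
# k-subsets of [n] fixed by c^d equals the q-binomial evaluated at zeta_n^d.
from itertools import combinations
import sympy as sp

q = sp.symbols('q')

def q_int(m):
    return sum(q**i for i in range(m))

def q_binomial(n, k):
    num = sp.prod([q_int(n - i) for i in range(k)])
    den = sp.prod([q_int(i + 1) for i in range(k)])
    return sp.cancel(num / den)        # a polynomial in q

def fixed_subsets(n, k, d):
    rot = lambda S: frozenset((x + d) % n for x in S)
    return sum(1 for S in map(frozenset, combinations(range(n), k)) if rot(S) == S)

n, k = 6, 2
zeta = sp.exp(2 * sp.pi * sp.I / n)
for d in range(n):
    value = complex(q_binomial(n, k).subs(q, zeta**d))
    assert abs(value - fixed_subsets(n, k, d)) < 1e-9
print("cyclic sieving verified for n = 6, k = 2")
```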
Every cyclic action can be equipped with a generating polynomial Xpqq to obtain an instance of cyclic sieving. If we require Xpqq be of degree at most n´1, then the choice of polynomial is unique. Thus, cyclic sieving can be made ubiquitous. However, the interest and fascination of this subject stem from the fact that some of these associated generating polynomials will arise in other contexts. In particular, those arising via q-binomials or q-Catalan numbers are of particular interest and their importance is discussed in [9].
Typically, we have an educated guess for an appropriate cyclic sieving polynomial for a particular cyclic action which has already been used in another context. Proving that a chosen polynomial produces an instance of cyclic sieving often happens in two ways: via direct computation of the generating polynomial or an understanding of the corresponding permutation representation. The following theorem is used in the latter way.
Defining sieving phenomena for other finite groups is a natural generalization for understanding other group actions. For the case of abelian groups, or direct products of cyclic groups, a natural definition of polycyclic sieving using a multivariate generating function was defined with accompanying instances, or examples, in [2].
In this work, we give a new definition of sieving phenomena for finite groups motivated by the representation theoretic perspective of cyclic sieving provided above and apply this to the case of dihedral group actions. We prove that the natural dihedral action in situations (1), (2), (5), and (6) in Example 1.5 have dihedral sieving for n odd. The analogous generating polynomials we obtain are all defined in terms of generalized Fibonacci polynomials, first studied by Hoggatt and Long in [7], and their induced generalized Fibonomial coefficients, recently studied by Amdeberhan, Chen, Moll, and Sagan in [1].
Preliminaries
Throughout the paper, we will use the following notation.
Notation 2.1. Let C " xcy be a cyclic group of order n. Let ω : C Ñ Cˆbe an embedding of C defined by c Þ Ñ e 2πi{n . This can also be considered a 1-dimensional complex representation of C.
If V is a representation of a group G, we will use χ V to refer to its character. If x P G, χ V pxq is the value of the character at x. If C Ă G is a conjugacy class, then χ V pCq is the value of the character on C as a class function. In the case where G " GL N pCq, we use χ V pdiagpx 1 , . . . , x N qq to denote the value of the character on any diagonalizable element of GL N pCq having eigenvalues x 1 , . . . , x N .
We will first give an equivalent definition of cyclic sieving based on representation theory, which is more suitable for adapting to other groups. Definition 2.2. Let G be a group and let A be the set of isomorphism classes of finite-dimensional G-representations over C. The representation ring of G with coefficients in Z is
Rep(G) = Z[A]/(I + J),
where Z[A] is the polynomial ring over Z freely generated by A, and I and J are the ideals I = ({[U ⊕ V] − ([U] + [V])}), J = ({[U ⊗ V] − [U][V]}), and [U] denotes the isomorphism class of a G-representation U.
We will often use the following facts about the representation ring of a group: the map which sends an isomorphism class of a representation to its character (in the ring of conjugacy class functions on G with pointwise multiplication) is an injective ring homomorphism whose image is the Z-span of characters of irreducible representations.
The following theorem gives one definition of cyclic sieving based on the representation-theoretic perspective. Consider a triple (X, X(q), C) as in the setup of Definition 1.1. Let A_X be a graded C-vector space A_X = ⊕_{i≥0} A_{X,i} having Σ_{i≥0} dim_C A_{X,i} q^i = X(q). A_X can be considered a representation of C in which each c ∈ C acts on the graded component A_{X,i} by the scalar ω(c)^i. Then, (X, X(q), C) has cyclic sieving if and only if we have an isomorphism of C-representations A_X ≅ C[X].
We can now state an equivalent definition of cyclic sieving based on the representation ring. This motivates the following general definition, which is key to all results in this work.
Definition 2.7. Let G be a group and ρ_1, . . . , ρ_k be representations of G over C which generate Rep(G) as a ring. Let X be a G-set and X(q_1, . . . , q_k) ∈ Z[q_1, . . . , q_k]. Then the quadruple (X, X(q_1, . . . , q_k), (ρ_1, . . . , ρ_k), G) has G-sieving if C[X] = X(ρ_1, . . . , ρ_k) in Rep(G).
Example 2.8. Let G " C and ρ 1 " ω. Then the definition G-sieving above agrees with the usual definition of cyclic sieving. Example 2.9. Let G " C nˆCm be a product of cyclic groups. Let ρ 1 " ω n b 1 m , where ω n : C n Ñ Cˆis an embedding and 1 m is the trivial representation of C m , and similarly let ρ 2 " 1 n b ω m . Then the definition of G-sieving above agrees with the definition of bicyclic sieving given in [2]. Remark 2.10. Given any G-set X and generators ρ 1 , . . . , ρ k of ReppGq, there is always at least one polynomial Xpq 1 , . . . , q k q which exhibits G-sieving for X. This follows directly from the fact that ρ 1 , . . . , ρ k generate ReppGq. However, there is no guarantee that there is a canonical or interesting choice of such a polynomial, especially if there are complicated relations between the generators.
Remark 2.11. Suppose we are given a set of points ta C u P C k indexed by conjugacy classes in a group G where k P N. Let X be a finite G-set, and p P Crq 1 , . . . , q k s such that ppa C q " χ CrXs pCq for all conjugacy classes C. Then pX, ppq 1 , . . . , q k q, pρ 1 , . . . , ρ k q, Gq exhibits G-sieving if ρ i P ReppGq corresponds to the class function on G defined by C Þ Ñ pa C q i . If instead of ReppGq we take C b ReppGq, then the virtual representations ρ i always exist.
We are typically interested in cases where the polynomial Xp¨q can be written in an interesting way, such as product formulas based on q-analogues.
Dihedral Sieving
By the observations in the previous section, we can describe I_2(n)-sieving in terms of a generating set of Rep(I_2(n)). We first need a description of the representation ring of the dihedral group. We start by briefly recalling the irreducible representations of I_2(n). We adhere to the presentation

(3.1)   I_2(n) = ⟨ r, s | r^n = s^2 = e, rs = sr^{−1} ⟩.
The irreducible representations and the representation ring will depend on whether n is odd or even. For n odd, there are two 1-dimensional irreducible representations, the trivial representation 1 and the determinant representation, and ⌊n/2⌋ 2-dimensional irreducible representations: the representations z_m which send r to a counterclockwise rotation matrix through 2πm/n radians and s to a reflection matrix, where m ∈ [1, n/2) ∩ N. For n even, there are four 1-dimensional irreducible representations: 1, det, χ_b (which sends ⟨r^2, s⟩ to 1 and r to −1), and det · χ_b. There are (n/2 − 1) 2-dimensional irreducible representations, defined the same way as in the n odd case. Character values for all of our group actions of interest are included in Table 6.1. We refer to [5] for the following. First, we have (for both n odd and even) the following relations among the irreducible representations:
det^2 = 1,   det · z_k = z_k,   z_{k+1} = z_k z_1 − z_{k−1} if k ≤ (n−3)/2,   where z_0 = 1 + det.
Thus, ReppI 2 pnqq is generated by det, z 1 for n odd and by det, z 1 , χ b for n even.
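The recursion above can be checked numerically at the level of characters. The sketch below (with n = 7 as an arbitrary odd example) uses χ_{z_k}(r^ℓ) = 2cos(2πkℓ/n), χ_{z_k}(reflection) = 0, and z_0 = 1 + det; the function names are ours.

```python
# Sketch checking the relation z_{k+1} = z_k * z_1 - z_{k-1} on characters of
# I_2(n) for n odd, treating z_k as a class function.
import numpy as np

n = 7  # odd, as in the cases treated here

def chi_z(k, element):
    kind, l = element           # ('rotation', l) or ('reflection', None)
    if k == 0:
        # z_0 = trivial + det: value 2 on rotations, 0 on reflections (n odd)
        return 2.0 if kind == 'rotation' else 0.0
    if kind == 'rotation':
        return 2.0 * np.cos(2 * np.pi * k * l / n)
    return 0.0

elements = [('rotation', l) for l in range(n)] + [('reflection', None)]
for k in range(1, 5):
    for g in elements:
        lhs = chi_z(k + 1, g)
        rhs = chi_z(k, g) * chi_z(1, g) - chi_z(k - 1, g)
        assert abs(lhs - rhs) < 1e-9
print("character recursion verified for n =", n)
```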
Remark 3.2. Using the equation z k`1 " z k z 1´zk´1 to define z k for all k P Z, it can be checked that z 0 " z n " 1`det. Thus z 1 by itself generates ReppI 2 pnqq when n is odd, and z 1 , χ b generate it when n is even. However, the expression of det in terms of z 1 depends on n, so it is more useful to think of I 2 pnq as being generated by z 1 and det.
The examples of dihedral sieving we exhibit are all for odd n and make use of the generalized Fibonacci polynomials and Fibonomial coefficients defined in [1]. We state some of their results here. The generalized Fibonacci polynomials are a sequence {n}_{s,t} of polynomials in N[s, t] defined inductively by
{0}_{s,t} = 0,   {1}_{s,t} = 1,   {n+2}_{s,t} = s {n+1}_{s,t} + t {n}_{s,t}.
We also define {n}!_{s,t} = {n}_{s,t} {n−1}_{s,t} ⋯ {1}_{s,t} and the Fibonomial coefficient to be

\binom{n}{k}_{s,t} := \frac{{n}!_{s,t}}{{k}!_{s,t} {n−k}!_{s,t}}.

Then, writing X and Y for the quantities satisfying X + Y = s and XY = −t,

{n}_{s,t} = Y^{n−1} [n]_q |_{q=X/Y}   and   \binom{n}{k}_{s,t} = Y^{k(n−k)} \binom{n}{k}_q |_{q=X/Y}.
The generalized Fibonacci polynomials have combinatorial interpretations related to tilings of rows of squares with monominoes and dominoes, which can be found in Section 1 of [1]. The Fibonomial coefficients have a similar interpretation related to tilings of a kˆpn´kq rectangle containing a partition, which can be found in [10].
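A short sympy sketch of these definitions (the function names are ours, not from [1]) computes {n}_{s,t} and the Fibonomial coefficients and illustrates that the latter simplify to polynomials.

```python
# Sketch computing the generalized Fibonacci polynomials {n}_{s,t} and the
# Fibonomial coefficients, and checking that the latter are polynomials in s, t.
import sympy as sp

s, t = sp.symbols('s t')

def fib(n):
    """{n}_{s,t}: {0} = 0, {1} = 1, {n+2} = s*{n+1} + t*{n}."""
    a, b = sp.Integer(0), sp.Integer(1)
    for _ in range(n):
        a, b = b, sp.expand(s * b + t * a)
    return a

def fib_factorial(n):
    return sp.prod([fib(i) for i in range(1, n + 1)]) if n > 0 else sp.Integer(1)

def fibonomial(n, k):
    expr = sp.cancel(fib_factorial(n) / (fib_factorial(k) * fib_factorial(n - k)))
    return sp.expand(expr)

print(fib(5))            # s**4 + 3*s**2*t + t**2
print(fibonomial(6, 2))  # a polynomial in s and t with nonnegative coefficients
```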
In this work, all of our dihedral sieving polynomials will be given in terms of generalized Fibonacci polynomials and Fibonomial coefficients. In all further sections, we will use the generators z 1 ,´det of ReppI 2 pnq and say that a triple pX, P ps, tq, I 2 pnqq has dihedral sieving if the quadruple pX, P ps, tq, pz 1 ,´detq, I 2 pnqq does. 4. Dihedral action on k-subsets and k-multisubsets of t1, . . . , nu
We first recall some facts about cyclic sieving. For a rational representation ρ : GL_N(C) → GL(V), let χ_ρ(x_1, . . . , x_N) be the trace on V of any diagonalizable element of GL_N(C) having eigenvalues x_1, . . . , x_N. Let ρ : GL_N(C) → GL(V) be a representation. Assume V has a basis {v_x}_{x∈X} which is permuted by Z/nZ in the following way: c(v_x) = v_{c(x)} for all c ∈ Z/nZ, x ∈ X. Then, let X(q) be the principal specialization
X(q) = χ_ρ(1, q, . . . , q^{N−1}). Then, (X, X(q), C) exhibits the cyclic sieving phenomenon.
The above lemma can be used to prove cyclic sieving for the Z/nZ-action on size-k multisubsets of [n]. Specifically, we take V = V_λ, the irreducible representation of GL_n(C) with highest weight λ = (k) ⊢ k (the partition of k with one part), and the specialization of the character value becomes the q-analogue of the Weyl character formula, or the hook-content formula,

(4.2)   χ_ρ(1, q, . . . , q^{n−1}) = s_λ(1, q, . . . , q^{n−1}) = q^{b(λ)} \prod_{cells x ∈ λ} \frac{[n + c(x)]_q}{[h(x)]_q} = \binom{n+k−1}{k}_q,

where h(x) is the hook-length of λ at x (total number of cells weakly to the right of x or strictly below x), c(x) is the hook content j − i when cell x is in row i and column j, and b(λ) = Σ_i (i−1) λ_i.
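As a sanity check, the specialization (4.2) can be verified symbolically for the one-row shape λ = (k). The sketch below uses sympy with arbitrary small values n = 5 and k = 3; for a one-row shape b(λ) = 0, the cell in column j has content j − 1 and hook length k − j + 1.

```python
# Sketch checking the q-hook-content formula (4.2) for lambda = (k): the product
# over cells equals the q-binomial [n+k-1 choose k]_q.
import sympy as sp

q = sp.symbols('q')

def q_int(m):
    return sum(q**i for i in range(m))

def q_binomial(n, k):
    num = sp.prod([q_int(n - i) for i in range(k)])
    den = sp.prod([q_int(i + 1) for i in range(k)])
    return sp.cancel(num / den)

def hook_content_one_row(n, k):
    # cell in column j of the single row: content c = j - 1, hook length h = k - j + 1
    expr = sp.prod([q_int(n + (j - 1)) / q_int(k - j + 1) for j in range(1, k + 1)])
    return sp.cancel(expr)   # b(lambda) = 0 for a one-row shape

n, k = 5, 3
assert sp.expand(hook_content_one_row(n, k) - q_binomial(n + k - 1, k)) == 0
print("hook-content formula checked for lambda = (3), n = 5")
```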
We prove a generalization of this theorem which will help prove dihedral sieving for k-subsets and k-multisubsets. Given a highest-weight GL n pCq-representation V , consider its character specialization a n´2 b, . . . , ab n´2 , b n´1 qq as in Theorem 4.1 for some variables a, b. This is a symmetric polynomial in a n´1 , a n´2 b, . . . , ab n´2 , b n´1 and, thus also in a, b. This means the character specialization can be expressed as a polynomial in a`b, ab. If an action I 2 pnq ý rns is faithful, then I 2 pnq may be considered as a subgroup of permutation matrices of GL n pCq. Proposition 4.3. Let n be odd, let X be a finite set with |X| " n, and let V be a GL n pCq-representation. Considering I 2 pnq as a subgroup of GL n pCq through a faithful action I 2 pnq ý X, assume that V has a basis indexed by X which is permuted by I 2 pnq via gpv x q " v gpxq for all g P I 2 pnq and x P X. Let p be the unique polynomial in two variables such that ppa`b,´abq " χ V pdiagpa n´1 , a n´2 b, . . . , ab n´2 , b n´1 qq noting that the right-hand side is a symmetric function in a and b. Then pX, p, I 2 pnqq exhibits dihedral sieving.
χ V pdiagpa n´1 ,
Proof. Let C be a conjugacy class in I 2 pnq and X " s`?s 2`4 t 2 , Y " s´?s 2`4 t 2 where s " χ z1 pCq and t " χ´d et pCq. It is straightforward to check that the eigenvalues of any element in C are X n´1 , X n´2 Y, . . . , Y n´1 , and that X`Y " χ z1 pCq and XY "´χ´d et pCq. Thus
χ V pCq " χ V pX n´1 , X n´2 Y, . . . , XY n´2 , Y n´1 q " ppX`Y,´XY q " ppχ z1 pCq, χ´d et pCqq.
and V " ppz 1 ,´detq in ReppI 2 pnqq. Proposition 4.4. Suppose V is an I 2 pnq-representation and X is a finite I 2 pnq-set indexing a basis tv x : x P Xu of V which is permuted up to scalars, that is there is some one-dimensional representation ρ : G Ñ Cˆsuch that gpv x q " ρpgq k v gpxq for all g P I 2 pnq and x P X. Suppose further that Vp 1 pz 1 ,´detq and that p 1 " p 2 p with ρp 2 pz 1 ,´detq. Then pX, p, I 2 pnqq exhibits dihedral sieving.
Proof. We have ρCrXs " V " p 2 pz 1 ,´detqppz 1 ,´detq in ReppI 2 pnqq. Since ρ is invertible in ReppI 2 pnqq and ρ " p 2 pz 1 ,´detq, we can cancel it from both sides to get CrXs " ppz 1 ,´detq.
Corollary 4.5. Let n, X, V and p be as in Proposition 4.3, and suppose pps, tq " p´t k qups, tq and instead that the basis of V is permuted up to scalars, that is gpv x q " pdetpgqq k v gpxq . Then pX, ups, tq, I 2 pnqq exhibits dihedral sieving.
Proof. Proposition 4.3 shows that V " ppz 1 ,´detq in ReppI 2 pnqq. Proposition 4.4 then implies that pX, u, I 2 pnqq exhibits dihedral sieving since det k " p´tq k | t"´det . Remark 4.6. Instead of writing a symmetric polynomial in a and b as a polynomial in a`b and ab, we can equivalently make the substitution a " s`?s 2`4 t 2 and b " s´?s 2`4 t 2 , as in Proposition 3.5, which satisfies a`b " s and´ab " t. In particular, we have tnu s"a`b,t"´ab " a n´1`an´2 b`¨¨¨`ab n´2`bn´1 . Stating Proposition 4.3 as we have done makes it clear that the resulting expression is always a polynomial.
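A direct-enumeration check of this picture is easy to sketch. Assuming, as follows from Proposition 4.10 applied to λ = (1^k) together with Corollary 4.5, that the sieving polynomial for k-subsets of [n] (n odd) is the Fibonomial \binom{n}{k}_{s,t}, its evaluation at (s, t) = (χ_{z_1}(g), χ_{−det}(g)) should count the k-subsets fixed by g. The following sketch tests this for the arbitrary case n = 7, k = 3.

```python
# Sketch: brute-force check of dihedral sieving for k-subsets of [n], n odd,
# with the Fibonomial {n choose k}_{s,t} evaluated at s = chi_{z_1}(g),
# t = chi_{-det}(g) for each group element g.
from itertools import combinations
import sympy as sp

s, t = sp.symbols('s t')

def fib(n):
    a, b = sp.Integer(0), sp.Integer(1)
    for _ in range(n):
        a, b = b, sp.expand(s * b + t * a)
    return a

def fibonomial(n, k):
    num = sp.prod([fib(n - i) for i in range(k)])
    den = sp.prod([fib(i + 1) for i in range(k)])
    return sp.cancel(num / den)          # a polynomial in s, t

def fixed_k_subsets(n, k, perm):
    img = lambda S: frozenset(perm[x] for x in S)
    return sum(1 for S in map(frozenset, combinations(range(n), k)) if img(S) == S)

n, k = 7, 3
poly = fibonomial(n, k)
for l in range(n):
    rot = {i: (i + l) % n for i in range(n)}       # rotation r^l
    refl = {i: (l - i) % n for i in range(n)}      # a reflection
    rot_val = complex(poly.subs({s: 2 * sp.cos(2 * sp.pi * l / n), t: -1}))
    ref_val = complex(poly.subs({s: 0, t: 1}))
    assert abs(rot_val - fixed_k_subsets(n, k, rot)) < 1e-6
    assert abs(ref_val - fixed_k_subsets(n, k, refl)) < 1e-6
print("dihedral sieving verified for 3-subsets of [7]")
```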
Remark 4.7. When n is even, the eigenvalues of an element of I 2 pnq (again viewed as a subgroup of GL n pCq) have a more complicated description. In particular, a reflection with two fixed points has n{2´1 eigenvalues which are´1 and n{2`1 eigenvalues which are`1, so they cannot be put into a geometric sequence of the form X n´1 , X n´1 Y, . . . , X n´1 as done in the proof of Proposition 4.3. However, all reflections with no fixed points and rotations still have eigenvalues which can be written in this form. Proof. Let V " Sym k pC n q " V λ , the irreducible representation of GL n pCq of highest weight λ " pkq $ k. Let it be equipped with the usual basis of symmetric tensors indexed by k-multisubsets of rns. The action of I 2 pnq on this basis and on k-multisubsets of rns are the same, so we can apply Proposition 4.3.
The following lemma is useful for showing that dihedral sieving polynomials obtained from GL n pCq representations have certain product formulas, when the product formula is expressed as a rational function not known to be a polynomial.
Lemma 4.9. Let k be a field and suppose f P kps, tq is a rational function such that f pa`b,´abq is a polynomial. Then f is a polynomial.
Proof. It is clear that f pa`b,´abq is symmetric in a and b, and since it is a polynomial it is in the polynomial subring kra`b,´abs. Hence f is a polynomial.
The main example of the use of this lemma is to prove that the following analogue of the hook-content formula is a polynomial. Proposition 4.10. Let λ = (λ_1, . . . , λ_n) ⊢ k be a partition of k. Then

s_λ(a^{n−1}, a^{n−2}b, . . . , ab^{n−2}, b^{n−1}) = \left( (−t)^{b(λ)} \prod_{x ∈ λ} \frac{\{n + c(x)\}_{s,t}}{\{h(x)\}_{s,t}} \right) \Big|_{s=a+b,\ t=−ab},

where x ∈ λ runs over cells of λ, h(x) is the hook-length of x (total number of cells weakly to the right of x or strictly below x), c(x) is the hook content j − i when cell x is in row i and column j, and b(λ) = Σ_i (i−1) λ_i.
Proof. We have, using the q-hook content formula stated in Equation 4.2, s λ pa n´1 , a n´2 b, . . . , ab n´2 , b n´1 q " b pdeg s λ qpn´1q s λ p1, a{b, pa{bq 2 , . . . , pa{bq n´1 q " a bpλq b´b pλq`|λ|pn´1q ź xPλ rn`cpxqs q"a{b rhpxqs q"a{b " a bpλq b´b pλq`|λ|pn´1q ź xPλ b´p n`cpxq´1q tn`cpxqu s"a`b,t"´ab b´p hpxq´1q thpxqu s"a`b,t"´ab " a bpλq b´b pλq`|λ|pn´1q ź xPλ b´n´c pxqq tn`cpxqu s"a`b,t"´ab b´h pxq thpxqu s"a`b,t"´ab " a bpλq b´b pλq`|λ|pn´1q`ř xPλ hpxq´cpxq´n ź xPλ tn`cpxqu s"a`b,t"ab thpxqu s"a`b,t"´ab .
where we use the fact that rns q"a{b " b 1´n tnu s"a`b,t"´ab via Proposition 3.5 to establish the third equality.
For the exponent, we havé bpλq`|λ|pn´1q`ÿ xPλ hpxq´cpxq´n "´bpλq´|λ|`ÿ xPλ hpxq´ÿ xPλ cpxq "´bpλq´|λ|`bpλ 1 q`bpλq`|λ|´bpλ 1 q`bpλq " bpλq following from the identities ÿ xPλ cpxq " bpλ 1 q´bpλq ÿ xPλ hpxq " bpλ 1 q`bpλq`k where λ 1 is the conjugate of λ. Since pabq bpλq " p´tq bpλq | t"ab , the result follows.
The technique of applying Theorem 4.1 to prove cyclic sieving for k-subsets and k-multisubsets of rns immediately generalizes with the above proposition. of Λ k C n which is permuted up to scalars by the action of I 2 pnq, that is xpw i q " detpxq p k 2 q w xpiq for all x P I 2 pnq. We construct the basis as follows. First, group`r ns k˘i nto orbits under the action by the cyclic group C n . For each orbit we will choose a distinguished representative such that if A is distinguished then rpAq is also distinguished. Given a subset A " ti 1 , . . . , i k u with i 1 㨨¨ă i k , let v A " e i1^¨¨¨^ei k .
Since the reflection in I 2 pnq has order two, each orbit of`r ns k˘u nder the I 2 pnq action is a union of either two or one orbits of the C n action. Suppose we have two C n -orbits whose union is a single I 2 pnq-orbit. Then choose an arbitrary element A of one orbit to be distinguished, and take B " spAq to be the distinguished element of the other orbit. Form the set
tv A , cpv A q, . . . , c j´1 pv A q, p´1q p k 2 q v B , p´1q p k 2 q cpv B q, . . . , p´1q p k 2 q c j´1 pv B qu Ď^kC n
where the orbits each contain j subsets. These vectors are linearly independent and permuted up to scalars by all elements of tr 0 , . . . , r n´1 , su since this holds for v A and because of the relation sr i " r´is. These elements also generate I 2 pnq, so in fact all elements of I 2 pnq permute these vectors up to the desired scalars.
In the case where we have a C n -orbit which is also an I 2 pnq-orbit, for a given A in the orbit we have spAq " r j pAq for some j, so r´jpspAqq " A. Now take the subset tv A , cpv A q, . . . , c n´1 pv A qu. Then r´jpspv A qq " e i k^¨¨¨^e i1 " p´1q p k 2 q pe i1^¨¨¨^ei k q " p´1q p k 2 q v A . We have the relation pr´j sqr i " r´ipr´j sq analogous to the one above, so this set is also permuted up to scalars. Taking the union of these subsets gives the desired basis of Λ k C n .
Remark 4.12. Dihedral sieving for k-subsets and k-multisubsets of [n] can also be proven using direct enumeration.

5. Dihedral Action on Non-Crossing Partitions of {1, . . . , n}

There is an action by I_2(n) on the non-crossing partitions of [n] and also on the non-crossing partitions with a fixed number of parts. The character values for the corresponding representations of both actions were studied by Ding in 2016 in [4]. We show, for odd n, these actions are both instances of dihedral sieving using the natural (s, t)-analogue of the Catalan number as the generating polynomial.

Proof. Again, let ξ_n be the primitive n-th root of unity e^{2πi/n} and let C be a conjugacy class of I_2(n). First, we compute the character values of C[X]. We have that C_n(ξ_n^ℓ) counts the fixed points of the action of {r^ℓ, r^{n−ℓ}} by cyclic sieving. Next, we have that \binom{n}{⌊n/2⌋}, or C_n(−1), is the number of fixed points of {s, sr, sr^2, . . .} by Theorem 2.1.5 of [4].
Using the same notation as before, we consider the generalized Catalan number/sequence in Section 5 of [1],

C_{\{n\}_{s,t}} := \frac{1}{\{n+1\}_{s,t}} \binom{2n}{n}_{s,t},

where we use the specialization (s, t) = (z_1, −det). Then, we use the fact that

\{n+1\}_{s,t} = Y^n [n+1]_{X/Y} = [n+1]_{ξ_n^ℓ} if C = {r^ℓ, r^{n−ℓ}},   and   [n+1]_{ξ_2} if C = {s, sr, sr^2, . . .},

to get

C_{\{n\}}(z_1(C), −det(C)) = C_n(q)|_{q=ξ_n^ℓ} if C = {r^ℓ, r^{n−ℓ}},   and   C_n(q)|_{q=ξ_2} if C = {s, sr, sr^2, . . .},

where C_{\{n\}} is well known to be a polynomial in s, t with integral coefficients. The claim follows.

Proof (of Proposition 5.2). First, we claim the character values of C[X] are N(n, k; ξ_n^ℓ) for the conjugacy class {r^ℓ, r^{n−ℓ}} and N(n, k; −1) for the conjugacy class {s, sr, sr^2, . . .}, where N(n, k) = |X| is the Narayana number and
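The polynomiality of the (s, t)-Catalan number can be illustrated with a short sympy computation (exact symbolic division; the small examples below are arbitrary).

```python
# Sketch: the (s,t)-analogue of the Catalan number,
# C_{n}(s,t) = {2n choose n}_{s,t} / {n+1}_{s,t}, computed symbolically; the
# exact cancellation illustrates that it is a polynomial in s and t.
import sympy as sp

s, t = sp.symbols('s t')

def fib(n):
    a, b = sp.Integer(0), sp.Integer(1)
    for _ in range(n):
        a, b = b, sp.expand(s * b + t * a)
    return a

def fib_factorial(n):
    return sp.prod([fib(i) for i in range(1, n + 1)]) if n > 0 else sp.Integer(1)

def st_catalan(n):
    expr = fib_factorial(2 * n) / (fib_factorial(n) * fib_factorial(n + 1))
    return sp.expand(sp.cancel(expr))

print(st_catalan(3))                       # a polynomial in s and t
print(st_catalan(3).subs({s: 2, t: -1}))   # 5, the classical Catalan number C_3
print(st_catalan(3).subs({s: 1, t: 1}))    # 20, the Fibonacci specialization
```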
(5.3)   N(n, k; q) := \frac{1}{[n]_q} \binom{n}{k}_q \binom{n}{k+1}_q \, q^{k(k+1)},
the q-analogue of the Narayana number. In fact, both of these are already known. The character values of rotation conjugacy classes follow from Theorem 7.2 of [8]. For the case of reflections, the character values are given by N pn, k;´1q by Theorem 3.2.7 of [4]. Now, consider the ps, tq-analogue of the Narayana number. This is a polynomial in s, t for the same reasons N pn, k; qq is a polynomial in q:
\frac{1}{\{n\}_{s,t}} \binom{n}{k}_{s,t} \binom{n}{k+1}_{s,t} = \frac{1}{\{n−k\}_{s,t}} \binom{n}{k+1}_{s,t} \binom{n−1}{k}_{s,t} = \frac{1}{\{k+1\}_{s,t}} \binom{n}{k}_{s,t} \binom{n−1}{k}_{s,t} = \binom{n−1}{k}_{s,t} \binom{n+1}{k+1}_{s,t} − \binom{n}{k}_{s,t} \binom{n}{k+1}_{s,t}.
Table 1. Character values for various I_2(n) actions.

For n odd (conjugacy classes: e | {r^ℓ, r^{n−ℓ}} | {sr, sr^2, sr^3, . . .}):
  1 (trivial): 1 | 1 | 1
  det: 1 | 1 | −1
  χ_m: 2 | 2cos(2πmℓ/n) | 0
  χ_{k-subsets}: \binom{n}{k} | \binom{n}{k}_q|_{q=ξ_n^ℓ} | \binom{n}{k}_q|_{q=ξ_2}
  χ_{NC(n)}: C_n | C_n(q)|_{q=ξ_n^ℓ} | C_n(q)|_{q=ξ_2}
  χ_{triangulations}: C_{n−2} | C_{n−2}(q)|_{q=ξ_n^ℓ} | \frac{2}{n−1} C_{n−3}(q)|_{q=ξ_2}
  χ_{NC(n,k)}: N(n,k) | N(n,k; ξ_n^ℓ) | N(n,k; ξ_2)
  χ_{k-multisubsets}: \binom{n+k−1}{k} | \binom{n+k−1}{k}_q|_{q=ξ_n^ℓ} | \binom{n+k−1}{k}_q|_{q=ξ_2}

For n even (conjugacy classes: e | {r^ℓ, r^{n−ℓ}} | {sr, sr^3, sr^5, . . .} | {s, sr^2, sr^4, . . .}):
  1 (trivial): 1 | 1 | 1 | 1
  det: 1 | 1 | −1 | −1
  χ_b (⟨r^2, s⟩ ↦ 1, r ↦ −1): 1 | (−1)^ℓ | −1 | 1
  χ_b · det: 1 | (−1)^ℓ | 1 | −1
  χ_m: 2 | 2cos(2πmℓ/n) | 0 | 0
  χ_{k-subsets}: \binom{n}{k} | \binom{n}{k}_q|_{q=ξ_n^ℓ} | \binom{n}{k}_q|_{q=ξ_2} | \binom{n}{k}_q|_{q=ξ_2} + 2\binom{n−2}{k−1}_q|_{q=ξ_2} + \binom{n−2}{k−2}_q|_{q=ξ_2}
  χ_{NC(n)}: C_n | C_n(q)|_{q=ξ_n^ℓ} | C_n(q)|_{q=ξ_2} | C_n(q)|_{q=ξ_2}
  χ_{triangulations}: C_{n−2} | C_{n−2}(q)|_{q=ξ_n^ℓ} | 0 | \frac{4}{n} C_{n−2}(q)|_{q=ξ_2}
  χ_{NC(n,k)}: N(n,k) | N(n,k; ξ_n^ℓ) | N(n,k; ξ_2) | N(n,k; ξ_2)
  χ_{k-multisubsets}: \binom{n+k−1}{k} | \binom{n+k−1}{k}_q|_{q=ξ_n^ℓ} | \binom{n+k−1}{k}_q|_{q=ξ_2} | \binom{n+k−2}{k}_q|_{q=ξ_2} + 2\binom{n+k−3}{k−1}_q|_{q=ξ_2} + \binom{n+k−3}{k−2}_q|_{q=ξ_2}
Next, the ps, tq-analogue of the Narayana number becomes, by Proposition 3.5,
\frac{1}{\{n\}_{s,t}} \binom{n}{k}_{s,t} \binom{n}{k+1}_{s,t} = Y^{(1−n)+k(n−k)+(k+1)(n−k−1)} N(n, k; X/Y),
where the exponent of Y simplifies to 2k(n−k) − 2k, and if Y = ξ_n^{−ℓ}, then n | k·gcd(n, ℓ) implies n | ℓk, so that the power of Y goes to 1 when the q-binomial \binom{n}{k}_q|_{q=X/Y} is non-zero. Thus, the claim follows.
6. Further Questions 6.1. The case of even n. Each of our proofs of instances of I 2 pnq-sieving specifically relied on n being odd. In the case of n even, empirical evidence seems to suggest any possible instances of dihedral sieving for k-subsets, k-multisubsets, or noncrossing partitions will not be given by taking an obvious ps, tq-analogue polynomial when specializing to generators of the representation ring of I 2 pnq. This is especially true since the representation ring is different for n odd and even. Nonetheless, the character values for each of these group actions can be shown, in a similar manner to the odd case, to be Z-linear combinations of q-analogues (q-binomials for the k-subsets and k-multisubsets actions and q-Catalan numbers for the noncrossing partitions action) evaluated at certain values, as shown in Table 6.1. We expect that exhibiting dihedral sieving for even n is possible but more difficult, especially for k-subsets and k-multisubsets using product formulas, and offer some evidence as to why. Consider the identity
Sym^k(X ⊕ Y) ≅ ⊕_{i=0}^{k} Sym^i(X) ⊗ Sym^{k−i}(Y).
Noting that Rep(SO(2)) ≅ Z[q, q^{−1}] and Sym^k(q^m) = q^{km}, an inductive argument shows that Sym^k(1 + q + ⋯ + q^{n−1}) = \binom{n+k−1}{k}_q in Rep(SO(2)). The identity suggests that we should consider Rep(O(2)) to study dihedral sieving, but this approach fails since the representation ring of O(2) behaves more similarly to the case when n is odd. In particular, all reflections in O(2) are conjugate and the only one-dimensional irreducible representations of O(2) are the trivial and determinant representations. Moreover, Fibonomial coefficients do not seem to describe symmetric powers of O(2) representations in the same way as q-analogues do for SO(2) representations, although there may be some alternative generalization of q-binomial coefficients which do.
Interestingly, the same discrepancy between even and odd n occurs when we consider real representation rings of cyclic groups, i.e. the proper subring of ReppC n q generated by all representations of the form C b V where V is a representation of C n over R. The real irreducible representations of C n are all restrictions of irreducible representations of I 2 pnq, and in particular there is an additional one-dimensional irreducible real representation when n is even, while for n odd and SOp2q the only one-dimensional irreducible real representation is the trivial one. Thus it may useful to exhibit cases of cyclic sieving using generators of the real representation ring (regarded as a proper subring of the complex representation ring) before attempting dihedral sieving for even n. Note that since the irreducible real representations of C n are restrictions of I 2 pnq irreducibles, dihedral sieving for even n would directly exhibit cases of cyclic sieving using generators of the real representation ring. 6.2. Further instances of dihedral sieving. Another curious example is the dihedral group action on triangulations of a regular n-gon ((4) of Theorem 1.5). The character values for this action can be shown to be evaluations of q-Catalan numbers at roots of unity, like the case of the dihedral action on noncrossing partitions. However, even in the n odd case, they do not seem to arise from an ps, tq-Catalan dihedral sieving polynomial. The corresponding character values can be found in Table 6.1.
Fact 2.3. The representation ring Rep(G) is a free abelian group with a basis given by isomorphism classes of irreducible representations.

Let (X, X(q), C) be a triple where C acts on X and X(q) ∈ N[q]. This triple has cyclic sieving if and only if C[X] = X(ω) in Rep(C; Z).

is a polynomial in s and t with nonnegative integer coefficients.

Proposition 4.8. Let n be odd and X = (([n] over k)), the size-k multisubsets of [n].

Proposition 4.11. Let n be odd and let X = \binom{[n]}{k}.

Proposition 5.1. Let n be odd and X = {non-crossing partitions of [n]}.

Proposition 5.2. Let n be odd and X = {non-crossing partitions of [n] with n−k blocks}. Then the triple
Acknowledgments. This research was carried out as part of the 2017 summer REU program at the School of Mathematics, University of Minnesota, Twin Cities, and was supported by NSF RTG grant DMS-1148634. The authors would like to thank Victor Reiner, Pavlo Pylyavskyy, and Benjamin Strasser for their mentorship and support.
T. Amdeberhan, X. Chen, V. Moll, and B. Sagan, Generalized Fibonacci polynomials and Fibonomial coefficients, Ann. Comb. 18 (2013), 541-562.
H. Barcelo, V. Reiner, and D. Stanton, Bimahonian distributions, J. London Math. Soc. 2 (2008), 627-646.
A. Berget, S. Eu, and V. Reiner, Constructions for cyclic sieving phenomena, SIAM J. Discrete Math. 25 (2011), 1297-1314.
Z. Ding, Dihedral symmetries of non-crossing partition lattices, Ph.D. thesis, University of Miami (2016).
C. Gaetz, Critical groups of McKay-Cartan matrices, undergraduate honors thesis, University of Minnesota (2016).
J. Haglund, The q,t-Catalan numbers and the space of diagonal harmonics, AMS University Lecture Series (2008).
V. Hoggatt Jr. and C. Long, Divisibility properties of generalized Fibonacci polynomials, Fibonacci Quarterly 12 (1974), 113-120.
V. Reiner, D. Stanton, and D. White, The cyclic sieving phenomenon, J. Combin. Theory Ser. A 108 (2004), 17-50.
B. Sagan, The cyclic sieving phenomenon: a survey, London Math. Soc. Lecture Note Ser. 392 (2011).
B. Sagan and C. Savage, Combinatorial interpretations of binomial coefficient analogues related to Lucas sequences, Integers 10 (2010), 697-703.
| []
|
[
"ACCEPTED BY APJ ON THE DISAPPEARANCE OF BROAD-LINE REGION IN LOW-LUMINOSITY ACTIVE GALACTIC NUCLEI: THE ROLE OF THE OUTFLOWS FROM ADVECTION DOMINATED ACCRETION FLOWS",
"ACCEPTED BY APJ ON THE DISAPPEARANCE OF BROAD-LINE REGION IN LOW-LUMINOSITY ACTIVE GALACTIC NUCLEI: THE ROLE OF THE OUTFLOWS FROM ADVECTION DOMINATED ACCRETION FLOWS"
]
| [
"Xinwu Cao "
]
| []
| []
| The broad-line region (BLR) disappears in many low-luminosity active galactic nuclei (AGNs), the reason of which is still controversial. The BLRs in AGNs are believed to be associated with the outflows from the accretion disks. Most of the low-luminosity AGNs (LLAGNs) contain advection dominated accretion flows (ADAFs), which are very hot and have a positive Bernoulli parameter. ADAFs are therefore associated with strong outflows. We estimate the cooling of the outflows from the ADAFs, and find that the gases in such hot outflows always cannot be cooled efficiently by bremsstrahlung radiation. The ADAF may co-exist with the standard disk, i.e., the inner ADAF connects to the outer thin accretion disk at radius R d,tr , in the sources accreting at slightly lower than the critical rateṁ crit (ṁ =Ṁ/Ṁ Edd ). For the ADAFs with L bol /L Edd 0.001, a secondary small inner cold disk is suggested to co-exist with the ADAF due to the condensation process. We estimate the Compton cooling of the outflow, of which the soft seed photons either come from the outer cold disk or the secondary inner cold disk. It is found that the gas in the outflow far from the ADAF may be efficiently cooled to form BLR clouds due to the soft seed photons emitted from the cold disks, provided the transition radius of the ADAF to the outer cold disk is small [r d,tr = R d,tr /(2GM/c 2 ) 20] or/and the secondary small cold disk has a luminosity L sd 0.003L Edd . The BLR clouds can still be formed in the outflows from the outer cold thin disks, if the transition radius r tr is not very large. For the sources with L bol /L Edd 0.001, the inner small cold disk is evaporated completely in the ADAF and outer thin accretion disk may be suppressed by the ADAF, which leads to the disappearance of the BLR. The physical implications of this scenario on the double-peaked broad-line emitters are also discussed. | 10.1088/0004-637x/724/2/855 | [
"https://arxiv.org/pdf/1009.5043v1.pdf"
]
| 119,207,937 | 1009.5043 | 2d439b7873d9051b968c6d52014ea0e7823c6d28 |
ACCEPTED BY APJ ON THE DISAPPEARANCE OF BROAD-LINE REGION IN LOW-LUMINOSITY ACTIVE GALACTIC NUCLEI: THE ROLE OF THE OUTFLOWS FROM ADVECTION DOMINATED ACCRETION FLOWS
26 Sep 2010
Xinwu Cao
ACCEPTED BY APJ ON THE DISAPPEARANCE OF BROAD-LINE REGION IN LOW-LUMINOSITY ACTIVE GALACTIC NUCLEI: THE ROLE OF THE OUTFLOWS FROM ADVECTION DOMINATED ACCRETION FLOWS
26 Sep 2010accepted by ApJarXiv:1009.5043v1 [astro-ph.HE] Preprint typeset using L A T E X style emulateapj v. 11/10/09Subject headings: accretion, accretion disks-galaxies: active-quasars: emission lines
The broad-line region (BLR) disappears in many low-luminosity active galactic nuclei (AGNs), the reason of which is still controversial. The BLRs in AGNs are believed to be associated with the outflows from the accretion disks. Most of the low-luminosity AGNs (LLAGNs) contain advection dominated accretion flows (ADAFs), which are very hot and have a positive Bernoulli parameter. ADAFs are therefore associated with strong outflows. We estimate the cooling of the outflows from the ADAFs, and find that the gases in such hot outflows always cannot be cooled efficiently by bremsstrahlung radiation. The ADAF may co-exist with the standard disk, i.e., the inner ADAF connects to the outer thin accretion disk at radius R d,tr , in the sources accreting at slightly lower than the critical rateṁ crit (ṁ =Ṁ/Ṁ Edd ). For the ADAFs with L bol /L Edd 0.001, a secondary small inner cold disk is suggested to co-exist with the ADAF due to the condensation process. We estimate the Compton cooling of the outflow, of which the soft seed photons either come from the outer cold disk or the secondary inner cold disk. It is found that the gas in the outflow far from the ADAF may be efficiently cooled to form BLR clouds due to the soft seed photons emitted from the cold disks, provided the transition radius of the ADAF to the outer cold disk is small [r d,tr = R d,tr /(2GM/c 2 ) 20] or/and the secondary small cold disk has a luminosity L sd 0.003L Edd . The BLR clouds can still be formed in the outflows from the outer cold thin disks, if the transition radius r tr is not very large. For the sources with L bol /L Edd 0.001, the inner small cold disk is evaporated completely in the ADAF and outer thin accretion disk may be suppressed by the ADAF, which leads to the disappearance of the BLR. The physical implications of this scenario on the double-peaked broad-line emitters are also discussed.
INTRODUCTION
Active galactic nuclei (AGNs) are classified as type 1 and 2 AGNs by their line emission. Type 1 AGNs show broad emission lines and narrow forbidden lines, while only narrow lines are observed in type 2 AGNs. According to the unification scheme of AGNs, all AGNs are intrinsically same, but are viewed at different orientations (e.g., Antonucci 1993). The broad-line regions (BLRs) in type 2 AGNs are obscured by the dusty tori, as they are supposed to be viewed at large angles with respect to the axes of the tori. However, there is evidence that the BLR disappears in many low-luminosity active galactic nuclei (LLAGNs) (e.g., Tran 2001Tran , 2003Gu & Huang 2002), and most of the type 1 AGNs have relatively high Eddington ratios (e.g., Trump et al. 2009). These low-luminosity sources are named as "true" type 2 AGNs, which do not have hidden BLRs (see Ho 2008, for a review and references therein). Many workers have explored why the BLR disappears in LLAGNs (e.g., Nicastro et al. 2003;Laor 2003;Elitzur & Shlosman 2006;Elitzur & Ho 2009). Laor (2003) suggested that an upper limit on the observed width of broad emission lines leads to a lower limit on the radius of the BLR based on the empirical correlation between BLR size and optical continuum luminosity (Kaspi et al. 2000). In this scenario, the BLR radius shrinks below a critical value for LLAGNs, which leads to the disappearance of BLR in these sources. Although the origin of BLR is still unclear, an attractive suggestion is that the BLR structure is associated with the outflow from the accretion disk (Emmering et al. 1992). Nicastro (2000) assumed that the winds from the accretion disk are triggered by the thermal instability of radiation pressure dominated region of the disk (Shakura & Sunyaev 1976). The transition radius between the radiation pressure dominated and gas pressure dominated regions in the disk increases with the dimensionless mass accretion rateṁ (Shakura & Sunyaev 1973). In this scenario, the transition radius becomes smaller than the marginal stable orbit of the black hole for low accretion rates (low luminosities), and the winds are switched off and no BLR can be formed in LLAGNs (Nicastro et al. 2003). A correlation between the width of BLR and the luminosity is expected in this model, which is consistent with the observations of AGN samples (Warner et al. 2004;Xu & Cao 2007). An alternative disk-wind scenario was suggested for the BLR and dust torus, in which both the BLR and torus disappear when the bolometric luminosity is low (Elitzur & Shlosman 2006;Elitzur & Ho 2009). The outflow from the accretion disk being switched off is a key ingredient in these scenarios when accretion rates are low, though the detailed physics of the outflow dynamics has not been included in these works.
A low mass accretion rate ṁ may lead the accretion flow to become advection dominated (Narayan & Yi 1994, 1995b). Advection-dominated accretion flows (ADAFs) are suggested to be present in LLAGNs (see Narayan 2002, for a review and references therein), and they can successfully explain most observational features of LLAGNs (e.g., Lasota et al. 1996; Gammie et al. 1999; Quataert et al. 1999; Xu & Cao 2009). It was suggested that the ADAF co-exists with the standard disk, i.e., the inner ADAF connects to the outer thin accretion disk, in some sources accreting at rates slightly lower than the critical rate ṁ_crit (e.g., Esin et al. 1997; Quataert et al. 1999). For even lower accretion rates, a secondary small cold accretion disk is suggested to co-exist with the ADAF in the inner region due to the condensation process (Różańska & Czerny 2000). This model was extensively explored by many different authors (e.g., Meyer et al. 2007; Liu et al. 2007; Mayer & Pringle 2007; Taam et al. 2008), and it can explain the soft X-ray thermal component observed in some X-ray binaries (Tomsick et al. 2008; Miller et al. 2006). Czerny et al. (2004) assumed that the existence of the BLR is related to the cold accretion disk; they compared different theoretical model predictions with the observations of AGNs, which favors the disappearance of the BLR being related to the different accretion mode in LLAGNs. Based on the disk evaporation model, a lower limit on the accretion rate is also derived for the existence of BLRs under the same assumption that the BLR is associated with a cold accretion disk (Liu & Taam 2009).
It is well known that the gas in ADAFs is very hot and has a positive Bernoulli parameter, which implies that ADAFs should be associated with strong winds (Narayan & Yi 1994, 1995a; Blandford & Begelman 1999; Stone et al. 1999; Igumenshchev et al. 2000; Stone & Pringle 2001). If this is the case, one expects strong outflows from the ADAFs in LLAGNs, which implies that the assumption of the disk winds being suppressed at low accretion rates in the previous scenarios for the disappearance of the BLR in LLAGNs is not valid (Nicastro et al. 2003; Elitzur & Ho 2009). In this work, we explore the relation between the hot outflows from ADAFs and the disappearance of the BLR in LLAGNs.
COOLING OF THE OUTFLOWS FROM ADAFS IN LLAGNS
In this work, we assume that the outflow has a conical geometry, so the density of the outflow from an ADAF is

ρ(R) = Ṁ_w / [f_w R^2 v(R)],    (1)
where Ṁ_w is the mass-loss rate in the outflow, v(R) is the radial velocity of the outflow at radius R, and f_w is the solid angle of the conical outflow (f_w = 4π for an isotropic outflow). The mass-loss rate Ṁ_w in the outflow is related to the mass accretion rate Ṁ of the disk by

Ṁ_w = η_w Ṁ,    (2)
where η_w is a free parameter required to be less than unity. For an outflow driven by the internal energy of the hot gas in the ADAF, its velocity is

v(R) ∼ (GM/R)^{1/2},    (3)
where M is the mass of the black hole. In principle, the velocity can be higher than this value, which is the minimal velocity with which the outflow can escape to infinity. Substituting Eqs. (2) and (3) into Eq. (1), we have

ρ(r) = 7.51 × 10^{-4} η_w f_w^{-1} ṁ m^{-1} r^{-3/2} g cm^{-3},    (4)
where the dimensionless quantities are defined as

m = M/M_⊙,   r = R/(2GM/c^2),   ṁ = Ṁ/Ṁ_Edd,    (5)

and Ṁ_Edd = 1.39 × 10^{18} m g s^{-1}.
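To make the scaling of Eq. (4) easy to evaluate, the short Python sketch below (ours, not part of the original text) computes the outflow density; the function name and the default parameter values (m, ṁ, η_w, f_w) are illustrative assumptions.

```python
# Illustrative evaluation of Eq. (4): rho(r) = 7.51e-4 eta_w f_w^-1 mdot m^-1 r^-3/2 g cm^-3.
import numpy as np

def outflow_density(r, m=1.0e8, mdot=0.01, eta_w=1.0, f_w=4.0 * np.pi):
    """Outflow density at dimensionless radius r = R / (2GM/c^2), in g cm^-3."""
    return 7.51e-4 * eta_w * mdot / (f_w * m * r**1.5)

if __name__ == "__main__":
    for r in (10.0, 1.0e3, 1.0e5):
        print(f"r = {r:8.0f}  rho = {outflow_density(r):.3e} g cm^-3")
```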
The temperature of the gas in an ADAF is nearly virialized (Narayan & Yi 1995a), and we assume the gas in the outflow to be virialized as well,

T_gas(R) ∼ T_vir(R) = GM m_p / (kR).    (6)
The internal energy per unit volume of the gas in the outflow is

U = (3/2) p_gas = 3ρkT_i/(2μ_i m_p) + 3ρkT_e/(2μ_e m_p),    (7)
where the effective molecular weights of the ions and electrons are μ_i = 1.23 and μ_e = 1.14, respectively. As the ion temperature is significantly higher than the electron temperature in the inner region of the ADAF and most of the internal energy is stored in the ions, the electron temperature T_e ≤ T_i is required in the outflow. The electron temperature T_e is mainly determined by the radiative cooling and by the Coulomb interaction between the electrons and ions. In this work, we assume T_e = ξ_e T_gas (ξ_e ≤ 1) in our estimates of the cooling of the outflow. Thus, the bremsstrahlung cooling timescale of the gas in the outflow can be estimated as

τ^brem_cool ∼ U / F^-_brem,    (8)
where the bremsstrahlung cooling rate per unit volume of the gas is (Rybicki & Lightman 1986)

F^-_brem = 2.36 × 10^{-27} n_e^2 T_e^{1/2} erg s^{-1} cm^{-3}.    (9)
Substituting Eqs. (6), (7) and (9) into Eq. (8), the bremsstrahlung cooling timescale of the gas in the outflow is

τ^brem_cool(r) ∼ U / F^-_brem = 1.00 × 10^{-3} f_w η_w^{-1} ξ_e^{-1/2} m ṁ^{-1} r s.    (10)

The cooling length scale of the outflow is therefore estimated by

l_cool(r) = τ_cool v = 2.12 × 10^7 f_w η_w^{-1} ξ_e^{-1/2} m ṁ^{-1} r^{1/2} cm.    (11)
An ADAF requires the mass accretion rate ṁ to be lower than a critical value ṁ_crit. The critical rate ṁ_crit ≃ 0.01 is suggested either by the observations or by theoretical models (see Narayan 2002, for a review and references therein). The lower limit on l_cool(r) is

l^min_cool(r) = τ_cool v = 2.12 × 10^9 f_w η_w^{-1} m r^{1/2} cm,    (12)
if ṁ = ṁ_crit = 0.01 and ξ_e = 1 are substituted into Eq. (11), i.e., if the electrons and ions have the same temperature in the outflow. The electron temperature should be significantly lower than the ion temperature at the base of the outflow, because it comes from a two-temperature ADAF (Narayan & Yi 1995b). As the cooling rate increases with electron temperature T_e, the estimate performed with T_e = T_i gives the minimal cooling timescale (see Eq. 11). Comparing the cooling length scale with the radius R, we have

l^min_cool(R)/R = 7.18 × 10^3 f_w η_w^{-1} r^{-1/2}.    (13)
The radiative cooling of the gas in the outflow is inefficient if l^min_cool(R) > R, which leads to

r < 5.15 × 10^7 f_w^2 η_w^{-2}.    (14)
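The bremsstrahlung estimates of Eqs. (10), (11) and (14) can be checked numerically with the following sketch (ours; the default parameters are illustrative, not values adopted in the paper).

```python
# Bremsstrahlung cooling estimates, Eqs. (10), (11) and (14).
import numpy as np

def tau_brem(r, m=1.0e8, mdot=0.01, eta_w=1.0, f_w=4.0 * np.pi, xi_e=1.0):
    """Bremsstrahlung cooling timescale of the outflow gas, Eq. (10), in seconds."""
    return 1.00e-3 * f_w / eta_w / np.sqrt(xi_e) * m / mdot * r

def l_cool(r, m=1.0e8, mdot=0.01, eta_w=1.0, f_w=4.0 * np.pi, xi_e=1.0):
    """Cooling length scale of the outflow, Eq. (11), in cm."""
    return 2.12e7 * f_w / eta_w / np.sqrt(xi_e) * m / mdot * np.sqrt(r)

def r_inefficient(eta_w=1.0, f_w=4.0 * np.pi):
    """Dimensionless radius below which bremsstrahlung cooling is inefficient, Eq. (14)."""
    return 5.15e7 * f_w**2 / eta_w**2

if __name__ == "__main__":
    m, r = 1.0e8, 100.0
    R_cm = r * 2.95e5 * m                       # physical radius R = r * 2GM/c^2
    print(f"l_cool / R at r = {r:.0f}: {l_cool(r, m=m) / R_cm:.2e}")
    print(f"cooling is inefficient for r < {r_inefficient():.2e}")
```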
The reverberation-mapping method (Netzer & Peterson 1997; Peterson 1993) has been applied to measure the size of the BLR from the time delay between the line and continuum variations. Correlations between the optical luminosity and the BLR size were derived by different authors (e.g., Kaspi et al. 2000; Bentz et al. 2006). Subtracting the contribution of the host-galaxy starlight from the AGN emission, Bentz et al. (2006) found that

log R_BLR = −21.69 + 0.518 log L_bol,    (15)

where L_bol ≃ 9λL_λ(5100) is used (Kaspi et al. 2000). This is consistent with R_BLR ∝ L_bol^{0.5}, as expected from the photoionization model if all BLRs have similar physical properties. The distances from the black hole within which the outflows are radiatively cooled inefficiently (see Eq. 14) are compared with the BLR sizes of broad-line AGNs in Fig. 1. We find that the radiative cooling is always unimportant except in regions far beyond the BLR, which implies that the hot outflow from an ADAF cannot be cooled to form BLR clouds.

FIG. 1.— The distance from the black hole in the outflow (solid lines), beyond which the outflow is bremsstrahlung cooled efficiently, as a function of f_w with η_w = 1 (see Eq. 14). For comparison, the BLR size estimated from the bolometric luminosity with the empirical correlation of Bentz et al. (2006) is also plotted (dashed lines).
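As a numerical companion to Fig. 1 (again our own sketch, not the authors' code), one can compare the physical radius corresponding to Eq. (14) with the empirical BLR size of Eq. (15); the black hole mass and Eddington ratio used below are arbitrary illustrative choices.

```python
# Compare the radius inside which bremsstrahlung cooling is inefficient (Eq. 14)
# with the empirical BLR size of Eq. (15).
import numpy as np

def r_blr_cm(L_bol):
    """BLR radius from Eq. (15); L_bol in erg/s, result converted from light days to cm."""
    log_r_lt_days = -21.69 + 0.518 * np.log10(L_bol)
    return 10**log_r_lt_days * 2.59e15           # 1 light day = 2.59e15 cm

def r_cool_cm(m, eta_w=1.0, f_w=4.0 * np.pi):
    """Physical radius corresponding to Eq. (14), R = r * 2GM/c^2 in cm."""
    return 5.15e7 * f_w**2 / eta_w**2 * 2.95e5 * m

if __name__ == "__main__":
    m, edd_ratio = 1.0e8, 0.01
    L_bol = edd_ratio * 1.26e38 * m              # Eddington luminosity ~ 1.26e38 m erg/s
    print(f"R_BLR  ~ {r_blr_cm(L_bol):.2e} cm")
    print(f"R_cool ~ {r_cool_cm(m):.2e} cm   (cooling inefficient well beyond the BLR)")
```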
When the accretion rate is slightly lower than the critical value ṁ_crit, an ADAF is present near the black hole, and it may connect to the outer standard disk at a transition radius R_d,tr. In this case, the soft photons emitted from the outer cold disk are Compton upscattered by the hot electrons in the outflow, and the plasma in the outflow is thereby cooled. The flux due to viscous dissipation in the outer region of the disk is

F_vis(R_d) ≃ 3GMṀ / (8πR_d^3).    (16)
The irradiation of the outer cold disk by the inner ADAF is almost negligible compared with the viscous dissipation in the outer cold disk, because the solid angle of the outer disk subtended to the inner region of the ADAF is small (Cao & Wang 2006). We neglect this effect in estimating the Compton cooling of the outflow. The cooling rate per unit volume of the gas at radius R in the outflow is

F^-_Comp ≃ ∫_{R_d,tr}^{R_d,out} [4kT_e/(m_e c^2)] F_vis(R_d) [R / (π(R^2 + R_d^2)^{3/2})] n_e σ_T 2πR_d dR_d,    (17)
where n_e is the number density of the electrons in the outflow at R, and σ_T is the Thomson cross-section of the electron. Using Eqs. (5) and (6), we rewrite Eq. (17) in terms of the dimensionless radii, which gives

F^-_Comp ∝ ∫_{r_d,tr}^{r_d,out} dr_d / [r_d^2 (r^2 + r_d^2)^{3/2}],    (18)
where r_d = R_d/(2GM/c^2). The Compton cooling timescale for the outflow is then

τ^Comp_cool(r) ∼ U / F^-_Comp = 5.20 × 10^{-10} ξ_e^{-1} r^{-1} m ṁ^{-1} [∫_{r_d,tr}^{r_d,out} dr_d / (r_d^2 (r^2 + r_d^2)^{3/2})]^{-1} s,    (19)
and the dynamical timescale of the outflow can be estimated as

τ_dyn ∼ R/v = R^{3/2}/(GM)^{1/2} = 1.39 × 10^{-5} m r^{3/2} s.    (20)
The importance of the Compton cooling of the gas in the outflow can be evaluated with the ratio of Eqs. (19) and (20),

τ^Comp_cool / τ_dyn = 3.74 × 10^{-5} ξ_e^{-1} ṁ^{-1} r^{-5/2} [∫_{r_d,tr}^{r_d,out} dr_d / (r_d^2 (r^2 + r_d^2)^{3/2})]^{-1}.    (21)

In the inner region of the ADAF, the electron temperature can be more than one order of magnitude lower than the ion temperature (Narayan & Yi 1995b). Thus, the parameter ξ_e ≲ 0.1 at the base of the outflow from the ADAF, while ξ_e → 1 in the outflow far from the black hole. The results derived with different disk parameters are plotted in Fig. 2. We find that the timescale ratio τ^Comp_cool/τ_dyn decreases with increasing radius r in the outflow when r is small (see Fig. 2), because the solid angle subtended by the outer cold disk at the outflow increases with r at small radii. At large radii the solid angle decreases with increasing r, and therefore the ratio τ^Comp_cool/τ_dyn increases with r. The Compton cooling becomes less important for a disk accreting at a lower rate, because fewer soft seed photons are emitted from the outer disk.
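The ratio in Eq. (21) is straightforward to evaluate numerically. The sketch below (ours) uses a simple quadrature for the integral over the outer disk; the transition and outer radii are illustrative assumptions.

```python
# Ratio of Compton cooling time to dynamical time, Eq. (21), for the outer cold disk.
import numpy as np
from scipy.integrate import quad

def compton_ratio(r, mdot=0.01, xi_e=1.0, r_tr=20.0, r_out=1.0e4):
    """tau_Comp / tau_dyn at dimensionless outflow radius r, Eq. (21)."""
    integrand = lambda rd: 1.0 / (rd**2 * (r**2 + rd**2) ** 1.5)
    integral, _ = quad(integrand, r_tr, r_out)
    return 3.74e-5 / (xi_e * mdot) * r**-2.5 / integral

if __name__ == "__main__":
    for r in (1.0e2, 1.0e3, 1.0e4, 1.0e5):
        print(f"r = {r:8.0f}   tau_Comp/tau_dyn = {compton_ratio(r):.3e}")
```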
For ADAFs in sources with L_bol/L_Edd ≳ 0.001, a secondary small cold accretion disk extending to the marginally stable orbit of the black hole can co-exist with the ADAF due to the condensation process. The outflow can then be cooled via Compton scattering of the soft seed photons emitted from this inner cold disk. The radiative power of the inner cold disk consists of the viscously dissipated power in the disk and the power of the irradiation from the ADAF. In order to avoid exploring the complicated processes of the interaction between the ADAF and the cold disk, we assume the flux from a unit surface area of the inner cold disk to have the same radial dependence as the standard cold disk (Shakura & Sunyaev 1973),

F_vis(R_d) = (C_sd m L_sd / R_d^3) [1 − (R_d,in/R_d)^{1/2}],    (22)
where L_sd is the luminosity of the small cold disk and R_d,in is the radius of the inner edge of the disk. This small disk can extend to the marginally stable orbit of the black hole, and we adopt R_d,in = R_d,ms = 6GM/c^2 for a non-rotating black hole in all our calculations. The luminosity of the small disk is

L_sd = 2 ∫_{R_d,min}^{R_d,max} F_vis(R_d) 2πR_d dR_d,    (23)
which leads to

C_sd = 2.35 × 10^4 { (1/r_d,min) [1 − (2/3)(3/r_d,min)^{1/2}] − (1/r_d,max) [1 − (2/3)(3/r_d,max)^{1/2}] }^{-1}.    (24)
Similar to the above estimates for the Compton cooling caused by the emission from the outer cold disk, the ratio of the Compton cooling timescale due to the presence of the inner small cold disk to the dynamical timescale of the outflow is estimated as

τ^Comp_cool / τ_dyn = 6.59 ξ_e^{-1} C_sd^{-1} λ_sd^{-1} r^{-5/2} { ∫_3^{r_d,max} [1 − (3/r_d)^{1/2}] dr_d / [r_d^2 (r^2 + r_d^2)^{3/2}] }^{-1},    (25)
where the Eddington ratio of the small disk is λ_sd = L_sd/L_Edd. The inner cold small disk is usually truncated at several tens of Schwarzschild radii, and r_d,max = 20 is therefore adopted in the estimates. The final results are insensitive to the exact value of r_d,max adopted, because most of the emission comes from the region of the disk very close to the black hole. We plot the results in Fig. 3, which show that the Compton cooling of the outflow near the ADAF due to the presence of the inner small accretion disk is always unimportant, while the outflow can be cooled efficiently at large distances from the black hole.

For the LLAGNs, the radius of the BLR should be smaller than that of broad-line AGNs, if the correlation between R_BLR and L_bol (Eq. 15) still holds for low-luminosity sources (but also see Wang & Zhang 2003). Our estimate shows that the radiative cooling of the outflow in a source accreting at a rate significantly lower than ṁ_crit is inefficient, which means that an adiabatically expanding outflow is a good approximation. Considering a small volume V in the outflow with gas temperature T_gas and particle number density n, we have

d(UV) = (3/2) d(p_gas V) = −p_gas dV,    (26)

for an adiabatically expanding outflow, where p_gas = nkT_gas. The conservation of particles requires
dV/V = −dn/n.    (27)
Substituting Eq. (27) into Eq. (26), we arrive at

d ln T_gas = (2/3) d ln n,    (28)
i.e., T_gas ∝ n^{2/3}. As the number density n ∝ r^{-3/2} in the outflow (see Eq. 4), we find that the gas temperature scales as T_gas ∝ r^{-1} in an adiabatically expanding outflow.
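For completeness, a similar sketch (ours, with illustrative parameters) evaluates the inner-cold-disk case of Eqs. (24) and (25) above.

```python
# Compton cooling by the secondary inner cold disk: Eqs. (24) and (25).
import numpy as np
from scipy.integrate import quad

def c_sd(r_min=3.0, r_max=20.0):
    """Normalization C_sd of Eq. (24)."""
    term = lambda rd: (1.0 / rd) * (1.0 - (2.0 / 3.0) * np.sqrt(3.0 / rd))
    return 2.35e4 / (term(r_min) - term(r_max))

def compton_ratio_inner(r, lam_sd=0.003, xi_e=1.0, r_max=20.0):
    """tau_Comp / tau_dyn due to the inner small disk, Eq. (25)."""
    integrand = lambda rd: (1.0 - np.sqrt(3.0 / rd)) / (rd**2 * (r**2 + rd**2) ** 1.5)
    integral, _ = quad(integrand, 3.0, r_max)
    return 6.59 / (xi_e * c_sd(3.0, r_max) * lam_sd) * r**-2.5 / integral

if __name__ == "__main__":
    for r in (10.0, 1.0e3, 1.0e5):
        print(f"r = {r:8.0f}   tau_Comp/tau_dyn = {compton_ratio_inner(r):.3e}")
```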
DISCUSSION
The broad-line AGNs are relatively luminous and contain cold accretion disks. The accretion flows transition to hot ADAFs when the sources accrete at very low rates. Strong outflows are probably present in LLAGNs, as the ADAFs have a positive Bernoulli parameter (Narayan & Yi 1995a). This implies that the disappearance of the BLR in LLAGNs cannot simply be attributed to the lack of outflows from the accretion disk.
We estimate the cooling of the hot outflows from the ADAF and find that the radiative cooling of the outflows is always inefficient within the radius of the BLR for any values of the parameters adopted (see Fig. 2). The internal energy U ∝ n_e and the cooling rate F^- ∝ n_e^2, which indicates that the cooling timescale increases with decreasing electron number density n_e. In the estimate of the cooling, we assume that the radial velocity of the outflow equals the virial velocity, which is the minimal velocity with which the outflow can escape to infinity. If the gas in the outflow moves faster than the virial velocity, the number density n_e of the electrons decreases with increasing outflow velocity (all other parameters being fixed), and therefore the cooling timescale becomes longer for a higher outflow velocity. The results plotted in Fig. 1 are calculated with η_w = 1, i.e., Ṁ_w = Ṁ, and ṁ = ṁ_crit = 0.01, which of course leads to a lower limit on the cooling length scale (see Eqs. 12 and 14). For most of the LLAGNs, η_w ≪ 1 and ṁ ≪ ṁ_crit, which strengthens the conclusion derived from our estimates.
The detailed physics of the transition between accretion modes is still unclear. It was suggested that the ADAF co-exists with the standard disk, i.e., the inner ADAF connects to the outer thin accretion disk, in some sources accreting at rates slightly lower than the critical rate ṁ_crit (e.g., Quataert et al. 1999; Cao 2003; Xu & Cao 2009). The transition radius increases with decreasing accretion rate ṁ, as expected in the thermal-instability or disk-evaporation induced transition scenarios (e.g., Abramowicz et al. 1995; Liu et al. 1999; Różańska & Czerny 2000; Spruit & Deufel 2002). In the presence of an outer cold disk, the soft photons from the cold disk are Compton upscattered by the hot electrons in the outflow. For an ADAF accreting at a rate lower than ṁ_crit but with L_bol/L_Edd ≳ 10^{-3}, a secondary inner cold small disk will surround the black hole together with the ADAF. The mass accretion rate of the small cold disk is regulated by the condensation process and is always significantly lower than the total accretion rate (see Liu et al. 2007, for details). Similar to the accretion disk-corona system, the small cold disk is also irradiated by the ADAF, which implies that the luminosity of the small disk should be less than half of the bolometric luminosity. We adopt ξ_e = 0.1 in our calculations of the Compton cooling in the outflow near the ADAF, while ξ_e = 1 is adopted in the calculations for the outflow far from the ADAF. We find that the Compton cooling of the outflow near the ADAF by the soft seed photons from the outer cold disk is always inefficient (see Fig. 2). The situation is similar for the small inner cold disk, even if the luminosity of the inner cold disk is as high as L_sd = 0.01 L_Edd (see Fig. 3). In the region of the outflow at large distances from the ADAF, the electrons may have the same temperature as the ions, i.e., ξ_e = 1. In this case, our results show that the outflow can be Compton cooled efficiently at large distances, provided the transition radius of the ADAF to the outer cold disk is small (r_d,tr ≲ 20) or/and the secondary small cold disk has a luminosity L_sd ≳ 0.003 L_Edd. We note that our estimates of the importance of the Compton cooling are independent of the density of the outflow, i.e., of the mass-loss rate in the outflow, because both the Compton cooling rate and the internal energy of the gas are proportional to the density of the gas in the outflow. Cold outflows can still be driven from the outer cold thin disk if the sources are accreting at rates slightly lower than ṁ_crit, i.e., if the transition radius is not very large. In this case, the outflow from the ADAF can still be cooled at large distances from the black hole by Compton scattering of the soft seed photons from the outer cold disk or/and the secondary small inner cold disk. The small inner cold disk is evaporated completely in the ADAF, which may connect to the outer thin disk at a very large radius (or the outer cold disk is suppressed by the ADAF), when L_bol/L_Edd ≲ 10^{-3}, and therefore the BLR disappears owing to the lack of a cold outflow from the disk or to the inefficient cooling of the outflow from the ADAF. This is consistent with the observation that almost all "true" type 2 AGNs have Eddington ratios L_bol/L_Edd ≲ 10^{-3} (e.g., Nicastro et al. 2003).
For the cases in which the radiative cooling can be neglected, the temperature of the gas drops in an adiabatically expanding outflow. Our estimate shows that the gas temperature scales as T_gas ∝ r^{-1} in the outflow. The typical temperature of the ions in an ADAF near the black hole is ∼10^{11-12} K (e.g., Narayan & McClintock 2008), so the gas can be cooled to the typical temperature of BLR clouds (∼10^4 K) only at distances ≳10^{6-7} Schwarzschild radii from the black hole. This corresponds to ∼10^{4-5} light days for a black hole with M = 10^7 M_⊙, which is obviously far beyond the BLR in luminous AGNs (see Fig. 1). Therefore, we propose that the outflows from the ADAFs in LLAGNs are too hot to be cooled to form clouds in the BLRs, which leads to the disappearance of the BLR in LLAGNs.
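A back-of-the-envelope check of the numbers quoted in this paragraph (our sketch; the temperatures and the black hole mass are the representative values given in the text):

```python
# Distance at which an adiabatically expanding outflow (T_gas ~ r^-1) cools from the
# typical ADAF ion temperature to the typical BLR temperature, expressed in light days.
T_ion = 1.0e11          # K, representative ion temperature near the black hole (text value)
T_blr = 1.0e4           # K, typical BLR cloud temperature
m = 1.0e7               # black hole mass in solar masses

r_needed = T_ion / T_blr                   # dimensionless radius, since T_gas ~ r^-1
R_g_cm = 2.95e5 * m                        # 2GM/c^2 in cm
distance_cm = r_needed * R_g_cm
print(f"r ~ {r_needed:.1e} Schwarzschild radii ~ {distance_cm / 2.59e15:.1e} light days")
```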
A small fraction of AGNs are found to have emission lines with double-peaked profiles (e.g., Eracleous & Halpern 1994; Strateva et al. 2003), and these usually have low Eddington ratios (see Eracleous 2006, for a review and references therein, but also see Wu & Liu 2004; Bian et al. 2007). The most favored model for the double-peaked emitters suggests that the double-peaked broad emission lines are emitted from a ring in the accretion disk, which may also be photo-ionized by the radiation from the inner region or/and the outflow (e.g., Chen et al. 1989; Nemmen et al. 2006; Cao & Wang 2006). The observed broad-line emission may originate from two separate regions: the clouds in the normal BLR, or/and the outer ring of the thin accretion disk. The broad-line emission from the BLR clouds dominates over that from the outer region of the accretion disk in normal broad-line AGNs. For the double-peaked emitters accreting at rates lower than the critical accretion rate ṁ_crit, the ADAF is present in the inner region and connects to the outer thin accretion disk. The gas in the outflow from the ADAF is too hot to be cooled to form the clouds in the BLR when the transition radius of the ADAF to the outer disk is r_d,tr ≳ 20 and the secondary small cold disk has L_sd ≲ 0.003 L_Edd, which leads to the disappearance of BLR clouds in these sources. Thus, the line emission from the outer region of the accretion disk is not contaminated by the emission from the BLR clouds and emerges as double-peaked emission lines. This also provides a clue for the theoretical models of the accretion-mode transition.
I thank the referee for the very helpful comments/suggestions. This work is supported by the NSFC (grants 10773020, 10821302, and 10833002), the National Basic Research Program of China (grant 2009CB824800), the Science and Technology Commission of Shanghai Municipality (10XD1405000), the CAS (grant KJCX2-YW-T03), and the CAS/SAFEA International Partnership Program for Creative Research Teams.
FIG. 2.— The ratios of the Compton cooling timescale to the dynamical timescale as functions of radius r for different model parameters (solid and dotted color lines). The solid lines represent the ratio calculated with ξ_e = 0.1 for different transition radii, r_tr = 10, 20, 50, and 100, respectively (from bottom to top), while the dotted color lines are the results calculated with ξ_e = 1. The red lines are calculated with ṁ = 0.01, while ṁ = 0.001 is adopted for the blue lines. For comparison, we also plot the BLR size (dashed lines) estimated from the bolometric luminosity with the empirical correlation given by Bentz et al. (2006) for different black hole masses, m = 10^7, 10^8, and 10^9, respectively (from right to left). The red dashed lines correspond to the BLR sizes of AGNs with L_bol/L_Edd = 0.01, while the blue dashed lines are for L_bol/L_Edd = 0.001.

FIG. 3.— The ratios of the Compton cooling timescale to the dynamical timescale as functions of radius r in the presence of a small cold accretion disk co-existing with an ADAF in the inner region. The black lines represent the ratios calculated with ξ_e = 0.1 for different Eddington ratios of the small cold accretion disk: λ_sd = 0.01 (solid), 0.003 (dashed), and 0.001 (dotted), respectively. The green lines are the same as the black lines, but with ξ_e = 1. All other lines have the same meanings as those in Fig. 2.
Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai, 200030, China; [email protected]
REFERENCES

Abramowicz, M. A., Chen, X., Kato, S., Lasota, J.-P., & Regev, O. 1995, ApJ, 438, L37
Antonucci, R. 1993, ARA&A, 31, 473
Bentz, M. C., Peterson, B. M., Pogge, R. W., Vestergaard, M., & Onken, C. A. 2006, ApJ, 644, 133
Bian, W.-H., Chen, Y.-M., Gu, Q.-S., & Wang, J.-M. 2007, ApJ, 668, 721
Blandford, R. D., & Begelman, M. C. 1999, MNRAS, 303, L1
Cao, X. 2003, ApJ, 599, 147
Cao, X., & Wang, T.-G. 2006, ApJ, 652, 112
Chen, K., Halpern, J. P., & Filippenko, A. V. 1989, ApJ, 339, 742
Czerny, B., Różańska, A., & Kuraszkiewicz, J. 2004, A&A, 428, 39
Elitzur, M., & Ho, L. C. 2009, ApJ, 701, L91
Elitzur, M., & Shlosman, I. 2006, ApJ, 648, L101
Emmering, R. T., Blandford, R. D., & Shlosman, I. 1992, ApJ, 385, 460
Eracleous, M. 2006, in AGN Variability from X-Rays to Radio Waves, ed. C. M. Gaskell, I. M. McHardy, B. M. Peterson, & S. G. Sergeev (San Francisco: ASP), Astronomical Society of the Pacific Conference Series, 360, 217
Eracleous, M., & Halpern, J. P. 1994, ApJS, 90, 1
Esin, A. A., McClintock, J. E., & Narayan, R. 1997, ApJ, 489, 865
Gammie, C. F., Narayan, R., & Blandford, R. 1999, ApJ, 516, 177
Gu, Q., & Huang, J. 2002, ApJ, 579, 205
Ho, L. C. 2008, ARA&A, 46, 475
Igumenshchev, I. V., Abramowicz, M. A., & Narayan, R. 2000, ApJ, 537, L27
Kaspi, S., Smith, P. S., Netzer, H., Maoz, D., Jannuzi, B. T., & Giveon, U. 2000, ApJ, 533, 631
Laor, A. 2003, ApJ, 590, 86
Lasota, J.-P., Abramowicz, M. A., Chen, X., Krolik, J., Narayan, R., & Yi, I. 1996, ApJ, 462, 142
Liu, B. F., & Taam, R. E. 2009, ApJ, 707, 233
Liu, B. F., Taam, R. E., Meyer-Hofmeister, E., & Meyer, F. 2007, ApJ, 671, 695
Liu, B. F., Yuan, W., Meyer, F., Meyer-Hofmeister, E., & Xie, G. Z. 1999, ApJ, 527, L17
Mayer, M., & Pringle, J. E. 2007, MNRAS, 376, 435
Meyer, F., Liu, B. F., & Meyer-Hofmeister, E. 2007, A&A, 463, 1
Miller, J. M., Homan, J., & Miniutti, G. 2006, ApJ, 652, L113
Narayan, R. 2002, in Proc. MPA/ESO/MPE/USM Joint Astronomy Conf., Lighthouses of the Universe: The Most Luminous Celestial Objects and Their Use for Cosmology, ed. M. Gilfanov, R. Sunyaev, & E. Churazov (Berlin: Springer), 405
Narayan, R., & McClintock, J. E. 2008, New Astronomy Review, 51, 733
Narayan, R., & Yi, I. 1994, ApJ, 428, L13
Narayan, R., & Yi, I. 1995a, ApJ, 444, 231
Narayan, R., & Yi, I. 1995b, ApJ, 452, 710
Nemmen, R. S., Storchi-Bergmann, T., Yuan, F., Eracleous, M., Terashima, Y., & Wilson, A. S. 2006, ApJ, 643, 652
Netzer, H., & Peterson, B. M. 1997, Astronomical Time Series, 218, 85
Nicastro, F. 2000, ApJ, 530, L65
Nicastro, F., Martocchia, A., & Matt, G. 2003, ApJ, 589, L13
Peterson, B. M. 1993, PASP, 105, 247
Quataert, E., Di Matteo, T., Narayan, R., & Ho, L. C. 1999, ApJ, 525, L89
Różańska, A., & Czerny, B. 2000, A&A, 360, 1170
Rybicki, G. B., & Lightman, A. P. 1986, Radiative Processes in Astrophysics (Wiley-VCH)
Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
Shakura, N. I., & Sunyaev, R. A. 1976, MNRAS, 175, 613
Spruit, H. C., & Deufel, B. 2002, A&A, 387, 918
Stone, J. M., & Pringle, J. E. 2001, MNRAS, 322, 461
Stone, J. M., Pringle, J. E., & Begelman, M. C. 1999, MNRAS, 310, 1002
Strateva, I. V., et al. 2003, AJ, 126, 1720
Taam, R. E., Liu, B. F., Meyer, F., & Meyer-Hofmeister, E. 2008, ApJ, 688, 527
Tomsick, J. A., et al. 2008, ApJ, 680, 593
Tran, H. D. 2001, ApJ, 554, L19
Tran, H. D. 2003, ApJ, 583, 632
Trump, J. R., et al. 2009, ApJ, 700, 49
Wang, T.-G., & Zhang, X.-G. 2003, MNRAS, 340, 793
Warner, C., Hamann, F., & Dietrich, M. 2004, ApJ, 608, 136
Wu, X.-B., & Liu, F. K. 2004, ApJ, 614, 91
Xu, Y., & Cao, X.-W. 2007, Chinese Journal of Astronomy and Astrophysics, 7, 63
Xu, Y.-D., & Cao, X.-W. 2009, Research in Astronomy and Astrophysics, 9, 401
| []
|
[]
| [
"EYVINDURAlex Iosevich ",
"ANDAri Palsson ",
"Sean R Sovine "
]
| []
| []
| We establish some new L^p-improving bounds for the k-simplex averaging operators S^k that hold in dimensions d ≥ k. As a consequence of these L^p-improving bounds we obtain nontrivial bounds S^k : L^{p_1} × · · · × L^{p_k} → L^r with r < 1. In particular we show that the triangle averaging operator S^2 maps L^{(d+1)/d} × L^{(d+1)/d} → L^s for s ∈ [(d+1)/(2d), 1]. This improves quasi-Banach bounds obtained in [8] and extends bounds obtained in [3] for the case of k = d = 2.
"https://arxiv.org/pdf/2109.09017v1.pdf"
]
| 237,571,925 | 2109.09017 | 0a3562ddca07610303294dbfd69abc6007ecfb56 |
18 Sep 2021
Alex Iosevich, Eyvindur Ari Palsson, and Sean R. Sovine

SIMPLEX AVERAGING OPERATORS: QUASI-BANACH AND L^p-IMPROVING BOUNDS IN LOWER DIMENSIONS
We establish some new L^p-improving bounds for the k-simplex averaging operators S^k that hold in dimensions d ≥ k. As a consequence of these L^p-improving bounds we obtain nontrivial bounds S^k : L^{p_1} × · · · × L^{p_k} → L^r with r < 1. In particular we show that the triangle averaging operator S^2 maps L^{(d+1)/d} × L^{(d+1)/d} → L^s for s ∈ [(d+1)/(2d), 1]. This improves quasi-Banach bounds obtained in [8] and extends bounds obtained in [3] for the case of k = d = 2.
Introduction
Let d ≥ k and let ∆_k = {u_0 = 0, u_1, . . . , u_k} ⊆ R^d be the set of vertices of a regular k-simplex of unit side length. We define the k-simplex averaging operator

S^k(f_1, . . . , f_k)(x) := ∫_{O(d)} f_1(x − Ru_1) · · · f_k(x − Ru_k) dµ(R),

where µ is the normalized Haar measure on the group O(d). At input x this operator computes the average value of the function f_1 ⊗ · · · ⊗ f_k over the smooth manifold

M_k(x) = {(v_1, . . . , v_k) ∈ (R^d)^k : |v_i − v_j|^2 = 1 for 0 ≤ i < j ≤ k, with v_0 = x}

of all tuples (v_1, . . . , v_k) ∈ (R^d)^k such that {x, v_1, . . . , v_k} is the set of vertices of a regular k-simplex of unit side length. The k-simplex averaging operator is a k-linear analogue of the spherical averaging operator, which computes the average value of a function f over a sphere centered at x and can be expressed as

S^1(f)(x) := ∫_{O(d)} f(x − Ru_1) dµ(R),

for any u_1 with |u_1| = 1.
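To make the definition concrete, here is a small Monte Carlo sketch (ours, not from the paper) that approximates the triangle averaging operator S^2 = T at a point in the plane; the vertex choice, sample size, and test functions are illustrative.

```python
# Monte Carlo approximation of S^2(f, g)(x): average of f(x - R u1) g(x - R u2)
# over Haar-random R in O(d), where {0, u1, u2} is a unit-side regular 2-simplex.
import numpy as np

rng = np.random.default_rng(0)

def haar_orthogonal(d):
    """Haar-random element of O(d) via QR decomposition of a Gaussian matrix."""
    z = rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q * np.sign(np.diag(r))   # rescale columns so the law is Haar measure

def simplex_average(f, g, x, d=2, n_samples=20_000):
    """Estimate S^2(f, g)(x) for the unit-side triangle with vertices 0, u1, u2."""
    u1 = np.array([1.0, 0.0] + [0.0] * (d - 2))
    u2 = np.array([0.5, np.sqrt(3.0) / 2.0] + [0.0] * (d - 2))
    total = 0.0
    for _ in range(n_samples):
        R = haar_orthogonal(d)
        total += f(x - R @ u1) * g(x - R @ u2)
    return total / n_samples

if __name__ == "__main__":
    ball = lambda y: float(np.linalg.norm(y) <= 1.2)   # indicator of a ball of radius 1.2
    print(simplex_average(ball, ball, x=np.zeros(2)))
```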
Cook, Lyall, and Magyar [1] introduce a technique that can be used to establish a wide range of nontrivial and L^p-improving bounds for averages over non-degenerate k-simplices in higher dimensions. In this work we establish L^p-improving bounds for S^k that hold in lower dimensions and show how these can be used to obtain further quasi-Banach bounds for S^k. In the case k = 2 we have the triangle averaging operator, which we denote by T := S^2, and our result is:

Theorem 1.1. The triangle averaging operator T satisfies the bound

T : L^{(d+1)/d}(R^d) × L^{(d+1)/d}(R^d) → L^s(R^d)

for all s ∈ [(d+1)/(2d), 1] and d ≥ 2. Moreover,

T : L^p(R^d) × L^q(R^d) → L^1(R^d)

if and only if (1/p, 1/q) lies in the convex hull of the points {(0, 1), (1, 0), (d/(d+1), d/(d+1))}.
For the simplex operators S^k we establish the following L^p-improving bounds that hold in lower dimensions.

Theorem 1.2. In dimensions d ≥ k, S^k is of restricted strong-type (k, . . . , k, k), and S^k is of restricted strong-type (k(d+1)/d, . . . , k(d+1)/d, d+1).

The work of the first listed author was supported in part by NSF grant HDR TRIPODS-1934962, the work of the second listed author was supported in part by Simons Foundation Grant #360560, and the work of the third listed author was supported in part by NSF grant DMS-1907435.

When d is large the unrestricted version of the first of these bounds follows from the second bound by interpolation, but this is not the case when d is close to k. In higher dimensions these bounds are contained in the range of bounds obtained by Cook, Lyall, and Magyar. Our proof of the first bound is an adaptation of the proof given by Greenleaf, Iosevich, Krause, and Liu [3] in the case where k = d = 2, which can also be derived from the work of Stovall [10]. We also describe a technique for obtaining bounds into L^r with r < 1 from L^p-improving bounds mapping into L^1. As observed in [3], for f, g ≥ 0
‖S^2(f, g)‖_{L^1} = ⟨f, S^1(g)⟩ = ⟨S^1(f), g⟩,

and hence by the well-known bounds for the spherical averaging operator S^1 we have that S^2 : L^p × L^q → L^1 if and only if (1/p, 1/q) ∈ Conv{(0, 1), (1, 0), (d/(d+1), d/(d+1))}. This gives the bounds in Theorem 1.1, which improve on bounds obtained in [8]. As a further application of this technique we show that the bilinear spherical averaging operator

B(f, g)(x) := ∫_{S^{2d−1}} f(x − u_1) g(x − u_2) dσ(u_1, u_2)

maps L^1 × L^1 → L^s for s ∈ [1/2, 1] and d ≥ 2.
Bounds of Cook, Lyall, and Magyar
The following result was established in [1]: Theorem 2.1 (Special case of Proposition 3 of Cook, Lyall, and Magyar [1]). Let k, m ≥ 2 be integers with d ≥ km. Then the k-simplex averaging operator S k satisfies the bounds
S k (f 1 , . . . , f k )(x) ≤ C d,m,k (S k−1 (|f 1 | q , . . . , |f k−1 | q )(x)) 1 q (S(|f k | q )) 1 q , uniformly for x ∈ R d , where q = m m−1 . Hence by induction, S k (f 1 , . . . , f k )(x) ≤ C d,m,k (S(|f 1 | q k−1 )(x)) 1 q k−1 (S(|f 2 | q k−1 )(x)) 1 q k−1 k j=3 (S(|f j | q k+1−j )(x)) 1 q k+1−j , uniformly for x ∈ R d .
Combining this with Hölder's inequality yields the following range of L p bounds for S k , which includes nearoptimal non-trivial bounds.
Corollary 2.2. The operator S k satisfies the bounds S k : L p σ(1) × · · · × L p σ(k) → L r for all exponents satisfying p 1 ≥ q k−1 , p j ≥ q k+1−j for j ≥ 2, and 1 p1 + · · · + 1 p k = 1 r , whenever k, m ≥ 2, d ≥ mk, and q = m m−1 , for all permutations σ of {1, . . . , k}. Hence by interpolation
S k : L kr × · · · × L kr → L r with r = q k−1 2 + q + q 2 + . . . + q k−2 .
These bounds are asymptotically optimal as m, and hence d, increases. Combining Theorem 2.1 with bounds for the spherical average gives strong L p -improving bounds, for example, Corollary 2.3. The operator S k satisfies the bounds
S k : L p σ(1) × · · · × L p σ(k) → L r , where r = q k−1 (d + 1) 2 + q + q 2 + . . . + q k−2 ,
where p 1 = q k−1 d+1 d and p j = q k+1−j d+1 d for j ≥ 2, for each permutation σ. Hence by interpolation
S k : L kr d × · · · × L kr d → L r ,
for all m ≥ 2 and d ≥ mk.
Background
The triangle averaging operator was introduced in dimension d = 2 by Greenleaf and Iosevich in [4], where Sobolev bounds for T were obtained and applied to a generalization of the Falconer distance problem. Greenleaf, Iosevich, Krause, and Liu [3] showed that in dimension d = 2 a family of operators including T satisfies L p × L q → L r bounds for ( 1 p , 1 q , 1 r ) in the set {( 2 3 , 2 3 , 1), ( 2 3 , 0, 1 3 ), (0, 2 3 , 1 3 )} and a restricted strongtype bound for ( 1 p , 1 q , 1 r ) = ( 1 2 , 1 2 , 1 2 ), and showed that these L p improving bounds are sharp in the Banach range. These bounds can also be derived from the work of Stovall [10]. In [8] Palsson and Sovine studied the L p × L p → L r boundedness of T (f, g) using a frequency-space decomposition and obtained quasi-Banach bounds in higher dimensions. Cook, Lyall, and Magyar [1] established bounds for maximal averages with respect to general non-degenerate k-simplices using the majorization technique described above.
4. Quasi-Banach Bounds from L p -Improving Bounds into L 1
In [2] Grafakos and Kalton show that the operator

I(f, g)(x) := ∫_{|t|≤1} f(x − t) g(x + t) dt

is bounded from L^1(R^d) × L^1(R^d) to L^{1/2}(R^d).
We show how their argument can be adapted to a slightly more general situation. In the following, for l = (l 1 , . . . , l d ) ∈ Z d we denote by Q l the cube with side length 1 and lower left corner at l.
Suppose that the k-linear operator U (f 1 , . . . , f k ) has the following localization properties:
(L1) There is a finite number N such that U (f 1 , . . . , f k ) ≡ 0 whenever there are i, j with f i , f j supported on cubes Q l i , Q l j with l i − l j ∞ := max 1≤n≤d |(l i ) n − (l j ) n | > N . (L2) There is a fixed R > 0 such that U (f 1 , . . . , f k )(x) is supported on k i=1 sppt(f i ) + B(0, R)
. It is easy to see that each of the k-simplex averaging operators S k and the bilinear spherical averaging operator satisfy conditions L1 and L2. Now suppose that whenever each f i is supported on a cube of side length 1 we have the bound
(4.1) U (f 1 , . . . , f k ) L 1 (R d ) ≤ A f 1 L p 1 (R d ) · · · f k L p k (R d )
for some exponents with 1
p 1 + · · · + 1 p k =: 1 r > 1.
We define F N := {l ∈ Z d : l ∞ ≤ N }. Then by properties L1 and L2 we have for each s ∈ [r, 1],
U (f 1 , . . . , f k ) L s = R d l∈Z d d2,...,d k ∈FN U (f 1 1 Q l , f 2 1 Q l +d2 . . . , f k 1 Q l +d k )(x) s dx 1 s ≤ C d2,...,d k ∈FN R d l∈Z d U (f 1 1 Q l , f 2 1 Q l +d2 . . . , f k 1 Q l +d k )(x) s dx 1 s ≤ C d2,...,d k ∈FN R d l∈Z d |U (f 1 1 Q l , f 2 1 Q l +d2 . . . , f k 1 Q l +d k )(x)| s dx 1 s ≤ C d2,...,d k ∈FN l∈Z d U (f 1 1 Q l , f 2 1 Q l +d2 . . . , f k 1 Q l +d k ) s L 1 1 s ≤ C d2,...,d k ∈FN l∈Z d f 1 1 Q l s L p 1 f 2 1 Q l +d2 s L p 2 · · · f k 1 Q l +d k s L p k 1 s ≤ C d2,...,d k ∈FN l∈Z d f 1 1 Q l r L p 1 f 2 1 Q l +d2 r L p 2 · · · f k 1 Q l +d k r L p k 1 r ≤ C l∈Z d f 1 1 Q l p1 L p 1 1 p 1 · · · l∈Z d f k 1 Q l p k L p k 1 p 1 = C f 1 L p 1 · · · f k L p k ,
where the constant depends on N , R, d, s, and A. We summarize this result in the following proposition.
Proposition 4.1. Suppose that the k-linear operator U(f_1, . . . , f_k) satisfies the localization conditions (L1) and (L2) and that

‖U(f_1, . . . , f_k)‖_{L^1(R^d)} ≤ A ‖f_1‖_{L^{p_1}(R^d)} · · · ‖f_k‖_{L^{p_k}(R^d)}

for some exponents p_1, . . . , p_k ≥ 1 with 1/p_1 + · · · + 1/p_k =: 1/r > 1 whenever each f_i is supported on a cube. Then for each s ∈ [r, 1],

U : L^{p_1}(R^d) × · · · × L^{p_k}(R^d) → L^s(R^d).

Note that the bilinear convolution operator T_µ associated to any compactly supported finite Borel measure µ on R^{2d} satisfies the localization conditions (L1) and (L2). The following proposition is an abstract version of the technique used to obtain the bound I : L^1 × L^1 → L^{1/2} of Grafakos and Kalton [2] and our result below on the boundedness of T.

Proposition 4.2. Let µ be a compactly supported finite positive Borel measure on R^{2d} such that the pushforward measure

µ^(−)(A) := ∫_{R^{2d}} 1_A(y − z) dµ(y, z)

on R^d is absolutely continuous with density dµ^(−)/dt ∈ L^∞. Then

T_µ : L^1(R^d) × L^1(R^d) → L^1(R^d),

and thus by Proposition 4.1

T_µ : L^1(R^d) × L^1(R^d) → L^s(R^d), for s ∈ [1/2, 1].

If T_{µ^(−)} : L^p → L^q with q > p, then

T_µ : L^{q′}(R^d) × L^p(R^d) → L^1(R^d),

and thus by Proposition 4.1

T_µ : L^{q′}(R^d) × L^p(R^d) → L^s(R^d), for s ∈ [pq′/(p + q′), 1].
Proof. Suppose that µ (−) is absolutely continuous with L ∞ density. This proof is essentially the same as the one given by Grafakos and Kalton to bound I. We have
T µ (f, g) L 1 ≤ R d R 2d |f (x − u)||g(x − v)| dµ(u, v) dx = R d |f (x)| R 2d |g(x − (u − v))| dµ(u, v) dx = R d |f (x)| R d |g(x − t)| dµ (−) (t) dx = R d |f (x)| R d |g(x − t)| dµ (−) dt dt dx ≤ dµ (−) /dt L ∞ f L 1 g L 1 .
We can now apply Proposition 4.1.
Now suppose that T µ (−) : L p → L q . Then we have T µ (f, g) L 1 ≤ R d |f (x)| R d |g(x − t)| dµ (−) (t) dx ≤ f L q ′ T µ (−) (g) L q ≤ C f L q ′ g L p .
5. Applications to the Bilinear Spherical and Triangle Averaging Operators 5.1. Application to triangle averaging operator. We will use the L p improving bound S 1 : L d+1 → d+1 d for the spherical averaging operator to estimate the L 1 norm of T = S 2 . By Tonelli's theorem and a change of variables we have 1]. This argument was previously used in [3]. In fact, the reasoning above shows that for f, g ≥ 0,
T (f 1 , f 2 ) L 1 ≤ SO(d) R d |f 1 (x − Ru 1 )| |f 2 (x − Ru 2 )| dx dµ(R) = R d |f 1 (x)| SO(d) |f 2 (x − R(u 2 − u 1 ))| dµ(R) dx = f 1 (x)S 1 (|f 2 |)(x) L 1 ≤ f 1 L d+1 d S 1 (|f 2 |) L d+1 ≤ C f 1 L d+1 d f 2 L d+1 d .
Now it follows by Proposition 2.1 that
T : L d+1 d × L d+1 d → L s for all s ∈ [ d+1 2d ,T (f, g) L 1 = S(f ), g = f, S(g) .
It follows by L p duality that
T : L p × L q → L 1 if and only if S : L p → L q ′ .
Hence from the known range of bounds for the spherical averaging operator (see for example [6]) we have that T : L p × L q → L 1 if and only if ( 1 p , 1 q ) lies in the region shown in Figure 1. Thus the bounds T : L p × L q → L r with 1 p + 1 q = 1 r ≥ 1 that can be obtained by applying Proposition 4.1 are exactly those with ( 1 p , 1 q ) in the region shown in Figure 1. The essential new T : L p × L q → L pq p+q bound in this range is the one with
( 1 p , 1 q ) = ( d d+1 , d d+1 )
, since the others can be obtained from this one by interpolation with bounds in the Banach range.
Figure 1. Pairs (1/p, 1/q) for which T : L^p × L^q → L^1.
5.2.
Application to bilinear spherical averaging operator. Recall that the bilinear spherical averaging operator is defined by that the bilinear spherical averaging operator
B(f, g)(x) := S 2d−1 f (x − u 1 )g(x − u 2 ) dσ(u 1 , u 2 ),
where σ is the surface measure on the unit sphere in R^{2d}. Multilinear spherical convolutions of this type were first introduced by Daniel Oberlin in the case d = 1 [7]. A complete characterization of L^p bounds for these operators in the case d = 1 was recently obtained by Shrivastava and Shuin [9]. Here we address the case where d ≥ 2 and show that B : L^1 × L^1 → L^s for s ∈ [1/2, 1]. Jeong and Lee [5] recently completely characterized the L^p boundedness of the maximal version of the operator B using a slicing technique; our approach in this section bears some resemblance to theirs.

Let d ≥ 2. Let D := {(x, x) : x ∈ R^d} be the diagonal subspace of R^{2d} and A := {(x, −x) : x ∈ R^d} the antidiagonal subspace, and notice that these subspaces decompose R^{2d} orthogonally. Then for two points
(a, b), (c, d) ∈ R d × R d ≃ R 2d we have a − b = c − d if and only if (a, b) − (c, d) ∈ D.
Hence, if π A is the orthogonal projection onto A and π A (a, b) = (c, −c), then a − b = 2c. Thus for E ⊆ R d , if π 1 : R 2d → R d is the projection onto the first d coordinates, i.e., π 1 (a, b) = a, then
a − b ∈ E ⇐⇒ 2(π 1 • π A )(a, b) ∈ E.
Now let R ∈ O(2d) be the orthogonal transformation with block matrix
R = 1 √ 2 I −I I I that maps A onto the subspace S 1 := {(x, 0) : x ∈ R d }. Then a − b ∈ E ⇐⇒ 2(π 1 • R)(a, b) ∈ E.
But then by the invariance of the spherical measure under orthogonal transformations R(a, b))] dσ(a, b) (a, b).
σ {(a, b) ∈ S 2d−1 : a − b ∈ E} = S 2d−1 1 E (a − b) dσ(a, b) = S 2d−1 1 1 2 E [π 1 (= S 2d−1 1 1 2 E [π 1 (a, b)] dσ(a, b) = S 2d−1 1 1 2 E (a) dσ
For dx the Lebesgue measure on R d and dy the Lebesgue measure on R d−1 we have
S 2d−1 1 1 2 E (a) dσ(a, b) = 2 B 2d−1 (0,1) 1 1 2 E (x) 1 1 − |x| 2 − |y| 2 dx dy = R d 1 1 2 E (x) · 2 R d−1 1 B 2d−1 (0,1) (x, y) 1 1 − |x| 2 − |y| 2 dy dx = R d 1 1 2 E (x) F (x) dx.
Letting r_0^2 := 1 − |x|^2, we have

F(x) = C_d ∫_0^{r_0} r^{d−2} dr / √(r_0^2 − r^2) ≤ C for all x ∈ B^d(0, 1).

Thus F ∈ L^∞(R^d), so the pushforward measure σ^(−) is absolutely continuous with bounded density. Hence B : L^1 × L^1 → L^1, and by Proposition 4.1, B : L^1 × L^1 → L^{1/2} for d ≥ 2.
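A quick numerical sanity check of this boundedness claim (our sketch, not part of the paper): sample points uniformly on S^{2d−1} and estimate the radial profile of the density of a − b.

```python
# Empirical check that v = a - b, with (a, b) uniform on S^{2d-1}, has a bounded
# density in R^d: estimate the density as a function of |v| and inspect its maximum.
import numpy as np
from math import gamma, pi

rng = np.random.default_rng(1)

def radial_density(d=3, n=200_000, bins=40):
    z = rng.standard_normal((n, 2 * d))
    p = z / np.linalg.norm(z, axis=1, keepdims=True)   # uniform points on S^{2d-1}
    s = np.linalg.norm(p[:, :d] - p[:, d:], axis=1)    # |a - b|, lies in [0, 2]
    hist, edges = np.histogram(s, bins=bins, range=(0.0, 2.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    omega = 2.0 * pi ** (d / 2) / gamma(d / 2)         # area of the unit sphere in R^d
    return centers, hist / (omega * centers ** (d - 1))

if __name__ == "__main__":
    centers, dens = radial_density()
    print(f"max of estimated density: {dens.max():.3f}")
```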
6. L^p-Improving and Quasi-Banach Bounds for k-Simplex Operators for d ≥ k

In this section we establish L^p-improving and quasi-Banach bounds that hold in lower dimensions d ≥ k, which are not included in the range of bounds obtained by the technique of Cook, Lyall, and Magyar [1].
Let E 1 . . . E k ⊆ R d be measurable and using the symmetry of the operator assume WLOG that |E 1 | ≤ |E j | for all j. Then we have by the L p -improving bounds for spherical averages,
S k (1 E1 , . . . , 1 E k ) L d+1 ≤ S(1 E1 ) L d+1 ≤ 1 E1 L d+1 d = |E 1 | d d+1 ≤ |E 1 | d k(d+1) · · · |E k | d k(d+1) ,
so S k satisfies a restricted strong-type (k + k d , . . . , k + k d , d + 1) bound, which has an L p improvement ratio of d versus the Hölder exponents.
Theorem 6.1. S k satisfies a restricted strong-type ( k(d+1) d , . . . , k(d+1) d , d + 1) bound for d ≥ k.
Hence by interpolation against the L ∞ × · · · × L ∞ → L ∞ bound
S k : L p × · × L p → L dp k for all p > k(d + 1) d .
Now using the fact that each face of a regular k-simplex is a regular (k − 1)-simplex, we have
S k (f 1 , . . . , f k ) L 1 ≤ R d |f 1 |(x) SO(d) |f 2 |(x − R(u 2 − u 1 )) · · · |f k |(x − R(u k − u 1 )) dµ(R) dx = |f 1 |(x)S k−1 (|f 2 |, . . . , |f k |)(x) dx ≤ f 1 L dp dp−k+1 S k−1 (|f 2 |, . . . , |f k |) L dp k−1 ≤ C f 1 L dp dp−k+1 f 2 L p · · · f k L p , for all p > (k−1)(d+1) d .
Applying the technique from Section 4 then establishes nontrivial bounds for S k . Corollary 6.2. The k-simplex operator S k satisfies the bound
S k : L p1 × · · · × L p k → L r where 1 p 1 + · · · + 1 p k = 1 r ,
and ( 1 p1 , . . . , 1 p k , 1 r ) lies in the interior of the convex hull of the set of points ( 1 q1 , . . . , 1 q k , 1 r ) with
q σ(1) = d + 1 d , q σ(j) = (k − 1)(d + 1) d for 2 ≤ j ≤ k, r = d + 1 2d ,
for some permuation σ of {1, . . . , k}. In particular,
S k : L kr × · · · × L kr → L r for r > d + 1 2d and d ≥ k.
A straightforward calculation shows that for nice functions f, g, h,
T (f, g), h = f, T (g, h) .
It follows that T :
L p × L q → L r implies T : L r ′ × L p → L q ′ whenever 1 ≤ r, q ′ < ∞.
Applying this with the L p improving bound above shows that T : L p × L q → L r for d ≥ 2 when (1/p, 1/q, 1/r) is one of the following
L p -improving triples d d + 1 , d 2(d + 1) , d + 2 2d + 2 , d 2(d + 1) , d d + 1 , d + 2 2d + 2 .
7. Restricted strong-type (k, k, . . . , k) bounds for S k for d ≥ k
In [3] the authors established that a family of operators that includes T in dimension d = 2 satisfies a restricted strong-type (2, 2, 2) bound. Here we adapt the ideas of the proof in [3] to obtain a restricted strongtype (k, k, . . . , k) bound for S k in dimensions d ≥ k. The interesting cases occur when d is close to k, since in higher dimensions this bound follows from the method of Cook, Lyall, and Magyar [1]. The key observation behind this adaptation is that if p 1 , . . . , p k are linearly independent points of S d−1 , then on a neighborhood of (p 1 , . . . , p k ) the addition map (u 1 , . . . , u k ) → u 1 + . . . + u k from S d−1 × · · · × S d−1 → R d is a submersion and hence behaves locally like a projection.
Theorem 7.1. S k is of restricted strong-type (k, . . . , k, k) in dimensions d ≥ k.
Proof. We assume that d ≥ k and let E 1 , . . . , E k ⊆ R d be measurable sets, WLOG (by the symmetry of the operator in its inputs) with |E 1 | ≤ |E 2 | ≤ . . . ≤ |E k |. Our goal in this section is to show that
S k (1 E1 , . . . , 1 E k ) L k ≤ C(|E 1 | · · · |E k |) 1/k ,
i.e., that S k is of restricted strong-type (k, . . . , k, k). We have for {0, u 1 , . . . , u k } the vertices of a regular k-simplex of unit side length,
S k (1 E1 , . . . , 1 E k ) k L k = R d k i=1 O(d) 1 E1 (x − R i u 1 ) · · · 1 E k (x − R i u k ) dµ(R i ) dx.
By the compactness of the product space O(d) × · · · × O(d) = (O(d)) k it is sufficient to show that for each (R 1 , . . . , R k ) ∈ (O(d)) k there is a neighborhood N (R 1 , . . . , R k ) of (R 1 , . . . , R k ) such that, with µ k := (µ × · · · × µ),
R d N (R1,...,R k ) k i=1 1 E1 (x − R i u 1 ) · · · 1 E k (x − R i u k ) dµ k (R 1 , . . . , R k ) dx ≤ C|E 1 | · · · |E k |,
since then (O(d)) k will be covered by finitely many such neighborhoods N (R 1 , . . . , R k ).
To show that such a neighborhood exists, for each i we will keep one of the factors in
1 E1 (x − R i u 1 ) · · · 1 E k (x − R i u k )
and drop the remaining k − 1. Which factors we keep and which ones we drop will depend on the relative positions of the vectors R i u j .
(A) Selecting which factors to keep and drop:
We fix (R 1 , . . . , R k ) ∈ (O(d)) k and use the following algorithm to select which factors to keep and which to drop in each integral:
(i) For i = 1 we will keep 1 E1 (x − R 1 u 1 ). We set a 1 = u 1 and A 1 = E 1 .
(ii) Suppose that j < k and we have chosen a 1 , . . . a j , with a i = u m for some m ≤ i for each i and such that R 1 a 1 , . . . , R j a j are linearly independent. We choose a j+1 and A j+1 as follows: Note that {0, R j+1 u 1 , . . . , R j+1 u j+1 } form the set of vertices of a regular (j + 1)-simplex, and hence the vertices R j+1 u 1 , . . . , R j+1 u j+1 are linearly independent. It follows that there must be a p in 1, . . . , j + 1 such that R j+1 u p ∈ span{R 1 a 1 , . . . , R j a j }. Then we set a j+1 = a p and A j+1 = A p .
This algorithm produces sequences a 1 , . . . , a k of vectors and A 1 , . . . , A k of sets, where each a i is equal to some u m , and A i = E m , with m ≤ i for i = 1, . . . , k. For each i we will keep the factor 1 Ai (x − R i a i ) in the integrand and drop the remaining k − 1 factors corresponding to R i .
(B) Bounding inner integral by parameterized spherical integral:
We now let
B := B(R 1 a 1 , r) × · · · × B(R k a k , r),
where r > 0 will be chosen sufficiently small in a later step, and define the open neighborhood
N (R 1 , . . . , R k ) := {(S 1 , . . . , S k ) ∈ (O(d)) k : (S 1 a 1 , . . . , S k a k ) ∈ B}.
We now have, denoting again dµ k := d(µ × · · · × µ),
R d N (R1,...,R k ) k i=1 1 E1 (x − R i u 1 ) · · · 1 E k (x − R i u k ) dµ k (R 1 , . . . , R k ) dx ≤ R d N (R1,...,R k ) 1 A1 (x − R 1 a 1 ) · · · 1 A k (x − R k a k ) dµ k (R 1 , . . . , R k ) dx = R d B⊆(S d−1 ) k 1 A1 (x − p 1 ) · · · 1 A k (x − p k ) dσ k (p 1 , . . . , p k ) dx = R d 1 A1 (x) B⊆(S d−1 ) k 1 A2 (x − (p 2 − p 1 )) · · · 1 A k (x − (p k − p 1 )) dσ k (p 1 , .
. . , p k ) dx, (7.1) and it suffices to show that inner integral in the last line is ≤ C|A 2 | · · · |A k | with constant C independent of E 1 , . . . , E k and x.
There is an s = s(r) > 0 and for each i an orthogonal transformation O i such that a subset S of S d−1 containing B(R i a i , r) ∩ S d−1 is parameterized by
f i (x 1 , . . . , x d−1 ) = O i (x 1 , . . . , x d−1 , 1 − |x| 2 ) for x = (x 1 , . . . , x d−1 ) ∈ C d−1 (0, s), where f i (0) = R i a i and C d−1 (0, s) = [−s, s] d−1 .
Recall that the tangent space to S d−1 at the point p i := R i a i can be realized as the hyperplane P (p i ) := {x : x · p i = 0} and that it is spanned by the partial derivatives of f i at 0. By choosing s (and hence r) small enough we can assume that the "volume element" of the coordinate chart f i is bounded on C d−1 (0, s), with bound depending only on s. By translation invariance of Lebesgue measure we can assume that x = 0, and we get for y i = (y i 1 , . . . , y i d−1 ), . . . , y k ).
B⊆(S d−1 ) k 1 A2 (p 2 − p 1 ) · · · 1 A k (p k − p 1 ) dσ k (p 1 , . . . , p k ) ≤ C C k(d−1) (0,s) 1 A2 (f 2 (y 2 ) − f 1 (y 1 )) · · · 1 A k (f k (y k ) − f 1 (y 1 )) d(y 1 ,
(C) Linear algebra for tangent hyperplanes: Recall that the vectors p i = R i a i , 1 ≤ i ≤ k were chosen to be linearly independent. For each i ≥ 2 let
p i = p i + p ⊥ i
with p i ∈ P (p 1 ) and p ⊥ i ∈ span{p 1 }.
If there are α 1 , . . . α k not all zero with 0 = α 1 p 1 + . . . + α k p k , then
α 1 p 1 + . . . + α k p k = α 1 p ⊥ 1 + . . . + α k p ⊥ k = cp 1 ,
contradicting the linear independence of p 1 , . . . , p k . Hence the orthogonal projections p i are also linearly independent. Further, since p i and p 1 are not linearly dependent for i ≥ 2 the vector p i is not contained in P (p i ). Formally, if p i ∈ P (p i ), then 1 A2 (f 2 (y 2 ) − f 1 (y 1 )) · · · 1 A k (f k (y k ) − f 1 (y 1 )) d(y 1 , . . . , y k )
p i 2 = p i · p i = p i · p ⊥ i ≤ p i p ⊥ i ⇒ p i = p ⊥ i ⇒ p i = cp 1 ,≤ C V ×C (k−1)(d−1) (0,s)
1 A2 (f 2 (y 2 ) − f 1 (Ay 1 )) · · · 1 A k (f k (y k ) − f 1 (Ay 1 )) d(y 1 , . . . , y k ) ≤ C C k(d−1) (0,ks)
1 A2 (f 2 (y 2 ) − f 1 (Ay 1 )) · · · 1 A k (f k (y k ) − f 1 (Ay 1 )) d(y 1 , . . . , y k ), where k only depends on our choice of a 1 , . . . , a k .
(D) Applying the inverse function theorem: We consider the map (y 1 , . . . , y k ) → Φ(y 1 , . . . , y k ) = f 2 (y 2 ) − f 1 (Ay 1 ), f k (y k ) − f 1 (Ay 1 ) from R k(d−1) into R (k−1)d . Since each map f i is a submersion, i.e., each has a surjective derivative at each point, by the construction of this map the partial derivatives ∂ y i j Φ(0) for i ≥ 2 form a linearly independent set S of (k − 1)(d − 1) vectors. By the arguments in (C) above the partial derivatives ∂ y 1 j Φ(0) are also linearly independent with the derivatives in S and with one another. Thus we have a set of (k−1)d linearly independent partial derivatives at the origin, specifically the derivatives {∂ y 1 j Φ(0) : i ≥ 2 or (i = 1 and 1 ≤ j ≤ k − 1)} are linearly independent.
If we denote y 1 f = (y 1 1 , . . . , y 1 k−1 ) and y 1 l = (y 1 k , y 2 d−1 ), then the inverse function theorem tells us that we can choose s (and hence r) small enough that for each fixed y 1 l ∈ C d−k (0, ks) the map (y 1 f , y 2 , . . . , y k ) → Φ(y 1 , . . . , y k ) = f 2 (y 2 ) − f 1 (Ay 1 ), f k (y k ) − f 1 (Ay 1 ) is a diffeomorphism of C (k−1)d (0, ks) onto an open subset U (y 1 l ) of R (k−1)d . Note that by choosing s small enough we can assume that the Jacobian of this diffeomorphism is uniformly bounded for all choices of y 1 l ∈ C d−k (0, ks). Then we have C k(d−1) (0,ks)
1 A2 (f 2 (y 2 ) − f 1 (Ay 1 )) · · · 1 A k (f k (y k ) − f 1 (Ay 1 )) d(y 1 , . . . , y k ) ≤ C C d−k (0,ks) U(y 1 l ) 1 A2 (z 2 ) · · · 1 A k (z k ) d(z 2 , . . . , y k ) dy 1 l ≤ C|A 2 | · · · |A k | ≤ C|E 2 | · · · |E k |,
where we have used that A i = E m with m ≤ i and the assumption that |E 1 | ≤ |E 2 | ≤ . . . ≤ |E k |. Inserting this into (7.1) gives the required estimate.
contradicting the independence of p i and p 1 . Hence span(P (p i ) ∪ {p i }) = R d . Now since p 2 , . . . , p k are in the image of Df 1 (0), which is exactly P (p 1 ), there is an invertible (d−1)×(d−1) matrix A such that the first columns of Df 1 (0)A = D(f 1 • A)(0) are p 2 , . . . , p k . Then by a simple change of the first d − 1 variables we have for some bounded open set V , C k(d−1) (0,s)
using compactness to get a finite covering, summing the corresponding integrals and taking the kth root gives S k (1 E1. Combining estimates: Inserting the previous estimate back into (7.1). 1 E k ) L k ≤ C(|E 1 | · · · |E k |) 1/kCombining estimates: Inserting the previous estimate back into (7.1), using compactness to get a finite covering, summing the corresponding integrals and taking the kth root gives S k (1 E1 , . . . , 1 E k ) L k ≤ C(|E 1 | · · · |E k |) 1/k .
[1] Brian Cook, Neil Lyall, and Akos Magyar, Multilinear maximal operators associated to simplices, Journal of the London Mathematical Society, DOI 10.1112/jlms.12467.
[2] Loukas Grafakos and Nigel Kalton, Some remarks on multilinear maps and interpolation, Math. Ann. 319 (2001), no. 1, 151-180, DOI 10.1007/PL00004426. MR1812822
[3] Allan Greenleaf, Alex Iosevich, Ben Krause, and Allen Liu, Bilinear generalized Radon transforms in the plane (2017), available at arXiv:1704.00861.
[4] Allan Greenleaf and Alex Iosevich, On triangles determined by subsets of the Euclidean plane, the associated bilinear operators and applications to discrete geometry, Anal. PDE 5 (2012), no. 2, 397-409, DOI 10.2140/apde.2012.5.397. MR2970712
[5] Eunhee Jeong and Sanghyuk Lee, Maximal estimates for the bilinear spherical averages and the bilinear Bochner-Riesz operators, J. Funct. Anal. 279 (2020), no. 7, 108629, DOI 10.1016/j.jfa.2020.108629. MR4103874
[6] Michael T. Lacey, Sparse bounds for spherical maximal functions, J. Anal. Math. 139 (2019), no. 2, 613-635, DOI 10.1007/s11854-019-0070-2. MR4041115
[7] Daniel M. Oberlin, Multilinear convolutions defined by measures on spheres, Trans. Amer. Math. Soc. 310 (1988), no. 2, 821-835, DOI 10.2307/2000993. MR943305
[8] Eyvindur A. Palsson and Sean R. Sovine, The triangle averaging operator, J. Funct. Anal. 279 (2020), no. 8, 108671, DOI 10.1016/j.jfa.2020.108671. MR4109092
[9] Saurabh Shrivastava and Kalachand Shuin, L^p estimates for multilinear convolution operators defined with spherical measure (2020), available at arXiv:2006.03754.
[10] Betsy Stovall, L^p inequalities for certain generalized Radon transforms, dissertation (2009).
| []
|
[
"Shortest-support Multi-Spline Bases for Generalized Sampling *",
"Shortest-support Multi-Spline Bases for Generalized Sampling *"
]
| [
"Alexis Goujon ",
"Shayan Aziznejad ",
"Alireza Naderi \nUniversity of British Columbia\n\n",
"Michael Unser ",
"\n1É cole polytechnique fédérale de Lausanne\n\n"
]
| [
"University of British Columbia\n",
"1É cole polytechnique fédérale de Lausanne\n"
]
| []
| Generalized sampling consists in the recovery of a function f , from the samples of the responses of a collection of linear shift-invariant systems to the input f . The reconstructed function is typically a member of a finitely generated integer-shift-invariant space that can reproduce polynomials up to a given degree M . While this property allows for an approximation power of order (M + 1), it comes with a tradeoff on the length of the support of the basis functions. Specifically, we prove that the sum of the length of the support of the generators is at least (M +1). Following this result, we introduce the notion of shortest basis of degree M , which is motivated by our desire to minimize computational costs. We then demonstrate that any basis of shortest support generates a Riesz basis. Finally, we introduce a recursive algorithm to construct the shortest-support basis for any multispline space. It provides a generalization of both polynomial and Hermite B-splines. This framework paves the way for novel applications such as fast derivative sampling with arbitrarily high approximation power. | 10.1016/j.cam.2021.113610 | [
"https://arxiv.org/pdf/2012.08954v2.pdf"
]
| 234,835,363 | 2012.08954 | 5ed1c6be0e99a941f64bd0d3783c6c819acd8efc |
Shortest-support Multi-Spline Bases for Generalized Sampling *
June 18, 2021
Alexis Goujon
Shayan Aziznejad
Alireza Naderi
University of British Columbia
Michael Unser
1É cole polytechnique fédérale de Lausanne
Shortest-support Multi-Spline Bases for Generalized Sampling *
June 18, 2021
Generalized sampling consists in the recovery of a function f , from the samples of the responses of a collection of linear shift-invariant systems to the input f . The reconstructed function is typically a member of a finitely generated integer-shift-invariant space that can reproduce polynomials up to a given degree M . While this property allows for an approximation power of order (M + 1), it comes with a tradeoff on the length of the support of the basis functions. Specifically, we prove that the sum of the length of the support of the generators is at least (M +1). Following this result, we introduce the notion of shortest basis of degree M , which is motivated by our desire to minimize computational costs. We then demonstrate that any basis of shortest support generates a Riesz basis. Finally, we introduce a recursive algorithm to construct the shortest-support basis for any multispline space. It provides a generalization of both polynomial and Hermite B-splines. This framework paves the way for novel applications such as fast derivative sampling with arbitrarily high approximation power.
Introduction
Generalized Sampling in Shift-Invariant Spaces
Since the formulation of Nyquist-Shannon's celebrated sampling theorem [1], the reconstruction of a function from discrete measurements has been extended in many ways [2,3]. In particular, Papoulis proposed the framework of generalized sampling [4], where he showed that any bandlimited function f is uniquely determined by the sequences of discrete measurements (generalized samples) g n (kT ) = (h n * f )(kT ) = f, ψ n (· − kT ) , n = 1, ..., N, k ∈ Z,
where (g n (t)) n=1,...,N are the outcome of N linearly independent systems applied to f . The sampling is assumed to proceed at 1/N the Nyquist rate (i.e., T = N T Nyq = 2N π/ω max , where ω max is the maximum frequency of f ). The functions ψ n (t) = h n (−t), t ∈ R, are called the analysis functions. They are the time-reversed versions of the impulse responses. The sampling theorem was also generalized to many different function spaces such as integer-shift-invariant spaces [5,6], including spline spaces [7,8,9]. Following this extension and Papoulis' theory, Unser and Zerubia introduced a framework to perform generalized sampling without the bandlimited constraint [10,11] which includes important cases such as interlaced and derivative sampling in spline spaces. In this paper, we adopt the same framework and propose to reconstruct a function f from discrete samples g n (k), k = 1, ..., N in an integer-shift-invariant space generated by a finite collection of generators as in some recent works [12,13,14].
The structure of such reconstruction spaces has been thoroughly studied [15,16,17] and there exist theoretical results that lead to the critical choice of relevant generating functions [18]. As a minimal requirement to get a good approximation space, the generating functions should satisfy jointly the partition-of-unity condition [19]. In addition, there exists a tradeoff between the approximation power of the space and the size of the support of the generating functions [20].
Polynomial Splines
A polynomial spline is a piecewise polynomial function defined over the real line. Of special interest are the splines of degree n because they provide one free parameter per segment. They are defined by distinct knots and polynomial pieces of degree n that are connected smoothly so that the global function has continuous derivatives up to order (n − 1). The splines whose knots are uniformly spaced are called cardinal splines and they are relevant to many applications such as image processing [21]. In the 1950s, Isaac Schoenberg laid the foundation of cardinal splines [22,23] when he showed that the set S_n of cardinal splines of degree n could be generated by a single function [24], the B-spline of degree n. In this paper, we consider the causal B-spline and denote it by β^n_+. This simple building block is also the shortest nonzero spline of degree n. Interestingly, the B-splines can be constructed recursively with the relation
β^{n+1}_+ = β^n_+ ∗ β^0_+,  (2)
starting from β^0_+, which is the rectangular window over [0, 1):
β^0_+(x) = 1 if 0 ≤ x < 1, and 0 otherwise.  (3)
The convolution by β^0_+ can be decomposed into two successive operations: an integration (which transforms a spline of degree n into a spline of degree (n + 1)) followed by a finite difference (which gives back a compactly supported function). Indeed, (f ∗ β^0_+)(x) = Δ{∫_{−∞}^{x} f(t) dt}, where Δ{f} = (f(·) − f(· − 1)) is the finite difference of f. Along with their great reproducing properties and shortest support, B-splines allow an efficient and practical implementation, which is exploited in many fields [25,26,27,28].
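The recursion (2)-(3) translates directly into a short numerical sketch: sample β^0_+ on a fine grid and convolve repeatedly. The discretization below (grid step, degree-3 example) is our own illustration, not an exact symbolic construction.

```python
import numpy as np

def bspline_numeric(n, dx=1e-3):
    """Approximate the causal B-spline of degree n on a fine grid by
    repeated discrete convolution with beta^0_+, mirroring Eq. (2)-(3)."""
    x = np.arange(0.0, n + 1.0 + dx, dx)
    beta0 = ((x >= 0) & (x < 1)).astype(float)      # rectangular window, Eq. (3)
    beta = beta0.copy()
    for _ in range(n):
        # continuous convolution approximated by a Riemann sum
        beta = np.convolve(beta, beta0)[: x.size] * dx
    return x, beta

x, b3 = bspline_numeric(3)
print(b3.max())   # close to 2/3, the peak value of the cubic B-spline
```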
Multi-Splines
To perform generalized sampling, it is natural to look at multi-spline spaces since they offer additional degrees of freedom. A cardinal multi-spline space is defined as the sum of N ∈ N spline spaces: S n = S n1 + · · · + S n N , n = (n 1 , ..., n N ) and n 1 < · · · < n N ∈ N. From now on, any spline will be assumed to be a cardinal spline unless stated otherwise. It is worth noting that, in the case of consecutive spaces specified by n k = n 1 + (k − 1), the resulting space is exactly the space of piecewise polynomials of degree n N that are in C n1−1 (R), the space of functions with (n 1 − 1) continuous derivatives (see Proposition 2). Some multi-spline spaces have proved to be of great interest for derivative sampling, where the goal is to reconstruct a signal from the samples of the function and of its first-order derivative. We should mention the well-known bicubic Hermite splines (h 1 , h 2 ), first introduced by Schoenberg and Lipow in [29]. They constitute a basis of S 2 + S 3 with the shortest support and provide the direct interpolation formula
∀f ∈ S_2 + S_3, ∀x ∈ R : f(x) = ∑_{k∈Z} (f(k) h_1(x − k) + f′(k) h_2(x − k)),  (4)
where f′ = f^{(1)} is the derivative of f. The excellent approximation capabilities and minimal-support property of the Hermite splines [30] give a strong incentive to investigate more general multi-spline spaces. The bicubic Hermite splines are the backbone of many computer-graphics applications and are closely linked to Bézier curves [31,32,33,34,35]. Schoenberg and Lipow also found two fundamental functions to reconstruct any function in S_4 + S_5 from its samples and the samples of its first-order derivative. Nonetheless, those functions are not well-suited to practical applications since they are not compactly supported. Building on top of an impressive body of work from various communities, we propose a systematic study of shortest bases for any multi-spline space. In particular, the main goal is to generalize the concept of B-splines to any multi-spline space.
The paper is organized as follows: in Section 2, we formulate the problem in the framework of finitely generated shift-invariant spaces. We then state the properties that relevant generating functions should satisfy. In Section 3, we show that the conditions imposed can only be met if the sum of the support of the generating functions is large enough. In Section 4, we present a method to construct shortest-support bases for any multi-spline space. This has important implications in practice, which we illustrate in Section 5 where we give practical examples to implement generalized sampling with the new set of functions, including interpolation, derivative sampling, and a new way to envision Bézier curves.
Formulation of the Problem
Let φ = (φ 1 , φ 2 , . . . , φ N ) be a finite collection of functions in L 2 (R), Lebesgue's space of square-integrable functions. The integer-shift-invariant subspace of L 2 (R) generated by φ is denoted by S(φ) and is defined as
S(φ) = S(φ 1 ) + S(φ 2 ) + · · · + S(φ N ),(5)
where
S(φ n ) = Span ({φ n (· − k)} k∈Z ) ⊆ L 2 (R), n = 1, . . . , N.(6)
We shall not restrict ourselves to multi-spline spaces for now and rather consider finitely generated integer-shiftinvariant spaces. To formulate the problem, we recall three properties of φ that have been imposed in previous works for practical applications. Multi-spline spaces will then naturally stand out as practical and important reconstruction spaces (Sections III and IV).
Riesz Basis
Definition 1. The set of functions {φ_n(· − k) : k ∈ Z, n = 1, . . . , N} ⊂ L_2(R) is said to be a Riesz basis with bounds A, B ∈ R with 0 < A ≤ B < +∞ if, for any vector of square-summable sequences c = (c_1, ..., c_N) ∈ (ℓ_2(Z))^N, we have that
A ‖c‖_2 ≤ ‖∑_{k∈Z} c[k]^T φ(· − k)‖_{L_2(R)} ≤ B ‖c‖_2,  (7)
where
‖c‖_2 = (∑_{n=1}^{N} ‖c_n‖_2^2)^{1/2}, φ = (φ_1, φ_2, . . . , φ_N),
and where A and B are the tightest constants.
When this property is satisfied, we say that φ generates a Riesz basis. The Riesz-basis property guarantees that any f ∈ S(φ) has the unique and stable representation [36]
f(·) = ∑_{k∈Z} c[k]^T φ(· − k) = ∑_{k∈Z} ∑_{n=1}^{N} c_n[k] φ_n(· − k).  (8)
This property is well characterized in the Fourier domain via the Gramian matrix-valued function
Ĝ(ω) = ∑_{k∈Z} φ̂(ω + 2kπ) φ̂(ω + 2kπ)^H = ∑_{k∈Z} ⟨φ, φ^T(· − k)⟩ e^{−jωk},  (9)
where the inner product is defined as ⟨f, g⟩ = ∫_R f(t) g^*(t) dt, * is the complex-conjugate operator, and H is the conjugate-transpose operator. Equality (9) follows from Poisson's formula applied to the sampling at the integers of the matrix-valued autocorrelation function t ↦ ⟨φ, φ^T(· − t)⟩ = (φ ∗ φ^{H∨})(t) [37]. The Fourier equivalent of the Riesz-basis condition is [16]
0 < A^2 = ess inf_{ω∈[0,2π)} λ_min(ω) ≤ ess sup_{ω∈[0,2π)} λ_max(ω) = B^2 < +∞,  (10)
where λ_min(ω) and λ_max(ω) are the smallest and largest eigenvalues of Ĝ(ω).
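Condition (10) can be checked numerically from sampled generators via the Gramian (9). The helper below, its grid step, and the hat-function example are our own sketch; it is a discretized check, not a proof.

```python
import numpy as np

def riesz_bounds(phis, dx, n_omega=512, kmax=10):
    """Estimate the Riesz bounds (A^2, B^2) of Eq. (10) for a collection of
    compactly supported generators sampled on a common grid of step dx."""
    N = len(phis)
    shifts = range(-kmax, kmax + 1)
    # autocorrelation samples <phi_m, phi_n(. - k)> by Riemann sums
    a = np.zeros((N, N, len(shifts)))
    for m in range(N):
        for n in range(N):
            for i, k in enumerate(shifts):
                s = int(round(k / dx))
                shifted = np.roll(phis[n], s)
                if s > 0:
                    shifted[:s] = 0.0
                elif s < 0:
                    shifted[s:] = 0.0
                a[m, n, i] = np.sum(phis[m] * shifted) * dx
    lam_min, lam_max = np.inf, 0.0
    for w in np.linspace(0, 2 * np.pi, n_omega, endpoint=False):
        G = np.zeros((N, N), dtype=complex)
        for i, k in enumerate(shifts):
            G += a[:, :, i] * np.exp(-1j * w * k)
        ev = np.linalg.eigvalsh((G + G.conj().T) / 2)   # Gramian is Hermitian
        lam_min, lam_max = min(lam_min, ev[0]), max(lam_max, ev[-1])
    return lam_min, lam_max

dx = 1e-3
x = np.arange(0.0, 2.0 + dx, dx)
hat = np.maximum(0.0, 1.0 - np.abs(x - 1.0))            # linear B-spline beta^1
print(riesz_bounds([hat], dx))                          # approximately (1/3, 1)
```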
Reproducing Polynomials
Definition 2. The space S(φ) is said to reproduce polynomials of degree up to M if, for all m = 0, 1, ..., M, there exist vector sequences c_m (not necessarily in (ℓ_2(Z))^N) such that¹
∀x ∈ R : x^m = ∑_{k∈Z} c_m[k]^T φ(x − k).  (11)
Strang and Fix showed that the property of the reproduction of polynomials of degree up to M is directly linked to the approximation power of the reconstruction space [38]. More precisely, let
S h (φ) = {f (·/h) : f ∈ S(φ)}(12)
be the h-dilate of S(φ). The space S(φ) is said to have an approximation power of order M if any sufficiently smooth and decaying function can be approached by an element of S h (φ) with an error decaying as O(h M ). The so called "Strang-Fix conditions" give sufficient conditions to have a space with an approximation power of order M [30,39,40]. In particular, for compactly supported and integrable generating functions, it is sufficient to have the space S(φ) reproduce polynomials of degree up to (M − 1). A straightforward implication is that the spline space S n has an approximation power of order (n + 1) since (i) it can reproduce polynomials of degree up to n;
(ii) it can be generated by the compactly supported function β n + . The multi-spline space S n1 + · · · + S n N inherits the highest approximation power of its spline spaces. Its approximation power is (n N + 1), since S n N ⊂ S n1 + · · · + S n N .
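As a small numerical illustration of this approximation order, one can fit a smooth function in the h-dilates of the cubic-spline space by least squares and observe that the L2 error shrinks roughly like h^4. The test function, interval, and discretization below are our own choices.

```python
import numpy as np

def beta3(x):
    """Centered cubic B-spline (explicit piecewise formula)."""
    ax = np.abs(x)
    return np.where(ax < 1, 2/3 - ax**2 + ax**3/2,
                    np.where(ax < 2, (2 - ax)**3 / 6, 0.0))

def l2_error(h, f, a=2.0, b=8.0, dx=1/512):
    """L2 error of the (discretized) least-squares approximation of f in the
    h-dilate of the cubic-spline space, measured on [a, b]."""
    x = np.arange(0.0, 10.0, dx)
    ks = np.arange(-2, int(10 / h) + 3)                 # shifts covering [0, 10]
    A = np.stack([beta3(x / h - k) for k in ks], axis=1)
    c, *_ = np.linalg.lstsq(A, f(x), rcond=None)
    r = f(x) - A @ c
    mask = (x >= a) & (x <= b)
    return np.sqrt(np.sum(r[mask]**2) * dx)

for h in [0.5, 0.25, 0.125]:
    print(h, l2_error(h, np.sin))   # successive errors shrink by roughly 16 = 2^4
```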
Compact Support
The evaluation of f ∈ S(φ) at a given x ∈ R from its discrete representation c ∈ ( 2 (Z)) N requires a number of computations more or less proportional to the support size of φ. So, ideally, we want to minimize the support of φ while maintaining a good approximation power [20]. The support of a function f ∈ L 2 (R) is written as supp(f ) = {x ∈ R : f (x) = 0}. If it is a compact subset of R, then the support size is defined as |supp(f )| = R 1 supp(f ) (t)dt, where 1 supp(f ) is the indicator function of supp(f ). For a finite collection of compactly supported functions φ = (φ 1 , ..., φ N ), the natural extension for the support size is
|supp(φ)| = ∑_{n=1}^{N} |supp(φ_n)|.  (13)
In Section 3, we present theoretical results that clarify the relation between the desired properties.
Shortest Bases
For a single generator φ such that S(φ) reproduces polynomials of degree up to M, Schoenberg stated that |supp(φ)| ≥ M + 1 [22]. The result was proved in [41] for N = 2. We now extend the proof to any N ∈ N \ {0}.
Theorem 1 (Minimal support). If S(φ) = S(φ_1, φ_2, . . . , φ_N) reproduces polynomials of degree up to M, then |supp(φ)| ≥ M + 1. In addition, if there is equality, then ∑_{k∈Z} ∑_{n=1}^{N} 1_{supp(φ_n)}(x + k) = |supp(φ)| for almost every x ∈ R.
Proof. If φ is not compactly supported, then the inequality is clear. Now, we can assume that φ is compactly supported. This implies that, for any x ∈ R, the sum ∑_{k∈Z} c[k]^T φ(x − k) = ∑_{k∈Z} ∑_{n=1}^{N} c_n[k] φ_n(x − k)
has only a finite number of nonzero terms that are identified by the set
Λ(x) = {(n, k) ∈ {1, . . . , N } × Z : x ∈ supp(φ n (· − k))} ,(15)
and its cardinality
λ(x) = #(Λ(x)) = ∑_{k∈Z} ∑_{n=1}^{N} 1_{supp(φ_n)}(x + k) ∈ N.  (16)
Equality (16) follows from the fact that 1_{supp(φ_n)}(x + k) is 1 if and only if (n, k) ∈ Λ(x) and 0 otherwise. The function x ↦ λ(x) is 1-periodic and bounded because the supp(φ_n) are compact subsets of R. Its average over one period reads (note that the sums are in fact all finite)
λ̄ = ∫_0^1 λ(x) dx = ∑_{n=1}^{N} ∑_{k∈Z} ∫_0^1 1_{supp(φ_n)}(x + k) dx = ∑_{n=1}^{N} ∫_{−∞}^{∞} 1_{supp(φ_n)}(x) dx = |supp(φ)|,  (17)
where we applied Fubini's Theorem in (17). Because λ is bounded and takes values in N, it only takes a finite number of values. Consequently, there exists a set A ⊂ [0, 1] of nonzero measure such that λ is constant on A and no greater than its average, as in
∀x ∈ A : λ(x) = λ_A ≤ λ̄ = |supp(φ)|.  (19)
The function #(Λ) restricted to A is constant, but this does not imply that Λ is constant on A. Noting that A is bounded and that the φ n are compactly supported, the image of A under Λ, denoted by Λ(A), is a finite set. Therefore, there exists B ⊂ A ⊂ [0, 1] of nonzero measure such that Λ is constant on B. This means that the set S(φ) |B of functions of S(φ) restricted to B is spanned by λ A functions (φ n (· − k)) (n,k)∈Λ(B) . Moreover, due to the reproducing property, the polynomials of degree up to M restricted to B form a linear subspace of S(φ) |B whose dimension is (M + 1), because B is infinite. Then, we must have that λ A ≥ M + 1 and, since λ A ≤ |supp(φ)|, we deduce the announced bound |supp(φ)| ≥ M + 1.
If λ is not a.e. constant, then A can be chosen so that λ_A < λ̄ = |supp(φ)| and S(φ)|_B is spanned by fewer than |supp(φ)| functions. The reproduction property then implies that |supp(φ)| > M + 1. This means that the equality |supp(φ)| = M + 1 is possible only if λ is a.e. constant.
Following Theorem 1, we can introduce the central notion of shortest-support basis.
Definition 3. A collection of functions φ ∈ (L 2 (R)) N is said to be a shortest-support basis of degree M if S(φ)
reproduces polynomials of degree up to M with the shortest support, i.e. with |supp(φ)| = M + 1.
The qualifier of basis comes from Theorem 2.
Theorem 2 (Shortest support and Riesz basis). Any shortest basis generates a Riesz basis.
Before proving the theorem, we define the kth slice of any function f as
∀x ∈ R : S_k{f}(x) = f(x + k) if x ∈ [0, 1), and 0 otherwise,  (20)
and the set of nonzero slices of all the generating functions as
T(φ) = {S_k{φ_n} : S_k{φ_n} ≢ 0, k ∈ Z, n = 1, ..., N}.  (21)
The proof will also invoke Lemma 1.
Lemma 1. Let φ ∈ (L 2 (R)) N be compactly supported. If T (φ)
is a set of linearly independent functions, then φ generates a Riesz basis.
Proof. The generating functions can be expressed in terms of their slices as φ_n(x) = ∑_{k∈Z} S_k{φ_n}(x − k). The Riesz-basis property is best characterized in the Fourier domain with the Gramian matrix (note that, φ being compactly supported, all the sums are in fact finite), which leads to
(Ĝ(ω))_{mn} = ∑_{q∈Z} ⟨φ_m, φ_n(· − q)⟩ e^{−jωq}
 = ∑_{q∈Z} ∑_{k_1∈Z} ∑_{k_2∈Z} ⟨S_{k_1}{φ_m}, S_{k_2}{φ_n}(· − q − (k_2 − k_1))⟩ e^{−jωq}
 = ∑_{k_1∈Z} ∑_{k_2∈Z} ⟨S_{k_1}{φ_m}, S_{k_2}{φ_n}⟩ e^{jω(k_2−k_1)}    (if q ≠ (k_1 − k_2), the inner product vanishes)
 = ⟨∑_{k_1∈Z} S_{k_1}{φ_m} e^{−jωk_1}, ∑_{k_2∈Z} S_{k_2}{φ_n} e^{−jωk_2}⟩
 = ⟨φ̃_m(ω, ·), φ̃_n(ω, ·)⟩,  (22)
where φ̃_n(ω, ·) is the finite weighted sum of slices
φ̃_n(ω, x) = ∑_{k∈Z} S_k{φ_n}(x) e^{−jωk}.  (23)
If, now, T(φ) is a set of linearly independent functions, then, for any ω ∈ R, the functions (φ̃_n(ω, ·))_{n=1,...,N} are linearly independent because the sums are finite. This means that Ĝ(ω) is the Gramian matrix of a linearly independent family of functions, which is known to be equivalent to det Ĝ(ω) > 0. In addition, g : ω ↦ det(Ĝ(ω)) is a finite weighted sum of e^{jωk} since φ is compactly supported. It is therefore continuous and 2π-periodic. The image of [0, 2π] under g is therefore a closed interval such that
0 < ess inf_{ω∈[0,2π]} det(Ĝ(ω)) = min_{ω∈[0,2π]} det(Ĝ(ω)) ≤ ess sup_{ω∈[0,2π]} det(Ĝ(ω)) = max_{ω∈[0,2π]} det(Ĝ(ω)) < +∞.  (24)
Noting that det(Ĝ(ω)) is the product of the eigenvalues ofĜ(ω), Condition (10) is satisfied, which means that φ is a Riesz basis.
Note that the converse of Lemma 1 is not necessarily true. For a counterexample, consider the function in (25) made of two side-by-side rectangles of different height, so that
∀x ∈ R : φ(x) = 1 if x ∈ [0, 1), α if x ∈ [1, 2), and 0 otherwise.  (25)
In this case, with a single generator, the Gramian matrix is just a scalar and reads ĝ(ω) = (1 + α^2) + 2α cos ω, which verifies, for any ω ∈ R, that
(1 − |α|)^2 ≤ |ĝ(ω)| ≤ (1 + |α|)^2.  (26)
So, for |α| ≠ 1, φ is a Riesz basis with bounds A = (1 − |α|) and B = (1 + |α|). Yet, T(φ) is clearly not a set of linearly independent functions since the second slice is a scaled version of the first one. For a more practical counterexample, see [42, Proposition 2.2].
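A two-line numerical check of the bounds in (26); the value α = 0.5 is an arbitrary choice of ours.

```python
import numpy as np

alpha = 0.5
w = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
g = (1 + alpha**2) + 2 * alpha * np.cos(w)      # scalar Gramian of Eq. (26)
print(g.min(), (1 - abs(alpha))**2)              # both ~0.25
print(g.max(), (1 + abs(alpha))**2)              # both ~2.25
```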
Lemma 2. Let φ ∈ (L 2 (R)) N . If φ is a shortest-support basis, then T (φ) is a set of linearly independent functions.
Proof. It is equivalent to prove the contrapositive of the lemma, which states that if T (φ) is not a set of linearly independent functions, then φ is not a shortest-support basis. To that end, suppose that T (φ) is not a set of linearly independent functions. This means that one can find a slice, say S k0 {φ q0 }, that depends linearly on the others. Now, consider the integer-shift-invariant space generated by the set of functions T (φ)\{S k0 {φ q0 }}. Note that the new generating functions differ now both in size (support size of at most 1) and in number (possibly greater than N ). On one hand, the new integer-shift-invariant space is larger than the initial space and, in particular, is still able to reproduce polynomials of degree up to M . On the other hand, the sum of the support size of the generating functions is smaller than |supp(φ)| because a nonzero slice was removed. So, φ cannot be of minimal support.
We can now prove Theorem 2.
Proof of Theorem 2. Let φ ∈ (L 2 (R)) N be compactly supported. By contraposition, if it is not a Riesz basis, then T (φ) is not a set of linearly independent functions (Lemma 1). Then, by Lemma 2, φ cannot be of minimal support.
To conclude this section, we present two results for finitely generated integer-shift-invariant spaces in preparation to a characterization of multi-spline spaces (Theorem 4). The unit sample sequence is written δ[·] and is defined by
δ[k] = 1 if k = 0 and δ[k] = 0 otherwise, and its matrix version δ_{N×N} is defined by (δ_{N×N})_{pq}[·] = δ[·] if p = q and 0 otherwise.
Lemma 3. Let N, M ∈ N, C ∈ (R^Z)^{N×M}, and B ∈ (R^Z)^{M×N}. If the sequence of matrices B is compactly supported and C ∗ B = δ_{N×N}, then M ≥ N.
Proof. There exists s ∈ N such that supp(B) ⊂ {−s, ..., s} ⊂ Z. The behavior of C[k] when |k| → ∞ is not known, and it is easier to work with the truncated version C_m = 1_{{−s,...,ms}} × C, where m ∈ N is a large enough integer, m > 2N + 1. The sequence of matrices C_m ∗ B is compactly supported and satisfies supp(C_m ∗ B) ⊂ {−2s, ..., (m+1)s}. Following the properties of convolutions of compactly supported sequences, we have, for any k = 0, ..., (m−1)s, that C_m ∗ B[k] = C ∗ B[k] = δ_{N×N}[k]. Therefore, one can write that
C_m ∗ B = δ_{N×N} + ∑_{−2s ≤ k < 0} M_k δ[· − k] + ∑_{(m−1)s+1 ≤ k ≤ (m+1)s} M_k δ[· − k],  (27)
where M_k ∈ R^{N×N} are matrices that account for the fact that C_m is a truncated version of C. This then translates into the following z-transform matrix relation (note that all sequences are compactly supported, so the z-transforms are well defined)
Ĉ_m(z) B̂(z) = I_{N×N} + ∑_{−2s ≤ k < 0} z^{−k} M_k + ∑_{(m−1)s < k ≤ (m+1)s} z^{−k} M_k = z^{−2s} A(z),  (28)
where A(z) can be decomposed as
A(z) = z^{2s} I_{N×N} + P(z) + z^{(m+1)s+1} Q(z),  (29)
where P (z) and Q(z) are polynomial matrices of degree (2s − 1). The determinant of A(z) can be expressed in terms of the columns of I N ×N , P (z), and Q(z) (denoted respectively e k , p k (z), and q k (z)), so that
z → det A(z) = det(z 2s e 1 + p 1 (z) + z (m+1)s q 1 (z), . . . , z 2s e N + p N (z) + z (m+1)s q N (z)).(30)
Knowing that the determinant is n-linear with respect to the columns, z → det A(z) is a polynomial function of degree at most (m + 3)sN . We now want to prove that it cannot be identically zero. To that end, we expand the determinant with respect to the columns and find that there is a unique term of the form λz 2sN . It is obtained by picking for k = 1, . . . , N the column e k z 2s . The coefficient in front of z 2sN is therefore det(e 1 , . . . , e N ) = 1 = 0. Indeed, for other combinations of columns in the expansion, we would have that
• if at least one column of the form z (m+1)s q k (z) is chosen, then it results in a term of degree at least (m + 1)s > (2N + 2)s > 2sN ;
• else, at least one column of the form p k (z) is chosen. Since the degree of p k (z) is lower than 2s, the resulting term in the expansion has a degree lower than 2sN .
In the end, we proved that z → det A(z) cannot be identically zero. Therefore, there exists z 0 ∈ R so that rank(A(z 0 )) = N . It implies that
N = rank(Ĉ m (z 0 )B(z 0 )) ≤ min(rank(Ĉ m (z 0 )), rank(B(z 0 ))) ≤ min(M, N ) ≤ M .
Lemma 4. Let ψ ∈ (L 2 (R)) M and η ∈ (L 2 (R)) N be two collections of compactly supported functions that are able to reproduce each other (the reproducing sequences might not be in 2 (Z)). If η is a shortest-support basis, then M ≥ N .
Proof. By hypothesis, there exist vector sequences
c p ∈ (R Z) M such that η p = k∈Z c p [k] T ψ(· − k) = c T p * ψ, which reads in matrix form η = C * ψ, C ∈ (R Z ) N ×M .(31)
Similarly, one can write that
ψ = B * η, B ∈ (R Z ) M ×N .(32)
From Lemma 2, we know that the nonzero slices of η are linearly independent (shortest-support basis). This implies that, to generate the compactly supported function ψ, the sequence of matrices B must be compactly supported as well since the only way to generate the zero function on a segment for η is to set the active coefficient of B to 0. Now, one can mix the equations and find that
η = C * (B * η) = (C * B) * η.(33)
The associativity of the convolution operations is justified by the fact that both η and B are compactly supported, meaning that, for a given argument x, all sums are finite. Because the slices of η are linearly independent, η can reproduce itself in a unique way, which gives
C * B = δ N ×N ,(34)
We can now conclude that M ≥ N with Lemma 3.
Multi-Spline Shortest Bases
With a single generator, the unique shortest basis of degree n ∈ N (up to a scaling and a shift operation) is the B-spline of degree n, which is a generator of S n . For multiple generators, it is natural to consider spaces generated by a finite number of B-splines β n = β n1 + , ..., β n N + , where n = (n 1 , . . . , n N ) and n 1 < . . . < n N . In this way, the reproducing and approximation properties are inherited from the higher-degree spline β n N + . Yet, multi-spline spaces are not generated optimally by the classical B-splines. Proposition 1. Let N ∈ N\{0} and n = (n 1 , . . . , n N ) with n 1 < · · · < n N ∈ N. If N > 1, then β n = β n1 + , ..., β n N + is neither a shortest-support basis nor a Riesz basis.
Proof. • The space S(β^n) can reproduce polynomials of degree at most n_N due to the inclusion S(β^{n_N}_+) ⊂ S(β^n). Moreover, the sum of the supports of β^n is ∑_{m=1}^{N} (n_m + 1) > n_N + 1, which shows that the basis is not a shortest-support one.
• From the proof of Lemma 1, the Gramian matrix can be written
(Ĝ(ω))_{pq} = ⟨β̃^{n_p}(ω, ·), β̃^{n_q}(ω, ·)⟩,  (35)
where β̃^{n_p}(ω, ·) is the finite weighted sum of slices
β̃^{n_p}(ω, x) = ∑_{k∈Z} S_k{β^{n_p}}(x) e^{−jωk}.  (36)
It is known that β^{n_p}_+ satisfies the partition of unity, meaning that, for any x ∈ R, ∑_{k∈Z} β^{n_p}_+(x − k) = 1. In terms of slices, it means that β̃^{n_p}(0, x) = ∑_{k∈Z} S_k{β^{n_p}}(x) = 1_{[0,1)}(x). The functions (β̃^{n_p}(0, ·))_{p=1,...,N} are therefore not linearly independent (because they are equal) and det Ĝ(0) = 0. As stated in the proof of Lemma 1, ω ↦ det Ĝ(ω) is a continuous function (because the B-splines are compactly supported), meaning that
ess inf_{ω∈[0,2π]} det Ĝ(ω) = min_{ω∈[0,2π]} det Ĝ(ω) = 0.  (37)
Following (10), β n cannot be a Riesz basis.
For N > 1, only a few shortest bases are known, with the most prominent being the Hermite splines presented by Lipow and Schoenberg [29]. They are the solution of the direct interpolation problem
find η_p ∈ S_{n,N} : η_p^{(ν)}(k) = 1 if ν = p and k = 0, and 0 otherwise,  (38)
with k ∈ Z, ν, p = 0, ..., (N − 1), and S_{n,N} = S_n + · · · + S_{n+N−1}. The function η_p has all its derivatives set to zero at the integers, except for the pth derivative, which is one at zero. The multi-spline space must be chosen so that η_p is sufficiently differentiable, yielding the condition n ≥ N. When n = N, shortest-support functions were found (the Hermite splines; see the plots in [43], for instance) but, unfortunately, in a higher-order approximation space, i.e., for n > N, the functions are not compactly supported anymore. For instance, for derivative sampling (interpolate f and f′), the smallest order of approximation (N = 2) is given by the cubic Hermite-spline generators of S_2 + S_3.
Consecutive Multi-Spline Spaces
The derivatives up to order (n − 1) of a compactly-supported spline of degree n must vanish on the edges of the support. This constraint cannot be satisfied if the function is too short. In particular, the shortest nonzero function of S n has a support size of (n + 1) and, interestingly, it is precisely the B-spline of degree n. In the special case of a consecutive multi-spline space S n,N = S n + S n+1 + · · · + S n+N −1 , this result can be directly extended. To that end, we define the space
P^{m′}_m = {p ∈ C^{m′}(R) : p is a polynomial of degree m on each [k, k + 1), k ∈ Z}.  (39)
Note that the space P^{m′}_m can be viewed as a spline space with knots of multiplicity (m − m′ − 1) ([44, Section 5.11]). In our setting with simple knots, P^{m′}_m is rather regarded as a multi-spline space (Proposition 2).
Proposition 2. Let n, N > 0. Then S n,N = P n−1 n+N −1 . Proof. The definition of a spline of degree n implies that, for q = 0, ..., N − 1, we have that S n+q ⊂ P n−1 n+N −1 , from which we deduce that S n,N ⊂ P n−1 n+N −1 . The other inclusion is proven by induction over N , with the induction hypothesis
H N : ∀n ∈ N, P n−1 n+N −1 ⊂ S n,N .(40)
• For N = 1 and any n ∈ N \ {0}, the result is directly given by the definition of S n,1 = S n = P n−1 n .
• Suppose that H N holds for N ∈ N * . Let p ∈ P n−1 n+N . We have that p (n−1) ∈ P 0 N +1 and, consequently, p (n) is a piecewise polynomial function with finite jumps at the knots. There exists f 0 ∈ S 0 that has the same jumps on the knots as p (n) . Then, (p (n) − f 0 ) is continuous on the integers, which implies that (p (n) − f 0 ) ∈ P 0 N . The induction hypothesis guarantees that (p (n) − f 0 ) ∈ S 1,N and, therefore, that p (n) ∈ S 0,N +1 . After n integrations, we finally have that p ∈ S n,N +1 , which concludes the induction step and the proof.
For a given L ∈ N, the space of functions in P^{m′}_m that are supported in [0, L] is a vector space of known finite dimension [45]
dim({p ∈ P^{m′}_m : supp(p) ⊂ [0, L]}) = ((m − m′)L − (m′ + 1))_+,  (41)
where x_+ = max(0, x). Indeed, any p ∈ P^{m′}_m supported in [0, L] is uniquely defined by L pieces that are polynomials of degree m, so L × (m + 1) coefficients have to be set. The smoothness constraints imply that the pieces cannot be set independently. On the first interval [0, 1), the (m + 1) coefficients must be chosen so that p^{(0)}(0) = 0, ..., p^{(m′)}(0) = 0, which leaves (m − m′) degrees of freedom. For the next interval, (m + 1) new coefficients have to be set, but the values p^{(0)}(1), ..., p^{(m′)}(1) are already fixed, giving only (m − m′) new degrees of freedom. We see that each interval provides (m − m′) extra degrees of freedom. In the end, there remain (m − m′)L degrees of freedom. Now, to enforce that p ∈ P^{m′}_m, we must have that p^{(0)}(L) = 0, ..., p^{(m′)}(L) = 0. The total number of degrees of freedom gives the announced dimension ((m − m′)L − (m′ + 1))_+.
Corollary 1. Let n, N, L ∈ N. The set of functions of S_{n,N} that have their support in [0, L] is a vector space of dimension (LN − n)_+ = max(0, LN − n).
Corollary 2. Let the Euclidean division of n by N be written as n = pN + r. Then, the shortest-support nonzero functions of S_{n,N} have a support size of (p + 1). Moreover, the set {f ∈ S_{n,N} : supp(f) ⊂ [0, p + 1]} is a vector space of dimension (N − r).
Proof. The set of functions of S_{n,N} that have their support in [0, L] is a vector space of dimension (LN − n)_+ (Corollary 1). To find at least one non-vanishing function in the vector space, its dimension must be at least one, meaning that (LN − n) ≥ 1 ⇔ L ≥ (n + 1)/N = p + (r + 1)/N. Knowing that L ∈ N and r < N, we conclude that one must have L = (p + 1) to find a nonzero compactly supported function. In this case, the dimension reads ((p + 1)N − n) = (N + pN − n) = (N − r).
With a single generator, the shortest-support basis is provided by the shortest function. In a consecutive multispline space, one would ideally take (N − r) functions of size (p + 1) (the shortest) and complete with r functions of size (p + 2). This would result in N functions with a total support size of (N − r)(p + 1) + r(p + 2) = N p + r + N = n + N = n N + 1, which is the objective for a shortest-support basis. For nonconsecutive multi-spline spaces, similar results should exist, but in a more complicated form.
Existence and Construction of mB-Splines
We say that a finite collection φ of multi-spline functions is an mB-spline of degree n = (n 1 , . . . , n N ) with n 1 < · · · < n N ∈ N, if it is a shortest-support basis of the space S n . This is the natural extension of B-splines. Similar to the latter, mB-splines can be constructed recursively for any multi-spline space. Indeed, two basic transformations (the "increment step" and the "insertion step") allow one to convert a shortest-support basis of a given space into a shortest-support basis of a different space. To simplify the explanation, we say that the collection φ = (φ 1 , ..., φ N ) ∈ (L 2 (R)) N of compactly supported functions is standardized if, for n = 1, . . . , N , we have that
(i) ∫_R φ_n(t) dt ∈ {0, 1}, and (ii) inf{t ∈ R : φ_n(t) ≠ 0} ∈ [0, 1).
The second condition implies that the generating functions are causal, i.e., φ_n(t) = 0 for t < 0. Note that any compactly supported φ can be standardized without altering S(φ).
Increment Step
The B-splines β n+1 + can be constructed recursively by noting that
β^{n+1}_+(x) = Δ{∫_{−∞}^{x} β^n_+(t) dt},  (42)
where ∆ is the finite difference operator ∆{f }(x) = (f (x) − f (x − 1)). The integration increases the polynomial degree, along with the smoothness at the knots (Step 1), while ∆ ultimately returns a compactly supported function (Step 2). For multiple generating functions, a similar two-step recursive approach is proposed. The general process is mathematically detailed below, while an intuitive example is proposed in Figure 1. Suppose η = (η 1 , ..., η N ) ∈ (L 2 (R)) N is an mB-spline of S n1 + · · · + S n N . The goal is to find an mB-spline of S n1+1 + · · · + S n N +1 . It will be a generator with a support size of (n N + 2), able to reproduce the B-splines of degree n 1 + 1, ..., n N + 1.
Integration
The collection of functions η is able to reproduce the B-splines of degree n_1, . . . , n_N, that is, for any s ∈ {1, ..., N} there exists a vector sequence c^s = (c^s_1, ..., c^s_N) (not necessarily in (ℓ_2(Z))^N) so that
∀x ∈ R : β^{n_s}_+(x) = ∑_{k∈Z} c^s[k]^T η(x − k).  (43)
To justify the calculations to come, we assume that
c^s_1, ..., c^s_N are causal sequences, i.e., c^s_n[k] = 0 for any k < 0.  (A_n)
The assumption (A_n) is not overly restrictive because it will hold for the starting basis of our algorithm and then be preserved by the construction process. In the end, all the bases constructed will be able to reproduce the B-splines with causal sequences. Let H = (H_1, ..., H_N) be defined as
H(x) = ∫_{−∞}^{x} η(t) dt.  (44)
The integration of equation (43), followed by the application of the operator Δ, yields
β^{n_s+1}_+(x) = Δ{∑_{k∈Z} c^s[k]^T H(x − k)} = ∑_{k∈Z} c^s[k]^T Δ{H}(x − k) = ∑_{k∈Z} c^s[k]^T (H(x − k) − H(x − 1 − k)) = ∑_{k∈Z} (c^s[k]^T − c^s[k − 1]^T) H(x − k).  (45)
The assumption that c s 1 , ..., c s N are causal and the fact that H is also causal (because η is compactly supported and standardized) implies that, for any x ∈ R, the sums in (45) have a finite number of nonzero terms. This enables us to switch the order of the operations (sum, integral, and ∆). Note that the sequence (c s [k] T − c s [k − 1] T ) k∈Z is causal. In short, H can reproduce (β n1+1 + , ..., β n N +1 + ) with causal sequences, but it is obviously not a shortest-support basis because its support is infinite.
Finite Difference
The aim now is to find a basis with the same reproducing properties as H, but with minimal support. To that end, we denote by s_0 the index such that η_{s_0} is the shortest function in η that satisfies ∫_R η_{s_0} ≠ 0. It must exist; if not, the space S(η) would only contain zero-mean functions and could not reproduce the B-splines, which are not zero-mean. A shortest-support basis θ = (θ_1, ..., θ_N) is then given by
θ_s = H_s if s ≠ s_0 and ∫_R η_s(t) dt = 0;  θ_s = H_s − H_{s_0} if s ≠ s_0 and ∫_R η_s(t) dt ≠ 0;  θ_{s_0} = ΔH_{s_0}.  (46)
Because η is compactly supported and standardized, the choice of s_0 ensures that
|supp(θ_s)| = |supp(η_s)| for s ≠ s_0, and |supp(θ_{s_0})| = |supp(η_{s_0})| + 1.  (47)
In short, |supp(θ)| = 1 + |supp(η)| = n_N + 2. Noting that H_{s_0} = ∑_{k∈N} θ_{s_0}(· − k), it is clear that θ can reproduce H with causal coefficients. It also implies that θ can reproduce (β^{n_1+1}_+, ..., β^{n_N+1}_+) with causal coefficients (see (45)), which justifies the assumption (A_n). In conclusion, θ is a shortest-support basis of S_{n_1+1} + · · · + S_{n_N+1}.
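The increment step lends itself to a compact numerical sketch. The discretization below (grid step DX, tolerance thresholds, and the β^0 sanity check) is our own; it assumes the input generators are standardized and causal, as in the text, and that at least one of them has unit integral.

```python
import numpy as np

DX = 1e-3                        # grid step; a unit shift is SHIFT samples
SHIFT = int(round(1.0 / DX))

def increment_step(etas):
    """One increment step (Sec. 4.2): integrate each generator, Eq. (44),
    then restore compact support with the choices of Eq. (46)."""
    H = [np.cumsum(e) * DX for e in etas]                    # running integrals
    masses = [np.sum(e) * DX for e in etas]                  # integral of eta_s
    supports = [int(np.count_nonzero(np.abs(e) > 1e-12)) for e in etas]
    # s0: shortest generator with nonzero integral
    s0 = min((s for s in range(len(etas)) if abs(masses[s]) > 1e-6),
             key=lambda s: supports[s])
    thetas = []
    for s, Hs in enumerate(H):
        if s == s0:                                          # Delta H_{s0}
            thetas.append(Hs - np.concatenate([np.zeros(SHIFT), Hs[:-SHIFT]]))
        elif abs(masses[s]) > 1e-6:                          # H_s - H_{s0}
            thetas.append(Hs - H[s0])
        else:                                                # already compact
            thetas.append(Hs)
    return thetas

# sanity check: one increment step applied to beta^0 returns beta^1 (hat function)
x = np.arange(0.0, 5.0, DX)
beta0 = ((x >= 0) & (x < 1)).astype(float)
beta1 = increment_step([beta0])[0]
print(abs(beta1[int(1.0 / DX)] - 1.0) < 1e-2)                # peak ~1 at x = 1
```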
Insertion Step
The present step enables us to add a generator to a shortest-support basis. Suppose η = (η_1, ..., η_N) is a standardized shortest-support basis of S_{n_1} + · · · + S_{n_N} and let η′ = (δ, η_1, ..., η_N), where δ is the Dirac distribution. The increment step applied to η′ yields a shortest-support basis for S_0 + S_{n_1+1} + · · · + S_{n_N+1}. Indeed, the shortest function of η′ being δ, the new basis θ′ = (θ′_0, ..., θ′_N) is given by
θ′_n : x ↦ Δ{∫_{−∞}^{x} δ(t) dt} = β^0_+(x) if n = 0;  ∫_{−∞}^{x} η_n(t) dt if n > 0 and ∫_R η_n(t) dt = 0;  ∫_{−∞}^{x} (η_n(t) − δ(t)) dt if n > 0 and ∫_R η_n(t) dt ≠ 0.  (48)
Because η is compactly supported and standardized, we have that |supp(θ′_n)| = 1 for n = 0 and |supp(θ′_n)| = |supp(η_n)| otherwise, which means that |supp(θ′)| = |supp(η′)| + 1 = n_N + 2. The process also ensures that θ′ is a shortest-support basis of S_0 + S_{n_1+1} + · · · + S_{n_N+1}.
Theorem 3. Let n_1 < · · · < n_N ∈ N \ {0}. There exists an mB-spline η = (η_1, ..., η_N) ∈ (L_2(R))^N of S_{n_1} + · · · + S_{n_N} that can be constructed recursively with increment and insertion steps.
Proof. The increment and insertion steps are sufficient to construct an mB-spline for any multi-spline space. Indeed, take η_0 = (β^{n_N − n_{N−1} − 1}_+), a shortest-support basis for S_{n_N − n_{N−1} − 1}. The insertion step gives a shortest-support basis for S_0 + S_{n_N − n_{N−1}}. After (n_{N−1} − n_{N−2} − 1) increment steps and one insertion step, the process gives a shortest-support basis for S_0 + S_{n_{N−1} − n_{N−2}} + S_{n_N − n_{N−2}}. By iteration, a shortest-support basis for S_0 + S_{n_2 − n_1} + · · · + S_{n_N − n_1} is obtained. Applying n_1 increment steps, we finally obtain a shortest-support basis for S_{n_1} + · · · + S_{n_N}.
Examples of mB-splines will be provided in Section 5. Note that our algorithm does not always output functions with the most practical form. This is corrected by appropriate linear combinations and, possibly, translations that do not alter the reproducing properties and the support size. For instance, for the space S_2 + S_3, our construction needs a simple linear combination to obtain the well-known bicubic Hermite splines. We conclude this section with a result on the minimal number of generating functions required to generate multi-spline spaces.
Theorem 4. Let n_1 < · · · < n_N ∈ N \ {0}. The space S_n = S_{n_1} + · · · + S_{n_N} cannot be generated by fewer than N compactly supported generating functions.
Proof. From Theorem 3, there exists an mB-spline of S n composed of N functions, say, η = (η 1 , . . . , η N ) ∈ (S n ) N . Let ψ = (ψ 1 , ..., ψ M ) ∈ (S n ) M be a collection of compactly supported functions able to generate S n . It means that η and ψ can reproduce each other and, by Lemma 4, M ≥ N .
Note that N is a lower bound and the number of generating function of a shortest-support basis can exceed N . For instance, take η = (η 1 , η 2 ) with
η_1 : x ↦ β^0(2x) = 1_{[0,1/2)}(x),  (49)
η_2 : x ↦ β^0(2(x − 1/2)) = 1_{[1/2,1)}(x).  (50)
Since η 1 + η 2 = β 0 , η can reproduce S 0 . In addition, the fact that |supp(η)| = 1 means that it is a shortest-support basis of degree 0 and now it is composed of two generating functions. (Note that the space they generate is larger than S 0 ).
Applications
Generalized Sampling in Multi-Spline Spaces
We consider a multi-spline space S n along with the N -component mB-spline φ = (φ 1 , . . . , φ N ) and some corresponding analysis functions ψ = (ψ 1 , . . . , ψ N ). As we now show, the generalized-sampling formulation presented in [10] can be extended to multiple generators. Let H be a space considerably larger than S(φ). Consider f ∈ H, from which we know only some discrete measurements (g[n]) n∈Z written
g[n] = ⟨ψ(· − n), f⟩ = (⟨ψ_1(· − n), f⟩, ..., ⟨ψ_N(· − n), f⟩).
To construct an approximation f̃ ∈ S(φ) of f, a standard way is to enforce consistency [6,11], in the sense that f and f̃ must give the same measurements. This formulation generalizes the notion of interpolation. For instance, to interpolate the value of f and its derivative at the sampling locations, take ψ_1 = δ and ψ_2 = δ′. In such a case, consistency simply means that f and f̃ should have the same value and the same derivative at the grid points. In general, the consistency requirement translates into
⟨ψ(· − n), f⟩ = ⟨ψ(· − n), f̃⟩ = ∑_{k∈Z} ⟨ψ(· − n), φ^T(· − k)⟩ · c[k] = ∑_{k∈Z} ⟨ψ(· − (n − k)), φ^T⟩ · c[k] = (A_{ΦΨ} ∗ c)[n],  (51)
where (c[n])_{n∈Z} is the unique vector sequence representing f̃ = ∑_{k∈Z} c[k]^T φ(· − k) and A_{ΦΨ}[n] = ⟨ψ(· − n), φ^T(·)⟩ is the matrix-valued sequence of the measurements of the basis functions. To solve our problem, we rely on the theory of signals and systems, including the z-transform. Indeed, with this framework, efficient implementation techniques naturally stand out. When the matrix-valued filter A_{ΦΨ} is invertible (see [10, Proposition 1] for the invertibility condition), the vector c of sequences can be computed from the measurements by applying the matrix-valued inverse filter Q, as in
c[n] = (Q * g)[n].(52)
Its transfer function verifies, in the z-domain, Q̂(z) = Â_{ΦΨ}^{−1}(z). This matrix filter does not necessarily have a finite impulse response (FIR), but it can be decomposed as Q̂(z) = (1 / det Â_{ΦΨ}(z)) com(Â_{ΦΨ}(z))^T, where com(Â_{ΦΨ}) denotes the cofactor matrix of Â_{ΦΨ}. For compactly supported analysis functions, the comatrix com(Â(z)) is FIR because it is a Laurent polynomial in z, so it is straightforward to implement. On the contrary, 1 / det Â_{ΦΨ}(z) is often not FIR. Nonetheless, it can usually be implemented efficiently too, using the same techniques as in [28].
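Under periodic boundary conditions, the inverse filtering of Eq. (52) can also be prototyped by inverting Â_{ΦΨ}(e^{jω}) frequency by frequency; a practical implementation would rather use the FIR comatrix plus recursive filtering, as described above. The helper below and its smoke test (which uses, for illustration, the 2 × 2 filter of Eq. (54) from the next subsection) are our own sketch.

```python
import numpy as np

def solve_coefficients(A_hat, g):
    """Prototype of Eq. (52) under periodic boundary conditions: solve
    A_hat(e^{jw}) c_hat(w) = g_hat(w) frequency by frequency, then invert
    the DFT.  A_hat: callable z -> N x N matrix; g: (N, K) measurement array."""
    N, K = g.shape
    G = np.fft.fft(g, axis=1)
    C = np.zeros_like(G)
    for m in range(K):
        z = np.exp(2j * np.pi * m / K)
        C[:, m] = np.linalg.solve(A_hat(z), G[:, m])
    return np.real(np.fft.ifft(C, axis=1))

def A_quintic(z):
    """2 x 2 matrix filter of Eq. (54) (quintic derivative sampling)."""
    zi, zi2 = 1 / z, 1 / z**2
    return np.array([[(zi + zi2) / 2,     (zi - zi2) / 2],
                     [5 * (zi - zi2) / 4, 5 * (zi + zi2) / 8]])

# smoke test: measurements of the constant function f = 1 (so f' = 0)
g = np.vstack([np.ones(16), np.zeros(16)])
print(np.round(solve_coefficients(A_quintic, g), 3))  # constant coefficient rows
```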
Derivative Sampling with High-Degree Multi-Splines in S_{2p} + S_{2p+1}
The derivative sampling problem reads, for f ∈ H,
find f̃ ∈ S_n : f̃(k) = f(k) and f̃′(k) = f′(k), k ∈ Z.  (53)
The most relevant reconstruction spaces have the form S_n = S_{2p} + S_{2p+1}. The underlying reason is that the filter complexity is the same for the spaces S_{2p} + S_{2p+1} and S_{2p−1} + S_{2p}, so the higher degree is preferred (the filter has 2(p − 1) roots). Note that the same occurs when one performs classical interpolation with B-splines, where odd degrees are usually preferred. To the best of our knowledge, when p > 1, no solution based on shortest-support bases and recursive filtering has been proposed so far. Our construction of shortest bases results in the functions η_1 and η_2. They have a support size of (p + 1) and are plotted in Figure 2. Due to the symmetry properties of those functions, the entries of Â_{ΦΨ}(z) have poles that come in reciprocal pairs. Consequently, the inverse matrix filter can be implemented with efficient recursive techniques, as detailed in [27,28]. The case of quintic-degree derivative sampling is detailed now. The basis functions are specified in Table 1. The z-transform of the filter Â_{ΦΨ}(z) reads
Â_{ΦΨ}(z) = [ (z^{−1} + z^{−2})/2    (z^{−1} − z^{−2})/2 ;  5(z^{−1} − z^{−2})/4    5(z^{−1} + z^{−2})/8 ].  (54)
It follows that the transposed comatrix corresponds, in the sequence domain, to
com(Â(z))^T ↔ (1/2) [ (5/4)(δ[· − 1] + δ[· − 2])    −δ[· − 1] + δ[· − 2] ;  −(5/2)(δ[· − 1] − δ[· − 2])    δ[· − 1] + δ[· − 2] ],  (55)
and the determinant satisfies
z^{−1} / det Â(z) = (16/5) · (−z) / ((1 − z_0 z^{−1})(1 − z_0^{−1} z^{−1})) ↔ d[n],  (56)
where z_0 = (3 − 2√2). This means that the convolution of any sequence with d can be implemented recursively. Interestingly, it is the same inverse filter as in cubic-spline interpolation. The reader can therefore refer to [21] for a detailed explanation of the implementation. The expansion coefficients can be evaluated as
c_1 = d ∗ ((5/8) Δ_+{f} − (1/2) Δ{f′}),  c_2 = d ∗ (−(5/4) Δ{f} + (1/2) Δ_+{f′}),  (57)
where Δ_+{f}[k] = f[k] + f[k − 1]. Finally, the multi-spline that is consistent with the measurements is given by
f̃(x) = ∑_{k∈Z} c_1[k] η_1(x − k) + ∑_{k∈Z} c_2[k] η_2(x − k).  (58)
Derivative Sampling in S_2 + S_3 + S_4
Here, we consider the setting ψ = (δ, δ′, δ(· − 1/2)), which means that the value of the function to be reconstructed is sampled twice as often as its derivative. The specification of S_2 + S_3 + S_4 as the reconstruction space then provides an explicit interpolation formula, which involves the shortest-support basis η plotted in Figure 4. This formula reads
f̃(x) = ∑_{k∈Z} (f(k) η_1(x − k) + f′(k) η_2(x − k) + f(k + 1/2) η_3(x − k)).  (59)
More generally, we observed that the addition of N consecutive spline spaces to S 2 + S 3 (i.e., choosing S 2 + S 3 + · · · + S 3+N ) allows one to perform derivative sampling and interpolate the function N times between the integers with a direct interpolation formula.
Direct Derivative Sampling in S_2 + · · · + S_{2p+1}
The space S_2 + S_3 + S_4 + S_5 is also well suited for derivative sampling with ψ = (δ, δ′, δ(· − 1/2), δ′(· − 1/2)) because of the structure of its shortest-support generating functions η_1, η_2, η_3, and η_4 (Figure 5). Indeed, it yields the direct interpolation formula
f̃(x) = ∑_{k∈Z} (f(k + 1/2) η_1(x − k) + f(k) η_2(x − k + 1) + f′(k + 1/2) η_3(x − k) + f′(k) η_4(x − k + 1)).  (60)
The sampling step is 1/2, but the spline knots are still located at the integers. Note that the sampling step can be tuned at will by dilation of the generating functions. More generally, we conjecture that there exist basis functions with the interpolatory property for any space of the form S_2 + · · · + S_{2p+1} and the sampling step 1/p. This conjecture was verified for p = 1 (bicubic Hermite splines), p = 2 (Figure 5), and p ∈ {3, 4}.
Classical Interpolation
The classical interpolation problem reads, for f ∈ H,
find f̃ ∈ S_n : f̃(k) = f(k), k ∈ Z.
When the number N of generating functions is greater than 1, we have two equivalent options:
(i) to sample the function f with the sampling step 1/N ;
(ii) to dilate the generators by a factor of N , keeping a unit sampling step.
We present the results in accordance with Option (i).
Modified Lagrange Polynomials in S_1 + · · · + S_N
Classical interpolation is well solved by B-splines but, starting from degree 2, the filter is neither FIR nor causal. Exact operations such as local interpolation or interpolation with a finite delay are therefore not possible. Some workarounds exist [46]; we now present one that is based on modified Lagrange polynomials. Let l = (l_1, . . . , l_N) be a collection of N generating functions such that, for x ∈ [0, 1], l_q(x) = ∏_{p=0, p≠q}^{N} (Nx − p)/(q − p). In this way, when q = 1, . . . , (N − 1), l_q is zero at x = 0 and x = 1, so it can be set to zero outside [0, 1] and l_q ∈ S_1 + · · · + S_N. Noting that l_N(1) = 1, to make sure that l_N ∈ S_1 + · · · + S_N, we extend its support to [1, 2] and set, for all x ∈ [1, 2], l_N(x) = l_N(2 − x) (see Figure 6). These functions constitute a shortest-support basis of S_1 + · · · + S_N and give a direct interpolation formula. Interestingly, these basis functions are sometimes used for finite-element methods [47].
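A direct numerical sketch of these modified Lagrange generators and of their interpolation property; the function names, the cosine test function, and the choice N = 3 are ours.

```python
import numpy as np

def lagrange_basis(q, N, x):
    """Evaluate the modified Lagrange generator l_q at the points x.
    For q = 1..N-1 the support is [0, 1]; for q = N it is [0, 2], with the
    mirrored extension l_N(x) = l_N(2 - x) on [1, 2]."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)

    def poly(t):                       # product formula on [0, 1]
        val = np.ones_like(t)
        for p in range(N + 1):
            if p != q:
                val *= (N * t - p) / (q - p)
        return val

    inside = (x >= 0) & (x <= 1)
    out[inside] = poly(x[inside])
    if q == N:                         # mirrored extension on (1, 2]
        mirror = (x > 1) & (x <= 2)
        out[mirror] = poly(2 - x[mirror])
    return out

# interpolation check at the sample locations k + q/N
N = 3
f = np.cos
samples = np.arange(1, 5 * N + 1) / N          # points in (0, 5]
vals = np.zeros_like(samples)
for k in range(-1, 6):
    for q in range(1, N + 1):
        vals += f(k + q / N) * lagrange_basis(q, N, samples - k)
print(np.max(np.abs(vals - f(samples))))        # ~1e-14: exact up to rounding
```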
Bi-Spline Classical Interpolation in S 2p+1 + S 2p+2
A bi-spline is the sum of two splines of different degrees, and it can be used to perform classical interpolation. In particular, interpolation in the reconstruction space S_n = S_{2p+1} + S_{2p+2} leads to a filter with p pairs of reciprocal roots. In terms of filtering, it has therefore the same complexity as the interpolation inverse filter associated with the single space S_{2p+1}. Shortest-support basis functions for such spaces are plotted in Figure 7. We now detail how this interpolation is performed for S_3 + S_4, keeping in mind that the other cases are similar. The z-transform of the filter Â_{ΦΨ}(z) reads
Â_{ΦΨ}(z) = [ z^{−1}/2    (z^{−1} + z^{−2})/4 ;  5(z^{−1} + z^{−2})/32    (5(z^{−1} + z^{−3}) + 210 z^{−2})/320 ],  (62)
with z 0 = (4 − √ 15). The final steps are identical to the detailed case of derivative sampling (recursive filtering).
Bézier Curves and Computer Graphics in S_1 + S_2 + S_3 and S_1 + S_2
In this section, we use our multi-spline formulation to revisit some Bézier curves and, in particular, the cubic Bézier curves that are popular in computer graphics. Each portion of the curve is a cubic polynomial defined by four control points.
• Starting point and ending point of the portion.
• Two handles that control the tangent of the curve at each extremity of the portion.
Thus, the value of the function and its left and right derivatives are controlled on the knots. From a multi-spline perspective, any cubic Bézier curve lies in the space S 1 + S 2 + S 3 . With the well chosen generating functions η 1 , η 2 , and η 3 plotted in Figure 8, the interpolation formula is explicit and reads
f(x) = ∑_{k∈Z} f(k) η_1(x − k) + ∑_{k∈Z} f′(k^−) η_2(x − k) + ∑_{k∈Z} f′(k^+) η_3(x − k),  (66)
where f′(k^−) and f′(k^+) denote the left and right derivatives at k, respectively. Interestingly, η_2 and η_3 can be obtained from the bicubic Hermite splines by splitting the antisymmetric function into two functions (see Figure 2 (a)). This gives a simple interpretation to cubic Bézier curves, as illustrated in Figure 9. Similarly, quadratic Bézier curves are also multi-splines, this time associated to the space S_1 + S_2 (Figure 8).
Figure 8: Shortest-support bases for applications in classical computer graphics. (a) Shortest basis for S_1 + S_2: η_1 controls the value of the function on the knots while η_2 controls the left derivative on the knots. These functions reproduce any quadratic Bézier curve. (b) Shortest basis for S_1 + S_2 + S_3: η_1 controls the value of the function on the knots while η_2 and η_3 control the left and right derivatives, respectively. These functions can reproduce any cubic Bézier curve with the shortest support and give a simple interpretation of such curves.
Figure 9: Screenshot from the online demo. The shortest basis of the space S_1 + S_2 + S_3 allows one to control the value of the function (green dots) and the left/right derivatives (handles). It yields the same curve as standard vector-graphics editors relying on cubic Bézier curves. In this figure, the parametric curves are two-dimensional and the interpolation is performed component-wise.
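A minimal sketch of formula (66), assuming (as Figure 8 suggests but the text does not spell out) that η_1 is the symmetric bicubic Hermite function and that η_2, η_3 are the left and right halves of the antisymmetric one; the explicit formulas and the example control data are our own.

```python
import numpy as np

def eta1(x):                     # symmetric Hermite function: controls f(k)
    ax = np.abs(x)
    return np.where(ax < 1, (1 - ax)**2 * (1 + 2 * ax), 0.0)

def eta2(x):                     # left half of the antisymmetric function: f'(k-)
    return np.where((x >= -1) & (x <= 0), x * (1 - np.abs(x))**2, 0.0)

def eta3(x):                     # right half of the antisymmetric function: f'(k+)
    return np.where((x >= 0) & (x <= 1), x * (1 - np.abs(x))**2, 0.0)

def curve(values, dleft, dright, x):
    """Evaluate Eq. (66) from knot values f(k), left derivatives f'(k-),
    and right derivatives f'(k+), for k = 0, ..., K-1."""
    y = np.zeros_like(x, dtype=float)
    for k, (v, dl, dr) in enumerate(zip(values, dleft, dright)):
        y += v * eta1(x - k) + dl * eta2(x - k) + dr * eta3(x - k)
    return y

x = np.linspace(0, 3, 301)
y = curve([0, 1, 1, 0], [0, 2, -1, 0], [0, -1, 1, 0], x)   # broken-tangent curve
print(np.allclose(y[::100], [0, 1, 1, 0]))                  # knot values reproduced
```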
Nonconsecutive Bi-spline Spaces
Nonconsecutive multi-spline spaces are relevant to represent signals that have components of different regularity [48]. For instance, the space S 0 + S p , with p > 0, consists of smooth signals with sharp jumps. In Figure 10, we show shortest-support bases of S 0 + S p , for p ∈ {2, 3, 4}, that were obtained with our construction algorithm.
Conclusion
In this work, we have introduced the notion of shortest-support bases of degree M . They are the shortest-support collections of functions that generate a reconstruction space with an approximation power of order (M + 1). We proved that shortest-support bases necessarily generate Riesz bases, a minimal requirement for practical applications. With a single generator, the unique shortest-support basis of degree M is the well-known B-spline of degree M . We extended this notion to multiple generators and proposed a recursive method that yields shortest bases for any multi-spline space. These new sets of functions helped us transpose the efficient reconstruction techniques developed for B-splines, and perform generalized sampling. In particular, we have provided a method to perform fast derivative sampling with any approximation power. Finally, we presented a new way to approach some Bézier curves.
Figure 1: Increment step that yields a shortest-support basis of S_1 + S_4 starting from S_0 + S_3. (a) A shortest-support basis (η_1, η_2) for S_0 + S_3 (|supp(η)| = 4). (b) The integration of η_1 and η_2 results in two generators of S_1 + S_4, H_1 and H_2. (c) To get compactly supported functions with the same generating properties, we choose θ_1 = ΔH_1 and θ_2 = (H_1 − H_2). We found a shortest-support basis of S_1 + S_4 (|supp(θ)| = 5).
Figure 2: Shortest-support bases for derivative sampling, obtained with the shortest-basis algorithm and some linear combinations to get a symmetric and an antisymmetric function. (a) The well-known bicubic Hermite splines. (b)-(c)-(d) New bases for derivative sampling with high-degree splines. These functions are piecewise polynomials of degree 5, 7, 9 with continuity of the derivatives of order 3, 5, 7, respectively.
Online Interactive Tutorial. Some examples are implemented in an online interactive demo², with a screenshot provided in Figure 3. The user can control the discrete measurements of a function (value, derivative), choose a multi-spline reconstruction space, and watch the reconstructed function update live.
Figure 3: Derivative sampling with optimal bases. The solid curve lies in S_2 + S_3 (cubic piecewise polynomials with a continuous derivative) and the dashed curve lies in S_4 + S_5 (quintic piecewise polynomials with a continuous third derivative).
Figure 4: Shortest basis of S_2 + S_3 + S_4 associated to the analysis functions ψ = (δ, δ′, δ(· − 1/2)).
Figure 5: Shortest basis of S_2 + S_3 + S_4 + S_5 for direct derivative sampling.
Figure 6: Shortest-support basis for S_1 + · · · + S_N. The basis functions are continuous and able to reproduce any polynomial of degree up to N.
Figure 7: Shortest bi-spline bases for classical interpolation with a half-integer sampling step. (a) In S_1 + S_2, the functions presented give a direct interpolation formula. (b)-(c)-(d) The functions are piecewise polynomials of degree 4, 6, 8 with continuity of the derivatives of order 2, 4, 6, respectively. To perform interpolation, a filter with 2, 4, 6 roots, respectively, has to be inverted.
Figure 10: (a)-(b)-(c) Shortest-support bases for the spaces S_0 + S_2, S_0 + S_3, and S_0 + S_4. (d) An example of a hybrid bi-spline that lies in the space S_0 + S_4.
Table 1: Slices of shortest-support bases for derivative sampling. The slices are given as linear combinations of the shifted monomials x^n_k, with x^n_k = (x − k)^n if x ∈ [k, k + 1) and x^n_k = 0 otherwise.
for m = 0, we use in (11) the convention that x m = 1, including for x = 0.
https://bigsplinesepfl.github.io/
[]
[
"DOWNSIZING FROM THE POINT OF VIEW OF MERGING MODEL (preliminary discussion)",
"DOWNSIZING FROM THE POINT OF VIEW OF MERGING MODEL (preliminary discussion)"
]
[
"V M Kontorovich [email protected] \nInstitute of Radio Astronomy NASU\n\n\nChervonopraporna Str\n61002KharkovUkraine\n\nKharkov National V.N. Karazin University 4\nSvobody square61022KharkovUkraine\n"
]
[
"Institute of Radio Astronomy NASU\n",
"Chervonopraporna Str\n61002KharkovUkraine",
"Kharkov National V.N. Karazin University 4\nSvobody square61022KharkovUkraine"
]
[]
In four-particle scattering processes with transfer of mass, unlike mergers, in which mass can only increase, the mass of the most massive galaxies may decrease. An elementary model describing such a process is considered. In this way, it is proposed to explain the observed phenomenon of "downsizing", in which the growth of the characteristic mass of the heaviest galaxies over cosmological time is replaced by its reduction. PACS: 98.65.Fz; 98.80.Bp
null
[
"https://arxiv.org/pdf/1507.00192v1.pdf"
]
118,201,870
1507.00192
641f92117c9b0bb4755c8422839cae757b720a84
DOWNSIZING FROM THE POINT OF VIEW OF MERGING MODEL (preliminary discussion)
V M Kontorovich [email protected]
Institute of Radio Astronomy NASU
Chervonopraporna Str
61002KharkovUkraine
Kharkov National V.N. Karazin University 4
Svobody square61022KharkovUkraine
DOWNSIZING FROM THE POINT OF VIEW OF MERGING MODEL (preliminary discussion)
In four-particle scattering processes with transfer of mass, unlike mergers, in which mass can only increase, the mass of the most massive galaxies may decrease. An elementary model describing such a process is considered. In this way, it is proposed to explain the observed phenomenon of "downsizing", in which the growth of the characteristic mass of the heaviest galaxies over cosmological time is replaced by its reduction. PACS: 98.65.Fz; 98.80.Bp
Introduction
A fact that has long been discussed, and that appears quite unusual from the point of view of the merger paradigm, is that the maximal galaxy masses (the Schechter parameter M*), which grow with decreasing redshift at large distances, begin to decrease as we approach the present epoch (see Figure 1); this seems to contradict the merger model. We will show that it does not. Figure 1. Observational data of the Hubble Ultra Deep Field [1] relating to the Schechter parameter M* (in our consideration it is the relevant characteristic maximal mass).
In the model of galaxy mergers built on the Smoluchowski kinetic equation (KE), only (pairwise) merger processes are taken into account, that is, processes involving three "particles" (App. A, Figure 2). The resulting solutions (App. B) allow one to find the slope of the mass function f(M, t) in a wide range of redshifts [2,3] and satisfactorily explain the observational data of the Hubble Ultra Deep Field [1] (the evolution of the slopes up to the limiting redshifts). However, the "explosive" evolution that arises leads to an unlimited growth of the maximal mass as the time of the "explosion" is approached [2]. The "explosive" singularity of the solution manifests itself as an unbounded growth of the MF as we approach the time of the "explosion" t = t_cr. The asymptotics of the solution K(M, t) for the modified MF (mMF, App. B), obtained from f and a power of M (F ∝ M^u f), near the singularity (outside the physical region of power-law behavior) has the form (for u = 2)
K(M, t) ∝ [1/M − 1/M_max(t)]^(−1) ,   M_max(t) = 1/[c (t_cr − t)] .   (1)
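As a quick numerical illustration of this "explosive" behavior, the following sketch (Python, assuming the asymptotic law M_max(t) = 1/[c(t_cr − t)] reconstructed in Eq. (1), with placeholder values of c and t_cr that are not fitted to any data) evaluates the maximal mass as t approaches t_cr.

```python
import numpy as np

# Illustrative parameters only (not fitted to observations):
c = 1.0e-3      # growth constant in Eq. (1), arbitrary units
t_cr = 10.0     # "explosion" time, arbitrary time units

# Approach t_cr from below and watch M_max blow up.
times = t_cr - np.logspace(0, -4, 9)          # t = 9.0, ..., 9.9999
M_max = 1.0 / (c * (t_cr - times))            # reconstructed Eq. (1)

for t, m in zip(times, M_max):
    print(f"t = {t:9.5f}   M_max(t) = {m:12.4e}")
# The printed values grow without bound as t -> t_cr, i.e. the "explosive"
# growth of the maximal mass in the pure-merger (three-particle) model.
```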
This artefact, associated with the use of an instantaneous localized source in the Smoluchowski KE, can be avoided by an obvious physical regularization [2,3], the meaning of which is to take into account the finite rise time of the gravitational instability that separates galaxies from the general expansion of the Universe. Mathematically, this was done by smearing the delta-function source on the right-hand side of the KE, replacing it by a step-like function of small but finite width. The values of the MF then remain finite in the region of maximal masses as well. Still, the maximal mass itself in the regularized solutions also increases indefinitely as the moment of the explosion is approached [2][3][4]. As in other similar problems, where taking into account three-particle processes (in our case, mergers of galaxies) leads to explosive evolution, finite results are obtained when four-particle processes come into play in the vicinity of the singularity; in our case these describe scattering with transfer of mass. In such processes, unlike mergers (Figure 2), in which the mass can only increase, an essential role is played by scattering processes in which the mass of the most massive galaxies may decrease (Figures 3, 4). Below we consider a simple model scheme describing this disaggregation. In this way, it is proposed to explain the observed "downsizing" phenomenon (Figure 1), in which the growth of the heaviest characteristic mass over time is replaced by its decrease.
KE WITH SCATTERING
As in [2], we restrict ourselves to a differential approach, which describes the transfer of a small mass. But now the kinetic equation turns from a linear into a nonlinear (quasi-linear) one, whose simplest form corresponds to the appearance in the KE of a nonlinear term proportional to F ∂F/∂M, where the coefficient gives the probability of the "inelastic" scattering process. We initially restrict ourselves to a merging probability proportional to the square of the mass, ∝ M². In this simplest model it is then natural to choose the same mass dependence for the scattering probability as well, ∝ M².
There are physical reasons for this choice, which we do not discuss here. By introducing the variable z = 1/M we rewrite the quasilinear term M² F ∂F/∂M in the form F ∂F/∂z. Although the source in the KE is quite essential, the asymptotic expression (1) quoted above satisfies the homogeneous kinetic equation, to which we confine ourselves.
Our problem¹ thus reduces to the solution of the differential equation
∂F/∂x + g(F) ∂F/∂z = 0 ,   (2)
where g(F) is proportional to F, with a coefficient given by the ratio of the nonlinearity parameter to C; combined with the initial condition below, the method of characteristics turns (2) into the implicit relation F = F₀(z − g(F) x), which is cubic in F (Eq. (3)).
Here the first coefficient is the nonlinearity parameter, and C is the parameter inherited from the solution of the linear KE, namely the factor in the merger probability ∝ CM². ¹We use the notation of the reference book by Zaitsev and Polyanin [5], item 12.4.2.1, point 2 (p. 271). ²The latter is easily verified by direct differentiation with respect to time and to mass.
The asymptotics of the solution of the linear problem (1) is used as the initial condition for solving the KE with the nonlinear term: at t = t₀ the MF F(M, t), as follows from (3), satisfies the initial condition (1), namely
2 2 0 0 ( , ) 1 1 [ ] F M t M M ,(4)
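For illustration, the sketch below solves a quasilinear equation of the type (2) by the method of characteristics, using an assumed linear coefficient g(F) = a·F and a generic Gaussian initial profile; both choices are illustrative stand-ins and are not the specific nonlinearity parameter or the initial condition (4) of this note.

```python
import numpy as np
from scipy.optimize import brentq

a = 0.5                                   # assumed slope of g(F) = a*F (illustrative)
F0 = lambda z: np.exp(-z**2)              # illustrative smooth initial profile F(z, x=0)
g  = lambda F: a * F                      # quasilinear coefficient, proportional to F

def F_of(z, x):
    """Solve the implicit characteristic relation F = F0(z - g(F)*x) for F."""
    h = lambda F: F - F0(z - g(F) * x)
    return brentq(h, 0.0, 1.5)            # F is bounded by max(F0) = 1 here

x = 0.8                                   # evolution "time" (x = t - t0 in the text)
for z in np.linspace(-2.0, 2.0, 9):
    print(f"z = {z:+.2f}   F(z, x={x}) = {F_of(z, x):.4f}")
# Before characteristics cross, the implicit relation has a unique root and the
# profile is advected with an F-dependent speed; this is the mechanism that
# reshapes the mass function in the scattering-dominated regime.
```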
SOLUTION OF KE WITH SCATTERING
We are interested in a real solution of the cubic equation
1 1 ( ) C t t C t M M M F t t t .(5)
The mass is bounded from above by the value at which the curly bracket in (3) vanishes,
max 1 ( ) M M t C t .(6)
It can be seen that the maximal mass decreases with time, which is the required downsizing phenomenon. In the resulting solution all quantities are finite³. Of the explosive evolution there remains only a local (in mass) increase of the MF near the former singularity. It should be noted that this feature may serve as evidence of the explosive stage of evolution.
DISCUSSION
Thus, the complete solution is a decreasing power-law function, of Schechter type, which, however, begins to increase before the cutoff at large masses at times close to the time of the "explosion" [2,3]. ³With the exception of the infinity introduced by the initial condition [2]. We have been solving an extremely simplified model problem, but even in such a setting the downsizing phenomenon appears. In reality, the process of downsizing should be described by an integral kinetic equation, since as a result of scattering galaxies of comparable masses appear.
The author is grateful to Boris Komberg for discussions, which led the author to consider this problem, to Alexander Kats for participation in previous joint works on this theme, as well as to Dr. R. J. Bouwens and his co-authors for permission to reproduce the Figure from the paper [1].
APPENDIX A
Following are the schematic drawings explaining the merger and scattering processes considered above.
APPENDIX B
The equation (2) is a generalized Hopf equation and is very well studied. The solution of the Cauchy problem for this KE for the mass function F(M, t), with the quasilinear term having a coefficient g(F) [5], reduces to the cubic equation (3). Here x is the time t − t₀, where t₀ is the moment of matching to the explosive solution, playing the role of the initial condition of the Cauchy problem (corresponding to the asymptotics of our explosive solution of the linear KE); the moment t₀ was selected close to t_cr in order to be able to use the simple analytical form of the asymptotics (1). We are interested in F as a function of M and, in particular, in the behavior of the new, nonlinear "maximal mass", which is yet to be determined, and in its dependence on time. We restrict ourselves to demonstrating the asymptotic solution of the cubic equation (3).
Figure 2. The process of mergers via triple processes with increasing mass, leading to the Smoluchowski KE. At low mass transfer the KE becomes differential [2,3]. The processes shown in Figures 3 and 4, under conditions of low mass transfer, lead to the quasilinear KE (2) considered here, which describes the downsizing. M in all figures denotes the mass of the most massive galaxy.
Figure 3. The process of merging with the appearance of an unstable intermediate galaxy, which immediately disintegrates (effective scattering due to the triple process in second order). The mass of the heaviest galaxies can thereby be reduced, which leads to the downsizing.
Figure 4. The direct scattering by quaternary processes with downsizing of galaxies. In fact, the equation we use corresponds to a loss of mass during its transfer. In the absence of losses, a nonlinear equation of a somewhat different type arises. Let us note that accounting for losses in the merger process is not essential and does not lead to qualitative effects.
Consider solutions of the Smoluchowski KE in the differential form, supposing that the main contribution is due to mergers of low-mass galaxies with the massive ones, with the corresponding merging probability; we restrict ourselves to a localized source, which gives us the possibility to find the solution explicitly [2][3]. The solution for the MF has a power-law part, which is in good agreement with observational data. But near the maximal mass the MF has non-physical singularities. Regularization [2,3] leads to a Schechter-type MF that has no singularities, but the maximal mass tends to infinity as time approaches the explosion time t_cr.
APPENDIX C
The solution of the initial problem for equation (2) may be written in the parametric form [5], in terms of a parameter and of F₀, the initial value of the mMF; here we have made the change of variables from M to z shown in the main text and renamed t as x. Excluding the parameter from (C1) and (C2), we receive the cubic equation (3).
R. J. Bouwens, G. D. Illingworth, M. Franx, and H. Ford. UV luminosity functions at z ≈ 4, 5 and 6 from the HUDF and other deep HST ACS fields: evolution and star formation history // Astrophys. J. 670, 2007, p. 928-958.
A. V. Kats and V. M. Kontorovich. Merger Driven Explosive Evolution of Distant Galaxies (Minor Mergers) // Astrophysical Bulletin 68, 2013, No. 3, p. 273-284; astro-ph/1309.0957.
A. V. Kats, V. M. Kontorovich. Explosive evolution of galaxies at high redshifts due to minor mergers // Advances in Astronomy and Space Physics 3, 2013, p. 131-134.
A. V. Kats, V. M. Kontorovich. The use of gravitational lenses in the study of distant galaxy mergers // Radio Physics and Radio Astronomy 18, 2013, No. 3, p. 220-223.
V. F. Zaitsev, A. D. Polyanin. Differential equations with partial derivatives of the first order. Reference book. M.: FIZMATLIT, 2003, p. 271.
| []
|
[
"An explanation for the muon and electron g − 2 anomalies and dark matter",
"An explanation for the muon and electron g − 2 anomalies and dark matter"
]
| [
"Kai-Feng Chen \nDepartment of Physics\nNational Taiwan University\nR.O.C10617TaipeiTaiwan\n\nInstitute of Astronomy and Astrophysics Academia Sinica\n10617TaipeiTaiwan\n",
"Cheng-Wei Chiang \nDepartment of Physics\nNational Taiwan University\nR.O.C10617TaipeiTaiwan\n",
"Kei Yagyu \nDepartment of Physics\nOsaka University\n560-0043ToyonakaOsakaJapan\n"
]
| [
"Department of Physics\nNational Taiwan University\nR.O.C10617TaipeiTaiwan",
"Institute of Astronomy and Astrophysics Academia Sinica\n10617TaipeiTaiwan",
"Department of Physics\nNational Taiwan University\nR.O.C10617TaipeiTaiwan",
"Department of Physics\nOsaka University\n560-0043ToyonakaOsakaJapan"
]
| []
| We propose simple models with a flavor-dependent global U (1) and a discrete Z 2 symmetries to explain the anomalies in the measured anomalous magnetic dipole moments of muon and electron, (g − 2) µ,e , while simultaneously accommodating a dark matter candidate. These new symmetries are introduced not only to avoid the dangerous lepton flavor-violating decays of charged leptons, but also to ensure the stability of the dark matter. Our models can realize the opposite-sign contributions to the muon and electron g − 2 via one-loop diagrams involving new vector-like leptons. Under the vacuum stability and perturbative unitarity bounds as well as the constraints from the dark matter direct searches and related LHC data, we find suitable parameter space to simultaneously explain (g − 2) µ,e and the relic density. * | 10.1007/jhep09(2020)119 | [
"https://arxiv.org/pdf/2006.07929v1.pdf"
]
| 219,687,959 | 2006.07929 | e01bae9edb15bf1de7ca3f56d46b49ca005d9898 |
An explanation for the muon and electron g − 2 anomalies and dark matter
14 Jun 2020
Kai-Feng Chen
Department of Physics
National Taiwan University
R.O.C10617TaipeiTaiwan
Institute of Astronomy and Astrophysics Academia Sinica
10617TaipeiTaiwan
Cheng-Wei Chiang
Department of Physics
National Taiwan University
R.O.C10617TaipeiTaiwan
Kei Yagyu
Department of Physics
Osaka University
560-0043ToyonakaOsakaJapan
An explanation for the muon and electron g − 2 anomalies and dark matter
14 Jun 2020(Dated: June 16, 2020)1
We propose simple models with a flavor-dependent global U (1) and a discrete Z 2 symmetries to explain the anomalies in the measured anomalous magnetic dipole moments of muon and electron, (g − 2) µ,e , while simultaneously accommodating a dark matter candidate. These new symmetries are introduced not only to avoid the dangerous lepton flavor-violating decays of charged leptons, but also to ensure the stability of the dark matter. Our models can realize the opposite-sign contributions to the muon and electron g − 2 via one-loop diagrams involving new vector-like leptons. Under the vacuum stability and perturbative unitarity bounds as well as the constraints from the dark matter direct searches and related LHC data, we find suitable parameter space to simultaneously explain (g − 2) µ,e and the relic density. *
I. INTRODUCTION
The Standard Model (SM) for elementary particles has successfully explained a plethora of phenomena in various experiments. Despite its tremendous success, physics beyond the SM (BSM) is strongly called for to explain neutrino oscillations, dark matter (DM) and baryon asymmetry of the Universe that cannot be accommodated within the SM. The question is then how we can experimentally show the existence of such a new physics model. A discovery of new particles, of course, would provide a direct proof. However, no report of such discoveries has been given so far, though there is still a possibility for their detection in future collider experiments, such as the High-Luminosity LHC [1] and the Future Circular Colliders (FCCs) [2]. In addition to the direct searches, precision measurements of certain observables can also offer good opportunities to probe new physics (NP). Deviations in measured values of the observable from their SM predictions can be attributed to the effects of new particles.
Among various observables, the anomalous magnetic dipole moment of the muon, dubbed the muon g − 2, has long been thought to be a harbinger for NP [3] and attracted a lot of attention for almost two decades because of the discrepancy between its experimental value measured at Brookhaven National Laboratory (BNL) [4] and the SM expectation. According to recent studies about the hadronic vacuum polarization contributions [5][6][7][8] to the muon g − 2, the discrepancy is at about 3.3σ level [9], with the experimental value higher than the SM prediction. See also the recent review on the muon g − 2 [10]. On the other hand, the experimental value of the electron g − 2 has been updated in 2018 [11] from a precision determination of the fine-structure constant α em . Interestingly, this measurement also shows a possible disagreement between the data and theory, with the measured value lower than the SM prediction by about 2.4σ [11]. These tantalizing opposite deviations have invited many studies to explore suitable NP models [12][13][14][15][16][17][18][19][20].
In order to accommodate both g − 2 anomalies simultaneously, a characteristic flavordependent structure is called for. In this paper, we propose a new model with a set of new particles whose interactions are constrained by a flavor-dependent global U (1) symmetry and a Z 2 symmetry, and demonstrate its capabilities to simultaneously accommodate both anomalies and, at the same time, offer a DM candidate. These new symmetries do not only play an important role in explaining both anomalies, but also forbid dangerous flavor-violating decays of the charged leptons, such as µ → eγ. Furthermore, they also guarantee the stability of the DM candidate, which is the lightest neutral particle among the new particles. We find regions in the parameter space that can satisfy the relic density and the direct search constraint of the DM while successfully explaining both g − 2 anomalies. This paper is organized as follows. In Sec. II, we define our model and give the Yukawa interactions and the scalar potential that are compliant with the symmetries. In Sec. III, we discuss the new contributions to the muon and electron g − 2, and scan the parameter space for regions that can explain both anomalies. Sec. IV is devoted to the discussion on DM physics and the collider phenomenology. Our conclusion is summarized in Sec. V.
II. MODEL
In addition to the SM gauge symmetry SU (2) L ⊗ U (1) Y , our model has an additional global U (1) and an exact Z 2 symmetries. The particle content in the lepton and scalar sectors is given in Table I 1 . The lepton sector is comprised of new vector-like isospin singlets χ a (a = e, µ) in addition to the SM left-(right-) handed lepton doublets (singlets) L L ( R ) with = e, µ, τ . The scalar sector is also extended from the SM one by introducing additional scalar isospin doublet η D and singlet η S fields, with the SM Higgs doublet field denoted by Φ. All of the new fields (χ a and η D,S ) are assigned to be odd under the Z 2 symmetry. In Table I, the hypercharge Y D is chosen to be either 0 or 1 in order to include at least one neutral particle in the Z 2 -odd sector to be a DM candidate, provided it is the lightest among all the Z 2 -odd particles. For simplicity, we assume η S to be a real field for the scenario with Y D = 1.
The Z 2 -even scalar doublet field is parameterized as usual as
Φ = ( G⁺ , (h + v + iG⁰)/√2 )ᵀ ,   (1)
TABLE I. Particle content in the lepton and scalar sectors and the charge assignments under SU(2)_L ⊗ U(1)_Y ⊗ U(1) ⊗ Z₂, where U(1) is global. The U(1) charges depend on the lepton flavor with q_e ≠ q_µ. The parameter Y_D appearing in the hypercharges for Z₂-odd particles can be either 0 or 1.
            (L_L^e, L_L^µ, L_L^τ)   (e_R, µ_R, τ_R)   (χ_e, χ_µ)    Φ      η_D         η_S
  SU(2)_L            2                     1               1        2       2           1
  U(1)_Y           −1/2                   −1             −Y_D      1/2   Y_D − 1/2   Y_D − 1
  U(1)         (q_e, q_µ, 1)        (q_e, q_µ, 1)     (q_e, q_µ)    0       0           0
  Z₂                 +                     +               −        +       −           −
while the Z 2 -odd scalar doublet can be parameterized as
η_D = ( η⁺ , (η⁰_H + iη⁰_A)/√2 )ᵀ  for Y_D = 1 ,     η_D = ( (η⁰_H + iη⁰_A)/√2 , η⁻ )ᵀ  for Y_D = 0 ,   (2)
where G± and G⁰ are the Nambu-Goldstone bosons that are absorbed as the longitudinal components of the W± and Z bosons, respectively. The vacuum expectation value (VEV) v is fixed by v = (√2 G_F)^(−1/2), with G_F being the Fermi constant. The lepton Yukawa interactions and the mass term for χ_a are given by
L_Y = Σ_{i=e,µ,τ} y_SM^i (L̄_L^i ℓ_R^i) Φ + Σ_{a=e,µ} [ f_L^a (L̄_L^a χ_{R,a}) η_D + f_R^a (ℓ̄_R^a χ_{L,a}) η_S + M_{χ_a} (χ̄_{L,a} χ_{R,a}) ] + h.c.,   (3)
where ( e R , µ R , τ R ) = (e R , µ R , τ R ). Because of the U (1) symmetry, we can naturally realize the flavor-diagonal couplings f L and f R , so that contributions from the new particles to lepton flavor-violating processes such as µ → eγ can be avoided at all orders. It should be emphasized here that analogous to the GIM mechanism, this structure cannot be achieved in a model with only one vector-like lepton, where it is impossible to accommodate both muon and electron g − 2 while suppressing the µ → eγ decay to the level consistent with the current experimental bound. In general, the new Yukawa couplings f a L,R can be complex, but we assume them to be real for simplicity in the following discussions. The Lagrangian for the quark and gauge sectors are the same as in the SM.
The most general form of the scalar potential consistent with all the symmetries is given by
V = −µ_Φ² |Φ|² + µ_D² |η_D|² + µ_S² |η_S|² + (λ₁/2)|Φ|⁴ + (λ₂/2)|η_D|⁴ + λ₃|Φ|²|η_D|² + λ₄|Φ†η_D|² + [ (λ₅/2)(Φ·η_D)² + h.c. ] + (λ₆/2)|η_S|⁴ + λ₇|Φ|²|η_S|² + λ₈|η_D|²|η_S|² + [ κ(η_D†Φη_S) + h.c. ] ,   (4)
where
Φ · η D = Φ † η D for Y D = 1 , Φ T (iτ 2 )η D for Y D = 0 ,(5)
with τ 2 being the second Pauli matrix. The phases of λ 5 and κ parameters can be removed by a redefinition of the scalar fields without loss of generality. Therefore, CP symmetry is preserved in the scalar potential. We require µ 2 Φ , µ 2 D , µ 2 S > 0 in order to preserve the stability of the SM vacuum.
The squared mass of the Higgs boson h is given by m 2 h = v 2 λ 1 in both scenarios of Y D = 1 and Y D = 0. On the other hand, the mass formulas for the Z 2 -odd scalar bosons are different in the two scenarios. For the scenario with Y D = 1, the singlet field η S is neutral (η 0 S ≡ η S ), so that the η 0 H and η 0 S fields can mix with each other. By introducing a mixing angle θ, the mass eigenstates of these neutral scalar fields can be defined through
η⁰_H = c_θ η⁰_1 − s_θ η⁰_2 ,   η⁰_S = s_θ η⁰_1 + c_θ η⁰_2 ,   (6)
where s θ ≡ sin θ and c θ ≡ cos θ. The mixing angle can be expressed as
tan 2θ = 2(M²_H)₁₂ / [ (M²_H)₁₁ − (M²_H)₂₂ ] ,   (7)
where M 2 H is the mass matrix in the basis of (η 0 H , η 0 S ):
(M²_H)₁₁ = µ_D² + (v²/2)(λ₃ + λ₄ + λ₅) ,   (M²_H)₁₂ = (M²_H)₂₁ = vκ ,   (M²_H)₂₂ = 2µ_S² + v²λ₇ .   (8)
The squared masses of the scalar bosons are then given by
m²_{η±} = µ_D² + (v²/2)λ₃ ,   m²_{ηA} = µ_D² + (v²/2)(λ₃ + λ₄ − λ₅) ,   m²_{η1} = c_θ²(M²_H)₁₁ + s_θ²(M²_H)₂₂ + s_{2θ}(M²_H)₁₂ ,   m²_{η2} = s_θ²(M²_H)₁₁ + c_θ²(M²_H)₂₂ − s_{2θ}(M²_H)₁₂ .   (9)
From the above expressions, we can write the parameters in the scalar potential in terms of the physical parameters as follows:
µ_D² = m²_{η±} − (v²/2)λ₃ ,   µ_S² = (1/2)( m²_{η1} s_θ² + m²_{η2} c_θ² − v²λ₇ ) ,
λ₄ = (1/v²)( m²_{η1} c_θ² + m²_{η2} s_θ² + m²_{ηA} − 2m²_{η±} ) ,   λ₅ = (1/v²)( m²_{η1} c_θ² + m²_{η2} s_θ² − m²_{ηA} ) ,   κ = (1/v) s_θ c_θ ( m²_{η1} − m²_{η2} ) .   (10)
After fixing m h and v to their experimental values, the remaining ten independent parameters in the scalar potential are then chosen to be
m η ± , m η A , m η 1 , m η 2 , θ, λ 3 , λ 7 ,(11)
and the quartic couplings λ 2,6,8 for the Z 2 -odd scalar bosons.
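As a numerical cross-check of the relations (7)-(10) in the forms written above, the short script below takes an illustrative parameter point of Eq. (11), inverts it with Eq. (10), rebuilds the mass matrix of Eq. (8), and verifies that the input masses are recovered; all numerical values are arbitrary benchmarks, not results from this paper.

```python
import numpy as np

v = 246.0                                   # GeV, electroweak VEV
# Illustrative inputs of Eq. (11): masses in GeV, mixing angle, two quartics.
m_etapm, m_etaA, m_eta1, m_eta2 = 200.0, 200.0, 80.0, 380.0
theta, lam3, lam7 = np.pi / 4, 0.1, 0.05
c, s = np.cos(theta), np.sin(theta)

# Invert to potential parameters with Eq. (10).
mu2_D = m_etapm**2 - 0.5 * v**2 * lam3
mu2_S = 0.5 * (m_eta1**2 * s**2 + m_eta2**2 * c**2 - v**2 * lam7)
lam4  = (m_eta1**2 * c**2 + m_eta2**2 * s**2 + m_etaA**2 - 2 * m_etapm**2) / v**2
lam5  = (m_eta1**2 * c**2 + m_eta2**2 * s**2 - m_etaA**2) / v**2
kappa = s * c * (m_eta1**2 - m_eta2**2) / v

# Rebuild the neutral mass matrix of Eq. (8) and re-diagonalize it.
M2 = np.array([[mu2_D + 0.5 * v**2 * (lam3 + lam4 + lam5), v * kappa],
               [v * kappa, 2 * mu2_S + v**2 * lam7]])
eig = np.sort(np.linalg.eigvalsh(M2))
print("eigenvalue masses:", np.sqrt(eig))                 # ~ (80, 380)
print("input masses     :", sorted([m_eta1, m_eta2]))

# Direct check of Eq. (9) with the input mixing angle.
m1_sq = c**2 * M2[0, 0] + s**2 * M2[1, 1] + 2 * s * c * M2[0, 1]
m2_sq = s**2 * M2[0, 0] + c**2 * M2[1, 1] - 2 * s * c * M2[0, 1]
print("Eq. (9) masses   :", np.sqrt(m1_sq), np.sqrt(m2_sq))
```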
For the scenario with Y D = 0, the singlet field η S is singly-charged (η ± S ≡ η S ), so that the charged components of the inert doublet field η ± can mix with η ± S . Similar to the above scenario, the mass eigenstates are defined through
η ± η ± S = c θ −s θ s θ c θ η ± 1 η ± 2 ,(12)
with
tan 2θ = 2(M 2 ± ) 12 (M 2 ± ) 11 − (M 2 ± ) 22 .(13)
The mass matrix M 2 ± is expressed in the basis of (η ± , η ± S ) as
M 2 ± = µ 2 D + v 2 2 (λ 3 + λ 4 ) vκ √ 2 vκ √ 2 µ 2 S + v 2 2 λ 7 .(14)
The squared masses of the scalar fields are then given by
m 2 η ± 1 = c 2 θ (M 2 ± ) 11 + s 2 θ (M 2 ± ) 22 + s 2θ (M 2 ± ) 12 , m 2 η ± 2 = s 2 θ (M 2 ± ) 11 + c 2 θ (M 2 ± ) 22 − s 2θ (M 2 ± ) 12 , m 2 η A = µ 2 D + v 2 2 (λ 3 − λ 5 ),m 2 η H = µ 2 D + v 2 2 (λ 3 + λ 5 ).(15)
Some of the parameters in the potential can be rewritten in terms of the physical parameters as
µ 2 D = 1 2 (m 2 η A + m 2 η H − v 2 λ 3 ), µ 2 S = m 2 η ± 1 c 2 θ + m 2 η ± 2 s 2 θ − v 2 2 λ 7 , κ = √ 2 v s θ c θ (m 2 η ± 1 − m 2 η ± 2 )
,
λ 4 = − 1 v 2 (m 2 η A + m 2 η H − 2m 2 η ± 1 c 2 θ − 2m 2 η ± 2 s 2 θ ), λ 5 = 1 v 2 (m 2 η H − m 2 η A ).(16)
Therefore, the ten independent parameters in the scalar potential can be chosen as
m η ± 1 , m η ± 2 , m η A , m η H , θ, λ 3,7 ,(17)
and the quartic couplings λ 2,6,8 for the inert scalar fields.
The parameters in the scalar potential are subject to the constraints of perturbative unitarity and vacuum stability. In order for our models to be perturbative, we require all the quartic couplings λ i in the potential to satisfy
λ 2 i 4π < 1.(18)
To impose the tree-level unitarity constraints, we consider all possible 2 → 2 elastic scatterings for the bosonic states in the high energy limit, and obtain thirteen independent eigenvalues of the s-wave amplitude matrix, expressed in terms of the scalar quartic couplings. By demanding the magnitude of each eigenvalue to be smaller than 8π [22], we find the following conditions for the quartic couplings 2 ;
(1/2) | λ₁ + λ₂ + √( (λ₁ − λ₂)² + 4λ₄² ) | < 8π ,   (19)
(1/2) | λ₁ + λ₂ + √( (λ₁ − λ₂)² + 4λ₅² ) | < 8π ,   (20)
|λ₃ + 2λ₄ ± 3λ₅| < 8π ,   |λ₃ ± λ₅| < 8π ,   |λ₃ ± λ₄| < 8π ,   c₁|λ₇,₈| < 8π ,   (21)
|a₁,₂,₃| < 8π ,   (22)
where a 1,2,3 are the eigenvalues for the following 3 × 3 matrix
with rows ( 3λ₁ , 2λ₃ + λ₄ , c₂λ₇ ), ( 2λ₃ + λ₄ , 3λ₂ , c₂λ₈ ), ( c₂λ₇ , c₂λ₈ , c₃λ₆ ) ,   (23)
with the coefficients (c₁, c₂, c₃) = (2, 2, 6) for Y_D = 1 and (c₁, c₂, c₃) = (1, √2, 2) for Y_D = 0.
If we take λ 6,7,8 = 0, the above expressions are reduced to those in the two-Higgs doublet model (see, e.g., Ref. [24]).
To ensure the stability of the SM vacuum, besides requiring the quadratic terms µ 2 D and µ 2 S to be positive, we further require the potential to be bounded from below. The bounded-from-below conditions are given by [23]
λ i ∈ Ω 1 ∪ Ω 2 , i = 1, . . . , 8(24)
where
Ω₁ = { λ₁, λ₂, λ₆ > 0 ;  √(λ₁λ₆) + λ₇ > 0 ;  √(λ₂λ₆) + λ₈ > 0 ;  √(λ₁λ₂) + λ₃ + D > 0 ;  λ₇ + √(λ₁/λ₂) λ₈ ≥ 0 } ,   (25)
Ω₂ = { λ₁, λ₂, λ₆ > 0 ;  √(λ₂λ₆) ≥ λ₈ > −√(λ₂λ₆) ;  √(λ₁λ₆) > −λ₇ ≥ √(λ₁/λ₂) λ₈ ;  √( (λ₇² − λ₁λ₆)(λ₈² − λ₂λ₆) ) > λ₇λ₈ − (D + λ₃)λ₆ } ,   (26)
in which D = max{0, λ₄ − λ₅}.
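In a parameter scan, the conditions (18)-(26) can be applied as a simple boolean filter, as in the sketch below; the inequalities are transcribed from the expressions given above (and share their reconstruction), and the sample couplings are arbitrary.

```python
import numpy as np

def perturbative(lams):
    return all(l**2 / (4 * np.pi) < 1 for l in lams)                     # Eq. (18)

def unitary(l1, l2, l3, l4, l5, l6, l7, l8, YD=1):
    c1, c2, c3 = (2, 2, 6) if YD == 1 else (1, np.sqrt(2), 2)
    conds  = [0.5*abs(l1+l2+np.sqrt((l1-l2)**2+4*l4**2)) < 8*np.pi,      # Eq. (19)
              0.5*abs(l1+l2+np.sqrt((l1-l2)**2+4*l5**2)) < 8*np.pi]      # Eq. (20)
    conds += [abs(l3+2*l4+s*3*l5) < 8*np.pi for s in (+1, -1)]           # Eq. (21)
    conds += [abs(l3+s*l5) < 8*np.pi for s in (+1, -1)]
    conds += [abs(l3+s*l4) < 8*np.pi for s in (+1, -1)]
    conds += [c1*abs(l7) < 8*np.pi, c1*abs(l8) < 8*np.pi]
    A = np.array([[3*l1, 2*l3+l4, c2*l7],
                  [2*l3+l4, 3*l2, c2*l8],
                  [c2*l7, c2*l8, c3*l6]])                                # Eq. (23)
    conds += [abs(a) < 8*np.pi for a in np.linalg.eigvalsh(A)]           # Eq. (22)
    return all(conds)

def bounded_below(l1, l2, l3, l4, l5, l6, l7, l8):
    D = max(0.0, l4 - l5)
    om1 = (l1 > 0 and l2 > 0 and l6 > 0 and np.sqrt(l1*l6)+l7 > 0
           and np.sqrt(l2*l6)+l8 > 0 and np.sqrt(l1*l2)+l3+D > 0
           and l7 + np.sqrt(l1/l2)*l8 >= 0)                              # Eq. (25)
    om2 = (l1 > 0 and l2 > 0 and l6 > 0
           and np.sqrt(l2*l6) >= l8 > -np.sqrt(l2*l6)
           and np.sqrt(l1*l6) > -l7 >= np.sqrt(l1/l2)*l8
           and np.sqrt((l7**2-l1*l6)*(l8**2-l2*l6)) > l7*l8-(D+l3)*l6)   # Eq. (26)
    return om1 or om2

lams = [0.26, 0.5, 0.1, 1.2, 0.8, 0.4, 0.05, 0.1]   # arbitrary sample point
print("perturbative :", perturbative(lams))
print("unitary      :", unitary(*lams))
print("bounded below:", bounded_below(*lams))
```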
For the convenience of discussions, we define the scalar trilinear coupling λ φ 1 φ 2 φ 3 to be the coefficient of the φ 1 φ 2 φ 3 term in the Lagrangian, where φ i are the physical scalar bosons in our model.
Before closing this section, we briefly comment on neutrino masses in our model. Under the charge assignments given in Table I, the structure of the dimension-5 operator is strongly
constrained: only L τ c L Φ(Φ c ) † L τ L is allowed.
In order to obtain nonzero values for all the elements of the 3 × 3 neutrino mass matrix for the observed mixing pattern, two additional Higgs doublet fields, denoted by Φ e and Φ µ , are required. Taking the U (1) charge for Φ e and Φ µ to be −q e and −q µ , respectively, we can write down all the dimension-5 effective Lagrangian as
L eff = i,j=e,µ,τ c ij Λ L ic L Φ i (Φ c i ) † L j L ,(27)
where Φ τ = Φ, and c ij and Λ are respectively dimensionless couplings and the cutoff scale.
Note that if we consider the case with one of the three Higgs doublets being absent, the neutrino mass matrix has the texture with three zeros; that is, one diagonal and two offdiagonal elements including their transposed elements are zero. It has been known that such textures cannot accommodate the current neutrino oscillation data [25]. Hence, at least three Higgs doublets are required. In the following discussions, we consider the model defined with just the Higgs doublet in Table I by assuming the Φ e and Φ µ fields to be completely decoupled.
III. MUON/ELECTRON MAGNETIC DIPOLE MOMENTS
The anomalous magnetic dipole moment of lepton is usually denoted by a ≡ (g −2) /2.
Currently, the differences between the experimental value a exp and the SM prediction a SM for = µ, e are given by
Δa_µ ≡ a_µ^exp − a_µ^SM = 261(79) × 10⁻¹¹ ,   (28)
Δa_e ≡ a_e^exp − a_e^SM = −88(36) × 10⁻¹⁴ ,   (29)
presenting about 3.3σ [9] and 2.4σ [11] deviations, respectively. In our model, the new contribution to a , denoted by ∆a NP , mainly comes from the one-loop diagrams shown in Fig. 1, with Z 2 -odd particles running in the loop. These contributions are calculated to be
Δa_ℓ^NP = −(1/16π²) Σ_{k=1,2} [ (m_ℓ²/M_χ²) ( |g_L^{ℓ,k}|² + |g_R^{ℓ,k}|² ) F₂( m²_{η_k}/M_χ² ) + 2 (m_ℓ/M_χ) Re( g_L^{ℓ,k} g_R^{ℓ,k *} ) F₁( m²_{η_k}/M_χ² ) ]   (for Y_D = 1) ,   (30)
Δa_ℓ^NP = −(1/16π²) Σ_{k=1,2} [ (m_ℓ²/m²_{η±_k}) ( |g_L^{ℓ,k}|² + |g_R^{ℓ,k}|² ) F₂( M_χ²/m²_{η±_k} ) + 2 (M_χ m_ℓ/m²_{η±_k}) Re( g_L^{ℓ,k} g_R^{ℓ,k *} ) F₃( M_χ²/m²_{η±_k} ) ]   (for Y_D = 0) ,   (31)
where g ,k L,R denote the Yukawa couplings for theχ P L,R η k (χ P L,R η ± k ) vertices in the model with Y D = 1 (0). More explicitly,
g_L^{ℓ,1} = (f_L/√2) c_θ ,   g_L^{ℓ,2} = −(f_L/√2) s_θ ,   g_R^{ℓ,1} = f_R s_θ ,   g_R^{ℓ,2} = f_R c_θ .   (32)
The loop functions are defined as follows:
F₁(x) = [1 − 4x + 3x² − 2x² ln x] / [2(1 − x)³] ,   F₂(x) = [1 − 6x + 3x² + 2x³ − 6x² ln x] / [6(1 − x)⁴] ,   F₃(x) = [1 − x² + 2x ln x] / [2(1 − x)³] ,   (33)
where at any given x we have F₁(x) ≥ F₃(x) > F₂(x).
FIG. 2. Regions in the plane of f ≡ |f_L| = |f_R| and M_χ that can explain the corresponding (g − 2)_ℓ for the scenario of Y_D = 1 at the 1σ (darker color) and 2σ (lighter color) levels. In both panels m_η1 = 80 GeV and θ = π/4; the left (right) panel takes Δm_η = 100 (300) GeV. The electron and muon regions are shown separately.
In both Eqs. (30) and (31), the coefficient of Re(g ,k L g ,k * R ) can be much larger than that of |g ,k L | 2 + |g ,k R | 2 by a factor of M χ /m , and becomes the dominant factor for ∆a NP . We note that for a fixed value of M χ and the Yukawa couplings, a larger magnitude of the dominant term is obtained for a smaller mass of the scalar boson running in the loop. In addition, the contribution to the dominant term from the lighter scalar boson (η 0 1 or η ± 1 ) is opposite in sign to that from the heavier one (η 0 2 or η ± 2 ) due to the orthogonal rotation of the scalar fields, as seen in Eq. (32). Therefore, the sign of ∆a NP is determined by Re(g ,1 L g ,1 * R ). We thus take Re(g µ,1 L g µ,1 * R ) < 0 and Re(g e,1
L g e,1 * R ) > 0 in order to obtain ∆a NP µ > 0 and ∆a NP e < 0, as required by data.
This in turn can be realized by taking f µ L > 0, f µ R < 0, f e L,R > 0, and the mixing angle θ to be in the first quadrant. Note here that with a degenerate mass for η 1 and η 2 , ∆a NP would vanish due to the cancellation between the contributions of the two scalar bosons.
Therefore, a non-zero mass splitting between η 1 and η 2 is required. For simplicity, we take |f L | = |f R |(≡ f ) in the following analyses.
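The size and sign of the new contribution can be explored directly from Eqs. (30), (32) and (33); the snippet below does this for the Y_D = 1 case with illustrative masses and couplings chosen only to exhibit the opposite signs for the muon and the electron, not the fitted values behind the figures.

```python
import numpy as np

def F1(x):  # loop functions of Eq. (33)
    return (1 - 4*x + 3*x**2 - 2*x**2*np.log(x)) / (2*(1 - x)**3)

def F2(x):
    return (1 - 6*x + 3*x**2 + 2*x**3 - 6*x**2*np.log(x)) / (6*(1 - x)**4)

def delta_a_YD1(m_l, M_chi, m_eta1, m_eta2, fL, fR, theta):
    """One-loop shift of Eq. (30), summed over the two neutral scalars."""
    c, s = np.cos(theta), np.sin(theta)
    gL = [fL/np.sqrt(2)*c, -fL/np.sqrt(2)*s]        # couplings of Eq. (32)
    gR = [fR*s, fR*c]
    total = 0.0
    for k, m_eta in enumerate((m_eta1, m_eta2)):
        x = (m_eta / M_chi)**2
        total += (m_l**2/M_chi**2)*(gL[k]**2 + gR[k]**2)*F2(x) \
                 + 2*(m_l/M_chi)*gL[k]*gR[k]*F1(x)
    return -total / (16*np.pi**2)

# Illustrative inputs (masses in GeV); signs of fL, fR as discussed in the text.
print("muon    :", delta_a_YD1(0.1057, 300., 80., 380., fL=+1.0,  fR=-1.0,  theta=np.pi/4))
print("electron:", delta_a_YD1(0.000511, 300., 80., 380., fL=+0.25, fR=+0.25, theta=np.pi/4))
```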
FIG. 3. As in Fig. 2, but in the scenario of Y_D = 0. The mass of the lighter charged scalar η±₁ is set to be 200 GeV, with θ = π/4 and Δm_η± = 100 (300) GeV in the left (right) panel.
In Fig. 2, we show the regions in the plane of f and the mass M_χ that can explain the corresponding (g − 2)_ℓ anomalies in the scenario with Y_D = 1. The left and right panels show the allowed regions for a mass difference Δm_η ≡ m_η2 − m_η1 of 100 GeV and 300 GeV, respectively. In this scenario, the lighter scalar η⁰₁ can be the DM candidate and its mass m_η1 is fixed to be 80 GeV. In the next section, we will see that this choice of the DM mass is
compatible with both the observed relic density and the direct search experiments. It is clear that a smaller value of ∆m η results in a larger cancellation between the ∆a NP contributions from the two scalar bosons, thus pushing the required Yukawa couplings higher for the same M χ . Also, for a fixed M χ , the required value of f e is smaller than f µ by roughly a factor of 4. This can be understood in such a way that from Eq. (30) the ratio ∆a NP µ /∆a NP e is roughly given by m µ /m e × |f µ /f e | 2 200 × |f µ /f e | 2 if we take M χµ = M χe . Therefore, with the required ratio ∆a µ /∆a e by data to be about 3000, the Yukawa coupling for the muon needed to explain the data should indeed be about 4 times larger than that for the electron.
In Fig. 3, we show the results for Y D = 0. In this scenario, the lighter charged scalar boson η ± 1 would not be a DM candidate and its mass m η ± 1 would not be strongly constrained by the relic density and the direct search experiments. However, O(1) TeV of m η ± 1 requires a large Yukawa coupling f µ to explain the muon g − 2 anomaly, which leads to too small a relic density to explain the observed density of DM as we will see in the next section. We thus take m η ± 1 = 200 GeV as an successful example. In Fig. 3, we also observe a similar trend that for a fixed M χ , the required f e is smaller than f µ by roughly a factor of 4 and both are pushed higher for smaller ∆m η ± . Unlike the scenario of Y D = 1, the contours turn around at M χ ∼ 150 GeV in this scenario. This is because the dominant term in Eq. (31) reaches its maximum at M χ = m η ± k , so that the required value of f becomes smallest at M χ ∼ 150 GeV 3 . Note that this turning point is lower in the left plot because of the larger cancellations for the case with ∆m η ± = 100 GeV (left) than that with ∆m η ± = 300 GeV (right).
We note that, in both scenarios with Y D = 1 and 0, the charged Z 2 -odd particles can be pair produced at colliders and their leptonic decays are subject to constraints from the experimental searches at the LHC. These constraints will be discussed in Sec. IV B.
Lastly, we comment on the contributions from two-loop Barr-Zee type diagrams [26].
In our model, new contributions to the Barr-Zee type diagrams can enter via the Z 2 -odd particle loops in the effective hγγ, hZγ and W + W − γ vertices. The first two vertices, in particular, may give rise to sizable contributions to ∆a NP , if the scalar trilinear couplings are taken to be large. However, such large values are highly constrained by the Higgs data to be discussed in Sec. IV B. Together with the smallness of the Yukawa couplings for muon and electron, we find that contributions from these two types of diagrams are negligible.
The contributions from diagrams with the W + W − γ effective vertex have been examined in detail in Ref. [27]. It is shown that the contributions are at least two orders of magnitude smaller than the experimental measurements and can also be safely neglected.
IV. PHENOMENOLOGY
In this section, we discuss the phenomenological consequences of our models, focusing on the physics of DM and collider signatures of the new particles.
A. Dark Matter Phenomenology
As alluded to in Sec. II, the lightest neutral Z₂-odd particle can be a DM candidate and corresponds to η⁰₁ (η⁰_H or χ_ℓ) in the scenario of Y_D = 1 (Y_D = 0). Current measurements of the cosmic microwave background radiation by the Planck satellite show the DM relic density to be [28]
Ω_DM h² = 0.120 ± 0.001 ,   (34)
assuming the cold DM scenario.³
³ For Y_D = 1, the dominant term in Eq. (30) reaches its maximum at M_χ ∼ 0.12 m_η1. Thus, the turning behavior is not observed as we take η⁰₁ to be the lightest particle.
FIG. 4. DM annihilation processes: (a) the s-channel, Higgs-mediated annihilation η⁰₁η⁰₁ → h → SM SM; (b) the t-channel annihilation into W±W∓/ZZ mediated by η±/η_A; (c) the t-channel annihilation into e⁺e⁻/µ⁺µ⁻ mediated by χ_e/χ_µ.
We first discuss the relic density of DM in the scenario of Y_D = 1. The important DM annihilation processes are shown in Fig. 4. The amplitude of the s-channel Higgs-mediated process is proportional to the η⁰₁η⁰₁h coupling calculated as
λ_{η⁰₁η⁰₁h} = v [ c_θ² ( m²_{η±}/v² − m²_{η1}/v² − λ₃/2 ) − λ₇ s_θ² ] ,   (35)
where the λ 3 and λ 7 parameters are chosen as independent parameters [see Eqs. (11) and (17)] in our analyses. Therefore, the λ η 0 1 η 0 1 h coupling can be taken to be any value as far as it satisfies the theoretical bounds discussed in Sec. II. This process can be particularly important when the DM mass is close to half of the Higgs boson mass due to the resonance effect. The amplitude of the t-channel process mediated by the heavier Z 2 -odd scalar bosons becomes important when the DM mass is larger than about 80 GeV because of the threshold of the weak gauge boson channels. The t-channel process mediated by the vector-like lepton χ is sensitive to the Yukawa couplings f L,R , while weakly depending on the mass of the lighter vector-like lepton. In addition to the processes shown in Fig. 4, we also take into account the contributions from DM co-annihilations with the heavier Z 2 -odd particles,
i.e., η 0 A , η 0 2 , η ± and χ ± . For numerical calculations, we have implemented our model using FeynRules [29,30] and derived the relic density and direct search constraints using MadDM [31][32][33]. It is worth mentioning that in the Inert Doublet Model (IDM), another solution of the DM mass to satisfy the relic density may exist in a TeV region when the mass splitting among the Z 2 -odd scalar particles is small, typically less than 10 GeV [34]. In such a scenario, DM dominantly annihilates into a pair of weak gauge bosons whose annihilation cross section decreases by O(1/m 2 DM ), while the annihilation into the Higgs bosons is highly suppressed due to small Higgs-DM couplings. In our model, such a high mass solution cannot be realized, because the additional η 0 2 state cannot have the mass close to η 0 1 in order to explain the g − 2 anomaly as discussed in Sec. III. As a result, the (co)annihilation into a pair of the Higgs bosons is not suppressed at the high mass region. This situation can be clearly seen in the right panel of Fig. 6 in which we take ∆m = 30, 60, 120 GeV that can explain the g − 2 anomalies. Indeed, the predicted density is well below the observed value at the high mass region. In fact, we confirm that solutions do not appear even at a few hundred TeV of m η 1 .
In addition to the DM annihilation, the λ hη 0 1 η 0 1 coupling contributes to the scattering of DM with nuclei via the mediation of the Higgs boson, allowing our DM candidate to be probed by the direct search experiments. Fig. 7 shows the spin-independent DMnucleon scattering cross section and its upper limit at 90% confidence level obtained from the XENON1T experiment with a 1-tonne times one year exposure [35]. We find that λ hη 0 1 η 0 1 /v has to be smaller than 0.0026, 0.0034, and 0.0047 for the DM η 0 1 to has a mass around 50, 65 and 80 GeV, respectively, by which we can explain the observed relic density.
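The interplay between Eq. (35) and the direct-detection ceiling can be illustrated numerically as follows; the masses and quartic couplings are invented benchmark numbers, the bracketing of Eq. (35) is the one assumed in the reconstruction above, and 0.0047 is the ceiling quoted in the text for a DM mass around 80 GeV.

```python
import numpy as np

v = 246.0                                  # GeV
m_etapm, m_eta1 = 200.0, 80.0              # GeV, illustrative Z2-odd masses
theta = np.pi / 4
lam3, lam7 = 1.01, 0.05                    # illustrative quartic couplings

c2, s2 = np.cos(theta)**2, np.sin(theta)**2
# Eq. (35) with the bracketing assumed in the reconstruction above:
lam_DM_h = v * (c2 * (m_etapm**2/v**2 - m_eta1**2/v**2 - lam3/2) - lam7 * s2)

ratio = abs(lam_DM_h) / v
# A mild cancellation between the mass-splitting term and lam3/2, lam7 is what
# brings the coupling below the quoted direct-detection ceiling.
print(f"lambda(DM-DM-h)/v = {ratio:.4f}  (quoted ceiling ~0.0047 at m_DM = 80 GeV)")
```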
In conclusion, the mass of η⁰₁ should be about 50, 65 or 80 GeV, while having f ≲ 0.34 and λ_{hη⁰₁η⁰₁}/v ∈ [1.0 × 10⁻⁴, 2.6 × 10⁻³],
in order to satisfy both the relic density and the direct search experiment in the scenario with Y D = 1.
Next, we discuss the scenario with Y D = 0 assuming η 0 H to be the DM candidate. In this scenario, the properties of DM are quite similar to those of the scenario with Y D = 1 discussed above, where the annihilation processes can be obtained by replacing (η 0 1 ,η ± ,e/µ) with (η 0 H ,η ± 1,2 ,ν e /ν µ ) in Fig. 4. The η 0 H η 0 H h coupling is given as
λ_{η⁰_Hη⁰_Hh} = (v/2) ( m²_{ηA}/v² − m²_{ηH}/v² − λ₃ ) .   (36)
Again, this coupling can be taken to be any value due to the independent parameter λ₃. If instead the vector-like lepton χ_ℓ is taken to be the DM candidate, its annihilation proceeds through the processes χ_ℓχ_ℓ → νν̄ / ℓ⁺ℓ⁻ mediated by a neutral or charged Z₂-odd scalar boson. These processes alone, however, produce a cross section that is too small to account for the observed relic density. Thus, the scenario of having a fermionic DM in our model is ruled out.
B. Collider Phenomenology
We first discuss the constraints from direct searches for new particles at high-energy collider experiments. In our model, all the new particles are Z 2 -odd, and thus would only be produced in pairs at colliders. In addition, due to the new Yukawa interactions for the muon and the electron, their decays typically include a muon or an electron in association with missing energy carried away by the DM. Therefore, our model can be tested by looking for an excess of events with multiple charged leptons plus missing energy, which is identical to the signatures of slepton or chargino production in supersymmetric models.
We first focus on the pair production of the vector-like leptons χ ± at the LHC in the model with Y D = 1. The pair production occurs via the Drell-Yan process mediated by the photon and Z boson, so that its cross section is simply determined by the mass of χ . The Branching Ratio left panel of Fig. 8 shows the cross section of pp → γ * /Z * → χ + χ − with the collision energy of 13 TeV. The cross section is calculated at the leading order using MadGraph_aMC@NLO [36] with the parton distribution functions NNPDF23_lo_as_0130_qed [37]. It is seen that the cross section is about 900, 20 and 0.8 fb for M χ = 150, 300 and 600 GeV, respectively. On the other hand, the decays of χ ± strongly depend on the mass spectrum of the Z 2 -odd scalar bosons. For the case with (m η 0 1 , m η A , m η ± , m η 0 2 ) = (80, 200, 200, 380) GeV, the various decay branching ratios of χ ± are depicted in the right panel of Fig. 8. In this plot, we take θ = π/4 in which the branching ratios do not depend on f . We see that χ ± decay 100% into η 0 1 ± when M χ < 200 GeV because this is the only kinematically allowed channel. At higher masses, χ ± can also decay into η 0 2 ± , η 0 A ± and η ± ν . The heavier Z 2 -odd scalar bosons can further decay into the DM and a SM particle, i.e., η 0 2 → hη 0 1 , η 0 A → Zη 0 1 , and η ± → W ± η 0 1 . Therefore, when these channels are allowed, the final state of the χ ± decays can have 1 or 3 charged leptons. We note that the tri-lepton channel is highly suppressed by the small branching ratio of the leptonic decays of the Z boson or the Higgs boson.
In Fig. 9, we show the observed exclusion limit on the vector-like lepton masses M χ using the same set of parameters as in Fig. 8. The observed limit is derived based on the searches for events with exactly two or three electrons or muons and missing transverse momentum performed by the ATLAS experiment using the 36.1 fb −1 dataset of √ s = 13 TeV collisions [38]. We use MadGraph_aMC@NLO [36] to simulate the events and to compute the χ + χ − production cross section at the leading order. The events are further processed by
Checkmate [39][40][41][42], which utilizes Pythia8 [43,44] for parton showering and hadronization and Delphes3 [45] for detector simulations, and which compares the number of events with the limit in a given signal region provided by the ATLAS experiment [46]. With our parameter choice, M_χ ≲ 270 GeV is excluded. Note also that such lower bounds on the χ mass depend on the mass spectrum of the Z_2-odd scalar bosons, and are usually lower than the bounds extracted in the literature (e.g., in Ref. [21]). In Fig. 10, we summarize all the constraints discussed above in our model with Y_D = 1.
The regions shaded by dark green and orange can explain, respectively, the electron and muon g − 2 within 1σ. The lower bound on M χ is derived from the observed direct search limit by the ATLAS collaboration (see Fig. 9), while the region shaded by brown cannot explain the DM relic density as the annihilation cross section of DM in this region is too large to reach the observed density (see Fig. 6).
We note that in addition to the pair production of χ ± , the inert scalar bosons can also be produced in pairs. When we consider the case where the vector-like lepton masses are larger than the masses of the inert scalar bosons, the signature of these scalar bosons become quite similar to that given in the IDM. As shown in Ref. [47], the upper limit on the cross section of multi-lepton final states given by the LHC Run-II data is typically one or more than one order of magnitude larger than that predicted in the IDM. Thus, we can safely avoid the bound from the direct searches for the inert scalar bosons at the LHC.
Let us briefly comment on the collider signatures in the model with Y_D = 0. In this scenario, the vector-like lepton is electrically neutral, so that it is not produced in pairs via the Drell-Yan process, but can be produced from decays of the inert scalar bosons, e.g., η^±_{1,2} → ℓ^± χ^0 and η^0_{H,A} → ν χ^0. The most promising process to test this scenario could then be pair production of the charged inert scalar bosons, pp → η^±_i η^∓_j (i, j = 1, 2). However, we find that the production cross sections of η^±_{1,2} are roughly one order of magnitude smaller than those of the vector-like leptons shown in Fig. 8, so that this process is more weakly constrained by the current LHC data than its counterpart in the model with Y_D = 1.
Finally, we discuss an indirect test of our model by focusing on modifications in the Higgs boson couplings. Because of the Z 2 symmetry, the Higgs boson couplings do not change from their SM values at tree level. However, the loop-induced hγγ and hZγ couplings can be modified due to the new charged scalar boson loops, i.e., η ± (η ± 1 and η ± 2 ) in the model with Y D = 1 (Y D = 0). In order to discuss the modifications to the h → γγ and h → Zγ decays, we introduce the signal strength µ γγ and µ Zγ defined as follows:
µ_{γγ/Zγ} ≡ [σ_h × BR(h → γγ/Zγ)] / [σ_h × BR(h → γγ/Zγ)]_{SM}.  (37)
In our model, the production cross section of the Higgs boson should be the same as in the SM. Consequently, these signal strengths are simply given by the ratio of the branching ratio between our model and the SM. The decay rates of h → γγ and h → Zγ depend on the Higgs boson couplings to the charged scalar bosons, which are calculated as
λ_{hη^+η^-} = −v λ_3 for Y_D = 1,  (38)
and for Y_D = 0,
λ_{hη^+_1η^-_1} = v( m_{η_A}^2/v^2 + m_{η_H}^2/v^2 − 2m_{η^±_1}^2/v^2 − λ_3 c_θ^2 − λ_7 s_θ^2 ),
λ_{hη^+_2η^-_2} = v( m_{η_A}^2/v^2 + m_{η_H}^2/v^2 − 2m_{η^±_2}^2/v^2 − λ_3 s_θ^2 − λ_7 c_θ^2 ).  (39)

The current global average of the Higgs diphoton signal strength is µ^{Exp}_{γγ} = 1.10^{+0.10}_{−0.09} [9], where the deviation of the central value from the SM expectation mainly originates from the CMS measurements [48]. On the other hand, the h → Zγ decay has not yet been observed, and the strongest limit is given by the ATLAS experiment [49], where the observed upper limit on the signal strength µ_{Zγ} is 6.6 at 95% confidence level.
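To make the role of these couplings concrete, the sketch below estimates µ_γγ from the standard one-loop h → γγ amplitude with the W boson, the top quark, and a single charged scalar whose trilinear coupling is λ_{hη^+η^-} = −vλ_3 from Eq. (38). The loop functions are the textbook ones; the overall normalization and sign convention of the scalar term, and the neglect of the small shift in the total Higgs width, are assumptions, so this only reproduces the qualitative trend described around Fig. 11.

```python
import cmath, math

# Standard one-loop h -> gamma gamma loop functions (spin 1, 1/2, 0), tau = m_h^2 / (4 m^2).
def f(tau):
    if tau <= 1.0:
        return cmath.asin(math.sqrt(tau)) ** 2
    x = math.sqrt(1.0 - 1.0 / tau)
    return -0.25 * (cmath.log((1 + x) / (1 - x)) - 1j * math.pi) ** 2

def A1(tau):  return -(2 * tau**2 + 3 * tau + 3 * (2 * tau - 1) * f(tau)) / tau**2
def A12(tau): return 2 * (tau + (tau - 1) * f(tau)) / tau**2
def A0(tau):  return -(tau - f(tau)) / tau**2

m_h, m_W, m_t, v = 125.0, 80.4, 173.0, 246.0   # GeV
tau = lambda m: m_h**2 / (4 * m**2)

def mu_gamma_gamma(lam3, m_charged):
    """Illustrative mu_gg for Y_D = 1 with one charged scalar (normalization assumed)."""
    a_sm = A1(tau(m_W)) + (4.0 / 3.0) * A12(tau(m_t))     # W-boson + top-quark loops
    lam_h_eta = -v * lam3                                  # trilinear coupling from Eq. (38)
    a_s = lam_h_eta * v / (2 * m_charged**2) * A0(tau(m_charged))
    return abs(a_sm + a_s) ** 2 / abs(a_sm) ** 2

for lam3 in (-1.0, 0.0, 1.0):
    print(f"lambda_3 = {lam3:+.1f}, m_eta+- = 200 GeV: "
          f"mu_gg ~ {mu_gamma_gamma(lam3, 200.0):.3f}")
```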
In Fig. 11, we show the signal strength µ_γγ as a function of λ_3 (λ_7) for different charged scalar masses in the scenario of Y_D = 1 (Y_D = 0) in the left (right) panel. For Y_D = 1, the scalar boson loops can interfere constructively with the dominant weak gauge boson loops for a negative value of λ_{hη^+η^-} (corresponding to a positive λ_3). A similar effect is also seen for positive λ_7 in the right plot with Y_D = 0. In these plots, the dashed part of each curve is excluded by the perturbative unitarity or vacuum stability bounds according to Eqs. (18) to (26). For Y_D = 1, the lower bounds on λ_3 are determined by the vacuum stability constraints, while the upper bounds indirectly come from the vacuum stability constraints on λ_7 assuming λ_{hη^0_1η^0_1}/v = 10^{-3}, as suggested in Sec. IV A. We note that the quartic couplings λ_{2,6,8} for the inert scalar fields are scanned for any given λ_3 such that the allowed range of λ_3 is maximized. For Y_D = 0, the bounds on λ_7 are derived in a similar way. In this scenario, the lower bounds on λ_7 arise from the bounded-from-below conditions in Eqs. (25) and (26), while the upper bounds are determined by the requirement µ^2_S > 0 [see Eq. (10)]. From Fig. 11, it is clear that both scenarios of our model are able to accommodate the current experimental constraints from the h → γγ decay within a reasonably large range of parameter space without violating the perturbative unitarity and vacuum stability constraints.
As the decay rates of h → γγ and h → Zγ have different dependences on couplings, to see the correlation between µ γγ and µ Zγ would be useful in order to extract the structure of the model [50]. In Fig. 12, we show the correlation between µ Zγ and µ γγ for the scenario of Y D = 1 (left) and Y D = 0 (right). We only show the points which are allowed by the perturbative unitarity and vacuum stability bounds. For Y D = 1, we see that µ Zγ is strongly correlated with µ γγ . Within the 2σ region around the current measurements of µ Exp γγ , a signal strength for h → Zγ is predicted to be from 0.97 to 1.05. Such a prediction can be slightly modified by the choice of the mixing angle θ and the masses of the Z 2 -odd scalar bosons.
For Y D = 0, we observe no or little correlation between µ Zγ and µ γγ . This is because the contributions from the pure η ± 1 and η ± 2 loops are small in our particular choice of θ = π/4 due to smaller η + 1 η − 1 Z and η + 2 η − 2 Z couplings. On the other hand, the η ± 1 and η ± 2 mixed loop contribution, which appears in the h → Zγ decay but not the h → γγ decay, can be sizable.
The coupling λ hη ± 1 η ∓ 2 that contributes to this new diagram is given by
λ_{hη^±_1η^∓_2} = v s_θ c_θ [ λ_3 + (m_{η^±_1}^2 + m_{η^±_2}^2 − m_{η_A}^2 − m_{η_H}^2)/v^2 − λ_7 ].  (40)
With this additional mixed loop contribution, the model with Y D = 0 can predict µ Zγ = 1 even when µ γγ = 1. We note that our prediction on µ Zγ is sensitive to the choice of θ, because of the Zη ± i η ∓ j couplings. By scanning the mixing angle θ while imposing both theoretical and experimental constraints, we find that the model with Y D = 0 would predict an h → Zγ signal strength that is at most +10% larger than the SM value.
V. CONCLUSIONS
To explain the muon and electron g − 2 anomalies and the dark matter data, we have proposed a new model whose symmetry is enlarged by a global U(1) and a discrete Z_2 symmetry and whose particle content is extended with two vector-like leptons and inert scalar singlet and doublet fields. Depending upon the hypercharge assignment of the new fields, there are two different scenarios. Thanks to the new symmetries, we can safely avoid lepton flavor-violating decays of charged leptons, while obtaining new contributions to the muon and electron g − 2 with the desired signs and magnitudes for the data. In addition, the symmetries guarantee the stability of the DM candidate, which is the lightest neutral Z_2-odd particle.
We have found that there are regions in the parameter space that can simultaneously accommodate both g − 2 anomalies and the DM relic density under the constraints from the LHC direct searches for vector-like leptons and from DM direct detection experiments. In the successful parameter regions, the masses of the vector-like leptons can be about 300 GeV, with the magnitudes of the new muon and electron Yukawa couplings being about 0.1 and 0.03, respectively. Larger vector-like lepton masses generally go with larger values of the new Yukawa couplings, while too large Yukawa couplings cause too large a DM annihilation cross section to explain the currently observed relic density. We have shown that typically the magnitude of the new Yukawa couplings should be smaller than about 0.4. Finally, we have discussed the modifications to the Higgs diphoton and Higgs to Zγ decays, which are mediated by the inert charged scalar boson loops. We have seen that the predictions of the h → γγ signal strength in our model are mostly consistent with the current measurements at the LHC. Depending on the choice of parameters, our model would further predict an h → Zγ signal strength that is at most +10% larger than the SM value.
FIG. 1. Feynman diagrams for the muon/electron g − 2. The left (right) diagram contributes to g − 2 in the model with Y_D = 1 (Y_D = 0).
FIG. 4. Important diagrams that contribute to the DM annihilation into the SM particles.
Fig. 5 shows a typical behavior of the DM relic density as a function of the DM mass m_{η_1} in the model with Y_D = 1. In all three panels, the grey curves show a benchmark parameter choice for (f_e, f_µ, λ_{hη^0_1η^0_1}) and (m_{η_2}, m_{η_A}, m_{η^±}, M_{χ_e}, M_{χ_µ}) = (380, 200, 200, 1100, 600) GeV, where M_χ are determined according to Fig. 2 such that both electron and muon g − 2 anomalies can be accommodated within 1σ at m_{η_1} = 80 GeV. By turning off some of the couplings, we show with colored curves in the three panels how the relic density changes if only a subset of the processes in Fig. 4 is taken into account. The leftmost plot of Fig. 5 shows that for m_{η_1} ≳ 80 GeV, the t-channel annihilations into weak gauge bosons are kinematically allowed and become the dominant process. It is clear from the central plot that for m_{η_1} < 50 GeV, the relic density is dominated by the t-channel annihilations into electron and muon pairs. The rightmost plot shows that the Higgs-mediated s-channel process is most important around the Higgs resonance, when m_{η_1} ∼ 62.6 GeV. We observe that for m_{η_1} < 150 GeV, there are three solutions to the relic density: one at m_{η_1} ∼ 80 GeV and the remaining two around half the Higgs resonance.

The impacts of the key parameters in each process shown in Fig. 4 are investigated in Fig. 6. From left to right, we investigate the dependence on the magnitude of the Yukawa coupling f_µ, the λ_{hη^0_1η^0_1} coupling, and the mass splitting ∆m between the DM and all the other heavier Z_2-odd scalar bosons. From the left two plots, we see that an increase in f_µ reduces the overall relic density in the low-mass region, while a decrease in λ_{hη^0_1η^0_1} makes the dip around the Higgs resonance shallower. In the leftmost (center) plot, we find the critical values f_µ ≃ 0.54 (λ_{hη^0_1η^0_1}/v ≃ 10^{-4}) above (below) which the solutions of m_{η_1} that realize the observed relic density disappear. In addition, if we take λ_{hη^0_1η^0_1}/v ≳ 0.10 in the center plot, the solutions at m_{η_1} ≥ m_h/2 disappear because the dip becomes too deep.

FIG. 5. Contributions of different processes shown in Fig. 4 to the DM relic density in the model with Y_D = 1 as a function of the DM mass m_{η_1}. The grey curves show the case for the benchmark parameter set with the mass spectrum (m_{η_2}, m_{η_A}, m_{η^±}, M_{χ_e}, M_{χ_µ}) = (380, 200, 200, 1100, 600) GeV and the coupling strengths (f_e, f_µ, λ_{hη^0_1η^0_1}). From the left to right panels, the colored curve shows the case with some of the couplings taken to be zero, by which we see the impact of the contribution from the process of (b), (b) plus (c), and (a) plus (b) shown in Fig. 4.

FIG. 6. Relic density as a function of the DM mass m_{η_1} in the model with Y_D = 1. The left, center and right panels show, respectively, the effect of varying the magnitude of the Yukawa coupling f_µ, the λ_{hη^0_1η^0_1} coupling, and the mass splitting ∆m (with m_{η_2} = m_{η_A} = m_{η^±}) defined in the figure. For all the panels, M_{χ_e} − m_{η_1} is fixed to be 1020 GeV, while M_{χ_µ} − m_{η_1} is taken to be 520 (1070) [1820] GeV for f_µ = 0.2 (0.4) [0.8] such that the g − 2 anomalies can be explained within the 1σ level, where the latter two choices are only taken in the left plot.
FIG. 7. Spin-independent DM-nucleon scattering cross section as a function of the DM mass m_{η_1} for several values of the λ_{hη^0_1η^0_1} coupling. The black curve shows the 90% confidence level upper limit obtained from the XENON1T experiment with a 1.0 t × 1 yr exposure. The green and yellow regions mark the 1σ and 2σ sensitivity bands for the XENON1T results.
FIG. 8. Left: Cross section of pp → χ^+χ^− as a function of M_χ in the model with Y_D = 1 at √s = 13 TeV. Right: Branching ratios of χ in the model with Y_D = 1 with (m_{η_1}, m_{η_A}, m_{η^±}, m_{η_2}) = (80, 200, 200, 380) GeV and θ = π/4.
FIG. 9. Excluded region in the plane of the vector-like lepton masses M_{χ_µ}–M_{χ_e} in the model with Y_D = 1 from the searches for events with exactly two or three electrons or muons and missing transverse momentum by the ATLAS experiment with √s = 13 TeV and 36.1 fb^{-1} of integrated luminosity. We take (m_{η_1}, m_{η_2}, m_{η_A}, m_{η^±}, θ) = (80 GeV, 380 GeV, 200 GeV, 200 GeV, π/4).
FIG. 10. Summary of the constraints in the plane of f and M_χ for the benchmark case with Y_D = 1 and (m_{η_1}, m_{η_2}, m_{η_A}, m_{η^±}, θ) = (80 GeV, 380 GeV, 200 GeV, 200 GeV, π/4). The regions shaded by dark green and orange can explain the electron and the muon g − 2 within 1σ, respectively. The lower bounds on M_χ are derived from the direct search limit by the ATLAS collaboration, while the brown area cannot explain the observed DM relic density.
FIG. 11. Signal strength µ_γγ in the model with Y_D = 1 (left) and Y_D = 0 (right). The dark (light) green band shows the current global average of µ^{Exp}_{γγ} with 1σ (2σ) uncertainty. For Y_D = 0, we take (m_{η_H}, m_{η_A}, θ) = (80 GeV, 200 GeV, π/4), and the mass splitting between the two charged scalar bosons is fixed to be 300 GeV.

We note that the parameter λ_3 in the model with Y_D = 1 also appears in the DM coupling [see Eq. (35)], but the dependence of λ_{hη^0_1η^0_1} on λ_7 makes it still possible to choose λ_3 freely. For the model with Y_D = 0, λ_3 is controlled by the DM coupling λ_{hη^0_Hη^0_H} [see Eq. (36)], but the λ_{hη^+_kη^-_k} couplings can be chosen freely due to their dependence on the λ_7 parameter. In both scenarios, the new fermions χ do not couple to the Higgs boson as they are vector-like.
FIG. 12. Correlation between µ_Zγ and µ_γγ in the model with Y_D = 1 (left) and Y_D = 0 (right) under the constraints of perturbative unitarity and vacuum stability. The dark (light) green band shows the current global average of µ_γγ with 1σ (2σ) uncertainty. For Y_D = 0, we take (m_{η_H}, m_{η_A}, θ) = (80 GeV, 200 GeV, π/4), and the mass splitting between the two charged scalar bosons is fixed to be 300 GeV.
TABLE I. Particle content and charge assignment under the symmetries SU(2)_L × U(1)_Y and the global U(1) × Z_2.
the Fermi decay constant. The VEVs of η_D and η_S are assumed to be zero in order to avoid spontaneous breakdown of the Z_2 symmetry. The neutral component h in Φ is identified with the discovered 125-GeV Higgs boson. Because of the assumed exact Z_2 symmetry, no mixing is allowed between h and the other scalars. Hence, the Higgs boson couplings are the same as those of the SM Higgs boson at tree level, while the loop-induced couplings such as hγγ and hZγ can be modified by loop contributions of the new particles. We will discuss the impact of these contributions on the decays of h → γγ and h → Zγ in Sec. IV.
Our model can be seen as an extension of the "SLR" model proposed in Ref.[21], where only one new fermion is introduced in order to explain the muon g − 2 anomaly.
The quartic terms of the scalar potential have the same forms as those given in the so-called next-to-two-Higgs doublet model studied in Ref.[23] except for notational differences. We have confirmed that our results are consistent with those given in Ref.[23].
A more conservative upper limit for the magnitude of the Yukawa coupling is found to be 0.34 for the case with f e = f µ .
[1] M. Cepeda et al., "Report from Working Group 2," CERN Yellow Rep. Monogr. 7, 221-584 (2019), arXiv:1902.00134 [hep-ph].
[2] M. Benedikt, A. Blondel, P. Janot, M. Klein, M. Mangano, M. McCullough, V. Mertens, K. Oide, W. Riegler, D. Schulte, and F. Zimmermann, "Future circular colliders," Annu. Rev. Nucl. Part. Sci. 69, 389-415 (2019).
[3] A. Czarnecki and W. J. Marciano, "The muon anomalous magnetic moment: A harbinger for 'new physics'," Phys. Rev. D 64, 013014 (2001), arXiv:hep-ph/0102122.
[4] G. W. Bennett et al. (Muon g-2), "Final report of the muon E821 anomalous magnetic moment measurement at BNL," Phys. Rev. D 73, 072003 (2006), arXiv:hep-ex/0602035.
[5] A. Keshavarzi, D. Nomura, and T. Teubner, "Muon g − 2 and α(M_Z^2): a new data-based analysis," Phys. Rev. D 97, 114025 (2018), arXiv:1802.02995 [hep-ph].
[6] T. Blum, P. A. Boyle, V. Gülpers, T. Izubuchi, L. Jin, C. Jung, A. Jüttner, C. Lehner, A. Portelli, and J. T. Tsang (RBC, UKQCD), "Calculation of the hadronic vacuum polarization contribution to the muon anomalous magnetic moment," Phys. Rev. Lett. 121, 022003 (2018), arXiv:1801.07224 [hep-lat].
[7] M. Davier, A. Hoecker, B. Malaescu, and Z. Zhang, "A new evaluation of the hadronic vacuum polarisation contributions to the muon anomalous magnetic moment and to α(m_Z^2)," Eur. Phys. J. C 80, 241 (2020), arXiv:1908.00921 [hep-ph].
[8] F. Jegerlehner, "Muon g − 2 theory: The hadronic part," EPJ Web Conf. 166, 00022 (2018).
[9] M. Tanabashi et al. (Particle Data Group), "Review of Particle Physics," Phys. Rev. D 98, 030001 (2018).
[10] T. Aoyama et al., "The anomalous magnetic moment of the muon in the Standard Model," (2020), arXiv:2006.04822 [hep-ph].
[11] R. H. Parker, C. Yu, W. Zhong, B. Estey, and H. Müller, "Measurement of the fine-structure constant as a test of the standard model," Science 360, 191-195 (2018).
[12] M. Endo and W. Yin, "Explaining electron and muon g − 2 anomaly in SUSY without lepton-flavor mixings," JHEP 08, 122 (2019), arXiv:1906.08768 [hep-ph].
[13] M. Bauer, M. Neubert, S. Renner, M. Schnubel, and A. Thamm, "Axion-like particles, lepton-flavor violation and a new explanation of a_µ and a_e," (2019), arXiv:1908.00008 [hep-ph].
[14] M. Badziak and K. Sakurai, "Explanation of electron and muon g − 2 anomalies in the MSSM," JHEP 10, 024 (2019), arXiv:1908.03607 [hep-ph].
[15] N. Haba, Y. Shimizu, and T. Yamada, "Muon and electron g − 2 and the origin of the fermion mass hierarchy," (2020), arXiv:2002.10230 [hep-ph].
[16] I. Bigaran and R. R. Volkas, "Getting chirality right: top-philic scalar leptoquark solution to the (g − 2)_{e,µ} puzzle," (2020), arXiv:2002.12544 [hep-ph].
[17] S. Jana, Vishnu P. K., and S. Saad, "Resolving electron and muon g − 2 within the 2HDM," (2020), arXiv:2003.03386 [hep-ph].
[18] L. Calibbi, M. L. López-Ibáñez, A. Melis, and O. Vives, "Muon and electron g − 2 and lepton masses in flavor models," (2020), arXiv:2003.06633 [hep-ph].
[19] J.-L. Yang, T.-F. Feng, and H.-B. Zhang, "Electron and muon (g − 2) in the B-LSSM," J. Phys. G 47, 055004 (2020), arXiv:2003.09781 [hep-ph].
[20] C.-H. Chen and T. Nomura, "Electron and muon g − 2, radiative neutrino mass, and ℓ → ℓ'γ in a U(1)_{e−µ} model," (2020), arXiv:2003.07638 [hep-ph].
[21] L. Calibbi, R. Ziegler, and J. Zupan, "Minimal models for dark matter and the muon g − 2 anomaly," JHEP 07, 046 (2018), arXiv:1804.00009 [hep-ph].
[22] J. F. Gunion, H. E. Haber, G. L. Kane, and S. Dawson, The Higgs Hunter's Guide, Vol. 80 (2000).
[23] M. Mühlleitner, M. O. P. Sampaio, R. Santos, and J. Wittbrodt, "The N2HDM under theoretical and experimental scrutiny," JHEP 03, 094 (2017), arXiv:1612.01309 [hep-ph].
[24] S. Kanemura, Y. Okada, E. Senaha, and C.-P. Yuan, "Higgs coupling constants as a probe of new physics," Phys. Rev. D 70, 115002 (2004), arXiv:hep-ph/0408364.
[25] Z.-z. Xing, "Texture zeros and CP-violating phases in the neutrino mass matrix," in 5th Workshop on Neutrino Oscillations and their Origin (NOON2004) (2004), pp. 442-449, arXiv:hep-ph/0406049.
[26] S. M. Barr and A. Zee, "Electric dipole moment of the electron and of the neutron," Phys. Rev. Lett. 65, 21-24 (1990); Erratum: Phys. Rev. Lett. 65, 2920 (1990).
[27] V. Ilisie, "New Barr-Zee contributions to (g − 2)_µ in two-Higgs-doublet models," JHEP 04, 077 (2015), arXiv:1502.04199 [hep-ph].
[28] N. Aghanim et al. (Planck), "Planck 2018 results. VI. Cosmological parameters," (2018), arXiv:1807.06209 [astro-ph.CO].
[29] A. Alloul, N. D. Christensen, C. Degrande, C. Duhr, and B. Fuks, "FeynRules 2.0 - A complete toolbox for tree-level phenomenology," Comput. Phys. Commun. 185, 2250-2300 (2014), arXiv:1310.1921 [hep-ph].
[30] C. Degrande, C. Duhr, B. Fuks, D. Grellscheid, O. Mattelaer, and T. Reiter, "UFO - The Universal FeynRules Output," Comput. Phys. Commun. 183, 1201-1214 (2012), arXiv:1108.2040 [hep-ph].
[31] F. Ambrogi, C. Arina, M. Backović, J. Heisig, F. Maltoni, L. Mantani, O. Mattelaer, and G. Mohlabeng, "MadDM v.3.0: A comprehensive tool for dark matter studies," Phys. Dark Univ. 24, 100249 (2019), arXiv:1804.00044 [hep-ph].
[32] M. Backović, A. Martini, O. Mattelaer, K. Kong, and G. Mohlabeng, "Direct detection of dark matter with MadDM v.2.0," Phys. Dark Univ. 9-10, 37-50 (2015), arXiv:1505.04190 [hep-ph].
[33] M. Backović, K. Kong, and M. McCaskey, "MadDM v.1.0: Computation of dark matter relic abundance using MadGraph5," Phys. Dark Univ. 5-6, 18-28 (2014), arXiv:1308.4955 [hep-ph].
[34] L. Lopez Honorez, E. Nezri, J. F. Oliver, and M. H. G. Tytgat, "The inert doublet model: An archetype for dark matter," JCAP 02, 028 (2007), arXiv:hep-ph/0612275.
[35] E. Aprile et al. (XENON), "Dark matter search results from a one ton-year exposure of XENON1T," Phys. Rev. Lett. 121, 111302 (2018), arXiv:1805.12562 [astro-ph.CO].
[36] J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli, and M. Zaro, "The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations," JHEP 07, 079 (2014), arXiv:1405.0301 [hep-ph].
[37] R. D. Ball, V. Bertone, S. Carrazza, L. Del Debbio, S. Forte, A. Guffanti, N. P. Hartland, and J. Rojo (NNPDF), "Parton distributions with QED corrections," Nucl. Phys. B 877, 290-320 (2013), arXiv:1308.0598 [hep-ph].
[38] M. Aaboud et al. (ATLAS), "Search for electroweak production of supersymmetric particles in final states with two or three leptons at √s = 13 TeV with the ATLAS detector," Eur. Phys. J. C 78, 995 (2018), arXiv:1803.02762 [hep-ex].
[39] D. Dercks, N. Desai, J. S. Kim, K. Rolbiecki, J. Tattersall, and T. Weber, "CheckMATE 2: From the model to the limit," Comput. Phys. Commun. 221, 383-418 (2017), arXiv:1611.09856 [hep-ph].
[40] M. Cacciari, G. P. Salam, and G. Soyez, "The anti-k_t jet clustering algorithm," JHEP 04, 063 (2008), arXiv:0802.1189 [hep-ph].
[41] M. Cacciari, G. P. Salam, and G. Soyez, "FastJet user manual," Eur. Phys. J. C 72, 1896 (2012), arXiv:1111.6097 [hep-ph].
[42] A. L. Read, "Presentation of search results: The CL(s) technique," J. Phys. G 28, 2693-2704 (2002).
[43] T. Sjöstrand, S. Mrenna, and P. Z. Skands, "PYTHIA 6.4 physics and manual," JHEP 05, 026 (2006), arXiv:hep-ph/0603175.
[44] T. Sjöstrand, S. Mrenna, and P. Z. Skands, "A brief introduction to PYTHIA 8.1," Comput. Phys. Commun. 178, 852-867 (2008), arXiv:0710.3820 [hep-ph].
[45] J. de Favereau, C. Delaere, P. Demin, A. Giammanco, V. Lemaître, A. Mertens, and M. Selvaggi (DELPHES 3), "DELPHES 3, a modular framework for fast simulation of a generic collider experiment," JHEP 02, 057 (2014), arXiv:1307.6346 [hep-ex].
[46] "Search for supersymmetry with two and three leptons and missing transverse momentum in the final state at √s = 13 TeV with the ATLAS detector," (2016).
[47] D. Dercks and T. Robens, "Constraining the inert doublet model using vector boson fusion," Eur. Phys. J. C 79, 924 (2019), arXiv:1812.07913 [hep-ph].
[48] A. M. Sirunyan et al. (CMS), "Measurements of Higgs boson properties in the diphoton decay channel in proton-proton collisions at √s = 13 TeV," JHEP 11, 185 (2018), arXiv:1804.02716 [hep-ex].
[49] M. Aaboud et al. (ATLAS), "Searches for the Zγ decay mode of the Higgs boson and for new high-mass resonances in pp collisions at √s = 13 TeV with the ATLAS detector," JHEP 10, 112 (2017), arXiv:1708.00212 [hep-ex].
[50] C.-W. Chiang and K. Yagyu, "Higgs boson decays to γγ and Zγ in models with Higgs extensions," Phys. Rev. D 87, 033003 (2013), arXiv:1207.1065 [hep-ph].
arXiv:hep-ph/9608222v1  3 Aug 1996

Cosmological Implications of Radiatively Generated Axion Scale

Kiwoon Choi
Department of Physics, Korea Advanced Institute of Science and Technology, Taejon 305-701, Korea

Eung Jin Chun
Department of Physics and Center for Theoretical Physics, Chungbuk National University, Cheongju, Chungbuk 360-763, Korea

Jihn E. Kim
Seoul National University, Seoul 151-742, Korea
We study cosmological implications of supersymmetric axion models in which the axion scale is generated radiatively. Such models lead to the so-called thermal inflation and subsequent reheating should be constrained not to yield a too large axion energy density at the time of nucleosynthesis. We examine how plausible it is that this nucleosynthesis constraint is satisfied for both hadronic and Dine-Fischler-Srednicki-Zhitnitskii type axion models. Baryogenesis and the possibility for raising up the cosmological upper bound on the axion scale in thermal inflation scenario are also discussed.
One of the attractive solutions to the strong CP problem is to introduce an anomalous Peccei-Quinn (PQ) symmetry U(1) P Q [1]. This solution predicts a pseudo-Goldstone boson, the (invisible) axion [2,3], whose decay constant F a is tightly constrained by astrophysical and cosmological arguments. The allowed band of the axion scale F a lies between 10 10 GeV and 10 12 GeV [4] which is far away from the already known two mass scales, the electroweak scale and the Planck scale M P = 1/ √ 8πG N . It is certainly desirable that this intermediate scale appears as a dynamical consequence when the known mass scales are set up in the theory.
This indeed happens [5] in a class of spontaneously broken supergravity models which are commonly considered as the underlying structure of the supersymmetric standard model. In such a scheme, as was recently emphasized, the early universe experiences the so-called thermal inflation and subsequently a period dominated by coherently oscillating flaton fields [6]. The aim of this paper is to examine the cosmological implications of PQ flatons in supergravity models with a radiative mechanism generating the axion scale.
One possible cosmological consequence of PQ flatons is their impact on big-bang nucleosynthesis through their decay into axions. In the scheme under consideration, PQ flatons generally have order-one couplings to the Goldstone boson (the axion) in units of 1/F_a [7]. As we will argue later, axions produced by decaying flatons are hardly thermalized. In this paper, we first consider the energy density of these unthermalized axions at the time of nucleosynthesis together with its implications for both hadronic axion models [2] and Dine-Fischler-Srednicki-Zhitnitskii (DFSZ) type models [3]. Even when one takes a rather conservative limit on the axion energy density, this consideration provides a meaningful restriction for generic hadronic axion models and also for DFSZ type models with a rather large flaton mass.

As another cosmological implication of PQ flatons, we consider the possibility of raising the cosmological upper bound on the axion scale F_a through late-time entropy production [8] by oscillating PQ flatons. We argue that F_a can be pushed up to about 10^{15} GeV without any cosmological difficulty in the thermal inflation scenario. Finally, we point out that the Dimopoulos-Hall (DH) mechanism [9] for late-time baryogenesis can be naturally implemented in the thermal inflation scenario. In the conclusion, we note that the case of n = 2 or 3 [see Eq. (3) below] provides a very concordant cosmological scenario.
We begin by describing how the intermediate axion scale can be radiatively generated in supergravity models in which the PQ fields correspond to flat directions. Let us consider a variant of the model of Ref. [5] with superpotential
W = k φ_1^{n+2} φ_2/M_P^n + h φ_1^{n+1} H_1 H_2/M_P^n + h_N N N φ_1 + h_L L H_2 N + · · ·  (1)
where H 1,2 are the usual Higgs doublets, N is the right-handed neutrino component and the ellipsis denotes the supersymmetric standard model part of the superpotential. In order to implement the PQ symmetry, two gauge singlet superfields φ 1,2 with PQ charges q 1,2 are introduced. The structure of the superpotential is determined by the PQ charge assignment:
q_2 = −(n+2) q_1, q_{H_1} + q_{H_2} = −(n+1) q_1, and so on. Obviously the PQ fields φ_1 and φ_2 correspond to flat directions when nonrenormalizable interactions and supersymmetry breaking effects are ignored. This model can be considered as a supersymmetric generalization of the DFSZ axion model (but endowed with a radiative mechanism generating the axion scale) in the sense that the Higgs doublets carry nonzero PQ charges. Note that the second term in the superpotential yields the correct scale for the Higgs mass parameter µ = h⟨φ_1⟩^{n+1}/M_P^n upon spontaneous breaking of the PQ symmetry [10]. Taking into account the radiative effects of the strong Yukawa coupling h_N N N φ_1, the soft mass-squared of φ_1 becomes negative at scales around F_a ≃ ⟨φ_1⟩, thereby driving φ_1 to develop a vacuum expectation value at an intermediate scale. This Yukawa coupling is also necessary to keep the field φ_1 in thermal equilibrium at high temperature T > m_1, for which ⟨φ_1⟩ = 0. Neglecting the field φ_2, the renormalization-group improved scalar potential for the singlets is given by
V = V_0 − m_1^2 |φ_1|^2 + k^2 |φ_1|^{2n+4}/M_P^{2n},  (2)
where m 2 1 is positive and of order m 2 3/2 , and V 0 is a constant of order m 2 3/2 F 2 a which is introduced to make V ( φ 1 ) = 0. Clearly the minimum of this scalar potential breaks U(1) P Q by
⟨φ_1⟩ ≃ F_a ≃ (m_{3/2} M_P^n)^{1/(n+1)},  (3)
where we have ignored coefficients of order unity. The integer n fixes the size of the axion scale. For the smallest value n = 1, the axion scale F_a ≃ (m_{3/2} M_P)^{1/2} fits into the usual allowed band of the axion scale: 10^{10} GeV ≲ F_a ≲ 10^{12} GeV. Later we will argue that the upper bound on F_a can be relaxed, so that bigger values of n are allowed as well.
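A quick numerical reading of Eq. (3), with the order-one coefficients set to unity and the reduced Planck mass M_P ≃ 2.4 × 10^{18} GeV, gives the rough axion scales below; the choice m_{3/2} ∼ 1 TeV is an assumption made only for illustration.

```python
M_P = 2.4e18   # reduced Planck mass [GeV]
m32 = 1.0e3    # gravitino mass scale m_{3/2} ~ 1 TeV [GeV] (assumed)

for n in (1, 2, 3):
    F_a = (m32 * M_P**n) ** (1.0 / (n + 1))   # Eq. (3) with O(1) coefficients dropped
    print(f"n = {n}: F_a ~ {F_a:.1e} GeV")
```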
The above radiative mechanism generating the axion scale has substantial influence on the history of the universe [6,11]. At high temperature, φ 1 receives a thermal mass δm 2 1 ≃ |h N | 2 T 2 ≫ m 2 1 leading to φ 1 = 0. This thermal mass is generated by right-handed neutrinos in the thermal bath. Note that the right-handed neutrino N becomes massless when φ 1 = 0 and thus copiously produced when T ≫ m 1 . During this period, φ 2 = 0 also. When the temperature falls below T ≃ V 1/4 0 , which is about m 3/2 F a , the universe is dominated by the vacuum energy density V 0 and thus there appears a short period of thermal inflation.
For T < m_1 ≃ m_{3/2}, the effective mass-squared of φ_1 becomes negative and φ_1 then develops an intermediate-scale VEV given by Eq. (3). With ⟨φ_1⟩ ≃ F_a, the other flaton field φ_2 also develops a VEV of order F_a through the A-type soft SUSY-breaking term, k A φ_1^{n+2} φ_2/M_P^n, in the scalar potential. This makes the thermal inflation end, and subsequently the early universe experiences a period dominated by the coherently oscillating PQ flaton fields φ_1 and φ_2.
More precisely, the oscillating flaton corresponds to the combination of the two complex scalar fields φ_1 and φ_2 that is orthogonal to the axion field
a = Σ_i c_i arg(φ_i), where c_i = q_i ⟨φ_i⟩^2/F_a.
NS bound. After the period of coherent oscillation, the universe would be reheated by the decay products of the oscillating flaton ϕ. A feature peculiar to the PQ flatons is that their decay products include axion as one of the main components [7,11]. The energy density of these axions at the time of nucleosynthesis (NS), (ρ a ) N S , should satisfy the conventional nucleosynthesis bound on the extra energy density:
(ρ_a/ρ_ν)_{NS} ≤ δN_ν.  (4)
Here ρ ν denotes the energy density of a single species of relativistic neutrino and δN ν is the number of extra neutrino species allowed by nucleosynthesis. In the past, δN ν has been argued to be 0.3 or even smaller as 0.04 [12]. However, although claimed to be quite conservative, more careful recent analyses do not exclude even δN ν = 1.5 [13]. Here we do not take any specific value of δN ν , but examine the implications of the above NS bound for δN ν = 0.1 ∼ 1.5
with the hope that one can push δN ν down to the value 0.1 in the future.
Before evaluating (ρ a ) N S , let us first determine the reheat temperature T RH by parameterizing the width of the flaton decay into thermalizable particles as
Γ ϕ = B −1 a M 3 ϕ /64πF 2 a .
Here M ϕ denotes the flaton mass and the prefactor B −1 a will be presumed to be of order 10, which is a proper choice for (ρ a ) N S to satisfy the above NS bound. The reheat temperature is then given by
T_{RH} ≃ 1.2 g_{RH}^{−1/4} (M_P Γ_φ)^{1/2} ≃ (0.1/B_a)^{1/2} (10^{12} GeV/F_a) (M_φ/300 GeV)^{3/2} GeV,  (5)
where g RH ≡ g * (T RH ) counts the effective number of relativistic degree of freedom at T RH .
The entropy production factor S_{after}/S_{before} for this reheating is of order V_0/(m_{3/2}^3 T_{RH}), which is roughly of order 10^2 (M_P/m_{3/2})^{(5n−1)/(2n+2)}. This huge entropy dumping at a relatively late time was considered as a promising source for erasing various unwanted cosmological relics, especially the cosmologically dangerous string moduli [11].
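As a numerical sanity check of Eq. (5), the sketch below evaluates Γ_φ = B_a^{-1}M_φ^3/(64πF_a^2) and the corresponding T_RH, with F_a tied to M_φ through Eq. (3). The dropped O(1) coefficients, B_a = 0.1, and g_RH ≃ 20 are assumptions, so only the rough magnitudes are meaningful.

```python
import math

M_P = 2.4e18                      # reduced Planck mass [GeV]

def F_a(M_phi, n):                # Eq. (3) with O(1) coefficients dropped, m_{3/2} ~ M_phi
    return (M_phi * M_P**n) ** (1.0 / (n + 1))

def T_RH(M_phi, n, B_a=0.1, g_RH=20.0):
    gamma = M_phi**3 / (B_a * 64 * math.pi * F_a(M_phi, n)**2)  # flaton decay width [GeV]
    return 1.2 * g_RH**-0.25 * math.sqrt(M_P * gamma)           # Eq. (5) [GeV]

for n in (1, 2, 3):
    print(f"n = {n}, M_phi = 300 GeV: F_a ~ {F_a(300.0, n):.1e} GeV, "
          f"T_RH ~ {T_RH(300.0, n):.2g} GeV")
```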
In order to evaluate (ρ a ) N S , one needs to know whether axions produced by the late flaton decay have ever been in thermal equilibrium with the thermalized plasma of normal light particles. If axions were in thermal equilibrium at some moment but later frozen out at
temperature T_f, we would have
(ρ_a/ρ_ν)_{NS} = (4/7) [(43/4)/g_*(T_f)]^{4/3}.  (6)
However if axions have never been in equilibrium, (ρ a ) N S is simply determined by the effective branching ratio B a measuring how large fraction of flatons are converted into axions during the reheating. Roughly B a ≃ Γ a /Γ tot with the decay width Γ a of ϕ → 2a, however axions can be produced also by the secondary decays of the decay products of flatons. For unthermalized axions at T RH , the ratio between ρ a and the energy density ρ r of thermalized radiation would be simply B a /(1 − B a ). We then have
(ρ_a/ρ_ν)_{NS} = (43/4) [B_a/(1 − B_a)] (4/7) [(43/4)/g_{RH}]^{1/3}.  (7)
In order to see whether axions have ever been in thermal equilibrium, let us consider the axion interaction rate Γ int = σv N r where σ denotes the cross section for the axion scattering off the thermalized radiation with energy density ρ r and number density N r . A careful look at of the reheating process indicates that ρ r ∼ R −3/2 , N r ∼ R −1/2 , and ρ ϕ ∼ R −3 e −tΓtot during the reheating period between t 0 and t D ≃ Γ −1 tot where R denotes the scale factor and t 0 corresponds to the time when the relativistic particles produced by the flaton decay become the major part of the radiation [8]. A simple dimensional analysis implies that the axion cross section can be written as σ = (γ 1 + γ 2 (m/E) 2 )/4πF 2 a , where E denotes the center of mass energy, γ 1,2 are dimensionless constants of order unity or less, and m corresponds to the mass of target particle. We then have σv = (γ
1 +γ 2 (m/ E 0 ) 2 (R/R 0 ) 2 )/4πF 2 a .
With the informations given above, it is straightforward to see that the ratio Z = Γ int /H is an increasing function of R during the reheating period of t 0 < t < t D .
Let us now consider the behavior of Z = Γ int /H after the reheating. At t > t D with T < T RH , the entropy production almost ends and thus N r ∼ R −3 ∼ T 3 , H ∼ R −2 ∼ T 2 as in the standard radiation dominated universe with an adiabatic expansion. Using Eq. (3) with
M ϕ ≃ m 3/2 and Eq. (5), we find Γ int H = 3 × 10 −2γ g 1/2 * T M P F a 2 = 10 −2γ T T RH g * 10 2 M ϕ M P 3(n−1)/2(n+1) ,(8)whereγ = (γ 1 + γ 2 (m/ E ) 2 ).
For n > 1, the above result for t > t D together with the fact that Z = Γ int /H is an increasing function of R during t 0 < t < t D readily implies that Z ≪ 1 and thus axions have never been in equilibrium. For the case of n = 1, we need a bit more discussion about the size ofγ. Obviously at tree level, any nontrivial axion couplings to SU(2) × U(1) non-singlet fields arise as a consequence of SU(2) × U(1) breaking. In other words, tree level axion couplings to normal fields are induced by the mixing with the Higgs doublets. As a result, tree level axion couplings can be described effectively by dimensionless coupling constants which are of order m/F a where m corresponds to the mass of the particle that couples to the axion. This means that the energy dependent part of the axion cross section, i.e. the γ 2 -part, is due to tree level axion couplings, while the energy independent γ 1 -part is due to the loop-induced axion couplings like αs 4πFa aG µνG µν . As a result,γ is suppressed either by the loop factor ( 1 8π 2 ) 2 or by the relativistic factor (m/ E ) 2 . Then we can safely takeγ < ∼ 1, implying Z ≪ 1 for the case of n = 1 also.
In the above, we have argued that Z = Γ int /H ≪ 1 and thus axions have never been in thermal equilibrium. Then the axion energy density at nucleosynthesis is given by Eq. (7) and the NS bound (4) leads to
B_a/(1 − B_a) ≤ 0.24 (δN_ν/1.5) [g_{RH}/(43/4)]^{1/3}.  (9)
The above nucleosynthesis limit on B a depends mildly upon the reheat temperature T RH through the factor (g RH /g NS ) 1/3 where g NS ≡ g * (T NS ) = 43/4, while it is rather sensitive to the discordant number δN ν which is presumed here to be in the range 0.1 ∼ 1.5 [12,13].
For T RH above 0.2 GeV but below the superparticle mass, we have g RH /g NS = 6 ∼ 10, while g RH /g NS = 1 ∼ 3 for T RH < 0.2 GeV. We thus have just a factor two variation of the limit when T RH varies from the lowest allowed value 6 MeV [16] to the superparticle mass of order 100 GeV. In summary, the NS limit (9) indicates that we need to tune the effective branching ratio B a to be less than 1/3 ∼ 0.02 for δN ν = 0.1 ∼ 1.5.
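The limit (9) is easy to tabulate; the short sketch below solves it for B_a for the δN_ν values and g_RH/g_NS ratios quoted in the text (the pairing of the two is for illustration only).

```python
# Upper bound on B_a/(1 - B_a) from Eq. (9), then solved for B_a itself.
def B_a_max(delta_N_nu, g_ratio):
    r = 0.24 * (delta_N_nu / 1.5) * g_ratio ** (1.0 / 3.0)   # bound on B_a/(1 - B_a)
    return r / (1.0 + r)

for dN in (0.1, 0.3, 1.5):
    for g_ratio in (1.0, 3.0, 10.0):                          # g_RH / (43/4)
        print(f"dN_nu = {dN:>3}, g_RH/g_NS = {g_ratio:>4}: "
              f"B_a < {B_a_max(dN, g_ratio):.3f}")
```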
Implication on flaton couplings. We now discuss the implications of the NS bound (9) for generic supersymmetric axion models with a radiative mechanism generating the axion scale. Since the models under consideration involve too many unknown free parameters, we just examine how plausible it is that the NS limit (9) is satisfied for the unknown parameters simply taking their natural values. To proceed, let us write the flaton couplings responsible for the flaton decay as
L ϕ = L P Q + L SSM ,(10)
where L P Q describes the couplings to the fields in PQ sector, while L SSM describes the couplings to the fields in supersymmetric standard model (SSM) sector. Schematically L P Q is given by
L_{PQ} = (φ/2F_a) [ M_φ^2 a^2 + M_{φ'}^2 φ'^2 + (M_{φ̃} φ̃ φ̃ + h.c.) ]  (11)
where a, ϕ ′ , andφ denote the axion, other flaton, and flatino respectively. In the above, we have ignored the model-dependent dimensionless coefficients of each terms which are of order unity in general.
The flaton couplings to SSM fields are more model dependent. In DFSZ type models, flaton couplings to the SSM sector are essentially due to the mixing with the Higgs doublets.
Then flaton couplings can be read off by making the replacement
H_i → v_i + (x_{ij} v_j/F_a) φ,  (12)
where v i = H i , x ij 's are model-dependent coefficients which are generically of order unity.
Then again schematically
L_{SSM} = (φ/F_a) [ (M_1 χχ' + M_2 χλ + h.c.) + M_3^2 A_µ A^µ + M_4^2 z z' + M_5^2 |z|^2 ],  (13)
where z and z ′ denote spin zero fields in the SSM, e.g. squarks, sleptons and Higgs, with their fermionic partners χ and χ ′ , while (A µ , λ) stands for the gauge multiplets which become massive due to the Higgs doublets VEVs, i.e W and Z. The order of magnitude estimate of the dimensionful coefficients leads to:
M 1 ≃ M χ , M 2 ≃ M 3 ≃ M W , M 2 4 ≃ M χ (A + µ cot β), and M 2 5 ≃ M 2 χ + M 2 W cos 2β,
where M χ and M W denote the masses of χ and W respectively, tan β = v 2 /v 1 , and again we have ignored the coefficients of order unity.
As is well known, besides DFSZ type models there is another interesting class of axion models known as hadronic axion models. In hadronic axion models, all SSM fields carry vanishing PQ charge, and as a result flaton couplings to SSM fields appear only as loop effects. As an example of a supersymmetric hadronic axion model with a mechanism generating the axion scale radiatively, let us consider a model with
W = k φ_1^{n+2} φ_2/M_P^n + h_Q Q Q^c φ_1 + · · ·,  (14)
where φ 1,2 are gauge singlet flatons, and Q and Q c stand for additional heavy quark and antiquark superfields. Again the soft mass-squared of φ 1 becomes negative at scales around F a by the radiative corrections involving the strong Yukawa coupling h Q QQ c φ 1 , thereby generating the axion scale as φ 1 ≃ φ 2 ≃ F a . A peculiar feature of this type of hadronic axion models is that at tree level flatons do not couple to SSM fields, while there are nonzero couplings to PQ fields as Eq. (11). Flaton couplings to SSM fields are then induced by the loops of Q and Q c , yielding
L_{SSM} = (α_s/2π)(φ/F_a) [ (1/4) G^a_{µν} G^{aµν} + i λ̄^a γ·∂ λ^a ],  (15)
where (G^a_{µν}, λ^a) denotes the gluon supermultiplet (and possibly other gauge multiplets), and again we have ignored dimensionless coefficients of order unity. For a flaton-axion coupling given by x_a M_φ^2 φ a^2/2F_a, with x_1 denoting the corresponding coefficient of the flaton coupling to the lighter stop, we find
Γ_a/Γ_{t̃_1} = (1/32)(x_a/x_1)^2 (2M_φ/M_{t̃_2})^4 [1 − 4M_{t̃_1}^2/M_φ^2]^{−1/2}.  (16)
This shows that B_a can be smaller than about 10^{-2} for the parameter range x_a ≈ x_1 and 4M_{t̃_1}^2 < M_φ^2 < M_{t̃_2}^2/4.
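Plugging numbers into Eq. (16) shows how small B_a can be in this window. The sketch below assumes x_a = x_1 and that only the φ → 2a and φ → t̃_1 t̃_1^* channels matter; the stop masses used are illustrative assumptions.

```python
import math

def width_ratio(M_phi, M_st1, M_st2, x_a=1.0, x_1=1.0):
    """Gamma(phi -> aa) / Gamma(phi -> stop1 stop1*) from Eq. (16)."""
    phase_space = (1.0 - 4.0 * M_st1**2 / M_phi**2) ** -0.5
    return (1.0 / 32.0) * (x_a / x_1)**2 * (2.0 * M_phi / M_st2)**4 * phase_space

for M_phi in (300.0, 400.0):
    r = width_ratio(M_phi, M_st1=120.0, M_st2=1000.0)   # stop masses are assumptions
    B_a = r / (1.0 + r)                                  # if these two channels dominate
    print(f"M_phi = {M_phi:.0f} GeV: Gamma_a/Gamma_stop ~ {r:.1e}, B_a ~ {B_a:.1e}")
```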
Relaxation of the bound on F_a. The reheat temperature cannot be arbitrarily low if it is to be compatible with big-bang nucleosynthesis. Since flatons produce a large number of hadrons, the bound T_{RH} > 6 MeV has to be met [14]. With Eq. (5), this leads to the upper bound:
F_a ≲ 2 × 10^{14} (0.1/B_a)^{1/2} (M_φ/300 GeV)^{3/2} GeV.  (17)
Once one uses the relation F_a ≃ (M_φ M_P^n)^{1/(n+1)}, this means that only n = 1, 2, and 3 are allowed by big-bang nucleosynthesis.
As is well known, another upper bound on the axion scale can be derived by requiring that the coherent axion energy density produced by an initial misalignment should not exceed the critical density [15]. If there is no entropy production after the axion starts to oscillate at around T ≃ 1 GeV, this leads to the usual bound F_a ≲ 10^{12} GeV. When n = 2 or 3, the corresponding axion scale F_a ≃ (M_φ M_P^n)^{1/(n+1)} would exceed this bound. However, in this case the reheat temperature (5) goes below 1 GeV. Then the coherent axions may be significantly diluted by the entropy dumped from flaton decays, thereby allowing F_a much bigger than 10^{12} GeV [8].
Axion production in a matter-dominated universe, e.g. a flaton-oscillation dominated universe, has been considered in Refs. [14,16] assuming m_a(T) ∝ T^{-4}. For our computation, we take the power-law fit of the temperature-dependent axion mass [17]:
m_a(T) ≃ 7.7 × 10^{-2} m_a(T = 0) (Λ_{QCD}/T)^{3.7}.
Axion oscillation starts at T_a, for which m_a(T_a) = 3H(T_a).
We refer the reader to Ref. [18] for the available formulae. If T_a > T_{RH}, the coherent axion energy density is diluted by the entropy produced between T_a and T_{RH}. At the end of the entropy dumping (around T_{RH}), the coherent axion number density in units of the entropy density is given by Y_f ≃ θ^2 F_a^2 m_a(T_a) R_a^3/S_f, where θ denotes the initial misalignment angle of the axion field, R_a is the scale factor at T_a, and S_f is the total entropy at T_{RH}. The ratio of the axion energy density to the critical energy density at present is given by
Ω_a h_{50}^2 ≃ 3.3 × 10^{17} (F_a/10^{12} GeV)^{1.5} (Γ_φ/GeV)^{0.98} (Λ_{QCD}/200 MeV)^{−1.9} ≃ (0.1/B_a) (10^{12} GeV/F_a)^{0.44} (M_φ/300 GeV)^{2.9} (Λ_{QCD}/200 MeV)^{−1.9},  (19)
where we have used Γ_φ ≃ B_a^{−1} M_φ^3/(32π F_a^2).
The above result is valid only for n ≥ 2, which yields T_{RH} < T_a. As we have anticipated, it shows that the case of n = 2 or 3 with F_a ≃ (M_φ M_P^n)^{1/(n+1)} yields a coherent axion energy density not exceeding the critical density, although the corresponding F_a exceeds 10^{12} GeV. Furthermore, in this case of n = 2 or 3, axions can be a good dark matter candidate for an appropriate value of M_φ, which was not possible for n = 1.
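The second form of Eq. (19) can be evaluated directly; the sketch below does so for n = 2 and 3, with F_a tied to M_φ through Eq. (3). The dropped O(1) coefficients in Eq. (3), B_a = 0.1, and Λ_QCD = 200 MeV are assumptions, so the numbers only indicate where Ω_a becomes of order unity.

```python
M_P = 2.4e18          # reduced Planck mass [GeV]
B_a, Lam = 0.1, 0.2   # effective branching ratio and Lambda_QCD [GeV] (assumed)

def F_a(M_phi, n):
    return (M_phi * M_P**n) ** (1.0 / (n + 1))      # Eq. (3), O(1) coefficients dropped

def omega_a_h2_50(M_phi, n):
    """Second form of Eq. (19)."""
    return ((0.1 / B_a) * (1.0e12 / F_a(M_phi, n)) ** 0.44
            * (M_phi / 300.0) ** 2.9 * (Lam / 0.2) ** -1.9)

for n in (2, 3):
    for M_phi in (100.0, 300.0, 1000.0):
        print(f"n = {n}, M_phi = {M_phi:>5.0f} GeV: "
              f"Omega_a h^2_50 ~ {omega_a_h2_50(M_phi, n):.2g}")
```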
We remark that diluting the coherent axions with T RH < T a is allowed only when R-parity is broken. If not, stable lightest supersymmetric particles (LSP) produced after the flaton decay would overclose the universe. This can be avoided if the reheat temperature is bigger than the decoupling temperature of LSP which is typically M LSP /20. However this is usually above 1 GeV, i.e. above T a . Consequently, the usual upper bound F a < ∼ 10 12 GeV can not be relaxed when the reheat temperature is bigger than M LSP /20. We stress here that even when R-parity is broken and thus LSP cannot be a dark matter candidate, coherent axions can be a viable dark matter candidate when n = 2 or 3.
Baryogenesis. Thermal inflation driven by PQ flatons may dilute away any pre-existing baryon asymmetry. However, PQ flatons themselves can produce baryon asymmetry after the reheating through the DH mechanism [9]. A complicated Affleck-Dine type baryogenesis after thermal inflation has also been explored in Ref. [19]. Our previous discussion of flaton couplings in DFSZ type models indicates that flatons going to stops can be the most efficient decay channel. The decay-produced stops subsequently decay to generate a baryon asymmetry provided that the baryon-number violating operator, e.g., λ ′′ 332 U c 3 D c 3 D c 2 and the corresponding complex trilinear soft-term are present. Note that the PQ symmetry [see Eq. (1)] can be arranged so that dangerous lepton-number violating operators LQD c , LLE c are forbidden for the proton stability.
In order for the baryon asymmetry not to be erased the reheat temperature (5) has again to be less than few GeV [9]. This again means that the DH mechanism can work only for n = 2 or 3 [see Eqs. (3) and (5)]. The produced baryon asymmetry is
\[
\eta \equiv \frac{n_B}{n_\gamma} \simeq 5.3\,\frac{T_{\rm RH}}{M_\phi}\,\Delta B, \tag{20}
\]
where ∆B is the baryon asymmetry generated by each flaton decay into stop-antistop pair.
Using Eq. (5) and the estimate of ∆B given in [9], we find
\[
\frac{\eta}{3\times 10^{-10}} \simeq |\lambda''_{332}|^{2}\left(\frac{\arg(A m^*_{1/2})}{10^{-2}}\right)\left(\frac{0.1}{B_a}\right)^{1/2}\left(\frac{10^{14}\ {\rm GeV}}{F_a}\right)\left(\frac{M_\phi}{300\ {\rm GeV}}\right)^{1/2}, \tag{21}
\]
where arg(Am * 1/2 ) denotes the CP violating relative phase which is constrained to be less than 10 −2 for superparticle masses of order 100 GeV [20]. For n = 3, the desired amount of baryon asymmetry can be achieved only when λ ′′ 332 is of order unity, while for n = 2 it can be done with a smaller λ ′′ 332 .
In conclusion, we have examined some cosmological consequences of supersymmetric axion models in which the axion scale is radiatively generated as F_a ≃ (m_{3/2} M_P^n)^{1/(n+1)}. In such models, the early universe inevitably experiences a period dominated by the coherent oscillation of PQ flatons, which start to oscillate at a temperature around m_{3/2}. Then a significant amount of oscillating PQ flatons can decay into axions, thereby yielding a too large axion energy density at the time of nucleosynthesis. This consideration puts a limit on the effective branching ratio B_a measuring how large a fraction of oscillating flatons is converted into axions: it should be less than 1/3 ∼ 0.02 depending upon our choice of the allowed extra number of neutrino species δN_ν = 0.1 ∼ 1.5. Models of hadronic axions with a radiative mechanism would yield B_a ≳ 0.5 unless the flaton coupling to the lightest flatino is unusually large. This is essentially because the flaton couplings to the SSM are loop suppressed compared to the couplings to the PQ sector. DFSZ type models with a radiative mechanism are more interesting since they can provide a rationale for the size of the µ term (and also the scale for neutrino masses). If the flaton mass M_ϕ ≫ M_SSM, where M_SSM denotes the typical mass in the supersymmetric standard model, DFSZ type models would also suffer from the same difficulty as hadronic axion models. However, for M_ϕ comparable to M_SSM, requiring B_a to be about 1/10 does not provide any meaningful constraint on DFSZ type models. If one wishes to achieve a smaller B_a, say about 10^-2 in DFSZ type models, one then needs a kind of tuning of the model. Flaton decay into the lighter stops is then picked out as one of the efficient decay channels leading to such a small B_a provided 4M²_{t̃_1} < M²_ϕ < (1/4)M²_{t̃_2}. Another interesting cosmological consequence of decaying flatons is the relaxation of the cosmological upper bound on the axion scale. For an axion scale bigger than 10^12 GeV, the entropy production by PQ flatons ends after the axion field starts to oscillate due to QCD instanton effects, thereby diluting the coherent axion energy density in a rather natural way.
With this late time entropy production by PQ flatons, the upper bound on the axion scale F a can be pushed up to about 10 15 GeV, but at the expense of breaking R-parity to avoid a too large mass density of relic LSP. Then the integer n which determines the axion scale in terms of m 3/2 and M P can take n = 1, 2 or 3.
It is likely that any pre-existing baryon asymmetry is completely diluted by the huge entropy dumping in thermal inflation scenario. As the PQ flatons are expected to decay dominantly into stops, the DH mechanism for the late time baryogenesis can work in a natural manner when n = 2 or 3 so that the reheat temperature does not exceed 1 GeV. With broken R-parity, LSP is no more stable and can not be a dark matter candidate. In this scenario, coherent axions can provide a critical mass density of the universe by saturating the cosmological bound on F a which now can be as large as 10 15 GeV.
Interestingly enough, we now observe that the case of n = 2 or n = 3 provides a very concordant cosmological scenario: (i) a proper baryon asymmetry is generated by the DH mechanism using the baryon-number violating interaction λ'' U^c D^c D^c, (ii) potentially dangerous coherent axions (with F_a ≫ 10^12 GeV) are diluted by the late time entropy production, and (iii) both the baryogenesis and the axion dilution require R-parity to be broken, and then the diluted coherent axions constitute dark matter in the universe.
Such models typically contain two basic mass scales, M P and the scale of local supersymmetry breaking M S in the hidden sector leading to m 3/2 = M 2 S /M P = 10 2 ∼ 10 3 GeV. Supergravity interactions then generate soft supersymmetry breaking terms in the supersymmetric standard model sector which are of order m 3/2 . In this scenario, radiative corrections to the Higgs doublet mass-squared associated with the large top quark Yukawa coupling can naturally lead to the electroweak symmetry breaking at the scale M W ≃ m 3/2 . When the PQ fields which are responsible for the spontaneous violation of U(1) P Q correspond to flat directions of the model, the intermediate axion scale F a can also be radiatively generated in terms of M P and m 3/2 .
It is now easy to notice that, due to the loop suppression in L_SSM, most oscillating flatons in hadronic axion models decay first into either axion pairs, lighter flaton pairs, or flatino pairs, as long as the decays are kinematically allowed. Lighter flatons would experience similar decay modes, while flatinos decay into an axion plus a lighter flatino. Then in the first round of reheating, flatons are converted into either axions or the lightest flatinos. The lightest flatinos will eventually decay into SSM particles. Because of kinematical reasons, e.g. the mass relation M_ϕ > 2M_φ̃ and the phase space suppression factor (1 − 4M²_φ̃/M²_ϕ)^{1/2} in the decay ϕ → 2φ̃, more than half of the original flatons would be converted into axions, i.e. the effective branching ratio B_a ≳ 1/2, unless the flaton coupling to the lightest flatino is unusually large. This is in conflict with the NS limit (9) even for the most conservative choice δN_ν = 1.5, implying that hadronic axion models with a radiative mechanism can be compatible with big-bang nucleosynthesis only when the models are tuned to have an unusually large flaton coupling to the lightest flatino. In DFSZ type models, flatons have tree level couplings to SSM fields which are of order M_SSM/F_a or M²_SSM/F_a, where M_SSM collectively denotes the mass parameters in the SSM, e.g. M_t, M_W, µ, A, and so on [see Eq. (13) and the discussions below it]. Thus if M_ϕ ≫ M_SSM, the reheating procedure would be similar to that of hadronic axion models and then the NS limits provide a meaningful constraint on the flaton couplings to the PQ sector. For the case that M_ϕ is comparable to M_SSM, the most conservative choice of δN_ν = 1.5 would not provide any meaningful restriction on DFSZ type models. However, it is still nontrivial to achieve B_a significantly smaller than 1/10. A careful examination of the flaton couplings in DFSZ type models suggests that, among the decays into SSM particles, the decay channels to the top (t) and/or stop (t̃) pairs are most important. The flaton coupling to the top (stop) is of order M_t/F_a (M²_t̃/F_a), while the coupling to the axion is of order M²_ϕ/F_a. As a result, B_a significantly smaller than 1/10 implies that the flaton couplings to the top and/or stop are unusually large in view of the relation M_ϕ > 2M_t (2M_t̃). One efficient way to achieve such a small B_a is to assume that there is a sort of mass hierarchy between the lighter stop mass-squared M²_{t̃_1} and the heavier stop mass-squared M²_{t̃_2}, allowing for instance 4M²_{t̃_1} < M²_ϕ < (1/4)M²_{t̃_2}. This would be the case when M̃²_t + M²_t ≃ M̃²_{t^c} + M²_t ≃ M_t(A + µ cot β), where M̃²_t and M̃²_{t^c} denote the soft squark masses. Since the flaton couplings to stops are determined not only by the mass parameters (e.g. M_t and A) but also by the additional dimensionless parameters x_ij defined in Eq. (12), the flaton coupling to the lighter stop t̃_1 would be of order M²_{t̃_2}/F_a, not of order M²_{t̃_1}/F_a. To be more explicit, let us write this coupling as x_1 M²_{t̃_2} ϕ|t̃_1|²/F_a.
References
R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. 38 (1977) 1440; Phys. Rev. D 16 (1977) 1791.
J. E. Kim, Phys. Rev. Lett. 43 (1979) 103;
M. A. Shifman, V. I. Vainstein and V. I. Zakharov, Nucl. Phys. B 166 (1980) 493.
A. R. Zhitnitskii, Sov. J. Nucl. Phys. 31 (1980) 260;
M. Dine, W. Fischler and M. Srednicki, Phys. Lett. B 104 (1981) 199.
For a review see, J. E. Kim, Phys. Rep. 150 (1987) 1;
E. W. Kolb and M. S. Turner, The Early Universe (Addison-Wesley, 1990).
H. Murayama, H. Suzuki and T. Yanagida, Phys. Lett. B 291 (1992) 418.
D. H. Lyth and E. D. Stewart, Phys. Rev. Lett. 75 (1995) 201, and references therein.
E. J. Chun and A. Lukas, Phys. Lett. B 357 (1995) 43.
M. Dine and W. Fischler, Phys. Lett. B 120 (1983) 137;
P. J. Steinhardt and M. S. Turner, Phys. Lett. B 129 (1983) 51.
S. Dimopoulos and L. J. Hall, Phys. Lett. B 196 (1987) 135.
J. E. Kim and H. P. Nilles, Phys. Lett. B 138 (1984) 150;
E. J. Chun, J. E. Kim and H. P. Nilles, Nucl. Phys. B 370 (1992) 105.
D. H. Lyth and E. D. Stewart, Phys. Rev. D 53 (1996) 1784.
T. P. Walker, G. Steigman, D. N. Schramm, K. A. Olive and H. Kang, Astrophys. J. 376 (1991) 51;
P. Kernan and L. Krauss, Phys. Rev. Lett. 72 (1994) 3309;
K. A. Olive and G. Steigman, Phys. Lett. B 354 (1995) 357.
C. J. Copi, D. N. Schramm and M. S. Turner, Phys. Rev. Lett. 75 (1995) 3981, and FERMILAB-Pub-96/122-A, astro-ph/9606059; P. J. Kernan and S. Sarkar, CWRU-P3-96, astro-ph/9603045; C. Y. Cardall and G. M. Fuller, astro-ph/9603071.
G. Lazarides, R. K. Schaefer, D. Seckel and Q. Shafi, Nucl. Phys. B 346 (1990) 193.
J. Preskill, M. Wise and F. Wilczek, Phys. Lett. B 120 (1983) 127;
L. Abbott and P. Sikivie, ibid. (1983) 133;
M. Dine and W. Fischler, ibid. (1983) 137.
G. Lazarides, C. Panagiotakopoulos and Q. Shafi, Phys. Lett. B 192 (1987) 323.
M. S. Turner, Phys. Rev. D 33 (1986) 889;
D. J. Gross, R. D. Pisarski and L. G. Yaffe, Rev. Mod. Phys. 53 (1981) 43.
R. J. Scherrer and M. S. Turner, Phys. Rev. D 31 (1985) 681.
E. D. Stewart, M. Kawasaki and T. Yanagida, RESCEU-8-96, hep-ph/9603324.
J. Ellis, S. Ferrara and D. V. Nanopoulos, Phys. Lett. B 114 (1982) 231;
W. Buchmueller and D. Wyler, Phys. Lett. B 121 (1983) 321;
J. Polchinski and M. Wise, Phys. Lett. B 125 (1983) 393;
F. del Aguila, M. Gavela, J. Grifols and A. Mendez, Phys. Lett. B 126 (1983) 71.
| []
|
[
"Testing Multiple Inequality Hypotheses : A Smoothed Indicator Approach *",
"Testing Multiple Inequality Hypotheses : A Smoothed Indicator Approach *"
]
| [
"Le-Yu Chen \nInstitute of Economics\nDepartment of Economics\nAcademia Sinica\nUniversity College London\n\n",
"Jerzy Szroeter \nInstitute of Economics\nDepartment of Economics\nAcademia Sinica\nUniversity College London\n\n"
]
| [
"Institute of Economics\nDepartment of Economics\nAcademia Sinica\nUniversity College London\n",
"Institute of Economics\nDepartment of Economics\nAcademia Sinica\nUniversity College London\n"
]
| []
| This paper proposes a class of origin-smooth approximators of indicators underlying the sum-of-negative-part statistic for testing multiple inequalities. The need for simulation or bootstrap to obtain test critical values is thereby obviated. A simple procedure is enabled using fixed critical values. The test is shown to have correct asymptotic size in the uniform sense that supremum finite-sample rejection probability over null-restricted data distributions tends asymptotically to nominal significance level. This applies under weak assumptions allowing for estimator covariance singularity. The test is unbiased for a wide class of local alternatives. A new theorem establishes directions in which the test is locally most powerful. The proposed procedure is compared with predominant existing tests in structure, theory and simulation. | 10.1016/j.jeconom.2013.10.004 | [
"https://arxiv.org/pdf/1206.6053v1.pdf"
]
| 55,939,423 | 1206.6053 | 6ffb2a1aad42bcedf605bc2dd424b5d8ba84576a |
Testing Multiple Inequality Hypotheses : A Smoothed Indicator Approach *
26 Jun 2012 Revision : June 2012
Le-Yu Chen
Institute of Economics
Department of Economics
Academia Sinica
University College London
Jerzy Szroeter
Institute of Economics
Department of Economics
Academia Sinica
University College London
Testing Multiple Inequality Hypotheses : A Smoothed Indicator Approach *
26 Jun 2012, Revision: June 2012. arXiv:1206.6053v1 [stat.ME]. Keywords: Test; Multiple inequalities; One-sided hypothesis; Composite null; Binding constraints; Asymptotic exactness; Covariance singularity; Indicator smoothing
This paper proposes a class of origin-smooth approximators of indicators underlying the sum-of-negative-part statistic for testing multiple inequalities. The need for simulation or bootstrap to obtain test critical values is thereby obviated. A simple procedure is enabled using fixed critical values. The test is shown to have correct asymptotic size in the uniform sense that supremum finite-sample rejection probability over null-restricted data distributions tends asymptotically to nominal significance level. This applies under weak assumptions allowing for estimator covariance singularity. The test is unbiased for a wide class of local alternatives. A new theorem establishes directions in which the test is locally most powerful. The proposed procedure is compared with predominant existing tests in structure, theory and simulation.
Introduction
This paper is concerned with the problem of testing the null hypothesis H 0 that the true value of a finite p-dimensional parameter vector µ is non-negative versus the alternative that at least one element of µ is strictly negative. A major problem for testing such hypotheses has been dependence of null rejection probability on the unknown subset of binding inequalities (zero-valued µ j ). Under H 0 , the asymptotic distribution of a nontrivial test statistic is typically degenerate at interior points (all elements of µ strictly positive) of parameter space. But at boundary points (one or more elements zero), that distribution is non-degenerate and may depend on the number and position of the zero elements but not on strict positives. In consequence, determining the critical value to be used for the test at some nominal significance level α is a nontrivial issue. The classic least favorable configuration (LFC) approach seeks the parameter point in the null that maximizes the rejection probability (e.g., see Perlman (1969) and Robertson, Wright and Dykstra (1988)). This principle risks yielding tests which have comparatively low power against sequences of alternatives converging to boundary points which are not LFC. To improve test power, recent literature has proposed using data-driven selection of the true binding inequalities in place of the LFC point to compute test critical values. Whatever the critical value, it is important to demonstrate that null rejection probability does not exceed α uniformly over all H 0 -compliant data generating processes for sample size large enough. Such uniformity has been emphasized in recent literature (e.g., see Mikusheva (2007), Romano and Shaikh (2008), Andrews and Guggenberger (2009), Andrews and Soares (2010) and Linton et al. (2010)) to ensure validity of asymptotic approximation to actual finite sample test size especially when the test statistic has a limiting distribution which is discontinuous on parameter space. Regardless of whether the binding inequalities are fixed according to the LFC or determined via a stochastic selection mechanism, the functional forms of test statistics proposed in this literature are generally non-smooth and hence computation of test critical values requires simulation or bootstrap.
The contributions of the present paper are as follows. We develop a multiple inequality test whose implementation does not require computer intensive methods. The central idea is to construct a sequence of origin-smooth approximators of indicators underlying the sum-ofnegative-part statistic for testing multiple inequalities. The approximation is a form of indicator smoothing in the spirit of Horowitz (1992), enabling standard asymptotic distribution results and obviating simulation and bootstrap computation of test critical values. Moreover, the test allows for estimator covariance singularity.
The test statistic of this paper has a non-degenerate asymptotic distribution of simple analytic form at boundary points of the null hypothesis but becomes degenerate at interior points. Despite this type of discontinuity, the test critical value can be fixed ex ante without compromising asymptotic validity in the uniform sense that the limit of finite sample test size (defined as supremal rejection probability over all H 0 -compatible data generating processes) is equal to the nominal size. We prove that this uniformity property holds for every approximator in a wide class allowed by the paper.
The smoothing design of this paper embodies a data driven weighting scheme which automatically concentrates the test statistic onto those parameter estimates signaling binding inequalities.
This feature is connected to methods of binding inequality selection used in Hansen (2005), Chernozhukov et al. (2007), Andrews and Soares (2010) and Linton et al. (2010). Indeed, the smoother can also be interpreted as an asymptotic selector and the key component of our test statistic coincides with the sum of elements of the difference between the estimated and recentered nullcompatible mean used to obtained the simulated test critical values for Andrews and Soares (2010)'s generalized moment selection (GMS) based tests. The difference itself, however, is not within the class of test statistics covered by the theory of these authors but its properties emerge from the theory developed in the present paper.
The relative computational ease of the test of this paper might be expected to carry a cost in terms of power. However, as we show, the test is consistent against all fixed alternatives and is unbiased for a wide class of local alternatives. In comparison with existing tests, its relative strength varies with the particular direction of local alternative. We provide a new theorem establishing directions in which the test is locally most powerful. Monte Carlo results support the theory and reveal that finite sample performance of the present test is not dominated by the GMS based tests.
We now review relevant test methods in addition to the works cited above. The QLR test has been well developed in the inequality test literature. See, e.g., Perlman (1969), Kodde and Palm (1986), Wolak (1987, 1988, 1989, 1991), Gourieroux and Monfort (1995, chapter 27) and Silvapulle and Sen (2005, chapters 3-4). This test is also applied in the moment inequality literature (see Rosen (2008), Andrews and Guggenberger (2009) and Andrews and Soares (2010)). The asymptotic null distribution of the QLR test statistic generally has no analytical form. Since computing this test statistic requires solution of a quadratic optimization program subject to non-negativity constraints, simulation and bootstrapping for the test critical value is particularly heavy.
An extreme value (EV) form of test statistic was developed by White (2000) in the context of comparing predictive abilities among forecasting models. Such a statistic is lighter on computation but its asymptotic null distribution remains non-standard. Hansen (2005) incorporates estimation of actual binding inequalities to bootstrap null distribution of the extreme value statistic. Hansen's refinement is a special case of the GMS based critical value estimation proposed by Andrews and Soares (2010) who also consider a broad class of test functions including both the QLR and other simpler forms using negative-part functions.
The rest of the paper is organized as follows. Section 2 summarizes the method of Andrews and Soares (2010) for testing with estimated critical values which embody the GMS procedure for estimation of binding inequalities. We contrast that with the smoothing approach of this paper and highlight connecting features. Section 3 sets out functional assumptions on the class of smoothers and completes construction of the test statistic. Section 4 states basic distributional assumptions on parameter estimators and presents asymptotic null distribution of the test statistic. Section 5 establishes key results on asymptotic size of the test. Section 6 studies test consistency and local power. Section 7 presents results of some Monte Carlo simulation studies. Section 8 concludes. Appendix A derives the details of an adjustment component of the test statistic. Appendix B provides proofs of theoretical results of the paper. Appendix C gives examples of covariance matrix singularity and illustrates how they can fit into our framework.
Recentering, Selection and Smoothing in Inequality Tests
Let µ = (µ 1 , µ 2 , ..., µ p ) ′ be a column vector of (functions of) parameters appearing in an econometric model. We are interested in testing :
H 0 : µ j ≥ 0 for all j ∈ {1, 2, ..., p} versus H 1 : µ j < 0 for at least one j.
(2.1)
We assume that there exists a vector µ of parameter estimators based on sample size T such that √ T ( µ− µ) is asymptotically multivariate normal with mean 0 and covariance V consistently estimated by V . The vector µ and matrix V may depend on common parameters but this is generally kept implicit for notational simplicity.
Recentering and Generalized Moment Selection in Critical Value Estimation
Recent improved tests developed by Andrews and Soares (2010) of the hypothesis (2.1) are distinguished by their use of estimated critical values embodying a selection rule to statistically decide which inequalities are binding (µ j = 0). In brief, these tests proceed operationally as follows.
A statistic S( √ T µ, V ) is first computed for some fixed function S(., .). The asymptotic critical value of the statistic is then obtained by simulation (or resampling) as the appropriate quantile of the distribution of S(Z + K(T ) µ, V ) where Z is an artificially generated vector such that Z ∼ N (0, V ) conditionally on data, µ is a recentered null-compatible mean and
K(T ) = o( √ T )
is some positive "tuning" function increasing without bound as T → ∞. Basic recentering defines µ_j = 0 for K(T)µ_j ≤ 1. Setting µ_j = 0 amounts to selecting j as the index of a binding constraint. For K(T)µ_j > 1, µ_j is defined to ensure K(T)µ_j → ∞ as T → ∞. Data-dependent selection of binding constraints reduces possible inefficiencies arising from fixing all the elements of µ to be zero (least favorable). On the other hand, regardless of how µ is constructed, simulation (or bootstrap) is still needed since the asymptotic distribution of the statistic used in this literature is generally non-standard. This applies even to test statistics which aggregate individual discrepancy values min(µ_j, 0) in a simple manner. They include the extreme value form studied by Hansen (2005) and the sum
\[
\sum_{j=1}^{p}\left[-\sqrt{T}\min(\mu_j, 0)\right] \tag{2.2}
\]
lying within the very wide class of right-tailed tests studied by Andrews and Soares (2010).
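As a concrete illustration of the recentering rule and the statistic (2.2), the Python sketch below implements basic recentering (keeping the estimate unchanged when K(T) times it exceeds 1, which is one simple choice consistent with the requirement above, not necessarily the exact rule of the cited papers) and evaluates the sum of negative parts. The names and the tuning choice are illustrative assumptions.

```python
import numpy as np

def recentered_mean(mu_hat, K_T):
    """Basic recentering: set the j-th recentered mean to 0 when K(T)*mu_hat_j <= 1,
    otherwise keep a value that diverges with K(T) (here simply mu_hat_j)."""
    return np.where(K_T * mu_hat <= 1.0, 0.0, mu_hat)

def sum_negative_part_stat(mu_hat, T):
    """Statistic (2.2): sum_j [ -sqrt(T) * min(mu_hat_j, 0) ]."""
    return np.sum(-np.sqrt(T) * np.minimum(mu_hat, 0.0))

# toy example
T = 250
mu_hat = np.array([0.02, -0.05, 0.30])
K_T = np.sqrt(np.log(T))            # one tuning choice with K(T) = o(sqrt(T))
print(recentered_mean(mu_hat, K_T))
print(sum_negative_part_stat(mu_hat, T))
```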
The Smoothed Indicator Approach
Let 1{.} denote the indicator taking value unity if the statement inside the bracket is true and zero otherwise. The root cause of non-standard distribution of (2.2) is the discontinuity at the origin of the indicator 1{x ≤ 0} underlying the negative-part function min(x, 0) = 1{x ≤ 0}x.
To overcome this problem, the present paper investigates an indicator smoothing approach as follows.
First, we approximate the function min(x, 0) by Ψ T (x)x where {Ψ T (x)} is a sequence of nonnegative and non-increasing functions each of which is continuously differentiable at the origin and converges pointwise (except possibly at the origin) as T −→ ∞ to the indicator function 1{x ≤ 0}. We refer to Ψ T (x) as an (origin-smoothed) indicator smoother or a smoothed indicator for 1{x ≤ 0}.
In this paper, we will focus on the class of smoothed indicators generated as Ψ T (x) = Ψ(K(T )x) for some fixed function Ψ and a "tuner" K(T ) of the type mentioned in Subsection 2.1. The functional form of Ψ includes decumulative distribution functions for continuous variates as well as discrete yet origin-smooth functions. We therefore replace the individual negative-part statistic √ T min( µ j , 0) of (2.2) by √ T Ψ T ( µ j ) µ j . Subject to regularity conditions set out later, Ψ T ( µ j ) = o p (1/ √ T ) for strictly positive µ j and hence the term √ T Ψ T ( µ j ) µ j vanishes asymptotically. For zero-valued µ j , Ψ T ( µ j ) tends to Ψ(0) in probability and
√ T Ψ T ( µ j ) µ j is asymptotically equivalent to Ψ(0) √ T µ j .
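Before turning to the test statistic itself, here is a minimal numerical illustration of the smoothing idea. With the logistic choice Ψ(x) = (1 + e^x)^{-1} and the illustrative tuner K(T) = T^{0.4} (so that K(T)/√T = T^{-0.1} → 0, consistent with the tuning requirements above), the smoothed term √T Ψ(K(T)x)x approaches √T min(x, 0) for negative x and dies out for positive x as T grows; both the smoother and the tuner here are our own assumptions, not prescriptions from the paper.

```python
import numpy as np

def Psi_logistic(x):
    # logistic smoothed indicator for 1{x <= 0}
    return 1.0 / (1.0 + np.exp(x))

def smoothed_negative_part(x, T, K=lambda T: T ** 0.4):
    # sqrt(T) * Psi(K(T) x) * x, the smoothed analogue of sqrt(T) * min(x, 0)
    return np.sqrt(T) * Psi_logistic(K(T) * x) * x

for T in (100, 10_000, 1_000_000):
    x = np.array([-0.2, 0.0, 0.2])
    print(T, smoothed_negative_part(x, T), np.sqrt(T) * np.minimum(x, 0.0))
```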
Second, we consider a left-tailed test based on the statistic that replaces (2.2) with
\[
\sum_{j=1}^{p}\left[\sqrt{T}\,\Psi_T(\mu_j)\mu_j - \Lambda_T(\mu_j, v_{jj})\right] \tag{2.3}
\]
where v_jj is the jth diagonal element of V and Λ_T is an adjustment term approximating the expectation of [Ψ_T(µ_j) − Ψ(0)]√T µ_j evaluated at µ_j = 0. It is worth noting that the approach to achieving asymptotic normality in this paper is distinct from alternative devices such as those of Dykstra (1991) and Menzel (2008), who demonstrate that even the QLR statistic can be asymptotically normal when p, the dimension of µ, is viewed as increasing with T to infinity. Recent papers by Lee and Whang (2009) and Lee, Song and Whang (2011) obtain asymptotic normality for a class of functional inequality test statistics. Their particular device (poissonization) requires µ to be infinite dimensional at the outset.
By contrast, in the framework of testing a finite and fixed number p of inequalities, the present paper (and its preliminary versions, Chen and Szroeter (2006, 2009) and Chen (2009, Chapter 3), where a prototype asymptotically normal test statistic appears) uses only large-T asymptotics and an indicator smoothing device. The strategy adopted by this work in testing is akin to Horowitz (1992), who sought to resolve non-standard asymptotic behavior in estimation by replacing a discrete indicator function with a smoothed version. Therefore, the smoothing mechanism investigated by this paper to obtain standard asymptotic distribution results could also be of theoretical interest in its own right.
Smoothed Indicator Class and Test Procedure
We now formally set out regularity conditions on the smoothed indicator Ψ T (x), x ∈ R. We
require that Ψ T (x) = Ψ(K(T )x) (3.1)
where Ψ(.) and K(T ) are functions satisfying the following assumptions:
[A1] Ψ(x) is a non-increasing function and 0 ≤ Ψ(x) ≤ 1 for x ∈ R.
[A3] K(T) is positive and increasing in T.
[A4] K(T) → ∞ and K(T)/√T → 0 as T → ∞.
[A5] Ψ(x) → 1 as x → −∞.
[A6] √T Ψ(K(T)x) → 0 as T → ∞ for x > 0.
Assumption [A2] enables smoothing for asymptotic normality through zero-valued µ_j, whilst [A6] creates data-driven importance weighting in the sense that each µ_j corresponding to strictly positive µ_j is likely to contribute ever less to the value of the test statistic as T increases.
In consequence, the statistic will be asymptotically dominated by those µ j corresponding to zero or negative µ j , detection of which is the very purpose of the test.
Although for µ_j = 0, √T Ψ_T(µ_j)µ_j in (2.3) is asymptotically equivalent to Ψ(0)√T µ_j, the difference √T Ψ_T(µ_j)µ_j − Ψ(0)√T µ_j remains nonpositive in large samples. Whilst asymptotically negligible, this may be size-distorting in finite samples. To systematically offset that effect, the adjustment term Λ_T is constructed as follows to approximate the expectation of [Ψ_T(µ_j) − Ψ(0)]√T µ_j.
Under Assumption [A2], there are finite increasing values a_1, ..., a_n for some n ≥ 1 such that Ψ(x) is continuously differentiable in the intervals (−∞, a_1), (a_1, a_2), ..., (a_n, ∞). Because Ψ is bounded and non-increasing, its one-sided limits Ψ(a_i⁻) ≡ lim_{x→a_i⁻} Ψ(x) and Ψ(a_i⁺) ≡ lim_{x→a_i⁺} Ψ(x) for i ∈ {1, 2, ..., n} exist. Let ψ̄(x), x ∈ R, be the "extended" derivative of Ψ, defined as the left-hand limit of the ordinary derivative ψ of Ψ; namely, ψ̄(x) ≡ lim_{y→x⁻} ψ(y). Then the algebraic form of Λ_T, whose detailed derivation is given in Appendix A, can be written as
\[
\Lambda_T(\mu_j, v_{jj}) = v_{jj}\,\bar\psi(K(T)\mu_j)\,\frac{K(T)}{\sqrt{T}} - \sqrt{v_{jj}}\sum_{i=1}^{n}\left(\Psi(a_i^-) - \Psi(a_i^+)\right)\phi\!\left(\frac{a_i\sqrt{T}}{\sqrt{v_{jj}}\,K(T)}\right) \tag{3.2}
\]
where φ is the standard normal density function.
For the simple choice Ψ(x) = 1{x ≤ 1} used to form the statistic (2.4), ψ̄ = 0 and there is a single discontinuity at x = 1, so the proxy simplifies to
\[
\Lambda_T(\mu_j, v_{jj}) = -\sqrt{v_{jj}}\,\phi\!\left(\frac{\sqrt{T}}{\sqrt{v_{jj}}\,K(T)}\right). \tag{3.3}
\]
On the other hand, for everywhere continuously differentiable Ψ, ψ̄(x) = ψ(x) for x ∈ R and Ψ(a_i⁻) = Ψ(a_i⁺) for i ∈ {1, 2, ..., n}. Hence Λ_T in such a case simplifies to
\[
\Lambda_T(\mu_j, v_{jj}) = v_{jj}\,\psi(K(T)\mu_j)\,\frac{K(T)}{\sqrt{T}}. \tag{3.4}
\]
Note that since Ψ is non-increasing, for any T, Λ_T(µ_j, v_jj) given by (3.2) is non-positive by construction. Moreover, under Assumption [A4], Λ_T(µ_j, v_jj) tends to zero in probability as T tends to infinity. Hence for those µ_j ≠ 0, the impact of adjusting √T Ψ_T(µ_j)µ_j with the term Λ_T(µ_j, v_jj) on test behavior is asymptotically negligible, even though the adjustment (3.2) is applied for each j ∈ {1, 2, ..., p}.
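For an everywhere continuously differentiable smoother the adjustment takes the form (3.4). The sketch below evaluates it for the logistic choice Ψ(x) = (1 + e^x)^{-1}, whose derivative is ψ(x) = −e^x/(1 + e^x)²; the tuner value is again only an illustrative assumption. As noted above, the value is non-positive and shrinks as the estimate moves away from zero.

```python
import numpy as np

def psi_logistic(x):
    # derivative of Psi(x) = 1/(1 + exp(x)); always non-positive
    ex = np.exp(x)
    return -ex / (1.0 + ex) ** 2

def Lambda_T_logistic(mu_hat_j, v_hat_jj, T, K_T):
    # adjustment term (3.4) for a smoother that is continuously differentiable everywhere
    return v_hat_jj * psi_logistic(K_T * mu_hat_j) * K_T / np.sqrt(T)

T, K_T = 250, 250 ** 0.4
print(Lambda_T_logistic(0.0, 1.0, T, K_T))   # strictly negative at the boundary value 0
print(Lambda_T_logistic(0.5, 1.0, T, K_T))   # close to zero when the estimate is well above 0
```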
Finally, we consider a further useful generalization by replacing each µ j in (2.3) with θ j µ j for any positive scalar θ j , which can be fixed known or estimated. Choosing θ j to be inverse of the estimated asymptotic standard deviation of µ j amounts to conducting the test on t-ratios. Other choices of θ j are discussed in Appendix C which deals with estimator covariance singularity issues.
With this enhancing feature, the adjustment term Λ T ( µ j , v jj ) is replaced by Λ T ( θ j µ j , θ 2 j v jj ). We now present the test procedure as follows.
Let Ψ, Λ, e p be the p dimensional column vectors and ∆ be the diagonal matrix defined as
\[
\Psi \equiv \left(\Psi(K(T)\theta_1\mu_1), \Psi(K(T)\theta_2\mu_2), ..., \Psi(K(T)\theta_p\mu_p)\right)', \tag{3.5}
\]
\[
\Lambda \equiv \left(\Lambda_T(\theta_1\mu_1, \theta_1^2 v_{11}), \Lambda_T(\theta_2\mu_2, \theta_2^2 v_{22}), ..., \Lambda_T(\theta_p\mu_p, \theta_p^2 v_{pp})\right)', \tag{3.6}
\]
\[
e_p \equiv (1, 1, ..., 1)', \tag{3.7}
\]
\[
\Delta \equiv {\rm diag}(\theta_1, \theta_2, ..., \theta_p). \tag{3.8}
\]
Let
\[
Q_1 \equiv \sqrt{T}\,\Psi'\Delta\mu - e_p'\Lambda, \tag{3.9}
\]
\[
Q_2 \equiv \left(\Psi'\Delta V \Delta\Psi\right)^{1/2}. \tag{3.10}
\]
We define the test statistic as
\[
Q = \begin{cases} \Phi(Q_1/Q_2) & \text{if } Q_2 > 0 \\ 1 & \text{if } Q_2 = 0 \end{cases} \tag{3.11}
\]
where Φ(x) is the standard normal distribution function. For asymptotic significance level α, we
reject H 0 if Q < α.
The test statistic Q is therefore a form of tail probability or p-value.
We now sketch the reasoning which validates the test. Formal theorems are given later.
Intuitively, we should reject H 0 if Q 1 is too small. For those parameter points under H 0 for which the probability limit of Q 2 is nonzero, Q 2 will be strictly positive with probability approaching one. Then the ratio Q 1 /Q 2 will exist and be asymptotically normal. By contrast, for all points under H 1 , the value of Q 1 will go in probability to minus infinity. Therefore, in cases where Q 2 is positive, we propose to reject H 0 if Q 1 /Q 2 is too small compared with the normal distribution.
Note that our assumptions on the smoothed indicators do not rule out discrete but origin-smooth Ψ functions such as the step-at-unity example of Section 7.1. For such a discrete function, Ψ will be a null vector with probability approaching one when all µ_j, j ∈ {1, 2, ..., p}, are strictly positive. In this case, Q_2 is also zero by (3.10) with probability approaching one. Therefore, occurrence of the event Q_2 = 0 is possible and signals that we should not reject H_0. Note that it is not an ad hoc choice to set Q = 1 when Q_2 = 0 occurs, because the probability limit of Φ(Q_1/Q_2) is also one when all µ_j parameters are strictly positive and Ψ is an everywhere positive function. 3
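Pulling (3.5)-(3.11) together, the following Python sketch is one way to compute Q for the logistic smoother with the studentizing weights θ set to inverse estimated standard deviations. It is a minimal rendering of the procedure, not the authors' code: the tuner K(T) = √(ln T), the smoother, and all function and variable names are our own assumptions.

```python
import numpy as np
from scipy.stats import norm

def smoothed_indicator_test(X, alpha=0.05, K=lambda T: np.sqrt(np.log(T))):
    """Test H0: mu_j >= 0 for all j via the smoothed-indicator statistic Q of (3.11).
    X is a (T x p) array of i.i.d. observations; returns (Q, reject)."""
    T, p = X.shape
    mu_hat = X.mean(axis=0)
    V_hat = np.cov(X, rowvar=False)               # estimates V of sqrt(T)(mu_hat - mu)
    v_hat = np.diag(V_hat)
    theta_hat = 1.0 / np.sqrt(v_hat)              # studentizing weights
    K_T = K(T)

    z = K_T * theta_hat * mu_hat
    Psi = 1.0 / (1.0 + np.exp(z))                 # logistic smoothed indicators, cf. (3.5)
    psi = -np.exp(z) / (1.0 + np.exp(z)) ** 2     # derivative of the logistic smoother
    # adjustment vector (3.6) via (3.4), evaluated at (theta_j*mu_j, theta_j^2*v_jj)
    Lam = (theta_hat ** 2 * v_hat) * psi * K_T / np.sqrt(T)

    Q1 = np.sqrt(T) * Psi @ (theta_hat * mu_hat) - Lam.sum()             # (3.9)
    Q2 = np.sqrt(Psi @ (np.diag(theta_hat) @ V_hat @ np.diag(theta_hat)) @ Psi)  # (3.10)
    Q = norm.cdf(Q1 / Q2) if Q2 > 0 else 1.0                             # (3.11)
    return Q, Q < alpha

# toy usage: first component binding, second slack
rng = np.random.default_rng(0)
X = rng.normal(loc=[0.0, 0.3], scale=1.0, size=(250, 2))
print(smoothed_indicator_test(X))
```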
Distributional Assumptions and Asymptotic Null Distribution
We begin by stating the following high-level assumptions which enable us to derive some basic asymptotic properties of the test. Except for [D2], these assumptions are standard.
Define ∆ as the diagonal matrix ∆ ≡ diag(θ 1 , θ 2 , ..., θ p ) where θ j is strictly positive and its estimator θ j is almost surely strictly positive for j ∈ {1, 2, ..., p}. Let d(µ) be defined as the p dimensional vector whose jth element equals 0, Ψ(0), 1 when µ j > 0, µ j = 0, µ j < 0 respectively. For notational simplicity, we keep implicit the possible dependence of the true values of the parameters µ, V and ∆ on the underlying data generating process.
We assume that, as T tends to infinity,
[D1] √ T ( µ − µ) d −→ N (0, V ) where V is some finite positive semi-definite matrix.
The variance V need not be invertible but must satisfy the following condition (whose verification is illustrated in Appendix C).
[D2] V∆d(µ) ≠ 0 for non-zero d(µ).
Assumption [D2] amounts to saying that the asymptotic distribution of
√T d(µ)′Δ(µ̂ − µ) should not be degenerate.
[D3] V̂ →_p V for some almost surely positive semi-definite estimator V̂.
[D4] Δ̂ →_p Δ.
Now let J denote the set {1, 2, ..., p} and decompose this as J = A ∪ M ∪ B, where A ≡ {j ∈ J : µ_j > 0}, M ≡ {j ∈ J : µ_j = 0}, B ≡ {j ∈ J : µ_j < 0}.
Let U (0, 1) denote a scalar random variable that is uniformly distributed in the interval [0, 1]. We now present the asymptotic null distribution of the test statistic.
Theorem 1 (Asymptotic Null Distribution) Given [A1]-[A6] with [D1]-[D4], the following hold under H_0 as T → ∞.
(1) If M ≠ ∅, then Q →_d U(0, 1).
(2) If M = ∅, then Q →_p 1.
Part (1) of this theorem reflects the fact that, for any fixed data generating process whose µ value lies on the boundary of null hypothesis space, the distribution of the test statistic Q is asymptotically non-degenerate and given (3.11), the limiting distribution of the ratio Q 1 /Q 2 is standard normal. This justifies the idea of smoothing for normality. Moreover, Q has the same limiting distribution at each boundary point. Part (2) says that, at any fixed data generating process whose µ value lies in the interior of null hypothesis space, the asymptotic distribution of Q is degenerate and Q will take value above α with probability tending to 1.
Asymptotic Test Size
Pointwise and Uniform Asymptotic Control of Test Size
Theorem 1 shows that the test statistic Q is not asymptotically pivotal since its limiting distribution and hence the asymptotic null rejection probability depend on the true value of µ. By definition, the pointwise asymptotic size of the test is the supremum of the asymptotic rejection probability viewed as a function of µ on the domain defined by H 0 . So Theorem 1 implies that this size equals the nominal level α and hence the test is asymptotically exact in the pointwise sense. However, pointwise asymptotic exactness is a weak property. It is desirable to ensure the convergence of the test size to the nominal level holds uniformly over the null-restricted parameter and data distribution spaces. In this section we present results showing that the test size is asymptotically exact in the uniform sense.
To distinguish between pointwise and uniform modes of analysis, we need some additional notation. Note that parameters such as µ and V are functionals of the underlying data generating distribution. Suppose the data consist of i.i.d. vectors x t (t = 1, ..., T ) drawn from a joint distribution G. We henceforth use the notation P G (.) to make explicit the dependence of probability on G. Let Γ denote the set of all possible G compatible with prior knowledge or presumed specification of the data generating process. Then Assumptions [D1] -[D4] amount to restrictions characterizing the class Γ. Let Γ 0 be the subset of Γ that satisfies the null hypothesis.
In the present test procedure, "Q < α" is synonymous with "Q rejects H 0 ". Hence, the rejection probability of the test is P G (Q < α) and the finite sample test size is sup G∈Γ0 P G (Q < α).
Though Theorem 1 implies that convergence of rejection probability is not uniform over G ∈ Γ_0, the test can be shown to be uniformly asymptotically level α (Lehmann and Romano (2005, p. 422)) in the sense that
\[
\limsup_{T\to\infty}\ \sup_{G\in\Gamma_0} P_G(Q < \alpha) \le \alpha. \tag{5.1}
\]
Inequality (5.1) and Part (1) of Theorem 1 together imply the test size is asymptotically exact in the uniform sense that
\[
\limsup_{T\to\infty}\ \sup_{G\in\Gamma_0} P_G(Q < \alpha) = \alpha. \tag{5.2}
\]
The property (5.2) is important for the asymptotic size to be a good approximation to the finite-sample size of the test. 4 Such a uniformity property has been emphasized in recent literature (e.g., see Mikusheva (2007), Romano and Shaikh (2008), Andrews and Guggenberger (2009) and Andrews and Soares (2010)), particularly when limit behavior of the test statistic can be discontinuous. Accordingly, we establish the validity of (5.2) in Theorem 2.
Before presenting the formal regularity conditions ensuring (5.2), we explain here how (5.2) is possible despite asymptotic non-pivotality of the test statistic. First note that by (3.11),
P G (Q < α) ≤ P G (Q 1 − z α Q 2 < 0) (5.3)
where z α is the α quantile of the standard normal distribution. The transformed statistic (Q 1 − z α Q 2 ) is still not asymptotically pivotal but it can be shown that, given any arbitrary sufficiently small (relative to model constants) positive scalar η, we have with probability at least (1 − η)
for all sufficiently large T that
\[
Q_1 - z_\alpha Q_2 \ge r_T'\sqrt{T}(\hat\mu - \mu) - \left(z_\alpha c_2(\eta) + c_1(\eta)\right)\sqrt{r_T' V r_T}
\]
where r T , µ and V are non-stochastic G-dependent quantities such that either r T = 0 or r ′ T V r T is bounded away from zero over G ∈ Γ 0 , whilst c 1 (η) and c 2 (η) are non-stochastic functions that do not depend on G and c 1 (η) −→ 0 and c 2 (η) −→ 1 as η −→ 0. Therefore,
\[
P_G(Q_1 - z_\alpha Q_2 < 0) \le P_G\!\left(r_T'\sqrt{T}(\hat\mu - \mu) < \left(z_\alpha c_2(\eta) + c_1(\eta)\right)\sqrt{r_T' V r_T}\right) + \eta \tag{5.4}
\]
whose right hand will tend, uniformly over G giving non-zero r T , to Φ(z α c 2 (η) + c 1 (η)) + η which is also automatically a weak upper bound on (5.4) for the case r T = 0. This uniformly valid probability bound therefore applies to (5.3) for arbitrarily small η hence implies that (5.1) holds. Equality is obtained by invoking Theorem 1 which says α is actually attained as the limit of P G (Q < α) evaluated at any fixed G ∈ Γ 0 whose µ has at least a zero-valued element.
The explanation provided above is indicative but short of a formal proof. In the next subsection we present additional "uniform" assumptions, strengthening the existing "pointwise" assumptions [D1] -[D4] of Section 4, that are needed to make the argument rigorous. The full proof, along with examples to illustrate some of the assumptions, will be found in the Appendix B.
Uniform Asymptotic Exactness of Test Size
In this section we rigorously address the issue of asymptotic exactness of test size in the uniform sense given by (5.2). Define
Y ≡ √T(µ̂ − µ),  δ_T ≡ K(T)/√T.
Note that Assumption [A4] implies that δ_T → 0 as T → ∞. For any matrix m, let ‖m‖ ≡ max{|m_ij|}, where m_ij denotes the (i, j)-th element of m.
Assumption [U1] : For any finite scalar value η > 0,
\[
\lim_{T\to\infty}\ \inf_{G\in\Gamma_0} P_G\!\left(\delta_T\|Y\| < \eta,\ \|\hat V - V_G\| < \eta\right) = 1.
\]
Assumption [U2]: Let Φ(.) denote the standard normal distribution function. Then given any finite scalar c,
\[
\lim_{T\to\infty}\ \sup_{G\in\Gamma_0}\ \sup_{\beta:\,\beta' V_G\beta = 1}\left|P_G(\beta' Y \le c) - \Phi(c)\right| = 0.
\]
In the sample mean example, where µ̂ is the vector of sample means of the i.i.d. data x_t and V̂ the corresponding sample covariance estimator, [U1] can be verified by bounding each marginal probability and using
\[
P_G\!\left(\delta_T\|Y\| < \eta,\ \|\hat V - V_G\| < \eta\right) \ge P_G(\delta_T\|Y\| < \eta) + P_G(\|\hat V - V_G\| < \eta) - 1
\]
to deduce that the joint probability tends to one uniformly over G ∈ Γ_0. Assumption [U2] can be verified by showing that
\[
\lim_{T\to\infty}\left|P_{G_T}(\beta_T' Y \le c) - \Phi(c)\right| = 0 \tag{5.6}
\]
for all non-stochastic sequences (G_T, β_T) satisfying G_T ∈ Γ_0 and β_T' V_{G_T}β_T = 1. By the i.i.d. assumption, β_T' Y is 1/√T times the sum of T variates β_T'(x_t − E_{G_T}(x_t)
) which are mutually i.i.d. with mean 0 and variance 1 for each T when β ′ T V GT β T = 1. This meets the requirements of the double array version of the classic Lindeberg-Feller central limit theorem thus establishing asymptotic unit normality of β ′ T Y hence verifying (5.6). For the next assumption, recall that θ j is the jth diagonal element of the matrix ∆. For notational simplicity, the general dependence of θ j and ∆ on G will be kept implicit.
Assumption [U3] : (i) There are finite positive scalars λ and λ ′ such that λ ′ ≤ θ j ≤ λ,
(j = 1, 2, ..., p) uniformly over G ∈ Γ_0. (ii) For any finite scalar value η > 0, lim_{T→∞} inf_{G∈Γ_0} P_G(‖Δ̂ − Δ‖ < ηδ_T) = 1.
Assumption [U3] holds automatically when ∆ is numerically specified by the user, in which case Δ̂ = ∆. It also allows θ_j to be 1/√v_jj, where v_jj is the jth diagonal element of V_G, provided that v_jj is bounded below by some constant, say L > 0, uniformly over G ∈ Γ_0. 6 In such a case,
\[
|\hat\theta_j - \theta_j| \le |\hat v_{jj} - v_{jj}|\,\sqrt{2}\,L^{-3/2} \tag{5.7}
\]
when |v̂_jj − v_jj| < L/2. 7 Hence in the sample mean example described after Assumption [U2], we can verify [U3]-(ii) by applying the Chebychev inequality to show that P_G(|v̂_jj − v_jj| < ηδ_T) also tends to 1 uniformly over G ∈ Γ_0.
For any given positive scalar σ, let d σ (µ) denote the p dimensional vector whose jth element equals Ψ(0) when 0 ≤ µ j ≤ σ and equals 0 otherwise.
Assumption [U4]: There are finite positive real scalars ω, ω′ and σ such that the following hold uniformly over G ∈ Γ_0: (i) ‖V_G‖ < ω. (ii) d_σ(µ)′ΔV_GΔd_σ(µ) > ω′ for all non-zero d_σ(µ).
Assumption [U4]-(i) is simply a boundedness assumption which automatically holds when
V G is a correlation matrix. [U4]-(ii) holds automatically when the smallest eigenvalue of V G is bounded away from zero over G ∈ Γ 0 . Note that [U4]-(ii), essentially strengthening Assumption [D2], requires that the asymptotic variance of √ T d σ (µ) ′ ∆( µ − µ)
be bounded away from zero for all non-zero d σ (µ). This is a high level assumption whose verification will be illustrated in examples of Appendix C.
We can now present the following theorem establishing asymptotic exactness of the test in the uniform sense.
Theorem 2 (Uniform Asymptotic Exactness of Test Size) Under the assumptions stated above, for any 0 < α < 1/2,
\[
\limsup_{T\to\infty}\ \sup_{G\in\Gamma_0} P_G(Q < \alpha) = \alpha.
\]
Asymptotic Power of the Test
In this section, we study the asymptotic power properties of the test. Proofs of all results are presented in the Appendix. For notational simplicity, we suppress the dependence of probability and parameters on the underlying data generating distribution. We first show that the test is consistent against fixed alternative hypotheses.
Footnote 6. Assumption [U3]-(ii) is stronger than requiring consistency of θ̂_j as an estimator of θ_j. An alternative approach is to strengthen Assumption [U2] by taking Y to be √T(Δ̂µ̂ − Δµ) rather than just √T(µ̂ − µ). But that would implicitly assume √T(θ̂_j − θ_j) is asymptotically normal (or degenerate). Such an assumption is even stronger than [U3]-(ii) and quite unnecessary for our results.
Footnote 7. By mean value expansion, |θ̂_j − θ_j| = |v̂_jj − v_jj|/(2|v̄_jj|^{3/2}), where v̄_jj lies between v̂_jj and v_jj. Thus when |v̂_jj − v_jj| < L/2, inequality (5.7) follows by noting that |v̄_jj − v_jj| ≤ |v̂_jj − v_jj|.
Theorem 3 (Consistency) Given [A1]-[A6] with [D1]-[D4], the following is true under H_1: µ_j < 0 for some j ∈ {1, 2, ..., p}:
P(Q < α) → 1 as T → ∞.
Besides consistency, we are also interested in the local behavior of the test. In order to derive a local power function, we consider a sequence of µ values in the alternative-hypothesis space tending at rate T −1/2 to a value γ ≡ (γ 1 , γ 2 , ..., γ p ) ′ on the boundary of the null-hypothesis space. Specifically, we represent the jth element of µ of such a local sequence as
\[
\mu_j = \gamma_j + \frac{c_j}{\sqrt{T}} \tag{6.1}
\]
where γ j ≥ 0 and c j are constants such that γ j = 0 and c j < 0 hold simultaneously for at least one j.
Theorem 4 (Local Power) Given [A1]-[A6] with [D1]-[D4], let µ follow the local sequence (6.1) and define
\[
\tau \equiv \sum_{j=1}^{p} 1\{\gamma_j = 0\}\,\theta_j c_j, \qquad
\kappa \equiv \sum_{i=1}^{p}\sum_{j=1}^{p} 1\{\gamma_i = 0\}\,1\{\gamma_j = 0\}\,\theta_i\theta_j v_{ij},
\]
where v ij denotes the (i, j)-th element of variance matrix V . Assume κ > 0. Then, as T −→ ∞,
P (Q < α) −→ Φ(z α − κ −1/2 τ ), (6.2)
where z α is the α quantile of the standard normal distribution.
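The limiting local power (6.2) is straightforward to evaluate numerically. The sketch below codes the drift τ and scale κ exactly as defined in the theorem; the example values of γ, c, θ and V are our own.

```python
import numpy as np
from scipy.stats import norm

def local_power(gamma, c, theta, V, alpha=0.05):
    """Limiting power (6.2) along the local sequence mu_j = gamma_j + c_j / sqrt(T)."""
    binding = (gamma == 0).astype(float)                  # 1{gamma_j = 0}
    tau = np.sum(binding * theta * c)
    kappa = (binding * theta) @ V @ (binding * theta)
    return norm.cdf(norm.ppf(alpha) - tau / np.sqrt(kappa))

# example: two binding components drifting downward, one slack component
gamma = np.array([0.0, 0.0, 0.5])
c     = np.array([-1.0, -1.0, 0.0])
theta = np.ones(3)
V     = np.eye(3)
print(local_power(gamma, c, theta, V))   # exceeds alpha since tau < 0
```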
Theorem 4 implies that the test has power exceeding size against all core sequences because the composite drift parameter τ is necessarily negative for such local scenarios. By contrast, tests based on LFC critical values can be biased against core local sequences tending to boundary points off the origin. This is easily seen for statistics such as EV and QLR which are continuous in their arguments. In such cases, local power under any core sequence (6.1) tends to rejection probability at the boundary point µ = (γ 1 , γ 2 , ..., γ p ) ′ . Unless this point is the LFC itself, rejection probability there will be smaller than that at any LFC point by definition. Hence the LFC critical value based test is biased against core local alternatives. A similar argument is given in Hansen (2003Hansen ( , 2005.
Against non-core local sequences, our test can be biased because a trade-off comes into force between negative and positive c j as Theorem 4 shows. Some degree of local bias is common in multivariate one-sided tests and exists even in GMS procedures using estimated rather than LFC test critical values, as noted by Andrews and Soares (2010, p.146, comment (vi)). However, the exact local direction at which a test exhibits strength or weakness may vary across tests. Depending on the off-diagonal elements of V , the local directions −δV θ can be for either core or non-core sequences. 8 Theorem 5 implies that along such local alternatives, the present test is not biased and its limiting local power is not dominated by those of existing tests based on GMS critical values. Note that the result of Theorem 5 does not require specification of particular functional forms of S(., .). It is achieved by indirectly exploiting the Neyman-Pearson lemma. Some special forms are used in Section 7 for numerical illustration.
Monte Carlo Simulation Studies
In this section we conduct a series of Monte Carlo simulations to study the finite sample performance of the test. All tables of simulation results are placed together at the end of the section.
The Specification of Smoothed Indicator
Our objective is to investigate how well the asymptotic theory of the test works in finite sample simulations. For this purpose, we choose Ψ functions which are simple, recognized and not contrived. It would be premature at this stage to undertake a more elaborate exercise to find an optimal combination of Ψ(x) and K(T ).
For the specification of Ψ, the following functions are heuristic choices that are widely adopted in research on smoothed threshold crossing models.
Normal: Ψ_Nor(x) ≡ 1 − Φ(x)
Logistic: Ψ_Log(x) ≡ (1 + exp(x))^{-1}
Besides Ψ N or and Ψ Log , the following simple choice of Ψ, mentioned in Section 2.2, is also valid.
Step-at-unity : Ψ Step (x) ≡ 1{x ≤ 1}
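For reference, the three smoothers above can be transcribed directly into a few lines of Python (scipy's norm.sf supplies 1 − Φ); this is only a convenience transcription.

```python
import numpy as np
from scipy.stats import norm

def Psi_normal(x):      # 1 - Phi(x)
    return norm.sf(x)

def Psi_logistic(x):    # (1 + exp(x))^(-1)
    return 1.0 / (1.0 + np.exp(x))

def Psi_step(x):        # 1{x <= 1}: smooth at the origin, discontinuous only at x = 1
    return (x <= 1.0).astype(float)

x = np.linspace(-3, 3, 7)
print(Psi_normal(x), Psi_logistic(x), Psi_step(x), sep="\n")
```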
As regards the choice of K(T), we consider two specifications, denoted K_SIC and K_LIL, that closely match the SIC and LIL type tuning parameters used in the GMS literature.
The Simulation Setup
The simulation experiments are designed as follows. We choose a nominal test size of α = 0.05. We use R = 10000 replications for simulated rejection probabilities. In each replication, we generate i.i.d. observations {x t } T t=1 with T = 250 according to the following scheme :
x t = µ + V 1/2 w t (7.1)
where w t is a p dimensional random vector whose elements are i.i.d. from distribution G w .
We compute µ̂ and V̂ as the sample average and sample variance of the generated data. We take the scalars θ_j = 1/√v_jj and θ̂_j = 1/√v̂_jj, where v_jj and v̂_jj are the jth diagonal elements of V and V̂ respectively. This simple simulation setup is also adopted by Andrews and Soares (2010) and Andrews and Barwick (2012) in simulation studies of the GMS tests. For G_w, we consider three distributions: standard normal, logistic and U(−1, 2), the uniform distribution on the interval [−1, 2]. All of these distributions are centered and scaled such that E(w_t,j) = 0 and Var(w_t,j) = 1 for j ∈ {1, 2, ..., p}. Standard normality of G_w is the benchmark. The logistic distribution has thicker tails than the normal whilst the support of a uniformly distributed random variate is bounded. The latter two distributions are included to assess the test performance under finite sample non-normality of µ̂. For comparison, we also conduct simulations using the following test statistics:
\[
S_1 = -\min\{\sqrt{T}\hat\theta_1\hat\mu_1, \sqrt{T}\hat\theta_2\hat\mu_2, ..., \sqrt{T}\hat\theta_p\hat\mu_p, 0\}, \qquad
S_2 = \min_{\mu:\,\mu\ge 0} T(\hat\mu - \mu)'\hat V^{-1}(\hat\mu - \mu),
\]
\[
S_3 = \sum_{j=1}^{p}\left(\min\{\sqrt{T}\hat\theta_j\hat\mu_j, 0\}\right)^{2}, \qquad
S_4 = \sum_{j=1}^{p}\left[-\sqrt{T}\min(\hat\theta_j\hat\mu_j, 0)\right].
\]
The extreme value form S 1 is essentially Hansen (2005)'s test statistic appropriated for testing multiple non-negativity hypotheses. S 2 is the classic QLR test statistic. The critical values for tests based on S 1 to S 4 are estimated using bootstrap coupled with the GMS procedure of the elementwise t-test type as suggested by Andrews and Soares (2010) and
Andrews and Barwick (2012). We use 10000 bootstrap repetitions for calculation of the GMS test critical values. The tuning parameter in the GMS procedure is set to be the SIC or LIL type (Andrews and Soares (2010, p. 131)). For ease of reference, let S j (SIC) and S j (LIL) denote the GMS test using statistic S j with tuning SIC and LIL respectively. Furthermore, let Q(Ψ, K) denote the present test implemented with its smoothed indicator specified by Ψ and K.
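For completeness, the comparison statistics S_1 to S_4 can be computed as in the sketch below. Solving the QLR program with scipy's bounded L-BFGS-B optimizer is a convenience choice of ours, and the GMS bootstrap critical values are not reproduced here; all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def comparison_stats(mu_hat, V_hat, T):
    theta_hat = 1.0 / np.sqrt(np.diag(V_hat))
    t = np.sqrt(T) * theta_hat * mu_hat
    S1 = -min(t.min(), 0.0)                               # extreme value form
    S3 = np.sum(np.minimum(t, 0.0) ** 2)
    S4 = np.sum(-np.minimum(t, 0.0))
    W = np.linalg.inv(V_hat)
    obj = lambda m: T * (mu_hat - m) @ W @ (mu_hat - m)   # QLR objective
    res = minimize(obj, np.maximum(mu_hat, 0.0),
                   bounds=[(0.0, None)] * len(mu_hat), method="L-BFGS-B")
    S2 = res.fun
    return S1, S2, S3, S4

rng = np.random.default_rng(1)
X = rng.normal(size=(250, 4)) + np.array([0.0, 0.1, 0.2, 0.3])
print(comparison_stats(X.mean(0), np.cov(X, rowvar=False), X.shape[0]))
```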
We consider simulation scenarios based on p ∈ {4, 6, 10}. For multivariate simulation design, we have to be more selective on the specifications of µ and V parameters of (7.1). Concerning the µ vector, we follow a design similar to that previously employed by Hansen (2005, p. 373) in simulation study of the test size performance. To be specific, µ is the p dimensional vector given by
µ 1 = 0, µ j = λ(j − 1)/(p − 1) for p ≥ j ≥ 2
where λ ∈ {0, 0.25, 0.5}. Note that the λ values are introduced to control the extent to which inequalities satisfying the null hypothesis are in fact non-binding. Regarding the variance matrix V , we set V to be a Toeplitz matrix with elements V i,j = ρ j−i for j ≥ i, where ρ ∈ {0, −0.5, 0.5}. This greatly simplifies the specification for off-diagonal elements of V but still allows for presence of various degrees of both positive and negative correlations.
For power studies, we consider the µ vector given by
\[
\mu = -\delta V\theta + \epsilon\tilde\mu \tag{7.2}
\]
where δ ∈ {0.15, 0.1, 0.05}, V is the variance matrix given as above, θ = (θ_1, θ_2, ..., θ_p)′, ǫ ∈ {0, 0.5, 0.8} and µ̃ is the vector with µ̃_j = δ for 1 ≤ j ≤ p/2 and µ̃_j = −δ for p/2 < j ≤ p.
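The design (7.2) is easy to reproduce. The sketch below builds the Toeplitz variance matrix and the perturbed local direction; the helper name and the use of scipy.linalg.toeplitz are our own choices.

```python
import numpy as np
from scipy.linalg import toeplitz

def power_design(p, rho, delta, eps):
    V = toeplitz(rho ** np.arange(p))                     # V_{ij} = rho^{|i-j|}
    theta = 1.0 / np.sqrt(np.diag(V))                     # equals 1 here since diag(V) = 1
    mu_tilde = np.where(np.arange(1, p + 1) <= p / 2, delta, -delta)
    mu = -delta * V @ theta + eps * mu_tilde              # equation (7.2)
    return mu, V

mu, V = power_design(p=4, rho=0.5, delta=0.1, eps=0.5)
print(mu)
```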
Simulation results
We report the simulated maximum null rejection probability (MNRP) and average power (AP)
for each test. Given G w , the maximization for the MNRP is over all H 0 compatible combinations of µ and ρ values whilst given both G w and ǫ, the averaging for AP is over all H 1 compatible µ and ρ configurations. We now turn to Tables 2, 3, 4 giving AP results of the tests. For the unperturbed direction (ǫ = 0), Theorem 5 of Section 6 indicates that the Q(Ψ, K) test is locally more powerful than the GMS tests considered in the simulations. Along such local direction, irrespective of the underlying G w , the simulation results indicate that the Q(Ψ, K) tests dominate the GMS tests in AP performance. The GMS QLR test (S 2 ) is not far behind. Hansen's test (S 1 ), which is arguably the most stable in terms of MNRP performance, has distinctly lower power. But it is still a good performer. For the perturbed directions (ǫ ∈ {0.5, 0.8}), while the Q(Ψ, K) tests still outperform the S 1 tests, they do not generally dominate other versions of the GMS tests but the AP differences are not large.
We comment on the comparative performance of the Q(Ψ, K) tests with the S 4 tests. Their comparison is of particular interest since the present test essentially attempts to smooth the statistic S 4 . The smoothed version is less costly in computation because its critical value is obtained without resampling. We compare S 4 (SIC) with Q(Ψ Step , K SIC ) and Q(Ψ Log , K SIC ).
The simulation results suggest that the Q(Ψ Step , K SIC ) and Q(Ψ Log , K SIC ) tests have similar degree of size control as S 4 (SIC). Against the alternative hypothesis, Q(Ψ Log , K SIC ) has slightly larger power than S 4 (SIC) in all 27 cases while Q(Ψ Step , K SIC ) outperforms S 4 (SIC) in 18 out of the 27 cases. These findings suggest that implementational advantage of the present test based on smoothing does not appear to be achieved at the cost of test performance.
Perusing all the other entries in Tables 2, 3, 4, it seems that the different variants of the Q(Ψ, K) test perform quite similarly to one another retaining power well in excess of 0.73 throughout. What these results illustrate is that the Q(Ψ, K) test has identifiable directions of strength as indicated theoretically by this paper. Given the simulation results above, the Q(Ψ Step , K SIC ) and Q(Ψ Log , K SIC ) tests work at least as well as other Q(Ψ, K) versions examined here but have better size performance. Hence while K SIC is the preferred tuner, both Ψ Step and Ψ Log are the recommended smoothers.
Conclusions
This paper develops a test of multiple inequality hypotheses whose implementation does not require computationally intensive procedures. The test is based on origin-smooth approximation of indicators underlying the sum-of-negative-part statistic. This yields a simply structured statistic whose asymptotic distribution, whenever non-degenerate, is normal under the null hypothesis.
Hence test critical values can be fixed ex ante and are essentially based on the unit normal distribution. Moreover, the test is applicable under weak assumptions allowing for estimator covariance singularity.
We have proved that the size of the test is asymptotically exact in the uniform sense. The test is consistent against all fixed alternative hypotheses. We have derived a local power function and used it to demonstrate that the test is unbiased against a wide class of local alternatives.
We have also provided a new theoretical result pinpointing directions of alternatives for which the test is locally most powerful.
We have performed simulations which illustrate that the test can be of practical inferential value while remaining simple and fast. These simulations, carried out for a range of p values, also shed light on the choice of smoothed indicator. They suggest that, when coupled with the SIC type tuner, both the logistic and the step-at-unity smoothers perform well in finite samples; these are the recommended choices for test implementation. The simulation study also compares the test of this paper with several different tests which estimate critical values using the GMS procedure. We find that the test is a viable complement to the GMS critical value estimation methodology.
A Supplementary Derivation of Λ T ( µ j , v jj )
The term Λ T ( µ j , v jj ) acts as an approximation for the expectation of [Ψ T ( µ j ) − Ψ(0)] √ T µ j evaluated at µ j = 0. Under regularity condition [D1], when µ j = 0, the distribution of √ T µ j for T sufficiently large is approximately normal with mean zero and variance v jj . Let X denote any scalar random variable distributed as N (0, c).
Define h T ≡ K(T )/ √ T . Given (3.1), Λ T ( µ j , v jj ) is thus constructed to approximate E((Ψ(h T X) − Ψ(0))X) = E(Ψ(h T X)X) with c = v jj .
In what follows, we take as read the notation and definitions stated between equations (3.1) and
(3.2).
Define a 0 ≡ −∞ and a n+1 ≡ ∞. Let φ denote the standard normal density function. Note that
$$
E(\Psi(h_T X)X) = \sum_{i=1}^{n+1} \int_{a_{i-1}/h_T}^{a_i/h_T} \Psi(h_T x)\, x\, \frac{\phi(x/\sqrt{c})}{\sqrt{c}}\, dx
= \sqrt{c}\left[\sum_{i=1}^{n+1} \int_{a_{i-1}/h_T}^{a_i/h_T} h_T\, \bar{\psi}(h_T x)\, \phi(x/\sqrt{c})\, dx \;-\; \sum_{i=1}^{n} \big(\Psi(a_i^-) - \Psi(a_i^+)\big)\, \phi\!\left(\frac{a_i}{h_T \sqrt{c}}\right)\right] \quad (A.1)
$$
$$
= c\, h_T\, E\big(\bar{\psi}(h_T X)\big) \;-\; \sqrt{c}\, \sum_{i=1}^{n} \big(\Psi(a_i^-) - \Psi(a_i^+)\big)\, \phi\!\left(\frac{a_i}{h_T \sqrt{c}}\right) \quad (A.2)
$$
where (A.1) follows from integration by parts and re-arrangement of terms in the sum, and (A.2) follows by using [A2], which implies $\bar{\psi}(x) = \psi(x)$ almost everywhere. Taking c = v jj and plugging in the parameter estimates, we hence construct Λ T ( µ j , v jj ) as
$$
\Lambda_T(\hat{\mu}_j, \hat{v}_{jj}) \;\equiv\; \hat{v}_{jj}\, \bar{\psi}\big(K(T)\hat{\mu}_j\big)\, \frac{K(T)}{\sqrt{T}} \;-\; \sqrt{\hat{v}_{jj}}\, \sum_{i=1}^{n} \big(\Psi(a_i^-) - \Psi(a_i^+)\big)\, \phi\!\left(\frac{a_i \sqrt{T}}{\sqrt{\hat{v}_{jj}}\, K(T)}\right). \quad (A.3)
$$
We now comment on the derivative term in the expression (A.3). Since h T goes to zero as T increases, E( ψ(h T X)) tends to ψ(0) by Assumption [A2] and the Dominated Convergence
Theorem. The limit value ψ(0) also coincides with the probability limit of ψ(K(T ) µ j ) for the case µ j = 0. Hence, we use ψ(K(T ) µ j ) instead of E( ψ(h T X)) to account for the slope effect, 9
thus allowing the derivative term to depend on the estimate µ j . This has the advantage that for non-zero valued µ j , ψ(K(T ) µ j ) itself also tends to zero and hence yields faster convergence of Λ T to zero when the function Ψ further has the properties of lim x−→−∞ ψ(x) = lim x−→∞ ψ(x) = 0.
Specifications of Ψ satisfying these properties are numerous, including the logistic and the normal smoothers given in Section 7.1.
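To make the preceding construction concrete, the following minimal sketch assembles the statistic Q = Φ(Q 1 /Q 2 ) together with the adjustment term (A.3). It is a sketch under stated assumptions, not the paper's exact implementation: the logistic smoother, the tuner K(T ) = sqrt(T / log T ) and the default weights θ j = 1/√v jj are illustrative choices (the exact specifications of Sections 3 and 7.1 are not reproduced here), and for a smooth Ψ the jump-point sum in (A.3) is empty.

```python
import numpy as np
from scipy.special import expit
from scipy.stats import norm

# Minimal sketch of the test: Q = Phi(Q1/Q2); reject H0: mu >= 0 when Q < alpha.
# Illustrative assumptions (not the paper's exact specification): logistic
# smoother Psi(x) = 1/(1 + e^x), SIC-type tuner K(T) = sqrt(T / log T), and
# default weights theta_j = 1 / sqrt(v_jj).

def Psi(x):
    return expit(-x)                       # assumed logistic smoother

def psi(x):
    return -expit(x) * expit(-x)           # its derivative

def K_tuner(T):
    return np.sqrt(T / np.log(T))          # assumed SIC-type tuner

def Q_test(mu_hat, V_hat, T, theta=None, alpha=0.05):
    mu_hat = np.asarray(mu_hat, dtype=float)
    V_hat = np.asarray(V_hat, dtype=float)
    v = np.diag(V_hat)
    if theta is None:
        theta = 1.0 / np.sqrt(v)           # user-chosen weights (assumption)
    K = K_tuner(T)
    z = K * theta * mu_hat                 # argument of the smoothed indicator
    Psi_hat = Psi(z)
    # adjustment term Lambda_T of (A.3), with no jump points for a smooth Psi
    Lam = (theta ** 2) * v * psi(z) * K / np.sqrt(T)
    Q1 = np.sum(Psi_hat * np.sqrt(T) * theta * mu_hat - Lam)
    D = np.diag(theta)
    Q2 = np.sqrt(Psi_hat @ D @ V_hat @ D @ Psi_hat)
    if Q2 <= 0:                            # degenerate case: Q is set to 1
        return 1.0, False
    Q = norm.cdf(Q1 / Q2)
    return Q, Q < alpha                    # reject when Q < alpha
```

A routine of this kind, combined with the input constructions sketched in Appendix C, returns the quantity Q that is compared with the nominal level α.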
B Proofs of Theoretical Results
B.1 Probability Limits of the Smoothed Indicator
We first prove a lemma that states the probability limits of the smoothed indicator Ψ T ( θ j µ j ), which will be referred to in the proofs of some theorems in this paper.
Lemma 1 (Probability Limits of the Smoothed Indicator) Assume [D1] and [D4]. Then the following results are valid as T −→ ∞.
(1) If j ∈ A and [A1], [A3], [A6] hold, then √ T Ψ T ( θ j µ j ) p −→ 0.
(2) If j ∈ M and [A2], [A4] hold, then Ψ T ( θ j µ j ) p −→ Ψ(0).
(3) If j ∈ B and [A1], [A3], [A5] hold, then Ψ T ( θ j µ j ) p −→ 1.
Proof. To show part (1), for ε > 0 and for η > 0, we want to find some T (ε, η) > 0 such that for T > T (ε, η),
P ( √ T Ψ T ( θ j µ j ) ≤ ε) ≥ 1 − η.
By [D1] and [D4], we have θ j µ j p −→ θ j µ j , which is strictly positive for j ∈ A. Then there is a T 1 (η) such that for T > T 1 (η),
P (θ j µ j /2 ≤ θ j µ j ≤ 3θ j µ j /2) ≥ 1 − η.
Therefore, by [A1] and [A3] we have
1 − η ≤ P (Ψ T (3θ j µ j /2) ≤ Ψ T ( θ j µ j ) ≤ Ψ T (θ j µ j /2)) ≤ P (Ψ T ( θ j µ j ) ≤ Ψ T (θ j µ j /2)) ≤ P ( √ T Ψ T ( θ j µ j ) ≤ √ T Ψ T (θ j µ j /2))
where the first inequality follows because Ψ is a non-increasing function.
[A6] implies that √ T Ψ T (θ j µ j /2) −→ 0 as T −→ ∞. Therefore, there is some T 2 (ε) such that for T > T 2 (ε), √ T Ψ T (θ j µ j /2) < ε. Combining all these results, part (1) in this lemma follows by choosing T (ε, η) = max(T 1 (η), T 2 (ε)).
To show part (2), note that if j ∈ M , by [D1] and [D4], we have √ T θ j µ j = Op(1). By [A4],
K(T )/ √ T = o(1) so that K(T ) θ j µ j p −→ 0. By [A2]
, Ψ is continuous at origin. Therefore, part
(2) follows from the application of the continuous mapping theorem.
To show part (3), for ε > 0 and for η > 0, we want to find some T (ε, η) > 0 such that for T > T (ε, η),
P (1 − ε ≤ Ψ T ( θ j µ j ) ≤ 1 + ε) ≥ 1 − η.
Following the proof given in part (1), we have that there is a T 1 (η) such that for T > T 1 (η)
1 − η ≤ P (θ j µ j /2 ≤ θ j µ j ≤ 3θ j µ j /2) ≤ P (Ψ T (3θ j µ j /2) ≤ Ψ T ( θ j µ j ) ≤ Ψ T (θ j µ j /2)).
Note that if j ∈ B, then θ j µ j < 0 and thus by [A5], Ψ T (θ j µ j /2) −→ 1 and Ψ T (3θ j µ j /2) −→ 1.
Then there is some T 3 (ε) such that for T > T 3 (ε), Ψ T (θ j µ j /2) ≤ 1 + ε and Ψ T (3θ j µ j /2) ≥ 1 − ε. Therefore, part (3) follows by choosing T (ε, η) = max(T 1 (η), T 3 (ε)).
B.2 Asymptotic Properties of √ T Ψ T ( θ j µ j ) θ j µ j
Based on Lemma 1, we derive the asymptotic properties of the components corresponding to
j ∈ A, j ∈ M, j ∈ B of the sum j∈J √ T Ψ T ( θ j µ j ) θ j µ j .
The results are stated in the following lemma.
√ T θ j µ j d −→ N (0, θ 2 j v jj ). Therefore, part (ii) follows by applying part (2) of Lemma 1. To show part (iii), note that for j ∈ B,
√ T Ψ T ( θ j µ j ) θ j µ j = Ψ T ( θ j µ j ) √ T θ j ( µ j − µ j ) + Ψ T ( θ j µ j ) √ T θ j µ j . (B.1)
Therefore, part (iii) follows from the fact that by [D1], [D4] and part (3) of Lemma 1, the first term on the right hand side of (B.1) is Op(1) and the second term goes to −∞ in probability.
B.3 Asymptotic Properties of Λ T ( θ j µ j , θ 2 j v jj )
The following lemma states the asymptotic properties of the adjustment term Λ T ( θ j µ j , θ 2 j v jj ) defined by (3.2).
Lemma 3 (Asymptotic Properties of Λ T ( θ j µ j , θ 2 j v jj ))
Assume [A1], [A2], [A4], [D3] and [D4]. Then for
j ∈ J, Λ T ( θ j µ j , θ 2 j v jj ) p −→ 0.
Proof. By [A1] and [A2] and the properties of standard normal density function, we find that
Λ T ( θ j µ j , θ 2 j v jj ) ≤ θ 2 j v jj K(T ) √ T b Ψ + 2 θ 2 j v jj π −1 K(T ) √ T n i=1 a −2 i where b Ψ
B.4 Proof of Theorem 1
Proof of part (1) :
By Lemma 3 and under H 0 , the quantity Q 1 may be written as
Q 1 = j∈A √ T Ψ T ( θ j µ j ) θ j µ j + j∈M √ T Ψ T ( θ j µ j ) θ j µ j + o p (1)
which, by part (i) of Lemma 2, is asymptotically equivalent in probability to merely Σ j∈M √ T Ψ T ( θ j µ j ) θ j µ j . Using similar arguments along with [D3], we also find that
Q 2 ≡ ( Ψ ′ ∆ V ∆ Ψ ) 1/2 p −→ Ψ(0)ω 1/2 M .
From these results about Q 1 and Q 2 and the definition (3.11) of Q, we conclude that Q equals to Φ(Q 1 /Q 2 ) with probability tending to 1 as T −→ ∞ and thus Q d −→ U (0, 1).
Proof of part (2) :
When M is empty yet H 0 holds, only the sums taken for j ∈ A remain in the definitions of Q 1 and Q 2 hence the following analysis is confined to j ∈ A. We distinguish between smoothed indicators which are such that Ψ T (x) = 0 for all T sufficiently large when x > 0 and smoothed indicators such that Ψ T (x) remains strictly positive for x > 0 for all T . In the former case, part (1) of Lemma 1 implies that P (Ψ T ( θ j µ j ) = 0) −→ 1 for j ∈ A and hence P (Q 2 = 0) −→ 1 and thus P (Q = 1) −→ 1. Now we consider the latter case where Ψ T (x) > 0 for x > 0 regardless of T . This happens for everywhere positive Ψ functions. Then the quantity Υ j ≡ θ j Ψ T ( θ j µ j ) is almost surely strictly positive for all j ∈ A. By eigenvalue theory, for all T ,
Q 2 ≤ λ max j∈A Υ 2 j ≤ p λ max max j∈A { Υ j } (B.2)
where λ max is the largest eigenvalue of V . Note that (B.2) holds even if Q 2 = 0, which under current scenario could only happen because of singularity of V and V . However, when P (Q 2 = 0) −→ 1, we have P (Q = 1) −→ 1 and thus part (2) of the theorem follows.
Note that for j ∈ J, equation (3.2) and Assumptions [A1] and [A2] imply that the term
Λ T ( θ j µ j , θ 2 j v jj ) is non-positive for all T .
Hence, since all µ j are positive by supposition, as T −→ ∞, by (3.9) we have that
Q 1 ≥ max j∈A { Υ j } min j∈A { √ T µ j }.
with probability tending to 1. Because the mapping from a positive semi-definite matrix to its maximum eigenvalue is continuous on the space of such matrices, by [D3] we have λ max p −→ λ max where λ max is the largest eigenvalue of V . By [D2], 0 < λ max < ∞ and thus we have
Q 1 /Q 2 ≥ min j∈A { √ T µ j }/ p λ max
with probability tending to 1 as T −→ ∞. Since √ T µ j goes to infinity as T −→ ∞ for j ∈ A, it follows that Q = Φ(Q 1 /Q 2 ) p −→ 1.
B.5 Proof of Theorem 3
Since rejection of H 0 occurs if Q < α for the test statistic (3.11), it suffices for consistency to show that under H 1 , Q 2 goes in probability to some positive constant and Q 1 goes to minus infinity as T −→ ∞. By (3.5) and Lemma 1, the probability limit of Ψ under H 1 is the p dimensional vector whose jth element is [1{µ j < 0} + Ψ(0)1{µ j = 0}]. Therefore, by [D3] and [D4]
Q 2 ≡ ( Ψ ′ ∆ V ∆ Ψ ) 1/2 p −→ (d(µ) ′ ∆V ∆d(µ)) 1/2 ,
which is strictly positive by the regularity condition [D2]. On the other hand, Lemma 2 implies that √ T Ψ T ( θ j µ j ) θ j µ j is bounded in probability for j ∈ J\B but tends to negative infinity for j ∈ B. Furthermore, Lemma 3 implies that Λ T ( θ j µ j , θ 2 j v jj ) = o p (1) for j ∈ J. Under H 1 , B is non-empty and thus Q 1 /Q 2 goes to −∞ in probability and hence P (Q < α) −→ 1 as T −→ ∞ .
B.6 Proof of Theorem 4
Under the assumed form of local sequence (6.1), for all j we have
K(T ) θ j µ j = (K(T )/ √ T ) θ j [ √ T ( µ j − µ j ) + c j ] + K(T ) θ j γ j where γ j ≥ 0. It follows that Q 1 is asymptotically equivalent in probability to
Ψ(0) Σ p j=1 1{γ j = 0}θ j [ √ T ( µ j − µ j ) + c j ]
and thus has an asymptotic normal distribution with mean Ψ(0)τ and variance Ψ(0) 2 κ. Using similar arguments, it is straightforward to see that Q 2
p −→ Ψ(0) √ κ. Therefore, Q 1 /Q 2 d −→
N (κ −1/2 τ , 1) from which the assertion of Theorem 4 follows.
B.7 Proof of Theorem 5
We shall establish that for any non-zero vector c, noting that such test has power equal to Φ(z α + √ c ′ V −1 c) which is therefore not smaller than P (S(Z + c, V ) > q α ), the power of another test at level α which rejects the null hypothesis µ X = 0 if and only if S(X, V ) > q α .
Φ(z α + √ c ′ V −1 c) ≥ P (S(Z + c, V ) > q α ) (B.3)
B.8 Sufficient Condition for Assumption [U2]
The following lemma provides a sufficient condition for Assumption [U2] of Section 5. Recall
that Y ≡ √ T ( µ − µ).
Lemma 4 Assumption [U2] holds provided that given any finite scalar c,
lim T −→∞ |P GT (β ′ T Y ≤ c) − Φ(c)| = 0 (B.4)
for any sequence (G T , β T ) satisfying G T ∈ Γ 0 and β ′ T V GT β T = 1.
Proof. Let
f T (G, β) ≡ |P G (β ′ Y ≤ c) − Φ(c)|. Let S denote the set {(G, β) : G ∈ Γ 0 , β ∈ Σ(G)} where the set Σ(G) ≡ {β ∈ R p : β ′ V G β = 1}. Note that sup G∈Γ0 sup β∈Σ(G) f T (G, β) = sup (G,β)∈S f T (G, β). (B.5)
Since for any ε > 0, there is a pair (G T (ε), β T (ε)) in S such that f T (G, β) < ε.
Hence Assumption [U2] follows by noting that ε is arbitrary chosen and f T ≥ 0.
B.9 Proof of Theorem 2
We aim to establish the inequality lim sup
T −→∞ sup G∈Γ0 P G (Q < α) ≤ α. (B.6)
Then Theorem 2 follows by combining together the results implied by (B.6) and Part (1) of Theorem 1.
Let z α be the α quantile of the standard normal distribution. The test rejects the null hypothesis if and only if Q 2 > 0 and Q 1 − z α Q 2 < 0. Therefore,
P G (reject H 0 ) ≤ P G (Q 1 − z α Q 2 < 0). (B.7)
The strategy of the proof is to demonstrate that P G (Q 1 − z α Q 2 < 0) is asymptotically bounded by the nominal size α uniformly for all G satisfying the null hypothesis. That then validates (B.6) via (B.7). Note that −z α > 0 for 0 < α < 1/2 as used in this theorem. By (3.9), (3.10) and non-positivity of the Λ T term, we have
Q 1 ≥ p j=1 Ψ(K(T ) θ j µ j ) √ T θ j µ j Q 2 = p i=1 p j=1 Ψ(K(T ) θ i µ i )Ψ(K(T ) θ j µ j ) θ i θ j v ij
where v ij and v ij are the (i, j) elements of V and V G , respectively. For notational simplicity, the dependence of µ and v ij on G is kept implicit.
Now we give details of the proof. For ease of presentation, they are organized in the following headed subsections.
Lower Bound for the Difference
(Q 1 − z α Q 2 )
Let δ T ≡ K(T )/ √ T . For any η > 0, define the set
R T (µ) ≡ {j : 0 ≤ K(T )µ j ≤ 2ηδ T }.
We show that, with probability tending to 1 uniformly over G ∈ Γ 0 as T −→ ∞,
Q 1 − z α Q 2 ≥ Q 1,RT − z α Q 2,RT (B.8) where Q 1,RT ≡ j∈RT (µ) Ψ(K(T ) θ j µ j ) √ T θ j µ j , Q 2,RT ≡ i∈RT (µ) j∈RT (µ) Ψ(K(T ) θ i µ i )Ψ(K(T ) θ j µ j ) θ i θ j v ij .
We follow the convention that summation over an empty set yields value zero. Note that (B.8)
automatically holds when R T (µ) = {1, 2, ..., p}. For R T (µ) being a proper subset of {1, 2, ..., p},
we rely on the fact (proved in the next subsection) that, with probability tending to 1 uniformly over G ∈ Γ 0 as T −→ ∞,
K(T ) µ j > ηδ T for j / ∈ R T (µ) (B.9)
and, for R T (µ) nonempty,
Q 2,RT > ω ′ /2 > 0 (B.10)
where ω ′ is the constant defined in Assumption [U4]-(ii). Let m be any index such that m / ∈ R T (µ) and θ m µ m ≤ θ j µ j for all j / ∈ R T (µ). Since Ψ is non-negative, (B.9) implies
Q 1 ≥ Q 1,RT + Ψ(K(T ) θ m µ m ) θ m ηδ −1 T . (B.11)
Furthermore, by [A1] the function Ψ is non-increasing and Ψ ≤ 1. Thus, (B.9) and (B.10)
together imply
|Q 2,RT − Q 2 | ≤ Q 2 2,RT − Q 2 2 /Q 2,RT ≤ p 2 Ψ(K(T ) θ m µ m ) ∆ 2 V 2/ω ′ . (B.12)
Given that −z α > 0, when R T (µ) is empty, (B.11) alone implies (B.8). With R T (µ) nonempty, (B.11) and (B.12) together imply (B.8) provided
θ m ηδ −1 T ≥ −z α p 2 ∆ 2 V 2/ω ′ . (B.13)
We show that under the null hypothesis, (B.9), (B.10) and (B.13) will indeed hold for η small enough and T large enough (yielding δ T small enough by Assumption [A4]) under the key event E η T described next.
2. The Key Event E η T and Lower Bound for the Difference (Q 1,RT − z α Q 2,RT )
Let Y j be the jth element of Y ≡ √ T ( µ − µ). For η > 0, define the event
E η T ≡ { ||δ T Y || < η, || V − V G || < η, || ∆ − ∆|| < ηδ T }
which holds with probability tending to 1 uniformly over G ∈ Γ 0 as T −→ ∞ by Assumptions
[A4], [U1] and [U3]-(ii). Since K(T ) µ j = K(T )µ j + δ 2 T Y j , under the null hypothesis the event E η T implies the inequality (B.9). To show that the event E η T also implies (B.10) and (B.13), and then derive the key result (B.18) of this subsection, we first need to draw out the following inequalities (B.14) -(B.17).
Note that when 0 ≤ K(T )µ j ≤ 2ηδ T , we have that by Assumption [U3]-(i) and under the
event E η T , √ T θ j µ j ≥ θ j Y j − 3η 2 , (B.14) K(T ) θ j µ j ≤ 3ηδ T (λ + ηδ T ). (B.15)
By Assumption [A2], Ψ(x) is differentiable on |x| ≤ 3ηδ T (λ + ηδ T ) for η small enough and T large enough. Therefore, given Ψ ≤ 1, the event E η T and inequalities (B.14) and (B.15) imply
that Ψ(K(T ) θ j µ j ) √ T θ j µ j ≥ Ψ(0)θ j Y j − 3(λb Ψ (λ + ηδ T ) + 1)η 2
where b Ψ denotes the bound on the derivative of Ψ(x) defined in Assumption [A2]. Hence, when η < 1 and δ T < 1, we may certainly write
Q 1,RT ≥ Ψ(0) j∈RT (µ) θ j Y j − C 1 η (B.16)
where C 1 is a fixed positive quantity given values of p, λ and b Ψ . By Assumptions [U3]-(i) and
[U4]-(i) and using similar arguments with η < 1 and δ T < 1, we can obtain a bound for Q 2 2,RT under the event E η T as the following
Q 2 2,RT ≥ Ψ(0) 2 i∈RT (µ) j∈RT (µ) θ i θ j v ij − C 2 η (B.17)
where C 2 is fixed and positive given values of p, λ, ω, b Ψ and Ψ(0).
We can choose η to satisfy η < min{1, ω ′ /(2C 2 )} and choose T such that 2ηδ
T /K(T ) < σ,
where σ is the constant defined in Assumption [U4] by which the right-hand side of (B.17) is larger than ω ′ /2 and hence inequality (B.10) is satisfied. Using Assumptions [U3]-(i) and [U4]-(i),
under the event E η T , we see θ m > λ ′ −δ T η whilst ∆ 2 V ≤ (λ+δ T η) 2 (ω+η). Since δ −1 T −→ ∞ by Assumption [A4]
, given η > 0, (B.13) will indeed hold for large enough T . Finally, let r T denote the p dimensional vector whose jth element is θ j if j ∈ R T (µ) and zero, otherwise. Then given that −z α > 0 and with η small enough and T large enough, (B.16) and (B.17) together imply
Q 1,RT − z α Q 2,RT ≥ Ψ(0)r ′ T Y − C 1 η − z α ( Ψ(0) 2 r ′ T V G r T − C 2 η ) 1/2 . (B.18)
The Probability Bounds
We have shown above how occurrence of the event E η T implies the inequality (B.8) given η small enough and T large enough. Hence
P G (Q 1 − z α Q 2 < 0) ≤ 1 − P G (E η T ) + P G (Q 1 − z α Q 2 < 0, E η T ) ≤ 1 − P G (E η T ) + P G (Q 1,RT − z α Q 2,RT < 0) (B.19)
where the last term of (B.19) is zero when R T (µ) is empty. For non-empty R T (µ), using (B.18)
yields P G (Q 1,RT − z α Q 2,RT < 0) ≤ P G (r ′ T Y − z α ( r ′ T V G r T − C 2 η/Ψ(0) 2 ) 1/2 < C 1 η/Ψ(0)). (B.20)
vector of parameters β ≡ (β 1 , β 2 , ..., β r ) ′ . The restrictions being tested are synthesized into the one-sided form µ ≥ 0 with µ = (µ 1 , µ 2 , ..., µ p ) ′ = Cβ + b where C is a known p × r matrix and b is a known p dimensional vector of constants. We assume an asymptotically normal estimator β is available with non-singular asymptotic variance matrix Ω. Since V = CΩC ′ , V value induced by any G ∈ Γ is necessarily singular when r < p. In the third example, we consider a different scenario where singularity arises only for some specific V values.
Example 1: Triangle Restriction
For a Cobb-Douglas production function with capital and labor elasticity coefficients β 1 and β 2 , the restrictions being tested β 1 ≥ 0, β 2 ≥ 0 and β 1 + β 2 ≤ 1 (non-increasing returns to scale) form a triangle for the graph of (β 1 , β 2 ). Here r = 2, p = 3 and
µ = (µ 1 , µ 2 , µ 3 ) ′ = (β 1 , β 2 , 1 − β 1 − β 2 ) ′ . (C.1)
Verification of [D2] and [U4]-(ii) :
Note that V = CΩC ′ where Ω is the variance matrix of the asymptotic distribution of √ T ( β − β) and
$$C' = \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \end{pmatrix}, \qquad C'\Delta d(\mu) = \begin{pmatrix} \theta_1 & 0 & -\theta_3 \\ 0 & \theta_2 & -\theta_3 \end{pmatrix} d(\mu).$$
We assume the primitive condition that the smallest eigenvalue of Ω is bounded away from zero over all G ∈ Γ. Assumption [D2] is true since C ′ ∆d(µ) being zero for non-zero d(µ) would require all elements of d(µ) to be non-zero, in turn requiring all elements of µ given by (C.1) to be negative or zero, which is impossible. For Assumption [U4]-(ii), we note that for sufficiently small σ, the only non-zero values for d σ (µ) possible under the null hypothesis are Ψ(0) multiples of (1, 0, 0) ′ , (0, 1, 0) ′ , (0, 0, 1) ′ , (1, 1, 0) ′ , (1, 0, 1) ′ , (0, 1, 1) ′ , because it is not possible for more than two of the elements of µ to simultaneously lie between 0 and σ < 1/3 as µ 1 + µ 2 + µ 3 = 1.
Therefore, given Assumption [U3]-(i) and the primitive condition on Ω, Assumption [U4]-(ii) is satisfied here.
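As an illustration only (the function and variable names below are hypothetical), the inputs µ and V for this example can be assembled from an estimate of (β 1 , β 2 ) and its asymptotic variance Ω as follows.

```python
import numpy as np

# Example 1 (triangle restriction): mu = (beta1, beta2, 1 - beta1 - beta2)'
# and V = C Omega C' with C' = [[1, 0, -1], [0, 1, -1]].
C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, -1.0]])

def triangle_inputs(beta_hat, Omega):
    b1, b2 = beta_hat
    mu_hat = np.array([b1, b2, 1.0 - b1 - b2])
    V_hat = C @ np.asarray(Omega) @ C.T      # rank 2, hence singular for p = 3
    return mu_hat, V_hat
```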
Example 2: Interval Restrictions with Fixed Known End-Points
Suppose the r dimensional parameter vector β is hypothesized to satisfy interval restrictions l ≤ β ≤ u, where l and u are numerically specified. In this case, p = 2r and µ = ((β−l) ′ , (u−β) ′ ) ′ .
An estimator β is available such that √ T ( β − β) is asymptotically normal with variance Ω whose smallest eigenvalue is assumed primitively to be bounded away from zero over all G ∈ Γ. Note that V = CΩC ′ where C ′ = [I r , −I r ]. Thus, C ′ ∆d(µ) is the r dimensional vector whose jth element is [1{β j < l j } + Ψ(0)1{β j = l j }]θ j − [1{β j > u j } + Ψ(0)1{β j = u j }]θ j+r (C.2) then C ′ ∆d σ (µ) is also a non-zero vector of length which is bounded away from zero. Given the primitive condition on Ω, Assumption [U4]-(ii) is thus satisfied for any σ > 0.
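A corresponding sketch for this example (again with hypothetical names) stacks the two sets of slacks and propagates Ω through C ′ = [I r , −I r ].

```python
import numpy as np

# Example 2 (fixed known end-points): mu = ((beta - l)', (u - beta)')'
# and V = C Omega C' with C' = [I_r, -I_r].
def box_inputs(beta_hat, l, u, Omega):
    beta_hat, l, u = map(np.asarray, (beta_hat, l, u))
    r = beta_hat.size
    mu_hat = np.concatenate([beta_hat - l, u - beta_hat])
    C = np.vstack([np.eye(r), -np.eye(r)])   # (2r x r), so that C' = [I_r, -I_r]
    V_hat = C @ np.asarray(Omega) @ C.T      # rank r < p = 2r: singular
    return mu_hat, V_hat
```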
We now comment on testing interval hypothesis of the Case II type within the framework of this paper. For validity of the test, it suffices to choose any single equality hypothesis indexed by h ∈ S e and specify θ h = θ h+r at the outset. This single asymmetry requirement is the only operational difference compared with Case I. Moreover, since v h,h = v h+r,h+r where v h,h denotes the h-th diagonal element of V , weighting inversely proportional to standard error is not ruled out. The user can indeed set
θ h+r = (1 + ε)θ h with θ h = 1/ √ v h,h , ε > −1 and ε = 0. (C.5)
Here ε is a non-stochastic quantity chosen by the user to control the degree of deviation from perfect standardization of the estimate µ h+r . The weighting scheme (C.5) ensures that the test has exact asymptotic size in the uniform sense and is consistent against all fixed alternatives.
On the other hand, Theorem 4 suggests that the user can specify ε < 0 (or reverse) to attach more (or less) weight to detection of violation of H 0 in the direction of β h < l h .
Note that asymmetric weighting (C.5) adopted here can be viewed as "perturbing" both Q 1 and Q 2 from the values they would take under symmetry. One might think to perturb only Q 2 to ensure that singularity does not cause division by (near) zero. For example, one could perturb V in the expression (3.10) defining Q 2 in a manner akin to Andrews and Barwick (2012) who adjust the QLR test statistic by perturbing V with a diagonal matrix when the determinant of the correlation matrix induced by V is smaller than some pre-specified threshold.
This alternative approach can allow for symmetric weighting. However unperturbed Q 1 will asymptotically converge to zero and hence rejection probability will tend to zero under the null and local alternative scenarios where all non-degenerate interval inequalities are non-binding. By contrast, the procedure (C.5) perturbing both Q 1 and Q 2 in a balanced way ensures that the ratio Q 1 /Q 2 stays asymptotically standard normal in the null even when the only binding constraints are the equality hypotheses. It thus enables non-zero test power to be retained in the aforementioned scenarios of local alternatives.
Example 3: Interval Restrictions with Unknown End-Points
In Example 2, testing the inequalities l ≤ β ≤ u was performed on fixed known interval endpoints. Suppose now that l and u are not known but are parameters which satisfy l ≤ u and can take a continuum of values including those which make (u − l) arbitrarily close to zero as well as precisely zero. There is no point estimator for β but consistent estimators l and u are available having joint asymptotic normal distribution with variance matrix Ω. This, for the univariate case, is the scenario considered by Imbens and Manski (2004) and Stoye (2009). For clarity, we stay with the setup where β is a scalar. We consider testing H 0 : l ≤ β 0 ≤ u for a numerically specified candidate value β 0 for β. We then take µ = (β 0 − l, u − β 0 ) ′ and µ = (β 0 − l, u − β 0 ) ′ . The asymptotic distribution of √ T ( µ − µ) is normal with variance
$$V = \begin{pmatrix} \Omega_{11} & -\Omega_{12} \\ -\Omega_{12} & \Omega_{22} \end{pmatrix}.$$
For any given l and u, there is no reason why V should be singular. However, Stoye (2009, p. 1304, Lemma 3) demonstrates that, if one insists on P ( u ≥ l) = 1 holding over the underlying data generating distribution space where the difference (u − l) is bounded away from infinity and the elements Ω 11 and Ω 22 are bounded away from zero and infinity, then V necessarily depends on (u − l) in such a way that Ω 12 − Ω 11 −→ 0 and Ω 22 − Ω 11 −→ 0 as u − l −→ 0. Thus, singularity of V where Ω 11 = Ω 22 = Ω 12 must be allowed for. In this example, the weights θ 1 and θ 2 are chosen asymmetrically, and setting ε to be greater (or smaller) than zero amounts to attaching more (or less) weight to detection of violation of H 0 in the direction u < β 0 . The ε-perturbation arguments adopted here are indeed based on those given in Case II of Example 2. The value of the perturbation parameter ε is a user's input to the test procedure. The choice does not affect validity of the results concerning asymptotic test size and consistency. Asymmetry does affect local power but, by the same device, offers the user an opportunity to input a subjective assessment of the relative importance of different directions of violation of the null hypothesis.
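The inputs for this example, including the ε-perturbed weights θ 1 = 1/√Ω 11 and θ 2 = (1 + ε)/√Ω 22 described above, could be formed as in the following sketch (names hypothetical).

```python
import numpy as np

# Example 3 (estimated end-points): mu = (beta0 - l, u - beta0)' with
# V built from the joint asymptotic variance Omega of (l_hat, u_hat).
def interval_inputs(l_hat, u_hat, Omega, beta0, eps=0.1):
    Omega = np.asarray(Omega, dtype=float)
    mu_hat = np.array([beta0 - l_hat, u_hat - beta0])
    V_hat = np.array([[Omega[0, 0], -Omega[0, 1]],
                      [-Omega[0, 1], Omega[1, 1]]])
    theta = np.array([1.0 / np.sqrt(Omega[0, 0]),
                      (1.0 + eps) / np.sqrt(Omega[1, 1])])  # eps > -1, eps != 0
    return mu_hat, V_hat, theta
```

These weights would then be passed, together with µ and V , to a test routine such as the one sketched at the end of Appendix A.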
[A2] Ψ(0) > 0 and, throughout some open interval containing x = 0 and at all except possibly a finite number of points outside that interval, Ψ(x) has a continuous first derivative ψ(x) that is bounded absolutely by a finite positive constant. The left-hand limits of ψ(y) as y approaches x exist at any x ∈ R.
Assumptions [A1]-[A6] are very mild and satisfied by all the particular Ψ functions including step-at-unity, logistic and normal, discussed in Section 7.1 and used in the simulations of this paper. Assumption [A4] regulates the rate at which the "tuning" parameter K(T ) can grow and, in the context of Andrews and Soares (2010) discussed in Subsection 2.1, enables consistent selection of binding constraints. Forms of tuning are also used by Chernozhukov et al. (2007) and Linton et al. (2010).
Theorem 1 (Pointwise Asymptotic Null Distribution) Given [A1], [A2], [A3], [A4], [A6] with [D1] - [D4], the following are true under H 0 : µ j ≥ 0 for all j ∈ J, with limits taken along T −→ ∞.
how the high-level Assumptions [U1] and [U2] may be verified, consider the leading example where µ and V are the sample mean and variance of i.i.d. random vectors x t , (t = 1, 2, ..., T ) with joint distribution G. 5 Then the simple but not necessarily the weakest primitive condition guaranteeing both Assumptions [U1] and [U2] is that the first four moments of every element of x t exist and are bounded uniformly over G ∈ Γ 0 . This condition allows the application of the Chebychev inequality to components of the right-hand side of the inequality
Therefore, different tests are complementary rather than competing. To obtain a formal result, we consider a local sequence converging to the origin, namely γ j = 0 for j ∈ {1, 2, ..., p}. Let c denote the vector (c 1 , c 2 , ..., c p ) ′ . Under such a local scenario, the GMS procedure will asymptotically treat all inequalities as binding in the critical value calculation. Thus the asymptotic distribution of the statistic S( √ T µ, V ) of Subsection 2.1 is the same as that of S(Z + c, V ) and the test rejection probability tends toP (S(Z + c, V ) > q α ) (6.3) where q α is the (1−α) quantile of S(Z, V ) under Z ∼ N (0, V ). We now present a theorem showing that the test of this paper is locally most powerful for a non-empty subclass of directions. Let θ denote the vector of diagonal elements of the matrix ∆.Theorem 5 Suppose the variance matrix V is positive definite and γ j = 0 for j ∈ {1, 2, ..., p} in the local sequence (6.1). Then for every testing function S(., .) such that P (S(Z, V ) > q α ) = α under Z ∼ N (0, V ), the asymptotic local power in (6.2) is at least α and is not smaller than (6.3) when c = −δV θ for any positive scalar δ.
used in recent literature on inference of moment inequality models (See e.g. Chernozhukov et al. (2007) and Andrews and Soares (2010)). These choices are SIC : K SIC (T ) ≡ T / log(T ) LIL : K LIL (T ) ≡ T /(2 log log(T )) The first name reflects a connection with the Schwarz Information Criterion (SIC) for model selection and the second with the Law of the Iterated Logarithm (LIL).
S 3 is the modified method-of-moments (MMM) statistic considered in the literature of moment inequality models (see, e.g. Chernozhukov et al. (2007), Romano and Shaikh (2008), Andrews and Guggenberger (2009) and Andrews and Soares (2010)). S 4 is the raw sum-of-negative-part statistic which can be transformed by smoothing into the key component of the test of the present paper.
For ǫ = 0, the design (7.2) mimics the local direction suggested by Theorem 5, under which the test Q(Ψ, K) is expected to outperform other tests. When ǫ is non-zero, the local direction in favor of the present test is perturbed with another vector µ containing a mixture of positive and negative elements. Such µ may incur a power trade-off in light of Theorem 4, and thus the perturbation parameter ǫ controls the degree of deviation toward µ and enables a sensitivity check of test power performance.
section presents proofs of all theoretical results stated in the paper. Proofs of Theorems 1, 3, 4 and 5 (pointwise asymptotics and local power) along with preliminary Lemmas 1, 2 and 3 are presented in Subsections B.1 -B.7. Proofs of Lemma 4 providing a sufficient condition for Assumption [U2] and Theorem 2 (uniform asymptotics) are given separately in Subsections B.8 and B.9 of the Appendix. Recall that J denotes the set {1, 2, ..., p} and the sets A, M , and B are defined as A ≡ {j ∈ J : µ j > 0}, M ≡ {j ∈ J : µ j = 0}, B ≡ {j ∈ J : µ j < 0}.
(3) If j ∈ B and [A1], [A3], [A5] hold, then Ψ T ( θ j µ j ) p −→ 1.
Lemma 2 (Asymptotic Properties of √ T Ψ T ( θ j µ j ) θ j µ j ) Let v jj denote the jth diagonal element of V . Assume [D1] and [D4]. Then the following results are valid as T −→ ∞.
(i) If j ∈ A and [A1], [A3], [A6] hold, then √ T Ψ T ( θ j µ j ) θ j µ j p −→ 0.
(ii) If j ∈ M and [A2], [A4] hold, then √ T Ψ T ( θ j µ j ) θ j µ j d −→ N (0, (Ψ(0)θ j ) 2 v jj ).
(iii) If j ∈ B and [A1], [A3], [A5] hold, then √ T Ψ T ( θ j µ j ) θ j µ j p −→ −∞.
Proof. Note that part (i) follows from [D1], [D4] and part (1) of Lemma 1. To show part (ii), by [D1] and [D4], if j ∈ M , we have that
denotes the finite positive bound on the derivative of Ψ given in Assumption [A2]. Note that [A2] also implies a 2 i > 0 for each i. By [A4], [D3] and [D4], the right-hand side of the inequality above is o p (1) and thus Lemma 3 follows.
which, by [D1], [D2], [D4] and part (2) of Lemma 1, is asymptotically normal with mean zero and strictly positive variance equal to Ψ(0) 2 ω M where ω M ≡ d ′ M ∆V ∆d M in which d M denotes the p dimensional vector whose jth element is unity for j ∈ M but zero for j / ∈ M . Using similar arguments along with [D3]
In the case γ j = 0, Assumptions [A4], [D1] and [D4] imply that K(T ) θ j µ j p −→ 0 as T −→ ∞ . By [A2] and the continuous mapping theorem, this then implies that Ψ(K(T ) θ j µ j ) p −→ Ψ(0). On the other hand, if γ j > 0, (6.1) implies that there is some δ > 0 such that µ j > γ j − δ > 0 for all T sufficiently large. So under [A1], [A3], [A6], [D1] and [D4], we have that √ T Ψ T ( θ j µ j ) θ j µ j p −→ 0 by using arguments closely matching the proof of part (1) of Lemma 1. Therefore, from these results and by (6.1), [D1], [D4] and Lemma 3, Q 1 is asymptotically equivalent in probability to
every testing function S(., .) such that P (S(Z, V ) > q α ) = α under Z ∼ N (0, V ). The theorem then follows by noting that the left-hand side of (B.3) when c = −δV θ coincides with the power function (6.2) under the local direction specified by the theorem. To show (B.3), consider an imaginary situation where X is the observable random vector that is distributed as Z + µ X where Z ∼ N (0, V ). For given V , a simple application of the Neyman-Pearson lemma (Lehmann and Romano (2005, p. 60, Theorem 3.2.1)) implies that a most powerful test at level α of the simple null hypothesis µ X = 0 versus the simple alternative µ X = c is to reject the null if and only if −c ′ V −1 X/ √ c ′ V −1 c < z α . Hence (B.3) holds by
f
T (G, β) < f T (G T (ε), β T (ε)) + ε,Assumption (B.4) used with equality (B.
Verification of [D2] and [U4]-(ii) : For Assumption [D2], note that under the maintained assumption that l ≤ u, the vector d(µ) can be non-zero only if it takes one of the following forms: (1, 0) ′ , (0, 1) ′ , (Ψ(0), 0) ′ , (0, Ψ(0)) ′ , (Ψ(0), Ψ(0)) ′ . The first four of these cannot make V ∆d(µ) = 0. The last form can only occur when l = β 0 = u in which case we haveV ∆d(µ) = Ψ(0)[θ 1 Ω 11 − θ 2 Ω 12 , −θ 1 Ω 12 + θ 2 Ω 22 ] ′ . (C.6)Note that (C.6) is zero only if V is singular and θ 1 /θ 2 = Ω 12 /Ω 11 = Ω 22 /Ω 12 . Singularity occurs in Stoye's scenario where the model allows for Ω 11 = Ω 22 = Ω 12 . Since the weights θ 1 and θ 2 are chosen by the user, we can use θ 1 = 1/ √ Ω 11 and θ 2 = (1+ε)/ √ Ω 22 where ε is a pre-specified nonstochastic and non-zero quantity satisfying ε > −1. Then Assumptions [D2] holds regardless of singularity of V . For Assumption [U4]-(ii), we only need to consider the null hypothesis. In this case, the possible forms of non-zero d σ (µ) can take are (Ψ(0), 0) ′ , (0, Ψ(0)) ′ and (Ψ(0), Ψ(0)) ′ . It is easily seen that d σ (µ) ′ ∆V ∆d σ (µ) equals Ψ(0) 2 for the first, Ψ(0) 2 (1 + ε) 2 for the second, and Ψ(0) 2 ε 2 + 2(1 + ε)(1 − Ω 12 / √ Ω 11 Ω 22 ) for the third form. Hence Assumption [U4]-(ii) holds.
this being simply achieved by taking µ j = µ. Basic selection as stated here is a special case of the Andrews and Soares (2010) Generalized Moment Selection (GMS) procedure. 1
1 Indeed, this selection rule corresponds to use of moment selection function ϕ (2) j considered by Andrews and Soares (2010, pp. 131-132) with due allowance for standardization of parameter estimates. See also Andrews and Barwick (2012, pp. 8-9) for various examples of the GMS selection rules.
This expectation is non-positive, though shrinking to zero in large samples. 2 Under suitable regularity conditions Λ T , whose detailed construction is given in Section 3, is non-positive for all T but converges to zero in probability. Hence, under the null hypothesis the statistic (2.3) will be asymptotically either degenerate or equivalent in distribution to a normal variate and thus critical values for a test using (2.3) will not require simulation.

Besides indicator smoothing, it is also appropriate to view Ψ T as a form of binding inequality selection akin to the aforementioned GMS procedure. The smoothed indicators in (2.3) essentially embed a data driven weighting scheme which automatically concentrates the statistic (2.3) onto those parameter estimates signaling binding inequalities. Indeed, consider the specific smoothed indicator constructed as Ψ T (x) = 1{K(T )x ≤ 1}. Such Ψ T (x) simply shifts the point of discontinuity away from the origin whilst still acting as a pure zero-one selector. Then the GMS based recentering described in Subsection 2.1 would amount to setting µ j = (1 − Ψ T ( µ j )) µ j . In this case, the statistic (2.3) is equal to

Σ p j=1 √ T ( µ j − µ j ) + o p (1). (2.4)

Since both µ and µ are available as a by-product of the mainstream tests of Subsection 2.1, one may as well perform a test on their difference. The asymptotic distribution of (2.4) does not itself require simulation and recentering, so there is no circularity of argument. Though (2.4) and the GMS test procedure are closely related, it is important to stress that the present test enforces data driven selection of binding inequalities through smoothed indicators within the test statistic itself rather than at the stage of critical value estimation. Therefore, the class of statistics (2.3) does not lie in the otherwise very wide class covered by the work of Andrews and Soares (2010).
). For this purpose, we strengthen Assumptions [D1] -[D4] by the following Assumptions [U1] -[U4] where objects such as K(T ) have already been defined in Assumptions [A1] -[A6]. Define the vector Y and the scalar δ T as
Assumption [U1] holds. To verify Assumption [U2] we first note that, by Lemma 4 proved in the Appendix, it is sufficient for (5.5) that lim T −→∞
Theorem 2 (Uniform Asymptotic Exactness of Test Size) Given Assumptions [D1] -[D4], suppose Assumptions [U1] -[U4] also hold. Assume some G ∈ Γ 0 has µ value containing at least one zero-valued element. Then under Assumptions [A1], [A2], [A3], [A4], [A6] and given
The sequence (6.1) is said to be core if c j < 0 holds in every instance of γ j = 0. A core local sequence corresponds to Neyman-Pitman drift in the original sense (McManus (1991)) whereby parameter values conflicting with the null hypothesis are imagined ceteris paribus to draw ever closer to compliance as T increases. In the easily-visualized case p = 2, all points on the boundary of the null-restricted space are limits of core sequences. Non-core sequences can only converge to the origin, a single point compared to the continuum of the full boundary. We may now state :
Theorem 4 (Local Power) Assume [A1], [A2], [A3], [A4], [A6] and [D1], [D3], [D4] hold with the elements µ j of µ taking the T-dependent forms as specified by (6.1). Define
τ ≡ Σ p j=1
Table 1 lists the MNRP values in three block columns side by side for the three specifications of G w . The AP values generated by the three ǫ values are then listed separately for each G w in Tables 2, 3 and 4.

In Table 1, the primary interest is how close the MNRP values are to the nominal 5% significance level, particularly in cases of over-rejection. In that respect, we compare the percentage of values not exceeding 0.05, 0.055, 0.06, 0.065. These percentages are about 18, 51, 87, 96 for the 54 Q(Ψ, K) values and 9, 52, 79, 94 for the 72 values of the GMS tests. Plainly, the Q(Ψ, K) test is no more prone to over-rejection than the GMS tests. A common feature across all tests is that over-rejection tends to increase with p. However, only 2 out of 54 Q(Ψ, K) entries and 4 out of 72 GMS entries exceed 0.065. These excesses amount to less than 5% of a table of 126 simulated entries.

We now examine the sensitivity of MNRP to the underlying data generating distribution G w . For all tests, Table 1 exhibits little systematic difference attributable to the three different specifications of G w . These figures suggest that the MNRP results are not sensitive to finite sample non-normality. Furthermore, for each test, regardless of G w , Table 1 suggests that use of the SIC type tuner in place of the LIL can yield better control of test size. This finding is consistent with the simulation studies of Andrews and Soares (2010, pp. 149-152) demonstrating that the SIC tuner tends to give better MNRP properties. Overall, Q(Ψ Step , K SIC ) and Q(Ψ Log , K SIC ) have better MNRP results among the class of Q(Ψ, K) tests and their size performance is comparable to that of the four SIC tuned GMS tests.
Table 1: Simulated Maximum Null Rejection Probability for T = 250

DGP G w                  N (0, 1)          Logistic          U (−1, 2)
Number of inequalities   4    6    10      4    6    10      4    6    10
Q(Ψ Step , K SIC )
.049 .056 .055 .052 .054 .056 .051 .052 .055
Q(Ψ Log , K SIC )
.046 .053 .055 .046 .054 .057 .048 .052 .058
Q(Ψ N or , K SIC )
.050 .059 .061 .050 .058 .063 .050 .056 .063
Q(Ψ Step , K LIL )
.051 .059 .059 .053 .056 .059 .051 .053 .057
Q(Ψ Log , K LIL )
.049 .056 .057 .048 .057 .060 .048 .053 .059
Q(Ψ N or , K LIL )
.054 .062 .065 .052 .059 .066 .053 .058 .066
S 1 (SIC)
.050 .052 .054 .049 .052 .053 .051 .052 .053
S 2 (SIC)
.050 .054 .053 .052 .055 .054 .050 .050 .054
S 3 (SIC)
.050 .056 .052 .050 .051 .057 .052 .052 .056
S 4 (SIC)
.051 .058 .054 .053 .054 .057 .052 .055 .058
S 1 (LIL)
.053 .055 .055 .051 .054 .056 .054 .054 .056
S 2 (LIL)
.058 .061 .061 .059 .063 .063 .058 .058 .061
S 3 (LIL)
.056 .061 .057 .055 .058 .065 .058 .058 .064
S 4 (LIL)
.059 .068 .066 .060 .064 .070 .061 .065 .070
Table 2: Simulated Average Power for T = 250, G w = N (0, 1)

                         ǫ = 0             ǫ = 0.5           ǫ = 0.8
Number of inequalities   4    6    10      4    6    10      4    6    10
Q(Ψ Step , K SIC )
.770 .837 .900 .773 .840 .904 .783 .849 .909
Q(Ψ Log , K SIC )
.754 .827 .893 .783 .849 .910 .813 .872 .927
Q(Ψ N or , K SIC )
.741 .814 .882 .780 .845 .906 .817 .875 .928
Q(Ψ Step , K LIL )
.752 .822 .886 .761 .830 .895 .780 .847 .906
Q(Ψ Log , K LIL )
.748 .821 .888 .781 .847 .908 .815 .874 .928
Q(Ψ N or , K LIL )
.734 .807 .875 .778 .844 .903 .819 .876 .928
S 1 (SIC)
.593 .626 .650 .699 .728 .761 .774 .803 .831
S 2 (SIC)
.714 .781 .847 .784 .844 .901 .834 .887 .937
S 3 (SIC)
.678 .735 .793 .750 .804 .858 .805 .854 .899
S 4 (SIC)
.730 .794 .855 .767 .830 .886 .808 .864 .913
S 1 (LIL)
.594 .626 .650 .700 .729 .762 .776 .805 .832
S 2 (LIL)
.716 .782 .848 .785 .846 .903 .836 .889 .939
S 3 (LIL)
.678 .736 .794 .751 .805 .860 .808 .856 .902
S 4 (LIL)
.732 .795 .857 .769 .833 .889 .811 .868 .916
Table 3: Simulated Average Power for T = 250, G w = Logistic

                         ǫ = 0             ǫ = 0.5           ǫ = 0.8
Number of inequalities   4    6    10      4    6    10      4    6    10
Q(Ψ Step , K SIC )
.772 .839 .900 .774 .841 .903 .781 .850 .910
Q(Ψ Log , K SIC )
.757 .828 .893 .785 .851 .910 .813 .875 .929
Q(Ψ N or , K SIC )
.744 .815 .882 .781 .847 .906 .817 .878 .930
Q(Ψ Step , K LIL )
.753 .824 .886 .763 .831 .894 .779 .848 .908
Q(Ψ Log , K LIL )
.751 .823 .888 .783 .849 .908 .815 .876 .930
Q(Ψ N or , K LIL )
.738 .808 .874 .780 .845 .904 .819 .878 .930
S 1 (SIC)
.599 .629 .651 .697 .729 .762 .775 .803 .831
S 2 (SIC)
.718 .782 .847 .784 .845 .901 .834 .889 .938
S 3 (SIC)
.681 .737 .794 .750 .803 .858 .806 .855 .901
S 4 (SIC)
.734 .795 .854 .768 .830 .886 .807 .866 .915
S 1 (LIL)
.600 .629 .651 .699 .730 .763 .777 .805 .833
S 2 (LIL)
.719 .784 .849 .786 .846 .903 .837 .891 .940
S 3 (LIL)
.682 .738 .796 .751 .805 .861 .808 .857 .903
S 4 (LIL)
.735 .797 .856 .771 .833 .889 .811 .869 .919
Table 4: Simulated Average Power for T = 250, G w = U (−1, 2)

                         ǫ = 0             ǫ = 0.5           ǫ = 0.8
Number of inequalities   4    6    10      4    6    10      4    6    10
Q(Ψ Step , K SIC )
.769 .837 .899 .775 .842 .902 .782 .849 .908
Q(Ψ Log , K SIC )
.754 .826 .892 .785 .850 .910 .812 .874 .926
Q(Ψ N or , K SIC )
.741 .813 .880 .781 .846 .906 .817 .876 .927
Q(Ψ Step , K LIL )
.752 .821 .885 .763 .832 .894 .779 .847 .907
Q(Ψ Log , K LIL )
.749 .820 .886 .784 .848 .908 .815 .876 .927
Q(Ψ N or , K LIL )
.735 .806 .873 .780 .844 .903 .819 .878 .928
S 1 (SIC)
.594 .623 .652 .698 .727 .758 .773 .801 .830
S 2 (SIC)
.715 .778 .846 .784 .843 .900 .834 .887 .937
S 3 (SIC)
.678 .733 .793 .749 .803 .858 .805 .854 .899
S 4 (SIC)
.730 .793 .852 .768 .831 .886 .807 .866 .914
S 1 (LIL)
.594 .623 .652 .699 .728 .759 .775 .803 .831
S 2 (LIL)
.716 .780 .848 .785 .845 .902 .836 .889 .939
S 3 (LIL)
.679 .734 .794 .751 .805 .860 .807 .857 .901
S 4 (LIL)
.731 .794 .853 .770 .833 .889 .811 .869 .918
Note that Ψ T ( µ j ) µ j ≤ Ψ(0) µ j for any T because the function Ψ T (x) = Ψ(K(T )x) is constructed to be non-negative and non-increasing in x.
The case of Ψ being everywhere positive is more complicated because Q 2 can then be almost surely strictly positive. If all µ j parameters are strictly positive, both numerator and denominator in the ratio Q 1 /Q 2 tend to zero in probability. See Appendix B.4 for analysis of the asymptotic properties of the test statistic Q in that case.
Note that the notion of asymptotic test size using lim sup T −→∞ sup G∈Γ 0 P G (Q < α) is stronger than its pointwise version sup G∈Γ 0 lim sup T −→∞ P G (Q < α). See Lehmann and Romano (2005, p. 422) for an illustrating example in which pointwise asymptotic size can be a very poor approximation to the finite sample test size.
This simple average framework is used extensively in recent literature on inference for (unconditional) moment inequality models. See, e.g. Chernozhukov et al. (2007), Romano and Shaikh (2008), Rosen (2008), Andrews and Guggenberger (2009), Andrews and Soares (2010), Andrews and Barwick (2012) and references cited therein.
Note that the vector −δV θ necessarily contains at least one negative element since V is positive definite, θ is a positive vector and δ is a postive scalar.
By taking X ∼ N (0, c) with c = v jj , E( ψ(h T X)) can be computed using numerical integral as ∞ −∞ ψ(h T x)φ(x/ v jj )/ v jj dx.
The probability in the right-hand side of (B.20) may be written as
where
Note that by [U4]-(ii), we have that with T large enough, 0 ≤ C 1,RT ≤ C 1 /(Ψ(0) √ ω ′ ) and
Hence, given z α < 0 and η small enough, the probability (B.21)
Given the fact that β T is non-stochastic with β ′ T V G β T = 1, Assumption [U2] implies that given η, for any ξ > 0, there is a threshold T * (η, ξ) such that for T > T * (η, ξ), the probability (B.22) will be smaller than √ ω ′ )) + ξ + ε from which by letting T −→ ∞ in accordance with T > max{T * (η, ξ), T * * (η, ε)} as the scalars η, ξ and ε approach zero, it follows that

lim sup T −→∞ sup G∈Γ0 P G (Q 1 − z α Q 2 < 0) ≤ α.

C Covariance Singularity Examples

In this appendix section, we present three examples of estimator covariance singularity for which the high level assumptions [D2] and [U4]-(ii) are verified. Recall that G is the joint distribution from which the underlying individual data vector is randomly sampled. Γ is the set of all possible G compatible with presumed specification of the data generating process and Γ 0 is the subset of Γ that satisfies the null hypothesis. All parameter values such as µ and V depend on the point G of evaluation but we keep that implicit to avoid notational clutter.

In the first two examples, the econometric model is initially characterized by an r dimensional
Andrews, D. W. K. and P. J. Barwick (2012), "Inference for Parameters Defined by Moment Inequalities: A Recommended Moment Selection Procedure", Econometrica, forthcoming.
Andrews, D. W. K. and P. Guggenberger (2009), "Validity of Subsampling and Plug-in Asymptotic Inference for Parameters Defined by Moment Inequalities", Econometric Theory, 25, 669-709.
Andrews, D. W. K. and G. Soares (2010), "Inference for Parameters Defined by Moment Inequalities Using Generalized Moment Selection", Econometrica, 78, 119-157.
Chen, L-Y. (2009), Econometric Inference Involving Discrete Indicator Functions: Dynamic Discrete Choice and Multiple Inequality Tests, PhD dissertation, Department of Economics, University College London.
Chen, L-Y. and J. Szroeter (2006), "Constraint Chaining: A New Technique for Testing Multiple One-Sided Hypotheses", working paper, University College London.
Chen, L-Y. and J. Szroeter (2009), "Hypothesis testing of multiple inequalities: the method of constraint chaining", Cemmap working paper CWP13/09, Institute for Fiscal Studies: London.
Chernozhukov, V., H. Hong, and E. Tamer (2007), "Estimation and confidence regions for parameter sets in econometric models", Econometrica, 75, 1243-1284.
Dykstra, R. (1991), "Asymptotic Normality for Chi-Bar-Square Distributions", Canadian Journal of Statistics, 19, 297-306.
Gourieroux, C. and A. Monfort (1995), Statistics and Econometric Models, Vol. 2, Cambridge University Press.
Hansen, P. R. (2003), "Asymptotic Tests of Composite Hypotheses", Economics WP 2003-09, Brown University.
Hansen, P. R. (2005), "A Test for Superior Predictive Ability", Journal of Business and Economic Statistics, 23, 365-380.
Horowitz, J. (1992), "A Maximum Score Estimator for the Binary Response Model", Econometrica, 60, 505-531.
Imbens, G. W. and Manski, C. F. (2004), "Confidence Intervals for Partially Identified Parameters", Econometrica, 72, 1845-1857.
Kodde, D. A. and F. C. Palm (1986), "Wald Criteria for Jointly Testing Equality and Inequality Restrictions", Econometrica, 54, 1243-1248.
Lee, S. and Y-J. Whang (2009), "Nonparametric tests of conditional treatment effects", Cemmap working paper CWP36/09, Institute for Fiscal Studies: London.
Lee, S., Song, K. and Y-J. Whang (2011), "Testing functional inequalities", Cemmap working paper CWP12/11, Institute for Fiscal Studies: London.
Lehmann, E. and Romano, J. P. (2005), Testing Statistical Hypotheses, 3rd ed., New York: Springer.
Linton, O., Song, K. and Y-J. Whang (2010), "An improved bootstrap test of stochastic dominance", Journal of Econometrics, 154, 186-202.
McManus, D. (1991), "Who Invented Local Power Analysis?", Econometric Theory, 7, 265-268.
Menzel, K. (2008), "Estimation and Inference with Many Moment Inequalities", unpublished working paper, Department of Economics, MIT.
Mikusheva, A. (2007), "Uniform Inference in Autoregressive Models", Econometrica, 75, 1411-1452.
Perlman, M. D. (1969), "One-Sided Testing Problems in Multivariate Analysis", Annals of Mathematical Statistics, 40, 549-567.
Robertson, T., F. T. Wright, and R. L. Dykstra (1988), Order Restricted Statistical Inference, New York: Wiley.
Romano, J. P. and Shaikh, A. M. (2008), "Inference for identifiable parameters in partially identified econometric models", Journal of Statistical Planning and Inference, 138, 2786-2807.
Rosen, A. (2008), "Confidence sets for partially identified parameters that satisfy a finite number of moment inequalities", Journal of Econometrics, 146, 107-117.
Silvapulle, M. J. and P. K. Sen (2005), Constrained Statistical Inference, New York: Wiley.
Stoye, J. (2009), "More on Confidence Intervals for Partially Identified Parameters", Econometrica, 77, 1299-1315.
White, H. (2000), "A Reality Check for Data Snooping", Econometrica, 68, 1097-1126.
Wolak, F. (1987), "An Exact Test for Multiple Inequality and Equality Constraints in the Linear Regression Model", Journal of the American Statistical Association, 82, 782-793.
Wolak, F. (1988), "Duality in Testing Multivariate Hypotheses", Biometrika, 75, 611-615.
Wolak, F. (1989), "Testing Inequality Constraints in Linear Econometric Models", Journal of Econometrics, 41, 205-235.
Wolak, F. (1991), "The Local Nature of Hypothesis Tests Involving Inequality Constraints in Nonlinear Models", Econometrica, 59, 981-995.
Case I: All hypothesized intervals are non-degenerate.

For Case I, the null hypothesis concerns only non-degenerate intervals in the sense that u j > l j for all j ≤ r.

Verification of [D2] and [U4]-(ii) for null hypothesis given by Case I: Note that under H 1 , β j < l j or β j > u j for some j ≤ r and thus (C.2) is either θ j or −θ j+r for some j ≤ r. Hence C ′ ∆d(µ) is non-zero and Assumption [D2] holds under the alternative hypothesis. Under the null hypothesis, given that u j > l j for all j, there is some j such that expression (C.3) equals either Ψ(0)θ j or −Ψ(0)θ j+r whenever d(µ) is non-zero. Hence, Assumption [D2] is verified. We now verify the high level assumption [U4]-(ii). Under the null hypothesis, the jth element of C ′ ∆d σ (µ) is Ψ(0)[1{l j + σ ≥ β j ≥ l j }θ j − 1{u j ≥ β j ≥ u j − σ}θ j+r ] (C.4). For σ < min j∈{1,2,...,r} (u j − l j )/2, if d σ (µ) is non-zero, then there is some j such that expression (C.4) equals either Ψ(0)θ j or −Ψ(0)θ j+r and thus C ′ ∆d σ (µ) is a non-zero vector of length which is bounded away from zero by Assumption [U3]-(i). Given the primitive eigenvalue assumption on Ω, this completes verification of Assumption [U4]-(ii).

Case II: At least one hypothesized interval is degenerate.

For Case II, at least one interval is specified to be degenerate (i.e. l j = u j for some j ≤ r) in the null hypothesis. Let S e denote the subset of {1, 2, ..., r} such that l j = u j holds for all j ∈ S e .

Verification of [D2] and [U4]-(ii) for null hypothesis given by Case II: Under H 1 , Assumption [D2] holds by the same arguments as given in Case I. Under H 0 , (C.3) becomes Ψ(0)(θ j − θ j+r ) for all j ∈ S e . In this case, Assumption [D2] still holds but the restriction that θ j ≠ θ j+r for at least one j ∈ S e has to be imposed. This extra restriction guarantees that C ′ ∆d(µ) is not equal to zero for all non-zero d(µ) and thus [D2] is fulfilled. We now verify the high level assumption [U4]-(ii). Note that [U4]-(ii) only concerns the null hypothesis under which (C.4) becomes Ψ(0)(θ j − θ j+r ) for all j ∈ S e . Therefore, provided
| []
|
[
"Generation of Higher Dimensional Modal Entanglement Using a Three Waveguide Directional Coupler",
"Generation of Higher Dimensional Modal Entanglement Using a Three Waveguide Directional Coupler"
]
| [
"Divya Bharadwaj \nDepartment of Physics\nIIT Delhi\n110016New DelhiIndia\n",
"K Thyagarajan \nDepartment of Physics\nIIT Delhi\n110016New DelhiIndia\n",
"Michal Karpinski \nClarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUK\n",
"Konrad Banaszek \nFaculty of Physics\nUniversity of Warsaw\nWarsawPoland\n"
]
| [
"Department of Physics\nIIT Delhi\n110016New DelhiIndia",
"Department of Physics\nIIT Delhi\n110016New DelhiIndia",
"Clarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUK",
"Faculty of Physics\nUniversity of Warsaw\nWarsawPoland"
]
| []
| In this paper, we propose a method for the generation of higher dimensional modal entanglement through type II spontaneous parametric down conversion process using a three waveguide directional coupler in a periodically poled lithium niobate substrate. We show that by a proper design, it is possible to achieve an output state of two photons occupying three different spatial modes. The advantage of using such waveguide structure is its flexibility and the design space availability to achieve desired characteristics of the photon pairs generated in the down conversion process. | 10.1103/physreva.00.003800 | [
"https://export.arxiv.org/pdf/1503.05760v1.pdf"
]
| 118,474,278 | 1503.05760 | ffeed0a96b60ca003628b5b4d71f772d58c72157 |
Generation of Higher Dimensional Modal Entanglement Using a Three Waveguide Directional Coupler
Divya Bharadwaj
Department of Physics
IIT Delhi
110016New DelhiIndia
K Thyagarajan
Department of Physics
IIT Delhi
110016New DelhiIndia
Michal Karpinski
Clarendon Laboratory
University of Oxford
Parks RoadOX1 3PUOxfordUK
Konrad Banaszek
Faculty of Physics
University of Warsaw
WarsawPoland
Generation of Higher Dimensional Modal Entanglement Using a Three Waveguide Directional Coupler
In this paper, we propose a method for the generation of higher dimensional modal entanglement through type II spontaneous parametric down conversion process using a three waveguide directional coupler in a periodically poled lithium niobate substrate. We show that by a proper design, it is possible to achieve an output state of two photons occupying three different spatial modes. The advantage of using such waveguide structure is its flexibility and the design space availability to achieve desired characteristics of the photon pairs generated in the down conversion process.
I. INTRODUCTION
Entangled photons have become one of the most important ingredients in the field of quantum computing, quantum cryptography [1,2] and quantum teleportation [3]. Generation and manipulation of entangled states in different degrees of freedom such as polarization, spatial and spectral have been extensively studied in the literature for various applications in quantum communication protocols such as quantum key distribution (QKD) [4][5], entanglement swapping [6], quantum super dense coding [7] etc. Most of these studies involve two dimensional entanglements. Recently, focus has shifted towards studying higher dimensional entanglement i.e. entanglement in more than two modes. Quantum states with higher dimensional entanglement provide larger information capacity and increased noise threshold in comparison to entangled system in two dimensions [8,9]. In the literature several ways have been proposed to generate higher dimensional entanglement such as higher dimensional entanglement of orbital angular momentum (OAM) of photons [10,11], higher dimensional time-bin entanglement [12] etc.
In this paper, we propose a method to generate a higher dimensional mode entangled photon pairs through type II spontaneous parametric down conversion (SPDC) process in a three waveguide directional coupler. It is shown that by an appropriate design it is possible to achieve an output state of two photons occupying three different spatial modes. This structure has an advantage of flexibility and the design space availability to achieve desired characteristics of the photon pairs generated in the down conversion process in comparison to the multimode single waveguide structure. In addition to that, we also show that with the same structure and quasi phase matched (QPM) grating, it is possible to get an output state of two photons occupying four different possible combinations of three spatial modes. This state corresponds to partial high dimensional entanglement. The partially entangled states find applications in probabilistic teleportation [13,14].
II. PRINCIPLE
We consider a three waveguide directional coupler consisting of three identical single mode waveguides, each of width 'a', separated by a distance 'd' such that the three waveguide coupler supports three normal guided modes (two symmetric and one antisymmetric) referred to as 0, 1 and 2, as shown in Figure 1. We represent the effective indices of the modes at different frequencies by n_{αm}, where α = p, s, i for pump, signal and idler respectively and m = 0, 1, 2 for the three modes. We consider generation of photon pairs using degenerate parametric down conversion with type II phase matching with a horizontally (H) polarized pump, and consider the signal to be horizontally (H) polarized and the idler to be vertically (V) polarized. Using an adiabatic evolution of the waveguide structure from a single waveguide to the three coupled waveguides (see Figure 1), at the pump wavelength it is possible to excite the H-polarization of either the fundamental symmetric mode or the first excited antisymmetric mode of the three waveguide region.
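As an illustrative aside (a minimal sketch, not part of the original analysis; the values of β and κ below are arbitrary placeholders), the three normal modes of such a coupler can be pictured with a simple coupled-mode model: three identical single-mode guides with propagation constant β and nearest-neighbour coupling κ give supermode propagation constants β + √2κ, β and β − √2κ, with symmetric, antisymmetric and second-symmetric amplitude patterns:

import numpy as np

# Coupled-mode sketch of a three-waveguide coupler (illustrative values only).
beta = 10.0    # propagation constant of an isolated guide (arbitrary units)
kappa = 0.2    # nearest-neighbour coupling coefficient (arbitrary units)

# Coupled-mode matrix: only adjacent guides are coupled.
M = np.array([[beta,  kappa, 0.0  ],
              [kappa, beta,  kappa],
              [0.0,   kappa, beta ]])

vals, vecs = np.linalg.eigh(M)
# Eigenvalues come out as beta - sqrt(2)*kappa, beta, beta + sqrt(2)*kappa; the
# corresponding eigenvectors give the amplitudes of the second symmetric (2),
# antisymmetric (1) and fundamental symmetric (0) supermodes on the three guides.
for v, w in zip(vals, vecs.T):
    print(f"supermode beta = {v:.4f}, amplitudes on the three guides = {np.round(w, 3)}")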
The parametric down conversion process from the pump photon to signal idler pair would depend on the phase matching or quasi phase matching condition that is satisfied as well as on the overlap integral between the interacting pump, signal and idler modes. We consider two cases:
A. Pump photon in the fundamental symmetric mode
We first assume the pump to be in the fundamental symmetric mode of the three waveguide coupler. Due to parity conservation, the H-polarized fundamental symmetric pump mode can only down convert to one of the following combinations of pairs of signal and idler modes of orthogonal polarization: Hs0Vi0; Hs1Vi1; Hs0Vi2; Hs2Vi0 and Hs2Vi2, where H and V correspond to horizontal and vertical polarization states, the subscripts s and i correspond to signal and idler, and the integer corresponds to the mode number. The remaining symmetric-antisymmetric signal-idler combinations are not allowed due to the parity conservation requirement [15][16][17].
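A quick way to see this parity selection rule at work (an illustrative sketch only, not the paper's calculation) is to model the three supermode profiles by any convenient set of even/odd functions, for example Hermite-Gauss modes of orders 0, 1 and 2, and check numerically which pump-signal-idler overlap integrals vanish:

import numpy as np

# Illustrative parity check with Hermite-Gauss stand-ins for the supermodes:
# orders 0 and 2 are even, order 1 is odd (the same parity pattern as the coupler modes).
y, dy = np.linspace(-10.0, 10.0, 4001, retstep=True)

def mode(n):
    # H0, H1, H2 (physicists' Hermite polynomials) times a Gaussian envelope, normalized.
    H = {0: np.ones_like(y), 1: 2.0 * y, 2: 4.0 * y**2 - 2.0}[n]
    f = H * np.exp(-y**2 / 2.0)
    return f / np.sqrt(np.sum(f**2) * dy)

pump = mode(0)  # even pump profile, as in case A above
for s in range(3):
    for i in range(3):
        overlap = np.sum(pump * mode(s) * mode(i)) * dy
        print(f"signal mode {s}, idler mode {i}: overlap = {overlap:+.3f}")
# Combinations with odd total parity (e.g. signal 0 with idler 1) integrate to zero,
# which is the parity-conservation selection rule invoked in the text.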
In order to achieve higher dimensional entangled photon pairs, we need to properly design the waveguides and their separation so that three of the above mentioned processes occur with the same probability. In order to show that this is possible, we choose the following three possibilities of down conversion process: : Hs0Vi0; Hs1Vi1 and Hs2Vi2; these have been chosen since in the case of the three waveguide directional coupler all the three processes are characterized by almost the same efficiency. In such a case the output is expected to be an entangled state given by:
|ψ_d⟩ = C_01 |H_s0, V_i0⟩ + C_02 |H_s1, V_i1⟩ + C_03 |H_s2, V_i2⟩    (1)
where, C01, C02 and C03 are constants which are determined by the field overlap integral and the phase matching function. These three processes require quasi phase matching (QPM) nonlinear gratings with spatial frequencies given by: where, ( , ) p s i is the pump (signal, idler) wavelength, 0 j represents grating period for the j th process. Thus when the pump photon (assumed to be horizontally polarized and in 0 spatial mode) is incident on the waveguide, it will down convert via any one of the following three (j=1, 2, 3) processes:
H_p0 → H_s0 + V_i0,   H_p0 → H_s1 + V_i1,   H_p0 → H_s2 + V_i2
The output state will be maximally entangled if the values of the coefficients C_01, C_02 and C_03 (in Eq. (1)) become equal. Since these coefficients depend upon the overlap integrals, the effective indices of the interacting modes at the pump, signal and idler wavelengths and also the phase matching function, it will be shown in Sec. IV that by an appropriate design of the waveguides and their separation it is possible to achieve this condition and thus obtain a higher dimensional entangled state. We will also show in Sec. IV that by an appropriate choice of the waveguide parameters all the spatial frequencies required as per Eq. (2) can be made very close to each other, i.e. K_01 ≈ K_02 ≈ K_03 ≡ K,
thus providing the possibility of all the processes using a single QPM grating.
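For orientation, the kind of arithmetic behind Eq. (2) can be sketched as follows (illustrative only; the effective indices below are made-up placeholders, not the values obtained from the actual Ti:LiNbO3 design in Sec. IV):

import numpy as np

# Hypothetical effective indices for the pump supermode and the three
# signal/idler supermodes (placeholders, not the designed values).
lam_p = 0.675e-6                        # pump wavelength (m)
lam_s = lam_i = 1.350e-6                # degenerate signal/idler wavelengths (m)
n_p0 = 2.20                             # pump, fundamental symmetric supermode
n_s = np.array([2.146, 2.145, 2.144])   # signal supermodes 0, 1, 2 (H pol.)
n_i = np.array([2.205, 2.204, 2.203])   # idler supermodes 0, 1, 2 (V pol.)

beta = lambda n, lam: 2 * np.pi * n / lam
K = beta(n_p0, lam_p) - beta(n_s, lam_s) - beta(n_i, lam_i)   # Eq. (2) for j = 1, 2, 3
Lambda = 2 * np.pi / K                                        # required grating periods

for j, (Kj, Lj) in enumerate(zip(K, Lambda), start=1):
    print(f"process {j}: K = {Kj:.4e} 1/m, grating period = {Lj * 1e6:.2f} um")
# If the three K values come out nearly equal, a single QPM grating can serve
# all three processes, which is the design goal discussed in the text.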
B. Pump photon in the first excited anti symmetric mode
We next consider the pump to be in the first excited mode of the three waveguide coupler. In this case the H-polarized first excited antisymmetric pump mode can only down convert to one of the following combinations of pairs of signal and idler modes of orthogonal polarization: Hs0Vi1; Hs1Vi0; Hs1Vi2 and Hs2Vi1, due to parity conservation. In such a case the output is expected to be an entangled state given by:

|ψ_d⟩ = C_11 |H_s0, V_i1⟩ + C_12 |H_s1, V_i0⟩ + C_13 |H_s1, V_i2⟩ + C_14 |H_s2, V_i1⟩    (3)
These four processes require quasi phase matching (QPM) nonlinear gratings with spatial frequencies given by

K_1j = 2π/Λ_1j = β_p^(1) − β_s^(m_j) − β_i^(m'_j),  with (m_j, m'_j) = (0,1), (1,0), (1,2), (2,1) for j = 1, 2, 3, 4.    (4)

Thus when the pump photon (assumed to be horizontally polarized and in spatial mode 1) is incident on the waveguide, it will down convert via any one of the following four (j = 1, 2, 3, 4) processes:

H_p1 → H_s0 + V_i1,   H_p1 → H_s1 + V_i0,   H_p1 → H_s1 + V_i2,   H_p1 → H_s2 + V_i1
The output state will be maximally entangled if the values of the coefficients C_11, C_12, C_13 and C_14 (in Eq. (3)) become equal. Since these coefficients depend upon the overlap integrals, the effective indices of the interacting modes at the pump, signal and idler wavelengths and also the phase matching function, it will be shown in Sec. IV that by an appropriate design of the waveguides and their separation it is possible to achieve this condition and thus obtain a partial high dimensional entangled state. We will also show in Sec. IV that by an appropriate choice of the waveguide parameters all spatial frequencies required can be made very close to each other, i.e. K_11 ≈ K_12 ≈ K_13 ≈ K_14 ≡ K,
thus providing the possibility of all the processes using a single QPM grating.
III. ANALYSIS
We consider the three waveguide directional coupler in z-cut periodically poled lithium niobate (Figure 2), with the optic axis along the z-axis, so that the H-polarization (along the y-direction) corresponds to the ordinary polarization while the V-polarization (along the z-direction) corresponds to the extraordinary polarization. Here the waveguide width and depth are assumed to be such that the three waveguide coupler supports only the 00, 10 and 20 modes. In order to shorten the symbols, we represent the 00, 10 and 20 modes of the channel waveguide by 0, 1 and 2 respectively. In order to obtain the output down converted quantum state, we need to analyze the normal modes of the three waveguide directional coupler. As described in Appendix A, a standard approximate analysis can be used to obtain the propagation constants as well as the modal field distributions of the H- and V-polarized normal modes of the coupler.
The transverse modal field profile can be described by the following equation:

u_{p(s,i)}^(m)(r) = Y_{p(s,i)}^(m)(y) Z_{p(s,i)}^(0)(z),    (5)

where m represents the mode number and the subscripts represent the pump, signal and idler fields; Y_{p(s,i)}^(m)(y) is the electric field profile of the pump (signal, idler) in the mth mode along the y-direction and Z_{p(s,i)}^(0)(z) is the electric field profile of the pump (signal, idler) fundamental mode along the z-direction. The overall field is written as a product of the y-dependence and z-dependence; the waveguide geometry supports only the 0 order mode along the vertical direction. Now, the electric fields at the pump, signal and idler corresponding to the different modes are represented by the following equations [18]:
E_p^(m) = u_p^(m)(r) A_m cos(β_p^(m) x − ω_p t) = (1/2) u_p^(m)(r) A_m [e^{i(β_p^(m) x − ω_p t)} + e^{−i(β_p^(m) x − ω_p t)}]    (6)

Ê_s = i Σ_m ∫ dω_s [h c ω_s / (2 ε_0 n_{sm}^2 L)]^{1/2} u_s^(m)(r) [â_{sm} e^{i β_s^(m) x} − â†_{sm} e^{−i β_s^(m) x}]    (7)

Ê_i = i Σ_m ∫ dω_i [h c ω_i / (2 ε_0 n_{im}^2 L)]^{1/2} u_i^(m)(r) [â_{im} e^{i β_i^(m) x} − â†_{im} e^{−i β_i^(m) x}]    (8)
where L represents the interaction length, β_{p(s,i)}^(m) is the propagation constant of the pump (signal, idler) in the mth mode, h is Planck's constant, c is the speed of light and ε_0 is the free-space permittivity. We assume the pump to be strong and therefore describe it by a classical field; A_m is the amplitude of the pump in the mth mode. In Eq. (7) and Eq. (8), â and â† represent the annihilation and creation operators of the generated signal (s) and idler (i) photons corresponding to the mth mode. The second order nonlinear polarization in the medium is given by:
P_k^{NL} = 2 ε_0 Σ_{l,m} d_{klm} E_l E_m    (9)
where E_l represents the lth component of the total electric field within the medium. The interaction Hamiltonian is given by

H_int = ∫ U dV,    (10)

where

U = Σ_k ∫ P_k^{NL} dE_k.
In accordance with the interaction picture, the overall output state is given as:
|ψ⟩ = e^{−(i/ħ) H_int t} |0_s, 0_i⟩    (11)

where |0_s, 0_i⟩ corresponds to the vacuum state, i.e. no signal and idler photons.
A. Pump is fundamental symmetric mode:
In this case, the electric field components are given by:
E_x = 0,   E_y = E_p^(0) + E_s^(0) + E_s^(1) + E_s^(2),   E_z = E_i^(0) + E_i^(1) + E_i^(2)    (12)
Using Eq. (9) and (12) in Eq. (10) and considering only the quasi phase matched processes since other terms are negligibly small as compared to the phase matched terms, we get interaction Hamiltonian as:
H_int = 4 ε_0 d_24 ∫ [E_p^(0) E_s^(0) E_i^(0) + E_p^(0) E_s^(1) E_i^(1) + E_p^(0) E_s^(2) E_i^(2)] dV    (13)
Using the interaction picture (Eq. (11)) we can obtain the overall output quantum state (neglecting the vacuum state contribution) as:
|ψ⟩ ∝ C_01 |H_s0, V_i0⟩ + C_02 |H_s1, V_i1⟩ + C_03 |H_s2, V_i2⟩, with C_0j ∝ I_0j F_0j, where F_0j = sinc(Δk_0j L/2) is the phase matching function and Δk_0j is the phase mismatch given by:

Δk_01 = β_p^(0) − β_s^(0) − β_i^(0) − K_01,
Δk_02 = β_p^(0) − β_s^(1) − β_i^(1) − K_02,
Δk_03 = β_p^(0) − β_s^(2) − β_i^(2) − K_03.    (16)

I_0j represents the overlap integral between the pump, signal and idler of the j = 1, 2 and 3 processes, and is given by:

I_01 = ∫∫ u_p^(0) u_s^(0) u_i^(0) dy dz,   I_02 = ∫∫ u_p^(0) u_s^(1) u_i^(1) dy dz,   I_03 = ∫∫ u_p^(0) u_s^(2) u_i^(2) dy dz.    (17)

B. Pump is first excited antisymmetric mode:

In this case, the electric field components are given by:

E_x = 0,   E_y = E_p^(1) + E_s^(0) + E_s^(1) + E_s^(2),   E_z = E_i^(0) + E_i^(1) + E_i^(2)    (18)
Using Eq. (9), (10), (18) and considering only the phase matched processes as other terms are negligibly small as compared to the phase matched terms, we get interaction Hamiltonian as:
H_int = 4 ε_0 d_24 ∫ [E_p^(1) E_s^(0) E_i^(1) + E_p^(1) E_s^(1) E_i^(0) + E_p^(1) E_s^(1) E_i^(2) + E_p^(1) E_s^(2) E_i^(1)] dV    (19)
Using the interaction picture (Eq. (11)) the output state is given as:
|ψ⟩ ∝ C_11 |H_s0, V_i1⟩ + C_12 |H_s1, V_i0⟩ + C_13 |H_s1, V_i2⟩ + C_14 |H_s2, V_i1⟩, with C_1j ∝ I_1j F_1j,    (22)

where I_1j represents the overlap integral between the pump, signal and idler of the j = 1, 2, 3 and 4 processes, and is given by:

I_11 = ∫∫ u_p^(1) u_s^(0) u_i^(1) dy dz,   I_12 = ∫∫ u_p^(1) u_s^(1) u_i^(0) dy dz,   I_13 = ∫∫ u_p^(1) u_s^(1) u_i^(2) dy dz,   I_14 = ∫∫ u_p^(1) u_s^(2) u_i^(1) dy dz.
The efficiencies of the four down conversion processes for j = 1, 2, 3 and 4 are proportional to |F_1j|².
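As a side illustration of how such phase-matching factors behave (a generic sketch, not the paper's full calculation; the mismatch values below are arbitrary), the normalized efficiency of a quasi-phase-matched process varies with the residual mismatch Δk as sinc²(ΔkL/2):

import numpy as np

# Generic sinc^2 phase-matching curve for an interaction length L.
L = 2.55e-3                                   # interaction length (m), as used in Sec. IV
dk = np.linspace(-6e3, 6e3, 7)                # residual phase mismatch values (1/m), illustrative
eff = np.sinc(dk * L / (2 * np.pi))**2        # np.sinc(x) = sin(pi x)/(pi x), so this is sinc^2(dk*L/2)

for d, e in zip(dk, eff):
    print(f"dk = {d:+.0f} 1/m  ->  normalized efficiency = {e:.3f}")
# The efficiency peaks at dk = 0 (perfect quasi-phase matching) and its width
# scales inversely with L.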
In Figure 2, we have shown that the three waveguides are again recombined into a single multimode waveguide supporting three normal modes. As the transformation from the triplet of channels to the output three-moded waveguide is unitary, the fundamental symmetric (0), the first excited antisymmetric (1) and the second excited symmetric (2) normal modes at the signal and idler wavelengths would respectively excite the fundamental, first excited antisymmetric and second excited symmetric modes of the three-moded waveguide. Thus in the three-moded single waveguide region the modal entanglement would be preserved, but now, instead of being expressed in terms of the normal modes of the three coupled waveguides, it will be modal entanglement among the three normal modes of the output three-moded waveguide. If required, the three modes can be spatially separated into three different output waveguides by using an asymmetric three waveguide splitter in which all three waveguides are single-moded but with different propagation constants (which can be easily obtained by choosing different widths for the three waveguides). In such a device, which is an extension of the asymmetric Y-splitter used in integrated optics, the fundamental symmetric mode at both the signal and the idler wavelengths will exit from the waveguide having the highest propagation constant (uppermost waveguide in Figure 3), the second excited symmetric mode will exit from the waveguide having the lowest propagation constant (lowest waveguide in Figure 3), while the first excited antisymmetric mode will exit from the middle waveguide, having a propagation constant in between the other two. This way the modes in which the photons are generated can be separated spatially, leading to path entangled photon pairs.
If we number the output waveguides shown in Figure 3 as I, II and III, then the output quantum state for the first case would be described by Eq. (24), and for the second case by Eq. (25). When the photons are separated into three output waveguides, each output waveguide will have an intensity profile characteristic of that waveguide, and the quantum state of the output would be described by Eqs. (24) and (25), from which the probability of finding the photons in any of the waveguides with a particular polarization can be easily determined. It may also be worth mentioning here that the photons are separated at the output by their polarizations. That is, polarization defines two subsystems (we have one photon in each polarization), and sets of mutually orthogonal spatial modes define the Hilbert spaces of the individual subsystems.
IV. NUMERICAL SIMULATIONS
In order to show the feasibility of the idea, we consider generation of entangled photons in a directional coupler device using titanium-indiffused lithium niobate channel waveguides. For the analysis we assume the waveguides to have a step index profile. The values of the lithium niobate substrate refractive index (n_s) for different wavelengths and different polarizations were calculated using the Sellmeier equation given in Ref. [19]. The refractive index difference (Δn) for a waveguide has been calculated using Ref. [20]. We have carried out numerical simulations and optimization of the waveguide parameters assuming a pump wavelength of 675 nm.
We consider three identical channel waveguides of width 6 µm and depth 7 µm separated by a distance of 6 µm. The propagation constants of the normal modes of the three waveguide directional coupler can be obtained by solving the eigenvalue equation, which is obtained by writing the fields in all regions and applying appropriate boundary conditions. After calculating the propagation constants of the normal modes at the pump, signal and idler wavelengths, we can obtain the field distributions of these modes. Details are given in the Appendix. Figure 4 shows the transverse electric field pattern of the three normal modes along the y-direction at pump, signal and idler for the three processes involved when the H-polarized fundamental symmetric mode is incident on the waveguide at a pump wavelength of 675 nm. For a waveguide width of a = 6 µm, depth = 7 µm and separation d = 6 µm, the overlap integrals of all three processes are almost equal and the grating spatial frequency required is 0.9074 µm⁻¹. Figure 6 shows the variation of the down conversion efficiency vs signal wavelength and QPM grating for L = 2.55 mm for all three processes. From Figure 6(a), it can be seen that the three curves intersect at a signal wavelength of 1350 nm. Thus by using a narrow band wavelength filter at the signal wavelength of 1350 nm, we can obtain higher dimensional mode entangled photon pairs at the output. The role of the filter is two-fold: not only is it used to select the point where equal conversion efficiencies are achieved, but also to remove spectral correlations between the signal and idler photons, which could lead to a significant reduction of entanglement quality [21].
A. Pump is in the fundamental symmetric mode
In order to show the dependence on the QPM grating period, in Figure 6(b) we have plotted the variation of efficiency with QPM grating spatial frequency K keeping the signal and idler wavelength to be 1350 nm.
It can be seen from Figure 6(b) that at a QPM grating spatial frequency K = 0.9074 µm⁻¹ all three processes intersect and have the same efficiency. Thus, using a single grating of period Λ = 6.92 µm in the three waveguide coupler and an appropriate wavelength filter, we can obtain high dimensional entangled photon pairs at the output.
We have found that for an error in the grating period of ± 50 nm, the signal wavelength changes by ± 5 nm and to achieve degeneracy we may tune the pump wavelength by ±2.5nm to obtain the maximally entangled state. Figure 7 shows the variation of the efficiency of the three possible down conversion processes as a function of the signal wavelength with slightly different grating periods. As can be seen from Figure 7, the curves still intersect at one value of signal wavelength which changes with the period. However choosing an appropriate wavelength filter can provide us with mode entangled degenerate pairs of photons even with slight errors in the grating period. Thus small errors in the grating period can be taken into account by a small tuning of the pump wavelength so that degenerate mode entangled photon pairs are generated.
B. Pump is in the first excited antisymmetric normal mode
When the H-polarized first excited antisymmetric normal mode is incident on the waveguide at a pump wavelength of 675 nm, it down converts via the four processes described in Sec. II through a degenerate type II SPDC process. The squares of the normal modes of the three waveguide coupler involved in these processes are shown in Figure 8. In this case also, with the same waveguide parameters and crystal length, the overlap integrals of all four processes become almost equal. Figure 9 shows the variation of the down conversion efficiency vs signal wavelength and QPM grating for L = 2.55 mm for all four processes. In this case also, it can be seen from Figure 9(a) that the four curves intersect at a signal wavelength of 1350 nm, but have a broader spectrum as compared to the first case. Thus by using a wavelength filter around the signal wavelength of 1350 nm, we can obtain partial higher dimensional mode entangled photon pairs at the output.
In order to show the dependence on the QPM grating period, in Figure 9(b) we have plotted the variation of efficiency with QPM grating spatial frequency K keeping the signal and idler wavelengths at 1350 nm. It can be seen from Figure 9(b) that at a QPM grating spatial frequency K = 0.9074 µm⁻¹ all four processes intersect and have the same efficiency. Thus, using a single grating of period Λ = 6.92 µm in the three waveguide coupler, we can obtain partial high dimensional entangled photon pairs at the output.
We would like to mention that the modal entanglement properties of the photon pair could be demonstrated by full quantum state tomography applied to the spatial degree of freedom. One could use for example measurement of the Wigner function demonstrated in Ref [22]. This scheme has direct extension to more than one photon and could be used to reconstruct the complete quantum state of the two-photon system.
Another route could be to use a carefully selected set of mutually non-orthogonal bases which would demonstrate entanglement through e.g. violation of Bell's inequalities. This has been done in the case of orbital angular momentum [11,23] and could be extended to other sets of modes with the help of spatial light modulators.
V. CONCLUSION
We have shown that by an appropriate design of a three waveguide directional coupler it is possible to generate a mode entangled state in a higher dimensional space. Also, with the same device and the same grating period, we can obtain partial high dimensional entanglement with more tolerance. The advantage of using this waveguide structure is the flexibility and the design space availability to achieve desired characteristics of the photon pairs generated in the down conversion process. Further investigations of coupled waveguide structures are expected to provide the possibility of designing waveguide devices generating a larger range of entangled and hyper entangled photon pairs.
u_j^(mn)(r) = Y_j^(m)(y) Z_j^(n)(z), where Y_j^(m)(y) is the field pattern of the j-polarized mth mode of the waveguide structure along the y-direction and Z_j^(n)(z) is the field pattern of the j-polarized nth mode of the waveguide structure along the z-direction.

Thus, for calculating the propagation constant of the y-polarized mn mode, we solve the set of eigenvalue equations of the TM mode along the y-direction and the TE mode along the z-direction. Now, for calculating the propagation constant of the z-polarized field, we solve the set of eigenvalue equations corresponding to the TE mode along the y-direction and the TM mode along the z-direction.
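To give a concrete flavour of the kind of eigenvalue problem referred to here (an illustrative sketch only, for a single symmetric step-index slab rather than the full three-guide structure, and with made-up index values), the even guided TE modes along one direction satisfy the standard transcendental relation u tan u = w with u² + w² = V², which can be solved numerically:

import numpy as np
from scipy.optimize import brentq

# Illustrative single-slab TE eigenvalue solve (not the full coupled structure);
# all numerical values here are made-up placeholders.
wavelength = 1.35e-6            # m
n_core, n_clad = 2.145, 2.141   # hypothetical core/cladding indices
d = 6e-6                        # slab width (m)
k0 = 2 * np.pi / wavelength
V = 0.5 * d * k0 * np.sqrt(n_core**2 - n_clad**2)   # normalized frequency

# Even TE modes satisfy u*tan(u) = w with u^2 + w^2 = V^2,
# where u = kappa*d/2 and w = gamma*d/2.
f = lambda u: u * np.tan(u) - np.sqrt(V**2 - u**2)
u0 = brentq(f, 1e-9, min(V, np.pi / 2) - 1e-9)

kappa = 2 * u0 / d
beta = np.sqrt((k0 * n_core)**2 - kappa**2)          # propagation constant
print("V =", round(V, 3), " effective index =", round(beta / k0, 6))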
FIG. 1: (Color online) Schematic diagram of an array of three identical single mode waveguides and the field distributions of the three normal modes.

FIG. 2: (Color online) Schematic of the channel waveguide of the three waveguide coupler having a QPM grating of period Λ.
FIG. 3: (Color online) Schematic of the waveguide structure describing how the modes are separated at the output.

FIG. 4: (Color online) Normalized transverse modal field distributions along the y-direction in a three waveguide coupler: (a) fundamental symmetric normal mode (0) at the pump, signal and idler wavelengths; (b) fundamental symmetric normal mode (0) at the pump wavelength and the first excited antisymmetric normal mode (1) at the signal and idler wavelengths; (c) fundamental symmetric normal mode (0) at the pump wavelength and the second excited symmetric normal mode (2) at the signal and idler wavelengths.

FIG. 5: (Color online) Square of the normalized transverse modal intensity distributions along the y-direction in a three waveguide coupler: (a) fundamental symmetric normal mode (0) at the pump, signal and idler wavelengths; (b) fundamental symmetric normal mode (0) at the pump wavelength and the first excited antisymmetric normal mode (1) at the signal and idler wavelengths; (c) fundamental symmetric normal mode (0) at the pump wavelength and the second excited symmetric normal mode (2) at the signal and idler wavelengths.

FIG. 6: (Color online) Variation of the efficiency of the three down conversion processes (a) as a function of signal wavelength; (b) as a function of QPM grating spatial frequency.

FIG. 7: (Color online) Variation of the efficiency of the three down conversion processes as a function of signal wavelength for grating periods (a) Λ = 6.874 µm; (b) Λ = 6.974 µm.

FIG. 8: (Color online) Square of the normalized transverse modal field distributions along the y-direction in a three waveguide coupler: (a) first excited antisymmetric normal mode (1) at the pump and idler wavelengths and the fundamental symmetric normal mode (0) at the signal wavelength; (b) first excited antisymmetric normal mode (1) at the pump and signal wavelengths and the fundamental symmetric normal mode (0) at the idler wavelength; (c) first excited antisymmetric normal mode (1) at the pump and signal wavelengths and the second excited symmetric normal mode (2) at the idler wavelength; (d) first excited antisymmetric normal mode (1) at the pump and idler wavelengths and the second excited symmetric normal mode (2) at the signal wavelength.

FIG. 9: (Color online) Variation of the efficiency of the four down conversion processes (a) as a function of signal wavelength; (b) as a function of QPM grating spatial frequency.
ACKNOWLEDGEMENTS

The authors wish to thank Dr. Rafal Demkowicz-Dobrzanski, Faculty of Physics, University of Warsaw, Poland for technical discussions. This research was partially supported by the "Polish NCBiR under the ERA-NET CHIST-ERA project QUASAR".

APPENDIX-A

Consider a three waveguide coupler as shown in Figure 10(a), with its corresponding separable refractive index profile shown in Figure 10(b). The refractive index profile corresponding to the above structure can be written in separable form as the sum of profiles along the y- and z-directions. This profile correctly represents the refractive index profile of the three channel waveguide coupler in every region except in the corner regions, where the two profiles differ by a small amount; for the correction in the corner regions we have used the perturbation method [24]. The transverse modal field profile in the three waveguide coupler is then separable: the y-polarized field corresponds to a TM mode in the y-direction and a TE mode in the z-direction, whereas the z-polarized field corresponds to a TE mode in the y-direction and a TM mode in the z-direction, and the propagation constants are obtained by solving the corresponding eigenvalue equations.
[1] A. V. Sergienko, M. Atatüre, Z. Walton, G. Jaeger, B. E. A. Saleh, and M. C. Teich, Phys. Rev. A 60, R2622 (1999).
[2] E. Knill, R. Laflamme, and G. J. Milburn, Nature (London) 409, 46 (2001).
[3] D. Bouwmeester, J. Pan, K. Mattle, M. Eibl, H. Weinfurter, and A. Zeilinger, Nature (London) 390, 575 (1997).
[4] T. Jennewein, C. Simon, G. Weihs, H. Weinfurter, and A. Zeilinger, Phys. Rev. Lett. 84, 4729 (2000).
[5] D. S. Naik, C. G. Peterson, A. G. White, A. J. Berglund, and P. G. Kwiat, Phys. Rev. Lett. 84, 4733 (2000).
[6] E. Megidish, A. Halevy, T. Shacham, T. Dvir, L. Dovrat, and H. S. Eisenberg, Phys. Rev. Lett. 110, 210403 (2013).
[7] K. Mattle, H. Weinfurter, P. G. Kwiat, and A. Zeilinger, Phys. Rev. Lett. 76, 4656 (1996).
[8] H. Bechmann-Pasquinucci and W. Tittel, Phys. Rev. A 61, 062308 (2000).
[9] N. J. Cerf, M. Bourennane, A. Karlsson, and N. Gisin, Phys. Rev. Lett. 88, 127902 (2002).
[10] A. Mair, A. Vaziri, G. Weihs, and A. Zeilinger, Nature 412, 313-316 (2001).
[11] A. Vaziri, G. Weihs, and A. Zeilinger, Phys. Rev. Lett. 89, 240401 (2002).
[12] R. Thew, A. Acin, H. Zbinden, and N. Gisin, Quantum Inf. Comput. 4, 093-101 (2004), arXiv:quant-ph/0307122.
[13] W.-L. Li, C.-F. Li, and G.-C. Guo, Phys. Rev. A 61, 034301 (2000).
[14] Wei Jia-Hua, Dai Hong-Yi, and Zhang Ming, Commun. Theor. Phys. 60, 651-657 (2013).
[15] D. B. Anderson, J. T. Boyd, and J. D. McMullen, Dielectric waveguide phase matching of infrared parametric interactions, in Proceedings of the Symposium on Submillimeter Waves, Vol. 20, edited by J. Fox (Polytechnic Institute of Brooklyn Press, New York, 1971), pp. 191-210.
[16] M. G. Roelofs, A. Suna, W. Bindloss, and J. D. Bierlein, J. Appl. Phys. 76, 4999 (1994).
[17] M. Karpinski, C. Radzewicz, and K. Banaszek, Appl. Phys. Lett. 94, 181105 (2009).
[18] K. Thyagarajan, J. Lugani, S. Ghosh, K. Sinha, A. Martin, D. B. Ostrowsky, O. Alibart, and S. Tanzilli, Phys. Rev. A 80, 052321 (2009).
[19] D. E. Zelmon, D. L. Small, and D. Jundt, J. Opt. Soc. Am. B 14, 3319-3322 (1997).
[20] S. Fouchet, A. Carenco, C. Daguet, R. Guglielmi, and L. Riviere, IEEE J. Light. Tech. 5, 700 (1987).
[21] A. B. U'Ren, K. Banaszek, and I. A. Walmsley, Quantum Information and Computation 3, 480 (2003), arXiv:quant-ph/0305192.
[22] E. Mukamel, K. Banaszek, I. A. Walmsley, and C. Dorrer, Optics Letters 28, 1317-1319 (2003).
[23] A. C. Dada, J. Leach, G. S. Buller, M. J. Padgett, and E. Andersson, Nature Physics 7, 677-680 (2011).
[24] A. Kumar, K. Thyagarajan, and A. K. Ghatak, Opt. Lett. 8, 63-65 (1983).
| []
|
[
"Chaos in Mean Motion Resonances of the Kuiper Belt",
"Chaos in Mean Motion Resonances of the Kuiper Belt"
]
| [
"Fred A Franklin \nHarvard--Smithsonian Center for Astrophysics\n\n",
"Paul R Soper \nHarvard--Smithsonian Center for Astrophysics\n\n"
]
| [
"Harvard--Smithsonian Center for Astrophysics\n",
"Harvard--Smithsonian Center for Astrophysics\n"
]
| []
| A recent paper of ours[2012[ , arXiv: 1207] claimed that some level of population density in the outer Kuiper belt, i.e., the sparsely populated region beyond the 1/2 mean motion resonance (mmr) with Neptune at 47.8 AU, was likely to endure for two separate reasons: 1) bodies captured into high--order mmrs during Neptune's outward migration, but now lying there, were at least semi-permanent members or, more specifically, a realistic model showed that only 2 bodies from a total of the 43 captured had escaped from 6 mmrs [from 6/13, at 50.4 AU, 5/11, 4/9, 3/7, 2/5, out to 1/3 at 62.6 AU] over integration times of 4.6 byr and 2) the many other outer belt bodies that were not in resonance, originated as escapers from the less retentive inner belt [a = or < a(1/2)] mmrs. This class, most often with large eccentricities, e, and inclinations, i, remained temporarily in this belt, likely to be replaced later or into the future by additional escapes from the same sources. We now want to add that, while point 1) remains intact, 2) could use an extension because in Paper 1 we focused chiefly on escapes from the 1/2 resonance and not so much on other mmrs, in particular none of the other n/n+1 ones. These have an observational interest as some of them, 2/3, 3/4 and 4/5 have librating members, while asFig. 1indicates, still higher n/n+1 ratios do not. There is therefore some value in calculating their ability to capture bodies and to inquire into the lifetimes of those captured. We already know that questions of stability generally deal with an on--going process: many captured bodies later escape, reaching the outer belt and elsewhere. Here we approach stability by evaluating the degree of chaos in many mmrs, but we have not carried through very long term integrations. By way of a summary, we can recap a few results from Paper 1 where sets of 500 bodies were placed for possible capture chiefly into mmrs between 1/2 and 1/3 as Neptune migrated by 7 AU over 10 myr. Of 69 with e(o) < 0.15 captured into 1/2, 31 escaped over 4.6 byr, with 10 of these doing so 'moderately recently', after 3 byr and 3 of those after 4.2 byr. Fully 2/3rds of the 38 remaining move in distinctly chaotic orbits, arguing that escapes will continue. Adding to the escapers from 1/2 were 9 others from a total of the 40 captured into higher inner belt mmrs, 3 each from 3/5 and 4/7 over times from 3.1 to 4.6 byr. Nearly all these escaping bodies occasionally moved into the region beyond a(1/2), for a very wide range of durations but averaging near 300 myr. It is on this basis that we concluded that the outer belt has been, and will in the future continue to be, well--stocked and that such bodies can eventually also make excursions into other parts of the solar system. Our plan in this paper is to expand some of these efforts to include the other n/n+1 | null | [
"https://arxiv.org/pdf/1403.5138v1.pdf"
]
| 117,052,619 | 1403.5138 | 8693e2d52ada804f33c8cc2f0f5730f1fb559dd3 |
Chaos in Mean Motion Resonances of the Kuiper Belt
Fred A Franklin
Harvard--Smithsonian Center for Astrophysics
Paul R Soper
Harvard--Smithsonian Center for Astrophysics
Chaos in Mean Motion Resonances of the Kuiper Belt
A recent paper of ours[2012[ , arXiv: 1207] claimed that some level of population density in the outer Kuiper belt, i.e., the sparsely populated region beyond the 1/2 mean motion resonance (mmr) with Neptune at 47.8 AU, was likely to endure for two separate reasons: 1) bodies captured into high--order mmrs during Neptune's outward migration, but now lying there, were at least semi-permanent members or, more specifically, a realistic model showed that only 2 bodies from a total of the 43 captured had escaped from 6 mmrs [from 6/13, at 50.4 AU, 5/11, 4/9, 3/7, 2/5, out to 1/3 at 62.6 AU] over integration times of 4.6 byr and 2) the many other outer belt bodies that were not in resonance, originated as escapers from the less retentive inner belt [a = or < a(1/2)] mmrs. This class, most often with large eccentricities, e, and inclinations, i, remained temporarily in this belt, likely to be replaced later or into the future by additional escapes from the same sources. We now want to add that, while point 1) remains intact, 2) could use an extension because in Paper 1 we focused chiefly on escapes from the 1/2 resonance and not so much on other mmrs, in particular none of the other n/n+1 ones. These have an observational interest as some of them, 2/3, 3/4 and 4/5 have librating members, while asFig. 1indicates, still higher n/n+1 ratios do not. There is therefore some value in calculating their ability to capture bodies and to inquire into the lifetimes of those captured. We already know that questions of stability generally deal with an on--going process: many captured bodies later escape, reaching the outer belt and elsewhere. Here we approach stability by evaluating the degree of chaos in many mmrs, but we have not carried through very long term integrations. By way of a summary, we can recap a few results from Paper 1 where sets of 500 bodies were placed for possible capture chiefly into mmrs between 1/2 and 1/3 as Neptune migrated by 7 AU over 10 myr. Of 69 with e(o) < 0.15 captured into 1/2, 31 escaped over 4.6 byr, with 10 of these doing so 'moderately recently', after 3 byr and 3 of those after 4.2 byr. Fully 2/3rds of the 38 remaining move in distinctly chaotic orbits, arguing that escapes will continue. Adding to the escapers from 1/2 were 9 others from a total of the 40 captured into higher inner belt mmrs, 3 each from 3/5 and 4/7 over times from 3.1 to 4.6 byr. Nearly all these escaping bodies occasionally moved into the region beyond a(1/2), for a very wide range of durations but averaging near 300 myr. It is on this basis that we concluded that the outer belt has been, and will in the future continue to be, well--stocked and that such bodies can eventually also make excursions into other parts of the solar system. Our plan in this paper is to expand some of these efforts to include the other n/n+1
mmrs and especially 2/3 where the overall majority of librating bodies, now numbering about 200 in that mmr, are currently known. One goal is to account for the distribution in the inner Kuiper belt, a < a(1/2) and to check for other bodies likely to escape. To apply this step means folding in two other topics: 1) the likely distribution of KBO's before Neptune's migration and 2) estimates of capture probabilities. Figures 1 and 2 begin this study by plotting the e's as a function of semimajor axis, a, from a = 32 to 66 AU of some of the 550 Kuiper belt bodies, observed for 4 or more oppositions, known in that region as of early 2013. Figure 1 shows the population in and around 6 first--order mmrs, 6/7 to 1/2, indicating an absence of members in the 5/6 mmr and no more than 2 [with e's > 0.3] in 6/7, and the observed eccentricity ranges of bodies in the four others. [Here we define a mmr by the mean motion ratio, KBO/Neptune.] For a comparison, Figs. 3 to 8 measure the degree of chaos in each of the six. These figures also indicate the e's where secondary resonances lie. The latter occur when the libration frequency is commensurate with a term in the apsidal frequency and an equivalence of the two, which corresponds to an overlap of two resonances, develops chaotic behavior. Circled crosses denote orbits that have escaped from a mmr, but we have not carried out a systematic survey looking for escapes from all of them, nor integrated most orbits beyond 500 myr. Integrations over the solar system's age for all orbits with Lyapunov times given by log T(L) < 3 or even 3.5 to establish a better link between chaos and escape must wait for another time. The Lyapunov time used here has been defined in earlier papers and is measured in orbital periods of Neptune. For the present we can assume with some assurance that orbits with log T(L) < 3 are unlikely to persist today. A quick and slightly extrapolated summary based on Figs. 3 to 8 concludes that 1/2 can contain quite regular orbits in the range 0.1 < e < 0.35 and that 2/3 shows regularity at e < 0.05 and between 0.1 < e < 0.33, but this resonance has definite chaotic regions 0.07 < e < 0.1 and beyond e 0.33. At the higher e's stable librations are confined to amplitudes of only a few degrees. Two other mmrs, 3/4 and 4/5, show stable orbits up to e 0.17 in the latter and to nearly 0.24 in the former, though both develop narrow chaotic zones at an individual secondary resonances. The ability of a mmr to prevent close approaches to the primary begins to fail for larger values of e. This is especially true at 5/6 where only a narrow e range up to 0.07 corresponds to long lasting orbits. It was therefore a surprise to find a more extended stable area at 6/7, cf. Fig. 8, up to e 0.14 but the 2 bodies near 6/7 in Fig. 1 with large e's must be only temporarily in resonance. At still higher ratios, 7/8 and 8/9 show no sign of regularity beyond e = 0.04. This set of figures calls for two other remarks: 1) the effect of secondary resonances becomes most pronounced when the apsidal motion, dw/dt, is anticlockwise and 2) P(w) replaces P*(w) on Figs. 6 and 7 because here the secondary resonance is a commensurability between the libration period P*(l) and P(w) itself rather than a shorter period component, P*(w). Figure 2 also identifies unstable regions at 4 higher order mmrs in the outer belt, a > 47.8 AU, by providing values of e at which orbits show 2 levels of chaos. 
Solid horizontal lines lie where log T(L) falls to 3 and the dashed one where it reaches 3.5. The length of these lines, 1 AU, is a generous estimate of the total uncertainty of current semimajor axes, an estimate that also applies to bodies in Fig. 1. Quite regular orbits can exist for e's less than the dashed marker. In all cases in this study, we have probed to locate the least chaotic orbits. That is, at any a and e, we could alter initial angular variables so as to find more chaotic, even colliding orbits, but we have plotted the most nearly regular ones. For mmrs, this means searching for the most stable librations. But there is no reason to suppose that all captured bodies will always find their way quickly into the most stable configuration.
Some Conclusions from Figures 1 to 8
The next paragraphs consider these figures after dividing them into two groups----the first one discusses the implications derivable from those with ratios equal to or higher than 4/5 plus a few remarks about 3/4, before then comparing 2/3 with 1/2. En route we shall introduce a new set of capture probabilities, P(c). The absence of bodies at 5/6 and the higher mmrs might be [poorly] accounted for if the primordial belt began where Neptune's migration ceased to move the 5/6 mmr outward---therefore near 34 AU. But rather than arbitrarily placing a supposed boundary that left very few objects for later capture, a sounder way is to start by considering capture probabilities at these higher ratios. This plan is fueled by the knowledge that, while Jupiter readily captures at 2/1 and 3/2 in the asteroid belt [and equally well expels one--time captures from the former thanks to chaos owing to resonance overlap], the probability of capture at 4/3 is very low. We found in Paper 1 for Neptune and the Kuiper belt, that P(c) at 1/2 lies near 25%, for a migration time of 10 myr with the range depending somewhat upon the e(o)'s of bodies encountered.
[Later when we compare 2/3 and 1/2, we consider also the finding that P(c)s may depend on migration time as well.] For this paper, we again use an e--folding migration time of 10 myr and have obtained 2 sets of P(c)s for the following first-order mmrs, based on an integration of 400 bodies initially in the ranges 27.5 < a(o) < 39.5, 0.02 < e(o) < 0.08 and i(o)'s < 8 deg.

Table I. Capture probabilities for a migration time, T(M) = 10 million yr. P(c) in %, plus number of bodies in ( ).

Resonance   Initial Capture, P(c,s)   Remaining after 200 myr, P(c,l)
6/7         7 (6)                     <1 (1)
5/6         3 (3)                     (0)
4/5         9 (12)                    1 (2)
3/4         9.7 (18)                  6 (12)
2/3         18.6 (46)                 14.6 (36)

increase e by > 0.10, or equally, capture occurred at a smaller e(o) earlier in the outward migration, since a(o) for 3/4 = 28 AU when Neptune moves by 7 AU. The 2/3 mmr is an exciting and abundant place with a total of some 200 bodies, including also Pluto, lying within or very near this mmr. A comparison of Fig. 4 with Fig. 1, where the former shows a chaotic region, owing to secondary resonance near e = 0.08, and an instance of escape from there at 330 myr, suggests a dynamical reason why the population in Fig. 1 at e < 0.10 is somewhat reduced. The same comparison argues that escapes have diminished the number of bodies with e > or 0.33 and also adds to the real possibility of future departures.
To turn now to other characteristics of 2/3 via a comparison with 1/2: Figs. 3 and 4 show some similarities but Paper 1 showed that at 1/2 the chaos is generated by the travelling of secondary resonances arising from variability in the libration frequency rather than changes in the apsidal terms and that this condition extends the chaotic region throughout the region, e < 0.1. Severe chaos leads to escape in both for e's much above 0.3 at 2/3 and above 0.35 for 1/2. At the same time there are differences between the stability diagram in Fig. 3 and the observations in Fig. 1. First, an explanation for the paucity of bodies in Fig. 1 for e < 0.1 at 1/2 lies in a choice between dynamical instability acting on captured bodies or the absence of bodies available for capture. Either option is likely enough----the second just requiring very few objects initially lying a few AUs less than a(1/2) now at 47.8. But the reason why the population in 1/2 is much lower than in 2/3 above e ~ 0.2 is not so easily settled. To examine further, consider Table II that compares the currently known populations of these two mmrs and also of 2/5 [to be referred to later] over the constant range of de = 0.2 where reasonably regular orbits exist and also where the number of captures would be the same if we assume that the initial volume density was equal for all of them. The N's are based on elements, available from the Minor Planet Center as of December, 2013, with a semimajor axis range corresponding to objects lying within +/--0.5 AU of the mmr. This approximate figure covers both observational uncertainty and a measure of the normal amplitude of libration. To prepare the N's for comparisons, we begin by making full use of capture probabilities, P(c), to predict an expected value for the 2/3 vs 1/2 ratio. We have already mentioned a result from Paper 1 of a P(c) for 1/2 after 4.6 byr of 21% for the range 0.04 < e(o) < 0.10 and T(M) = 10 myr. In this paper we have obtained for 2/3 in the similar range, 0.02 < e(o) < 0.08, values of 19%, and later 13%, first after 2 T(M) and then 2 byr. The comparison is not exact, but we can reasonably assume that 1/2 is half--again more likely to capture and retain than is 2/3, and therefore to determine their final population ratio, provided that both scanned through an initially equivalent particle field. A prediction using just these P(c)s would then provide the result obtained by an observer situated in the outer solar system, equally distant from both of them. For a terrestrial observer to predict what the locally measured ratio should be requires a correction because, relative to 2/3, a larger number remain 'undiscovered' in the more distant 1/2 resonance. This correction depends on the <a>s and <e>s of the two and is given approximately by the 4th power of the mean perihelion distance where most discoveries are made, the Q's listed in Table II. It turns out also to be 1.5, arguing that the observed numbers for 1/2 relative to 2/3 should be reduced by the same factor as had determined the former's higher P(c). We are left with the clear conclusion that we should expect to see the same number or maybe more in 1/2----a few more because its capturing region is more extensive [cf Table II]----when we make the equal encounter number assumption. Table II's listing of the observed population ratio of 5 to 1 favoring 2/3 is so far different from the predicted one that assumptions need questioning.
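As a back-of-the-envelope illustration of the discovery-bias correction described above (not the authors' actual Table II computation; the perihelion distances below are hypothetical placeholders), the flux-limited correction factor scales roughly as the fourth power of the ratio of mean perihelion distances:

# Hypothetical mean perihelion distances Q (AU) at discovery for the 1/2 and 2/3
# resonances; the paper's Table II values (not reproduced here) give a factor near 1.5.
Q_one_half = 38.2    # placeholder, AU
Q_two_thirds = 34.5  # placeholder, AU

# The fraction of 1/2 objects missed relative to 2/3 scales roughly as (Q ratio)^4
# for flux-limited surveys of reflected sunlight.
correction = (Q_one_half / Q_two_thirds) ** 4
print(f"incompleteness correction for 1/2 relative to 2/3: {correction:.2f}")

# Predicted terrestrially observed population ratio, starting from an intrinsic
# capture-and-retention advantage of ~1.5 for 1/2 over 2/3 (Paper 1 / this paper):
intrinsic_ratio_12_over_23 = 1.5
observed_ratio_12_over_23 = intrinsic_ratio_12_over_23 / correction
print(f"expected observed N(1/2)/N(2/3): {observed_ratio_12_over_23:.2f}")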
A wise step would be to relax, or better to abandon, the provisional claim of a constant initial density of capturable bodies for both. We can be sure, from Figs. 3 and 4, showing that levels of chaos in these two mmrs are so very similar in the 0.1 < e < 0.33 range, that we cannot place any hope for an explanation relying on dynamical instability alone. Two possible resolutions, acting together or separately, come to mind: 1) simply put, fewer bodies with e > 0.2 now in 1/2 could just reflect an initial distribution that tapers off before and then largely terminates near 46 AU or 2) the possibility investigated by Chiang and Jordan (2002) that the relative P(c)s for 1/2 and 2/3 depend critically on the migration time, T(M). They determined that 1/2 is twice as effective as 2/3 for T(M) = 10 myr, but the efficiency of 2/3 overwhelms 1/2 by at least a factor of 5 for T(M) = 1 myr. These results apply to bodies captured and retained for 10 times T(M). If we take the Chiang and Jordan results at face value, their arguments imply that the corrected or true current 2/3 to 1/2 population ratio of 3 to 1 would follow if T(M) is a few myr. [N.B. The true ratio derived from the 'raw' observed one, does require the incompleteness correction be applied to 1/2.] Table III begins a partial check on this claim by repeating the calculation of Table I for T(M) = 1 myr: Tables I and III lead first to a likely result that P(c) at 2/3 falls by a factor of 2 or 3 when T(M) drops to 1 myr, while in Paper 1 we found a decline in P(c,l) at 1/2 but by no more than 30% when T(M) was reduced from 10 to [only] 3 myr. Despite the approximate nature of this comparison, it still leads us to doubt whether lowering T(M) in the range from 10 to 1 myr is the sole contributor to the goal of greatly reducing the population ratio of 1/2 to 2/3. A caveat remains since integrations in this paper do not extend over the solar system's age. Another well populated mmr is 2/5 at 55.4 AU, currently with 20 likely members, about one--half as many as 1/2. At one time we planned to include possible inferences from it that might complement this discussion. We finally decided against doing so for the following reason: the 1st order 2/3 and 1/2 mmrs react to the eccentricity dependence of possible captures very similarly, both efficiently capturing bodies with e(o) < 0.1 and showing a distaste for ones with e(o)'s > 0.1. Higher order mmrs like the 3rd order 2/5 show a greater preference for e(o) > 0.1. This characteristic makes any conclusion sensitive to the initial e distribution, or leaves any comparison of 1/2 and 2/5 with too wide a range of possibilities to be really instructive, though the observed ratio of two to one, which requires no incompleteness correction, is a broadly consistent one. We suggest then that both the details of the initial particle distribution and a migration time somewhat shorter than 10 myr could lead to a definite shortage of KBOs at 1/2. But we do claim that a falling initial population density after 42 AU, dropping to very low levels after 46 AU, is well established, a result that is also vigorously endorsed by the lack of bodies with e < 0.25, a limit that rises with semimajor axis up to 0.35, in all mmrs situated beyond 50 AU. More quantitative progress needs a considerable number of detailed simulations that vary initial elements, the spatial distribution of bodies, migration times in the 1 to 10 myr range and characteristics of the migration mechanism itself.
These lengthy remarks raise another point: the abundant nest of objects from 42.5 < a < 44.5 AU, conspicuous in Fig. 1, would have provided a wealth of candidates for 1/2 during its migration through that region, a passage that would have filled 1/2, 2/5 and others up to e 0.2 and beyond. The current scarcity of KBOs in this part of 1/2 makes it likely enough that this populous grouping concentrated near <a> = 44 AU probably owes its origin to event(s) occurring either after or near the end of the migration----perhaps a family derived from a fragmenting collision, that could yield a large number of low e and i bodies. In any event, it need not necessarily be taken by itself as pertaining to features of the pre--migration solar system. The tiny current population of non--resonant objects with a < a(2/3) raises another question. Table I of P(c)s shows that the preliminary percentage of captures into 2/3, P(c,s), is about 19% for T(M) = 10 myr, with other mmrs at even smaller values. The capture efficiency may depend on T(M), but it is still a fair question: what became of the remaining majority that were left behind, apparently never captured? Simulations provide a quantitative reply: during migration almost all such bodies were at least briefly captured or perturbed by higher order mmrs so that they would spend a short time affected by one mmr, then escaping with orbital parameters altered by enough to be subject to the effects of another, again resulting in a temporary capture. This cascading into and out of several mmrs occurs over times of a few thousand to hundreds of thousands of years. It might be called a scattering that slowly increases the initial e's and i's of a body to the point where it leaves the region 30 < a < 40 AU. It's this repeated behavior that also tends to develop inclinations, leading to the population frequently referred to as "the scattered disk". Our simulations indicate that <i>s of 20 to 25 deg. occur quite frequently and ones as high as 50 to 60 deg. occasionally, with such bodies regularly moving out, often temporarily, beyond a(1/2). This is an effective process but it does have a limited number of exceptions: about 8% [31 of 400] of those in the 10 T(M) integration with objects lying in the range 27.5 < a(o) < 39.5 AU were left undisturbed, with their e(o)s and i(o)s remaining essentially unchanged. Close to 65% of these had a(o)s = or > 36.5 AU, a fact that may be a consequence of the assumed e--folding exponential representation of migration----the 2/3 mmr travels linearly from 30.3 to 36 AU. But this 65% is of some interest as Fig. 1 shows the presence of a number of low e bodies, probably not in any resonance, at least not in 2/3, from 37 to 39 AU. However, for the vast majority of primordial bodies and especially those with a(o) < 37 AU, it is not an exaggeration to say that really the best way to preserve the identities of the initial population in this region was for its members to have been incorporated via capture into a very low order mmr, especially 2/3. Included in Fig. 1 are three higher order mmrs, 5/8, 3/5 and 4/7 where, at least for the latter two, a fair number of bodies appears to be trapped. We have therefore considered their stability in Fig. 9. The case of 4/7 shows evidence of a secondary resonance near e = 0.1 but, apart from that, all three show no sign of marked chaos until e rises above 0.25. The two, 3/5 and 4/7, seem quite well populated, comparably so, and Paper 1 obtained their P(c)s as 15 and 7%. 
The large number at small e in the higher order 4/7 mmr may well be another consequence of a later event mentioned earlier. This argument for the seeming overpopulation of 4/7 compared to 5/8 is also reinforced by the fact that both these 3rd order mmrs have similar P(c)s. In the integration of 400 bodies we did note 3 captures into regular orbits at 4/7 and a few more into regular ones into 5/7, 9/13 and even 11/16. In our earlier paper on the outer belt we noted two high order mmrs, 5/11 and 6/13, that captured and retained a few bodies over the age of the solar system. With this in mind, apparent concentrations evident in Fig. 1, in the region a < a(1/2), at 6/11 [45.07 AU], 7/13 [45.46] and even 8/15 [45.75] seem quite believable. KBOs located in the 4 mmrs beyond 1/2 shown in Fig. 2 provide added insights. First we have found that, although all show severe chaos for e > 0.4, 3 of 4 do exhibit quite regular orbits for e's to 0.40 and for the 4th, 3/7, as high as 0.35. This fact and the complete absence of bodies with e's less than 0.3 in 2/5 strongly implies that none were ever available for capture in the region outwards of 46 AU. We can elaborate on and reinforce this point by expanding some Paper 1 results which found that all 19 bodies captured into 2/5 remained librating there for 4.6 byr----no escapes even among those with small e's or also among the more chaotic examples. Indicators of chaos for the 19 in that integration are:

log T(L)         > 4.5    4.5--3.5    < 3.5
No. of bodies    6        8           5

Their mean e's ranged between 0.14 and 0.40, averages that were well--defined for the more regular ones, but where individual values occasionally dipped to 0 for about half the group. Asking where a version of those hypothetical bodies of low e would now lie on a plot like Fig. 2 leads to the quite firm conclusion that the pre-migration distribution was one that could not supply any low e, e < 0.3, specimens and therefore that the very early outer boundary lay near 46 AU. We can apply a similar argument at 1/2 from details in Paper 1, with more numbers but to a lesser effect. There, of the 69 captures, 38 remained after 4.6 byr. Eight moved in regular orbits with log T(L) > 4.5; the remainder showed various amounts of chaos with the lowest log T(L) = 3.1. The eight were characterized by 0.24 < e < 0.30 and the larger group by <e> between 0.09 and 0.30, all with e's sometimes dropping to zero. The observed population at 1/2 up to almost e = 0.2 is quite sparse, once again suggesting that few objects were ever present beyond 46 AU. One final example complements this picture. At the 1/3 mmr Fig. 2 indicates the presence of a few likely members with e's up to 0.42, arguing for an increase, de, of at least 0.3 from pre--migration values of e(o) maybe as large as 0.15. [A reason for introducing an e(o) as large as 0.15 arises as Paper 1 found that mmrs of order > 1 surprisingly show equal or higher P(c)s in the e(o) range above 0.1 as for values below it.] Figure 10 shows that de = 0.3 implies that Neptune's migration must have driven 1/3 by 16 AU. [This number may also be estimated from the approximate solution from the Lagrange planetary equations.] So large a distance means that these high e bodies were gathered up at a(o)s lying < 46.5 AU. Moving 1/3 by 16 AU requires that Neptune migrated by 7.7 AU. Using 2/5 as an example provides much the same result, though asking for the extreme value, de = 0.4, implies that Neptune moved by almost 10 AU, probably an upper limit.
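For readers unfamiliar with the Lyapunov-time diagnostic quoted throughout (log T(L), measured in Neptune orbital periods), the following toy sketch (entirely generic, using the Chirikov standard map rather than the authors' N-body integrations) shows the usual two-trajectory renormalization estimate of a Lyapunov exponent and hence a Lyapunov time:

import numpy as np

# Toy Lyapunov-time estimate on the standard map (illustrative only;
# the paper's log T(L) values come from full N-body integrations, not from this map).
K = 1.5                      # stochasticity parameter; a wide chaotic layer exists here
n_steps = 20000
d0 = 1e-8                    # initial separation of the two nearby trajectories

def step(theta, p):
    p_new = p + K * np.sin(theta)
    return theta + p_new, p_new

theta1, p1 = 0.1, 0.0        # start near the hyperbolic fixed point (chaotic layer)
theta2, p2 = theta1 + d0, p1
log_sum = 0.0

for _ in range(n_steps):
    theta1, p1 = step(theta1, p1)
    theta2, p2 = step(theta2, p2)
    d = np.hypot(theta2 - theta1, p2 - p1)
    log_sum += np.log(d / d0)
    # Renormalize the companion trajectory back to separation d0 along the
    # current displacement direction (the standard Benettin trick).
    theta2 = theta1 + (theta2 - theta1) * d0 / d
    p2 = p1 + (p2 - p1) * d0 / d

lyap = log_sum / n_steps      # Lyapunov exponent per map iteration
print("Lyapunov exponent per iteration:", lyap)
print("Lyapunov time (iterations):", 1.0 / lyap if lyap > 0 else np.inf)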
Concluding Remarks
The efforts described here began some time ago with a plan simply to examine chaos in mean motion resonances and to look for the presence and possible influence of secondary resonances. Once some results were in hand, we decided to see how, in cooperation with other effects, they might bear on the evolution and distribution of bodies in the Kuiper belt, even though other studies have already done much to clarify and interpret them. A few concluding remarks follow. First, it is quite striking how well the plots of the degree of chaos vs eccentricity shown in Figs. 3-9 match the observed e distribution at the mmrs, arguing that many-to-most escapes were a phenomenon of the past, or that once migration drives e's to a critical level, escapes will eventually occur. This situation implies that recent and future escapes will happen with decreasing frequency and/or that current ones result mainly from perturbations that transform a critical libration into an unstable one. Put differently: because the average time of outer belt residency is 1/10 of the solar system's age [cf. Paper 1], something like this type of replenishing is called for. Our plan to map the secondary resonances within mmrs in the belt has proved rather unrewarding: they are present and identifiable, but the chaos they generate has a minimal impact on the observed population. In a sense, it is unfortunate that where their influence in 1/2 at small e is most demonstrable, the evidence we have gathered strongly suggests that the initial distribution provided too few candidates for capture. As we have already emphasized, the truncation of the initial distribution inferred from assorted mmrs, and especially from 1/2, has made a major contribution to an explanation for the near complete absence of low e bodies in the outer belt resonances. For shorter migration times, the current population probably reflects a cooperation between the dependence of capture probability on T(M) and the nature of the initial distribution as the dominant factors. A last comment concerns the extent of the Kuiper belt. Since the mmrs 5/6, 6/7, ... can only scatter but not capture bodies with any degree of permanence, they locate, really force, the current inner boundary of the belt to lie near 34 AU. At the other end, the reduced membership in 1/2 and the eccentricity distribution in mmrs farther out, specifically the increasing e's below which very few objects have been discovered, help to place the initial outer boundary for capturable objects close to 46 AU. Unlike the inner boundary, this one would seemingly pre-date any migration and so serve as a definition of the outer extent of asteroidal size bodies in the very early solar system.

Figure Captions

Fig. 1. Distribution of KBOs between 32 < a < 49 AU as of spring 2013, with principal mean motion resonances marked. We suspect that much of the concentration of bodies at/near 4/7 at 43.7 AU may be due to a post-migration collision.

Fig. 2. A continuation of Fig. 1 to 49 < a < 66 AU. Solid or dashed lines mark e's above which integrations and/or Lyapunov times indicate that escapes are either quite certain or may be possible.

Fig. 3. Chaos occurring at the 1/2 resonance from simulations of 400 bodies, e(o) < 0.08, i(o) < 8 deg. Circled crosses indicate cases that escaped in times < 500 myr. All crosses correspond to the least chaotic orbit, found by varying orbital parameters. P*'s denote either the libration period or the short period term in apsidal motion. Where their values are equal or commensurate, chaos clearly develops.

Fig. 4. Chaotic behavior at the 2/3 resonance from the same simulations. The 'cw' and 'acw' labels here and elsewhere denote clockwise or anticlockwise direction of apsidal motion. The 1:1 secondary resonance near e = 0.08 is particularly severe, with an escape noted at 330 myr.

Figs. 5-8. Chaos at 3/4 to 6/7. Taken as a group, Figs. 3-8 provide a survey of six 1st order resonances, indicating where dynamically stable orbits occur and therefore where KBOs may, or may not, be expected to persist. P(w) replaces P*(w) when a commensurability involves the apsidal motion itself, not a shorter period term.

Fig. 9. Chaos at 3 well-populated higher order resonances in the inner belt.

Fig. 10. Eccentricity increase as a function of semimajor axis during a migration. Crosses are from the 2/5 resonance, darkened squares from 1/3.
Table II. Observed populations at three mean motion resonances

  mmr    a(o)    da for a 7 AU            e range        <Q>     N
                 migration of Neptune
  2/3    30.3     9.2                     0.13 - 0.33    30.4    198
  1/2    36.7    11.1                     0.20 - 0.40    33.4     41
  2/5    42.5    12.9                     0.25 - 0.45    36.0     20
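The "da for a 7 AU migration" column can be checked directly, since a p/q exterior resonance moves in lockstep with Neptune scaled by (q/p)^(2/3). A short verification (the scaling relation is standard; the 7 AU input is the value stated in the table header):

```python
# Check of the "da for a 7 AU migration of Neptune" column: a p/q exterior
# resonance is displaced by the migration distance times (q/p)**(2/3).
da_neptune = 7.0  # AU, the migration assumed in the table header
for label, p, q in [("2/3", 2, 3), ("1/2", 1, 2), ("2/5", 2, 5)]:
    print(label, round(da_neptune * (q / p) ** (2.0 / 3.0), 1))
# 2/3 -> 9.2, 1/2 -> 11.1, 2/5 -> 12.9 AU, matching the tabulated values.
```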
Table III. Capture probabilities for T(M) = 1 myr. [The details of Table III differ from those of Table I only in that it draws upon just 200 bodies, but the P(c,l)s also extend for 200 myr.] A comparison of P(c) in %, plus number of bodies in ( ).

  Resonance    Initial captures    Remaining after 200 myr    Remarks
               P(c,s)              P(c,l)
  6/7           4 (2)               0
  5/6          11 (4)               3 (1)                     very chaotic
  4/5           7 (5)               0
  3/4          15 (15)             11 (11)                    6 regular
  2/3          11 (15)              4 (5)                     3 regular
[
Chiang, E. and Jordan, A., Astron. J. 124, 3430, 2002.
Franklin, F. and Soper, P., 2012, arXiv:1207.4762.
| []
|
[
"Femtosecond electrons probing currents and atomic structure in nanomaterials",
"Femtosecond electrons probing currents and atomic structure in nanomaterials"
]
| [
"Melanie Müller \nFritz-Haber-Institut\nMax-Planck-Gesellschaft\nFaradayweg 4-6D-14195BerlinGermany\n",
"Alexander Paarmann \nFritz-Haber-Institut\nMax-Planck-Gesellschaft\nFaradayweg 4-6D-14195BerlinGermany\n",
"Ralph Ernstorfer \nFritz-Haber-Institut\nMax-Planck-Gesellschaft\nFaradayweg 4-6D-14195BerlinGermany\n"
]
| [
"Fritz-Haber-Institut\nMax-Planck-Gesellschaft\nFaradayweg 4-6D-14195BerlinGermany",
"Fritz-Haber-Institut\nMax-Planck-Gesellschaft\nFaradayweg 4-6D-14195BerlinGermany",
"Fritz-Haber-Institut\nMax-Planck-Gesellschaft\nFaradayweg 4-6D-14195BerlinGermany"
]
| []
| The investigation of ultrafast electronic and structural dynamics in low-dimensional systems like nanowires and two-dimensional materials requires femtosecond probes providing high spatial resolution and strong interaction with small volume samples. Low-energy electrons exhibit large scattering cross sections and high sensitivity to electric fields, but their pronounced dispersion during propagation in vacuum so far prevented their use as femtosecond probe pulses in time-resolved experiments. Employing a laser-triggered pointlike source of either divergent or collimated electron wave packets, we developed a hybrid approach for femtosecond point projection microscopy and femtosecond low-energy electron diffraction. We investigate ultrafast electric currents in nanowires with sub-100 femtosecond temporal and few 10 nm spatial resolutions and demonstrate the potential of our approach for studying structural dynamics in crystalline single-layer materials. | 10.1038/ncomms6292 | [
"https://export.arxiv.org/pdf/1405.4992v2.pdf"
]
| 205,331,632 | 1405.4992 | dcf4641b7271f34af3273b10c6df0df3af340328 |
Femtosecond electrons probing currents and atomic structure in nanomaterials
arXiv:1405.4992v2; 02 September 2014
Melanie Müller
Fritz-Haber-Institut
Max-Planck-Gesellschaft
Faradayweg 4-6D-14195BerlinGermany
Alexander Paarmann
Fritz-Haber-Institut
Max-Planck-Gesellschaft
Faradayweg 4-6D-14195BerlinGermany
Ralph Ernstorfer
Fritz-Haber-Institut
Max-Planck-Gesellschaft
Faradayweg 4-6D-14195BerlinGermany
Femtosecond electrons probing currents and atomic structure in nanomaterials
arXiv:1405.4992v2; 02 September 2014
The investigation of ultrafast electronic and structural dynamics in low-dimensional systems like nanowires and two-dimensional materials requires femtosecond probes providing high spatial resolution and strong interaction with small volume samples. Low-energy electrons exhibit large scattering cross sections and high sensitivity to electric fields, but their pronounced dispersion during propagation in vacuum so far prevented their use as femtosecond probe pulses in time-resolved experiments. Employing a laser-triggered pointlike source of either divergent or collimated electron wave packets, we developed a hybrid approach for femtosecond point projection microscopy and femtosecond low-energy electron diffraction. We investigate ultrafast electric currents in nanowires with sub-100 femtosecond temporal and few 10 nm spatial resolutions and demonstrate the potential of our approach for studying structural dynamics in crystalline single-layer materials.
Introduction
One-and two-dimensional crystalline materials have emerged as fundamental building blocks for nanoscale devices [1][2][3] . Compared to the respective bulk materials, the reduced dimensionality of the translational symmetry has profound effects on the ground state properties of nanomaterials as well as on the coupling between electronic, nuclear and spin degrees of freedom, dictating the dynamical behavior. As all devices operate in states out of equilibrium, and as the dwell time of excited electrons in nanostructures is comparable to the time scale of typical relaxation processes, electron-lattice-spin interactions crucially determine the functionality of future nanodevices. A range of ultrafast laser-based techniques is nowadays available for probing the evolution of electronic, optical, structural and magnetic properties of solids after a sudden perturbation like optical excitation, providing invaluable information on the mutual coupling of electronic, nuclear and spin degrees of freedom as well as of transport properties. Despite femtosecond temporal resolution, the investigation of ultrafast processes in nanoscaled, low-dimensional systems additionally requires high spatial resolution 4-6 as well as high sensitivity sufficient for investigating small sample volumes, i.e., femtosecond probe pulses strongly interacting with the sample. Electrons with sub-keV kinetic energies, here referred to as low-energy electrons, exhibit exceptionally high scattering cross section and a de Broglie wavelength on the order of 1 Å, which, in principle, allows for achieving atomic resolution both in imaging as well as diffraction approaches. Whereas the spatial resolution of current techniques for time-resolved nanoscale imaging of electric fields relies on the near field enhancement at nanostructures 5,6 , the high sensitivity of low-energy electrons to electric fields further permits the investigation of weak field distributions in the vicinity of nanoobjects 7 . While the generation of few-femtosecond electron pulses is readily achieved by photoemission [8][9][10][11] , the biggest challenge in using low-energy electrons as ultrafast probe is to maintain femtosecond duration of the electron pulses during delivery to the sample.
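For orientation on the wavelength quoted above, the non-relativistic de Broglie wavelength can be evaluated directly from the electron energy; a minimal sketch covering the 20 to 1000 eV range used in this work:

```python
import math

# Non-relativistic de Broglie wavelength, lambda = h / sqrt(2 m_e E).
h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # J per eV

def de_broglie_angstrom(energy_eV):
    return h / math.sqrt(2 * m_e * energy_eV * eV) * 1e10

for E in (20, 100, 650, 1000):
    print(f"{E:5d} eV -> {de_broglie_angstrom(E):.2f} Angstrom")
# ~2.74, 1.23, 0.48 and 0.39 Angstrom: of order 1 Angstrom across the full range.
```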
Unlike optical laser pulses, femtosecond electron pulses suffer from temporal broadening in vacuum during propagation to the sample, especially at low energies 12 . Many-electron pulses can be strongly affected by space charge broadening due to Coulomb repulsion 13 .
Furthermore, even single electron wave packets experience significant dispersive broadening depending on their initial energy distribution 14 . Temporal compression techniques can be used to obtain femtosecond many-electron pulses at a distant sample 15 , but have yet to be demonstrated for low electron energies. Alternatively, space charge broadening can be eliminated by using single electron pulses at high repetition rates 16,17 . Still, achieving femtosecond time resolution with dispersing sub-keV single electron pulses further requires considerable reduction of the propagation distances 18,19 . In our approach, we accomplish femtosecond time resolution by minimizing the electron propagation length down to the µm range in combination with using single electron pulses. We developed a compact hybrid approach for femtosecond low-energy electron diffraction (fsLEED) and femtosecond point projection microscopy (fsPPM) with electron energies in the range 20 to 1000 eV. A laser-triggered metal nanotip provides a compact point-like source of coherent femtosecond electron wave packets [8][9][10][11] , optionally collimated for diffraction or spatially diverging for microscopy 7,19,20 . Employing the microscopy mode of operation, we investigate ultrafast currents in axially doped InP nanowires (NWs) with femtosecond temporal and nm spatial resolution. The potential of the diffraction mode to study ultrafast structural dynamics in two-dimensional materials is demonstrated by recording high-quality diffraction images of single-layer graphene with femtosecond electron pulses. For collimation and energy tuning, we place the tip inside an electrostatic microlens, being either directly coated onto the shaft of the tip 21 or using a metal-coated ceramic microtube.
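A rough sense of why micrometer-scale propagation lengths are needed can be obtained from a field-free dispersion estimate, Δt ≈ (L/v)·(ΔE/2E). This deliberately ignores the acceleration region near the tip, which the full treatment does include, and the 1 eV energy spread assumed below is illustrative rather than a measured value:

```python
import math

# Field-free dispersion of a single-electron wave packet: flight time t = L / v
# with v = sqrt(2 E / m_e); an energy spread dE then broadens the arrival time
# by roughly dt ~ t * dE / (2 E).  The acceleration region near the tip is
# ignored, and dE = 1 eV is an assumed, purely illustrative spread.
m_e, eV = 9.1093837015e-31, 1.602176634e-19

def broadening_fs(L_um, E_eV, dE_eV=1.0):
    v = math.sqrt(2 * E_eV * eV / m_e)        # m/s
    t = L_um * 1e-6 / v                       # s
    return t * dE_eV / (2 * E_eV) * 1e15      # fs

for L in (100, 1000, 10000):
    print(f"L = {L:6d} um -> dt ~ {broadening_fs(L, 100):6.0f} fs at 100 eV")
# ~84 fs at 100 um but ~0.8 ps at 1 mm and ~8 ps at 1 cm, illustrating why
# micrometer-scale propagation lengths are needed at low electron energies.
```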
Results
Figures
Examples of the potential and electric field in the vicinity of the tip's apex for the imaging and diffraction mode are plotted in Figures 1c) and 1d), respectively. The electric field strength at the apex can be adjusted via the lens voltage independent of the tip voltage, enabling energy tuning at a constant emission current 21 . For diffraction, the electron beam is collimated by flattening the potential field lines around the apex. This is accompanied by a reduction of DC field enhancement, and no field emission is possible in the diffraction mode.
However, the nanotip still enhances the optical laser field, leading to localized photoemission from the apex 22 . The photoemission process at the tip is characterized by measuring an interferometric autocorrelation of the photocurrent with the tip as nonlinear medium 23 , as plotted in Figure 1e). The peak/baseline ratio of 27:1 reveals a 3rd-order emission process, implying that the electron emission is temporally confined to ~3 fs when the tip is illuminated with 5 fs laser pulses (laser system described in the methods summary).
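The quoted 27:1 contrast can be compared against the textbook peak-to-baseline ratio of an interferometric autocorrelation, which for an n-th-order process is 2^(2n−1):1; this standard result (not a formula from this paper) is evaluated below:

```python
# Ideal peak-to-baseline contrast of an interferometric autocorrelation for an
# n-th-order process: |2E|^(2n) at zero delay versus 2|E|^(2n) at large delay.
for n in (1, 2, 3, 4):
    print(f"order {n}: {2**(2*n - 1)}:1")
# 2:1, 8:1, 32:1, 128:1 -- the measured 27:1 lies closest to the 3rd-order value.
```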
Femtosecond point projection microscopy
We performed fsPPM measurements on axially doped p-i-n InP nanowires 24 with a 60 nm long i-segment in the center, spanning across 2 µm holes in a gold substrate, see Figure 2a). A projection image of a single NW recorded in field emission mode at a distance of 20 µm and at 90 eV electron energy is shown in Figure 2b). Noticeably, the wire appears bright and much larger than its projected real space diameter. Due to the low electron energies, the projection image is in fact not a shadow image of the spatial shape of the nanoobject, but rather reveals the local electrostatic field in the object's near-surface region deflecting the electron trajectories 7,25 . These static lensing effects critically depend on extrinsic parameters such as the tip field 7,26 , and intrinsic parameters like work function variations, e.g. between the NW and the substrate. The transient change of the NW diameter after fs laser excitation is plotted in Figure 3c) for both segments along the NW, indicated by the two lines in Figure 3a). At temporal overlap we observe a clear pump-induced, spatially inhomogeneous change of the projected diameter, which varies axially along the NW, as apparent in the difference image taken at 150 fs in Figure 3b). We also observe a difference in the maximum amplitudes of the transient signal for the two segments. Both transients have a fast initial rise, followed by a multi-exponential decay on the femtosecond to picosecond time scale.
In addition to the intentional axial doping, we expect the NWs to exhibit an effective radial doping induced by surface states, pinning the Fermi level and leading to band bending far into the NW 28 , as sketched in Figure 3d). The associated surface-space-charge field strongly differs for the different doping types, being larger for the p- than for the n-doped segment 28 .
In particular, the effective radial doping profile of the p-segment changes from p-doping in the NW bulk to n-doping at the NW surface, whereas the n-segment exhibits a radial n-n+ profile, leading to an effective n-n+ doping axially along the NW surface, which reduces the axial doping contrast without photoexcitation. After above-bandgap photoexcitation, electrons and holes homogeneously generated in the NW bulk are radially separated by the surface field, leading to radial electron and hole photocurrents. This carrier separation, however, transiently reduces the surface band bending due to screening of the space-charge fields 30 , leading to a transient shift of the vacuum level, indicated by the red shaded area in Figure 3d).
As this is accompanied by a change of the local electric field at the NW surface, we can monitor these shifts through the transient change of the projected NW diameter, which is directly proportional to the local potential change (see Supplementary Fig. S5). Consequently, the time derivative shown in the inset of Figure 3c) is a direct measure of the photo-induced radial currents inside the NW. The spatial inhomogeneity and the different dynamics of the photo-induced effect result from the local doping contrast along the NW.
The relaxation of the photo-induced effect is governed by the transport properties and the electronic structure of the NW segments. A detailed discussion of the different relaxation processes is beyond the scope of this letter. Here, we limit the discussion to the fast initial dynamics which provide an upper limit for the time resolution of our fsPPM setup.
Considering that the built-in radial electric field is on the order of several tens of kV cm -1 for heavily doped wires 28 , we assume a drift velocity of the photoexcited carriers as high as the saturation velocity in InP, on the order of 10 7 cm s -1 31 . With a wire radius of 15 nm, we expect a drift time of approximately 200 fs, which agrees well with the observed ten-to-ninety rise times of 140 fs and 230 fs of the p- and n-segments, respectively. Hence, we interpret the fast initial dynamics as a direct measure of the radial photocurrent in the nanowire, and conclude that the observed dynamics reflect the carrier dynamics and are not limited by the temporal resolution of our instrument, which according to simulations is expected to be less than 50 fs in the imaging mode 14 . These results demonstrate the feasibility of fsPPM as a novel approach for probing ultrafast currents on the nanoscale with fs temporal resolution.
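The drift-time estimate above is simple arithmetic, t ≈ r/v_drift; the sketch below reproduces it, treating the exact saturation velocity as an assumption consistent with the quoted ~200 fs:

```python
# Radial drift time across the wire, t = r / v_drift.  A drift velocity of
# 7.5e6 cm/s (an assumed value at the InP saturation level) reproduces the
# quoted ~200 fs for the 15 nm radius.
r = 15e-9        # m
v_drift = 7.5e4  # m/s, i.e. 7.5e6 cm/s
print(r / v_drift * 1e15, "fs")   # -> 200 fs
```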
Femtosecond LEED
We further want to discuss the suitability of our setup to study ultrafast structural dynamics in two-dimensional materials. Figure 4a) shows a diffraction pattern recorded in transmission at 500 µm and 650 eV electron energy, exhibiting the six-fold symmetry of the two-dimensional hexagonal lattice of graphene 33 . Noteworthy, even for monolayer samples, diffraction patterns of very high quality can be recorded at a very low electron dose rate (< 1 e - Å -2 s -1 ) owing to the high scattering cross section of sub-keV electrons 34 . Hence, the implementation of fsLEED for studying structural dynamics in single- and few-layer systems is clearly favorable compared to conventional high energy femtosecond electron diffraction 35 .
To study the structural dynamics of such two-dimensional materials after photoexcitation with an ultrashort laser pulse, electron pulses with a length significantly below one picosecond at the sample position are desirable in the diffraction mode. In Figure 4c) the expected full width at half maximum (FWHM) electron pulse duration is plotted as a function of tip-sample distance for different electron energies, where the focusing condition is adjusted to provide a constant spatial resolution in the diffraction patterns corresponding to a transverse coherence length of ~30 nm (described in more detail in the Supplementary Section III.b).
The pulse duration decreases sub-linearly with shorter propagation length, approximately as a power law with exponent 0.83, which can be explained by the distance-dependent reduced inhomogeneity of the acceleration field at the apex in the diffraction mode 14 . So far, the shortest possible distances in the diffraction mode are ~150 µm, restricted by vacuum breakthrough at the electron lens, limiting the electron pulse duration to ~300 fs, see Figure 4c). Future improvements of the lens design should allow distances as close as 20 µm, i.e. distances comparable to the imaging mode, pushing the time resolution of diffraction experiments to the 100 fs range.
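Taking the quoted exponent of 0.83 and the ~300 fs pulse duration at ~150 µm as anchor values, the scaling can be extrapolated to shorter distances; this is only an extrapolation of the stated power law, not a replacement for the full simulation:

```python
# Power-law extrapolation of the simulated pulse duration, tau ~ d**0.83,
# anchored at ~300 fs for d = 150 um (both values quoted in the text).
def tau_fs(d_um, tau_ref=300.0, d_ref=150.0, exponent=0.83):
    return tau_ref * (d_um / d_ref) ** exponent

for d in (150, 50, 20):
    print(f"d = {d:3d} um -> {tau_fs(d):5.0f} fs")
# ~300 fs, ~120 fs and ~56 fs, consistent with the projected 100 fs range
# once tip-sample distances comparable to the imaging mode become possible.
```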
We also calculate the electron spot size at the sample and compare it to the experiment.
Owing to the absence of space charge and due to the confined emission area, the electron pulses can be focused down to a few µm on the sample, as shown in Figure 4b). Ultimately, such small electron spot sizes avoid spatial averaging over large domains with multiple crystal orientations, providing an ultrafast structural probe with single-crystal selectivity on µm length scales.
Discussion
We realized a novel approach for femtosecond point projection microscopy and diffraction
Methods
Setup
The setup is operated by two different laser systems depending on the specific application.
For generation of photoelectrons from the tip, a part of the laser output is focused on the tip to
Simulations
The electron pulse duration and spot size at the sample in the fsLEED mode are simulated by classically calculating the single electron trajectories between tip and sample assuming radial symmetry around the tip axis. For the weak field regime in the case of multiphoton photoemission, we can neglect the effect of the optical laser field on the propagation.
Gaussian distributions are assumed for the initial electron energy, the emission point along the tip apex as well as the initial electron momentum. More information on the simulations and detailed numbers are given in the Supplementary Section III.a.
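A toy version of such a trajectory calculation is sketched below: one-dimensional radial motion in the field of a spherically modelled apex, integrated with a fixed-step Runge-Kutta scheme. The sphere model, the 20 nm radius, the 100 V bias, the 20 µm gap and the 0.1-0.3 eV initial energies are illustrative assumptions; the actual simulations use the full FEM-computed field and two- or three-dimensional trajectories.

```python
import math

# One-dimensional radial trajectory of a photoelectron launched from a tip apex
# modelled as a sphere held at -U, integrated with a fixed-step RK4 scheme.
m_e, q_e = 9.1093837015e-31, 1.602176634e-19
R, U, gap = 20e-9, 100.0, 20e-6   # tip radius (m), |bias| (V), tip-sample distance (m)

def accel(r):
    # Field of a sphere at potential -U: |E|(r) = U R / r^2, pushing the electron outward.
    return q_e * U * R / (m_e * r**2)

def flight_time(E0_eV, dt=2e-16):
    r = R
    v = math.sqrt(2 * E0_eV * q_e / m_e)   # initial speed from the photoemission energy
    t = 0.0
    while r < gap:
        k1r, k1v = v, accel(r)
        k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r)
        k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r)
        k4r, k4v = v + dt * k3v, accel(r + dt * k3r)
        r += dt / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
    return t

# Arrival-time difference for two initial kinetic energies (a few fs over 20 um),
# illustrating how little dispersion remains once the electron is fully accelerated.
t_slow, t_fast = flight_time(0.1), flight_time(0.3)
print(f"flight time ~{t_slow * 1e12:.2f} ps, spread ~{abs(t_slow - t_fast) * 1e15:.1f} fs")
```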
Samples
InP nanowires with axial p-i-n doping structure are grown as described in reference 24 and mechanically transferred to a gold substrate with a regular pattern of 2 µm holes. Graphene samples are purchased from reference 32 and used without any subsequent treatment.
II. Data analysis a) Projection image analysis
The DC projection image of the p-i-n NW in Figure 2b) is analyzed by taking line profiles at different positions along the NW, see Figure S2.a, and fitting a double error function to the data. The projected NW diameter increases from the substrate contacts at the hole edges towards the NW center (indicated by the white dashed line in Figure S2.a), as plotted in Figures S2.b and S2.c, and saturates close to the center where the i-segment is expected. Noticeably, we observe a constant difference
between the projected diameters measured on the left and right sides of the NW center of about 60 nm at each position away from the center, see Figure S2.d. This inhomogeneity clearly indicates different surface fields on both sides of the NW, as expected e.g. for different doping types.
b) Analysis of the time-resolved data
For each delay frame, the projected width of the nanowire was fitted with a double error function and averaged over line scans, separately in the blue and green regions indicated in Fig. 3a) of the main text. The dynamics of the extracted values d(τ) as a function of the delay time τ plotted in Fig. 3c) of the main text were best fitted empirically with three exponentials

d(\tau) = d_0 + \Theta(\tau - \tau_0)\left[\sum_{i=1}^{3} A_i\, e^{-\Gamma_i(\tau - \tau_0)} + A_\infty\right]    (1)

convolved with a Gaussian. Here, Θ(τ) is the Heaviside function, A_i and Γ_i are the amplitudes and decay rates of the different decay contributions, respectively, and τ_0 is the zero time delay. The constant offsets d_0 and A_∞ represent the initial value (before pump) and the long-lived contribution to d(τ), respectively.
III. Numerical simulations
The numerical simulations were performed with a similar approach as described in reference 1 .
A finite element method (FEM) is used to model the electrostatic field between the electron gun and the sample, and in the case of PPM, the detector.
Propagation of single electron wave packets inside the electrostatic field is simulated classically using a Runge-Kutta algorithm. The shape of the tip apex is modeled by a half sphere with a 15 nm radius and the shaft has an half opening angle of 13.5°.
a) Simulation of projection images
To simulate projection images, we calculate the classical single electron trajectories in three dimensions with cartesian coordinates { }, see Figure S3, with the nanowire (NW) spanning across a round hole in x-direction and the tip pointing along the z-direction. Hence, we can choose the
x-z-plane as symmetry plane to reduce the computational cost.
The sample is modeled by a 200 nm thin metal layer with a 2 µm hole centered on the z-axis.
( ) [ ( )] ( )(2)
with the (cumulative) probability function Ultimately, these fields deflect the electron trajectories close to the NW surface, causing significant lensing effects influencing the projection images. In conclusion, by numerical simulation of the electron trajectories taking into account all experimental parameters, we can reproduce the recorded projections and relate the observed NW diameters to specific distributions of the potential and electric field at the sample.
( ) [ ( √ )](3)
Simulation of electron pulse duration and spot size in fsLEED
Assuming cylindrical symmetry, the simulations for the electron pulse duration and spot size in the diffraction mode closely follow the procedure described in reference 1 , but additionally including the electron lens.
We The time-of-flight distribution of the electrons critically depends on the exact field distribution around the tip axis, which changes with tip-sample distance as well as with the tip and lens voltages, respectively. Therefore, we defined an experimentally meaningful focusing condition to compare the results obtained for various distances and electron energies. From the experimental point of view, it is reasonable to assume a constant resolution in the diffraction patterns, i.e., a constant coherence length. In diffraction experiments, the transverse coherence length is usually defined as the ratio between the width of the diffraction spot on the detector, , and its radial position , 3
1a) and 1b) show the two operation modes for fsPPM and fsLEED, respectively. A tungsten nanotip is positioned at sub-mm distances in front of the sample. Photoelectrons are generated by focusing an ultrashort laser pulse on the negatively biased tip and are accelerated towards the grounded sample. For time-resolved pump-probe experiments, a second laser pulse is focused on the sample under 45° and the arrival time between the two pulses can be varied with an optical delay stage. Projection images and diffraction patterns are recorded with a microchannel plate (MCP) as electron detector positioned 10 cm behind the sample (more details on the setup are described in the Methods and in the Supplementary Section I).
Furthermore
, we observe a step of the projected NW diameter close to the NW center (the detailed analysis can be found in the Supplementary Section II.a). Figure 2c) shows line profiles through the NW at two different positions along the wire, revealing a difference of 60 nm in the projected sample plane. This contrast can be explained by different electric fields surrounding the NW induced by spatial variations of the work function. Numerical simulations show that the observed step corresponds to a difference of the local potential in the 100 meV range, and a difference in the radial electric field around the NW on the order of a few MV m -1 (more details on the simulations can be found in the Supplementary Section III.a). In general, the homogeneity of the projected width of a NW with constant radius depends on its specific surface condition, i.e. its doping level, crystal structure and chemical composition 27-29 .
low-dimensional materials by fsLEED. Very recently, Gulde et al. demonstrated the capability of low-energy electrons to study the structural dynamics of a bilayer system on the ps time scale 18 . Here, we introduce an alternative approach for the implementation of timeresolved LEED utilizing the potential of our electron gun design to realize very short propagation distances of the focused beam on µm length scales, therefore minimizing temporal broadening of the electron pulse. The capability of our setup to record high quality LEED patterns of monolayer samples is shown by focusing the electron beam onto single layer graphene suspended over a lacey carbon film 32 .
4b), where we plot the radially averaged profile revealing a spot size of 1.4 µm of the focused electron beam. The calculated FWHM spot size, plotted in Figure 4d), linearly decreases with the tip-sample distance down to a few µm, where the slope , i.e. the beam divergence, depends on the tip voltage according to ⁄ , reflecting our assumption of constant coherence in the diffraction pattern. Small deviations between simulation and measurement can be due to differences in the probability distributions used for the emission statistics (see Supplementary Section III.b) and due to slightly different focusing conditions.
using low-energy electron pulses photo-generated from a metal nanotip. We demonstrated the excellent capability of fsPPM for nanoscale imaging of small electric fields around semiconductor nanowires with femtosecond time resolution. In general, fsPPM enables direct spatiotemporal probing of ultrafast processes on nanometer dimensions in the near-surface region of nanostructures, such as ultrafast carrier dynamics and currents, dynamics of interfacial fields as well as ultrafast plasmonics. Ultimately, taking advantage of the high sensitivity of sub-keV femtosecond electron pulses combined with the magnification provided by PPM, our approach potentially allows the investigation of ultrafast phenomena on length scales down to the molecular level36 . In addition to real space imaging, low-energy electron pulses are ideal probes for studying structural dynamics of 2D crystalline materials on the femtosecond time scale by time-resolved diffraction. Using a nanotip as miniaturized electron gun for fsLEED allows to reduce the electron propagation length to the 100 µm range and to minimize temporal broadening to the 100 fs range. Combining the high surface sensitivity of low-energy electrons with femtosecond time resolution, fsLEED will reveal real-time information on structural dynamics and energy transfer processes in monolayer 2D materials and inorganic 37 as well as organic 38 composite heterostructures thereof.
spot size (1/e 2 radius), with the polarization along the tip axis. For time-resolved pump-probe measurements, the second output part is focused onto the sample under an angle of 45°. The arrival time between the electron probe and the optical pump pulse is varied by an optical delay stage integrated in the pump arm (a detailed sketch of the setup is shown in the Supplementary Section I). The interferometric autocorrelation in Figure 1e) was measured at 80 MHz repetition rate with 5 fs pulses and a fluence of 0.14 mJ cm -2 , with the collimated electron beam at 400 eV electron energy and a copper grid as anode at a distance of ~1 mm. The fsPPM data was measured at 1 MHz repetition rate with 16 fs pulses, with a fluence of 0.7 mJ cm -2 focused on the tip and 0.2 mJ cm -2 to pump the NWs. An integration time of 2 s was used for each projection image, and the data is averaged over 10 subsequent scans for every delay point. Temporal overlap in Figures 3a)-c) is defined by the empirical multi-exponential fit to the data, see Supplementary Section II.b. For the diffraction data, 5 fs pulses at 80 MHz repetition rate were focused on the tip at a fluence of 0.22 mJ cm -2 , and diffraction patterns are recorded with an integration time of 0.5 s and averaged over 100 frames. Nanotips with 20-100 nm radii are electrochemically etched from 150 µm polycrystalline tungsten wire. The outer surface of a ceramic tube with an inner (outer) diameter of 200 µm (500 µm) was coated with 100 nm chromium as electron lens. The tip is centered inside the tube and protrudes ~150 µm from the lens. Two additional electrostatic lenses are installed behind the sample to collimate the large diffraction angles obtained in LEED on the plane MCP screen. In the imaging mode these lenses are switched off. A piezodriven 10-axis positioning system is used for precise alignment of the electron gun and sample inside the laser focuses and relative to each other. All experiments are performed under ultrahigh vacuum conditions (10 -10 mbar).
Figure 1 :
1Setup for time-resolved low-energy electron imaging and diffraction. Photoelectrons, generated from a nanotip by an ultrashort laser pulse, are accelerated towards the sample positioned several µm away from the tip for either (a) point projection microscopy of nanoobjects (divergent electron beam), or (b) low-energy electron diffraction of 2-dimensional crystalline samples (collimated beam). A pump laser pulse, variably delayed from the electron probe, photo-excites the sample for time-resolved experiments. An electrostatic lens is used to switch from the divergent imaging mode (c, curved potential lines and strong inhomogeneous field ) to the collimated diffraction mode (d, flattened potential and reduced electric field ), each at a tip voltage -200 V, but different lens voltages -200 V and -730 V, respectively. Temporally confined electron emission is verified by measuring the interferometric autocorrelation photocurrent from the tip, revealing a 3 rd -order emission process (e).
Figure 2 :
2Point projection microscopy of axially doped nanowires. InP nanowires (radius 15 nm, length 3.5 µm) with p-i-n axial doping profile and 60 nm i-segment in the center are spanned across 2 µm holes in a gold substrate (a). Instead of being a real shadow image of the objects shape, projection images are strongly influenced by local fields surrounding the NW, which becomes apparent by the bright NW projection recorded in constant current (field emission) mode at a tip voltage of -90 V (b, scale bar 500 nm). Additionally, a spatial inhomogeneity of the projected diameter along the NW with a step of 60 nm from the left to the right side of the NW center (marked by the white arrows in (b)) is observed (c). This corresponds to a potential difference in the 100 meV range and a difference in the radial field around the NW on the order of a few MV m -1 , as found by simulations (more information on the analysis of the NW diameter is found in the Supplementary Section II and on the simulations in the Supplementary Section III.a).
Figure 3 :
3Femtosecond imaging of ultrafast photocurrents in InP NWs. (a) Projection image of the same NW as in Figure 2b) recorded in pulsed fsPPM mode at negative time delays. Photoecxitation by an ultrashort laser pulse leads to a transient, spatially inhomogeneous change of the projected NW diameter (b, normalized difference plot). (Data recorded at 70 eV electron energy, scale bars 500 nm). Different dynamical behavior and amplitudes of the transient diameter change are observed for the two segments along the NW (c), where an empirical three-exponential function was fitted to the data. Both segments show a fast initial photo-induced effect with ten-to-ninety rise times in the p-and nsegments of 140 fs and 230 fs, respectively, followed by multi-exponential decay on the fs-tofew ps time scale. As is directly proportional to the transient electric field change, the derivate ⁄ plotted in the inset in (c) is a direct measure of the instantaneous photocurrent inside the NW. Surface states cause effective radial doping leading to band bending at the NW surface as sketched in (d), where r is the radial coordinate, causing a radial photocurrent of electrons, and holes, , after photoexcitation. This leads to a pumpinduced transient shift of the conduction band edge and valence band edge , and hence a shift of the vacuum level (red shaded area), compared to the reference level (given by the environment), with the magnitude of the shift depending on the specific band bending and doping level.
Figure 4 :MelanieFigure S1 :
4S1LEED of free-standing monolayer graphene with fs electron pulses. LEED pattern of monolayer suspended graphene recorded in transmission at a tip-sample distance of 500 µm and 650 eV electron energy (a, inset: hexagonal lattice of graphene). Due to the confined emission area and small propagation distances, the pulsed electron beam can be collimated down to a spot size of 1-2 µm (FWHM) on the sample (b), shown here for 200 µm. The electron pulse duration in (c) is obtained by the FWHM of the arrival time distribution of single electron wave packets for distances between 20 and 500 µm and electron energies from 100 to 600 eV. A sub-linear dependence with 0.83 is observed ( 1 for the dashed line). Equivalently, the dependence of the electron spot size , defined as the FWHM of the radial position distribution at the sample position, is plotted in (d), which is in good agreement with the experimental observations. The dependence on the tip voltage results from the underlying focusing conditions. Further details to the simulations can be found in the Supplementary Section III.b. Müller, Alexander Paarmann, and Ralph Ernstorfer Fritz-Haber-Institut der Max-Planck-Gesellschaft, Faradayweg 4-6, D-14195 Berlin, Germany I. Experimental setup Experimental setup. Detailed description is found in the main text. The setup for femtosecond point projection microscopy (fsPPM) and low-energy electron diffraction (fsLEED) is shown in Figure S1. Two fs laser systems are used: First, an ultrabroadband 800 nm Ti:Sa oscillator running at 80 MHz repetition rate, providing 5 fs pulses with ~2 nJ pulse energy. Second, a cavity-dumped 800 nm Ti:Sa oscillator with variable repetition rate up to 2 MHz delivers 16 fs pulses with 30 nJ pulse energy. Both laser systems can be alternatively incorporated in the same optical setup. A beam stabilization (not shown in the figure) ensures accurate and reproducible alignment of the laser inside the ultrahigh vacuum (UHV) chamber. The laser output is split into two arms for the optical pump and Figure S2: DC image analysis. Analysis of the projected diameter along the NW (as indicated by the lines in (a)) reveals a constant difference between the left (b) and right (c) side from the NW center of 60 nm (d) at all positions away from the center towards the hole edges. excitation of photoelectrons from the tip as probe, where an optical delay stage is used to vary the delay between pump and probe. An interferometric autocorrelator can be inserted in the probe arm to measure an interferometric autocorrelation of the photocurrent from the tip. Both laser beams are focused by off-axis parabolic mirrors installed inside UHV. A 4-axis positioning stage provides full position alignment of the tip and tilting along the laser beam direction and is used to position the tip (with the electron microlens attached) inside the laser focus. Full position and angle alignment of the sample is achieved by a hexapod-type 6-axis positioning table. The tip can be moved into the pump focus and localized photoemission from the apex is used to precisely align the pump focus position relative to the original tip position. Bias voltages and up to -2 kV are applied to the tip and the electrostatic microlens depending on the operation mode. Photoelectrons are accelerated towards the grounded sample and amplified by a microchannel plate detector ( ) combined with a phosphor screen ( ). A scientific CMOS camera is used to record the images outside UHV. 
For fsLEED, two electrostatic (ES) lenses at positive bias voltages and are installed behind the sample to reduce the size of the diffraction pattern in order to fit onto the MCP screen.
Figure S3 :
S3PPM simulation geometry. Electrons with initial velocity v and emission angles and in x-and y-direction, respectively, are accelerated to the sample, and possibly deflected by electric fields in the sample vicinity. Projection images are then evaluated in the distant detector plane.
Figure S4 :
S4Potential and electric field of a p-i-n NW. Potential distribution (a) of a p-i-n NW (15 nm radius) with a 500 mV potential step at the NW center and an additional offset of 1.5 V to the substrate . Corresponding electric fields in the xand y-direction are plotted in (b) and (c), respectively. All distributions are plotted in the x-y-plane at µm. All scale bars are 200 nm.
the respective potentials and of the p-and n-doped segments, and with and being the position and width of the i-segment along the x-direction, respectively. In Figures S4.a)-c), examples of the potential and electric fields and are plotted in the x-y-plane at µm for a NW with 30 nm radius positioned 20 µm away from the tip, where a potential step of 500 meV is applied at the NW center with an offset 1.5 V. Owing to the nanometer dimensions, electric field strengths of several MV m -1 are obtained at such small potentials differences. Even at 0 V, without any potential differences applied to the sample, the electric field strength at the NW surface can reach magnitudes on the MV m -1 scale due to the influence of the tip electric field.
Figure S5 :
S5Projection images and linear field dependence. Two examples of calculated projection images of a NW with 100 nm at 10 µm distance and -50 V tip voltage are shown for 0 V and 0.5 V (a) and for 0.7 V and 0.35 V (b), respectively (Scale bars 200 nm). The corresponding potential distributions are sketched below the projection images, with being the potential of the substrate. The transition from dark to bright projections is indicated by the threshold potential . The dependence of the width and sign of the projected NW diameter on the NW potential is plotted in (c) for NW radii from 20 to 100 nm and two different tip voltages, respectively, revealing a linear dependence on the NW bias. Due to the large computational cost for calculating the projection images, we compute the electron trajectories for a regular grid of emission angles and in x-and y-direction, respectively, assuming electron emission normal to the tip surface. In addition, a single electron energy is considered since a finite energy distribution has an insignificant effect on the spatial resolution in the projection images compared to other experimental effects like mechanical vibrations and drifts during image acquisition. Projection images are generated by analyzing the arrival positions of all trajectories on the detector plane. Assuming equal emission probability for all trajectories, the image intensity is calculated by phase space mapping between the initial condition and the detector arrival position, integrated over the regular grid of initial conditions. Figures S5.a) and S5.b) show exemplary projections of a p-i-n NW with constant radius obtained for two different potential distributions, revealing their significance on the projected NW image. The diameter of the projection and its sign, i.e., being a dark or a bright 'shadow', of a certain NW segment depends on its electrostatic potential relative to the substrate, the NW diameter and the tip voltage and distance, respectively. In Figure S5.c) the linear dependence of the projected diameter on the voltage applied to the NW is plotted for various NW radii and two different tip voltages. The threshold voltage indicating the transition from dark (positive ) to bright (negative ) projections decreases with smaller NW radius and lower tip voltage, respectively, and very thin wires appear bright even a 0 V bias due to the effect of tip electric field.
choose Gaussian distributions for the electron kinetic energy , for the emission angle (emission normal to the tip surface), as well as for the momentum distributions at each emission point within and outside the simulation plane, implemented by the angles and , respectively, see Figures S6.a)-c). In particular, the out-of-plane angel can be mapped onto the velocity of the electron by ( ), effectively reducing the initial electron energy, as the out-of-plane momentum does not affect the arrival time but only induces a precession of the trajectories and their arrival positions around the z-axis (no fields in azimuthal direction due to cylindrical symmetry). Here, , adopting the distributions given in reference 2 .
Figure S6 :
S6Simulation geometry for LEED. Definition of emission angles (a), normal to the tip surface, and (b) and (c) accounting for the in-and out-of-plane momentum distributions. (d) Sketch of the geometric parameters used to define the focusing condition. arXiv:1405.4992v2; 02 September 2014 where a is the lattice constant of the investigated sample. For a spherical detector, can be defined as the projection of the arc length on a planar detection plane, see Figure S6.d), and is proportional to the diffraction angle , i.e., ( ) in first approximation. According to Bragg's law and the momentum energy relation for non-relativistic electrons with kinetic energy , we then obtain ( ) ⁄ . Hence, a constant coherence length for all electron energies requires ( ) ⁄ and likewise ( ) ⁄ for the spot size at the sample in the case of field free propagation between sample and detector. In the simulations, this focusing condition is realized by calculating the required electric field strength at the apex which leads to the desired target spot sizes. The calculations shown in the corresponding letter in Figures 4b) and d) are computed assuming an initial spot size with a standard deviation of 15 µm at -100 V. The beam divergence given by the slopes in Figure 4d) show the desired dependence ( ) ⁄ as plotted in Figure S7. We thus obtain a corresponding spot size on the detector of 0.37 mm at a distance of 10 cm. With the Bragg angle of 29.7°, giving 0.057 mm, and using the lattice constant 2.465 Å of graphene, we obtain a transverse coherence length of 38 nm for the above given values. In the same way, we calculate for the coherence length at -600 V (with 6.12 µm) a value of 35 nm, justifying our initial assumptions.
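Taking the definition above to mean ξ ≈ a·r/Δr, with sin θ_B = λ/a and r the Bragg-spot position projected onto a planar detector at 10 cm, the quoted 38 nm coherence length at -100 V can be reproduced; this reading of the relation is an assumption:

```python
import math

# Back-of-envelope check of the quoted 38 nm transverse coherence length at -100 V.
# Assumed reading of the definition: xi ~ a * r / dr, with sin(theta_B) = lambda / a
# and r the Bragg-spot position projected onto a planar detector 10 cm away.
a = 2.465e-10                       # graphene lattice constant, m
L_det = 0.10                        # distance to the detector, m
dr = 0.37e-3                        # diffraction-spot width on the detector, m
lam = 12.264e-10 / math.sqrt(100)   # de Broglie wavelength at 100 eV, m

theta_B = math.asin(lam / a)        # ~29.8 degrees, close to the quoted 29.7
r = L_det * math.tan(theta_B)       # ~57 mm
print(math.degrees(theta_B), r * 1e3, a * r / dr * 1e9)   # deg, mm, nm -> ~38 nm
```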
Figure S7 :
S7Voltage dependence of the beam divergence. Fitted values of the slopes, i.e. the beam divergence, from the data inFigure 4d) in the corresponding letter plotted against the tip voltage.
15 .
15Van Oudheusden, T. et al. Compression of Subrelativistic Space-Charge-Dominated Electron Bunches for Single-Shot Femtosecond Electron Diffraction. Phys. Rev. Lett. control of nanotip field electron emitters. Appl. Phys. Lett. 103, 213506 (2013). 22. Arbouet, A., Houdellier, F., Marty, R. & Girard, C. Interaction of an ultrashort optical pulse with a metallic nanotip: A Green dyadic approach. J. Appl. Phys. Borgström, M. T. et al. Precursor evaluation for in situ InP nanowire doping. 26. Weierstall, U., Spence, J. C. H., Stevens, M. & Downing, K. H. Point-projection electron imaging of tobacco mosaic virus at 40eV electron energy.105, 264801 (2010).
16. Baum, P. On the physics of ultrashort single-electron pulses for time-resolved
microscopy and diffraction. Chem. Phys. 423, 55-61 (2013).
17. Aidelsburger, M., Kirchner, F. O., Krausz, F. & Baum, P. Single-electron pulses for
ultrafast diffraction. Proc. Natl. Acad. Sci. U. S. A. 107, 19714-19719 (2010).
18. Gulde, M. et al. Ultrafast low-energy electron diffraction in transmission resolves
polymer/graphene superstructure dynamics. Science (80-. ). 345, 200-204 (2014).
19. Quinonez, E., Handali, J. & Barwick, B. Femtosecond photoelectron point projection
microscope. Rev. Sci. Instrum. 84, 103710 (2013).
20. Fink, H. W. & Schönenberger, C. Electrical conduction through DNA molecules.
Nature 398, 407-410 (1999).
21.
st
i
t
for energy
and current 112, 053103
(2012).
23. Hommelhoff, P., Kealhofer, C. & Kasevich, M. A. Ultrafast Electron Pulses from a
Tungsten Tip Triggered by Low-Power Femtosecond Laser Pulses. Phys. Rev. Lett. 97,
247402 (2006).
24. Nanotechnology 19, 445602 (2008).
25.
i
i
i
i s pi
s
ti
t i i s
Appl. Phys. Lett. 74, 618-620 (1999).
Micron 30, 335-338
(1999).
27. Mikkelsen, A. & Lundgren, E. Surface science of free standing semiconductor
nanowires. Surf. Sci. 607, 97-105 (2013).
28. Hjort, M. et al. Surface chemistry, structure, and electronic properties from microns to
the atomic scale of axially doped semiconductor nanowires. ACS Nano 6, 9679-9689
(2012).
The NW is formed by a cylinder with radius embedded in the sample. To account for work function variations between the NW and the substrate as well as to the environment (e.g.due to different materials), bias voltages
and
are applied to the sample substrate
and the NW, respectively. Additionally, a potential distribution accounting for axial work
function variations along the NW, e.g. due to doping effects, can be applied to the NW. To
simulate an axial p-i-n doping structure, we model the potential distribution of the NW along
the x-direction by
AcknowledgementsWe thank M. Borgström and A. Mikkelsen for providing the nanowire samples and helpful discussions. We thank A. Melnikov and A. Alekhin for access to their laser system and analyzed the data and performed the numerical simulations; all authors discussed the results and co-wrote the paper.Corresponding authorsEmail: [email protected] (M.M.) or [email protected] (R.E.).Competing financial interestsTh th s p ti fi i i t sts
The rise of graphene. A K Geim, K S Novoselov, Nat. Mater. 6Geim, A. K. & Novoselov, K. S. The rise of graphene. Nat. Mater. 6, 183-191 (2007).
One-Dimensional Nanostructures: Synthesis, Characterization, and Applications. Y Xia, Adv. Mater. 15Xia, Y. et al. One-Dimensional Nanostructures: Synthesis, Characterization, and Applications. Adv. Mater. 15, 353-389 (2003).
Indium phosphide nanowires as building blocks for nanoscale electronic and optoelectronic devices. X Duan, Y Huang, Y Cui, J Wang, C M Lieber, Nature. 409Duan, X., Huang, Y., Cui, Y., Wang, J. & Lieber, C. M. Indium phosphide nanowires as building blocks for nanoscale electronic and optoelectronic devices. Nature 409, 66- 69 (2001).
Direct imaging of free carrier and trap carrier motion in silicon nanowires by spatially-separated femtosecond pump-probe microscopy. M M Gabriel, Nano Lett. 13Gabriel, M. M. et al. Direct imaging of free carrier and trap carrier motion in silicon nanowires by spatially-separated femtosecond pump-probe microscopy. Nano Lett. 13, 1336-40 (2013).
Photon-induced near-field electron microscopy. B Barwick, D J Flannigan, A H Zewail, Nature. 462Barwick, B., Flannigan, D. J. & Zewail, A. H. Photon-induced near-field electron microscopy. Nature 462, 902-906 (2009).
Adaptive subwavelength control of nano-optical fields. M Aeschlimann, Nature. 446Aeschlimann, M. et al. Adaptive subwavelength control of nano-optical fields. Nature 446, 301-304 (2007).
Low energy electron point source microscopy: beyond imaging. A Beyer, A Gölzhäuser, J. Phys. Condens. Matter. 22343001Beyer, A. & Gölzhäuser, A. Low energy electron point source microscopy: beyond imaging. J. Phys. Condens. Matter 22, 343001 (2010).
Attosecond control of electrons emitted from a nanoscale metal tip. M Krüger, M Schenk, P Hommelhoff, Nature. 475Krüger, M., Schenk, M. & Hommelhoff, P. Attosecond control of electrons emitted from a nanoscale metal tip. Nature 475, 78-81 (2011).
Field-driven photoemission from nanostructures quenches the quiver motion. G Herink, D R Solli, M Gulde, C Ropers, Nature. 483Herink, G., Solli, D. R., Gulde, M. & Ropers, C. Field-driven photoemission from nanostructures quenches the quiver motion. Nature 483, 190-193 (2012).
Field Emission Tip as a Nanometer Source of Free Electron Femtosecond Pulses. P Hommelhoff, Y Sortais, A Aghajani-Talesh, M A Kasevich, Phys. Rev. Lett. 96Hommelhoff, P., Sortais, Y., Aghajani-Talesh, A. & Kasevich, M. A. Field Emission Tip as a Nanometer Source of Free Electron Femtosecond Pulses. Phys. Rev. Lett. 96, 1-4 (2006).
Localized Multiphoton Emission of Femtosecond Electron Pulses from Metal Nanotips. C Ropers, D R Solli, C P Schulz, C Lienau, T Elsaesser, Phys. Rev. Lett. 9843907Ropers, C., Solli, D. R., Schulz, C. P., Lienau, C. & Elsaesser, T. Localized Multiphoton Emission of Femtosecond Electron Pulses from Metal Nanotips. Phys. Rev. Lett. 98, 043907 (2007).
Design of a miniature picosecond low-energy electron gun for time-resolved scattering experiments. R Karrer, H J Neff, M Hengsberger, T Greber, J Osterwalder, Rev. Sci. Instrum. 724404Karrer, R., Neff, H. J., Hengsberger, M., Greber, T. & Osterwalder, J. Design of a miniature picosecond low-energy electron gun for time-resolved scattering experiments. Rev. Sci. Instrum. 72, 4404 (2001).
Ultrafast electron optics: Propagation dynamics of femtosecond electron packets. B J Siwick, J R Dwyer, R E Jordan, R J Miller, J. Appl. Phys. 921643Siwick, B. J., Dwyer, J. R., Jordan, R. E. & Miller, R. J. D. Ultrafast electron optics: Propagation dynamics of femtosecond electron packets. J. Appl. Phys. 92, 1643 (2002).
Coherent femtosecond low-energy single-electron pulses for timeresolved diffraction and imaging: A numerical study. A Paarmann, J. Appl. Phys. 112113109Paarmann, A. et al. Coherent femtosecond low-energy single-electron pulses for time- resolved diffraction and imaging: A numerical study. J. Appl. Phys. 112, 113109 (2012).
Surface effects on the atomic and electronic structure of unpassivated GaAs nanowires. M Rosini, R Magri, ACS Nano. 4Rosini, M. & Magri, R. Surface effects on the atomic and electronic structure of unpassivated GaAs nanowires. ACS Nano 4, 6021-31 (2010).
Subpicosecond carrier transport in GaAs surface-space-charge fields. T Dekorsy, T Pfeifer, W Kütt, H Kurz, Phys. Rev. B. 47Dekorsy, T., Pfeifer, T., Kütt, W. & Kurz, H. Subpicosecond carrier transport in GaAs surface-space-charge fields. Phys. Rev. B 47, 3842-3849 (1993).
A temperature dependent model for the saturation velocity in semiconductor materials. R Quay, C Moglestue, V Palankovski, S Selberherr, Mater. Sci. Semicond. Process. 3Quay, R., Moglestue, C., Palankovski, V. & Selberherr, S. A temperature dependent model for the saturation velocity in semiconductor materials. Mater. Sci. Semicond. Process. 3, 149-155 (2000).
. Ted Pella, Inc, Ted Pella, inc. at <http://www.tedpella.com/Support_Films_html/Graphene-TEM- Support-Film.htm>
The structure of suspended graphene sheets. J C Meyer, Nature. 446Meyer, J. C. et al. The structure of suspended graphene sheets. Nature 446, 60-63 (2007).
Quantitative electron spectroscopy of surfaces: A standard data base for electron inelastic mean free paths in solids. M P Seah, W A Dench, Surf. Interface Anal. 1Seah, M. P. & Dench, W. A. Quantitative electron spectroscopy of surfaces: A standard data base for electron inelastic mean free paths in solids. Surf. Interface Anal. 1, 2-11 (1979).
Femtosecond electron diffraction: heralding the era of atomically resolved dynamics. G Sciaini, R J Miller, Reports Prog. Phys. 7496101Sciaini, G. & Miller, R. J. D. Femtosecond electron diffraction: heralding the era of atomically resolved dynamics. Reports Prog. Phys. 74, 096101 (2011).
Graphene Unit Cell Imaging by Holographic Coherent Diffraction. J.-N Longchamp, T Latychevskaia, C Escher, H.-W Fink, Phys. Rev. Lett. 110255501Longchamp, J.-N., Latychevskaia, T., Escher, C. & Fink, H.-W. Graphene Unit Cell Imaging by Holographic Coherent Diffraction. Phys. Rev. Lett. 110, 255501 (2013).
Van der Waals heterostructures. A K Geim, I V Grigorieva, Nature. 499Geim, a K. & Grigorieva, I. V. Van der Waals heterostructures. Nature 499, 419-425 (2013).
Graphene-organic composites for electronics: optical and electronic interactions in vacuum, liquids and thin solid films. A Schlierf, P Samorì, V Palermo, J. Mater. Chem. C. 23129Schlierf, A., Samorì, P. & Palermo, V. Graphene-organic composites for electronics: optical and electronic interactions in vacuum, liquids and thin solid films. J. Mater. Chem. C 2, 3129 (2014).
Coherent femtosecond low-energy single-electron pulses for timeresolved diffraction and imaging: A numerical study. A Paarmann, J. Appl. Phys. 112113109Paarmann, A. et al. Coherent femtosecond low-energy single-electron pulses for time- resolved diffraction and imaging: A numerical study. J. Appl. Phys. 112, 113109 (2012).
Strong-field photoemission from surfaces: Theoretical approaches. S V Yalunin, M Gulde, C Ropers, arXiv:1405.4992v2Phys. Rev. B. 84195426Yalunin, S. V., Gulde, M. & Ropers, C. Strong-field photoemission from surfaces: Theoretical approaches. Phys. Rev. B 84, 195426 (2011). arXiv:1405.4992v2; 02 September 2014
Femtosecond electron diffraction: heralding the era of atomically resolved dynamics. G Sciaini, R J Miller, Reports Prog. Phys. 7496101Sciaini, G. & Miller, R. J. D. Femtosecond electron diffraction: heralding the era of atomically resolved dynamics. Reports Prog. Phys. 74, 096101 (2011).
| []
|
[
"On ABC spectral radius of uniform hypergraphs",
"On ABC spectral radius of uniform hypergraphs"
]
| [
"Hongying Lin \nSchool of Mathematics\nSouth China University of Technology\n510641GuangzhouP.R. China\n",
"Bo Zhou \nSchool of Mathematical Sciences\nSouth China Normal University\n510631GuangzhouP.R. China\n"
]
| [
"School of Mathematics\nSouth China University of Technology\n510641GuangzhouP.R. China",
"School of Mathematical Sciences\nSouth China Normal University\n510631GuangzhouP.R. China"
]
| []
| Given a k-uniform hypergraph G with vertex set [n] and edge set E(G), the ABC tensor ABC(G) of G is the k-order n-dimensional tensor with entries $\frac{1}{(k-1)!}\sqrt[k]{\frac{\sum_{i\in e}d_i-k}{\prod_{i\in e}d_i}}$ if $e=\{i_1,\dots,i_k\}\in E(G)$ and 0 otherwise, where $d_i$ denotes the degree of vertex $i$. The ABC spectral radius of a uniform hypergraph is the spectral radius of its ABC tensor. We give tight lower and upper bounds for the ABC spectral radius, and determine the maximum ABC spectral radii of uniform hypertrees, uniform non-hyperstar hypertrees and uniform non-power hypertrees of given size, as well as the maximum ABC spectral radii of unicyclic uniform hypergraphs and linear unicyclic uniform hypergraphs of given size, respectively. We also characterize those uniform hypergraphs for which the maxima for the ABC spectral radii are actually attained in all cases. | null |
"https://export.arxiv.org/pdf/2303.14929v2.pdf"
]
| 257,767,253 | 2303.14929 | 1b825ea1768d059949113c84ecaae613e6716335 |
On ABC spectral radius of uniform hypergraphs
28 Mar 2023
Hongying Lin
School of Mathematics
South China University of Technology
510641GuangzhouP.R. China
Bo Zhou
School of Mathematical Sciences
South China Normal University
510631GuangzhouP.R. China
On ABC spectral radius of uniform hypergraphs
28 Mar 2023. Keywords: ABC tensor, ABC spectral radius, uniform hypergraph, H-eigenvalue
Given a k-uniform hypergraph G with vertex set [n] and edge set E(G), the ABC tensor ABC(G) of G is the k-order n-dimensional tensor with entries $\frac{1}{(k-1)!}\sqrt[k]{\frac{\sum_{i\in e}d_i-k}{\prod_{i\in e}d_i}}$ if $e=\{i_1,\dots,i_k\}\in E(G)$ and 0 otherwise, where $d_i$ denotes the degree of vertex $i$. The ABC spectral radius of a uniform hypergraph is the spectral radius of its ABC tensor. We give tight lower and upper bounds for the ABC spectral radius, and determine the maximum ABC spectral radii of uniform hypertrees, uniform non-hyperstar hypertrees and uniform non-power hypertrees of given size, as well as the maximum ABC spectral radii of unicyclic uniform hypergraphs and linear unicyclic uniform hypergraphs of given size, respectively. We also characterize those uniform hypergraphs for which the maxima for the ABC spectral radii are actually attained in all cases.
Introduction
Given a positive integer k ≥ 2, a k-uniform hypergraph G consists of a finite set of vertices V(G) and a set of hyperedges (or simply edges) E(G) ⊆ 2^{V(G)} such that each edge contains exactly k vertices. The numbers of vertices and edges of G are called the order and size of G, respectively. A uniform hypergraph is a k-uniform hypergraph for some k. A linear hypergraph is one in which every two distinct edges intersect in at most one vertex.
Let G be a k-uniform hypergraph of order n with vertex set V(G) = [n] := {1, . . . , n}. For i ∈ V(G), denote by E_i(G) the set of edges containing i; the degree of i in G, denoted by d_G(i) or simply d_i, is |E_i(G)|. The hypergraph G is regular if all the degrees of its vertices are equal. We assume throughout that E(G) ≠ ∅ for every hypergraph considered in this paper.
For integers k and n with 2 ≤ k < n, a k-order n-dimensional complex or real tensor (or hypermatrix) T is a multidimensional array of n k elements of the form T = (T i 1 ,...,i k ), where 1 ≤ i 1 , . . . , i k ≤ n. A k-order n-dimensional real tensor is said to be a nonnegative tensor if all its entries are nonnegative. For a k-order n-dimensional real tensor T and an n-dimensional vector x = (x 1 , . . . , x n ) ⊤ , the product T x k−1 is defined to be an n-dimensional vector so that for i ∈ [n],
$$(T x^{k-1})_i = \sum_{i_2\in[n]}\cdots\sum_{i_k\in[n]} T_{i,i_2,\dots,i_k}\, x_{i_2}\cdots x_{i_k},$$
while T x k is defined as the following homogeneous polynomial
$$T x^k = \sum_{i_1\in[n]}\cdots\sum_{i_k\in[n]} T_{i_1,\dots,i_k}\, x_{i_1}\cdots x_{i_k}.$$
So $Tx^k = x^\top (Tx^{k-1})$. Let $x^{[k]} = (x_1^k, \dots, x_n^k)^\top$. Lim [24] and Qi [28] proposed independently the concepts of eigenvalues and eigenvectors of a k-order n-dimensional real tensor T. A complex number λ is called an eigenvalue of T if the system of homogeneous polynomial equations
$$Tx^{k-1} = \lambda x^{[k-1]}, \quad \text{i.e.,}\quad (Tx^{k-1})_i = \lambda x_i^{k-1} \text{ for all } i\in[n]$$
has a nonzero solution x. The vector x is called an eigenvector of T corresponding to λ, and the equalities
$$\sum_{i_2\in[n]}\cdots\sum_{i_k\in[n]} T_{i,i_2,\dots,i_k}\, x_{i_2}\cdots x_{i_k} = \lambda x_i^{k-1}, \quad i = 1,\dots,n,$$
are called the (λ, x)-eigenequations of T. Moreover, if both λ and x are real, then we call λ an H-eigenvalue and x an H-eigenvector of T, see also [30,31]. The spectral radius of T is the maximum modulus of its eigenvalues, denoted by ρ(T). Let G be a k-uniform hypergraph of order n. Recall that the adjacency tensor A(G) of G is defined as [5]
$$A(G)_{i_1,\dots,i_k} = \begin{cases} \dfrac{1}{(k-1)!} & \text{if } \{i_1,\dots,i_k\}\in E(G),\\[4pt] 0 & \text{otherwise.} \end{cases}$$
Fix i ∈ {1, . . . , n}. If {i, i_2, . . . , i_k} ∈ E(G), then A(G)_{i,τ(i_2),...,τ(i_k)} = 1/(k−1)! for any permutation τ in the symmetric group of degree k − 1, and as there are (k − 1)! such permutations, one has
$$\sum_{i_2,\dots,i_k\in[n]} A(G)_{i,i_2,\dots,i_k} = \sum_{e\in E_i(G)} \frac{1}{(k-1)!}\,(k-1)! = \sum_{e\in E_i(G)} 1 = d_i.$$
That is, the i-th row sum of A(G) is just the degree of the i-th vertex of G. The ABC tensor ABC(G) of the k-uniform hypergraph G is defined as the k-order n-dimensional tensor with entries
$$ABC(G)_{i_1,\dots,i_k} = \begin{cases} \dfrac{1}{(k-1)!}\,\sqrt[k]{\dfrac{\sum_{i\in e} d_i - k}{\prod_{i\in e} d_i}} & \text{if } e = \{i_1,\dots,i_k\}\in E(G),\\[4pt] 0 & \text{otherwise.} \end{cases}$$
The term 'ABC' is an abbreviation of atom-bond connectivity, which comes from chemistry [7]. The ABC tensor of the hypergraph G may be viewed as the adjacency tensor of an edge-weighted hypergraph G_w in which an edge e = {i_1, . . . , i_k} ∈ E(G) has weight $\sqrt[k]{\frac{\sum_{i\in e} d_i - k}{\prod_{i\in e} d_i}}$, based on the degrees of the vertices in the edge. For a k-uniform hypergraph G, the ABC eigenvalues of G are defined as the eigenvalues of its ABC tensor, and in particular, the ABC spectral radius of G is defined as the spectral radius of its ABC tensor, denoted by ρ_ABC(G). That is, ρ_ABC(G) = ρ(ABC(G)). Recall that the spectral radius of a hypergraph G is the spectral radius of its adjacency tensor, denoted by ρ_A(G), see, e.g., [25, 26].
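To make this definition concrete, the following minimal numerical sketch (not part of the paper; the 3-uniform hypergraph, all variable names and the NumPy dependency are illustrative assumptions) computes the edge weights $\sqrt[k]{(\sum_{i\in e}d_i-k)/\prod_{i\in e}d_i}$ and evaluates the product $ABC(G)x^{k-1}$ directly from the definition, using the fact that the $1/(k-1)!$ factor cancels against the $(k-1)!$ orderings of the remaining vertices of each edge.

```python
import numpy as np

# Illustrative 3-uniform hypergraph on vertices 0..4: two edges sharing vertex 0.
k = 3
n = 5
edges = [(0, 1, 2), (0, 3, 4)]

deg = np.zeros(n)
for e in edges:
    for v in e:
        deg[v] += 1  # d_0 = 2, all other degrees equal 1

def abc_weight(e):
    """k-th root of (sum of degrees in e minus k) divided by the product of degrees in e."""
    s = sum(deg[v] for v in e)
    p = np.prod([deg[v] for v in e])
    return ((s - k) / p) ** (1.0 / k)

def abc_apply(x):
    """Evaluate (ABC(G) x^{k-1})_i for every vertex i."""
    y = np.zeros(n)
    for e in edges:
        w = abc_weight(e)
        for i in e:
            y[i] += w * np.prod([x[v] for v in e if v != i])
    return y

x = np.ones(n)
print(abc_apply(x))  # for the all-ones vector, entry i is the sum of weights of the edges at i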
An ordinary graph is just a 2-uniform hypergraph, so the ABC tensor of a graph G is just the ABC matrix of G, which was proposed by Estrada [7] in the context of molecular graphs based on earlier work, see, e.g., [8]. For an edge {i, j}, the quantity $\sqrt{\frac{d_i+d_j-2}{d_i d_j}}$ is interpreted as the probability of visiting edge ij from i or j. Such an interpretation in the context of molecular graphs is related to the polarizing capacity of the bond considered. Since the work of Estrada [7], the spectral properties of the ABC matrix of a graph have received much attention, see, e.g., [2,3,17,23,35]. For a connected graph G on n ≥ 3 vertices, Ghorbani et al. [17] and Chen [3] independently showed that the path and the complete graph are the unique graphs that minimize and maximize the ABC spectral radius, respectively. If G is a tree of order n ≥ 2, Chen [2] showed that ρ_ABC(G) ≤ √(n − 2) with equality if and only if G is the star. If G is a unicyclic graph of order n ≥ 3, Li and Wang [23] showed that ρ_ABC(G) is minimum (maximum, respectively) if and only if G is the cycle (G is obtained from the star by adding an edge, respectively), which was conjectured earlier in [17]. Further study of the ABC spectral radius of unicyclic graphs and bicyclic graphs may be found in [35,36].
Additionally, the ABC index of the k-uniform hypergraph G is defined as
$$\sigma_{ABC}(G) = \frac{1}{(k-1)!} \sum_{e\in E(G)} \sqrt[k]{\frac{\sum_{i\in e} d_i - k}{\prod_{i\in e} d_i}}.$$
If k = 2, the ABC index (abbreviated from the atom-bond connectivity index) has been much studied, see, e.g., [6,11,13,14,20,38], just to mention but a few. Very recently, Estrada [9] proposed a statistical-mechanical theory, which is exemplified by deriving the ABC index (and generalizations) as well as others.
Let G be an r-uniform hypergraph with r ≥ 2. For an integer k > r, the k-th power of G, denoted by G^k, is defined to be the k-uniform hypergraph with edge set E(G^k) = {e ∪ {v_{e,1}, . . . , v_{e,k−r}} : e ∈ E(G)} and vertex set V(G^k) = V(G) ∪ {v_{e,i} : e ∈ E(G), i = 1, . . . , k − r}, where the added vertices v_{e,i} are new and pairwise distinct, i.e., v_{e,i} ≠ v_{f,j} whenever (e, i) ≠ (f, j). Let G^r = G. A power hypergraph is a k-uniform hypergraph for some k ≥ 3 that is the k-th power of some ordinary graph [19]. A hypergraph is a non-power hypergraph if it is not a power hypergraph.
Hypergraph theory has found applications in chemistry [16, 21, 22]. Molecular structures with polycentric delocalized bonds may be represented by hypergraphs [21], where vertices correspond to individual atoms, edges of cardinality at least three correspond to delocalized polycentric bonds, and edges of cardinality two correspond to simple covalent bonds. This avoids defects peculiar to ordinary molecular graphs and facilitates the comparison of ordinary molecular structures with structures containing polycentric bonds. A comparative analysis of topological and information indices for eight series of molecular structures in [22] demonstrated that the hypergraph model gives a higher accuracy of molecular structure description.
In this article, we extend the study of the ABC spectral properties of ordinary graphs begun by Estrada [7] to the more general setting of uniform hypergraphs. For the ABC spectral properties of k-uniform hypergraphs, there are differences between the case k ≥ 3 and the case k = 2 (see Section 7 below). Generalizing and extending the ABC spectral properties from ordinary graphs to uniform hypergraphs, we establish tight lower and upper bounds for the ABC spectral radius of k-uniform hypergraphs, and determine the k-uniform hypertrees of fixed size with first and second maximum ABC spectral radii and the non-power k-uniform hypertree of fixed size with maximum ABC spectral radius, as well as the unique k-uniform unicyclic hypergraph of fixed size with maximum ABC spectral radius and the linear k-uniform unicyclic hypergraph of fixed size with maximum ABC spectral radius. We list the main results below.
Theorem 1.1. Let G be a connected k-uniform hypergraph of order n. Then
$$\min\left\{\sqrt[k]{\textstyle\sum_{i\in e} d_i - k} : e \in E(G)\right\} \le \rho_{ABC}(G) \le \max\left\{\sqrt[k]{\textstyle\sum_{i\in e} d_i - k} : e \in E(G)\right\}$$
with either equality if and only if the sum of degrees of vertices from each edge is a constant.
Corollary 1.1. Let G be a connected k-uniform hypergraph of order n with minimum degree δ and maximum degree ∆. Then
$$\sqrt[k]{k\delta - k} \le \rho_{ABC}(G) \le \sqrt[k]{k\Delta - k}$$
with either equality if and only if G is regular.
Theorem 1.2. Let G be a connected k-uniform hypergraph with maximum degree ∆ ≥ 2. Then
$$\rho_{ABC}(G) \le \sqrt[k]{\frac{\Delta-1}{\Delta}}\,\rho_A(G)$$
with equality if and only if ω_G(e) = (∆−1)/∆ for every e ∈ E(G), where ω_G(e) = (∑_{i∈e} d_i − k)/∏_{i∈e} d_i. By the previous theorem, any upper bound on the spectral radius of the adjacency tensor leads to an upper bound on the ABC spectral radius.
A hypertree is a connected hypergraph without cycles. An edge in a hypergraph is a pendant edge if it contains at most one vertex of degree greater than one. A hyperstar is a hypertree in which every edge is a pendant edge. Denote by S_{m,k} the k-uniform hyperstar with m edges. The center of S_{m,k} is defined as the vertex of degree m in S_{m,k}. For m ≥ 3 and 1 ≤ a ≤ (m−1)/2, let D_{m,a} be the double star obtained by adding an edge between the centers of two disjoint stars S_{a,2} and S_{m−1−a,2}.
Theorem 1.3. For k ≥ 2, let G be a k-uniform hypertree of size m ≥ 1. Then $\rho_{ABC}(G) \le \sqrt[k]{m-1}$ with equality if and only if G ≅ S_{m,k}. Moreover, if G is different from S_{m,k}, then
$$\rho_{ABC}(G) \le \sqrt[k]{\frac{m^2 - 3m + 3 + \sqrt{(m-1)^2 + (m-2)^4}}{2(m-1)}}$$
with equality if and only if G ∼ = D k m,1 . Let S m,k;m−3,1,1 be the k-uniform hypertree obtained from S m−2,k with two chosen vertices of degree one in a common edge by adding a pendant edge at each of them.
Theorem 1.4. For k ≥ 3, let G be a non-power k-uniform hypertree of size m ≥ 4. Then $\rho_{ABC}(G) \le \sqrt[k]{b_m}$ with equality if and only if G ≅ S_{m,k;m−3,1,1}, where b_m is the largest root of
$$4(m-2)t^3 - (4m^2 - 19m + 27)t^2 + (4m^2 - 23m + 34)t - (m-3)^2 = 0.$$
For g ≥ 2 and k ≥ 3, a k-uniform hypercycle of length g, denoted by C_{g,k}, is a k-uniform hypergraph whose vertices may be labelled as v_1, . . . , v_{g(k−1)} so that the edge set is {e_1, . . . , e_g}, where e_i = {v_{(i−1)(k−1)+1}, . . . , v_{i(k−1)+1}} for i = 1, . . . , g with v_{g(k−1)+1} ≡ v_1. For m ≥ 2, k ≥ 3 and g = 2, 3, let U^{(k)}_{m,g} be the k-uniform unicyclic hypergraph obtained from a k-uniform hypercycle of length g by adding m − g pendant edges at a vertex of degree 2.
Theorem 1.5. For k ≥ 3, let G be a k-uniform unicyclic hypergraph of size m ≥ 2. Then $\rho_{ABC}(G) \le \sqrt[k]{m - 1 + \frac{2}{m}}$ with equality if and only if G ≅ U^{(k)}_{m,2}.
Theorem 1.6. For k ≥ 3, let G be a linear k-uniform unicyclic hypergraph of size m ≥ 3. Then $\rho_{ABC}(G) \le a_m^{2/k}$ with equality if and only if G ≅ U^k_{m,3}, where a_m is the largest root of
$$t^3 - \frac{\sqrt{2}}{2}t^2 - \frac{m^2 - 4m + 5}{m-1}\,t + \frac{\sqrt{2}(m^2 - 5m + 6)}{2(m-1)} = 0.$$
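The largest roots $b_m$ (Theorem 1.4) and $a_m$ (Theorem 1.6) of the two cubics displayed above can be approximated numerically. The sketch below is illustrative only; the choices m = 10 and k = 3 and the NumPy dependency are assumptions, not part of the paper.

```python
import numpy as np

m, k = 10, 3  # illustrative size and uniformity

# Largest root b_m of 4(m-2)t^3 - (4m^2-19m+27)t^2 + (4m^2-23m+34)t - (m-3)^2 = 0
b_poly = [4 * (m - 2), -(4 * m**2 - 19 * m + 27), 4 * m**2 - 23 * m + 34, -(m - 3) ** 2]
b_m = max(r.real for r in np.roots(b_poly) if abs(r.imag) < 1e-9)

# Largest root a_m of t^3 - (sqrt(2)/2)t^2 - ((m^2-4m+5)/(m-1))t + sqrt(2)(m^2-5m+6)/(2(m-1)) = 0
a_poly = [1.0, -np.sqrt(2) / 2, -(m**2 - 4 * m + 5) / (m - 1),
          np.sqrt(2) * (m**2 - 5 * m + 6) / (2 * (m - 1))]
a_m = max(r.real for r in np.roots(a_poly) if abs(r.imag) < 1e-9)

print("rho_ABC(S_{m,k;m-3,1,1}) ~", b_m ** (1.0 / k))  # extremal value in Theorem 1.4
print("rho_ABC(U^k_{m,3})       ~", a_m ** (2.0 / k))  # extremal value in Theorem 1.6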
Unlike the linear nature of spectral properties of various matrices associated to a graph, the spectral properties of tensors associated to a hypergraph have a nonlinear dependence on the adjacency of the hypergraph.
Preliminaries
Let G be a hypergraph with v, w ∈ V (G). A path from v to w in G is a set of distinct vertices v 1 , . . . , v ℓ+1 and a set of distinct edges e 1 , . . . , e ℓ for some ℓ such that for i = 1, . . . , ℓ,
{v_i, v_{i+1}} ⊆ e_i, and for j > i + 1, e_i ∩ e_j = ∅, where v_1 = v and v_{ℓ+1} = w. A hypergraph G is connected if for every pair of vertices u, v ∈ V(G), there is a path from u to v in G.
A cycle in G is a set of distinct vertices v_1, . . . , v_ℓ and a set of distinct edges e_1, . . . , e_ℓ for some ℓ ≥ 2 such that for i = 1, . . . , ℓ, {v_i, v_{i+1}} ⊆ e_i (with v_{ℓ+1} ≡ v_1), and for |i − j| > 1, e_i ∩ e_j = ∅ (with e_{ℓ+1} ≡ e_1). A hypertree is a connected hypergraph without cycles. A unicyclic hypergraph is a connected hypergraph with exactly one cycle. An ordinary tree is a 2-uniform hypertree. Generally, an ordinary unicyclic graph is a 2-uniform unicyclic hypergraph in which the length of its unique cycle is at least three.
A pendant vertex of a hypergraph is a vertex of degree one. Let G be a k-uniform hypergraph with u ∈ V(G) and u_i ∉ V(G) for i = 2, . . . , k. The hypergraph with vertex set V(G) ∪ {u_2, . . . , u_k} and edge set E(G) ∪ {{u, u_2, . . . , u_k}} is said to be obtained from G by adding a new pendant edge {u, u_2, . . . , u_k} at u.
A nonnegative k-order n-dimensional tensor T is said to be weakly irreducible [12, 27] if for any J with ∅ ≠ J ⊂ [n], there is at least one entry T_{i_1,...,i_k} ≠ 0 with i_1 ∈ J and i_j ∈ {1, . . . , n} \ J for some j = 2, . . . , k.
We need the following lemmas. The first lemma is the Perron-Frobenius Theorem for nonnegative tensors, see [1, Theorem 1.3], [33, Theorem 2.3], and [12, Theorem 4.1].
Lemma 2.1. Let T be a k-order n-dimensional nonnegative tensor. Then
(i) ρ(T) is an H-eigenvalue with a nonnegative eigenvector.
(ii) If T is weakly irreducible, then ρ(T) is an H-eigenvalue with a positive eigenvector and no other eigenvalue has a positive eigenvector.
Let T be a nonnegative k-order n-dimensional tensor. Lemma 2.1(i) says that ρ(T) is an H-eigenvalue of T, and there is a nonnegative eigenvector corresponding to ρ(T). A nonnegative n-dimensional vector x is k-unit if $\sum_{i=1}^n x_i^k = 1$. Lemma 2.1(ii) says that if T is weakly irreducible, then there is a unique k-unit positive vector associated with ρ(T).
Let T_1 and T_2 be k-order n-dimensional real tensors. If T_2 − T_1 is nonnegative, then we write T_1 ≤ T_2. The following lemma is [32, Theorem 3.4] (see also [18]).
Lemma 2.2. [32] Let T_1 and T_2 be nonnegative k-order n-dimensional tensors such that T_1 ≤ T_2 and T_2 is weakly irreducible. Then ρ(T_1) ≤ ρ(T_2). Moreover, if T_1 ≠ T_2, then ρ(T_1) < ρ(T_2).
Let G be a k-uniform hypergraph of order n. It is known that A(G) is weakly irreducible if and only if G is connected [27], so ABC(G) is weakly irreducible if and only if G is connected. Thus, if G is connected, then ρ_ABC(G) is the maximum H-eigenvalue of ABC(G), and there is a unique k-unit positive vector corresponding to ρ_ABC(G).
Lemma 2.3. [28] Let T be a k-order n-dimensional symmetric nonnegative tensor. Then, for any nonnegative n-dimensional k-unit column vector x, ρ(T) ≥ T x^k, with equality when T is weakly irreducible if and only if x is the unique k-unit positive eigenvector corresponding to ρ(T).
Let G be a k-uniform hypergraph of order n. The Randić tensor R(G) of G is defined to be the tensor of order k and dimension n with entries
$$R(G)_{i_1,\dots,i_k} = \begin{cases} \dfrac{1}{(k-1)!}\,\dfrac{1}{\sqrt[k]{\prod_{j=1}^{k} d_{i_j}}} & \text{if } \{i_1,\dots,i_k\}\in E(G),\\[4pt] 0 & \text{otherwise.} \end{cases}$$
It is the normalized adjacency tensor in [18]. If k = 2, it is just the Randić matrix [15]. The following lemma is an extension of [15,Theorem 2.3], see also a different treatment in [4, pp. 2-4].
Lemma 2.4. Let G be a nontrivial connected k-uniform hypergraph of order n. Then ρ(R(G)) = 1.
Proof. Let $x = (\sqrt[k]{d_1}, \dots, \sqrt[k]{d_n})^\top$. For $i_1 = 1, \dots, n$, if $\{i_1, i_2, \dots, i_k\} \in E(G)$, then $R(G)_{i_1,\tau(i_2),\dots,\tau(i_k)} = \frac{1}{(k-1)!}\,\frac{1}{\sqrt[k]{\prod_{j=1}^k d_{i_j}}}$ for any permutation τ in the symmetric group of degree k − 1, and as there are (k − 1)! such permutations, one has
$$(R(G)x^{k-1})_{i_1} = \sum_{i_2\in[n]}\cdots\sum_{i_k\in[n]} R(G)_{i_1,i_2,\dots,i_k} \prod_{j=2}^{k} x_{i_j} = \sum_{e\in E_{i_1}(G)} \frac{\prod_{j=2}^{k} \sqrt[k]{d_{i_j}}}{\sqrt[k]{\prod_{j=1}^{k} d_{i_j}}} = \sum_{e\in E_{i_1}(G)} \frac{1}{\sqrt[k]{d_{i_1}}} = \frac{d_{i_1}}{\sqrt[k]{d_{i_1}}} = \left(\sqrt[k]{d_{i_1}}\right)^{k-1},$$
so $R(G)x^{k-1} = x^{[k-1]}$. It follows that 1 is an eigenvalue of R(G) with a positive eigenvector x.
As G is connected, R(G) is weakly irreducible, so by Lemma 2.1, we have ρ(R(G)) = 1.
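The eigen-equation used in this proof is easy to check numerically. The sketch below is illustrative only (the small connected 3-uniform hypergraph and the NumPy dependency are assumptions); it verifies that $x=(\sqrt[k]{d_1},\dots,\sqrt[k]{d_n})^\top$ satisfies $R(G)x^{k-1}=x^{[k-1]}$.

```python
import numpy as np

k = 3
edges = [(0, 1, 2), (0, 3, 4), (1, 3, 5)]  # an illustrative connected 3-uniform hypergraph
n = 6
deg = np.zeros(n)
for e in edges:
    for v in e:
        deg[v] += 1

def randic_apply(x):
    """(R(G) x^{k-1})_i, with the 1/(k-1)! factor cancelled against the (k-1)! orderings."""
    y = np.zeros(n)
    for e in edges:
        w = 1.0 / np.prod([deg[v] for v in e]) ** (1.0 / k)  # Randic edge weight
        for i in e:
            y[i] += w * np.prod([x[v] for v in e if v != i])
    return y

x = deg ** (1.0 / k)
print(np.allclose(randic_apply(x), x ** (k - 1)))  # True: x is an eigenvector for eigenvalue 1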
Lemma 2.5. Let G be a connected k-uniform hypergraph. Let x be the k-unit positive eigenvector of A(G) (ABC(G), respectively)) corresponding to ρ A (G) (ρ ABC (G), respectively). Let σ be an automorphism of G. Then x u = x v provided that σ(u) = v.
Proof. Suppose that P is the permutation matrix corresponding to the automorphism σ of G, i.e., P ij = 1 if and only if
σ(i) = j for i ∈ V (G). Let Q ∈ {A(G), ABC(G)}. Then Q(G) = P Q(G)P ⊤ . So x ⊤ (Q(G)x) = x ⊤ (P Q(G)P ⊤ x) = (P ⊤ x) ⊤ Q(G)P ⊤ x.
Note that P ⊤ x is positive and i∈V (G) y k i = i∈V (G) x k i = 1, where y = P ⊤ x. So P ⊤ x is also a k-unit positive eigenvector corresponding to ρ(Q). By Lemma 2.1 (ii), one has P ⊤ x = x.
Hence, x u = x v if σ(u) = v.
Bounds for ABC spectral radius
For an edge e of a k-uniform hypergraph G, set $\omega_G(e) = \frac{\sum_{i\in e} d_i - k}{\prod_{i\in e} d_i}$. Theorem 3.1. Let G be a connected k-uniform hypergraph of order n. Then
$$\rho_{ABC}(G) \ge n^{-1}\, k!\, \sigma_{ABC}(G)$$
with equality if and only if $\sum_{e\in E_i(G)} \sqrt[k]{\omega_G(e)}$ is a constant for i = 1, . . . , n.
Proof. Setting x to be the k-unit n-dimensional vector $n^{-\frac1k}(1, \dots, 1)^\top$, we have
$$(ABC(G)x^{k-1})_i = \sum_{i_2\in[n]}\cdots\sum_{i_k\in[n]} ABC(G)_{i,i_2,\dots,i_k}\, n^{-\frac{k-1}{k}} = n^{-\frac{k-1}{k}} \sum_{e\in E_i(G)} \frac{1}{(k-1)!}\sqrt[k]{\omega_G(e)}\,(k-1)! = n^{-\frac{k-1}{k}} \sum_{e\in E_i(G)} \sqrt[k]{\omega_G(e)},$$
so
$$ABC(G)x^k = x^\top(ABC(G)x^{k-1}) = \sum_{i=1}^{n} n^{-\frac1k}\, n^{-\frac{k-1}{k}} \sum_{e\in E_i(G)} \sqrt[k]{\omega_G(e)} = n^{-1} k \sum_{e\in E(G)} \sqrt[k]{\omega_G(e)} = n^{-1} k!\, \sigma_{ABC}(G).$$
Note that ABC(G) is symmetric. So, by Lemma 2.3, we have $\rho_{ABC}(G) \ge n^{-1} k!\, \sigma_{ABC}(G)$ with equality if and only if
$$n^{-\frac{k-1}{k}} \sum_{e\in E_i(G)} \sqrt[k]{\omega_G(e)} = \rho_{ABC}(G)\left(n^{-\frac1k}\right)^{k-1},$$
i.e., $\sum_{e\in E_i(G)} \sqrt[k]{\omega_G(e)}$ is a constant for i = 1, . . . , n.
Proof of Theorem 1.1. Let
$$c = \min\left\{\sqrt[k]{\textstyle\sum_{i\in e} d_i - k} : e \in E(G)\right\} \quad\text{and}\quad C = \max\left\{\sqrt[k]{\textstyle\sum_{i\in e} d_i - k} : e \in E(G)\right\}.$$
Then
cR(G) ≤ ABC(G) ≤ CR(G).
As G is connected, ABC(G) and CR(G) are weakly irreducible. So, by Lemmas 2.2 and 2.4, we have
c = cρ(R(G)) = ρ(cR(G)) ≤ ρ ABC (G) ≤ ρ(CR(G)) = Cρ(R(G)) = C. Suppose that ρ ABC (G) = a, where a = c, or C. By Lemma 2.2, ABC(G) = aR(G), so k i∈e d i − k = a for any edge e of G, i.e., i∈e d i = a k + k is a constant for any edge e of G. Conversely, if i∈e d i is a constant for any edge e of G, then c = C, so cR(G) = ABC(G) = CR(G), implying that c = ρ ABC (G) = C.
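Since ABC(G) is a nonnegative weakly irreducible tensor for connected G, its spectral radius can be approximated by a shifted power-type iteration in the spirit of the Ng-Qi-Zhou scheme for nonnegative tensors. This is not a method used in the paper; the 3-uniform hyperpath below, the number of iterations and the NumPy dependency are illustrative assumptions. The sketch estimates ρ_ABC and checks the bounds of Theorem 1.1.

```python
import numpy as np

k = 3
edges = [(0, 1, 2), (2, 3, 4), (4, 5, 6)]  # illustrative 3-uniform hyperpath P_{3,3}
n = 7
deg = np.zeros(n)
for e in edges:
    for v in e:
        deg[v] += 1

def abc_apply(x):
    y = np.zeros(n)
    for e in edges:
        w = ((sum(deg[v] for v in e) - k) / np.prod([deg[v] for v in e])) ** (1.0 / k)
        for i in e:
            y[i] += w * np.prod([x[v] for v in e if v != i])
    return y

x = np.ones(n) / n ** (1.0 / k)              # k-unit positive starting vector
for _ in range(3000):
    y = (abc_apply(x) + x ** (k - 1)) ** (1.0 / (k - 1))  # shifted iteration with ABC(G) + I
    x = y / np.linalg.norm(y, ord=k)         # renormalize to a k-unit vector
rho = abc_apply(x) @ x                       # ABC(G) x^k at the (approximate) Perron vector

lower = min((sum(deg[v] for v in e) - k) ** (1.0 / k) for e in edges)
upper = max((sum(deg[v] for v in e) - k) ** (1.0 / k) for e in edges)
print(lower <= rho + 1e-9 <= upper + 1e-9, rho)  # Theorem 1.1 bounds hold for the estimate
```

Here the edge degree sums are not all equal, so by Theorem 1.1 the estimate should lie strictly between the two bounds.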
This result extends [3, Theorem 2.8] from graphs to hypergraphs. Denote by K (k) n the complete k-uniform hypergraph, that is the hypergraph with vertex set {1, . . . , n} such that any k vertices form an edge.
Corollary 3.1. Let G be a connected k-uniform hypergraph of order n. Then
$$\rho_{ABC}(G) \le \sqrt[k]{k\binom{n-1}{k-1} - k}$$
with equality if and only if G ≅ K^{(k)}_n.
Lemma 3.1. Let a_1, . . . , a_k be positive integers, where a_1 ≥ 2 and k ≥ 2. Let
$$f(a_1, \dots, a_k) = \sqrt[k]{\frac{\sum_{i=1}^{k} a_i - k}{\prod_{i=1}^{k} a_i}}.$$
Then
f (a 1 , . . . , a k ) < f (a 1 , . . . , a k−1 ).
Proof. It is evident that
k−1 i=1 a i ≥ k, so a k k−1 i=1 a i − a k k ≥ k−1 i=1 a i − k, i.e., k i=1 a i − k ≤ a k k−1 i=1 a i − a k (k − 1).
So the result follows.
Proof of Theorem 1.2. For any edge e of G, we have
$$\omega_G(e) \le \frac{\max\{d_i : i\in e\} - 1}{\max\{d_i : i\in e\}} \le \frac{\Delta-1}{\Delta}$$
by Lemma 3.1 and the fact that (t−1)/t is strictly increasing for t ≥ 2. So $ABC(G) \le \sqrt[k]{\frac{\Delta-1}{\Delta}}\, A(G)$. Note that A(G) is weakly irreducible. So, by Lemma 2.2, we have $\rho_{ABC}(G) \le \sqrt[k]{\frac{\Delta-1}{\Delta}}\,\rho_A(G)$ with equality if and only if $ABC(G) = \sqrt[k]{\frac{\Delta-1}{\Delta}}\, A(G)$, i.e., ω_G(e) = (∆−1)/∆ for any e ∈ E(G).
By the previous theorem, any upper bound on the spectral radius of the adjacency tensor leads to an upper bound on the ABC spectral radius.
ABC eigenvalues of power hypergraphs
In [39], it is shown that the adjacency eigenvalues of a power hypergraph of a graph are determined by the adjacency eigenvalues of the graph. In the following theorem, we establish a relation between the ABC eigenvalues of an r-uniform hypergraph and the ABC eigenvalues of its k-th power hypergraph, where 2 ≤ r < k.
Theorem 4.1. For k > r ≥ 2, let G be an r-uniform hypergraph, and let ρ be a nonzero ABC eigenvalue of G. Then $\rho^{r/k}$ is an ABC eigenvalue of G^k.
Proof. Let x be a nonzero eigenvector corresponding to the ABC eigenvalue ρ of G. By the (ρ, x)-eigenequations of ABC(G), we have
$$\rho x_i^{r-1} = \sum_{e\in E_i(G)} \sqrt[r]{\frac{\sum_{j\in e} d_j - r}{\prod_{j\in e} d_j}} \prod_{j\in e\setminus\{i\}} x_j \qquad (4.1)$$
for each i ∈ V(G). Let y be a column vector of dimension |V(G^k)| such that
$$y_i = \begin{cases} x_i^{r/k} & \text{if } i \in V(G),\\[4pt] \left(\dfrac{\sum_{j\in e} d_j - r}{\prod_{j\in e} d_j}\right)^{\frac{1}{rk}} \left(\dfrac{\prod_{j\in e} x_j}{\rho}\right)^{\frac{1}{k}} & \text{if } i \in \{v_{e,s} : s = 1, \dots, k-r\} \end{cases}$$
for some e ∈ E(G).
Recall that any e ∈ E(G) corresponds naturally to e = e ∪ {v e,1 , . . . , v e,k−r }. We show that
(ABC(G k )y k−1 ) i = ρ r k y k−1 i for all i ∈ V (G k ). If i ∈ V (G)
, then, bearing in mind (4.1), we have
(ABC(G k )y k−1 ) i = e∈E i (G k ) k j∈ e d j − k j∈ e d j j∈ e\{i} y j = e∈E i (G) k j∈e d j − r j∈e d j j∈e\{i} y j j∈ e\e y j = e∈E i (G) j∈e d j − r j∈e d j 1 k j∈e\{i} x r k j j∈e d j − r j∈e d j 1 rk j∈e x j ρ 1 k k−r = x k−r k i ρ k−r k e∈E i (G) j∈e d j − r j∈e d j 1 r j∈e\{i} x j = x k−r k i ρ k−r k · ρx r−1 i = ρ r k x r(k−1) k i = ρ r k y k−1 i . If i ∈ V (G k ) \ V (G), then there is some e ∈ E(G) such that i = v e,s for some s = 1, . . . , k − r, so (ABC(G k )y k−1 ) i = j∈e d j − r j∈e d j 1 k j∈e y j j∈ e\e\{i} y j = j∈e d j − r j∈e d j 1 k j∈e x r k j j∈e d j − r j∈e d j 1 rk j∈e x j ρ 1 k k−r−1 = j∈e d j − r j∈e d j k−1 rk j∈e x k−1 k j ρ k−r−1 k = ρ r k j∈e d j − r j∈e d j 1 rk j∈e x j ρ 1 k k−1 = ρ r k y k−1 i . Thus ABC(G k )y k−1 = ρ r k y [k−1]
. From the construction of y, y is a nonzero vector. It follows that $\rho^{r/k}$ is an ABC eigenvalue of G^k.
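Theorem 4.1 can be sanity-checked numerically on a tiny example. The sketch below is illustrative only (the NumPy dependency, the shifted power-type iteration and the chosen graph are assumptions, not part of the paper): it takes the ordinary star $K_{1,3}$, whose ABC spectral radius equals $\sqrt{2}$ by the tree result of [2] quoted in the introduction, forms its 3rd power (the hyperstar $S_{3,3}$), and compares an estimate of $\rho_{ABC}(G^3)$ with $\rho_{ABC}(G)^{2/3}$.

```python
import numpy as np

# Ordinary star K_{1,3} (2-uniform, r = 2): center 0, leaves 1, 2, 3.
graph_edges = [(0, 1), (0, 2), (0, 3)]
deg2 = {0: 3, 1: 1, 2: 1, 3: 1}
A = np.zeros((4, 4))
for (u, v) in graph_edges:
    A[u, v] = A[v, u] = np.sqrt((deg2[u] + deg2[v] - 2) / (deg2[u] * deg2[v]))
rho_graph = max(np.linalg.eigvalsh(A))   # ABC spectral radius of K_{1,3}: sqrt(2)

# Its 3rd power is the 3-uniform hyperstar S_{3,3}: each edge gains one new vertex.
k = 3
edges = [(0, 1, 4), (0, 2, 5), (0, 3, 6)]
n = 7
deg = np.zeros(n)
for e in edges:
    for v in e:
        deg[v] += 1

def abc_apply(x):
    y = np.zeros(n)
    for e in edges:
        w = ((sum(deg[v] for v in e) - k) / np.prod([deg[v] for v in e])) ** (1.0 / k)
        for i in e:
            y[i] += w * np.prod([x[v] for v in e if v != i])
    return y

x = np.ones(n) / n ** (1.0 / k)
for _ in range(3000):
    y = (abc_apply(x) + x ** (k - 1)) ** (1.0 / (k - 1))  # shifted power-type iteration
    x = y / np.linalg.norm(y, ord=k)

print(abc_apply(x) @ x, rho_graph ** (2.0 / k))  # both should be ~ 2^{1/3} ~ 1.2599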
For m ≥ g ≥ 3, let U m,g be the unicyclic graph obtained from a cycle of length g by adding m − g pendant edges at a vertex of the cycle.
Corollary 4.1. For k ≥ 3, let G be a k-uniform power unicyclic hypergraph of size m ≥ 3. Then $\rho_{ABC}(G) \le a_m^{2/k}$ with equality if and only if G ≅ U^k_{m,3}, where a_m is the largest root of f(t) = 0, and
$$f(t) = t^3 - \frac{\sqrt{2}}{2}t^2 - \frac{m^2 - 4m + 5}{m-1}\,t + \frac{\sqrt{2}(m^2 - 5m + 6)}{2(m-1)}. \qquad (4.2)$$
Proof. Let v_1v_2v_3 be the cycle such that v_1 is of degree m − 1 in U_{m,3}, and let v_0 be a pendant vertex at v_1. Let x be the 2-unit positive eigenvector of ABC(U_{m,3}) corresponding to ρ = ρ_ABC(U_{m,3}). Let x_i = x_{v_i} for i = 0, 1, 2. By Lemma 2.5 and the (ρ, x)-eigenequations of U_{m,3}, we have
$$\rho x_0 = \sqrt{\tfrac{m-2}{m-1}}\, x_1, \qquad \rho x_1 = (m-3)\sqrt{\tfrac{m-2}{m-1}}\, x_0 + 2\sqrt{\tfrac12}\, x_2, \qquad \rho x_2 = \sqrt{\tfrac12}\, x_1 + \sqrt{\tfrac12}\, x_2.$$
Since x is nonzero, the above homogeneous linear system in the variables x 0 , x 1 , x 2 has a nontrivial solution. Then the determinant of its coefficient matrix is zero. By direct calculation, the determinant is equal to
f(ρ), and hence f(ρ) = 0. So ρ_ABC(U_{m,3}) is the largest root of f(t) = 0. Now the result follows from Theorem 4.1 and the known result that ρ_ABC(G) ≤ ρ_ABC(U_{m,3}) with equality if and only if G ≅ U_{m,3} for a unicyclic graph G of size m ≥ 3 [23, 36].
ABC spectral radius of hypertrees
For m ≥ 4, m ≥ k ≥ 3, m − 3 ≥ a 1 ≥ · · · ≥ a k ≥ 0 and k i=1 a i = m − 1, let S m,k;a 1 ,.
..,a k be the k-uniform hypergraph obtained by adding a i pendant edges at v i in an edge {v 1 , . . . , v k }. If s with 1 ≤ s ≤ k is the largest number such that a s > 0, we write S m,k;a 1 ,...,as instead of S m,k;a 1 ,...,a k .
Lemma 5.1. [34] For k ≥ 2, let G be a k-uniform hypertree of size m ≥ 5 different from S m,k and D k m,1 . Then (i) ρ A (G) ≤ ρ A (D k m,2 ) < ρ A (D k m,1 ) < ρ A (S m,k ) with equality if and only if G ∼ = D k m,2 . (ii) For k ≥ 3 and m ≥ 6, if G is non-power and G ≇ S m,k;m−3,1,1 , then ρ A (G) ≤ ρ A (S m,k;m−4,2,1 ) < ρ A (S m,k;m−3,1,1 )
with equality if and only if G ∼ = S m,k;m−4,2,1 .
Proof of Theorem 1.3. The first part follows from Theorem 1.1.
Next, we calculate ρ A ((D k m,2 )) and ρ ABC (D k m,1 ). Let v 1 v 2 v 3 v 4 be a path of D m,2 such that v 2 and v 3 are of degrees m−2 and 3, respectively. Let x be the 2-unit positive eigenvector of A(D m,2 ) corresponding to ρ = ρ A (D m,2 ). Let
x_i = x_{v_i} for i = 1, . . . , 4. By Lemma 2.5 and the (ρ, x)-eigenequations of A(D_{m,2}), we have ρx_1 = x_2, ρx_2 = (m − 3)x_1 + x_3, ρx_3 = x_2 + 2x_4, ρx_4 = x_3.
Then ρ is the largest root of the determinant of the coefficient matrix of the above homogeneous linear system in the variables x 1 , x 2 , x 3 , x 4 . i.e., ρ A ((D m,2 )) is the largest root of
ρ 4 − mρ 2 + 2m − 6 = 0. It follows that ρ A ((D m,2 )) = m+ √ m 2 −8m+24 2
. Thus by a result in [39] on the relationship between the eigenvalues of the adjacency tensors of a hypergraph and its kth power, we have
ρ A ((D k m,2 )) = k m+ √ m 2 −8m+24 2 . Let v 1 v 2 v 3 v 4 be a path of D m,1 such that v 2 is of degree m − 1 and v 3 is of degree 2.
Let y be the 2-unit positive eigenvector of ABC(D m,1 ) corresponding to ρ = ρ ABC (D m,1 ). Let y i = y v i for i = 1, . . . , 4. By Lemma 2.5 and the (ρ, y)-eigenequations of ABC(D m,1 ), we have
ρy 1 = m − 2 m − 1 y 2 , ρy 2 = (m − 2) m − 2 m − 1 y 1 + 1 2 y 3 , ρy 3 = 1 2 y 2 + 1 2 y 4 , ρy 4 = 1 2 y 3 .
Then ρ ABC (D m,1 ) is the largest root of
2ρ 4 − 2(m 2 − 3m + 3) m − 1 ρ 2 + (m − 2) 2 m − 1 = 0, which implies that ρ ABC (D m,1 ) = m 2 −3m+3+ √ (m−1) 2 +(m−2) 4 2(m−1) . From Theorem 4.1, ρ ABC (D k m,1 ) = k m 2 −3m+3+ √ (m−1) 2 +(m−2) 4 2(m−1)
. Now, we prove the result. It is trivial if m = 3. Suppose that m ≥ 4. Let G be a k-uniform hypertree different from S m,k of size m. Suppose that G is different from D k m,1 . Then the maximum degree of G is at most m − 2. So, by Theorem 1.2 and Lemma 5.1(i), we have
ρ ABC (G) ≤ k m − 3 m − 2 ρ A (G) ≤ k m − 3 m − 2 k m + √ m 2 − 8m + 24 2 = k (m − 3)(m + √ m 2 − 8m + 24) 2(m − 2)
.
So, it suffices to show that
k (m − 3)(m + √ m 2 − 8m + 24) 2(m − 2) < ρ ABC (D k m,1 ),
which is indeed true as
ρ k ABC (D k m,1 ) − (m − 3)(m + √ m 2 − 8m + 24) 2(m − 2) = m 2 − 3m + 3 + (m − 1) 2 + (m − 2) 4 2(m − 1) − (m − 3)(m + √ m 2 − 8m + 24) 2(m − 2) = 1 2(m − 2)(m − 1) m 2 − 3m + 3 + (m − 1) 2 + (m − 2) 4 (m − 2) −(m − 3)(m − 1) m + √ m 2 − 8m + 24 > 1 2(m − 2)(m − 1) m 2 − 3m + 3 + (m − 2) 2 (m − 2) −(m − 3)(m − 1) m + √ m 2 − 8m + 24 = 1 2(m − 2)(m − 1) m 3 − 7m 2 + 18m − 14 − (m − 3)(m − 1) √ m 2 − 8m + 24 = 1 2(m − 2)(m − 1) (m − 3)(m − 1) m − 3 − (m − 4) 2 + 8 + 3m − 5 > 0
for m ≥ 3. This completes the proof. . Eliminating x 1 and x 3 from (5.3), we have Proof. First, we prove (i).
ρx 2 1 = 3 m − 3 m − 2 x 1 x 2 , (5.2) ρx 2 2 = (m − 3) 3 m − 3 m − 2 x 2 1 + 3 m − 1 4(m − 2) x 2 3 , (5.3) ρx 2 3 = 3 m − 1 4(m − 2) x 2 x 3 + 3 1 2 x 2 4 ,(5.ρ 3 ρ 3 − 1 2 2 − (m − 3) 2 m − 2 ρ 3 − 1 2 2 − m − 1 4(m − 2) ρ 6 = 0, i.e., η(ρ 3 ) = 0. So ρ ABC (S m
Let v 1 e 1 v 2 e 2 v 3 e 3 v 4 be a path in T m,1 such that v 2 and v 3 are the vertices of degrees m − 3 and 3, respectively. Let v 5 is a vertex of degrees 2 in e 2 , and v 6 be a pendant vertex in the pendant edge at v 5 . Let ρ 1 = ρ ABC (T m,1 ). Let x be the 3-unit positive eigenvector of ABC(T m,1 ) corresponding to ρ 1 . Let x i = x v i for i = 1, . . . , 6. By Lemma 2.5 and the (ρ 1 , x)-eigenequations of ABC(T m,1 ), we have
ρ 1 x 2 1 = 3 m − 4 m − 3 x 1 x 2 , (5.6) ρ 1 x 2 2 = (m − 4) 3 m − 4 m − 3 x 2 1 + 3 m − 1 6(m − 3) x 3 x 5 , (5.7) ρ 1 x 2 3 = 3 m − 1 6(m − 3) x 2 x 5 + 2 3 2 3 x 2 4 , (5.8) ρ 1 x 2 4 = ρ 1 x 2 5 = 3 m − 1 6(m − 3) x 2 x 3 + 3 1 2 x 2 6 ,(5.ρ 3 1 − 4 3 x 2 3 = 3 m − 1 6(m − 3) x 2 x 5 ρ 2 1 and ρ 3 1 − 1 2 x 2 5 = 3 m − 1 6(m − 3) x 2 x 3 ρ 2 1 , so x 3 = 3 m−1 6(m−3) ρ 2 1 (ρ 3 1 − 1 2 ) 1 3 (ρ 3 1 − 4 3 ) 2 3 x 2 and x 5 = 3 m−1 6(m−3) ρ 2 1 (ρ 3 1 − 1 2 ) 2 3 (ρ 3 1 − 4 3 ) 1 3 x 2 . Eliminating x 1 , x 3 and x 5 from (5.7), we have h 1 (ρ 3 1 ) = 0, where h 1 (t) = t 3 − 3m 2 − 18m + 31 3(m − 3) t 2 + 11m 2 − 84m + 164 6(m − 3) t − 2m 2 − 16m + 32 3(m − 3) .
So ρ 1 is the largest root of h 1 (t 3 ) = 0. Bearing in mind the expression for η(t) in (5.1), we have
η(t) = h 1 (t) − (m − 5)p 1 (t) 12(m − 2)(m − 3) , where p 1 (t) = (3m − 1)t 2 + (10m 2 − 57m + 70)t − 5m 2 + 28m − 35.
Let t 1 be the largest root of h 1 (t). If m = 6, then h 1 (t) = 9t 3 −31t 2 +28t−8
9
. It is easy to seen that h
1 (3) = 85 9 , h(1)1 (3) = 100 9 and h(2)1 (3) = 6. Since h (3−i) 1 (t) is strictly increasing for t ≥ 3 as h (4−i) 1(3)
(3) > 0 with i = 1, 2, 3, h 1 (t) is strictly increasing for t ≥ 3. Noting that h 1 (2) = − 4 9 and h 1 (3) = 40 9 , t 1 lies in (2, 3). As p 1 (t) is increasing for t ∈ [2, 3] and p 1 (t) > p 1 (2) = 13, we have η(t 1 ) = h 1 (t 1 ) − p 1 (t 1 ) 144 < − p 1 (2) 144 < 0. So t 1 is less than the largest root of η(t) = 0. Suppose that m ≥ 7. Note that h
1 (t) = 3t 2 − 6m 2 − 36m + 62 3(m − 3) t + 11m 2 − 84m + 164 6(m − 3) , h(1)1 (t) = 6t − 6m 2 − 36m + 62 3(m − 3) , h(2)
1 (t) = 6.
Then
h (1) 1 (m − 4) = 6m 3 − 67m 2 + 224m − 204 6(m − 3) > 0, h(2)1 (m − 4) = 2(6m 2 − 45m + 77) 3(m − 3) > 0,
and h
h 1 (m − 5) = − m 3 − 5m 2 − 36m + 184 6(m − 3) < 0, h 1 (m − 4) = (m − 4)(5m 2 − 54m + 140) 6(m − 3) > 0. So t 1 lies in (m−5, m−4). It is easy to see that p 1 (t) is strictly increasing for t ∈ [m−5, m−4]. So p 1 (t) > p 1 (m − 5) = 13m 3 − 143m 2 + 468m − 410 > 0 for t ∈ (m − 5, m − 4). Then η(t 1 ) = h 1 (t 1 ) − (m − 5)p 1 (t 1 ) 12(m − 2)(m − 3) < 0.
So t 1 is less than the largest root of η(t) = 0. By Lemma 5.2, ρ ABC (S m,3;m−3,1,1 ) is the largest root of η(t 3 ) = 0. Thus ρ 3 1 = t 1 < ρ 3 (S m,3;m−3,1,1 ). That is, ρ ABC (T m,1 ) < ρ ABC (S m,3;m−3,1,1 ). Let v 1 e 1 v 2 e 2 v 3 e 3 v 4 e 4 v 5 be a path in T m,2 such that there is a pendant edge at a vertex in e 3 \ {v 3 , v 4 }. Let v 6 and v 7 be pendant vertices in e 2 and a pendant edge at v 3 , respectively. Let ρ 2 = ρ ABC (T m,2 ). Let x be the 3-unit positive eigenvector of ABC(T m,2 ) corresponding to ρ 2 . Let x i = x v i for i = 1, . . . , 7. By Lemma 2.5 and the (ρ 2 , x)-eigenequations of ABC(T m,2 ), we have
ρ 2 x 2 1 = 3 1 2 x 1 x 2 , (5.12) ρ 2 x 2 2 = 3 1 2 x 2 1 + 3 1 2 x 3 x 6 , (5.13) ρ 2 x 2 3 = 3 1 2 x 2 x 6 + 3 m − 2 4(m − 3) x 2 4 + (m − 5) 3 m − 4 m − 3 x 2 7 ,(5.
14) ρ 2 x 3 . Eliminating x 2 , x 4 , x 6 and x 7 from (5.14), it follows that
ρ 2 x 2 4 = 3 m − 2 4(m − 3) x 3 x 4 + 3 1 2 x 2 5 , (5.15) ρ 2 x 2 5 = 3 1 2 x 4 x 5 , (5.16) ρ 2 x 2 6 = 3 1 2 x 2 x 3 , (5.17) ρ 2 x 2 7 = 3 m − 4 m − 3 x 3 x 7 .h 2 (ρ 3 2 ) = 0, where h 2 (t) = t 3 − 4m 2 − 29m + 60 4(m − 3) t 2 + 2m 2 − 17m + 37 2(m − 3) t − m 2 − 9m + 20 4(m − 3) .
So ρ 2 is the largest root of h 2 (t 3 ) = 0. It is easily seen that
η(t) = h 2 (t) − p 2 (t) 4(m − 2)(m − 3) ,
where p 2 (t) = (6m 2 − 34m + 39)t 2 − (7m 2 − 39m + 46)t + 2m 2 − 11m + 13.
As h (1)
2 (t) = 3t 2 − 4m 2 − 29m + 60 2(m − 3) t + 2m 2 − 17m + 37 2(m − 3) , h(2)2 (t) = 6t − 4m 2 − 29m + 60 2(m − 3) , h(3)2 (t) = 6, we have h (1) 2 (m − 4) = 2m 3 − 19m 2 + 47m − 11 2(m − 3) > 0, h(2)2 (m − 4) = 8m 2 − 55m + 84 2(m − 3) > 0. Since h (3−i) 2 (t) is strictly increasing for t ≥ m − 4 as h (4−i) 2 (m − 4) > 0 with i = 1, 2, 3, h 2 (t) is strictly increasing for t ≥ m − 4. Note that h 2 (m − 5) = − 3m 3 − 37m 2 + 109m + 32 4(m − 3) < 0 if m = 6, 7, 8, h 2 (m − 6) = − 3m 3 − 37m 2 + 109m + 32 4(m − 3) < 0 if m ≥ 9,η(t 2 ) = h 2 (t 2 ) − p 2 (t 2 ) 4(m − 2)(m − 3) < − p 2 (m − 6) 4(m − 2)(m − 3) < 0.
So t 2 is less than the largest root of η(t) = 0. By Lemma 5.2, we have ρ ABC (T m,2 ) < ρ ABC (S m,3;m−3,1,1 ). Now, we prove (ii).
Let
ρ 3 x 2 1 = 3 m − 4 m − 3 x 1 x 2 , (5.19) ρ 3 x 2 2 = (m − 4) 3 m − 4 m − 3 x 2 1 + 3 1 2 x 3 x 6 , (5.20) ρ 3 x 2 3 = 3 1 2 x 2 x 6 + 3 3 8 x 2 4 , (5.21) ρ 3 x 2 4 = 3 3 8 x 3 x 4 + 3 1 2 x 2 5 , (5.22) ρ 3 x 2 5 = 3 1 2 x 4 x 5 , (5.23) ρ 3 x 2 6 = 3 1 2 x 2 x 3 .h 3 (t 3 ) = 0, where h 3 (t) = t 3 − 8m 2 − 49m + 83 8(m − 3) t 2 + 11m 2 − 82m + 158 8(m − 3) t − 2m 2 − 15m + 29 8(m − 3) .
So ρ 3 is the largest root of h 3 (t 3 ) = 0. Note that
η(t) = h 3 (t) − (m − 4)p 3 (t) 8(m − 2)(m − 3) , where p 3 (t) = (3m − 1)t 2 + (3m 2 − 22m + 28)t + m − 1.+ 1 16 ≈ −0.5603. If m = 6, then h 3 (t) = 24t 3 −77t 2 +62t−11 24 . Note that h 3 (t) is strictly increasing for t ≥ 3, because h (3−i) 3 (t) is strictly increasing for t ≥ 3 as h (4−i) 3 (3) > 0 with i = 1, 2, 3. As h 3 (2) < 0 and h 3 (3) > 0, t 3 lies in (2, 3). Since p 3 (t) is increasing for t ∈ [2, 3] and p 3 (2) > 0, η(t 3 ) = h 3 (t 3 )− p 3 (t 3 ) 48 < − p 3 (2) 48 < 0.
So t 3 is less than the largest root of η(t) = 0 for m = 5, 6. Suppose that m ≥ 7. As h (1)
3 (t) = 3t 2 − 8m 2 − 49m + 83 4(m − 3) t + 11m 2 − 82m + 158 8(m − 3) , h(2)3 (t) = 6t − 8m 2 − 49m + 83 4(m − 3) , h(3)3 (t) = 6, we have h (1) 3 (m − 4) = 8m 3 − 91m 2 + 320m − 330 8(m − 3) > 0, h(2)3 (m − 4) = 16m 2 − 119m + 205 4(m − 3) > 0. Since h (3−i) 3 (t) is strictly increasing for t ≥ m − 4 as h (4−i) 1 (m − 4) > 0 with i = 1, 2, 3, h 3 (t) is strictly increasing for t ≥ m − 4. Note that h 3 (m − 5) = − 2m 3 − 24m 2 + 81m − 53 4(m − 3) < 0, h 3 (m − 4) = 4m 3 − 59m 2 + 285m − 453 8(m − 3) > 0.
So t 3 lies in (m − 5, m − 4). As p 3 (t) is strictly increasing for t ∈ [m − 5, m − 4], we have
p 3 (t) ≥ p 3 (m − 5) = 2(3m 3 − 34m 2 + 112m − 83) > 0 for t ∈ (m − 5, m − 4). Then η(t 3 ) = h 3 (t 3 ) − (m − 4)p 3 (t 3 ) 8(m − 2)(m − 3) < 0.
So t 3 is less than the largest root of η(t) = 0. Thus ρ ABC (T m,3 ) < ρ ABC (S m,3;m−3,1,1 ). As
ρ 4 x 2 1 = 3 m − 4 m − 3 x 1 x 2 , (5.25) ρ 4 x 2 2 = (m − 4) 3 m − 4 m − 3 x 2 1 + 3 m − 2 4(m − 3) x 3 x 6 , (5.26) ρ 4 x 2 3 = 3 m − 2 4(m − 3) x 2 x 6 + 3 1 2 x 4 x 7 , (5.27) ρ 4 x 2 4 = 3 1 2 x 3 x 7 + 3 1 2 x 2 5 , (5.28) ρ 4 x 2 5 = 3 1 2 x 4 x 5 , (5.29) ρ 4 x 2 6 = 3 m − 2 4(m − 3) x 2 x 3 + 3 1 2 x 2 8 , (5.30) ρ 4 x 2 7 = 3 1 2 x 3 x 4 , (5.31) ρ 4 x 2 8 = 3 1 2 x 6 x 8 .ρ 3 4 − (m − 4) 2 m − 3 x 2 2 = 3 m − 2 4(m − 3) ρ 2 4 x 3 x 6 . (5.33)
From (5.30) and (5.32), we have
ρ 3 4 − 1 2 x 2 6 = 3 m − 2 4(m − 3) ρ 2 4 x 2 x 3 . (5.34)
By (5.33) and (5.34), we have
x 2 = 3 m−2 4(m−3) ρ 2 4 ρ 3 4 − (m−4) 2 m−3 2 3 ρ 3 4 − 1 2 1 3 x 3 , so x 2 = 3 m−2 2(m−3) ρ 3 4 − 1 2 1 3 ρ 4 ρ 3 4 − (m−4) 2 m−3 2 3 x 4 .
By (5.27) and (5.31) , we have
ρ 3 4 x 3 3 − ρ 3 4 x 3 7 = 3 m − 2 4(m − 3) ρ 2 4 x 2 x 3 x 6 ,(5.ρ 3 4 − (m − 4) 2 m − 3 x 3 2 = ρ 3 4 x 3 3 − ρ 3 4 − 1 2 x 3 4 . (5.37)
Eliminating x 2 and x 3 from (5.37), it follows that
h 4 (ρ 3 4 ) = 0, where h 4 (t) = 1 8(m − 3) (2t − 1)(4(m − 3)t 2 − (4m 2 − 27m + 50)t + 4m 2 − 32m + 64).
So ρ 4 is the largest root of h 4 (t 3 ) = 0. Note that
η(t) = h 4 (t) − p 4 (t) 8(m − 2)(m − 3) ,
where
p 4 (t) = (4m 2 − 20m + 14)t 2 + (4m 3 − 45m 2 + 154m − 152)t − 2m 3 + 22m 2 − 74m + 74.
Let t 4 be the largest root of h 4 (t) = 0. If m = 6, then t 4 = 2 since h 4 (t) = (2t−1)(3t−2)(t−2) 6 , and so η(t 4 ) = − 29 16 . Thus t 4 is less than the largest root of h 4 (t) = 0.
Suppose that m ≥ 7. As h (1)
4 (t) = 3t 2 + −16m 2 t + 100mt − 176t + 12m 2 − 91m + 178 8(m − 3) , h(2)4 (t) = 6t + −4m 2 + 25m − 44 2(m − 3) , h(3)4 (t) = 6, we have h (1) 4 (m − 4) = 8m 3 − 88m 2 + 293m − 270 8(m − 3) > 0, h (2) 4 (m − 4) = 8m 2 − 59m + 100 2(m − 3) > 0. Since h (3−i) 4 (t) is strictly increasing for t ≥ m − 5 as h (4−i) 4 (m − 5) > 0 with i = 1, 2, 3, h 4 (t) is strictly increasing for t ≥ m − 5. Note that h 4 (m − 5) = − (2m − 11)(m 2 − 3m − 14) 8(m − 3) < 0,η(t 4 ) = h 4 (t 4 ) − p 4 (t 4 ) 8(m − 2)(m − 3) < 0.ρx 3 1 = 4 m − 4 m − 3 x 2 1 x 2 ,(5.38)ρx 3 2 = (m − 4) 4 m − 4 m − 3 x 3 1 + 4 m − 1 8(m − 3) x 3 3 , (5.39) ρx 3 3 = 4 m − 1 8(m − 3) x 2 x 2 3 + 4 1 2 x 3 4 ,(5.ρ 3 ρ 4 − 1 2 3 = 0, where h(t) = t 4 − 8m 2 − 51m + 91 8(m − 3) t 3 + 6m 2 − 45m + 87 4(m − 3) t 2 − 6m 2 − 47m + 93 8(m − 3) t + m 2 − 8m + 16 8(m − 3) .
So ρ is the largest root of h(t 4 ) = 0. By the expression for η(t) given in (5.1), we have
tη(t) = h(t) − (m − 4)p(t) 8(m − 2)(m − 3) ,
where p(t) = 5(m − 1)t 3 + 4(m 2 − 7m + 9)t 2 − (4m 2 − 25m + 33)t + m 2 − 6m + 8.
Let t 0 be the largest root of h(t) = 0. If m = 5, then h(t) = (4t−1)(4t 3 −8t 2 +4t−1)
16
. Note that h(t) is strictly increasing for t ≥ 2, because h (4−i) (t) is strictly increasing for t ≥ 2 as h (5−i) (2) > 0 with i = 1, 2, 3, 4. As h(1) < 0 and h(2) > 0, t 0 lies in (1, 2). Since p(t) is increasing for t ∈ [1, 2] and p(1) > 0,
t 0 η(t 0 ) = h(t 0 ) − p(t 0 ) 48 < − p(1) 48 < 0.
So t 0 is less than the largest root of η(t) = 0. Suppose that m ≥ 6. As
h (1) (t) = 4t 3 − 24m 2 − 153m + 273 8(m − 3) t 2 + 6m 2 − 45m + 87 2(m − 3) t − 6m 2 − 47m + 93 8(m − 3) , h (2) (t) = 12t 2 − 24m 2 − 153m + 273 4(m − 3) t + 6m 2 − 45m + 87 2(m − 3) , h (3) (t) = 24t − 24m 2 − 153m + 273 4(m − 3) , h (4) (t) = 24,
we have
h (1) (m − 4) = 8m 4 − 111m 3 + 525m 2 − 909m + 291 8(m − 3) > 0, h (2) (m − 4) = 3(8m 3 − 89m 2 + 315m − 346) 4(m − 3) > 0, h (3) (m − 4) = 3(24m 2 − 173m + 293) 4(m − 3) > 0. Since h (4−i) (t) is strictly increasing for t ≥ m − 4 as h (5−i) (m − 4) > 0 with i = 1, 2, 3, 4, h(t) is strictly increasing for t ≥ m − 4. Note that h(m − 5) = − m 4 − 8m 3 − 42m 2 + 526m − 1206 8(m − 3) < 0, h(m − 4) = (m − 4)(7m 3 − 99m 2 + 462m − 713) 8(m − 3) > 0, the largest root of h(t) = 0 is in (m − 5, m − 4).
Noting p(t) is increasing for t ∈ [m − 5, m − 4], we have p(t) ≥ p(m − 5) = 9m 4 − 152m 3 + 912m 2 − 2224m + 1698 > 0 for t ∈ (m − 5, m − 4). Then recalling the largest root, say t 0 of h(t) = 0 is in (m − 5, m − 4), we get
t 0 η(t 0 ) = h(t 0 ) − (m − 4)p(t 0 ) 8(m − 2)(m − 3) < 0.
So t 0 is less than the largest root of η(t) = 0. By Lemma 5. Suppose that m ≥ 5. Let G be a non-power k-uniform hypertree of size m that maximizes the ABC spectral radius. Then we only need to show that G ∼ = S m,k;m−3,1,1 .
If m = 5, then G ∼ = S 5,k;2,1,1 , T k 5,3 when k = 3, and G ∼ = S 5,k;2,1,1 , S 5,k;1,1,1,1 , T k 5,3 when k In either case, we have G ∼ = S m,k;m−3,1,1 . Suppose that m ≥ 6. Suppose that G ∼ = S m,k;m−3,1,1 . It suffices to show that ρ ABC (G) < ρ ABC (S m,k;m−3,1,1 ).
As G is a non-power k-uniform hypertree, the diameter of G is at least 3. As G is a non-power k-uniform hypertree and is different from S m,k;m−3,1,1 , the maximum degree G is at most m − 3. Note that if the diameter of G is at least 5, then the maximum degree of G at most m−4. So the maximum degree of G at most m − 4. By Theorem 1.2 and Lemma 5.1(ii), we have
ρ ABC (G) ≤ k m − 5 m − 4 ρ A (G) ≤ k m − 5 m − 4 ρ A ((S mρx k−1 1 = x k−2 1 x 2 , ρx k−1 2 = (m − 4)x k−1 1 + x 3 x 5 x k−3 7 , ρx k−1 3 = x 2 x 5 x k−3 7 + x k−1 4 , ρx k−1 4 = x 3 x k−2 4 , ρx k−1 5 = x 2 x 3 x k−3 7 + 2x k−1 6 , ρx k−1 6 = x 5 x k−2 6 ρx k−1 7 = x 2 x 3 x 5 x k−4 7 .
By similar argument in the proof of Lemma 5.3 (i), it is obtainable that ρ 3k − m ρ 2k + (3m − 10) ρ k − 2m + 8 = 0, and so ρ is the largest root of h(t k ) = 0, where
h(t) = t 3 − mt 2 + (3m − 10)t − 2m + 8. (5.42) Note that h(m − 2) = m 2 − 10m + 20 > 0, h(m − 3) = −(3m − 10) < 0, h m − √ m 2 − 9m + 60 3 = 2(m 2 − 9m + 30) √ m 2 − 9m + 60 9 + 3m 2 − 29m + 72 3 > 0. So the largest root of h(t) lies in (m − 3, m − 2). It follows that ρ ∈ ( k √ m − 3, k √ m − 2). Note that by (5.1), η k m − 5 m − 4 t k = q t k 4(m − 2)(m − 4) 3 , (5.43) where q(t) = 4(m − 2)(m − 5) 3 t 3 − (4m 2 − 19m + 27)(m − 5) 2 (m − 4)t 2 + (m − 4) 2 (4m 2 − 23m + 34)(m − 5)t − (m − 3) 2 (m − 4) 3 .
By easy calculation and using (5.42), we have
q(t) = 4(m − 2)(m − 5) 3 h(t) + r(t),
where
r(t) = t 2 (7m 2 − 63m + 108)(m − 5) 2 − t(m − 5)(8m 4 − 129m 3 + 738m 2 − 1760m + 1456) + (m − 4)(7m 4 − 122m 3 + 767m 2 − 2032m + 1856).
Consider the axis of symmetry of the quadratic function r(t) on t. As
8m 4 − 129m 3 + 738m 2 − 1760m + 1456 2(7m 2 − 63m + 108)(m − 5) < m − 3,
r(t) is strictly increasing for t ∈ (m − 3, +∞). Let s(m) = −r(m − 2), i.e., s(m) = m 6 − 31m 5 + 398m 4 − 2632m 3 + 9284m 2 − 16356m + 11184.
Note that
s (1) (m) = 6m 5 − 155m 4 + 1592m 3 − 7896m 2 + 18568m − 16356, s (2) (m) = 30m 4 − 620m 3 + 4776m 2 − 15792m + 18568, s (3) (m) = 120m 3 − 1860m 2 + 9552m − 15792,η t k = q ρ k 4(m − 2)(m − 4) 3 = 4(m − 2)(m − 5) 3 h( ρ k ) + r( ρ k ) 4(m − 2)(m − 4) 3 = r( ρ k ) 4(m − 2)(m − 4) 3 < 0,
So t is less than the largest root of η(t k ) = 0, i.e., k m−5 m−4 ρ A ((S m,k;m−4,2,1 )) < ρ ABC (S m,k;m−3,1,1 ). Thus ρ ABC (G) < ρ ABC (S m,k;m−3,1,1 ).
ABC spectral radius of unicyclic hypergraphs
For integers m ≥ 2, k ≥ 3, g = 2, 3 and a i for 1 ≤ i ≤ k with 0 ≤ a i ≤ m − g and k i=1 a i = m − g, let U m,k,g (a 1 , . . . , a k ) be the unicyclic graph obtained from a cycle u 1 e 1 u 2 . . . u g e g u 1 by adding a i pendant edges at v i , where e 1 = {v 1 , . . . , v k }, v 1 = u 1 and v k = u 2 . Then U (k) m,g ∼ = U m,k,g (m − g, 0, . . . , 0). It is evident that U (k) m,3 ∼ = U k m,3 . Lemma 6.1. Let k ≥ 3, g = 2, 3, m ≥ 3 and a i for 1 ≤ i ≤ k be integers such that a 1 ≥ a k ≥ 0, a 2 ≥ · · · ≥ a k−1 ≥ 0 and k i=1 a i = m − g. Then ρ ABC (U m,k,g (a 1 , . . . , a k )) ≤ ρ ABC (U (k) m,g ) with equality if and only if a 1 = m − g and a 2 = · · · = a k = 0.
Proof. Denote by U m,k,g the class of hypergraphs U m,k,g (a 1 , . . . , a k ) with a 1 ≥ a k ≥ 0, a 2 ≥ · · · ≥ a k−1 ≥ 0 and k i=1 a i = m − g. Let G = U m,k,g (a 1 , . . . , a k ) be a hypergraph in U m,k,g with maximum ABC spectral radius. Let x be the k-unit positive eigenvector of ABC(G) corresponding to ρ ABC (G). Then by Lemma 2.3, ρ ABC (G) = ABC(G)x k . Let u 1 e 1 u 2 . . . u g e g u 1 be the cycle of G as defined, where e 1 = {v 1 , . . . , v k }, v 1 = u 1 and v k = u 2 . Let v ′ i with 1 ≤ i ≤ k be a pendant vertex in a pendant edge at v i , u be a pendant vertex in e 2 , and v be a pendant vertex in e 3 when g = 3. By Lemma 2.5, the entry of x corresponding to each pendant vertex in an edge is the same.
It is evident that a 1 ≤ m − g. Suppose that a 1 ≤ m − g − 1. Then a s ≥ 1 for some s = 2, . . . , k. Let H be the unicyclic hypergraph obtained from G by moving all pendant edges from v j to v 1 , where 2 ≤ j ≤ k.
Assume that
x v i = max 1≤j≤k x v j , where 1 ≤ i ≤ k.
Suppose that g = 2. Let y be a vector such that
y v 1 = x v i , y v i = x v 1 and y w = x w for w ∈ V (G) \ {v 1 , v i } if i > 1 and y = x otherwise. By Lemma 2.3, ρ ABC (H) ≥ ABC(G)y k . So 1 k (ρ ABC (H) − ρ ABC (G)) ≥ 1 k ABC(H)y k − 1 k ABC(G)x k = k w∈e 1 d H (w) − k w∈e 1 d H (w) y v 1 . . . y v k + k w∈e 2 d H (w) − k w∈e 2 d H (w) y v 1 y v k y k−2 u + k j=1 a j k d H (v 1 ) − 1 d H (v 1 ) y v 1 y k−1 v ′ j − k w∈e 1 d G (w) − k w∈e 1 d G (w) x v 1 . . . x v k + k w∈e 2 d G (w) − k w∈e 2 d G (w) x v 1 x v k x k−2 u + k j=1 a j k d G (v j ) − 1 d G (v j ) x v j x k−1 v ′ j = k 1 2 x v 1 . . . x v k + k 1 2 x v i x v k x k−2 u + k j=1 a j k m − 1 m x v i x k−1 v ′ j − k m (a 1 + 2)(a k + 2) k−1 j=2 (a j + 1) x v 1 . . . x v k − k a 1 + a k + 2 (a 1 + 2)(a k + 2) x v 1 x v k x k−2 u − a 1 k a 1 + 1 a 1 + 2 x v 1 x k−1 v ′ 1 − k−1 j=2 a j k a j a j + 1 x v j x k−1 v ′ j − a k k a k + 1 a k + 2 x v k x k−1 v ′ k = k 1 2 − k m (a 1 + 2)(a k + 2) k−1 j=2 (a j + 1) x v 1 . . . x v k + k 1 2 x v i − k a 1 + a k + 2 (a 1 + 2)(a k + 2) x v 1 x v k x k−2 u + a 1 k m − 1 m x v i − k a 1 + 1 a 1 + 2 x v 1 x k−1 v ′ 1 + k−1 j=2 a j k m − 1 m x v i − k a j a j + 1 x v j x k−1 v ′ j + a k k m − 1 m x v i − k a k + 1 a k + 2 x v k x k−1 v ′ k .
By direct checking, we have k 1 2 ≥ k a 1 + a k + 2 (a 1 + 2)(a k + 2) .
As m = k j=1 a j + 2, it is easy to see (a 1 + 2)(a k + 2) k−1 i=2 (a i + 1) ≥ 2m, so we have
k 1 2 ≥ k m (a 1 + 2)(a k + 2) k−1 i=2 (a i + 1)
.
As a function of t, t t+1 is strictly increasing for t > 0, so k m − 1 m > max k a 1 + 1 a 1 + 2 , k a 2 a 2 + 1 , . . . , k a k−1 a k−1 + 1 , k a k + 1 a k + 2 .
By these inequalities and the above estimate for 1 k (ρ ABC (H) − ρ ABC (G)), we have
1 k (ρ ABC (H) − ρ ABC (G)) ≥ a s k m − 1 m x v i − k a s a s + 1 x vs x k−1 v ′ s if s < k a s k m − 1 m x v i − k a k + 1 a k + 2 x vs x k−1 v ′ s if s = k > 0, so ρ ABC (H) > ρ ABC (G)
, which is a contradiction. Thus a 1 = m − 2 and a 2 = · · · = a k = 0. Suppose next that g = 3. First, suppose that 1 ≤ i ≤ k − 1. Let y be a vector such that
y v 1 = x v i , y v i = x v 1 and y w = x w for w ∈ V (G) \ {v 1 , v i } if i > 1 and y = x otherwise. Note that k 1 2 ≥ k a 1 +a k +2 (a 1 +2)(a k +2) , k 1 2 ≥ k m−1 (a 1 +2)(a k +2) k−1 i=2 (a i +1) and k m−2 m−1 > max k a 1 +1
a 1 +2 , k a 2 a 2 +1 , . . . , k a k−1 a k−1 +1 , k a k +1 a k +2 . By Lemma 2.3, we have
1 k (ρ ABC (H) − ρ ABC (G)) ≥ 1 k ABC(H)y k − 1 k ABC(G)x k = k 1 2 y v 1 . . . y v k + k 1 2 y u 3 y v k y k−2 u + k 1 2 y u 3 y v 1 y k−2 v + k j=1 a j k m − 2 m − 1 y v 1 y k−1 v ′ j − k m (a 1 + 2)(a k + 2) k−1 j=2 (a j + 1) x v 1 . . . x v k − k 1 2 x u 3 x v k x k−2 u − k 1 2 x u 3 x v 1 x k−2 v − a 1 k a 1 a 1 + 1 x v 1 x k−1 v ′ 1 − k−1 j=2 a j k a j a j + 1 x v j x k−1 v ′ j − a k k a k a k + 1 x v k x k−1 v ′ k = k 1 2 − k m (a 1 + 2)(a k + 2) k−1 j=2 (a j + 1) x v 1 . . . x v k + k 1 2 (x v i − x v 1 ) x u 3 x k−2 v + a 1 k m − 2 m − 1 x v i − k a 1 + 1 a 1 + 2 x v 1 x k−1 v ′ 1 + k−1 j=2 a j k m − 2 m − 1 x v i − k a j a j + 1 x v j x k−1 v ′ j + a k k m − 2 m − 1 x v i − k a k + 1 a k + 2 x v k x k−1 v ′ k > 0.
So ρ ABC (H) > ρ ABC (G), a contradiction. Now, suppose that i = k. Let z be a vector such
that z v 1 = x v k , z v k = x v 1 , z w = x v for w ∈ e 2 \ {u 2 , u 3 }, z w = x u for w ∈ e 3 \ {u 1 , u 3 } and z w = x w for w ∈ V (G) \ (e 1 ∪ e 2 ∪ e 3 \ {u 3 }). By Lemma 2.3, we have 1 k (ρ ABC (H) − ρ ABC (G)) ≥ 1 k ABC(H)z k − 1 k ABC(G)x k = k 1 2 z v 1 . . . z v k + k 1 2 z u 3 z v k z k−2 u + k 1 2 z u 3 z v 1 z k−2 v + k j=1 a j k m − 2 m − 1 z v 1 z k−1 v ′ j − k m (a 1 + 2)(a k + 2) k−1 j=2 (a j + 1) x v 1 . . . x v k − k 1 2 x u 3 x v k x k−2 u − k 1 2 x u 3 x v 1 x k−2 v − a 1 k a 1 a 1 + 1 x v 1 x k−1 v ′ 1 − k−1 j=2 a j k a j a j + 1 x v j x k−1 v ′ j − a k k a k a k + 1 x v k x k−1 v ′ k = k 1 2 − k m (a 1 + 2)(a k + 2) k−1 j=2 (a j + 1) x v 1 . . . x v k + a 1 k m − 2 m − 1 x v k − k a 1 + 1 a 1 + 2 x v 1 x k−1 v ′ 1 + k−1 j=2 a j k m − 2 m − 1 x v k − k a j a j + 1 x v j x k−1 v ′ j + a k k m − 2 m − 1 x v k − k a k + 1 a k + 2 x v k x k−1 v ′ k > 0. So ρ ABC (H) > ρ ABC (G)
, also a contradiction. Thus a 1 = m − 3 and a 2 = · · · = a k = 0.
Proof of Theorem 1.5. First, we calculate ρ ABC (U x 1 , (6.1) Suppose that the girth of G is 2. Let u 1 e 1 u 2 e 2 u 1 be the cycle of G. Note that there is no edge different from e 1 and e 2 containing two vertices in e 1 ∪ e 2 . If there is an edge containing no vertex in e 1 ∪e 2 , or there are two edges one containing a vertex in e 1 \{u 1 , u 2 } and the other containing a vertex in e 2 \ {u 1 , u 2 }, then for any edge e of G, we have w∈e d w − k ≤ m − 1. So by Theorem 1.
ρx k−1 1 = (m − 2) k m − 1 m x k−1 0 + 2 · k 1 2 x 2 y k−2 3 , (6.2) ρx k−1 2 = 2 · k 1 2 x 1 x k−2 3 , (6.3) ρx k−1 3 = k 1 2 y 1 x 2 x k−3 3 .1, ρ ABC (G) ≤ k √ m − 1 < ρ ABC (U (k)
m,2 ). Suppose that any edge different from e 1 and e 2 is a pendant edge at some vertex in e i with i = 1, 2, say in e 1 . Let e 1 = {v 1 , . . . , v k }, where v 1 = u 1 and v k = u 2 . Let a i be the number of pendant edges at v i for 1 ≤ i ≤ k. Then k i=1 a i = m − 2. Assume that a 1 ≥ a k ≥ 0 and a 2 ≥ · · · ≥ a k−1 ≥ 0. Then G ∼ = U m,k,2 (a 1 , . . . , a k ). So by Lemma 6.1, ρ ABC (G) < ρ ABC (U Proof of Theorem 1.6. By the definition of f (t) in (4.2), we have
f ( √ m − 1) = (m − 1) √ m − 1 − 1 √ 2 − m 2 − 4m + 5 √ m − 1 + m 2 − 5m + 6 √ 2(m − 1) = 2 √ m − 1 − 3 √ 2 − 2 √ m − 1 + √ 2 m − 1 > 0 f ( √ m − 2) = (m − 2) √ m − 2 − 1 √ 2 − (m − 2)(m 2 − 4m + 5) m − 1 + m 2 − 5m + 6 √ 2(m − 1) < m 2 − 5m + 6 − (m − 2)(m 2 − 4m + 5) m − 1 < 0, f (0) = m 2 − 5m + 6 √ 2(m − 1) > 0.
Then all roots of f (t) lie in (−∞, 0), (0, √ m − 2) and ( √ m − 2, √ m − 1), respectively. So √ m − 2 < a m < √ m − 1. It follows from Theorem 4.1 that k √ m − 2 < ρ ABC (U k m,3 ) < k √ m − 1. Now, we prove the result. It is trivial if m = 3. Suppose that m ≥ 4. Let G be a linear k-uniform unicyclic hypergraph of size m different from U k m,3 . As G is linear, its girth is at least three. If the girth of G is at least 4, then for any edge e in G, we have w∈e d G (w) − k ≤ m − 2, so, by Theorem 1.1, we have ρ ABC (G) ≤ k √ m − 2 < ρ ABC (U k m, 3 ). Suppose that the girth of G is 3. Let u 1 e 1 u k e 2 u 2k−1 e 3 u 1 be a cycle of length three in G, where e i = {u (i−1)(k−1)+j : j = 1, . . . , k} for i = 1, 2, 3, and u 1 = u 3k−2 . Suppose that there is an edge containing no vertex in e 1 ∪ e 2 ∪ e 3 , or there are two edges, one containing a vertex in e i \ {u (i−1)(k−1)+1 , u (i−1)(k−1)+k } and the other containing a vertex in e j \ {u (j−1)(k−1)+1 , u (j−1)(k−1)+k }, where 1 ≤ i < j ≤ 3. Then for any edge e in G, we have w∈e d G (w) − k ≤ m − 2. So by Theorem 1.1, ρ ABC (G) ≤ k √ m − 2 < ρ ABC (U k m,3 ). Suppose that each edge different from e 1 , e 2 and e 3 is a pendant edge at some vertex in exactly one of e 1 , e 2 , e 3 , say e 1 . Let a i be the number of pendant edges at u i for 1 ≤ i ≤ k. Then k i=1 a i = m − 3. Assume that a 1 ≥ a k ≥ 0 and a 2 ≥ · · · ≥ a k−1 ≥ 0. Then G ∼ = U m,k,3 (a 1 , . . . , a k ). So by Lemma 6.1, ρ ABC (G) < ρ ABC (U k m,3 ).
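The closed form in Theorem 1.5 can also be checked numerically for a small case. The sketch below is illustrative only (the hypergraph U^{(3)}_{4,2}, the shifted power-type iteration and the NumPy dependency are assumptions, not part of the paper); it compares the estimate with $\sqrt[3]{m-1+2/m}$ for m = 4.

```python
import numpy as np

k, m = 3, 4
# U^{(3)}_{4,2}: a hypercycle of length 2 (two edges sharing vertices 0 and 2)
# plus m - 2 = 2 pendant edges attached at vertex 0.
edges = [(0, 1, 2), (2, 3, 0), (0, 4, 5), (0, 6, 7)]
n = 8
deg = np.zeros(n)
for e in edges:
    for v in e:
        deg[v] += 1

def abc_apply(x):
    y = np.zeros(n)
    for e in edges:
        w = ((sum(deg[v] for v in e) - k) / np.prod([deg[v] for v in e])) ** (1.0 / k)
        for i in e:
            y[i] += w * np.prod([x[v] for v in e if v != i])
    return y

x = np.ones(n) / n ** (1.0 / k)
for _ in range(5000):
    y = (abc_apply(x) + x ** (k - 1)) ** (1.0 / (k - 1))  # shifted power-type iteration
    x = y / np.linalg.norm(y, ord=k)

print(abc_apply(x) @ x, (m - 1 + 2.0 / m) ** (1.0 / k))  # both should be ~ 3.5^{1/3} ~ 1.518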
Concluding remarks
We propose the ABC tensor of a uniform hypergraph as a generalization of the ABC matrix of a graph. We give tight lower and upper bounds for the ABC spectral radius of a uniform hypergraph and characterize the hypergraphs that attain these bounds. We determine the maximum ABC spectral radii of uniform hypertrees, uniform non-hyperstar hypertrees and uniform non-power hypertrees of given size, as well as the maximum ABC spectral radii of unicyclic uniform hypergraphs and linear unicyclic uniform hypergraphs of given size, respectively. We also characterize those uniform hypergraphs for which the maxima for the ABC spectral radii are actually attained in all cases. We give two examples to show that the case of the ABC spectral radius of k-uniform hypergraphs for k ≥ 3 is quite different from the ordinary case with k = 2. A hyperpath is a hypertree with at most two pendant edges. Denote by P_{m,k} the k-uniform hyperpath with m edges. On one hand, we note from [37] that P_{m,k} for m ≥ 2 and k ≥ 2 is the unique connected k-uniform hypertree that minimizes the spectral radius, and from [3] that P_{m,2} is the unique connected graph of m edges that minimizes the ABC spectral radius.
Example 7.1. Let H_1 be the 3-uniform hypertree with 6 edges obtained from S_{2,3} by adding a new pendant edge at each pendant vertex. Let V_1 be the set of vertices of degree one of H_1, and let V_2 be the set of vertices of degree two that lie in a pendant edge of H_1. Let x be the k-unit positive eigenvector corresponding to ρ(H_1). By Lemma 2.5, the entries of x corresponding to the vertices in V_i are equal for i = 1, 2; denote this common value by x_i. Denote by x_3 the entry of x corresponding to the only vertex of degree two outside V_2. Then
$$\rho_{ABC}(H_1)\, x_1^2 = \sqrt[3]{\tfrac12}\, x_1 x_2,$$
Lemma 5. 2 .
2For m ≥ 4 and k ≥ 3, ρ ABC (S m,k;m−3,1,1 ) = b 1 k m , where b m is the largest root of η(t) = 0, and η(t) = t 3 − 4m 2 − Let v 1 e 1 v 2 e 2 v 3 e 3 v 4 be a path of S m,3;m−3,1,1 such that v 2 is of degree m − 2 and v 3 is of degree 2. Let x be the 3-unit positive eigenvector of ABC(S m,3;m−3,1,1 ) corresponding to ρ = ρ ABC (S m,3;m−3,1,1 ). Let x i = x v i for i = 1, 2, 3, 4. By Lemma 2.5 and the (ρ, x)eigenequations of ABC(S m,3;m−3,1,1 ), we have
, 3 ;m− 3 ,1, 1
331) is the largest root of η(t 3 ) = 0. Let b m be the largest root of η(t) 5.2, ρ ABC (S m,3;m−3,1,1 ) is the largest root of η(t 3 ) = 0, where η(t) is given in (5.1). We introduce four 3-uniform hypergraphs. Let T m,1 = S m,3;m−4,2,1 for m ≥ 6. For m ≥ 5, let T m,2 be the 3-uniform hypertree obtained by adding a pendant edge at a pendant vertex in a pendant edge at the vertex of degree m − 3 in S m−1,3;m−4,1,1 , T m,3 be the 3-uniform hypertree obtained by adding respectively one pendant edge at two distinct pendant vertices in a pendant edge at the vertex of degree 2 in D 3 m−2,1 and T m,4 be the 3-uniform hypertree obtained by adding a pendant edge at a pendant vertex in a pendant edge at the vertex of degree 2 in S m−1,3;m−4,1,1 . Obviously, T 5,2 ∼ = T 5,3 ∼ = T 5,4 .
Lemma 5. 3 .
3The following statements are true: (i) For m ≥ 6, max{ρ ABC (T m,1 ), ρ ABC (T m,2 )} < ρ ABC (S m,3;m−3,1,1 ). (ii) For m ≥ 5, max{ρ ABC (T m,3 ), ρ ABC (T m,4 )} < ρ ABC (S m,3;m−3,1,1 ).
ρ 1 .
1Eliminating x 4 and x 6 from (5.8) and (5.10), respectively, we have
t) is strictly increasing for t ≥ m − 4. Therefore h
t) is strictly increasing for t ≥ m − 4 and thus h 1 (t) is increasing for t ≥ m − 4. Note that
largest root of h 2 (t) = 0, say t 2 lies in (m − 5, m − 4) if m=6,7,8 and lies in (m − 6, m − 4) if m ≥ 9. It is easy to see that p 2 (t) is strictly increasing for t ∈ [m − 6, m − 4]. So p 2 (t) > p 2 (m − 6) = 6m 4 − 113m 3 + 746m 2 − 1983m + 1693 > 0 for t ∈ (m − 6, m − 4). Then
v 1 e 1 v 2 e 2 v 3 e 3 v 4 e 4 v 5 be a path in T m,3 such that v 2 is the vertex of degree m − 3. Let v 6 be a pendant vertex in e 2 . Let ρ 3 = ρ ABC (T m,3 ). Let x be the 3-unit positive eigenvector of ABC(T m,3 ) corresponding to ρ 3 . Let x i = x v i for i = 1, . . . , 6. By Lemma 2.5 and the (ρ 3 , x)-eigenequations of ABC(T m,3 ), we have
x 2 .
2Eliminating x 1 , x 3 and x 6 from (5.20), we have
Let t 3
3be the maximum root of h 3 (t) = 0. If m = 5, then t
T 5 , 3 ∼
53= T 5,4 and ρ ABC (T 5,3 ) < ρ ABC (S 5,3;2,1,1 ), it is sufficient to consider T m,4 for m ≥ 6. Let v 1 e 1 v 2 e 2 v 3 e 3 v 4 e 4 v 5 be a path in T m,4 such that v 2 is the vertex of degree m − 3. Let v 6 be the vertex in e 2 \ {v 2 , v 3 }, and v 7 and v 8 be the pendant vertices in e 3 and a pendant edge at v 6 , respectively. Let ρ 4 = ρ ABC (T m,4 ). Let x be the 3-unit positive eigenvector of ABC(T m,4 ) corresponding to ρ 4 . Let x i = x v i for i = 1, . . . , 8. By Lemma 2.5 and the (ρ 4 , x)-eigenequations of ABC(T m,4 ), we have
4 lies in (m − 5, m − 4). As p 4 (t) is strictly increasing for t ∈ [m − 5, m − 4], we have p 4 (t) > p 4 (m − 5) = (m − 4)(8m 3 − 95m 2 + 335m − 296) > 0 for t ∈ (m − 5, m − 4). Then
So t 4
4is less than the largest root of η(t) = 0. Thus ρ ABC (T m,4 ) < ρ ABC (S m,3;m−3,1,1 ).Lemma 5.4. For m ≥ 5, ρ ABC (S m,4;m−4,1,1,1 ) < ρ ABC (S m,4;m−3,1,1 ). Proof. Let v 1 e 1 v 2 e 2 v 3 e 3 v 4be a path in S m,4;m−4,1,1,1 such that v 2 and v 3 are the vertices of degree m − 3 and 2, respectively. Let ρ = ρ ABC (S m,4;m−4,1,1,1 ). Let x be the 4-unit positive eigenvector of ABC(S m,4;m−4,1,1,1 ) corresponding to ρ. Let x i = x v i for i = 1, . . . , 4. By Lemma 2.5 and the (ρ, x)-eigenequations of ABC(S m,4;m−4,1,1,1 ), we have
2, ρ ABC (S m,4;m−3,1,1 ) is the largest root of η(t 4 ) = 0. Thus ρ ABC (S m,4;m−4,1,1,1 ) < ρ ABC (S m,4;m−3,1,1 ). Proof of Theorem 1.4. By Lemma 5.2, we only need to show that ρ ABC (G) ≤ b 1 k m with equality if and only if G ∼ = S m,k;m−3,1,1 . It is trivial for m = 4.
≥ 4 . 4
44If k ≥ 3, then by Theorem 4.1 and Lemma 5.3 (ii), ρ ABC (T k 5,3 ) = ρ and if k ≥
Case 1 .
1The diameter of G is either 3 or 4, and the maximum degree of G is m − 3. Suppose first that the diameter of G is 3.Then G ∼ = S m,k;m−4,2,1 , or S m,1 ) = ρ ABC (S m,k;m−3,1,1 ). Thus ρ ABC (G) < ρ ABC (S m,k;m−3,1,1 ).Suppose next that the diameter of G is 4. Then G ∼ = T k m,2 , 1 ) = ρ ABC (S m,k;m−3,1,1 ).So ρ ABC (G) < ρ ABC (S m,k;m−3,1,1 ). Case 2. The diameter of G is at least 5, or the maximum degree of G at most m − 4.
In this case we only need to show that ((m − 5)/(m − 4))^{1/k} ρ A (S m,k;m−4,2,1 ) < ρ ABC (S m,k;m−3,1,1 ). Now we calculate ρ̃ = ρ A (S m,k;m−4,2,1 ). Let v 1 e 1 v 2 e 2 v 3 e 3 v 4 be a path of S m,k;m−4,2,1 such that v 2 is of degree m − 3 and v 3 is of degree 2. Let v 5 be the vertex of degree 3 in e 2 \ {v 2 , v 3 }, and let v 6 and v 7 be pendant vertices in a pendant edge at v 5 and in e 2 , respectively. Let x be the k-unit positive eigenvector of A(S m,k;m−4,2,1 ) corresponding to ρ̃, and let x i = x v i for i = 1, . . . , 7, so that Lemma 2.5 and the (ρ̃, x)-eigenequations of A(S m,k;m−4,2,1 ) apply.
Note that s^{(5−i)}(m) is strictly increasing for m ≥ 6, as s^{(6−i)}(6) > 0 for i = 1, . . . , 6. Then s(m) ≥ s(6) > 0 for m ≥ 6, so r(m − 2) = −s(m) < 0. As r(t) is strictly increasing for t ∈ (m − 3, m − 2), we have r(t) < 0 for t ∈ (m − 3, m − 2). Let t = ((m − 5)/(m − 4))^{1/k} ρ̃, and recall that ρ̃^k ∈ (m − 3, m − 2) and h(ρ̃^k) = 0. Then the desired inequality follows from (5.43).
Let v 1 be the vertex of degree m in U^{(k)}_{m,2}, and let v 1 e 1 v 2 e 2 v 1 be the cycle of U^{(k)}_{m,2}. Let v 0 and v 3 be pendant vertices in a pendant edge and in e 1 of U^{(k)}_{m,2}, respectively. Let x be the k-unit positive eigenvector of ABC(U^{(k)}_{m,2}) corresponding to ρ = ρ ABC (U^{(k)}_{m,2}), and let x i = x v i for i = 0, 1, 2, 3. By Lemma 2.5 and the (ρ, x)-eigenequations of ABC(U^{(k)}_{m,2}), we have, in particular, (6.2) and (6.3); the latter implies that x 2 = 2^{1/k} x 1 /ρ, so x 3 = x 1 /ρ. Now eliminating x 0 , x 2 and x 3 from (6.2) and noting that x 1 > 0, we obtain an equation for ρ.

We now prove the result. It is trivial if m = 2. Suppose that m ≥ 3, and let G be a k-uniform unicyclic hypergraph of size m different from U^{(k)}_{m,2}. Suppose first that the girth of G is at least 3. Then for any edge e = {i 1 , . . . , i k } in G, we have Σ_{w∈e} d w − k ≤ m − 1. By Theorem 1.1, we have ρ ABC (G) ≤ (m − 1)^{1/k} < ρ ABC (U^{(k)}_{m,2}).
Lemma 2.1 ([1, Theorem 1.3], [33, Theorem 2.3], and [12, Theorem 4.1]). Let T be a k-order n-dimensional nonnegative tensor.
Perron-Frobenius theorem for nonnegative tensors. K C Chang, K Pearson, T Zhang, Commun. Math. Sci. 6K.C. Chang, K. Pearson, T. Zhang, Perron-Frobenius theorem for nonnegative tensors, Commun. Math. Sci. 6 (2008) 507-520.
On extremality of ABC spectral radius of a tree. X Chen, Linear Algebra Appl. 564X. Chen, On extremality of ABC spectral radius of a tree, Linear Algebra Appl. 564 (2019) 159-169.
A note on the ABC spectral radius of graphs. X Chen, Linear Multilinear Algebra. 70X. Chen, A note on the ABC spectral radius of graphs, Linear Multilinear Algebra 70 (2022) 775-786.
Spectral Graph Theory. F R K Chung, American Math. Soc., ProvidenceF.R.K. Chung, Spectral Graph Theory, American Math. Soc., Providence, 1997.
Spectra of uniform hypergraphs. J Cooper, A Dutle, Linear Algebra Appl. 436J. Cooper, A. Dutle, Spectra of uniform hypergraphs, Linear Algebra Appl. 436 (2012) 3268-3292.
Comparison between atom-bond connectivity indices of graphs. K Das, M A Mohammed, I Gutman, K A Atan, MATCH Commun. Math. Comput. Chem. 76K. Das, M.A. Mohammed, I. Gutman, K.A. Atan, Comparison between atom-bond connectivity indices of graphs, MATCH Commun. Math. Comput. Chem. 76 (2016) 159-170.
The ABC matrix. E Estrada, J. Math. Chem. 55E. Estrada, The ABC matrix, J. Math. Chem. 55 (2017) 1021-1033.
Atom-bond connectivity and the energetic of branched alkanes. E Estrada, Chem. Phys. Lett. 463E. Estrada, Atom-bond connectivity and the energetic of branched alkanes, Chem. Phys. Lett. 463 (2008) 422-425.
Statistical-mechanical theory of topological indices. E Estrada, Phys. A. 602127612E. Estrada, Statistical-mechanical theory of topological indices, Phys. A 602 (2022) 127612.
What is the meaning of the graph energy after all?. E Estrada, M Benzi, Discrete Appl. Math. 230E. Estrada, M. Benzi, What is the meaning of the graph energy after all? Discrete Appl. Math. 230 (2017) 71-77.
An atom-bond connectivity index: modelling the enthalpy of formation of alkanes. E Estrada, L Torres, L Rodríguez, I Gutman, Indian J. Chem. 37E. Estrada, L. Torres, L. Rodríguez, I. Gutman, An atom-bond connectivity index: modelling the enthalpy of formation of alkanes, Indian J. Chem. 37A (1998) 849-855.
Perron-Frobenius theorem for nonnegative multilinear forms and extensions. S Friedland, S Gaubert, L Han, Linear Algebra Appl. 438S. Friedland, S. Gaubert, L. Han, Perron-Frobenius theorem for nonnegative multilinear forms and extensions, Linear Algebra Appl. 438 (2013) 738-749.
Atom-bond connectivity index of trees. B Furtula, A Graovac, D Vukičević, Discrete Appl. Math. 157B. Furtula, A. Graovac, D. Vukičević, Atom-bond connectivity index of trees, Discrete Appl. Math. 157 (2009) 2828-2835.
Trees with smallest atom-bond connectivity index. I Gutman, B Furtula, MATCH Commun. Math. Comput. Chem. 68I. Gutman, B. Furtula, Trees with smallest atom-bond connectivity index, MATCH Commun. Math. Comput. Chem. 68 (2012) 131-136.
On Randić energy. I Gutman, B Furtula, S B Bozkurt, Linear Algebra Appl. 442I. Gutman, B. Furtula, S. B. Bozkurt, On Randić energy, Linear Algebra Appl. 442 (2014) 50-57.
I Gutman, E V Konstantinova, V A Skorobogatov, Molecular hypergraphs and Clar structural formulas of benzenoid hydrocarbons. 136I. Gutman, E.V. Konstantinova, V.A. Skorobogatov, Molecular hypergraphs and Clar structural formulas of benzenoid hydrocarbons, ACH-Models Chem. 136 (1999) 539- 548.
Bounds on the ABC spectral radius and ABC energy of graphs. M Ghorbani, X Li, M Hakimi-Nezhaad, J Wang, Linear Algebra Appl. 598M. Ghorbani, X. Li, M. Hakimi-Nezhaad, J. Wang, Bounds on the ABC spectral radius and ABC energy of graphs, Linear Algebra Appl. 598 (2020) 145-164.
The Laplacian of a uniform hypergraph. S Hu, L Qi, J. Comb. Optim. 29S. Hu, L. Qi, The Laplacian of a uniform hypergraph, J. Comb. Optim. 29 (2015) 331-366.
Cored hypergraphs, power hypergraphs and their Laplacian H-eigenvalues. S Hu, L Qi, J Shao, Linear Algebra Appl. 439S. Hu, L. Qi, J. Shao, Cored hypergraphs, power hypergraphs and their Laplacian H-eigenvalues, Linear Algebra Appl. 439 (2013) 2980-2998.
On atom-bond connectivity index of graphs. H Hua, K Das, H Wang, J. Math. Anal. Appl. 479H. Hua, K. Das, H. Wang, On atom-bond connectivity index of graphs, J. Math. Anal. Appl. 479 (2019) 1099-1114.
Molecular hypergraphs: The new representation of nonclassical molecular structures with polycentric delocalized bonds. E V Konstantinova, V A Skorobogatov, J. Chem. Inf. Comput. Sci. 35E.V. Konstantinova, V.A. Skorobogatov, Molecular hypergraphs: The new representa- tion of nonclassical molecular structures with polycentric delocalized bonds, J. Chem. Inf. Comput. Sci. 35 (1995) 472-478.
Graph and hypergraph models of molecular structure: a comparative analysis of indices. E V Konstantinova, V A Skoroboratov, J. Structure Chem. 39E.V. Konstantinova, V.A. Skoroboratov, Graph and hypergraph models of molecular structure: a comparative analysis of indices, J. Structure Chem. 39 (1998) 958-966.
On the ABC spectra radius of unicyclic graphs. X Li, J Wang, Linear Algebra Appl. 596X. Li, J. Wang, On the ABC spectra radius of unicyclic graphs, Linear Algebra Appl. 596 (2020) 71-81.
L Lim, Proceedings of the First IEEE International Workshop on Computational Advances of Multi-Sensor Adaptive Processing. the First IEEE International Workshop on Computational Advances of Multi-Sensor Adaptive ProcessingPuerto VallartaSingular values and eigenvalues of tensors: a variational approachL. Lim, Singular values and eigenvalues of tensors: a variational approach, in: Proceed- ings of the First IEEE International Workshop on Computational Advances of Multi- Sensor Adaptive Processing, Puerto Vallarta, 2005, pp. 129-132.
Combinatorial methods for the spectral p-norm of hypermatrices. V Nikiforov, Linear Algebra Appl. 529V. Nikiforov, Combinatorial methods for the spectral p-norm of hypermatrices, Linear Algebra Appl. 529 (2017) 324-354.
Analytic methods for uniform hypergraphs. V Nikiforov, Linear Algebra Appl. 457V. Nikiforov, Analytic methods for uniform hypergraphs, Linear Algebra Appl. 457 (2014) 455-535.
On spectral hypergraph theory of the adjacency tensor, Graphs Combin. K Pearson, T Zhang, 30K. Pearson, T. Zhang, On spectral hypergraph theory of the adjacency tensor, Graphs Combin. 30 (2014) 1233-1248.
Eigenvalues of a real supersymmetric tensor. L Qi, J. Symbolic Comput. 40L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symbolic Comput. 40 (2005) 1302-1324.
Eigenvalues and invariants of tensors. L Qi, J. Math. Anal. Appl. 325L. Qi, Eigenvalues and invariants of tensors, J. Math. Anal. Appl. 325 (2007) 1363-1377.
L Qi, H Chen, Y Chen, Tensor Eigenvalues and Their Applications. SingaporeSpringerL. Qi, H. Chen, Y. Chen, Tensor Eigenvalues and Their Applications, Springer, Singa- pore, 2018.
L Qi, Z Luo, Tensor Analysis. Spectral theory and special tensors. SIAM, Philadelphia, PAL. Qi, Z. Luo, Tensor Analysis. Spectral theory and special tensors, SIAM, Philadelphia, PA, 2017.
On weakly irreducible nonnegative tensors and interval hull of some classes of tensors. M Rajesh Kannan, N Shaked-Monderer, A Berman, Linear Multilinear Algebra. 64M. Rajesh Kannan, N. Shaked-Monderer, A. Berman, On weakly irreducible nonnega- tive tensors and interval hull of some classes of tensors, Linear Multilinear Algebra 64 (2016) 667-679.
Further results for Perron-Frobenius theorem for nonegative tensors. Y Yang, Q Yang, SIAM J. Matrix Anal. Appl. 31Y. Yang, Q. Yang, Further results for Perron-Frobenius theorem for nonegative tensors, SIAM J. Matrix Anal. Appl. 31 (2010) 2517-2530.
Ordering of some uniform supertrees with larger spectral radii. X Yuan, J Shao, H Shan, Linear Algebra Appl. 495X. Yuan, J. Shao, H. Shan, Ordering of some uniform supertrees with larger spectral radii, Linear Algebra Appl. 495 (2016) 206-222.
The first two maximum ABC spectral radii of bicyclic graphs. Y Yuan, Z Du, Linear Algebra Appl. 615Y. Yuan, Z. Du, The first two maximum ABC spectral radii of bicyclic graphs, Linear Algebra Appl. 615 (2021) 28-41.
On large ABC spectral radii of unicyclic graphs. Y Yuan, B Zhou, Z Du, Discrete Appl. Math. 298Y. Yuan, B. Zhou, Z. Du, On large ABC spectral radii of unicyclic graphs, Discrete Appl. Math. 298 (2021) 56-65.
Uniform hypergraphs with the first two smallest spectral radii. J Zhang, J Li, H Guo, Linear Algebra Appl. 594J. Zhang, J. Li, H. Guo, Uniform hypergraphs with the first two smallest spectral radii, Linear Algebra Appl. 594 (2020) 71-80.
On atom-bond connectivity index. B Zhou, R Xing, Z. Naturforsch. A. 66B. Zhou, R. Xing, On atom-bond connectivity index, Z. Naturforsch. A 66 (2011) 61-66.
Some spectral properties of uniform hypergraphs. J Zhou, L Sun, W Wang, C Bu, Electron. J. Combin. 21Paper 4.24J. Zhou, L. Sun, W. Wang, C. Bu, Some spectral properties of uniform hypergraphs, Electron. J. Combin. 21 (2014) Paper 4.24.
| []
|
[
"Interplay of quantum spin Hall effect and spontaneous time-reversal symmetry breaking in electron-hole bilayers I: Transport properties",
"Interplay of quantum spin Hall effect and spontaneous time-reversal symmetry breaking in electron-hole bilayers I: Transport properties",
"Interplay of quantum spin Hall effect and spontaneous time-reversal symmetry breaking in electron-hole bilayers I: Transport properties",
"Interplay of quantum spin Hall effect and spontaneous time-reversal symmetry breaking in electron-hole bilayers I: Transport properties"
]
| [
"Tania Paul \nInternational Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland\n",
"V Fernández Becerra \nInternational Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland\n",
"Timo Hyart \nInternational Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland\n\nDepartment of Applied Physics\nAalto University\n00076Aalto, EspooFinland\n",
"Tania Paul \nInternational Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland\n",
"V Fernández Becerra \nInternational Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland\n",
"Timo Hyart \nInternational Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland\n\nDepartment of Applied Physics\nAalto University\n00076Aalto, EspooFinland\n"
]
| [
"International Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland",
"International Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland",
"International Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland",
"Department of Applied Physics\nAalto University\n00076Aalto, EspooFinland",
"International Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland",
"International Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland",
"International Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland",
"Department of Applied Physics\nAalto University\n00076Aalto, EspooFinland"
]
| []
| The band-inverted electron-hole bilayers, such as InAs/GaSb, are an interesting playground for the interplay of quantum spin Hall effect and correlation effects because of the small density of electrons and holes and the relatively small hybridization between the electron and hole bands. It has been proposed that Coulomb interactions lead to a time-reversal symmetry broken phase when the electron and hole densities are tuned from the trivial to the quantum spin Hall insulator regime. We show that the transport properties of the system in the time-reversal symmetry broken phase are consistent with the recent experimental observations in InAs/GaSb. Moreover, we carry out a quantum transport study on a Corbino disc where the bulk and edge contributions to the conductance can be separated. We show that the edge becomes smoothly conducting and the bulk is always insulating when one tunes the system from the trivial to the quantum spin Hall insulator phase, providing unambiguous transport signatures of the time-reversal symmetry broken phase. arXiv:2205.12790v1 [cond-mat.mes-hall] | 10.1103/physrevb.106.235420 | [
"https://export.arxiv.org/pdf/2205.12790v1.pdf"
]
| 249,062,668 | 2205.12790 | 795fed9ea6fe0899ef54c4d7a93f6e53e66a7225 |
Interplay of quantum spin Hall effect and spontaneous time-reversal symmetry breaking in electron-hole bilayers I: Transport properties
Tania Paul
International Research Centre MagTop
Institute of Physics
Polish Academy of Sciences
Aleja Lotników 32/46PL-02668WarsawPoland
V Fernández Becerra
International Research Centre MagTop
Institute of Physics
Polish Academy of Sciences
Aleja Lotników 32/46PL-02668WarsawPoland
Timo Hyart
International Research Centre MagTop
Institute of Physics
Polish Academy of Sciences
Aleja Lotników 32/46PL-02668WarsawPoland
Department of Applied Physics
Aalto University
00076Aalto, EspooFinland
Interplay of quantum spin Hall effect and spontaneous time-reversal symmetry breaking in electron-hole bilayers I: Transport properties
(Dated: May 26, 2022)
The band-inverted electron-hole bilayers, such as InAs/GaSb, are an interesting playground for the interplay of quantum spin Hall effect and correlation effects because of the small density of electrons and holes and the relatively small hybridization between the electron and hole bands. It has been proposed that Coulomb interactions lead to a time-reversal symmetry broken phase when the electron and hole densities are tuned from the trivial to the quantum spin Hall insulator regime. We show that the transport properties of the system in the time-reversal symmetry broken phase are consistent with the recent experimental observations in InAs/GaSb. Moreover, we carry out a quantum transport study on a Corbino disc where the bulk and edge contributions to the conductance can be separated. We show that the edge becomes smoothly conducting and the bulk is always insulating when one tunes the system from the trivial to the quantum spin Hall insulator phase, providing unambiguous transport signatures of the time-reversal symmetry broken phase. arXiv:2205.12790v1 [cond-mat.mes-hall]
I. INTRODUCTION
The advent of topological materials [1,2] has brought band-inverted semiconductors, with small electron and hole densities, to the focus of the attention in the search of quantum spin Hall (QSH) insulators [3][4][5][6][7][8]. However, the electron-electron interactions are important in these materials if the hybridization of the electron and hole bands is small compared to the exciton binding energy, as can be appreciated by noticing that the bilayer system of spatially separated electrons and holes is the wellknown paradigm system for the realization of an exciton condensate state [9,10]. Indeed, it is now theoretically understood that interactions can lead to a plethora of correlated phases in band-inverted semiconductors [11][12][13][14][15][16][17] and the recent experiments have shown evidence of excitonic phenomenology in InAs/GaSb quantum wells [18][19][20][21][22] as well as in WTe 2 [23,24].
We concentrate on the correlated phases appearing in the band-inverted electron-hole bilayers shown in Fig. 1(a) [4]. In these systems, the electron and hole bands are spatially separated and therefore only weakly hybridized. Moreover, the electron and hole densities (and hence also the band-inversion parameter E G ) can be controlled in situ with front and back gate voltages, V f and V b , allowing the possibility to study the phase transition between trivial and QSH insulator phases [4,22,25], as schematically illustrated in Fig. 1(b). It has been theoretically predicted that, due to the excitonic correlations caused by the Coulomb interactions, a third phase with spontaneously broken time-reversal symmetry (TRS) will appear in the transition regime between the two topologically distinct phases [11]. Within this phase the helical edge states, originating from the QSH insulator phase, can exist but they are not protected against backscattering, and it was theoretically demonstrated [11] that these unprotected edge states can explain the temperature- independent mean free path observed in InAs/GaSb bilayers in the presence of reasonably large applied currents [7,26,27]. However, an unambiguous experimental demonstration of the existence of the exotic insulating phase with spontaneously broken TRS symmetry is still lacking in these systems.
Here, we demonstrate that the transport properties of the system in the TRS broken phase are also consistent with the more recent transport experiments in InAs/GaSb bilayers with small applied currents [28], so that the spontaneous TRS symmetry breaking provides a comprehensive explanation of the temperature, voltage and length dependencies of the observed conductance [7,[26][27][28]. Finally, we propose an experiment which can be used to unambiguously demonstrate the existence of the spontaneous TRS breaking in this system. Namely, we show that the edge becomes smoothly conducting and the bulk remains insulating when one tunes across the TRS broken phase appearing between the trivial and QSH insulator phases in the Corbino geometry, where the bulk and edge contributions to the conductance can be separated [29]. In the presence of TRS symmetry the bulk transport gap must close when the system is tuned between topologically distinct phases, and hence the experimental demonstration of a transition without a bulk transport gap closing constitutes a proof of an existence of TRS broken insulating phase.
II. SPONTANEOUS TRS BREAKING IN ELECTRON-HOLE BILAYERS
In Ref. 11 it was shown using a full Hartree-Fock calculation that the Coulomb interactions in the Bernevig-Hughes-Zhang (BHZ) model [3] developed for InAs/GaSb bilayers [4,30] lead to three different phases as a function of the hybridization of the electron and hole bands A and the band-inversion parameter E G , which is defined here so that for E G > 0 (E G < 0) the electron and hole bands are (not) inverted at the Γ point, see Fig. 1(b). As intuitively expected, for small (large) A and E G one realizes a trivial (QSH) insulator phase. However, interestingly it was found that at intermediate values of A and E G there exists an insulating phase with spontaneously broken TRS symmetry separating the topologically distinct phases. In this Section we describe a simplified minimal model that fully captures all the essential results obtained using the full Hartree-Fock calculations in Ref. 11.
The single particle BHZ Hamiltonian is
H 0 = [E G − ℏ^2 k^2 /(2m)] τ z σ 0 + A k x τ x σ z − A k y τ y σ 0 + ∆ z τ y σ y , (1)
where τ 's and σ's denote the Pauli matrices in the electron-hole and spin basis, respectively. The electron band is made out of s-orbitals and the hole band is made out of only two p-orbitals, because the electric confining potential and the atomic spin-orbit coupling remove the degeneracies of the p-orbitals. The tunneling between the layers is dominantly odd in momentum and opens up a hybridization gap ∝ A. Here, we have assumed the same effective mass m for electrons and holes, and included only the momentum-independent spin-orbit coupling term ∆ z arising due to bulk inversion asymmetry. We have ignored the asymmetry of the masses and the momentum-dependent spin-orbit coupling terms, because they are not essential for understanding the phase diagram of the InAs/GaSb bilayers [11].
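As a concrete illustration of Eq. (1), a minimal numpy sketch of the 4×4 Bloch Hamiltonian H 0 (k) can be written with tensor products of Pauli matrices; the basis ordering (electron-up, electron-down, hole-up, hole-down), the unit convention and the example parameter values are assumptions of this sketch, not taken from the paper.

import numpy as np

# Pauli matrices: tau acts on the electron-hole index, sigma on the spin.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h0(kx, ky, EG, A, Dz, hbar2_over_2m=0.5):
    # Eq. (1) in the kron(tau, sigma) basis; hbar^2/(2m) = 1/2 in units E_0 = d_0 = 1.
    k2 = kx**2 + ky**2
    return ((EG - hbar2_over_2m * k2) * np.kron(sz, s0)
            + A * kx * np.kron(sx, sz)
            - A * ky * np.kron(sy, s0)
            + Dz * np.kron(sy, sy))

# Example: eigenvalues at a generic k point (illustrative parameters).
print(np.linalg.eigvalsh(h0(0.1, 0.2, EG=1.0, A=0.06, Dz=0.02)))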
The main effect of Coulomb interactions is the binding of the electrons and holes into excitons with the characteristic size d 0 and binding energy E 0 determined by the relation E 0 = ℏ^2 /(m d 0 ^2) = e^2 /(4π ε 0 d 0 ). This leads to an excitonic mean field [11]

H EC = Re[∆ 1 ] τ y σ y + Re[∆ 2 ] (k x τ x σ z − k y τ y σ 0 )
 + Im[∆ 1 ] τ x σ y − Im[∆ 2 ] (k x τ y σ z + k y τ x σ 0 ), (2)

where ∆ 1 and ∆ 2 are complex bosonic fields describing s-wave and p-wave excitonic correlations, respectively. For simplicity we have expanded the fields ∆ 1 and ∆ 2 only to the lowest order in momentum and neglected the full |k| dependence, which is present in the numerical solution of the Hartree-Fock equations [11]. It is easy to see by straightforward calculation that the terms in the first line of Eq. (2) obey the TRS T = iτ 0 σ y K (K is the complex conjugation operator) and the terms in the second line break it. Therefore, nonzero imaginary parts of the fields, Im[∆ 1 ], Im[∆ 2 ] ≠ 0, result in spontaneous TRS breaking. We can solve the complex bosonic mean fields ∆ 1 and ∆ 2 by substituting the ansatz (2) into the Hartree-Fock mean field equations. This way, we arrive at the following mean field equations (see Appendix A for more details)
∆ 1 = [g s d 0 ^2 /(2π)^2] ∫ d^2 k ⟨c † k↓2 c k↑1 − c † k↑2 c k↓1 ⟩ (3)

and

∆ 2 = [g p d 0 ^4 /(2π)^2] ∫ d^2 k ⟨−c † k↑2 c k↑1 (k x − ik y ) + c † k↓2 c k↓1 (k x + ik y )⟩, (4)
where g s (g p ) is the effective interaction strength for s-wave (p-wave) pairing and c 1σk (c 2σk ) is the electron annihilation operator with spin σ and momentum k in the electron (hole) layer. In our numerical calculations the integration is performed over the range |k| ≤ 2.26/d 0 , but the exact values of the integration limits are not important. The effective interaction strengths g s and g p can be considered as fitting parameters, whose values should be fixed so that one approximately reproduces the results obtained from the Hartree-Fock calculations [11].
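To make the self-consistency structure of Eqs. (3) and (4) concrete, here is a minimal, purely illustrative Python fixed-point iteration at zero temperature on a square momentum grid. The unit choices (E_0 = d_0 = 1, so ħ²/(2m) = 1/2), the grid, the mixing parameter and the initial seed are our own assumptions, this is not the Hartree-Fock code used for the figures, and whether the loop converges to the phases of Fig. 2 depends on details not captured here.

import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

def h_mf(kx, ky, EG, A, Dz, d1, d2):
    # H_0 + H_EC of Eqs. (1)-(2) in the kron(tau, sigma) basis
    # (e-up, e-dn, h-up, h-dn); units E_0 = d_0 = 1, so hbar^2/(2m) = 1/2.
    k2 = kx**2 + ky**2
    h = ((EG - 0.5 * k2) * kron(sz, s0) + A * kx * kron(sx, sz)
         - A * ky * kron(sy, s0) + Dz * kron(sy, sy))
    h += (d1.real * kron(sy, sy)
          + d2.real * (kx * kron(sx, sz) - ky * kron(sy, s0))
          + d1.imag * kron(sx, sy)
          - d2.imag * (kx * kron(sy, sz) + ky * kron(sx, s0)))
    return h

def iterate_gaps(EG, A=0.06, Dz=0.02, gs=1.0, gp=0.2,
                 nk=31, kmax=2.26, mix=0.5, steps=60):
    # Bare-bones zero-temperature fixed-point iteration of Eqs. (3)-(4).
    ks = np.linspace(-kmax, kmax, nk)
    dk2 = (ks[1] - ks[0])**2
    d1, d2 = 0.1 + 0.1j, 0.1 + 0.1j        # small symmetry-breaking seed
    for _ in range(steps):
        s1 = s2 = 0.0j
        for kx in ks:
            for ky in ks:
                if kx**2 + ky**2 > kmax**2:
                    continue
                e, u = np.linalg.eigh(h_mf(kx, ky, EG, A, Dz, d1, d2))
                occ = u[:, e < 0]          # filled (negative-energy) states
                rho = occ @ occ.conj().T   # rho[a, b] = <c_b^dagger c_a>
                # basis index: 0=(1,up), 1=(1,dn), 2=(2,up), 3=(2,dn)
                s1 += rho[0, 3] - rho[1, 2]
                s2 += -rho[0, 2] * (kx - 1j * ky) + rho[1, 3] * (kx + 1j * ky)
        d1 = (1 - mix) * d1 + mix * gs * dk2 / (2 * np.pi)**2 * s1
        d2 = (1 - mix) * d2 + mix * gp * dk2 / (2 * np.pi)**2 * s2
    return d1, d2

print(iterate_gaps(EG=0.86))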
The values of the model parameters for InAs/GaSb can be estimated by combining theoretical calculations [4,10,11,30] and the experimentally observed energy gaps [7,18]. This way, we arrive at the parameter values used in our calculations:
E 0 /k B = 200 K, d 0 = 10 nm, A/(E 0 d 0 ) = 0.06, ∆ z /E 0 = 0.02, g s /E 0 = 1.0 and g p /E 0 = 0.2.
The band-inversion parameter E G is a gate-tunable parameter (see Fig. 1), which is varied in our calculations to tune the system from the trivial insulator to the QSH insulator phase. As shown in Fig. 2, the resulting phase diagram is consistent with the full Hartree-Fock calculations [11]. For small (large) values of E G the system is in a trivial (QSH) insulator phase, and importantly these two phases are separated from each other by an insulating phase with spontaneously broken TRS, where Im[∆ 1 ], Im[∆ 2 ] ≠ 0. The bulk gap ∆ bulk remains open for all values of E G , because the intermediate TRS broken phase enables the connection of the topologically distinct phases without bulk gap closing. The edge gap ∆ edge decreases monotonically when one starts from the trivial phase and tunes the system across the TRS broken phase to the QSH phase, where the gapless edge excitations are protected by the topology.
The appearance of spontaneous TRS breaking can be understood with the help of topological considerations. The topological invariant distinguishing the QSH phase from the trivial insulator can change only if (i) the bulk energy gap closes or (ii) TRS is broken in a regime between the topologically distinct phases. The case (i) would be the only possibility if the local order were fixed. However, in an interacting system the order parameter corresponds to a minimum of the free energy, and it is energetically favourable to keep the system gapped. Due to this reason there is a general tendency for the appearance of a TRS broken phase in the transition regime between QSH and trivial insulator phases.
III. LENGTH, TEMPERATURE AND VOLTAGE DEPENDENCE OF THE CONDUCTANCE
The identification of the edge states in InAs/GaSb bilayers was initially problematic due to finite bulk density of states in the minigap [6]. The main breakthrough in eliminating the bulk conduction came from insertion of Si to the interface between the InAs and GaSb layers during the growth process [7]. After achieving a truly insulating bulk this way, L. Du et al. [7] managed to demonstrate in mesoscopic samples wide conductance plateaus quantized to the values expected for nonlocal helical edge transport (variations less than 1%). The accurate conductance quantization was reported for several devices of various lengths and three different geometries in Ref. 7. Moreover, by imaging the distribution of the current flow inside the sample it has been confirmed that the current flows along the edge in agreement with helical edge conduction [27]. More careful measurements of temperature and voltage dependencies are also consistent with singlemode edge conduction [28]. In a different type of samples, where Si was not inserted and the observed thermal activation gap for the bulk transport is an order of magnitude smaller, multi-mode edge conduction has been reported by another group [32]. The explanation of the remarkably different transport properties observed in the presence and in the absence of Si doping remains an open theoretical problem. Because these observations are mutually inconsistent, it is clear that they cannot be explained with the same model Hamiltonian. Here, we concentrate on the transport experiments in Si doped samples with large activation gap [7,28]. We show that these experiments are consistent with the transport properties theoretically obtained in the TRS broken phase.
In long samples the conductance is not observed to be quantized [7], indicating that backscattering processes occur between the counterpropagating edge channels. It was found that in the limit eV ≫ k B T the resistance is independent of temperature between 20 mK and 4.2 K and that it increases linearly with the edge length L. These observations are not surprising once the elastic backscattering processes are allowed and a large voltage is applied, because under these conditions the inelastic scattering rate is expected to be approximately equal to the elastic one [33], and therefore the localization effects can be neglected and the resistance is expected to be temperature independent. In the QSH phase the elastic backscattering is forbidden in the presence of time-reversal symmetry due to the topological protection, so these observations are not consistent with the system being in the QSH phase without an additional assumption about the existence of charge puddles that may lead to an enhanced backscattering rate [34]. On the other hand, the TRS broken phase supports edge states but the elastic backscattering is now allowed, so the experimental observations are fully consistent with the system being in the TRS broken phase. Thus, the TRS broken phase provides an intrinsic explanation of these experiments, remaining applicable even if we assume that the samples are of high quality so that no charge puddles are present in the system.
In short mesoscopic samples with small applied voltage and temperature, the voltage and temperature dependencies of the conductance are more complicated and we need to use a quantum transport approach to describe them. The disorder-averaged differential conductance G d = dI/dV is obtained from
G d (E F + eV, T ) = ∫_{−∞}^{+∞} dE 2G 0 exp[−L/ℓ(E)] / {4 k B T cosh^2 [(E − E F − eV)/(2 k B T )]}, (5)

where G 0 = e^2 /h, E F is the Fermi energy, V is the voltage, T is the temperature of the reservoirs, L is the length of the sample and ℓ(E) is the energy-dependent elastic mean free path, which for E ≫ ∆ edge is given by [11]

ℓ(E) = 4 a^2 v^2 E^2 /(ξ V dis ^2 ∆ edge ^2 ). (6)
Here, E is the energy relative to the energy of the crossing of the edge states, v is the edge velocity, V dis is the strength of the disorder potential, ξ is the disorder correlation length and a ∼ 1 is a numerical factor. Although the exact expression for ℓ(E) is model dependent, it must always satisfy ℓ(E) → ∞ for E ≫ ∆ edge , so that G d ≈ 2G 0 for k B T ≫ ∆ edge . Therefore, there exist robust asymptotic limits which guarantee that G d undergoes a crossover from a nonquantized value to the quantized value G d = 2G 0 both with increasing temperature and voltage,

G d ≈ 2G 0 [1 − L/ℓ(E F + eV)] for k B T ≪ E F + eV, and G d ≈ 2G 0 for k B T ≫ ∆ edge . (7)

In order to study the full temperature dependence we introduce an energy scale E L , which is defined in such a way that ℓ(E L ) ≡ L, i.e.

E L = [L ξ V dis ^2 ∆ edge ^2 /(4 a^2 v^2 )]^{1/2}. (8)
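As a check on the crossover behaviour encoded in Eqs. (5)-(8), the thermally broadened integral can be evaluated numerically. The following is a rough Python sketch with all energies measured in units of E L , so that Eqs. (6) and (8) give L/ℓ(E) = (E L /E)^2 = 1/E^2; using the large-E form of ℓ(E) at all energies and the specific grid parameters are simplifying assumptions of this sketch.

import numpy as np

def transmission(E):
    # exp(-L/ell(E)) with energies in units of E_L, i.e. L/ell(E) = 1/E^2.
    return np.exp(-1.0 / np.maximum(E**2, 1e-12))

def G_d(x, T, n=4001, cutoff=60.0):
    # Eq. (5) in units of 2*G_0, with x = (E_F + eV)/E_L and T = k_B T / E_L.
    E = np.linspace(x - cutoff * T, x + cutoff * T, n)
    kernel = 1.0 / (4.0 * T * np.cosh((E - x) / (2.0 * T))**2)
    return float(np.sum(transmission(E) * kernel) * (E[1] - E[0]))

# Crossover with temperature at fixed (E_F + eV)/E_L = 0.75, cf. Fig. 3(a).
for T in (0.05, 0.2, 0.5, 1.0, 2.0):
    print(T, G_d(0.75, T))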
The differential conductance G d , which depends on two parameters (E F + eV )/E L and 2k B T /E L , is shown in Fig. 3. In this analysis we have neglected the effects of electron-electron interactions beyond the mean field theory, and the energy and temperature dependence of the excitonic mean fields. Nevertheless, our results for the G d crossovers from a non-quantized to the quantized value G d = 2G 0 with increasing voltage and temperature are in reasonable agreement with the experimental observations [28]. We consider the observations of these crossovers as very strong evidence of single-mode edge transport. In the experiment [28] the temperature dependence of the conductance
G(E F , V, T ) = (1/V) ∫_0^V dV ′ G d (E F + eV ′ , T ) (9)
was also reported in a current-biased (fixed I) situation. The theoretical predictions for this situation, obtained using Eqs. (5), (6), (9) and I = GV , are shown in Fig. 4. In this case, the shapes of the curves in the crossover regime depend on the Fermi energy E F and they resemble the experimental observations [28] more in the case of reasonably large values of E F . In a more detailed microscopic description the crossing of the edge states may be buried within the bulk bands [35], so that a reasonably large E F compared to the energy of the crossing could naturally be realized in the experiments.
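The current-biased curves of Fig. 4 follow from solving the implicit relation I = G(E F , V, T ) V for V at each temperature and then evaluating Eq. (9). A simple bisection sketch in Python is given below; as before, energies and voltages are assumed to be in units of E L , the current in units of 2G 0 E L /e, and the example values of I and E F are illustrative only.

import numpy as np

def transmission(E):
    # exp(-L/ell(E)) with energies in units of E_L, i.e. L/ell(E) = 1/E^2.
    return np.exp(-1.0 / np.maximum(E**2, 1e-12))

def G_d(x, T, n=1001, cutoff=40.0):
    # Eq. (5) in units of 2*G_0.
    E = np.linspace(x - cutoff * T, x + cutoff * T, n)
    kern = 1.0 / (4.0 * T * np.cosh((E - x) / (2.0 * T))**2)
    return float(np.sum(transmission(E) * kern) * (E[1] - E[0]))

def G_avg(EF, V, T, m=200):
    # Eq. (9): voltage-averaged conductance, again in units of 2*G_0.
    Vp = np.linspace(0.0, V, m + 1)[1:]      # avoid the V' = 0 endpoint
    return float(np.mean([G_d(EF + v, T) for v in Vp]))

def V_of_I(I, EF, T, Vmax=50.0, tol=1e-6):
    # Solve I = G(EF, V, T) * V for V by bisection (G*V is monotone in V).
    lo, hi = 1e-9, Vmax
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if G_avg(EF, mid, T) * mid < I:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Temperature dependence of G at fixed current (illustrative only, cf. Fig. 4).
for T in (0.1, 0.5, 1.0, 2.0):
    V = V_of_I(I=0.5, EF=0.7, T=T)
    print(T, G_avg(0.7, V, T))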
IV. DECOUPLING OF BULK AND EDGE TRANSPORT IN CORBINO GEOMETRY
We have shown that the transport experiments performed so far with InAs/GaSb devices are consistent with the system being in the TRS broken phase. However, it is difficult to rule out other possible theoretical explanations based on these experimental observations. In this Section, we propose a transport experiment which could be used to prove the existence of the exotic TRS broken phase based on robust topological arguments.
Namely, we consider a Corbino device for decoupling the differential conductances corresponding to the bulk G bulk and edge G edge transport, as illustrated in Fig. 5. The dimensions of the Corbino disc, R in ≈ 1 µm and R out = 2 µm, are chosen so that the transport is (approximately) ballistic and the decay lengths of the evanescent bulk modes in the middle of the bulk gap are much shorter than the transport paths. This guarantees that G bulk ≈ 0 for applied voltages satisfying |eV dc | < ∆ bulk /2. Importantly, this allows one to demonstrate that the transport gap does not close when the system is tuned from the trivial to the QSH insulator phase by varying E G [see Fig. 5(b)]. On the other hand, the edge conductance changes smoothly from G edge = 0 (trivial phase) to G edge = 2G 0 (QSH phase) upon increasing E G , demonstrating the closing of the edge gap ∆ edge at the transition to the QSH insulator phase [see Fig. 5(c)]. Importantly, the bulk and edge conductances can be elegantly measured in the same device when the system is tuned in situ from the trivial to the QSH insulator phase using the gate voltages. Such an experimental demonstration of a topological transition without a bulk transport gap closing would constitute a proof of the existence of the TRS broken insulating phase.
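The geometry of such a transport calculation can be prototyped with the Kwant package mentioned in the figure captions. The sketch below builds only a toy single-orbital square-lattice ring (not the BHZ model with excitonic mean fields used for Fig. 5) with two leads attached near the outer rim, so the transmission it returns is merely a stand-in for the edge-dominated conductance; the lattice, hopping, lead width and energy are arbitrary illustrative choices.

import kwant

def make_ring(r_in=8, r_out=16, t=1.0, lead_width=7):
    lat = kwant.lattice.square(a=1, norbs=1)
    syst = kwant.Builder()

    def ring(pos):
        x, y = pos
        return r_in**2 < x**2 + y**2 < r_out**2

    # Annular scattering region.
    syst[lat.shape(ring, (0, (r_in + r_out) // 2))] = 4 * t
    syst[lat.neighbors()] = -t

    # Straight lead entering from the left; its mirror image from the right.
    lead = kwant.Builder(kwant.TranslationalSymmetry((-1, 0)))
    lead[(lat(0, y) for y in range(-(lead_width // 2), lead_width // 2 + 1))] = 4 * t
    lead[lat.neighbors()] = -t
    syst.attach_lead(lead)
    syst.attach_lead(lead.reversed())
    return syst.finalized()

fsyst = make_ring()
smat = kwant.smatrix(fsyst, energy=0.4)
print("toy ring transmission T(1 <- 0):", smat.transmission(1, 0))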
V. CONCLUSIONS AND DISCUSSION
We have discussed the possibility of unconventional topological phase transition between trivial and QSH insulator phases in band-inverted electron-hole bilayers. The hallmark of this transition is the existence of an intermediate insulating phase with spontaneously broken TRS. We have demonstrated that the transport properties of the system in the TRS broken phase are consistent with the observed transport characteristics of InAs/GaSb devices, and we have shown that the measurement of the bulk and edge conductances in a Corbino device can provide unambiguous transport signatures of a topological transition without a bulk transport gap closing, proving the existence of the TRS broken phase.
Although we have focused on InAs/GaSb bilayers, we point out that band-inverted electron-hole systems can be realized in many semiconducting bilayers by creating a strong electric field at the barrier between the layers [36][37][38][39][40]. In principle all these systems are potential candidates for supporting the interplay of excitonic correlations and the QSH effect, but for most of the semiconductors the barrier thickness may have to be so large that the hybridization gap between the electron and hole bands becomes too small to realize a sufficiently large topological gap in the QSH insulator phase. Our theory may also be applicable to HgTe bilayers [12].
In a separate work [41], we also show that in the presence of induced superconductivity the spontaneous time-reversal symmetry breaking allows one to realize Majorana zero modes in the absence of a magnetic field.
ACKNOWLEDGMENTS
We thank D. I. Pikulin for useful discussions and comments. The work is supported by the Foundation for Polish Science through the IRA Programme co-financed by EU within SG OP. We acknowledge the computational resources provided by the Aalto Science-IT project and the access to the computing facilities of the Interdisciplinary Center of Modeling at the University of Warsaw, Grant No. G87-1164 and G78-13.
Appendix A: Minimal model and mean field equations for excitonic correlations

Based on the numerical solution of the Hartree-Fock mean field theory [11], we know that the main effect of intraband interactions (in the relevant part of the parameter space) is to renormalize the band structure. Therefore, we consider only the interband interactions
Ĥ I = − Σ_{s,s ′ } Σ_{k,k ′ } V k,k ′ c † ks1 c ks ′ 2 c † k ′ s ′ 2 c k ′ s1 , (A1)
where V k,k ′ describes the Coulomb interactions between the layers. On a mean-field level, the Hamiltonian is

Ĥ mf = Ĥ 0 − Σ_{k,s,s ′ } [∆ ss ′ (k) c † ks1 c ks ′ 2 + h.c.], (A2)

and in momentum space it takes the matrix form

H mf (k) =
[ ξ(k), 0, (A + ∆ 2 )(k x + ik y ), −(∆ 1 + ∆ z ) ;
  0, ξ(k), (∆ 1 + ∆ z ), −(A + ∆ 2 )(k x − ik y ) ;
  (A + ∆ 2 )*(k x − ik y ), (∆ 1 + ∆ z )*, −ξ(k), 0 ;
  −(∆ 1 + ∆ z )*, −(A + ∆ 2 )*(k x + ik y ), 0, −ξ(k) ]. (A7)
Here we have utilized the fact that the excitonic mean field can be approximated as
∆ mf = i∆ 1 σ 2 − ∆ 2 (k x σ 3 + ik y σ 0 ),(A8)
where ∆ 1 and ∆ 2 are complex bosonic fields describing s-wave and p-wave excitonic correlations, respectively. By inverting the interaction matrix and substituting the ansatz (A8) to the mean field equation, we obtain
(d 0 ^2 /L^2 ) Σ_k [f ↑,↓ (k) − f ↓,↑ (k)] = (2 d 0 ^2 /L^2 ) Σ_{k,k ′ } V^{−1}_{k,k ′ } ∆ 1 = (1/g s ) ∆ 1 (A9)

and

(d 0 ^2 /L^2 ) Σ_k [−f ↑,↑ (k)(k x − ik y ) + f ↓,↓ (k)(k x + ik y )] = (2 d 0 ^2 /L^2 ) Σ_{k,k ′ } V^{−1}_{k,k ′ } ∆ 2 (k x k ′ x + k y k ′ y ) = [1/(g p d 0 ^2 )] ∆ 2 , (A10)
where we have defined effective interaction strengths g s and g p for the s-wave and p-wave excitonic correlations as
g s ^{−1} = (2 d 0 ^2 /L^2 ) Σ_{k,k ′ } V^{−1}_{k,k ′ } , g p ^{−1} = (2 d 0 ^4 /L^2 ) Σ_{k,k ′ } V^{−1}_{k,k ′ } (k x k ′ x + k y k ′ y ). (A11)

The length scale d 0 is introduced to guarantee that the interaction strengths have a unit of energy, and it can in principle be chosen arbitrarily. However, we know that in the case of the Coulomb interaction the natural length d 0 and energy E 0 scales are determined so that the kinetic and interaction energies are equal,

E 0 = ℏ^2 /(m d 0 ^2 ) = e^2 /(4π ε 0 d 0 ). (A12)
This way we obtain the mean field equations (3) and (4) given in the main text.
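For completeness, the explicit 4×4 matrix of Eq. (A7) can be written down and diagonalized directly, which is how diag(E 1k , . . . , E 4k ) in Eq. (A6) is obtained in practice. The sketch below assumes the row and column ordering implied by the block structure written above and uses placeholder numerical values, so it is only an illustration, not the code used in the paper.

import numpy as np

def h_mf_matrix(kx, ky, xi, A, d1, d2, dz):
    # Explicit 4x4 mean-field Hamiltonian of Eq. (A7); the basis ordering
    # follows the block structure above (an assumption of this sketch).
    ad2 = A + d2
    d1z = d1 + dz
    kp, km = kx + 1j * ky, kx - 1j * ky
    return np.array([
        [xi, 0.0, ad2 * kp, -d1z],
        [0.0, xi, d1z, -ad2 * km],
        [np.conj(ad2) * km, np.conj(d1z), -xi, 0.0],
        [-np.conj(d1z), -np.conj(ad2) * kp, 0.0, -xi],
    ], dtype=complex)

# Placeholder values for a single k point.
H = h_mf_matrix(0.3, -0.1, xi=0.2, A=0.06, d1=0.05 + 0.02j, d2=0.01 + 0.01j, dz=0.02)
E, U = np.linalg.eigh(H)
print("E_mk:", E)
print("diagonalization check:", np.allclose(U.conj().T @ H @ U, np.diag(E)))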
FIG. 1. Schematic illustration of the setup. (a) The densities of the electrons and holes can be controlled with gate voltages V f and V b in a heterostructure supporting spatially separated electron and hole bands. (b) This way the gate voltages determine whether the electron and hole bands are inverted at the Γ point (EG > 0) or not (EG < 0), as well as whether the Fermi level (thick black line) is in the conduction band, band gap or valence band. The insulating phase with EG > 0 (EG < 0) is the QSH (trivial) insulator phase.

FIG. 2. (a) Phase diagram as a function of EG. The trivial and QSH phases obey the TRS. In the TRS broken phase the s- and p-wave excitonic mean fields obey Im[∆1], Im[∆2] ≠ 0. The bulk gap ∆ bulk remains open for all values of EG and the edge gap ∆ edge decreases monotonically from the bulk gap value to zero, when one tunes EG across the TRS broken phase towards the QSH phase. The model parameters are described in the text. Energy bands in (b) trivial phase with EG = 0.3E0, (c) TRS broken phase with EG = 0.86E0 and (d) QSH phase with EG = 1.12E0. The eigenenergies are obtained by diagonalizing the tight-binding Hamiltonian which is generated from the continuum Hamiltonian, defined by Eqs. (1)-(4), using the Kwant software package [31].

FIG. 3. (a) Differential conductance G d as a function of T for (EF + eV )/EL = 1.5, 1, 0.75, 0.5. (b) G d as a function of V for 2kBT /EL = 2.5, 2, 1.5, 1.1.

FIG. 4. (a) Conductance G as a function of T for EF = 0 and eI/(G0EL) = 2, 1.5, 1, 0.5. (b) Same for EF /EL = 0.7 and eI/(G0EL) = 0.4, 0.2, 0.1, 0.02.

FIG. 5. (a) Schematic illustration of a Corbino device and the transport paths corresponding to the bulk and edge differential conductances G bulk and G edge . The dimensions of the Corbino disc Rin ≈ 1 µm and Rout = 2 µm are chosen so that the transport is (approximately) ballistic and the decay lengths of the evanescent bulk modes in the middle of the bulk gap are much shorter than the transport paths. (b),(c) G bulk and G edge as a function of EG and applied voltage V dc . The inset in (c) shows G edge as a function of EG (green line) for eV dc = 0.012E0. The red dashed line is a guide to the eye. The conductances have been calculated with the help of the tight-binding Hamiltonian which is generated from the continuum Hamiltonian, defined by Eqs. (1)-(4), using the Kwant software package [31].
f s,s ′ (k) ≡ ⟨c † ks ′ 2 c ks1 ⟩ = Σ_m n F (E mk ) [U k Q ss ′ U † k ] mm ,

where n F (E) = [e^{E/(k B T )} + 1]^{−1} is the Fermi function, T is the temperature and the transformation U k diagonalizes the mean field Hamiltonian,

diag(E 1k , E 2k , E 3k , E 4k ) = U k H mf (k) U † k . (A6)
Colloquium: Topological insulators. M Z Hasan, C L Kane, 10.1103/RevModPhys.82.3045Rev. Mod. Phys. 823045M. Z. Hasan and C. L. Kane, Colloquium: Topological insulators, Rev. Mod. Phys. 82, 3045 (2010).
Topological insulators and superconductors. X.-L Qi, S.-C Zhang, 10.1103/RevModPhys.83.1057Rev. Mod. Phys. 831057X.-L. Qi and S.-C. Zhang, Topological insulators and su- perconductors, Rev. Mod. Phys. 83, 1057 (2011).
Quantum Spin Hall Effect and Topological Phase Transition in HgTe Quantum Wells. B A Bernevig, T L Hughes, S.-C Zhang, 10.1126/science.1133734Science. 3141757B. A. Bernevig, T. L. Hughes, and S.-C. Zhang, Quan- tum Spin Hall Effect and Topological Phase Transition in HgTe Quantum Wells, Science 314, 1757 (2006).
. C Liu, T L Hughes, X.-L Qi, K Wang, S.-C , C. Liu, T. L. Hughes, X.-L. Qi, K. Wang, and S.-C.
Quantum Spin Hall Effect in Inverted Type-II Semiconductors. Zhang, 10.1103/PhysRevLett.100.236601Phys. Rev. Lett. 100236601Zhang, Quantum Spin Hall Effect in Inverted Type-II Semiconductors, Phys. Rev. Lett. 100, 236601 (2008).
M König, S Wiedmann, C Brüne, A Roth, H Buhmann, L W Molenkamp, X.-L Qi, S.-C Zhang, 10.1126/science.1148047Quantum Spin Hall Insulator State in HgTe Quantum Wells. 318766M. König, S. Wiedmann, C. Brüne, A. Roth, H. Buh- mann, L. W. Molenkamp, X.-L. Qi, and S.-C. Zhang, Quantum Spin Hall Insulator State in HgTe Quantum Wells, Science 318, 766 (2007).
Evidence for Helical Edge Modes in Inverted InAs/GaSb Quantum Wells. I Knez, R.-R Du, G Sullivan, 10.1103/PhysRevLett.107.136603Phys. Rev. Lett. 107136603I. Knez, R.-R. Du, and G. Sullivan, Evidence for Heli- cal Edge Modes in Inverted InAs/GaSb Quantum Wells, Phys. Rev. Lett. 107, 136603 (2011).
Robust Helical Edge Transport in Gated InAs/GaSb Bilayers. L Du, I Knez, G Sullivan, R.-R Du, 10.1103/PhysRevLett.114.096802Phys. Rev. Lett. 11496802L. Du, I. Knez, G. Sullivan, and R.-R. Du, Robust Heli- cal Edge Transport in Gated InAs/GaSb Bilayers, Phys. Rev. Lett. 114, 096802 (2015).
Observation of the quantum spin Hall effect up to 100 kelvin in a monolayer crystal. S Wu, V Fatemi, Q D Gibson, K Watanabe, T Taniguchi, R J Cava, P Jarillo-Herrero, 10.1126/science.aan6003Science. 35976S. Wu, V. Fatemi, Q. D. Gibson, K. Watanabe, T. Taniguchi, R. J. Cava, and P. Jarillo-Herrero, Obser- vation of the quantum spin Hall effect up to 100 kelvin in a monolayer crystal, Science 359, 76 (2018).
New mechanism for superconductivity: pairing between spatially separated electrons and holes. Y Lozovik, V Yudson, Sov. Phys. JETP. 44389Y. Lozovik and V. Yudson, New mechanism for supercon- ductivity: pairing between spatially separated electrons and holes, Sov. Phys. JETP 44, 389 (1976).
Excitonic Instability and Electric-Field-Induced Phase Transition Towards a Two-Dimensional Exciton Condensate. Y Naveh, B Laikhtman, 10.1103/PhysRevLett.77.900Phys. Rev. Lett. 77900Y. Naveh and B. Laikhtman, Excitonic Instability and Electric-Field-Induced Phase Transition Towards a Two- Dimensional Exciton Condensate, Phys. Rev. Lett. 77, 900 (1996).
Interplay of Exciton Condensation and the Quantum Spin Hall Effect in InAs/GaSb Bilayers. D I Pikulin, T Hyart, 10.1103/PhysRevLett.112.176403Phys. Rev. Lett. 112176403D. I. Pikulin and T. Hyart, Interplay of Exciton Conden- sation and the Quantum Spin Hall Effect in InAs/GaSb Bilayers, Phys. Rev. Lett. 112, 176403 (2014).
Time Reversal Symmetric Topological Exciton Condensate in Bilayer HgTe Quantum Wells. J C Budich, B Trauzettel, P Michetti, 10.1103/PhysRevLett.112.146405Phys. Rev. Lett. 112146405J. C. Budich, B. Trauzettel, and P. Michetti, Time Re- versal Symmetric Topological Exciton Condensate in Bi- layer HgTe Quantum Wells, Phys. Rev. Lett. 112, 146405 (2014).
Topological charge-density and spin-density waves in InAs/GaSb quantum wells under an in-plane magnetic field. L.-H Hu, C.-C Chen, C.-X Liu, F.-C Zhang, Y Zhou, 10.1103/PhysRevB.96.075130Phys. Rev. B. 9675130L.-H. Hu, C.-C. Chen, C.-X. Liu, F.-C. Zhang, and Y. Zhou, Topological charge-density and spin-density waves in InAs/GaSb quantum wells under an in-plane magnetic field, Phys. Rev. B 96, 075130 (2017).
Time-Reversal Symmetry-Breaking Nematic Insulators near Quantum Spin Hall Phase Transitions. F Xue, A H Macdonald, 10.1103/PhysRevLett.120.186802Phys. Rev. Lett. 120186802F. Xue and A. H. MacDonald, Time-Reversal Symmetry- Breaking Nematic Insulators near Quantum Spin Hall Phase Transitions, Phys. Rev. Lett. 120, 186802 (2018).
Gate tuning from exciton superfluid to quantum anomalous Hall in van der Waals heterobilayer. Q Zhu, M W Tu, Q Tong, W Yao, 10.1126/sciadv.aau6120Science Advances. 56120Q. Zhu, M. W.-Y. Tu, Q. Tong, and W. Yao, Gate tuning from exciton superfluid to quantum anomalous Hall in van der Waals heterobilayer, Science Advances 5, eaau6120 (2019).
A monolayer transition-metal dichalcogenide as a topological excitonic insulator. D Varsano, M Palummo, E Molinari, M Rontani, 10.1038/s41565-020-0650-4Nature Nanotechnology. 15367D. Varsano, M. Palummo, E. Molinari, and M. Rontani, A monolayer transition-metal dichalcogenide as a topo- logical excitonic insulator, Nature Nanotechnology 15, 367 (2020).
In-plane magnetic field induced density wave states near quantum spin Hall phase transitions. Y Zeng, F Xue, A H Macdonald, arXiv:2112.07523cond-mat.meshallY. Zeng, F. Xue, and A. H. MacDonald, In-plane mag- netic field induced density wave states near quantum spin Hall phase transitions, arXiv:2112.07523 [cond-mat.mes- hall].
L Du, X Li, W Lou, G Sullivan, K Chang, J Kono, R.-R Du, 10.1038/s41467-017-01988-1Evidence for a topological excitonic insulator in InAs/GaSb bilayers. 81971L. Du, X. Li, W. Lou, G. Sullivan, K. Chang, J. Kono, and R.-R. Du, Evidence for a topological excitonic insu- lator in InAs/GaSb bilayers, Nature Communications 8, 1971 (2017).
Resistive signature of excitonic coupling in an electronhole double layer with a middle barrier. X Wu, W Lou, K Chang, G Sullivan, R.-R Du, 10.1103/PhysRevB.99.085307Phys. Rev. B. 9985307X. Wu, W. Lou, K. Chang, G. Sullivan, and R.-R. Du, Resistive signature of excitonic coupling in an electron- hole double layer with a middle barrier, Phys. Rev. B 99, 085307 (2019).
Electrically tuning many-body states in a Coulomb-coupled InAs/InGaSb double layer. X.-J Wu, W Lou, K Chang, G Sullivan, A Ikhlassi, R.-R Du, 10.1103/PhysRevB.100.165309Phys. Rev. B. 100165309X.-J. Wu, W. Lou, K. Chang, G. Sullivan, A. Ikhlassi, and R.-R. Du, Electrically tuning many-body states in a Coulomb-coupled InAs/InGaSb double layer, Phys. Rev. B 100, 165309 (2019).
D Xiao, C.-X Liu, N Samarth, L.-H Hu, 10.1103/PhysRevLett.122.186802Anomalous Quantum Oscillations of Interacting Electron-Hole Gases in Inverted Type-II InAs/GaSb Quantum Wells. 122186802D. Xiao, C.-X. Liu, N. Samarth, and L.-H. Hu, Anoma- lous Quantum Oscillations of Interacting Electron-Hole Gases in Inverted Type-II InAs/GaSb Quantum Wells, Phys. Rev. Lett. 122, 186802 (2019).
Energy gap tuning and gate-controlled topological phase transition in InAs/InxGa1−xSb composite quantum wells. H Irie, T Akiho, F Couëdo, K Suzuki, K Onomitsu, K Muraki, 10.1103/PhysRevMaterials.4.104201Phys. Rev. Materials. 4104201H. Irie, T. Akiho, F. Couëdo, K. Suzuki, K. Onomitsu, and K. Muraki, Energy gap tuning and gate-controlled topological phase transition in InAs/InxGa1−xSb com- posite quantum wells, Phys. Rev. Materials 4, 104201 (2020).
Evidence for a monolayer excitonic insulator. Y Jia, P Wang, C.-L Chiu, Z Song, G Yu, B Jäck, S Lei, S Klemenz, F A Cevallos, M Onyszczak, N Fishchenko, X Liu, G Farahi, F Xie, Y Xu, K Watanabe, T Taniguchi, B A Bernevig, R J Cava, L M Schoop, A Yazdani, S Wu, 10.1038/s41567-021-01422-wNature Physics. 1887Y. Jia, P. Wang, C.-L. Chiu, Z. Song, G. Yu, B. Jäck, S. Lei, S. Klemenz, F. A. Cevallos, M. Onyszczak, N. Fishchenko, X. Liu, G. Farahi, F. Xie, Y. Xu, K. Watanabe, T. Taniguchi, B. A. Bernevig, R. J. Cava, L. M. Schoop, A. Yazdani, and S. Wu, Evidence for a monolayer excitonic insulator, Nature Physics 18, 87 (2022).
Evidence for equilibrium exciton condensation in monolayer WTe2. B Sun, W Zhao, T Palomaki, Z Fei, E Runburg, P Malinowski, X Huang, J Cenker, Y.-T Cui, J.-H Chu, X Xu, S S Ataei, D Varsano, M Palummo, E Molinari, M Rontani, D H Cobden, 10.1038/s41567-021-01427-5Nature Physics. 1894B. Sun, W. Zhao, T. Palomaki, Z. Fei, E. Runburg, P. Malinowski, X. Huang, J. Cenker, Y.-T. Cui, J.-H. Chu, X. Xu, S. S. Ataei, D. Varsano, M. Palummo, E. Molinari, M. Rontani, and D. H. Cobden, Evidence for equilibrium exciton condensation in monolayer WTe2, Nature Physics 18, 94 (2022).
Kouwenhoven, Electric and Magnetic Tuning Between the Trivial and Topological Phases in InAs/GaSb Double Quantum Wells. F Qu, A J A Beukman, S Nadj-Perge, M Wimmer, B.-M Nguyen, W Yi, J Thorp, M Sokolich, A A Kiselev, M J Manfra, C M Marcus, L P , 10.1103/PhysRevLett.115.036803Phys. Rev. Lett. 11536803F. Qu, A. J. A. Beukman, S. Nadj-Perge, M. Wimmer, B.-M. Nguyen, W. Yi, J. Thorp, M. Sokolich, A. A. Kise- lev, M. J. Manfra, C. M. Marcus, and L. P. Kouwen- hoven, Electric and Magnetic Tuning Between the Trivial and Topological Phases in InAs/GaSb Double Quantum Wells, Phys. Rev. Lett. 115, 036803 (2015).
Observation of Edge Transport in the Disordered Regime of Topologically Insulating InAs/GaSb Quantum Wells. I Knez, C T Rettner, S.-H Yang, S S P Parkin, L Du, R.-R Du, G Sullivan, 10.1103/PhysRevLett.112.026602Phys. Rev. Lett. 11226602I. Knez, C. T. Rettner, S.-H. Yang, S. S. P. Parkin, L. Du, R.-R. Du, and G. Sullivan, Observation of Edge Trans- port in the Disordered Regime of Topologically Insulat- ing InAs/GaSb Quantum Wells, Phys. Rev. Lett. 112, 026602 (2014).
Images of Edge Current in InAs/GaSb Quantum Wells. E M Spanton, K C Nowack, L Du, G Sullivan, R.-R Du, K A Moler, 10.1103/PhysRevLett.113.026804Phys. Rev. Lett. 11326804E. M. Spanton, K. C. Nowack, L. Du, G. Sullivan, R.-R. Du, and K. A. Moler, Images of Edge Current in InAs/GaSb Quantum Wells, Phys. Rev. Lett. 113, 026804 (2014).
Observation of a Helical Luttinger Liquid in InAs/GaSb Quantum Spin Hall Edges. T Li, P Wang, H Fu, L Du, K A Schreiber, X Mu, X Liu, G Sullivan, G A Csáthy, X Lin, R.-R Du, 10.1103/PhysRevLett.115.136804Phys. Rev. Lett. 115136804T. Li, P. Wang, H. Fu, L. Du, K. A. Schreiber, X. Mu, X. Liu, G. Sullivan, G. A. Csáthy, X. Lin, and R.-R. Du, Observation of a Helical Luttinger Liquid in InAs/GaSb Quantum Spin Hall Edges, Phys. Rev. Lett. 115, 136804 (2015).
Decoupling Edge Versus Bulk Conductance in the Trivial Regime of an InAs/GaSb Double Quantum Well Using Corbino Ring Geometry. B.-M Nguyen, A A Kiselev, R Noah, W Yi, F Qu, A J A Beukman, F K De Vries, J Van Veen, S Nadj-Perge, L P Kouwenhoven, M Kjaergaard, H J Suominen, F Nichele, C M Marcus, M J Manfra, M Sokolich, 10.1103/PhysRevLett.117.077701Phys. Rev. Lett. 11777701B.-M. Nguyen, A. A. Kiselev, R. Noah, W. Yi, F. Qu, A. J. A. Beukman, F. K. de Vries, J. van Veen, S. Nadj- Perge, L. P. Kouwenhoven, M. Kjaergaard, H. J. Suomi- nen, F. Nichele, C. M. Marcus, M. J. Manfra, and M. Sokolich, Decoupling Edge Versus Bulk Conductance in the Trivial Regime of an InAs/GaSb Double Quan- tum Well Using Corbino Ring Geometry, Phys. Rev. Lett. 117, 077701 (2016).
Models and Materials for Topological Insulators. C Liu, S Zhang, Topological Insulators. M. Franz and L. MolenkampC. Liu and S. Zhang, Models and Materials for Topolog- ical Insulators, edited by M. Franz and L. Molenkamp, Topological Insulators (2013).
Kwant: a software package for quantum transport. C W Groth, M Wimmer, A R Akhmerov, X Waintal, 10.1088/1367-2630/16/6/063065New Journal of Physics. 1663065C. W. Groth, M. Wimmer, A. R. Akhmerov, and X. Waintal, Kwant: a software package for quantum transport, New Journal of Physics 16, 063065 (2014).
F Nichele, H J Suominen, M Kjaergaard, C M Marcus, E Sajadi, J A Folk, F Qu, A J A Beukman, F K De Vries, J Van Veen, S Nadj-Perge, L P Kouwenhoven, B.-M Nguyen, A A Kiselev, W Yi, M Sokolich, M J Manfra, E M Spanton, K A Moler, 10.1088/1367-2630/18/8/083005Edge transport in the trivial phase of InAs/GaSb. 1883005F. Nichele, H. J. Suominen, M. Kjaergaard, C. M. Mar- cus, E. Sajadi, J. A. Folk, F. Qu, A. J. A. Beukman, F. K. de Vries, J. van Veen, S. Nadj-Perge, L. P. Kouwenhoven, B.-M. Nguyen, A. A. Kiselev, W. Yi, M. Sokolich, M. J. Manfra, E. M. Spanton, and K. A. Moler, Edge trans- port in the trivial phase of InAs/GaSb, New Journal of Physics 18, 083005 (2016).
Nonequilibrium kinetics of a disordered Luttinger liquid. D A Bagrets, I V Gornyi, D G Polyakov, 10.1103/PhysRevB.80.113403Phys. Rev. B. 80113403D. A. Bagrets, I. V. Gornyi, and D. G. Polyakov, Nonequilibrium kinetics of a disordered Luttinger liquid, Phys. Rev. B 80, 113403 (2009).
Glazman, Resistance of helical edges formed in a semiconductor heterostructure. J I Väyrynen, M Goldstein, Y Gefen, L I , 10.1103/PhysRevB.90.115309Phys. Rev. B. 90115309J. I. Väyrynen, M. Goldstein, Y. Gefen, and L. I. Glaz- man, Resistance of helical edges formed in a semiconduc- tor heterostructure, Phys. Rev. B 90, 115309 (2014).
Robust helical edge transport in quantum spin Hall quantum wells. R Skolasinski, D I Pikulin, J Alicea, M Wimmer, 10.1103/PhysRevB.98.201404Phys. Rev. B. 98201404R. Skolasinski, D. I. Pikulin, J. Alicea, and M. Wim- mer, Robust helical edge transport in quantum spin Hall quantum wells, Phys. Rev. B 98, 201404 (2018).
Coupled electron-hole transport. U Sivan, P M Solomon, H Shtrikman, 10.1103/PhysRevLett.68.1196Phys. Rev. Lett. 681196U. Sivan, P. M. Solomon, and H. Shtrikman, Coupled electron-hole transport, Phys. Rev. Lett. 68, 1196 (1992).
Separately contacted electronhole double layer in a GaAs/AlxGa1−xAs heterostructure. B E Kane, J P Eisenstein, W Wegscheider, L N Pfeiffer, K W West, 10.1063/1.112432Applied Physics Letters. 653266B. E. Kane, J. P. Eisenstein, W. Wegscheider, L. N. Pfeiffer, and K. W. West, Separately contacted electron- hole double layer in a GaAs/AlxGa1−xAs heterostruc- ture, Applied Physics Letters 65, 3266 (1994).
A simple lateral transport device of strongly interacting electron and hole layers. S Shapira, E H Linfield, M Pepper, 10.1063/1.123630Applied Physics Letters. 741603S. Shapira, E. H. Linfield, and M. Pepper, A simple lat- eral transport device of strongly interacting electron and hole layers, Applied Physics Letters 74, 1603 (1999).
Closely spaced and separately contacted two-dimensional electron and hole gases by in situ focused-ion implantation. M Pohlt, M Lynass, J G S Lok, W Dietsche, K V Klitzing, K Eberl, R Mühle, 10.1063/1.1463698Applied Physics Letters. 802105M. Pohlt, M. Lynass, J. G. S. Lok, W. Dietsche, K. v. Kl- itzing, K. Eberl, and R. Mühle, Closely spaced and sepa- rately contacted two-dimensional electron and hole gases by in situ focused-ion implantation, Applied Physics Let- ters 80, 2105 (2002).
Undoped electron-hole bilayers in a GaAs/AlGaAs double quantum well. J A Seamons, D R Tibbetts, J L Reno, M P Lilly, 10.1063/1.2437664Applied Physics Letters. 9052103J. A. Seamons, D. R. Tibbetts, J. L. Reno, and M. P. Lilly, Undoped electron-hole bilayers in a GaAs/AlGaAs double quantum well, Applied Physics Letters 90, 052103 (2007).
Interplay of quantum spin Hall effect and spontaneous time-reversal symmetry breaking in electron-hole bilayers II: Zero-field topological superconductivity. T Paul, V F Becerra, T Hyart, In preparationT. Paul, V. F. Becerra, and T. Hyart, Interplay of quan- tum spin Hall effect and spontaneous time-reversal sym- metry breaking in electron-hole bilayers II: Zero-field topological superconductivity, In preparation.
| []
|
[
"An Integrated Risk Assessment Process of Safety-Related Digital I&C Systems in Nuclear Power Plants An Integrated Risk Assessment Process of Safety-Related Digital I&C Systems in Nuclear Power Plants",
"An Integrated Risk Assessment Process of Safety-Related Digital I&C Systems in Nuclear Power Plants An Integrated Risk Assessment Process of Safety-Related Digital I&C Systems in Nuclear Power Plants"
]
| [
"Hongbin Zhang \nIdaho National Laboratory\nP.O. Box 16253860, 83415 IDIdaho FallsMSUnited States\n",
"Han Bao *[email protected] \nIdaho National Laboratory\nP.O. Box 16253860, 83415 IDIdaho FallsMSUnited States\n",
"Tate Shorthill \nUniversity of Pittsburgh\n3700 O'Hara Street15261PittsburghPennsylvania\n",
"Edward Quinn \nTechnology Resources\n#Current Address: Terrapower\n15800 Northup Way98008Dana Point, BellevueCA, WA\n"
]
| [
"Idaho National Laboratory\nP.O. Box 16253860, 83415 IDIdaho FallsMSUnited States",
"Idaho National Laboratory\nP.O. Box 16253860, 83415 IDIdaho FallsMSUnited States",
"University of Pittsburgh\n3700 O'Hara Street15261PittsburghPennsylvania",
"Technology Resources\n#Current Address: Terrapower\n15800 Northup Way98008Dana Point, BellevueCA, WA"
]
| []
| Upgrading the existing analog instrumentation and control (I&C) systems to state-of-theart digital I&C (DI&C) systems will greatly benefit existing light-water reactors (LWRs).However, the issue of software common cause failure (CCF) remains an obstacle in terms of qualification for digital technologies. Existing analyses of CCFs in I&C systems mainly focus on hardware failures. With the application and upgrading of new DI&C systems, design flaws could cause software CCFs to become a potential threat to plant safety, considering that most redundancy designs use similar digital platforms or software in their operating and application systems. With complex multi-layer redundancy designs to meet the single failure criterion, these I&C safety systems are of particular concern in U.S.Nuclear Regulatory Commission (NRC) licensing procedures. In Fiscal Year 2019, theRisk-Informed Systems Analysis (RISA) Pathway of the U.S. Department of Energy's (DOE's) Light Water Reactor Sustainability (LWRS) Program initiated a project todevelop a risk assessment strategy for delivering a strong technical basis to support effective, licensable, and secure DI&C technologies for digital upgrades and designs. An integrated risk assessment for the DI&C (IRADIC) process was proposed for this strategy to identify potential key digital-induced failures, implement reliability analyses of related digital safety I&C systems, and evaluate the unanalyzed sequences introduced by these failures (particularly software CCFs) at the plant level. This paper summarizes these RISA efforts in the risk analysis of safety-related DI&C systems at Idaho National Laboratory. | 10.1080/00295450.2022.2076486 | [
"https://arxiv.org/pdf/2112.09287v1.pdf"
]
| 245,329,809 | 2112.09287 | 629c63b9b95acbaa38d89e23babbe84857f479ee |
An Integrated Risk Assessment Process of Safety-Related Digital I&C Systems in Nuclear Power Plants An Integrated Risk Assessment Process of Safety-Related Digital I&C Systems in Nuclear Power Plants
Hongbin Zhang
Idaho National Laboratory
P.O. Box 16253860, 83415 IDIdaho FallsMSUnited States
Han Bao *[email protected]
Idaho National Laboratory
P.O. Box 16253860, 83415 IDIdaho FallsMSUnited States
Tate Shorthill
University of Pittsburgh
3700 O'Hara Street15261PittsburghPennsylvania
Edward Quinn
Technology Resources
#Current Address: Terrapower
15800 Northup Way98008Dana Point, BellevueCA, WA
An Integrated Risk Assessment Process of Safety-Related Digital I&C Systems in Nuclear Power Plants An Integrated Risk Assessment Process of Safety-Related Digital I&C Systems in Nuclear Power Plants
2DI&Crisk assessmentcommon cause failurehazard analysisreliability analysisconsequence analysis
Upgrading the existing analog instrumentation and control (I&C) systems to state-of-theart digital I&C (DI&C) systems will greatly benefit existing light-water reactors (LWRs).However, the issue of software common cause failure (CCF) remains an obstacle in terms of qualification for digital technologies. Existing analyses of CCFs in I&C systems mainly focus on hardware failures. With the application and upgrading of new DI&C systems, design flaws could cause software CCFs to become a potential threat to plant safety, considering that most redundancy designs use similar digital platforms or software in their operating and application systems. With complex multi-layer redundancy designs to meet the single failure criterion, these I&C safety systems are of particular concern in U.S.Nuclear Regulatory Commission (NRC) licensing procedures. In Fiscal Year 2019, theRisk-Informed Systems Analysis (RISA) Pathway of the U.S. Department of Energy's (DOE's) Light Water Reactor Sustainability (LWRS) Program initiated a project todevelop a risk assessment strategy for delivering a strong technical basis to support effective, licensable, and secure DI&C technologies for digital upgrades and designs. An integrated risk assessment for the DI&C (IRADIC) process was proposed for this strategy to identify potential key digital-induced failures, implement reliability analyses of related digital safety I&C systems, and evaluate the unanalyzed sequences introduced by these failures (particularly software CCFs) at the plant level. This paper summarizes these RISA efforts in the risk analysis of safety-related DI&C systems at Idaho National Laboratory.
I. INTRODUCTION
Digital upgrades and plant modernization efforts offer the foremost path to performance and cost improvements of nuclear power plants (NPPs) [1]. Despite decades of experience with analog systems, the technical challenges associated with their continued use (e.g., signal drift, high maintenance costs, obsolescence, and lack of industrial suppliers) have caused the nuclear industry to move toward digital instrumentation and control (DI&C) in favor of integrated circuitry and the modern microcontroller [3]. Compared with analog systems, DI&C systems offer significant advantages in the areas of monitoring, processing, testing, and maintenance [4] [5]. Notwithstanding the immediate attraction, the nuclear industry has been slow to adopt safety-rated DI&C because each new design must be shown to maintain or improve the status quo by means of a risk assessment [3]. Though many of the concepts for the risk assessment of analog systems carry over, DI&C systems present unique challenges. In 1997, the National Research Council detailed several technical challenges for the implementation of DI&C systems.
Those relating specifically to the present work are: (1) the system aspects of digital systems; (2) the potential for software-based common cause failures (CCFs); and (3) the need for a risk assessment method tailored to DI&C systems [3].
The system aspects of DI&C involve issues that extend beyond individual components and even beyond the function of the system itself. The challenge posed by these system aspects is discussed in NUREG/CR-6901. Digital systems exhibit two types of interactions. Type 1: the interactions of a DI&C system (and/or its components) with a controlled process (e.g., the NPP); and Type 2: the interactions of a DI&C system (and/or its components) with itself and/or other digital systems and components [6]. Kirschenbaum et al. provide a useful summary of these concerns in their own work on the investigation of digital systems [7]. Common or redundant components are often utilized as a backup to ensure system reliability. However, the improper application of redundant features can leave a system vulnerable to CCFs, which arise from the malfunction of two or more components, or functions, due to a single failure source [1] [8]. To make redundancy designs effective, diversity is employed, providing an alternative technology, method, technique, or means to achieve a desired result [9]. The diverse protection helps eliminate the common features necessary for a CCF. As early as 1995, the U.S. Nuclear Regulatory Commission (NRC) probabilistic risk assessment (PRA) policy statement expressed the view that the use of risk information in all regulatory activities would promote regulatory stability and efficiency to the extent supported by the state-of-the-art in PRA methods and data [10], while diversity and defense-in-depth (D3) analyses were mainly performed using deterministic approaches.
In Fiscal Year (FY) 2019, the Risk-Informed Systems Analysis (RISA) Pathway of the U.S. Department of Energy's (DOE's) Light Water Reactor Sustainability (LWRS) program initiated a project to develop a risk assessment strategy for delivering a strong technical basis to support effective, licensable, and secure DI&C technologies for digital upgrades/designs [11] [12] [13]. An integrated risk assessment for the DI&C (IRADIC) process was proposed for this strategy, which aims to identify key digital-induced failures, implement reliability analyses on related digital safety I&C systems, and evaluate the unanalyzed sequences introduced by these failures (particularly software CCFs) at the plant level. More details are included in Section II.
According to the guidelines and requirements of the IRADIC process, an approach for redundancy-guided systems-theoretic hazard analysis (RESHA) was developed in FY-2020. It aims to help system designers and engineers identify digital-based CCFs and qualitatively analyze their effects on digital system vulnerability. It also provides a technical basis for implementing future reliability and consequence analyses of unanalyzed sequences and optimizing the D3 applications in a cost-effective way. This approach has been developed and applied for the hazard analysis of a digital reactor trip system (RTS) and an engineered safety features actuation system (ESFAS). Relevant descriptions and case studies are given in Section III. A method for software reliability assessment of digital control systems with consideration for the quantification of CCFs is described in Section IV; it is defined as a Bayesian and HRA (human reliability analysis)-aided method for the reliability analysis of software (BAHAMAS).
Section V describes the efforts in consequence analysis that evaluate the impact of digital-based failures to the plant safety. Section VI summarizes the conclusion and future work on risk assessment of DI&C systems.
II. INTEGRATED RISK ASSESSMENT PROCESS FOR DI&C SYSTEMS
The overall goal of developing an integrated risk assessment approach is to deliver a strong technical basis to support effective, licensable, and secure technologies for DI&C upgrades/designs. To deal with the expensive licensing justifications arising from regulatory insights, this technical basis is intended to help nuclear vendors and utilities effectively lower the costs associated with digital compliance and speed industry advances by: (1) defining an integrated risk-informed analysis process for DI&C upgrades, including hazard analysis, reliability analysis, and consequence analysis; (2) applying systematic and risk-informed tools to identify CCFs and quantify corresponding failure probabilities for DI&C technologies; (3) evaluating the impact of digital failures at the component, system, and plant levels; and (4) providing insights and suggestions on designs to manage the risks, thereby supporting the development, licensing, and deployment of advanced DI&C technologies on NPPs.
It is critical for the viability of a nuclear power fleet to upgrade DI&C (i.e., safety- and non-safety-related) systems in existing NPPs in a cost-effective and regulatorily acceptable way. One key outcome of this project is to perform a plant-specific risk assessment to provide sustainable scientific support that enables industry to balance digital-related risk and cost.
The IRADIC technology consists of two parts: risk analysis and risk evaluation. Risk analysis, including hazard analysis, reliability analysis, and consequence analysis, focuses on identifying potential failures of digital systems and components, estimating probabilities, and analyzing relevant consequences. Risk evaluation compares risk analysis results with specific risk acceptance criteria at the component, system, and plant levels. Figure 1 displays the schematic of the IRADIC technology for safety evaluation and design optimization of DI&C systems. More details about the workflows and information flows of the hazard, reliability, and consequence analyses can be found in Sections III, IV, and V. The IRADIC technology was also suggested to deal with the software risk analysis for digital twins in nearly autonomous management and control systems [14].
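As a concrete illustration of the risk-evaluation half of this process, the short sketch below shows how computed risk metrics at the three levels might be screened against acceptance criteria. The criteria values, names, and numbers are hypothetical and are not taken from the IRADIC documentation.

    # Illustrative only: acceptance-criteria screening in the spirit of the
    # IRADIC risk-evaluation step. All numbers and names are hypothetical.
    ACCEPTANCE_CRITERIA = {
        "component": 1e-3,   # max acceptable failure probability per demand
        "system":    1e-5,   # max acceptable system failure probability
        "plant_cdf": 1e-4,   # max acceptable core damage frequency (/reactor-year)
    }

    def evaluate(level: str, computed_value: float) -> bool:
        """Return True if the computed risk metric meets the acceptance criterion."""
        return computed_value <= ACCEPTANCE_CRITERIA[level]

    results = {
        "component": 1.554e-4,   # e.g., an individual software failure probability
        "system":    1.270e-6,   # e.g., an RTS failure probability
        "plant_cdf": 6.418e-7,   # e.g., a sequence-group CDF
    }

    for level, value in results.items():
        status = "meets" if evaluate(level, value) else "exceeds"
        print(f"{level}: {value:.3e} {status} criterion {ACCEPTANCE_CRITERIA[level]:.1e}")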
III. REDUNDANCY-GUIDED SYSTEM-THEORETIC HAZARD ANALYSIS
In the IRADIC framework, a method for hazard analysis, RESHA [15] [16], was developed by combining fault tree analysis (FTA) with a reframed, redundancy-guided application of systems-theoretic process analysis (STPA) [17]. An integrated fault tree (FT) can be generated with both software failures and hardware failures and used to discover single points of failure (SPOFs) leading to the loss of function of the entire DI&C system. A SPOF refers to a situation in which a single part of a system fails and the entire system loses function as a result. The proposed RESHA approach is illustrated in Figure 2. RESHA, its steps, and the role of STPA within this hazard analysis are briefly described in the subsequent paragraphs. Figure 2. Workflow of the proposed RESHA approach (derived from [15] and [16]). The dashed lines indicate the influence of FTA, STPA [17], and HAZCADS [18].
RESHA employs a reframed STPA to identify software-based failure events to be used within an integrated FT. STPA was designed as a systems-focused, top-down approach to modeling [17]. The goal of STPA is to assess a system for unsafe behavior and identify scenarios that link that behavior to system-level hazards and losses. The system-level hazards are those system states or conditions that may lead to a loss (i.e., something of value to stakeholders) [17].
STPA consists of four main parts. The first three parts emphasize the identification of undesirable or unsafe control actions (UCAs), and the last part determines the context or scenario for which a UCA might occur. There are four categories of UCAs in STPA: (1) a control action is not provided when it is needed; (2) a control action is provided when it is not needed; (3) a control action is provided when it is needed but too early, too late, or in the wrong order; (4) a control action lasts too long or stops too soon (only applicable to continuous control actions) [17]. In a digital system, unsafe or undesirable control actions and information exchanges may lead to a failure of the digital system; hence, UCAs and their causes are selected as potential software failures.
RESHA relies on STPA concepts to support a redundancy-guided hazard analysis.
The first step of RESHA, like most methods for hazard analysis, is focused on information gathering; this might require the creation of diagrams and sketches. The essential aspect of this step is to assemble the necessary information for the remaining steps.
Step 2 begins the formation of a FT that serves as the backbone for risk assessment within IRADIC. In this step, the structure of the FT is created based on the hardware components of the system of interest. While not required, the FT is often linked to an event tree (ET) as part of an event tree analysis (ETA); that link is the FT's top-event [1]. Proper selection of the top event is a vital aspect of both FTA and ETA. Here, STPA can be leveraged to inform decisions regarding the selection of FT top events and their relationships to an ET. The STPA-identified system-level hazards and losses can provide clarification for what the RESHA integrated FT will look like; system-level hazards may serve as FT top events that integrate with an ET for tracking systemlevel losses. FT top events aid in selecting credible UCA from those identified in Step 3.
The goal of Step 3 is to identify UCAs as potential software failures by means of a redundancy-guided application of STPA. UCAs, and their causes, are selected as the potential software failures to be included within the FT from Step 2. In order to find UCAs, STPA relies on a control structure that details the controllers and processors of the system. STPA does not explicitly model safety features such as redundant components in the control structure; these details are left to be addressed in context scenario discussions during the final part of STPA [17].
Failure to provide explicit incorporation of the safety features early and within the control structure diagram may cause the potential CCFs in redundant designs to be overlooked. Thus, STPA is reframed according to safety features (e.g., redundant and diverse designs) directly and early by explicitly modeling them within the control structure diagrams.
A redundancy-guided multi-layer control structure is formed by decomposing the system based on redundancy. A top-down approach identifies functional redundancy within the system and creates a control structure for the components and modules pertaining to that layer. The process is repeated systematically and incorporating the information exchanges found for each component within each redundancy layer. The result is a multi-layer control structure that captures all the necessary control actions of the system and its components. Finally, UCAs are identified from the control signals indicated in the control structure diagram.
Step 4 combines the FT with UCAs from STPA, a concept borrowed from the hazard and consequence analysis for digital systems (HAZCADS) [18]. In this step, applicable UCAs are selected and added into the hardware FT as the software failures. For a specific top event in the FT, some UCAs may be inapplicable. For example, the UCA of a component associated with UCA category 1 (i.e., "action not provided") may not be applicable for a top event that represents spurious activation. Applicable UCAs are added to the FT as potential software failure events.
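The applicability screening described in this step can be pictured with a small sketch (ours, purely illustrative; the component names and the category-to-top-event mapping are assumptions, not taken from HAZCADS or RESHA):

    # Hypothetical sketch of the Step 4 screening logic: keep only the UCA
    # categories that are consistent with the chosen fault tree top event.
    # Category numbers follow the four STPA UCA types quoted in the text.
    UCAS = [
        {"component": "bistable processor", "category": 1},  # action not provided
        {"component": "bistable processor", "category": 2},  # action provided when not needed
        {"component": "logic processor",    "category": 3},  # provided too early/late
    ]

    # Assumed mapping (illustrative): which UCA categories can contribute to
    # which kind of top event.
    APPLICABLE = {
        "failure_to_actuate": {1, 3},   # missing or late action
        "spurious_actuation": {2},      # unwanted action
    }

    def applicable_ucas(top_event: str):
        return [u for u in UCAS if u["category"] in APPLICABLE[top_event]]

    print(applicable_ucas("failure_to_actuate"))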
Step 5 provides consideration of CCFs. Any group of identical components may be susceptible to a CCF that falls into any of the categories of UCAs. These groups of components are called common cause component groups (CCCGs). Basic events that represent CCFs are also added based on applicable UCA categories. In some instances, the redundancy layers of the multi-layered control structure may distinguish CCCGs and provide indication for which CCF basic events should be added. For example, the CCF of a CCCG associated with a particular redundant division from a four-division digital system. After adding UCAs and CCFs to the FT, the next step performs a qualitative evaluation of the integrated FT.
Step 6 provides the main outcome of the hazard analysis; the minimal cut sets of the integrated FT are evaluated to determine potential critical points of failure. The critical points of failure are the low-order cut sets (i.e., those cut sets with few basic events). Of particular interest are the SPOFs. Identification of the basic events that make up the low-order cut sets provides a starting place for potential design improvements and for reliability analysis.
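To make the cut-set evaluation of Step 6 concrete, the following minimal sketch (ours, not the RESHA or SAPHIRE tooling) expands a toy fault tree into its minimal cut sets and flags the order-one cut sets, i.e., the single points of failure. The gate structure and event names are invented for illustration.

    # Minimal sketch of Step 6: expand a small fault tree into cut sets and
    # flag the single points of failure (order-1 cut sets).
    from itertools import product

    # Toy tree: TOP = (A AND B) OR CCF_SW, where CCF_SW is a software CCF event.
    GATES = {
        "TOP":  ("OR",  ["AND1", "CCF_SW"]),
        "AND1": ("AND", ["A", "B"]),
    }

    def cut_sets(node):
        """Return the cut sets (as frozensets of basic events) for a node."""
        if node not in GATES:                      # basic event
            return [frozenset([node])]
        kind, children = GATES[node]
        child_sets = [cut_sets(c) for c in children]
        if kind == "OR":
            return [cs for sets in child_sets for cs in sets]
        # AND gate: combine one cut set from each child
        return [frozenset().union(*combo) for combo in product(*child_sets)]

    def minimal(sets):
        return [s for s in sets if not any(t < s for t in sets)]

    mcs = minimal(cut_sets("TOP"))
    spofs = [s for s in mcs if len(s) == 1]
    print("minimal cut sets:", mcs)
    print("single points of failure:", spofs)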
The purpose of Step 7 is to identify and provide guidance to eliminate latent faults or triggers of CCFs and other critical failures or UCAs identified in Step 6. The STPA Handbook [17] indicates that the causes of UCAs can be grouped into two categories: (1) unsafe controller behaviors and (2) inadequate feedback and/or other inputs. STPA provides guidance for identification of these categories based on how controllers process information and act.
Currently, RESHA has been demonstrated for the hazard analysis of a four-division digital RTS [15] and ESFAS [16]. The designs for the ESFAS and RTS in those works have similar structures to state-of-the-art digital systems in existing NPP designs such as the APR-1400 [19]. Portions of FT for RTS failure with software failures are displayed in Figure 3, Figure 4, Figure 5, and Figure 6. More details can be found in [15].
IV. INTEGRATED RELIABILITY ANALYSIS
The reliability analysis in IRADIC consists of (1) quantification of the basic events of the integrated FT that is built up using RESHA and (2) estimation of top event of the integrated FT.
In this work, the BAHAMAS method is applied to quantify software failure probability;
hardware failure probabilities are collected from previous publications [20]. The quantification of integrated FT is performed using the Idaho National Laboratory (INL)-developed PRA tool Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) [21].
IV.A. Reliability Analysis of Software of a Four-division Digital Reactor Trip System
BAHAMAS was developed by INL for reliability estimations of software in the early development stage [22]. It can provide a rough estimation of software failure probabilities even when testing/performance data of the target software are very limited. Under these conditions, BAHAMAS assumes that software failures are rooted in human errors made during the software development life cycle (SDLC) and can be modeled and roughly estimated using HRA. In BAHAMAS, a
Bayesian belief network (BBN) is constructed to integrate disparate causal factors of the system in a logic way. More technical information about BAHAMAS method can be found in [22].
The BAHAMAS workflow is briefly introduced in this section, where each of the main methods mentioned in the approach is incorporated for the reliability analysis of a software system. As discussed in Section I, the risk assessment of digital systems has been divided into three phases. Phase 2 provides quantification for the results found in Phase 1. Although it is the intention for BAHAMAS to be flexible in Phase 2, much of its formulation is based on the results of a RESHA-based Phase 1 hazard analysis; consequently, the approach to Phase 2 is tailored best to hazards identified by RESHA. The BAHAMAS workflow and information flow are shown in Figure 7 and Figure 8, respectively.
Step 1 of BAHAMAS is to select a software failure of interest that needs to be quantified from a qualitative study (e.g., RESHA). For example, it can be an individual failure or a CCF of a bistable processor of the four-division RTS that was identified in the RESHA study.
Step 2 collects information regarding the event of interest and identifies potential causes of the failure of interest. For the UCA of bistable processors, the root causes may be software inner defects or data communication errors due to environmental hazards or human errors.
Step 3 builds up a BBN to organize the potential causes that were identified in Step 2 so that an acyclic (i.e., without feedback) graphical network representing the relationships of interest can be used for quantification process. In this case, this refers to the relationships between root causes and probability of software failure.
Step 4 determines the fault parameter by estimating the root node probabilities and generic software failure probability.
Step 5 determines the probability for failure of interest by estimating specific software failure probability and evaluating single and CCF probability.
Step 6 conducts CCF modeling and estimation using the beta-factor method [23], which assumes the total failure probability of a component is the sum of the individual and the CCF probabilities. A beta-factor is estimated based on existing data to represent the proportions of the individual failures and the CCFs in the total failure probability. A case study has been performed in [22]; for the software failure of bistable processors of the four-division RTS, the individual failure probability is 1.554E-4, the probability of the CCF of bistable processor in all divisions is 8.494E-6, and the probability of the CCF of bistable processor in one division is 2.320E-5.
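A minimal numerical sketch of the beta-factor split described above follows; only the idea that the total failure probability is divided into an independent part and a common cause part comes from the text, while the beta value and the total probability used here are made up for illustration.

    # Beta-factor sketch: Q_total is split into an independent failure part
    # and a common cause part. The beta value here is hypothetical.
    def beta_factor_split(q_total: float, beta: float):
        """Return (independent failure probability, CCF probability)."""
        q_ccf = beta * q_total
        q_ind = (1.0 - beta) * q_total
        return q_ind, q_ccf

    q_ind, q_ccf = beta_factor_split(q_total=1.64e-4, beta=0.05)
    print(f"independent: {q_ind:.3e}, common cause: {q_ccf:.3e}")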
IV.B. Quantification of the Integrated Fault Tree of a Four-division Digital Reactor
Trip System
By assigning the software and hardware failure probabilities to the integrated FT of the four-division digital RTS, the failure probability of the RTS can be calculated using the INL-developed PRA tool SAPHIRE. The RTS failure probability is 1.270E-6. Mechanical CCF of the rod control cluster assembly (RCCA) is the main contributor to the failure of the representative four-division digital RTS; the software CCFs do not have a significant impact on the failure of the digital RTS because of the highly redundant design and the high reliability of the digital components.

Figure 1. Schematic of the IRADIC technology for safety evaluation and design optimization of DI&C systems.
Figure 3. Main FT of the integrated RTS-FT using the IRADIC technology.
Figure 4. Transfer event of "Failure of A1 Breaker" of the integrated RTS-FT.
Figure 5. Transfer event of "DA LC R1 LCL Processor-1 fails to send trip signal to DOM-1" of the integrated RTS-FT.
Figure 6. Transfer event of "DA BP1 fails to send signal to DA LCL Cabinets" of the integrated RTS-FT.
Figure 7. BAHAMAS workflow.
Figure 8. Flowchart showing the primary inputs and outputs of each step of BAHAMAS (derived from [22]).

V. CONSEQUENCE ANALYSIS

This section describes the consequence analysis of a generic pressurized-water reactor (PWR) SAPHIRE model with the integrated FT for a four-division digital RTS. The model was developed using SAPHIRE 8 for a typical PWR plant for accident scenario analysis, with an original FT for a two-division analog RTS. The core damage frequency (CDF) has been calculated for the ET model with the different RTS-FTs and compared to show how much safety margin can be gained by introducing the modern four-division digital RTS. The original two-train analog RTS was modeled with different failure modes, such as electrical failures, CCF of RCCAs failing to drop, contributions from seismic events, operator errors, and RTS failures during test and maintenance. The FT was quantified using SAPHIRE 8, and the RTS failure probability is 4.288E-6. This shows that the failure probability of the integrated four-division digital RTS-FT is only about 50% of the original one. In this paper, an accident scenario for INT-TRANS (initiating event: general plant transient) is selected for the consequence analysis of a four-division digital RTS failure; the ET model is shown in Figure 9 and Figure 10. Table 1 compares the values of CDF with the original and new RTS-FTs. The original total INT-TRANS CDF is 1.073E-6/reactor year and is greatly reduced to 6.418E-7/reactor year with the new RTS-FTs. There are 16 non-zero CDF sequences out of a total of 145 INT-TRANS accident sequences (i.e., the sequence end state is core damage).
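The sequence contributions discussed in the next paragraphs follow directly from these CDF values; a quick arithmetic check (ours), using only numbers stated in the text:

    # Quick check of the sequence contributions quoted below, using only the
    # CDF values stated in the text.
    total_cdf_new = 6.418e-7          # improved INT-TRANS CDF, per reactor-year

    sequences = {
        "INT-TRANS:21-16": 1.596e-7,  # CDF with the four-division digital RTS
        "INT-TRANS:21-14": 2.150e-8,
    }

    for name, cdf in sequences.items():
        print(f"{name}: {100 * cdf / total_cdf_new:.2f}% of the improved INT-TRANS CDF")
    # Prints approximately 24.87% and 3.35%, matching the values quoted below.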
INT-TRANS:21-16 from ATWS (anticipated transient without scram) scenarios is one of the most risk-significant sequences, with a CDF reduced from 5.388E-7/reactor year to 1.596E-7/reactor year, contributing 24.87% of the CDF of the improved INT-TRANS. In this sequence, the RTS fails to trip the reactor; primary and secondary side depressurizations are not successful because the safety relief valves are closed. Core damage occurs as long-term cooling cannot be established.

INT-TRANS:21-14 from ATWS scenarios is another risk-significant sequence, with a CDF reduced from 7.262E-8/reactor year to 2.150E-8/reactor year, contributing 3.35% of the CDF of the improved INT-TRANS. In this sequence, the RTS fails to trip the reactor; the reactor cooling system fails to limit the pressure under 3200 psi; main feedwater is unavailable and emergency boration fails. Core damage occurs as long-term cooling cannot be established.

Results show that by introducing a four-division digital RTS instead of the two-division analog RTS, the safety margin gained from plant digitalization of a safety-related DI&C system can be quantitatively estimated: the CDF is significantly reduced. Plant modernization, including the improvement of safety-related DI&C systems such as the RTS, will benefit plant safety by providing more safety margin.

In addition, the number of cut sets is also reduced, from 3590 to 3474, due to the improved design from a two-train analog system to a four-division digital system. As the complexity of the system increases, the number of failure combinations should also increase. However, with the improved design, the cut-set probabilities are reduced and truncated below the 1E-12 threshold.

VI. CONCLUSIONS AND FUTURE WORK

This paper summarized the development of an integrated risk assessment technology for highly redundant safety-related DI&C systems in NPPs. By integrating hazard analysis, reliability analysis, and consequence analysis, the risk assessment strategy aims to: (1) help system designers and engineers systematically address digital-based CCFs and quantitatively analyze their effects on digital system vulnerability and key plant responses; (2) improve existing PRA models for the industry by identifying and evaluating the risk associated with DI&C technologies; and (3) provide risk insights to address the licensing challenges facing DI&C upgrades. Results show that by adding the integrated FT of the four-division digital RTS instead of the two-division analog RTS, the safety margin gained from plant digitalization of a safety-related DI&C system can be quantitatively estimated; the CDF is significantly reduced. This indicates that plant modernization, including the improvement of safety-related DI&C systems such as the RTS, will benefit plant safety by providing more safety margin.

One area for future research under LWRS-RISA is the risk analysis of the Human System Interface (HSI) in DI&C modernization of existing NPPs. The HSI is one of the key advanced design features applied in modern DI&C systems of NPPs. Normally, it is designed around a compact workstation-based system in the control room. The compact workstation provides a convenient operating environment to facilitate the display of plant status information to the operator, so that operability is enhanced by using advanced display, alarm, and procedure systems. The HSI should have sufficient diversity to demonstrate D3 protection against CCF of the safety system. However, the vulnerability of the HSI is affected by many factors, such as human errors, cyber-attacks, and software CCFs.
Therefore, one of the future works aims to identify, evaluate, and reduce these system vulnerabilities to support the licensing, deployment, and operation of the HSI designs. Relevant research results will be published soon. Another future work is uncertainty quantification and verification of RESHA and BAHAMAS.

VII. ACKNOWLEDGMENTS

This submitted manuscript was authored by a contractor of the U.S. Government under DOE Contract No. DE-AC07-05ID14517. Accordingly, the U.S. Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for U.S. Government purposes. This information was prepared as an account of work sponsored by an agency of the U.S. Government. Neither the U.S. Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. References herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, do not necessarily constitute or imply its endorsement, recommendation, or favoring by the U.S. Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the U.S. Government or any agency thereof.
M. STAMATELATOS, W. VESELY, J. DUGAN, J. FRAGOLA, J. MINARICK III, and J. RAILSBACK, "Fault Tree Handbook with Aerospace Applications," Version 1.1, National Aeronautics and Space Administration, Washington, DC (2002).
Strategy for Implementation of Safety-Related Digital I&C Systems. K Thomas And K, Scarola, INL/EXT-18-45683Idaho National LaboratoryK. THOMAS AND K. SCAROLA, "Strategy for Implementation of Safety-Related Digital I&C Systems," INL/EXT-18-45683, Idaho National Laboratory (June 2018).
Digital Instrumentation and Control Systems in Nuclear Power Plants: Safety and Reliability Issues. The National Academies PressWashington, DCNational Research CouncilNational Research Council, Digital Instrumentation and Control Systems in Nuclear Power Plants: Safety and Reliability Issues, Washington, DC: The National Academies Press (1997).
Nuclear Power Plant Instrumentation and Control. H Hashemian, Nuclear Power -Control, Reliability and Human Factors. P. TsvetkovIntechH. HASHEMIAN, "Nuclear Power Plant Instrumentation and Control," in Nuclear Power -Control, Reliability and Human Factors, P. Tsvetkov, Ed., pp. 49-66, Intech (2011);
. 10.5772/18768https://doi.org/10.5772/18768.
T.-L Chu, M Yue, G Martinez-Guridi, J Lehner, BNL-94047-2010Review of Quantitative Software Reliability Methods. Brookhaven National LaboratoryT.-L. CHU, M. YUE, G. MARTINEZ-GURIDI, and J. LEHNER, "Review of Quantitative Software Reliability Methods," BNL-94047-2010, Brookhaven National Laboratory (September 2010);
. 10.2172/1013511https://doi.org/10.2172/1013511.
Current State of Reliability Modeling Methodologies for Digital Systems and Their Acceptance Criteria for Nuclear Power Plant Assessments. T Aldemir, D Miller, M Stovsky, J Kirschenbaum, P Bucci, A Fentiman, L Mangan, NUREG/CR-6901, U.S. Nuclear Regulatory CommissionT. ALDEMIR, D. MILLER, M. STOVSKY, J. KIRSCHENBAUM, P. BUCCI, A. FENTIMAN, and L. MANGAN, "Current State of Reliability Modeling Methodologies for Digital Systems and Their Acceptance Criteria for Nuclear Power Plant Assessments," NUREG/CR-6901, U.S. Nuclear Regulatory Commission (February 2006).
J. KIRSCHENBAUM, P. BUCCI, M. STOVSKY, D. MANDELLI, T. ALDEMIR, M. YAU, S. GUARRO, E. EKICI, and S. A. ARNDT, "A Benchmark System for Comparing Reliability Modeling Approaches for Digital Instrumentation and Control Systems," Nuclear Technology, 165, 1, 53 (2009); https://doi.org/10.13182/NT09-A4062.
Common-Cause Failure Databased and Analysis System: Event Data Collection, Classification, and Coding. T E Wierman, D M Rasmuson, A Mosleh, Rev. 1, Idaho National LaboratoryNUREG/CR-6268T. E. WIERMAN, D. M. RASMUSON, and A. MOSLEH, "Common-Cause Failure Databased and Analysis System: Event Data Collection, Classification, and Coding," NUREG/CR-6268, Rev. 1, Idaho National Laboratory, (September 2007).
Nuclear Regulatory Commission, A Defense-In-Depth and Diversity Assessment of the RESAR-414 Integrated Protection System. U S , Nuclear Regulatory Commission. U.S. Nuclear Regulatory Commission, A Defense-In-Depth and Diversity Assessment of the RESAR-414 Integrated Protection System, U.S. Nuclear Regulatory Commission, Washington, DC (1979).
Use of Probabilistic Risk Assessment Methods in Nuclear Regulatory Activities. U S Nrc, U.S. NRCU.S. NRC, " Use of Probabilistic Risk Assessment Methods in Nuclear Regulatory Activities," 95-20237, U.S. NRC, (1995).
An Integrated Risk Assessment Process for Digital Instrumentation and Control Upgrades of Nuclear Power Plants. H Bao, H Zhang, K Thomas, INL/EXT-19- 55219Idaho National LaboratoryH. BAO, H. ZHANG, and K. THOMAS, "An Integrated Risk Assessment Process for Digital Instrumentation and Control Upgrades of Nuclear Power Plants," INL/EXT-19- 55219, Idaho National Laboratory, (August 2019).
Redundancy-guided System-theoretic Hazard and Reliability Analysis of Safety-related Digital Instrumentation and Control Systems in Nuclear Power Plants. H Bao, T Shorthill, H Zhang, INL/EXT-20-59550Idaho National LaboratoryH. BAO, T. SHORTHILL, and H. ZHANG, "Redundancy-guided System-theoretic Hazard and Reliability Analysis of Safety-related Digital Instrumentation and Control Systems in Nuclear Power Plants," INL/EXT-20-59550, Idaho National Laboratory (August 2020).
Quantitative Risk Analysis of High Safety-significant Safety-related Digital Instrumentation and Control Systems in Nuclear Power Plants using IRADIC Technology. H Bao, T Shorthill, E Chen, H Zhang, INL/EXT-21-64039Idaho National LaboratoryH. BAO, T. SHORTHILL, E. CHEN, and H. ZHANG, "Quantitative Risk Analysis of High Safety-significant Safety-related Digital Instrumentation and Control Systems in Nuclear Power Plants using IRADIC Technology," INL/EXT-21-64039, Idaho National Laboratory (August 2021).
Uncertainty quantification and software risk analysis for digital twins in the nearly autonomous management and control systems: A review. L Lin, H Bao, N Dinh, Annals of Nuclear Energy. 160108362L. LIN, H. BAO, and N. DINH, "Uncertainty quantification and software risk analysis for digital twins in the nearly autonomous management and control systems: A review," Annals of Nuclear Energy, 160, 108362 (2021);
. 10.1016/j.anucene.2021.108362https://doi.org/10.1016/j.anucene.2021.108362.
T. SHORTHILL, H. BAO, H. ZHANG, and H. BAN, "A Redundancy-Guided Approach for the Hazard Analysis of Digital Instrumentation and Control Systems in Advanced Nuclear Power Plants," Nuclear Technology (2021); https://doi.org/10.1080/00295450.2021.1957659.
Hazard Analysis for Identifying Common Cause Failures of Digital Safety Systems using a Redundancy-Guided Systems-Theoretic Approach. H Bao, T Shorthill, H Zhang, Annals of Nuclear Energy. 148107686H. BAO, T. SHORTHILL, and H. ZHANG, "Hazard Analysis for Identifying Common Cause Failures of Digital Safety Systems using a Redundancy-Guided Systems-Theoretic Approach," Annals of Nuclear Energy, 148, 107686 (2020).
N. G. LEVESON and J. P. THOMAS, STPA Handbook (2018).
Hazard and Consequence Analysis for Digital Systems -A New Approach to Risk Analysis in the Digital Era for Nuclear Power Plants. J Clark, A D Williams, A Muna, M Gibson, Transactions of the American Nuclear Society. 119J. CLARK, A. D. WILLIAMS, A. MUNA, and M. GIBSON, "Hazard and Consequence Analysis for Digital Systems -A New Approach to Risk Analysis in the Digital Era for Nuclear Power Plants," Transactions of the American Nuclear Society, 119, 1, 888, (2018).
APR1400 Design Control Document Tier 2. Korea Electric Power Corporation. Korea Hydro & Nuclear Power Co., LtdInstrumentation and Controls"APR1400 Design Control Document Tier 2. Chapter 7: Instrumentation and Controls," Korea Electric Power Corporation, Korea Hydro & Nuclear Power Co., Ltd, Korea, Republic of (2018).
Reliability analysis of protection system of advanced pressurized water reactor -APR 1400. P V Varde, J G Choi, D Y Lee, J B Han, P. V. VARDE, J. G. CHOI, D. Y. LEE, and J. B. HAN, "Reliability analysis of protection system of advanced pressurized water reactor -APR 1400," KAERI/TR--2468/2003.
Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Version 8.0. U S , NUREG/CR-7039, U.S. Nuclear Regulatory CommissionNuclear Regulatory CommissionU.S. Nuclear Regulatory Commission, "Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Version 8.0," NUREG/CR-7039, U.S. Nuclear Regulatory Commission (June 2011).
A novel approach for software reliability analysis of digital instrumentation and control systems in nuclear power plants. T Shorthill, H Bao, Z Hongbin, H Ban, Annals of Nuclear Energy. 158108260T. SHORTHILL, H. BAO, Z. HONGBIN, and H. BAN, "A novel approach for software reliability analysis of digital instrumentation and control systems in nuclear power plants," Annals of Nuclear Energy, 158, 108260 (2021);
. 10.1016/j.anucene.2021.108260https://doi.org/10.1016/j.anucene.2021.108260.
A new method for explicitly modelling of single failure event within different common cause failure groups. D Kancev, M Cepin, Reliability Engineering and System Safety. 103D. KANCEV and M. CEPIN, "A new method for explicitly modelling of single failure event within different common cause failure groups," Reliability Engineering and System Safety, 103, 84-93, (2012).
| []
|
[
"KILLING VECTOR FIELDS ON RIEMANNIAN AND LORENTZIAN 3-MANIFOLDS",
"KILLING VECTOR FIELDS ON RIEMANNIAN AND LORENTZIAN 3-MANIFOLDS"
]
| [
"Amir Babak ",
"Robert Ream "
]
| []
| []
| We give a complete local classification of all Riemannian 3-manifolds (M, g) admitting a nonvanishing Killing vector field T . We then extend this classification to timelike Killing vector fields on Lorentzian 3-manifolds, which are automatically nonvanishing. The two key ingredients needed in our classification are the scalar curvature S of g and the function Ric(T, T ), where Ric is the Ricci tensor; in fact their sum appears as the Gaussian curvature of the quotient metric obtained from the action of T . Our classification generalizes that of Sasakian structures, which is the special case when Ric(T, T ) = 2. We also give necessary, and separately, sufficient conditions, both expressed in terms of Ric(T, T ), for g to be locally conformally flat. We then move from the local to the global setting, and prove two results: in the event that T has unit length and the coordinates derived in our classification are globally defined on R 3 , we show that the sum S + Ric(T, T ) completely determines when the metric will be geodesically complete. In the event that the 3-manifold M is compact, we give a condition stating when it admits a metric of constant positive sectional curvature. | 10.1002/mana.202000576 | [
"https://arxiv.org/pdf/2011.01144v2.pdf"
]
| 226,236,847 | 2011.01144 | e4c84d10d994fed58645b9907f15cdb8a818aeac |
KILLING VECTOR FIELDS ON RIEMANNIAN AND LORENTZIAN 3-MANIFOLDS
6 Jun 2021
Amir Babak
Robert Ream
KILLING VECTOR FIELDS ON RIEMANNIAN AND LORENTZIAN 3-MANIFOLDS
6 Jun 2021
We give a complete local classification of all Riemannian 3-manifolds (M, g) admitting a nonvanishing Killing vector field T . We then extend this classification to timelike Killing vector fields on Lorentzian 3-manifolds, which are automatically nonvanishing. The two key ingredients needed in our classification are the scalar curvature S of g and the function Ric(T, T ), where Ric is the Ricci tensor; in fact their sum appears as the Gaussian curvature of the quotient metric obtained from the action of T . Our classification generalizes that of Sasakian structures, which is the special case when Ric(T, T ) = 2. We also give necessary, and separately, sufficient conditions, both expressed in terms of Ric(T, T ), for g to be locally conformally flat. We then move from the local to the global setting, and prove two results: in the event that T has unit length and the coordinates derived in our classification are globally defined on R 3 , we show that the sum S + Ric(T, T ) completely determines when the metric will be geodesically complete. In the event that the 3-manifold M is compact, we give a condition stating when it admits a metric of constant positive sectional curvature.
Introduction
The aim of this paper is to give a complete local classification of all Riemannian 3-manifolds (M, g) that admit a nonvanishing Killing vector field T. In fact this classification will also yield a related one: that of all Lorentzian 3-manifolds supporting a timelike Killing vector field. Our classification proceeds by considering the special case when T has constant length: the general case follows from this one by applying a conformal change by the factor of g(T, T). But in fact there are important reasons for imposing this condition. One of them is that constant length allows us to adapt the machinery of the Newman-Penrose formalism [NP62] - a construct that originated in 4-dimensional Lorentzian geometry - to the setting of 3-dimensional Riemannian geometry; see also [SW14, NTC15, BS18], wherein similar frame techniques have been applied in dimension 3. As shown in Section 2 below, constant length is a prerequisite for this formalism. But more importantly, our interest in constant length arises from what we regard as the "canonical" constant length Killing vector field in dimension 3: the unit length Killing vector field T on (S 3 , g) tangent to the Hopf fibration, where g is the standard (round) metric. Given the special geometry of (S 3 , g) as a spherical space form, and the presence of such a vector field on it, we take our motivation from the following questions:
1. Can one classify locally all Riemannian 3-manifolds admitting a constant length Killing vector field? Do they take on a "canonical" form?
2. What is the relationship between the existence of constant length Killing vector fields on the one hand, and metrics of constant positive curvature on the other?
3. If a Riemannian 3-manifold admits a constant length Killing vector field, then when will it be locally conformally flat, as with (S 3 , g)?
(Yet another path of inquiry, which we do not pursue here, would be to examine when the circle action provided by a constant length Killing vector field is free, and the role that sectional curvature plays in this; see [BN08].)
A complete answer to our first question above is provided in our first Theorem:
Theorem 1. Let (M, g) be a Riemannian 3-manifold that admits a unit length Killing vector field T . Then there exist local coordinates (t, r, θ) and a smooth function ϕ(r, θ) such that
T = \partial_t, \qquad g = (T^\flat)^2 + dr^2 + \varphi^2\, d\theta^2, \qquad (1)
and where the quotient metric dr 2 + ϕ 2 dθ 2 has Gaussian curvature
-\frac{\varphi_{rr}}{\varphi} = \frac{1}{2}\bigl(S + \mathrm{Ric}(T,T)\bigr), \qquad (2)
with S and Ric the scalar curvature and Ricci tensor of g, respectively. If The following remarks help to shed light on this result: i. After our first preprint appeared, we learned of the works [Man14,LM17], in which the existence of coordinates isometric to (1) are proved, as well as a result that includes (2) as a special case; these were obtained via a different method than ours, and applied to the classification of Riemannian submersions from 3-manifolds to a surface, whose fibers are the integral curves of a Killing vector field. ii. An almost identical Theorem exists for unit timelike Killing vector fields on Lorentzian 3-manifolds; see Corollary 1 in Section 6 below, wherein the relevant Lorentzian terminology is also defined. iii. If T does not have unit length, then (1) is scaled by g(T, T ). iv. The "canonical form" alluded to above is manifested in (1) and (2); for the form of the metric in the coordinate basis {∂ t , ∂ r , ∂ θ }, see (37) in Section 5 below. As Theorem 1 makes clear, our classification depends entirely on two functions, the scalar curvature S and Ric(T, T ). Let us say more about the latter function, which is especially important; one way to appreciate its significance when dim M = 3 is as follows. If a vector field T has constant length and geodesic flow (as does any constant length Killing vector field), then the function Ric(T, T ), if nonnegative, completely governs whether its orthogonal complement T ⊥ ⊆ T M is integrable. As a consequence, it was shown in [HP16] that when such a T satisfies Ric(T, T ) > 0 and when M is orientable and compact, then T ♭ is a contact form and T is its Reeb vector field; if in addition T is divergence-free and Ric(T, T ) = 1, then T * (T ⊥ ) is J-invariant, where J is the Levi-Civita almost-complex structure on T T M (in fact these two conditions are necessary and sufficient). v. The previous remark did not assume that T is a unit length Killing vector field. Imposing this condition -as well as the condition Ric(T, T ) = 2, so that the endomorphism in (6) below defines an almost complex structure on T ⊥ -would make (M, g, T ) a Sasakian structure. In dimension 3, a classification of these on closed manifolds was obtained in [Gei97], up to diffeomorphism; an explicit metric classification was then given in [Bel01,Bel03], which also established a one-to-one correspondence between Sasakian and normal CR structures, and also classified the latter. For an application to monopole fields, see [DH19]. As to our second question above, we are able to provide the following sufficient condition, using a well known result in [Ham82]:
Theorem 2. Let (M, g) be a compact Riemannian 3-manifold and T a globally defined, unit length Killing vector field. If
S > \frac{2\,|\mathrm{Ric}(T)|_g^2}{\mathrm{Ric}(T,T)} - \mathrm{Ric}(T,T), \qquad (3)
where Ric(T ) is the Ricci operator, then M admits a metric of constant positive sectional curvature.
Finally, our answer to the third question is also given in terms of Ric(T, T ):
Theorem 3. Let (M, g) be a Riemannian 3-manifold that admits a unit length Killing vector field T . If g is locally conformally flat, then
4\,|\mathrm{Ric}(T)|_g^2 = 3\,\mathrm{Ric}(T,T)^2 - 2B\,\mathrm{Ric}(T,T) + C, \qquad (4)
for some constants B, C, where Ric(T ) is the Ricci operator. Conversely, given (4), there exist coordinates (r, θ) on the quotient metric in (1) with respect to which g is conformally flat when
\omega_\theta = 0, \qquad \omega_r^2 + \tfrac{1}{4}\,(\omega^2 + 2B)^2 = C + B^2, \qquad \varphi = h(\theta)\,\omega_r,
where ω 2 = 2Ric(T, T ), ϕ is as in Theorem 1, and h(θ) is a smooth function.
If Ric(T, T ) is constant, then g is locally conformally flat if and only if S = 3Ric(T, T ).
As a check of the last statement, note that for (S 3 ,g) with Hopf Killing vector field T and radius R,
\mathrm{Ric}(T,T) = \frac{2}{R^2}, \qquad S = \frac{6}{R^2}.
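As an independent symbolic check of the Gaussian-curvature formula (2) (this computation is ours and is not part of the paper), one can verify with SymPy that the quotient metric dr^2 + \varphi(r,\theta)^2 d\theta^2 indeed has Gaussian curvature -\varphi_{rr}/\varphi:

    # Symbolic check that dr^2 + phi(r, theta)^2 dtheta^2 has Gaussian
    # curvature -phi_rr / phi.
    import sympy as sp

    r, th = sp.symbols('r theta')
    phi = sp.Function('phi')(r, th)
    x = (r, th)
    g = sp.Matrix([[1, 0], [0, phi**2]])     # quotient metric components
    ginv = g.inv()

    def christoffel(k, i, j):
        # Gamma^k_{ij} of the metric g
        return sp.Rational(1, 2) * sum(
            ginv[k, l] * (sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i]) - sp.diff(g[i, j], x[l]))
            for l in range(2))

    def riemann(k, i, j, l):
        # R^k_{i j l}, with the same sign convention as R(X,Y)Z above
        return (sp.diff(christoffel(k, l, i), x[j]) - sp.diff(christoffel(k, j, i), x[l])
                + sum(christoffel(k, j, m) * christoffel(m, l, i)
                      - christoffel(k, l, m) * christoffel(m, j, i) for m in range(2)))

    # Gaussian curvature K = R_{r theta r theta} / det(g)
    K = sp.simplify(g[0, 0] * riemann(0, 1, 0, 1) / g.det())
    print(sp.simplify(K + sp.diff(phi, r, 2) / phi))   # prints 0

The check goes through with \varphi depending on both r and \theta, since the metric coefficient in the r-direction is constant.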
Divergence, Twist, Shear
A Killing vector field T on a Riemannian manifold (M, g) is defined by the condition
\mathcal{L}_T\, g = 0, \qquad (5)
where L is the Lie derivative. However, when T has unit length and when dim M = 3, there is an equivalent formulation, given by Lemma 2 below, which plays a crucial role in our classification. This formulation also involves the Lie derivative, but owing to the low dimension, only certain components of it, which components carry geometric properties of the flow of T . These properties are the divergence, twist, and shear ; as the latter two are not as well known as the former, we now digress to define them explicitly. Thus, let T be a smooth unit length vector field defined in an open subset of a Riemannian 3-manifold (M, g), so that ∇ v T ⊥ T for all vectors v (∇ is the Levi-Civita connection). Let X and Y be two smooth vector fields such that {T, X, Y } is a local orthonormal frame. Now define the following endomorphism D of the orthogonal complement T ⊥ ⊆ T M ,
D\colon T^{\perp} \longrightarrow T^{\perp}, \qquad v \mapsto \nabla_v T, \qquad (6)
and observe that its matrix with respect to the frame {T, X, Y } is
D = \begin{pmatrix} g(\nabla_X T, X) & g(\nabla_Y T, X) \\ g(\nabla_X T, Y) & g(\nabla_Y T, Y) \end{pmatrix}.
Contained within this matrix are three geometric properties associated to the flow of T : 1. The divergence of T , denoted div T , is simply the trace of D.
2. By Frobenius's theorem, T ⊥ is integrable if and only if the antisymmetric part of D vanishes; as seen in (9) below, this vanishing is governed by the following function, which comprises the off-diagonal elements of the anti-symmetric part of D:
\omega := g(T, [X,Y]) = g(\nabla_Y T, X) - g(\nabla_X T, Y). \qquad (7)
Since ω 2 equals the determinant of the anti-symmetric part of D, it is a frame independent quantity. We call ω 2 the twist function of T and say that the flow of T is twist-free if ω 2 = 0. 3. The third piece of information is the shear σ of T ; it is given by the trace-free symmetric part of D, whose components σ 1 , σ 2 we combine here into a complex-valued quantity:
\sigma := \underbrace{\tfrac{1}{2}\bigl(g(\nabla_Y T, Y) - g(\nabla_X T, X)\bigr)}_{\sigma_1} + i\,\underbrace{\tfrac{1}{2}\bigl(g(\nabla_Y T, X) + g(\nabla_X T, Y)\bigr)}_{\sigma_2}. \qquad (8)
Although σ itself is not frame independent, its magnitude |σ 2 | is: by (9) below, it is minus the determinant of the trace-free symmetric part of D. We say that the flow of T is shear-free if σ = 0. As with being twist-free, being shear-free is a frame independent statement. In terms of div T , ω, and σ, D takes the form
D = \begin{pmatrix} \tfrac{1}{2}\operatorname{div} T & 0 \\ 0 & \tfrac{1}{2}\operatorname{div} T \end{pmatrix} + \underbrace{\begin{pmatrix} -\sigma_1 & \sigma_2 \\ \sigma_2 & \sigma_1 \end{pmatrix}}_{\text{trace-free symmetric}} + \underbrace{\begin{pmatrix} 0 & \tfrac{\omega}{2} \\ -\tfrac{\omega}{2} & 0 \end{pmatrix}}_{\text{anti-symmetric}}. \qquad (9)
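The decomposition (9) is just the standard splitting of a 2x2 matrix into its trace, trace-free symmetric, and anti-symmetric parts; the short numerical sketch below (ours, purely illustrative) recovers div T, \sigma_1, \sigma_2, and \omega from a sample matrix and reassembles it:

    # Numerical illustration of (9): any 2x2 matrix D splits into a trace
    # part, a trace-free symmetric part, and an anti-symmetric part, from
    # which div T, the shear (sigma1, sigma2), and the twist omega follow.
    import numpy as np

    rng = np.random.default_rng(0)
    D = rng.normal(size=(2, 2))          # stands in for v -> nabla_v T on T-perp

    div_T   = np.trace(D)
    sym     = 0.5 * (D + D.T) - 0.5 * div_T * np.eye(2)   # trace-free symmetric part
    antisym = 0.5 * (D - D.T)

    sigma1 = sym[1, 1]                   # = (g(∇_Y T,Y) - g(∇_X T,X)) / 2
    sigma2 = sym[0, 1]                   # = (g(∇_Y T,X) + g(∇_X T,Y)) / 2
    omega  = 2 * antisym[0, 1]           # = g(∇_Y T,X) - g(∇_X T,Y)

    # The three pieces reassemble to D, as in (9).
    reassembled = 0.5 * div_T * np.eye(2) + sym + antisym
    print(np.allclose(reassembled, D))   # True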
We record here the well known fact that, while being divergence-free is not a conformal invariant, being shear-free or twist-free is:
Lemma 1. Let (M, g) be a Riemannian 3-manifold and T a unit length vector field. Given a conformal metricg = e 2f g, the vector field T = e −f T is shear-free with respect tog if and only if T is shear-free with respect to g.
Likewise if shear-free is replaced by twist-free.
Proof. Note that \tilde g(\tilde T, \tilde T) = 1; given a g-orthonormal frame {T, X, Y}, form the \tilde g-orthonormal frame \{\tilde T, \tilde X, \tilde Y\}, with \tilde X = e^{-f} X and \tilde Y = e^{-f} Y. Denoting by \tilde\nabla the Levi-Civita connection of \tilde g, the shear of \tilde T with respect to \tilde g is

\tilde\sigma = \frac{1}{2}\bigl(\tilde g(\tilde\nabla_{\tilde Y} \tilde T, \tilde Y) - \tilde g(\tilde\nabla_{\tilde X} \tilde T, \tilde X)\bigr) + \frac{i}{2}\bigl(\tilde g(\tilde\nabla_{\tilde Y} \tilde T, \tilde X) + \tilde g(\tilde\nabla_{\tilde X} \tilde T, \tilde Y)\bigr)
    = \frac{e^{-f}}{2}\bigl(g(\nabla_Y T, Y) - g(\nabla_X T, X)\bigr) + i\,\frac{e^{-f}}{2}\bigl(g(\nabla_Y T, X) + g(\nabla_X T, Y)\bigr)
    = e^{-f}\sigma,
where in the last step we have used standard formulae relating Levi-Civita connections of conformal metrics, e.g.,
\tilde\nabla_Y T = \nabla_Y T + Y(f)\,T + T(f)\,Y.
Likewise for the twist:
\tilde\omega = \tilde g(\tilde T, [\tilde X, \tilde Y]) = e^{-f}\, g(T, [X,Y]) = e^{-f}\omega.
Thus not only twist-free, but shear-free as well, is a conformal property:
|\tilde\sigma|^2 = 0 \iff |\sigma|^2 = 0 \qquad\text{and}\qquad \tilde\omega^2 = 0 \iff \omega^2 = 0.
When dim M ≥ 4, divergence and shear alone are not enough to characterize unit length Killing vector fields, but they do when dim M = 3:
Lemma 2. A unit length vector field T on a Riemannian 3-manifold (M, g) is a Killing vector field if and only if its flow is geodesic, divergence-free, and shear-free.
Proof. The Killing condition (5) is equivalent to
g(\nabla_v T, w) + g(\nabla_w T, v) = 0 \quad \text{for all } v, w \in TM, \qquad (10)
from which it follows that any Killing vector field T is divergence-free and shear-free, via (8). Finally, (10) also implies that any unit length Killing vector field must have geodesic flow:
∇ T T = 0.
Conversely, suppose that a unit length vector field T is geodesic, divergence-free, and shear-free, and consider (10). Writing v, w with respect to an
orthonormal frame {T, X, Y} as v = a_0 T + a_1 X + a_2 Y and w = b_0 T + b_1 X + b_2 Y, we have

g(\nabla_v T, w) + g(\nabla_w T, v)
    = a_1\, g(\nabla_X T, w) + a_2\, g(\nabla_Y T, w) + a_1\, g(\nabla_w T, X) + a_2\, g(\nabla_w T, Y)
    = 2 a_1 b_1\, g(\nabla_X T, X) + (a_1 b_2 + b_1 a_2)\, g(\nabla_X T, Y) + (a_1 b_2 + b_1 a_2)\, g(\nabla_Y T, X) + 2 a_2 b_2\, \underbrace{g(\nabla_Y T, Y)}_{=\,g(\nabla_X T, X)}
    = 2 (a_1 b_1 + a_2 b_2)\, \underbrace{g(\nabla_X T, X)}_{\frac{1}{2}\operatorname{div} T} + (a_1 b_2 + b_1 a_2)\, \underbrace{\bigl(g(\nabla_X T, Y) + g(\nabla_Y T, X)\bigr)}_{2\sigma_2}.
This vanishes by our assumptions, completing the proof.
We can now state our plan of attack: divergence, geodesic flow, twist, and shear all involve first derivatives of T , whereas curvature involves second derivatives. Our plan of attack, therefore, is to express the components of the Riemann curvature tensor in terms of the divergence, twist, and shear of T , thereby reducing second-order equations to first-order ones -indeed, further encouraged by the fact that, as we have just seen, if T is a unit length Killing vector field, then div T, σ, and ∇ T T all vanish, so that only T 's twist function ω 2 is unknown. The hope is that this will simplify things enough to allow a full determination of the metric. And it will -after we express the curvature in terms of the divergence, twist, and shear, which we now proceed to do.
The Newman-Penrose Formalism for Riemannian 3-manifolds
In what follows we present the Newman-Penrose formalism for Riemannian 3-manifolds, presenting here only the resulting equations; complete derivations can be found in [Aaz15]. Let {T, X, Y } be an orthonormal framewith T not necessarily a Killing vector field -and form the complex-valued quantities
$$m := \frac{1}{\sqrt2}\,(X - iY), \qquad \overline m := \frac{1}{\sqrt2}\,(X + iY).$$
Henceforth we work with the complex frame {T, m, m}, for which only g(T, T ) = 1 , g(m, m) = 1 are nonzero. The following quantities associated to this complex frame play a central role in all that follows.
Definition. The spin coefficients of the complex frame {T, m, m} are the complex-valued functions
$$\kappa := -g(\nabla_T T,\, m), \quad \rho := -g(\nabla_{\overline m} T,\, m), \quad \sigma := -g(\nabla_m T,\, m), \quad \varepsilon := g(\nabla_T m,\, \overline m), \quad \beta := g(\nabla_m m,\, \overline m).$$
Note that, because T has unit length, its flow is geodesic, ∇ T T = 0, if and only if κ = 0; that σ, when written out in terms of its real and imaginary parts, is precisely the complex shear (8); and that the spin coefficient ρ has real and imaginary parts given by
$$\rho = -\frac{\operatorname{div} T}{2} - i\,\frac{\omega}{2}\,. \tag{11}$$
In other words, the first three spin coefficients κ, ρ, σ stand in for the geometric properties of the flow of T discussed above. In terms of all five spin coefficients, the Lie brackets are
$$[T, m] = \kappa\, T + (\varepsilon + \overline\rho)\, m + \sigma\, \overline m, \tag{12}$$
$$[m, \overline m] = (\overline\rho - \rho)\, T + \overline\beta\, m - \beta\, \overline m. \tag{13}$$
(The remaining Lie bracket [T, m] is obtained by complex conjugation.) Now to the curvature; to begin with, our sign convention for the Riemann curvature tensor is
R(X, Y )Z = ∇ X ∇ Y Z − ∇ Y ∇ X Z − ∇ [X,Y ] Z,
in which case the Ricci tensor with respect to the complex frame {T, m, m} is
Ric(v, w) = R(T, v, w, T ) + R(m, v, w, m) + R(m, v, w, m).
The following identities satisfied by the Ricci tensor in the complex frame $\{T, m, \overline m\}$ will appear in formulae below:
$$\operatorname{Ric}(m, m) = R(T, m, m, T), \quad \operatorname{Ric}(T, T) = 2R(m, T, T, \overline m), \quad \operatorname{Ric}(T, m) = R(m, T, m, m), \quad \operatorname{Ric}(m, \overline m) = \tfrac12\operatorname{Ric}(T, T) + R(m, m, m, m).$$
The Newman–Penrose formalism begins by expressing the Lie brackets in terms of spin coefficients, as we saw in (12) and (13) above. It then moves down to the level of curvature, by expressing the components of the curvature tensor $R(T, m, T, m)$, $R(T, m, T, m)$, $R(m, m, T, m)$, $R(T, m, m, m)$, $R(m, m, m, m)$ in terms of the Ricci tensor and the spin coefficients. Doing so, the following (first-order) equations arise; they play the driving role in our classification:
$$T(\rho) - m(\kappa) = |\kappa|^2 + |\sigma|^2 + \rho^2 + \kappa\beta + \tfrac12\operatorname{Ric}(T, T), \tag{14}$$
$$T(\sigma) - m(\kappa) = \kappa^2 + 2\sigma\varepsilon + \sigma(\rho + \overline\rho) - \kappa\beta + \operatorname{Ric}(m, m), \tag{15}$$
$$m(\rho) - m(\sigma) = 2\sigma\beta + (\overline\rho - \rho)\kappa + \operatorname{Ric}(T, m), \tag{16}$$
$$T(\beta) - m(\varepsilon) = \sigma(\kappa - \overline\beta) + \kappa(\varepsilon - \overline\rho) + \beta(\varepsilon + \overline\rho) - \operatorname{Ric}(T, m), \tag{17}$$
$$m(\overline\beta) + \overline m(\beta) = |\sigma|^2 - |\rho|^2 - 2|\beta|^2 + (\rho - \overline\rho)\varepsilon - \operatorname{Ric}(m, \overline m) + \tfrac12\operatorname{Ric}(T, T). \tag{18}$$
Finally, up to complex conjugation, there are two nontrivial differential Bianchi identities: We now immediately specialize to the case when T is a Killing vector field:
T (Ric(T, m)) − 1 2 m(Ric(T, T )) + m(Ric(m, m)) = κ Ric(T, T ) − Ric(m, m) + ε + 2ρ +ρ Ric(T, m) (19) + σ Ric(T, m) − κ + 2β Ric(m, m)
Lemma 3. Let (M, g) be a Riemannian 3-manifold admitting a unit length Killing vector field T with twist function ω 2 . With respect to any complex frame {T, m, m}, the Ricci tensor Ric and scalar curvature S satisfy
$$T(\omega) = 0, \qquad \operatorname{Ric}(T, T) = \frac{\omega^2}{2}, \qquad \operatorname{Ric}(m, m) = 0,$$
$$m(\overline\beta) + \overline m(\beta) = -2|\beta|^2 - i\omega\varepsilon - \frac12\Big(S - \frac{\omega^2}{2}\Big). \tag{21}$$
When (21) is written in terms of the underlying orthonormal frame $\{T, X, Y\}$ of the complex frame, it is
$$X(\operatorname{div} X) + Y(\operatorname{div} Y) = -(\operatorname{div} X)^2 - (\operatorname{div} Y)^2 - i\omega\varepsilon - \frac12\Big(S - \frac{\omega^2}{2}\Big). \tag{22}$$
Proof. By Lemma 2, we know that κ = σ = ρ +ρ = 0;
inserting these into (14) and (15) directly yields the first line of equations; e.g.,
Ric(T, T ) = ω 2 2 , T (ω) = 0,
are, respectively, the real and imaginary parts of (14). Meanwhile, (21) follows from (18), which has no imaginary part, and the fact that the scalar curvature S in terms of the complex frame {T, m, m} is
$$S = \operatorname{Ric}(T, T) + 2\operatorname{Ric}(m, \overline m) \;\Longrightarrow\; \operatorname{Ric}(m, \overline m) = \frac{S}{2} - \frac{\omega^2}{4}\,. \tag{23}$$
Finally, (22) follows from the fact that, when ∇ T T = 0,
$$\beta = \frac{1}{\sqrt2}\Big(g(\nabla_Y X, Y) + i\,g(\nabla_X X, Y)\Big) = \frac{1}{\sqrt2}\big(\operatorname{div} X - i\operatorname{div} Y\big), \tag{24}$$
which completes the proof.
We have not yet considered the differential Bianchi identities; let us do so now. Inserting the contents of Lemma 3 into (19) and (20), as well as ρ = −ρ, yields T (m(ρ)) = (ε +ρ)m(ρ) for (19); but this is precisely the Lie bracket (12) applied to ρ (bearing in mind that T (ρ) = 0), and therefore carries no new information. As for (20), it yields −m(m(ρ)) + m(m(ρ)) = −β m(ρ) + β m(ρ), where T (S) = 0 andρ = −ρ have been used. But this is precisely the Lie bracket (13) applied to ρ, so that (20) also yields no new information.
Local Coordinates
The goal of this section is to establish the "right" local coordinates in which to prove Theorem 1 in the next section. To begin with, recall that because κ = σ = ρ +ρ = 0, the only spin coefficients remaining are ε and β. Observe that the former is in fact purely imaginary,
ε = ig(∇ T X, Y ),(25)
and the latter, when ∇ T T = 0, is given by (24). The following "gauge freedom" simultaneously enjoyed by these two spin coefficients will prove useful in the proof of Theorem 1:
Proposition. Let T be a unit length Killing vector field with twist function ω 2 and {T, m, m} a complex frame. Then there exists a smooth real-valued function ϑ such that the complex frame {T, m * , m * } defined by the rotation m * · · = e iϑ m , m * · · = e −iϑ m has spin coefficients κ * = σ * = 0, ρ * = ρ, ε * = ρ , Re(β * ) = 0 , T (β * ) = 0.
Proof. By definition,
κ * = −g(∇ T T , m * ) = e iϑ κ = 0;
similarly, σ * = e 2iϑ σ = 0, and ρ * = ρ (in particular, ω 2 * = ω 2 ). Next,
ε * = g(∇ T m * , m * ) = e −iϑ g(∇ T (e iϑ m), m) = ε + e −iϑ T (e iϑ ) = ε + iT (ϑ).(27)
Similarly,
β * = g(∇ m * m * , m 1 ) = g(∇ m (e iϑ m), m)
= e iϑ (β + im(ϑ)).
By (25) and (27), we may choose a locally defined function ϑ so that
ε_* = ρ_* = ρ. Now, choose any other function ψ satisfying T(ψ) = 0 and rotate m_*, m̄_* by ψ; let {T, m_o, m̄_o} denote the corresponding frame. Then the analogue of (27) for the frame {T, m_o, m̄_o} shows that ε_o remains unchanged, ε_o = ε_* = ρ_* = ρ, so that our task would be complete if we can find a ψ satisfying T(ψ) = 0 and Re(β_o) = 0. (28) To do so, observe that when ε_* = ρ_*, then [T, m_*] = 0 by (12), since κ_* = σ_* = 0 and ε_* + ρ̄_* = ρ + ρ̄ = 0; hence [T, X_*] = [T, Y_*] = 0.
Let {T, X * , Y * } denote the underlying orthonormal frame corresponding to the complex frame {T, m * , m * }. Since [T, X * ] = 0, there exist local coordinates (t, u, v) and functions p, q, r such that
T = ∂ t , X * = ∂ u , Y * = p∂ t + q∂ u + r∂ v ,
with p, q, r functions of u, v only, since [T, Y * ] = 0, and with r nowhere vanishing. The coframe metrically equivalent to {T, X * , Y * } is therefore
T ♭ = dt − p r dv , X ♭ * = du − q r dv , Y ♭ * = 1 r dv.
Next, since (X ♭ * ) 2 + (Y ♭ * ) 2 defines a Riemannian metric on the 2-manifold with coordinates {(u, v)}, and since any Riemannian 2-manifold is locally conformally flat (see, e.g., [Che55]), it follows that there exist coordinates (x, y) and a smooth function λ(x, y) such that (X ♭ * ) 2 + (Y ♭ * ) 2 = e 2λ (dx 2 + dy 2 ).
By a rotation in x, y if necessary, we may further assume that
X ♭ * = e λ dx , Y ♭ * = e λ dy.
In the new coordinates (t, x, y), we thus have that
T = ∂ t , X * = e −λ (∂ x + a∂ t ) , Y * = e −λ (∂ y + b∂ t ),
for some smooth functions a(x, y), b(x, y). With these coordinates in hand, we now return to the task of satisfying (28), namely, finding a function ψ(x, y) satisfying Re(β o ) = Re e iψ (β * + im * (ψ)) = 0, or
e iψ (β * + im * (ψ)) + e −iψ (β * − i m * (ψ)) = 0.(30)
When expanded, and using the fact that
$$\operatorname{div} X_* = \lambda_x e^{\lambda}, \qquad \operatorname{div} Y_* = \lambda_y e^{\lambda},$$
(30) is a quasilinear first-order PDE in ψ,
$$(\sin\psi)\,\psi_x - (\cos\psi)\,\psi_y = (\cos\psi)\,\lambda_x + (\sin\psi)\,\lambda_y,$$
which has a solution by the method of characteristics. Finally, (16) and (17) together yield T(β_*) − m(ε_*) = −Ric(T, m) = −m(ρ) = −m(ε_*), so that T(β_*) = 0; since T(ψ) = 0 and [T, m_*] = 0, it follows that T(β_o) = 0 as well. The following Corollary collects together what we've established so far:
Corollary. Let (M, g) be a Riemannian 3-manifold and T a unit length Killing vector field with twist function ω 2 . Then there exists an orthonormal frame {T, X, Y } satisfying
$$\kappa = \sigma = 0, \qquad \rho = \varepsilon = -\frac{i}{2}\omega, \qquad \beta = -\frac{i}{\sqrt2}\operatorname{div} Y, \tag{31}$$
and with T (ω) = T (β) = 0. In this frame, (22) takes the form
$$Y(\operatorname{div} Y) = -(\operatorname{div} Y)^2 - \frac12\Big(S + \frac{\omega^2}{2}\Big). \tag{32}$$
Notice that (32) implies that such a frame may not always exist globally; e.g., if M is compact and S is nonnegative and positive somewhere, then a standard Riccati argument yields that in such a case the only complete solution to (32) is one where div Y = S + ω 2 2 = 0, which is impossible. We now proceed to our local classification.
The Local Classification
Theorem 1 follows from one further modification to the orthonormal frame satisfying (31):
Theorem 1. Let (M, g) be a Riemannian 3-manifold that admits a unit length Killing vector field T . Then there exist local coordinates (t, r, θ) and a smooth function ϕ(r, θ) such that
T = ∂ t , g = (T ♭ ) 2 + dr 2 + ϕ 2 dθ 2 ,(33)
and where the quotient metric dr 2 + ϕ 2 dθ 2 has Gaussian curvature
− ϕ rr ϕ = 1 2 S + Ric(T, T ) ,(34)
with S and Ric the scalar curvature and Ricci tensor of g, respectively. If (33) holds globally on M = R³, then g is complete if and only if
$$\lim_{r\to\infty}\,\inf_{|p|\ge r}\big[S + \operatorname{Ric}(T, T)\big]_p \le 0.$$
Proof. Let (M, g) be a Riemannian 3-manifold and T a unit length Killing vector field with twist function ω/2. By our Corollary above, there exist a local orthonormal frame {T, X, Y} satisfying (31) and coordinates (t, x, y) in which T = ∂/∂t. Let {T♭, X♭, Y♭} denote the dual coframe. We now modify the coordinates (t, x, y) while keeping T = ∂/∂t unchanged. The key is that (12) and (13) satisfy
[T, X] = [T, Y ] = 0 , [X, Y ] = ωT + (div Y )X,
from which it follows that Y ♭ is closed, dY ♭ = 0; hence
Y ♭ = dr
for some smooth function r(x, y). Similarly,
dX ♭ = (div Y )Y ♭ ∧ X ♭ ⇒ X ♭ = ϕdθ
for some smooth functions ϕ(x, y) > 0 and θ(x, y), with the former satisfying
Y (ϕ) = (div Y )ϕ(35)
(recall that T (β) = 0). Since X(r) = Y (θ) = 0, we can define new coordinates (t, r, θ), in terms of which the frame {T, X, Y } takes the form
T = ∂ t , X = h∂ t + 1 ϕ ∂ θ , Y = k∂ t + ∂ r ,(36)
for some smooth functions h, k; furthermore, ϕ t = h t = k t = 0 (recall that [T, X] = [T, Y ] = 0), so that ϕ, h, k are all functions of r, θ only. Thus
g = (T ♭ ) 2 + (X ♭ ) 2 + (Y ♭ ) 2 = (T ♭ ) 2 + dr 2 + ϕ 2 dθ 2 ,
confirming (33). Now, the quotient metric dr 2 + ϕ 2 dθ 2 has scalar curvature −2ϕ rr /ϕ, hence Gaussian curvature −ϕ rr /ϕ. To relate this to the curvature of (M, g), we take a Y -derivative of (35), make use of (32), and note that ∂ t (div Y ) = 0 by (26), to obtain
$$\varphi_{rr} = Y(\operatorname{div} Y)\,\varphi + (\operatorname{div} Y)^2\varphi \;\;\Longrightarrow\;\; -\frac{\varphi_{rr}}{\varphi} \overset{(32)}{=} \frac12\Big(S + \frac{\omega^2}{2}\Big)\,.$$
Since Ric(T, T ) = ω 2 2 by Lemma 3, this confirms (34). There remains, finally, the statement about completeness; thus, suppose that on R 3 = {(t, r, θ)} with metric g given by (33) we have the globally defined vector fields appearing in (36). Let γ(s) = (t(s), r(s), θ(s)) be a geodesic in (R 3 , g); since g(T, γ ′ (s)) is a constant for all s, which constant we denote by c, the tangent vector γ ′ (s) takes the form
γ ′ (s) = cT | γ(s) + a(s)X| γ(s) + b(s)Y | γ(s)
for some smooth functions a(s), b(s). Letting γ(s) · · = (r(s), θ(s)) denote the projection onto R 2 = {(r, θ)}, it follows that γ(s) will be complete if and only if γ(s) is complete in (R 2 , dr 2 +ϕ 2 dθ 2 ). We now make use of well known result in [KW74]: the latter metric is complete if and only if
lim r→∞ inf |p|≥r − ϕ rr ϕ p ≤ 0.
Since − ϕrr ϕ = 1 2 S + Ric(T, T ) , the proof is complete. A final remark regarding Theorem 1: bear in mind that, since
T ♭ = dt − ϕhdθ − kdr,
the coordinates (t, r, θ) above are not "semigeodesic" (see, e.g., [Lee18]); indeed, the metric components g ij in the coordinate basis {∂ t , ∂ r , ∂ θ } are given by
$$(g_{ij}) = \begin{pmatrix} 1 & -k & -\varphi h\\ -k & 1 + k^2 & \varphi h k\\ -\varphi h & \varphi h k & \varphi^2(1 + h^2)\end{pmatrix}. \tag{37}$$
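As a quick sanity check (not part of the paper), one can expand the coframe expression symbolically and recover exactly the component matrix (37); the snippet below is a minimal sympy sketch of that computation, treating ϕ, h, k as formal symbols.

```python
# Verify that expanding (T^flat)^2 + dr^2 + phi^2 dtheta^2, with
# T^flat = dt - phi*h*dtheta - k*dr, reproduces the matrix (37).
import sympy as sp

dt, dr, dth = sp.symbols('dt dr dtheta')     # formal coordinate differentials
phi, h, k = sp.symbols('phi h k')            # functions of (r, theta), kept symbolic

g = sp.expand((dt - phi*h*dth - k*dr)**2 + dr**2 + phi**2*dth**2)
P = sp.Poly(g, dt, dr, dth)
basis = (dt, dr, dth)

def comp(i, j):
    c = P.coeff_monomial(basis[i] * basis[j])
    return c if i == j else c / 2            # each off-diagonal term appears twice

G = sp.Matrix(3, 3, comp)
expected = sp.Matrix([[1, -k, -phi*h],
                      [-k, 1 + k**2, phi*h*k],
                      [-phi*h, phi*h*k, phi**2*(1 + h**2)]])
assert (G - expected).expand() == sp.zeros(3, 3)
```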
We now proceed to the Lorentzian setting.
The Lorentzian setting
Before proceeding to a proof of the Lorentzian analogue of Theorem 1, we first collect a few facts from Lorentzian geometry; in what follows we adopt the metric index (−++). First, a vector field T on a Lorentzian manifold (M, g L ) is timelike if g L (T, T ) < 0. Second, if a timelike T has unit length, g L (T, T ) = −1, then
g R · · = g L + 2(T ♭ L ) 2(38)
defines a Riemannian metric on M (here T ♭ L = g L (T, ·)). Third, the following properties hold between g R and g L :
1. T is a unit length Killing vector field with respect to g R if and only if T is a unit timelike Killing vector field with respect to g L (see, e.g., [Ole14]). 2. If T is a g R -unit length Killing vector field, then Ric R (T, T ) = Ric L (T, T ) (consult [Ole14]; this follows because ∇ R X T = −∇ L X T for any unit length X that is g R -or g L -orthogonal to T , where ∇ R and ∇ L are, respectively, the Levi-Civita connections of g R and g L ), while their scalar curvatures S R and S L satisfy
S L = S R + 2Ric R (T, T ).
In particular, S R + Ric R (T, T ) = S L − Ric L (T, T ). 3. If T is g R -unit length Killing vector field, then g L is complete if and only if g R is complete (see [RS94]). With these facts established, the Lorentzian analogue of Theorem 1 now follows easily:
Corollary 1. Let (M, g L ) be a Lorentzian 3-manifold that admits a unit timelike Killing vector field T . Then there exists local coordinates (t, r, θ) and a smooth function ϕ(r, θ) such that
T = ∂ t , g L = −(T ♭ ) 2 + dr 2 + ϕ 2 dθ 2 ,(39)
and where the quotient metric dr² + ϕ²dθ² has Gaussian curvature ½(S_L − Ric_L(T, T)), with S_L and Ric_L the scalar curvature and Ricci tensor of g_L, respectively. If (39) is defined globally on M = R³, then g_L is complete if and only if
$$\lim_{r\to\infty}\,\inf_{|p|\ge r}\big[S_L - \operatorname{Ric}_L(T, T)\big]_p \le 0.$$
Equivalently, g_L is complete if and only if g_R is complete, where g_R is the corresponding Riemannian metric given by (38).
Proof. By our remarks above, T is a unit length Killing vector field with respect to the Riemannian metric g R , with S R + Ric R (T, T ) = S L − Ric L (T, T ); Corollary 1 therefore follows immediately from Theorem 1.
The compact case
We now prove a global obstruction result in the compact setting. Thus, let (M, g) be a compact Riemannian 3-manifold equipped with a globally defined unit length Killing vector field. With respect to a local orthonormal frame {T, X, Y }, we have, by Lemma 3, that
Ric(X, X) = Ric(Y, Y )
via Ric(m,m) = 0
, Ric(T, T ) = ω 2 2 .
In fact, because
S 2 − ω 2 4 (23) = Ric(m, m) = 1 2 Ric(X, X) + Ric(Y, Y ) ,
it follows that Ric(X, X) = Ric(Y, Y ) = S 2 − ω 2 4 . Finally, by (16) we get Ric(T, m) = m(ρ), whose real and imaginary parts yield
$$\operatorname{Ric}(T, X) = -\frac{Y(\omega)}{2}, \qquad \operatorname{Ric}(T, Y) = \frac{X(\omega)}{2}\,. \tag{40}$$
Thus the Ricci operator Ric :
T M −→ T M , defined by v → Ric(v) = R(v, T )T + R(v, X)X + R(v, Y )Y,(41)
has, with respect to the frame {T, X, Y }, the matrix
$$\operatorname{Ric} = \frac12\begin{pmatrix} \omega^2 & -Y(\omega) & X(\omega)\\ -Y(\omega) & S - \frac{\omega^2}{2} & 0\\ X(\omega) & 0 & S - \frac{\omega^2}{2}\end{pmatrix}. \tag{42}$$
The characteristic polynomial of Ric is
$$\Big(\frac{S}{2} - \frac{\omega^2}{4} - \lambda\Big)\Big[\lambda^2 - \Big(\frac{S}{2} + \frac{\omega^2}{4}\Big)\lambda + \frac{\omega^2}{2}\Big(\frac{S}{2} - \frac{\omega^2}{4}\Big) - \frac14|\nabla\omega|^2_g\Big],$$
where ∇ω = X(ω)X + Y (ω)Y is the gradient of ω; the eigenvalues of Ric are then easily found to be
$$\lambda_1 = \frac{S}{4} + \frac{\omega^2}{8} + \frac{\sqrt\Delta}{2}, \qquad \lambda_2 = \frac{S}{4} + \frac{\omega^2}{8} - \frac{\sqrt\Delta}{2}, \qquad \lambda_3 = \frac{S}{2} - \frac{\omega^2}{4}, \tag{43}$$
where $\Delta := \frac14\big(S - \frac32\omega^2\big)^2 + |\nabla\omega|^2_g$. Note that when the twist is constant, (42) reduces to
$$\operatorname{Ric} = \begin{pmatrix} \frac{\omega^2}{2} & 0 & 0\\ 0 & \frac{S}{2} - \frac{\omega^2}{4} & 0\\ 0 & 0 & \frac{S}{2} - \frac{\omega^2}{4}\end{pmatrix}. \tag{44}$$
As mentioned in the Introduction, the canonical such example is (S 3 ,g) with radius R and Hopf Killing vector field T :
$$\operatorname{Ric}(T, T) = \frac{2}{R^2} = \frac{\omega^2}{2}, \qquad S = \frac{6}{R^2} \;\Longrightarrow\; \frac{S}{2} - \frac{\omega^2}{4} = \frac{\omega^2}{2}\,. \tag{45}$$
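For readers who prefer a concrete check, the following small numerical sketch (not from the paper) builds the Ricci matrix (42) for sample values of S, ω, X(ω), Y(ω), confirms that its eigenvalues agree with the closed-form expressions (43), and prints the round-S³ values in (45).

```python
import numpy as np

def ricci_matrix(S, w, Xw, Yw):
    # The matrix (42) in the frame {T, X, Y}.
    return 0.5 * np.array([[w**2, -Yw,          Xw],
                           [-Yw,  S - w**2/2.0, 0.0],
                           [ Xw,  0.0,          S - w**2/2.0]])

def eigenvalues_43(S, w, Xw, Yw):
    grad2 = Xw**2 + Yw**2                          # |grad omega|_g^2
    Delta = 0.25*(S - 1.5*w**2)**2 + grad2
    base = S/4.0 + w**2/8.0
    return sorted([base + np.sqrt(Delta)/2.0,
                   base - np.sqrt(Delta)/2.0,
                   S/2.0 - w**2/4.0])

S, w, Xw, Yw = 3.7, 1.2, 0.4, -0.9                 # arbitrary test values
assert np.allclose(sorted(np.linalg.eigvalsh(ricci_matrix(S, w, Xw, Yw))),
                   eigenvalues_43(S, w, Xw, Yw))

R = 2.0                                            # round S^3 of radius R, Hopf field T
w_hopf, S_hopf = 2.0/R, 6.0/R**2                   # Ric(T,T) = w^2/2 = 2/R^2
print(np.linalg.eigvalsh(ricci_matrix(S_hopf, w_hopf, 0.0, 0.0)))  # all equal 2/R^2
```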
In any case, owing to Hamilton's well known result regarding the positivity of the Ricci operator in dimension 3 [Ham82], we have the following global obstruction:
Theorem 2. Let (M, g) be a compact Riemannian 3-manifold and T a globally defined, unit length Killing vector field. If
$$S > \frac{2\,|\operatorname{Ric}(T)|^2_g}{\operatorname{Ric}(T, T)} - \operatorname{Ric}(T, T), \tag{46}$$
then M admits a metric of constant positive sectional curvature.
Proof. Observe that the eigenvalues of Ric in (43) are all positive when
$$S > \frac{|\nabla\omega|^2_g}{\omega^2} + \frac{\omega^2}{2}\,. \tag{47}$$
Because
$$\operatorname{Ric}(T) = R(T, T)T + R(T, X)X + R(T, Y)Y \overset{(40)}{=} \frac{\omega^2}{2}\,T - \frac{Y(\omega)}{2}\,X + \frac{X(\omega)}{2}\,Y, \tag{48}$$
it follows that $|\operatorname{Ric}(T)|^2_g = \frac14\big(\omega^4 + |\nabla\omega|^2_g\big)$.
Since Ric(T, T ) = ω 2 2 , (46) implies (47). By [Ham82], positive Ricci operator implies that M admits a metric of constant positive sectional curvature.
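As a quick numerical illustration (an aside, not part of the proof), one can check that whenever the strict inequality (47) holds, all three eigenvalues in (43) are indeed positive; the sample values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    w = rng.uniform(0.1, 3.0)
    Xw, Yw = rng.normal(size=2)
    grad2 = Xw**2 + Yw**2
    S = grad2 / w**2 + w**2 / 2 + rng.uniform(0.01, 5.0)   # strict form of (47)
    Ric = 0.5 * np.array([[w**2, -Yw,          Xw],
                          [-Yw,  S - w**2/2.0, 0.0],
                          [ Xw,  0.0,          S - w**2/2.0]])
    assert np.all(np.linalg.eigvalsh(Ric) > 0)              # positive Ricci operator
```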
Note that the positive sectional curvature condition in [Ham82] requires that no eigenvalue of Ric should be larger than the sum of the other two eigenvalues. This requires S twice as large: S > 2 |∇ω| 2 g ω 2 + ω 2 .
Criterion for Conformal Flatness
A metric on a 3-manifold is locally conformally flat if and only if its Cotton-York tensor vanishes; since this 2-tensor is symmetric and trace-free, this gives five conditions. The Cotton-York tensor is calculated in Appendix A, where it is written in matrix form as c 1 c 2 c 3 with respect to a local orthonormal frame {T, X, Y } satisfying (36) in the coordinates (t, r, θ); see (55)-(57) below. In what follows, the entry in the i th column and j th row is denoted by c ij . With that said, we now proceed to the proof of Theorem 3:
Theorem 3. Let (M, g) be a Riemannian 3-manifold that admits a unit length Killing vector field T . If g is locally conformally flat, then
4|Ric(T )| 2 g = 3Ric(T, T ) 2 − 2BRic(T, T ) + C(49)
for some constants B, C, where Ric(T ) is the Ricci operator. Conversely, given (49), there exist coordinates (r, θ) on the quotient metric in (1) with respect to which g is conformally flat when
ω θ = 0 , ω 2 r + 1 4 (ω 2 + 2B) 2 = C + B 2 , ϕ = h(θ)ω r ,
where ω 2 = 2Ric(T, T ), ϕ is as in Theorem 1, and h(θ) is a smooth function.
If Ric(T, T ) is constant, then g is locally conformally flat if and only if S = 3Ric(T, T ).
Proof. We start by setting the entry c 32 equal to zero, 1 2 Y (X(ω)) = 1 2 ∂ ∂r 1 ϕ ∂ω ∂θ = 0 (recall from Lemma 3 that ∂ t ω = T (ω) = 0), which implies that
ω θ = A(θ)ϕ(50)
for some function A(θ). Next, c 21 = c 31 = 0 together yield
S = 5 2 ω 2 + B 1 = 5Ric(T, T ) + B 1(51)
for some constant B 1 . It follows that 1 2 S + Ric(T, T ) = − ϕrr ϕ = B 1 2 + 3 2 ω 2 . The remaining two conditions are c 22 = c 33 = 0. Substituting (50) and (51) into c 33 = 0, and recalling (35), gives
ω(ω 2 + B 1 ) = −2 ω r ϕ r + A ′ (θ) ϕ ,
which, after rearranging, becomes
ϕ ω(ω 2 + B 1 ) + 2ω r ϕ r = −2A ′ (θ).(52)
Finally, from c 22 = 0 we get ω(ω 2 + B 1 ) = −2ω rr , and after multiplying through by −ω r and integrating yields
f (θ) = ω 4 + 2B 1 ω 2 + 4ω 2 r(53)
for some function f (θ). To relate f (θ) and A(θ), take a θ-derivative of f ,
f ′ (θ) = 4ω 3 ω θ + 4B 1 ωω θ + 4ω r ω rθ ,(50)
= 4ω 3 A(θ)ϕ + 4B 1 ωA(θ)ϕ + 8ω r A(θ)ϕ r , = 4A(θ) ϕω(ω 2 + B 1 ) + 2ω r ϕ r ,
= −8A(θ)A ′ (θ), and integrate, to obtain
f (θ) = −4A 2 (θ) + 4C
for some constant C. Inserting this back into (53) gives 4ω 2 r = −4A 2 (θ) + 4C + B 2 1 − (ω 2 + B 1 ) 2 . Substituting (50) for A(θ), dividing through by 4, and setting B · · = B 1 /2, yields
$$\omega_r^2 + \frac{\omega_\theta^2}{\varphi^2} = C - \frac{\omega^4}{4} - B\omega^2. \tag{54}$$
The left-hand side can be further simplified; indeed, since Ric(T, T ) = ω 2 2 ,
$$\operatorname{Ric}(T) \overset{(36)}{=} \frac{\omega^2}{2}\,T - \frac{\omega_r}{2}\,X + \frac{\omega_\theta}{2\varphi}\,Y$$
(recall (48)), so that
$$|\operatorname{Ric}(T)|^2_g = \frac{\omega^4}{4} + \frac{\omega_r^2}{4} + \frac{\omega_\theta^2}{4\varphi^2}\,.$$
Substituting this into (54) yields
4|Ric(T )| 2 g = 3Ric(T, T ) 2 − 2BRic(T, T ) + C,
which is precisely (49). Conversely, suppose that (49) holds; then (54) holds and we see that |dω| 2 g = C − ω 4 4 − Bω 2 = |∇ω| 2 g . Next, observe that the vector field
X · · = 2Ric(T ) − ω 2 T = X(ω)Y − Y (ω)X
is divergence-free and satisfies both | X| g = |∇ω| g and g( X, ∇ω) = 0, in which case its normalization will also be divergence-free: div X
| X| g = − g(∇| X| g , X) | X| 2 g = 0.
Then, setting Y · · = ∇ω |∇ω|g gives an orthonormal frame {T, X, Y } satisfying (26). Working in this frame, set X · · = X, Y · · = Y and adjust the coordinates r, θ accordingly. Then in these new coordinates ω θ = 0 and (54) becomes the following ODE:
ω 2 r + 1 4 (ω 2 + 2B) 2 = C + B 2 .
This has the form of a conservation of energy equation with positive potential. The potential is a single well when B ≥ 0 and a double well when B < 0. Thus there will be periodic solutions for generic constants B, C and initial value ω| r=0 = ω 0 satisfying C +B 2 ≥ 1 4 (ω 2 0 +2B) 2 . This is not enough to guarantee conformal flatness, as (51) and (52) must also be satisfied. In light of (34), we now show that these require that ϕ = h(θ)ω r for some function h(θ). Indeed, taking an r derivative of (54) yields 2ω rr ω r = −ω(ω 2 + B 1 )ω r .
Since (50) implies that A is zero, the above implies that (52) can be written as −2ω_rr ϕ + 2ω_r ϕ_r = 0. This requires ω constant or ϕ = h(θ)ω_r for some function h(θ). Taking two r derivatives, and using 2ω_rr = −ω(ω² + B₁), yields −ϕ_rr/ϕ = −ω_rrr/ω_r = ½(3ω² + B₁). Now using (34), this gives (51), showing that for this choice of ϕ, the metric g is conformally flat. Finally, as (58) in Appendix A shows, if Ric(T, T) is constant, then the Cotton–York tensor vanishes if and only if the scalar curvature satisfies S = 3Ric(T, T).
A. Derivation of the Cotton-York Tensor
We compute the Cotton-York tensor with respect to a frame {T, X, Y } satisfying (31). First, the Cotton tensor, Cot, is the exterior covariant derivative of the Schouten tensor:
Cot = d ∇ Sch = ∇ T X Y ⊗ P T ♭ X ♭ Y ♭ + T X Y ⊗ dP ∧ T ♭ X ♭ Y ♭ + T X Y ⊗ P d T ♭ X ♭ Y ♭ = T X Y ⊗ (ωP + dP − P ω) ∧ T ♭ X ♭ Y ♭ ·
Using this, we have
Of related interest is the case when CY equals the traceless Ricci tensor,
CY = Ric − 1 3 Sg,(59)
specifically the case when the scalar curvature S is constant; see, e.g., [NTC15], where this equality is related to so-called topological massive gravity in dimension 3, and where S = 6Λ with Λ the cosmological constant. We mention here in passing that in the presence of a unit length Killing vector field T , the condition (59) with S constant implies that Ric(T, T ) = ω 2 2 is also constant. Indeed, 5 4 ωY (ω)
= 0, together imply that ω is constant, as can be easily verified.
and m(Ric(T, m)) + m(Ric(T, m)) − T Ric(m, m) − (1/2)Ric(T, T ) = (ρ +ρ) Ric(T, T ) − Ric(m, m) −σRic(m, m) − σRic(m, m) (20) − 2κ +β Ric(T, m) − 2κ + β Ric(T, m).
The Cotton-York tensor is the Hodge-star of the Cotton tensor:CY · · = ⋆Cot = T X Y ⊗ CY case CY = 0 if and only if S = 3Ric(T, T ).
=
Ric(X, Y )
Written as CY = c 1 c 2 c 3 with columns c 1 , c 2 , c 3 , it is given byObserve that if Ric(T, T ) = ω 2 /2 is constant, then the Cotton-York tensor simplifies to
The Newman--Penrose formalism for Riemannian 3-manifolds. Aazami Amir Babak, Journal of Geometry and Physics. 94Amir Babak Aazami. The Newman--Penrose formalism for Riemannian 3- manifolds. Journal of Geometry and Physics, 94:1-7, 2015.
. Florin Alexandru Belgun. Normal CR structures on compact 3-manifolds. Mathematische Zeitschrift. 2383Florin Alexandru Belgun. Normal CR structures on compact 3-manifolds. Math- ematische Zeitschrift, 238(3):441-460, 2001.
. Florin Alexandru Belgun. Normal CR structures on S 3 . Mathematische Zeitschrift. 2441Florin Alexandru Belgun. Normal CR structures on S 3 . Mathematische Zeitschrift, 244(1):125-151, 2003.
Killing vector fields of constant length on Riemannian manifolds. Nikolaevich Valerii, Yu G Berestovskii, Nikonorov, Siberian Mathematical Journal. 493Valerii Nikolaevich Berestovskii and Yu G Nikonorov. Killing vector fields of constant length on Riemannian manifolds. Siberian Mathematical Journal, 49(3):395-407, 2008.
Three-manifolds with many flat planes. Renato Bettiol, Benjamin Schmidt, Transactions of the American Mathematical Society. 3701Renato Bettiol and Benjamin Schmidt. Three-manifolds with many flat planes. Transactions of the American Mathematical Society, 370(1):669-693, 2018.
An elementary proof of the existence of isothermal parameters on a surface. Shing-Shen Chern, Proceedings of the American Mathematical Society. 65Shing-Shen Chern. An elementary proof of the existence of isothermal parame- ters on a surface. Proceedings of the American Mathematical Society, 6(5):771- 782, 1955.
Ricci-positive geodesic flows and pointcompletion of static monopole fields. Kumbu Dorji, Adam Harris, Journal of Geometry and Physics. 139Kumbu Dorji and Adam Harris. Ricci-positive geodesic flows and point- completion of static monopole fields. Journal of Geometry and Physics, 139:78- 87, 2019.
Normal contact structures on 3-manifolds. Hansjörg Geiges, Tohoku Mathematical Journal, Second Series. 493Hansjörg Geiges. Normal contact structures on 3-manifolds. Tohoku Mathemat- ical Journal, Second Series, 49(3):415-422, 1997.
Three-manifolds with positive Ricci curvature. Richard S Hamilton, Journal of Differential Geometry. 172Richard S. Hamilton. Three-manifolds with positive Ricci curvature. Journal of Differential Geometry, 17(2):255-306, 1982.
Conformal great circle flows on the threesphere. Adam Harris, Gabriel P Paternain, Proceedings of the. theAmerican Mathematical Society144Adam Harris and Gabriel P. Paternain. Conformal great circle flows on the three- sphere. Proceedings of the American Mathematical Society, 144:1725-1734, 2016.
Curvature functions for open 2-manifolds. Jerry L Kazdan, Frank W Warner, Annals of Mathematics. Jerry L. Kazdan and Frank W. Warner. Curvature functions for open 2- manifolds. Annals of Mathematics, pages 203-219, 1974.
Introduction to Riemannian manifolds. John M Lee, Springer2nd editionJohn M. Lee. Introduction to Riemannian manifolds. Springer, 2nd edition, 2018.
Compact stable surfaces with constant mean curvature in Killing submersions. M Ana, José M Lerma, Manzano, Annali di Matematica Pura ed Applicata196Ana M Lerma and José M Manzano. Compact stable surfaces with constant mean curvature in Killing submersions. Annali di Matematica Pura ed Applicata (1923-), 196(4):1345-1364, 2017.
On the classification of Killing submersions and their isometries. M José, Manzano, Pacific Journal of Mathematics. 2702José M Manzano. On the classification of Killing submersions and their isome- tries. Pacific Journal of Mathematics, 270(2):367-392, 2014.
An approach to gravitational radiation by a method of spin coefficients. Ezra Newman, Roger Penrose, Journal of Mathematical Physics. 33Ezra Newman and Roger Penrose. An approach to gravitational radiation by a method of spin coefficients. Journal of Mathematical Physics, 3(3):566-578, 1962.
A Goldberg-Sachs theorem in dimension three. Arman Pawe L Nurowski, Taghavi-Chabert, Classical and Quantum Gravity. 3211115009Pawe l Nurowski and Arman Taghavi-Chabert. A Goldberg-Sachs theorem in dimension three. Classical and Quantum Gravity, 32(11):115009, 2015.
Canonical variation of a Lorentzian metric. Benjamín Olea, Journal of Mathematical Analysis and Applications. 4191Benjamín Olea. Canonical variation of a Lorentzian metric. Journal of Mathe- matical Analysis and Applications, 419(1):156-171, 2014.
On completeness of certain families of semi-Riemannian manifolds. Alfonso Romero, Miguel Sánchez, Geometriae Dedicata. 531Alfonso Romero and Miguel Sánchez. On completeness of certain families of semi-Riemannian manifolds. Geometriae Dedicata, 53(1):103-117, 1994.
Three-manifolds with constant vector curvature. Benjamin Schmidt, Jon Wolfson, Indiana University Mathematics Journal. 636Clark University WorcesterMA 01610 Email address: [email protected], [email protected] Schmidt and Jon Wolfson. Three-manifolds with constant vector cur- vature. Indiana University Mathematics Journal, 63(6):1757-1783, 2014. Clark University Worcester, MA 01610 Email address: [email protected], [email protected]
| []
|
[
"Unconstrained Facial Action Unit Detection via Latent Feature Domain",
"Unconstrained Facial Action Unit Detection via Latent Feature Domain"
]
| [
"Zhiwen Shao ",
"Fellow, IEEEJianfei Cai ",
"Tat-Jen Cham ",
"Xuequan Lu ",
"Lizhuang Ma "
]
| []
| [
"IEEE TRANSACTIONS ON AFFECTIVE COMPUTING"
]
| Facial action unit (AU) detection in the wild is a challenging problem, due to the unconstrained variability in facial appearances and the lack of accurate annotations. Most existing methods depend on either impractical labor-intensive labeling or inaccurate pseudo labels. In this paper, we propose an end-to-end unconstrained facial AU detection framework based on domain adaptation, which transfers accurate AU labels from a constrained source domain to an unconstrained target domain by exploiting labels of AU-related facial landmarks. Specifically, we map a source image with label and a target image without label into a latent feature domain by combining source landmark-related feature with target landmark-free feature. Due to the combination of source AU-related information and target AU-free information, the latent feature domain with transferred source label can be learned by maximizing the target-domain AU detection performance. Moreover, we introduce a novel landmark adversarial loss to disentangle the landmark-free feature from the landmark-related feature by treating the adversarial learning as a multi-player minimax game. Our framework can also be naturally extended for use with target-domain pseudo AU labels. Extensive experiments show that our method soundly outperforms lower-bounds and upper-bounds of the basic model, as well as state-of-the-art approaches on the challenging in-the-wild benchmarks. The code is available at https://github.com/ZhiwenShao/ADLD. Low High Latent Source Target Disentangle-Swap-Translate s x t x s g t g s l z t l z s t z t t work required [14], it is costly and impractical to manually annotate unconstrained images at a large scale for fullysupervised learning.Limitations of Existing Solutions. There have been some attempts at AU detection of unconstrained images, which often depend on pseudo AU labels. These pseudo labels were automatically annotated by an AU detection model[13]trained with constrained images, which are inaccurate due | 10.1109/taffc.2021.3091331 | [
"https://arxiv.org/pdf/1903.10143v4.pdf"
]
| 235,490,203 | 1903.10143 | 7613940eded437e147f148ebbd82f0c7a2d69741 |
Unconstrained Facial Action Unit Detection via Latent Feature Domain
Zhiwen Shao
Fellow, IEEEJianfei Cai
Tat-Jen Cham
Xuequan Lu
Lizhuang Ma
Unconstrained Facial Action Unit Detection via Latent Feature Domain
IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
Index Terms: Unconstrained facial AU detection, domain adaptation, landmark adversarial loss, feature disentanglement.
Facial action unit (AU) detection in the wild is a challenging problem, due to the unconstrained variability in facial appearances and the lack of accurate annotations. Most existing methods depend on either impractical labor-intensive labeling or inaccurate pseudo labels. In this paper, we propose an end-to-end unconstrained facial AU detection framework based on domain adaptation, which transfers accurate AU labels from a constrained source domain to an unconstrained target domain by exploiting labels of AU-related facial landmarks. Specifically, we map a source image with label and a target image without label into a latent feature domain by combining source landmark-related feature with target landmark-free feature. Due to the combination of source AU-related information and target AU-free information, the latent feature domain with transferred source label can be learned by maximizing the target-domain AU detection performance. Moreover, we introduce a novel landmark adversarial loss to disentangle the landmark-free feature from the landmark-related feature by treating the adversarial learning as a multi-player minimax game. Our framework can also be naturally extended for use with target-domain pseudo AU labels. Extensive experiments show that our method soundly outperforms lower-bounds and upper-bounds of the basic model, as well as state-of-the-art approaches on the challenging in-the-wild benchmarks. The code is available at https://github.com/ZhiwenShao/ADLD. Low High Latent Source Target Disentangle-Swap-Translate s x t x s g t g s l z t l z s t z t t work required [14], it is costly and impractical to manually annotate unconstrained images at a large scale for fullysupervised learning.Limitations of Existing Solutions. There have been some attempts at AU detection of unconstrained images, which often depend on pseudo AU labels. These pseudo labels were automatically annotated by an AU detection model[13]trained with constrained images, which are inaccurate due
INTRODUCTION
Facial action unit (AU) detection [1], [2], [3], [4], [5], [6], [7] involves determining the presence of each AU in a given face image. It has gained increasing attention in computer vision and affective computing communities, due to the use of identifying human emotions in various applications. Each AU is a basic facial action for describing facial expressions, as defined by the Facial Action Coding System (FACS) [8], [9]. While AU detection for near-frontal faces in constrained laboratory conditions [10], [11], [12] has achieved remarkable success, AU detection in the wild [13] still remains a challenge. Compared with images captured under fixed conditions, unconstrained images exhibit a wide variability in expressions, poses, ages, illumination, accessories, occlusions, backgrounds and image quality. Furthermore, due to a limited number of experts and the labor-intensive work required [14], it is costly and impractical to manually annotate unconstrained images at a large scale for fully-supervised learning.
Limitations of Existing Solutions. There have been some attempts at AU detection of unconstrained images, which often depend on pseudo AU labels. These pseudo labels were automatically annotated by an AU detection model [13] trained with constrained images, which are inaccurate due to the large domain gap between annotated images and training images. Wang et al. [15] used the pseudo labels to fine-tune a pre-trained face verification network for AU detection, while Benitez-Quiroz et al. [16] introduced a global-local loss to improve robustness on noisy pseudo annotations. Zhao et al. [17] treated each AU independently during the clustering of re-annotating the pseudo labels but did not take into account the correlations among AUs. All these techniques attempt to work with inaccurate labels and do not exploit accurate AU annotations from other domains like constrained datasets [12], [18], which limits their performance.
Instead of using inaccurate pseudo labels, we consider the approach of transferring AU knowledge from a constrained source domain with accurate AU labels to an unconstrained target domain without AU labels. Recently, self-supervised learning [19], [20], [21] without requiring annotations is exploited to transform a target image to be a new image with the pose and expression of a source image, in which paired input images with the same identity from a video are required during training. However, a constrained source image and an unconstrained target image with the same identity are unavailable. If training the model using paired same-identity images from the same domain, it will have limited performance of transforming an unconstrained target image driven by a constrained source image, due to the unresolved domain gap.
To make the AU detector trained using source AU labels applicable for the target domain, we can follow the prevailing adversarial domain adaptation approaches. One intuitive way is to learn domain-invariant features [22], [23]. Although this can bring the domains closer, it may result in the loss of AU-related information since AUs are often tangled with poses which can cause the domain shift. Another possible solution is to translate source-domain images to images with target-domain style [24], [25]. However, only translating the image style fails to reduce other domain shifts caused by pose and occlusion. Our Solution. To tackle the above limitations, we propose to map a source image and a target image into a latent domain, which contains the transferred source AU label and the preserved target appearances such as pose, illumination, occlusion, and background. This latent domain is derived by (a) combining source AU-related information with target AU-free information, and (b) learning a mapping that will maximize the performance of target-domain AU detection. Although accurate AU labels are unavailable for the target domain, accurate annotations on highly AU-related landmarks are easily accessible due to contemporary landmark detection methods [26], [27], [28] with high accuracy comparable to manual labeling.
We combine the source landmark-related feature with the target landmark-free feature in the latent domain, in which the former contains landmark information and is expected to be AU-related, and the latter discards landmark information and is expected to be AU-free. To alleviate the influence of pose, we choose facial inner-landmarks without contour-landmarks for disentangling landmark-related and landmark-free features. Since there are large domain shifts, it is difficult to simultaneously synthesize realistic images and inherit transferred AU information in the image domain. Instead, we map the unpaired source and target images into a latent feature domain, as illustrated in Fig. 1. The latent featurex t contains source AU-related innerlandmark information and target AU-free global pose and texture information, which is beneficial for training targetdomain AU detection.
In particular, the source image is considered to have accurate AU and landmark labels and the target image only has accurate landmark labels. The "rich" features learned from images are firstly disentangled into landmark-free features and landmark-related features by a novel landmark adversarial loss, in which the adversarial learning is treated as a multi-player minimax game instead of a two-player minimax game [29]. Then, the landmark-related features of the two images are swapped and combined with the landmark-free features to generate the latent features. A further disentangle-swap-translate process is applied to crosscyclically reconstruct the original rich features. The entire framework is end-to-end without any post-processing step. During testing, the rich feature of an input target image is simply disentangled, recombined and translated into the latent feature domain for AU detection.
We refer to our framework, AU Detection via Latent Domain, as ADLD. The main contributions of this paper are threefold:
• We propose to map the unpaired source and target images into a latent feature domain, which is specialized for the target-domain AU detection. To our knowledge, this is the first work of introducing such an idea for facial AU detection in the wild.
•
We propose a novel landmark adversarial loss to disentangle the landmark-free feature from the landmark-related feature, in which the adversarial learning for landmark-free feature is treated as a multi-player minimax game.
• Extensive experiments demonstrate that our method soundly outperforms lower-bounds and upperbounds of the basic model, as well as state-of-the-art techniques. The performance of our framework can be further improved by incorporating the pseudo AU labels of the target domain.
RELATED WORK
We review previous techniques that are most relevant to our work, including facial AU detection in the wild, adversarial domain adaptation, semi-supervised facial AU detection, and feature disentanglement.
Facial AU Detection in the Wild
There are some works exploring the challenging problem of facial AU detection in the wild. Considering accurate annotations of unconstrained images are often unavailable, these methods resort to pseudo AU labels.
On one hand, a pre-trained model for another task can be exploited, since different types of images often have similar characteristics like feature consistency in local regions and approximately Gaussian data distribution. Wang et al. [15] first pre-trained a face verification network on CASIA-WebFace [30], then fine-tuned the network on Emo-tioNet [13] to achieve unconstrained AU detection. Jyoti et al. [7] incorporated the features extracted by the network of holistic facial expression recognition into the AU detection network, so as to facilitate AU detection. Ji et al. [31] finetuned two networks pre-trained on face recognition and facial expression recognition datasets respectively, then fused the AU prediction results of two networks.
On the other hand, a few methods focus on improving the robustness on inaccurate annotations. Benitez-Quiroz et al. [16] introduced a global-local loss for AU detection with noisy pseudo labels. The local loss aids predicting each AU independently, while the global loss aggregates multiple AUs to probe the co-occurrence among AUs. Zhao et al. [17] proposed a Weakly Supervised Clustering (WSC) technique to learn an embedding space, which is used to identify visually and semantically similar samples and re-annotate these samples with rank-order clustering. However, each AU is treated independently during clustering, in which the correlations among AUs were ignored. These methods do not explore the use of accurate annotations from other domains, which limits their performance.
Adversarial Domain Adaptation
Adversarial domain adaptation is a prevailing way of transferring knowledge from a source domain to a target domain.
One typical solution is to use an adversarial loss with a domain discriminator to make the features of source and target domains indistinguishable [22], [23], [32], [33]. Ganin et al. [22] proposed a Domain-Adversarial Neural Network (DANN) that is shared between domains to learn domaininvariant features. Instead of using a shared network, Tzeng et al. [23] developed an Adversarial Discriminative Domain Adaptation (ADDA) method by pre-training a network on the source domain and further refining it on the target domain. It minimizes the adversarial loss between the fixed source-domain feature and the trainable targetdomain feature. Despite these methods being effective for domain adaptation, enforcing feature domain invariance is infeasible for AU detection. This is because AU-related information may be removed since AUs are often tangled with poses which can cause the domain gap.
Another form involves translating source images into target-style images. For example, Zheng et al. [25] presented a method for translating rendered images into the real image domain, with a regularization of identity mapping for real input images. Recently, Wang et al. [34] utilized a generative adversarial network [29] to synthesize an image with similar appearance to the target image while retaining AU patterns of the source image, which is a pioneering work of AU detection via adversarial domain adaptation. However, the source and target images processed by this method have similar expressions and are both constrained images, in which only the image differences of AU patterns are considered. Besides, a few gaps between constrained and unconstrained domains like occlusion differences cannot be well resolved by style translation.
Semi-Supervised Facial AU Detection
Due to the high costs of AU labeling, some AU detection methods use a semi-supervised setting. Specifically, only partial samples have complete AU labels, while remaining samples do not have AU labels or only have labels of partial AUs. Besides, the extreme case is all samples do not have AU labels, in which coarse labels like holistic expression are often used. Since this semi-supervised setting is an alternative way to tackle the lack of AU annotations, we discuss the related works in this section.
One scenario is labels of randomly partial AUs are missing. Wu et al. [35] proposed a Multi-Label Learning with Missing Labels (MLML) for AU detection, which assumes the predicted labels to be close for two samples with similar features as well as two classes with similar semantic meanings. However, the assumption in MLML is not always correct, as similarity of samples may be due to having the same identity rather than occurring the same AUs. Another scenario is to employ prior knowledge in terms of correlations between AUs and holistic expressions, as well as correlations among AUs. To directly aid the learning of AU detector, Zhang et al. [36] incorporated prior probabilities including expression-independent and expression-dependent AU probabilities as constraints into the overall objective function. However, applying fixed prior knowledge to all the samples ignores AU dynamics in different samples.
Recently, Niu et al. [37] utilized two networks to generate conditional independent features of different views, and then proposed a multi-label co-regularization loss to enforce the prediction consistency of two views. In this method, a small set of samples with AU labels and a large number of samples without AU labels are from the same domain. Considering each local facial region plays different roles for different AUs and each AU has individual temporal dynamic, Zhang et al. [38] proposed a feature fusion module and a label fusion module by incorporating a learnable taskrelated context into the attention mechanism. It requires AU intensity labels of peak and valley frames in videos, which is a strict requirement and thus limits its applicability. Different from the above methods, our work is based on domain adaptation and transfers AU knowledge from a constrained domain to an unconstrained domain.
Feature Disentanglement
Feature disentanglement is extensively applied in image or video synthesis, which aims to factorize a feature into different components [24], [39], [40].
Lee et al. [24] disentangled representations for image-to-image translation by embedding images into a domain-specific attribute space and a domain-invariant content space that captures shared information across domains. They also employed a cyclic structure [41] to handle unpaired training data. Shu et al. [40] introduced a generative model to disentangle facial shape and appearance in an unsupervised manner, in which the shape can deform the appearance to generate images. To achieve source-to-target video re-animation, Kim et al. [42] rendered a synthetic target video with the reconstructed head animation parameters from a source video, in which the head animation parameters include disentangled head pose, identity, expression, eye gaze and illumination. In contrast with these methods, our approach proposes a landmark adversarial loss to disentangle the landmark-free feature from the landmark-related feature, and combines the disentangled features in a latent feature domain.
Fig. 2. (a) During training, given unpaired g^s and g^t, E_f first extracts (x^s, x^t) which are further disentangled into (z_t^s, z_t^t) and (z_l^s, z_l^t) by E_t and F_l. Then, G combines z_t^s and z_l^t to generate x̃^s, and combines z_t^t and z_l^s to generate x̃^t. The disentangle-swap-translate process in the dotted box contains E_t, G, and F_l with L_l. Another disentangle-swap-translate process is applied to (x̃^s, x̃^t) to complete the crossed cycle. The mapping to the latent feature domain is learned by maximizing the performance of the AU detector F_a given x̃^t. Note that the self-reconstruction loss L_r is not shown. During testing, we input (b) x^s and (c) G(z_l^t, z_t^t) to F_a for source-domain and target-domain AU detection, respectively.
UNCONSTRAINED FACIAL AU DETECTION
Overview
Our main goal is to achieve unconstrained facial AU detection, in which the AU occurrence probabilitiesp t can be predicted given an unconstrained image g t . The main challenge lies in the training setting that we have access to a collection of constrained images from the source domain with both AU and landmark labels, and also an unpaired collection of unconstrained images from the target domain with only landmark labels. We denote a source image of size l × l × 3 as g s , with its AU label p s and landmark label q s , while an unpaired target image of the same size is g t with landmark label q t . The occurrence probabilities of all m AUs are p s = (p s 1 , · · · , p s m ), while the x-y positions of all n landmarks are in q s = (q s 1 , q s 2 , · · · , q s 2n−1 , q s 2n ). Fig. 2 shows the overall architecture of our ADLD framework. During training, our framework consists of two similar paths: top and bottom paths respectively taking in source-domain images and target-domain images. In particular, given two unpaired images (g s , g t ), we first apply a feature encoder E f to extract rich features (x s , x t ). Then we use a texture encoder E t and a landmark detector F l to disentangle the rich features (x s , x t ) into landmark-free features (z s t , z t t ) and landmark-related features (z s l , z t l ), in which the former are expected to be AU-free and the latter are expected to be AU-related. A generator G is further applied to combine the landmark-free features with the swapped landmark-related features, and translates them to latent features (x s ,x t ). After that, we apply another round of the disentangle-swap-translate process to the latent features to obtain the cross-cyclically reconstructed rich features (x s ,x t ). The key to the AU label transfer from source to target images lies in the combination of the target landmark-free feature z t t , which contains the target global pose and texture for adapting to unconstrained conditions of the target domain, with the source landmark-related feature z s l , which brings over the associated source AU label. In this way, we use the transferred AU labels to train an AU detector which can adapt to the unconstrained target domain. By maximizing the performance of the AU detector F a givenx t , we can learn the mapping from source and target domains to the latent feature domain. The landmark discriminator D l is used to ensure the landmark-free feature cannot predict the locations of landmarks so as to be disentangled from the landmark-related feature. The feature discriminators {D s f , D t f } aim to discriminate between the rich features TABLE 2 Rules for defining the locations of AU centers, which are applicable to an aligned face image with eye centers on the same horizontal line.
"Scale" denotes the distance between the inner corners of eyes.
AU Description Location 1
Inner brow raiser 1/2 scale above inner brow 2
Outer brow raiser 1/3 scale above outer brow 4
Brow lowerer 1/3 scale below brow center 5
Upper lid raiser 1/3 scale below brow center 6
Cheek raiser 1 scale below eye bottom 7
Lid tightener Eye 9
Nose wrinkler 1/2 scale above nose bottom 10
Upper (x s , x t ) and the latent features (x s ,x t ) in order to bring them closer. We denote the domains of features and labels using the corresponding capitals, e.g., domain X T for x t . The main notations are summarized in Table 1.
AU Label Transfer
Definition of AU-Related Landmarks
A few previous works [2], [3] exploit facial landmarks to predefine the locations of AU centers based on prior knowledge, as defined in Table 2. Some AU centers are exactly on the locations of landmarks, and other AU centers have certain offsets from the locations of landmarks. The corresponding landmarks of these predefined AU centers are from 49 facial inner-landmarks [43], as illustrated in Fig. 3(a). Considering the predefined AU centers can be used to extract highly AU-related features so as to facilitate AU detection, we use these AU centers to replace their corresponding landmarks, as shown in Fig. 3(b). Since the correlations among different facial regions are beneficial for AU detection [44], other landmarks are also employed. Note that these 49 landmarks do not contain facial contour-landmarks which are on the facial global contour. In this way, the learned landmark-free feature can discard AU-related information in facial inner regions while preserving AU-free facial global pose. Besides, the new landmark definition in Fig. 3(b) is applied for all the different datasets, even if some AUs in Table 2 are not evaluated due to the lack of their annotations. This is because the detection of a certain AU can benefit from the correlations with other AUs.
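For illustration only, the first two rules in Table 2 can be expressed as fixed vertical offsets in units of the inter-ocular "scale". The sketch below is an assumption-laden toy: the landmark indices are hypothetical placeholders, not the 49-point indexing used by the authors, and only the offset logic follows the table.

```python
import numpy as np

# Hypothetical indices into a (49, 2) inner-landmark array; y grows downward.
INNER_EYE = (22, 25)      # inner eye corners (placeholder indices)
INNER_BROW = (2, 5)       # left/right inner-brow points (placeholder indices)
OUTER_BROW = (0, 7)       # left/right outer-brow points (placeholder indices)

def au_centers(lm):
    """lm: (49, 2) array of (x, y) inner-landmark positions of an aligned face."""
    scale = np.linalg.norm(lm[INNER_EYE[0]] - lm[INNER_EYE[1]])
    up = np.array([0.0, -1.0])                                     # "above" = smaller y
    centers = {}
    centers[1] = [lm[i] + 0.5 * scale * up for i in INNER_BROW]    # AU1: 1/2 scale above inner brow
    centers[2] = [lm[i] + scale / 3.0 * up for i in OUTER_BROW]    # AU2: 1/3 scale above outer brow
    return centers
```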
Disentanglement of Landmark-Free and Landmark-Related Features
Taking x t as an example, we want it to be disentangled into the landmark-free feature z t t and the landmark-related feature z t l , in which the former is free of facial inner-landmark information and the latter contains inner-landmark information. Landmark-Free Feature. To remove inner-landmark information for the landmark-free feature, we introduce the landmark discriminator D l as the adversary of the texture encoder E t . Since adversarial learning [29] for cross entropy loss is widely used in feature disentanglement [39], [40], we regard facial landmark detection as a classification problem [45], [46] instead of a regression problem [47], [48]. Specifically, the output of D l is n feature maps, each of which can be seen as a response map with a size of d × d × 1 for each landmark. Each position in the response map is considered as one class and the total number of classes is d 2 . The class label of the i-th landmark is defined as
$$y_i^t = \big(\lfloor q_{2i}^t\, d/l \rceil - 1\big)\,d + \lfloor q_{2i-1}^t\, d/l \rceil, \tag{1}$$
where · denotes the operation of rounding a number to the nearest integer, and i = 1, · · · , n. Eq. (1) is used for converting the landmark detection from a regression problem to a classification problem, in which the groundtruth x-and y-coordinates of a landmark at l × l scale are transformed to a 1-D location index at 1 × d 2 scale. Similar to the conventional adversarial loss [29] with the form of binary cross entropy loss, we define the landmark adversarial loss as a multi-class cross entropy loss of the multi-player minimax game:
$$L^{ad}_{l}(E_t, D_l, X^T, Y^T) = \mathbb{E}_{x^t\sim X^T}\Big[\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{d^2} \mathbf{1}_{[k=y_i^t]}\,\log\big(\sigma(D_l^{(i,k)}(E_t(x^t)))\big)\Big], \tag{2}$$
where $E_t(x^t) = z_t^t$, $D_l^{(i,k)}(\cdot)$ is the k-th value in the i-th response map output by $D_l$, $\mathbf{1}_{[\cdot]}$ denotes the indicator function, and σ(·) denotes the softmax function that is applied across spatial locations for each response map. However, the two-player minimax game [29] designed for binary cross entropy loss does not work for this multi-class cross entropy loss.
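For concreteness, a minimal sketch (tensor shapes and the 0-based target indexing are assumptions, not the authors' released code) of the coordinate-to-index conversion in Eq. (1) and the per-landmark softmax cross entropy used in Eqs. (2) and (5):

```python
import torch
import torch.nn.functional as F

def landmark_class_indices(q, l, d):
    """q: (B, 2n) ground-truth landmarks laid out as (x1, y1, ..., xn, yn) in [0, l)."""
    x, y = q[:, 0::2], q[:, 1::2]
    col = torch.clamp(torch.round(x * d / l).long(), 1, d)   # round to nearest, as in Eq. (1)
    row = torch.clamp(torch.round(y * d / l).long(), 1, d)
    return (row - 1) * d + (col - 1)                          # (B, n), shifted to 0-based classes

def multiclass_landmark_ce(response_maps, q, l):
    """response_maps: (B, n, d, d) raw responses of D_l or F_l; softmax runs over the d*d grid."""
    B, n, d, _ = response_maps.shape
    logits = response_maps.reshape(B * n, d * d)
    target = landmark_class_indices(q, l, d).reshape(B * n)
    return F.cross_entropy(logits, target)
```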
We propose a novel strategy to solve this multi-player minimax game in adversarial learning. While keeping the same adversarial principle, we train D l by minimizing:
$$\mathbb{E}_{x^t\sim X^T}\Big[\frac{1}{nd^2}\sum_{i=1}^{n}\sum_{k=1}^{d^2}\Big(\mathbf{1}_{[k\neq y_i^t]}\,\big\|D_l^{(i,k)}(E_t(x^t))\big\|_2^2 + \mathbf{1}_{[k=y_i^t]}\,\big\|D_l^{(i,k)}(E_t(x^t)) - 1\big\|_2^2\Big)\Big], \tag{3}$$
where we encourage D l to generate 1 at the ground-truth landmark locations while generating 0 at the other locations. Conversely, we train E t by minimizing:
$$\mathbb{E}_{x^t\sim X^T}\Big[\frac{1}{nd^2}\sum_{i=1}^{n}\sum_{k=1}^{d^2}\Big\|D_l^{(i,k)}(E_t(x^t)) - \frac{1}{d^2}\Big\|_2^2\Big], \tag{4}$$
where E t tries to remove the landmark information as much as possible so that D l will generate the same probability 1/d 2 for all possible landmark locations. Such least-squares loss in Eqs. (3) and (4) is often used in adversarial learning due to its stability [49]. The combination of Eq. (3) and Eq. (4) completely defines the landmark adversarial loss L ad l (E t , D l , X T , Y T ). In Fig. 2(a), we can observe that z t t contains AU-free information including global pose and texture, which are beneficial for the latent featurex t to adapt to unconstrained conditions of the target domain. Besides, the gradients from E t are set to not be back-propagated to E f and G for avoiding the adversarial training between E t and D l impacts the learning of x t and x t , respectively. Landmark-Related Feature. To extract the landmark-related feature, we employ the landmark detector F l to predict the locations of facial inner-landmarks. By treating the landmark detection as a classification problem, we define the landmark classification loss as
$$L_l(F_l, X^T, Y^T) = -\,\mathbb{E}_{x^t\sim X^T}\Big[\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{d^2}\mathbf{1}_{[k=y_i^t]}\,\log\big(\sigma(F_l^{(i,k)}(x^t))\big)\Big], \tag{5}$$
where F l also outputs n response maps similar to D l . Minimizing Eq. (5) encourages the i-th response map to have the highest response σ(F (i,y t i ) l (x t )) at the location ( q t 2i−1 d/l , q t 2i d/l ) while having near-zero responses at other locations.
To make the landmark-related feature z t l contain facial inner shape information, we sum the response maps of all n landmarks element-wise:
$$z_l^t = \oplus_{i=1}^{n}\,\sigma\big(F_l^{(i)}(x^t)\big), \tag{6}$$
where ⊕ denotes element-wise sum. We express Eq. (6) with a simplified form z t l =F l (x t ). The landmark-related feature is enforced to only have high responses at the landmark locations while discarding other AU-free information, as shown in Fig. 2(a).
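A compact sketch of how the least-squares objectives (3)-(4) and the landmark-related feature (6) could look in code; the (B, n, d, d) response-map layout and the mean reduction are assumptions rather than details reported by the authors.

```python
import torch

def d_l_loss(d_maps, target_idx):
    """Eq. (3): push D_l toward 1 at ground-truth landmark locations and 0 elsewhere.
    d_maps: (B, n, d, d) responses of D_l(E_t(x)); target_idx: (B, n) 0-based class indices."""
    B, n, d, _ = d_maps.shape
    flat = d_maps.reshape(B, n, d * d)
    target = torch.zeros_like(flat).scatter_(2, target_idx.unsqueeze(-1), 1.0)
    return ((flat - target) ** 2).mean()

def e_t_loss(d_maps):
    """Eq. (4): push every response toward the uniform value 1/d^2 so that D_l
    cannot localize any landmark from the landmark-free feature."""
    B, n, d, _ = d_maps.shape
    return ((d_maps - 1.0 / d ** 2) ** 2).mean()

def landmark_related_feature(f_maps):
    """Eq. (6): element-wise sum of the per-landmark softmaxed response maps of F_l."""
    B, n, d, _ = f_maps.shape
    probs = torch.softmax(f_maps.reshape(B, n, d * d), dim=2).reshape(B, n, d, d)
    return probs.sum(dim=1, keepdim=True)        # (B, 1, d, d) landmark-related feature
```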
AU Detection in Latent Feature Domain
Since the landmark-related feature contains AU-related information, we can inherit the source AU label by introducing the source landmark-related feature z s l . In particular, we swap the landmark-related features z s l and z t l , and input z s l and z t t to the generator G to generate the latent featurex t :
$$\tilde{x}^t = G\big(F_l(x^s),\, E_t(x^t)\big), \tag{7}$$
where the channels of z s l and z t t are concatenated to input to G.x t in the latent feature domain is expected to include preserved AU-free information from x t , and transferred AUrelated information with AU and landmark labels from x s . To enforcex t to inherit source AU-related information, we apply L l (F l ,X T , Y S ). In Fig. 2(a), at each training iteration, the parameters of F l are updated for x s and x t , while fixed forx s andx t so that F l only used for constraining their generation. This is to avoid that F l is influenced by the generation of latent features, which will weaken the effect of constraint.
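Concretely, Eq. (7) is just a channel-wise concatenation followed by the generator; a minimal sketch assuming (B, C, H, W) feature tensors:

```python
import torch

def translate_to_latent(G, z_l_src, z_t_tgt):
    """Eq. (7): combine the source landmark-related feature with the target
    landmark-free feature and decode the latent feature with G."""
    return G(torch.cat([z_l_src, z_t_tgt], dim=1))   # concatenate along channels
```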
Then, we achieve target-domain AU detection by applying the AU detector F_a on x̂^t with an AU detection loss:
L_a(F_a, \hat{X}^T, P^S) = -\mathbb{E}_{\hat{x}^t \sim \hat{X}^T}\Big[\sum_{j=1}^{m} w_j^s\big(p_j^s \log \hat{p}_j^s + (1 - p_j^s)\log(1 - \hat{p}_j^s)\big)\Big],    (8)
where p_j^s is the ground-truth occurrence probability of the j-th AU transferred from x^s, p̂_j^s = δ(F_a^{(j)}(x̂^t)) is the predicted occurrence probability of the j-th AU, δ(·) is the sigmoid function, and w_j^s is a weight parameter [3] for alleviating the data imbalance problem. We choose w_j^s = (1/r_j^s) / ∑_{u=1}^{m}(1/r_u^s), where r_j^s is the occurrence rate of the j-th AU in the source-domain training set. With Eq. (8), we learn the mapping from the source and target domains to the latent feature domain by maximizing the performance of target-domain AU detection. Although we do not focus on source-domain AU detection, x̂^s is also obtained in the latent feature domain due to the symmetric structure of our ADLD framework, as shown in Fig. 2(a).
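A sketch of the weighted AU detection loss of Eq. (8), including the weights w_j^s computed from source-domain occurrence rates; the example rates are the BP4D values from Table 3 for the six evaluated AUs, and everything else (function names, the logit convention) is an illustrative assumption:

```python
import torch
import torch.nn.functional as F

def au_weights(occurrence_rates):
    """w_j^s = (1 / r_j^s) / sum_u (1 / r_u^s)."""
    inv = 1.0 / torch.as_tensor(occurrence_rates, dtype=torch.float32)
    return inv / inv.sum()

def au_detection_loss(logits, labels, weights):
    """Weighted binary cross entropy of Eq. (8).

    logits:  (B, m) raw outputs of the AU detector F_a (sigmoid applied inside).
    labels:  (B, m) float AU occurrence labels transferred from the source image.
    weights: (m,)   per-AU weights from au_weights().
    """
    per_au = F.binary_cross_entropy_with_logits(logits, labels, reduction='none')
    return (per_au * weights).sum(dim=1).mean()

# Example: BP4D occurrence rates from Table 3 for AUs 1, 2, 4, 6, 12, 17.
w = au_weights([0.184, 0.146, 0.198, 0.440, 0.540, 0.342])
```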
Reliability Constraints on Latent Feature Domain
To obtain a reliable latent feature domain, we want it to have a structure similar to both the source domain and the target domain. To encourage the latent features to be indistinguishable from the rich features, we impose two feature discriminators D_f^s and D_f^t with a feature adversarial loss L^ad_f for the source and target domains, respectively. L^ad_f for x̂^t is defined as
L^{ad}_f(F_l, E_t, G, D_f^t, X^T) = \mathbb{E}_{x^t \sim X^T}\big[\log D_f^t(x^t)\big] + \mathbb{E}_{\hat{x}^t \sim \hat{X}^T}\big[\log\big(1 - D_f^t(\hat{x}^t)\big)\big].    (9)
For stable adversarial learning, in our implementation we use the least-squares loss [49] to train L^ad_f. In particular, we train D_f^t by minimizing

\mathbb{E}_{x^t \sim X^T}\big[\|D_f^t(x^t) - 1\|_2^2\big] + \mathbb{E}_{\hat{x}^t \sim \hat{X}^T}\big[\|D_f^t(\hat{x}^t)\|_2^2\big],

and train G by minimizing \mathbb{E}_{\hat{x}^t \sim \hat{X}^T}\big[\|D_f^t(\hat{x}^t) - 1\|_2^2\big].

As illustrated in Fig. 2(c), the rich feature x^t of an input target image g^t is disentangled, recombined and translated into a self-reconstructed latent feature G(z_l^t, z_t^t) during testing. Similarly, during training we expect this self-reconstructed latent feature to be similar to the rich feature, which is enforced by a self-reconstruction loss:

L_r(F_l, E_t, G, X^T) = \mathbb{E}_{x^t \sim X^T}\big[\|G(z_l^t, z_t^t) - x^t\|_1\big].    (10)
Besides, considering the effectiveness of cyclic structure [24], [41] for unpaired training data, we employ a cross-cycle consistency loss L cc to encourage the cross-cyclically reconstructed rich feature to be similar to the rich feature:
L_{cc}(F_l, E_t, G, X^T, X^S) = \mathbb{E}_{x^t \sim X^T,\, x^s \sim X^S}\big[\|G(F_l(\hat{x}^s), E_t(\hat{x}^t)) - x^t\|_1\big],    (11)

where G(F_l(x̂^s), E_t(x̂^t)) is the rich feature cross-cyclically reconstructed for x^t.
With L ad f , L r and L cc , we can generate a reliable latent feature domain specialized for target-domain AU detection.
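The reliability constraints can be sketched as follows: the least-squares form of the feature adversarial loss described above for Eq. (9), and the L1 losses of Eqs. (10)-(11). Function and variable names are our own placeholders, not the authors' code:

```python
import torch
import torch.nn.functional as F

def feature_adv_losses(D_f, real_feat, fake_feat):
    """Least-squares training of Eq. (9): D_f pushes rich features to 1 and
    generated latent features to 0, while G pushes generated features to 1."""
    real_out = D_f(real_feat)
    fake_out = D_f(fake_feat.detach())
    d_loss = F.mse_loss(real_out, torch.ones_like(real_out)) + \
             F.mse_loss(fake_out, torch.zeros_like(fake_out))
    gen_out = D_f(fake_feat)
    g_loss = F.mse_loss(gen_out, torch.ones_like(gen_out))
    return d_loss, g_loss

def reconstruction_losses(x_t, self_recon, cross_recon):
    """Eq. (10): self-reconstruction G(z_l^t, z_t^t) vs. x^t;
    Eq. (11): cross-cyclically reconstructed rich feature vs. x^t."""
    return F.l1_loss(self_recon, x_t), F.l1_loss(cross_recon, x_t)
```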
Overall Objective Function
As shown in Fig. 2(a), the losses introduced above are applied to both source and target images in our ADLD framework. Specifically, L_l(F_l, X^S, Y^S) and L_l(F_l, X^T, Y^T) are used for training the landmark detector F_l, while L_l(F_l, X̂^S, Y^T) and L_l(F_l, X̂^T, Y^S) are only used for constraining the generation of the latent features x̂^s and x̂^t. L_a(F_a, X^S, P^S) and L_a(F_a, X̂^T, P^S) are used for training the AU detector F_a. The remaining losses defined in Eqs. (2), (9), (10) and (11) are also applied to the source image: L^ad_l(E_t, D_l, X^S, Y^S), L^ad_f(F_l, E_t, G, D_f^s, X^S), L_r(F_l, E_t, G, X^S) and L_cc(F_l, E_t, G, X^S, X^T).
Combining all the losses, we yield the overall objective function:
\min_{\{F_a, F_l\}}\,\min_{\{E_f, E_t, G\}}\,\max_{\{D_l, D_f^s, D_f^t\}} L_{ADLD} = L_a + \lambda_l L_l + \lambda_{ad_l} L^{ad}_l + \lambda_{ad_f} L^{ad}_f + \lambda_r L_r + \lambda_{cc} L_{cc},    (12)
where the hyper-parameters λ_(·) control the importance of each loss term. Our framework is end-to-end trainable, in which all the network modules are trained jointly. At test time, the inputs to F_a are the source rich feature x^s and the target self-reconstructed latent feature G(z_l^t, z_t^t) for given source and target images, respectively. This inference process is consistent with the training process, which is beneficial for AU detection in both the source and target domains.
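On the minimization side, Eq. (12) reduces to a weighted sum of the individual loss terms. The sketch below uses the hyper-parameter values reported later in the implementation details; the function name and the separation of discriminator updates are our own assumptions:

```python
# Hyper-parameter weights reported later in the implementation details.
LAMBDA_L, LAMBDA_AD_L, LAMBDA_AD_F, LAMBDA_R, LAMBDA_CC = 0.6, 400, 1.2, 3, 40

def total_generator_loss(L_a, L_l, L_ad_l, L_ad_f, L_r, L_cc):
    """Weighted sum on the minimization side of Eq. (12); the discriminators
    D_l, D_f^s and D_f^t are updated separately with their own objectives."""
    return (L_a + LAMBDA_L * L_l + LAMBDA_AD_L * L_ad_l +
            LAMBDA_AD_F * L_ad_f + LAMBDA_R * L_r + LAMBDA_CC * L_cc)
```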
EXPERIMENTS
Datasets and Settings
Datasets
In our experiments, we utilized four popular AU detection datasets, BP4D [12], GFT [18], DISFA [11] and UNBC-McMaster Shoulder Pain [50], for the constrained source domain, and the challenging EmotioNet [13] and VGGFace2 [51] for the unconstrained target domain. Note that we evaluate frame-level AU detection, and thus datasets with only video-level annotations such as CK+ [10] are not used.
• BP4D comprises 328 videos of 41 subjects, each of whom participates in 8 sessions. These videos contain both AU and landmark annotations and were captured in constrained conditions with near-frontal faces, good illumination and simple backgrounds. We removed a few frames without AU and landmark annotations, and partitioned the remaining frames into a training set with 100,767 images of 28 subjects, a validation set with 24,869 images of 7 subjects and a test set with 20,940 images of 6 subjects.
• GFT includes 96 subjects in 32 three-subject groups with unscripted social interactions, in which each subject was captured in a video with both AU and landmark annotations. Although the captured frames show moderate out-of-plane poses, they are still in constrained conditions with good illumination and simple backgrounds. There are a few frames without AU annotations. We ignored these frames, and partitioned the remaining frames into a training set with 83,346 images of 60 subjects, a validation set with 24,145 images of 18 subjects and a test set with 24,621 images of 18 subjects.
• DISFA consists of 27 subjects, each of whom was recorded in one video. Each frame was labeled with 66 facial landmarks, which include the 49 landmarks in Fig. 3(a), as well as AU intensities on a six-point ordinal scale from 0 to 5. Following the setting in [2], [3], we treated AU intensities equal to or greater than 2 as occurrence and the others as non-occurrence. The frames were partitioned into a training set with 82,971 images of 18 subjects, a validation set with 19,275 images of 4 subjects and a test set with 23,898 images of 5 subjects.
• UNBC-McMaster Shoulder Pain was captured with 200 videos from 25 subjects suffering from shoulder pain. Each frame was annotated with 66 landmarks as well as AU intensities ranging from 0 to 5. Similar to the setting for DISFA, AU intensities equal to or greater than 2 were considered as occurrence, while the others were considered as non-occurrence. The frames were partitioned into a training set with 34,025 images of 18 subjects, a validation set with 6,269 images of 3 subjects and a test set with 8,104 images of 4 subjects. We denote this dataset as Pain in the following sections.
• EmotioNet contains about one million training and validation images collected from the Internet, and exhibits unconstrained variations of expression, pose, illumination and occlusion. The AU labels of the training images were automatically annotated by [13], and those of the validation images were manually annotated by certified experts. Since landmark annotations were not provided, we employed the powerful landmark detection library OpenPose [27] to annotate the 49 facial landmarks defined in Fig. 3(a) for each image; images for which landmark detection failed were removed. We randomly selected 100,767 training images as a training set, and split the validation images into a validation set with 10,544 images and a test set with 10,544 images. Note that the training set has inaccurate pseudo AU labels, while the validation set and the test set have accurate manual AU labels.
• VGGFace2 is a large-scale face recognition dataset, which consists of 3.31 million images with large variations in pose, age and illumination. We also use OpenPose [27] to annotate 49 facial landmarks for each image. Since VGGFace2 has neither manual AU labels nor pseudo AU labels, it is applied in the scenario of only using landmark labels for the target domain. We randomly selected 100,767 images as a training set, and use the validation and test sets of EmotioNet for validation and testing, respectively.
Evaluation Metrics
The common AUs of BP4D and EmotioNet are AUs 1, 2, 4, 5, 6, 9, 12, 17 and 20; the common AUs of GFT and EmotioNet are AUs 1, 2, 4, 5, 6, 9, 12 and 17; the common AUs of DISFA and EmotioNet are AUs 1, 2, 4, 5, 6, 9, 12, 17, 20, 25 and 26; and the common AUs of Pain and EmotioNet are AUs 4, 6, 9, 12, 20, 25 and 26. The AU occurrence rates in the training sets of the source domain are shown in Table 3. We can see that some AUs like AU 5 and AU 20 have very low occurrence rates, while other AUs like AU 6 and AU 12 have high occurrence rates. Similar to [17], to alleviate this data imbalance issue, we chose the AUs whose occurrence rates in the source-domain training set are larger than 6% to evaluate our framework. In this way, we used AUs 1, 2, 4, 6, 12 and 17 for BP4D, AUs 2, 6, 12 and 17 for GFT, AUs 4, 6, 12, 25 and 26 for DISFA, and AUs 6 and 12 for Pain.
Following the previous techniques [2], [3], we report the frame-based F1-score (F1-frame) for AU detection; meanwhile the average result over all AUs (abbreviated as Avg) is also presented. In the following sections, the F1-frame results are reported in percentages with "%" omitted.
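As a reference, the F1-frame metric can be computed per AU and then averaged as in the sketch below; the use of scikit-learn here is our own choice, not necessarily the authors':

```python
import numpy as np
from sklearn.metrics import f1_score

def f1_frame(y_true, y_pred):
    """Frame-based F1-score per AU plus the average over AUs (Avg).

    y_true, y_pred: (num_frames, num_aus) binary arrays of AU occurrences.
    """
    per_au = [f1_score(y_true[:, j], y_pred[:, j]) for j in range(y_true.shape[1])]
    return np.array(per_au), float(np.mean(per_au))
```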
Implementation Details
Our ADLD framework consists of F_a, F_l, E_f, E_t, G, D_l, D_f^s and D_f^t. Specifically, F_a uses an independent branch to estimate the occurrence probability of each AU, in which each branch contains 4 convolutional layers followed by a global average pooling layer [52] and a one-dimensional fully-connected layer. F_l and D_l have the same structure with 5 convolutional layers, where the last layer has n channels. The other modules are mainly composed of convolutional layers, in which D_f^s and D_f^t share the same structure. For E_f and F_a, which are related to the AU detection task, each convolutional layer is followed by Batch Normalization [53] and a Parametric Rectified Linear Unit (PReLU) [54]. For F_l, E_t, G, D_l, D_f^s and D_f^t, which are involved in generation and discrimination, each convolutional layer is followed by Instance Normalization [55] and PReLU. To facilitate feature translation, the Tanh function is applied to the outputs of E_f, E_t and G.
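The description of F_a above can be sketched as follows; the channel widths and kernel sizes are illustrative assumptions, since only the layer types are specified:

```python
import torch
import torch.nn as nn

class AUBranch(nn.Module):
    """One branch of F_a: 4 conv layers (each followed by BatchNorm and PReLU),
    global average pooling, and a 1-D fully-connected layer giving one AU logit."""

    def __init__(self, in_ch=64, width=64):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(4):
            layers += [nn.Conv2d(ch, width, kernel_size=3, padding=1),
                       nn.BatchNorm2d(width), nn.PReLU()]
            ch = width
        self.body = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)   # global average pooling
        self.fc = nn.Linear(width, 1)

    def forward(self, x):
        return self.fc(self.pool(self.body(x)).flatten(1))

class AUDetector(nn.Module):
    """F_a: one independent branch per AU, producing (B, m) logits."""

    def __init__(self, num_aus, in_ch=64):
        super().__init__()
        self.branches = nn.ModuleList(AUBranch(in_ch) for _ in range(num_aus))

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)
```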
Our framework was trained using PyTorch [56]. Similar to Shao et al. [3], each sample image was aligned to 200 × 200 × 3 using a similarity transformation and further randomly cropped into l × l × 3 and mirrored. In our experiments, the number of landmarks n, the crop size l and the width of the landmark response map d are set to 49, 176 and 44, respectively. The numbers of channels for x^t, z_t^t and z_l^t are 64, 64 and 1, respectively. The hyper-parameters of the different loss terms are set to the values giving the overall best performance on the validation sets: λ_l = 0.6, λ_ad_l = 400, λ_ad_f = 1.2, λ_r = 3 and λ_cc = 40. We used the Adam solver [57], setting β_1 = 0.5, β_2 = 0.9 and an initial learning rate of 5 × 10^−5 for E_t, G, D_l, D_f^s and D_f^t, as well as β_1 = 0.95, β_2 = 0.999 and an initial learning rate of 10^−4 for E_f, F_a and F_l. The learning rates were unchanged during the first 5 epochs and linearly decayed during the next 5 epochs. More details can be found in our code https://github.com/ZhiwenShao/ADLD.
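The optimizer setup and learning-rate schedule described above could be written as in the sketch below; the exact shape of the linear decay is one possible reading of "unchanged for 5 epochs, then linearly decayed over the next 5", and the function name is ours:

```python
import itertools
import torch

def build_optimizers(gan_modules, task_modules):
    """gan_modules:  e.g. [E_t, G, D_l, D_f_s, D_f_t] -> Adam(lr=5e-5, betas=(0.5, 0.9))
       task_modules: e.g. [E_f, F_a, F_l]             -> Adam(lr=1e-4, betas=(0.95, 0.999))"""
    opt_gan = torch.optim.Adam(
        itertools.chain(*(m.parameters() for m in gan_modules)),
        lr=5e-5, betas=(0.5, 0.9))
    opt_task = torch.optim.Adam(
        itertools.chain(*(m.parameters() for m in task_modules)),
        lr=1e-4, betas=(0.95, 0.999))

    def lr_lambda(epoch):
        # Constant for the first 5 epochs, then linear decay over the next 5.
        return 1.0 if epoch < 5 else max(0.0, (10 - epoch) / 5.0)

    scheds = [torch.optim.lr_scheduler.LambdaLR(o, lr_lambda) for o in (opt_gan, opt_task)]
    return opt_gan, opt_task, scheds
```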
Our Framework vs. Lower-Bounds and Upper-Bounds of the Basic Model
In our training setting, we made use of the source images with both AU and landmark labels, and the target images with only landmark labels. We treat a network composed of the AU-detection-related modules from our framework as the Basic Model. Specifically, it comprises E_f followed by two parallel modules F_a and F_l. The lower-bounds use the same training setting, and the upper-bounds further use target-domain pseudo AU labels. To validate our framework, we expect our method to perform better than both the lower-bounds and the upper-bounds of the basic model for target-domain AU detection.
In particular, there are two lower-bounds of the basic model: LI_s(a,l) and LI^t(l)_s(a,l), in which the former was trained with L_a(F_a, X^S, P^S) and L_l(F_l, X^S, Y^S) using only the source images with AU and landmark labels, and the latter further utilizes the target images with landmark labels by adding L_l(F_l, X^T, Y^T). By using pseudo AU labels of the target images, there are two upper-bounds of the basic model: UI^t(a,l) and UI^t(a,l)_s(a,l), in which the former only employs the target images with pseudo AU labels and landmark labels, and the latter employs images with all available labels of both domains. Moreover, our ADLD framework in Fig. 2(a) can be naturally extended to the scenario with target-domain pseudo AU labels by applying L_a(F_a, X^T, P^T) and L_a(F_a, X̂^S, P^T), which is denoted as ADLD-Full.

Evaluation on BP4D and EmotioNet. We compared our method with the lower-bounds and upper-bounds of the basic model on the test sets of both the source domain BP4D and the target domain EmotioNet. The F1-frame results of these methods are listed in Table 4. It can be seen that our method ADLD significantly outperformed the lower-bounds on EmotioNet, with a margin of 11.3 in average F1-frame over LI^t(l)_s(a,l). Without using the pseudo AU labels of the target domain, ADLD still performed better than the upper-bounds on EmotioNet. If the pseudo AU labels are available, our ADLD-Full achieves an average F1-frame of 42.4, with a large gap over the upper-bounds. These results demonstrate that our method is superior to both the lower-bounds and the upper-bounds of the basic model for target-domain AU detection.

Evaluation on GFT and EmotioNet. Table 5 shows the results on the test sets of the source domain GFT and the target domain EmotioNet. We can observe that our ADLD performed better than both the lower-bounds and the upper-bounds on EmotioNet. Given the target-domain pseudo AU labels, our ADLD-Full further improved the average F1-frame from 41.6 to 46.0. Despite being devised for AU detection of the target domain, our ADLD and ADLD-Full also achieved competitive performance on the source domain GFT.

Moreover, there are several interesting observations from Tables 4 and 5. (i) By using target-domain landmark labels, LI^t(l)_s(a,l) achieved higher average F1-frame results than LI_s(a,l) on both the source domain and the target domain, which indicates that facial landmarks can capture AU-related information to facilitate AU detection. (ii) LI_s(a,l) and UI^t(a,l) showed poor performance on the target domain and the source domain, respectively. This is because there is a large gap between the constrained source domain and the unconstrained target domain. (iii) UI^t(a,l)_s(a,l) performed worse than LI_s(a,l) on the source domain, and also had no apparent advantage over UI^t(a,l) on the target domain. This is because, without domain transfer, the training of source-domain AU detection and target-domain AU detection compete against each other.
Comparison with State-of-the-Art Methods
We compared our approach against state-of-the-art methods, including fully-supervised AU detection methods using pseudo AU labels and adversarial domain adaptation methods. All methods compared were implemented with their released code.
Fully-Supervised AU Detection
To enable the comparison with fully-supervised AU detection methods, we considered the scenario where target-domain pseudo AU labels are available. For a reliable comparison, we only compared state-of-the-art AU detection methods with released code. There are two recent fully-supervised AU detection methods, JAA-Net [3] and ARL [44], as well as an AU detection technique, WSC [17], specialized for inaccurate pseudo AU labels.
Specifically, we trained JAA-Net and ARL using the landmark labels and pseudo AU labels of the target domain, and obtained JAA-Net-I^t(a,l) and ARL-I^t(a,l), respectively. Note that ARL does not require landmarks, so landmark labels were actually not used. By further using source-domain landmark and AU labels, we obtain JAA-Net-I^t(a,l)_s(a,l) and ARL-I^t(a,l)_s(a,l). WSC exploits AU-related features to refine the pseudo AU labels, and then uses the re-annotated AU labels to retrain AU detection. We employed UI^t(a,l)_s(a,l) and UI^t(a,l) to extract AU-related features, respectively, in which the output of the global average pooling layer of each branch in F_a is treated as a related feature for the corresponding AU. This follows the setting of WSC in which each AU is processed independently. With the target-domain landmark labels and re-annotated AU labels, we can further retrain UI^t(a,l)_s(a,l) and UI^t(a,l) by adopting and not adopting source-domain images, which are denoted as WSC-I^t(a,l)_s(a,l) and WSC-I^t(a,l), respectively.

Table 6 shows the F1-frame results of our ADLD-Full and state-of-the-art fully-supervised AU detection methods in the scenario with target-domain pseudo AU labels. It can be seen that our method outperformed previous fully-supervised AU detection methods on EmotioNet for any one source domain dataset. Note that JAA-Net-I^t(a,l), JAA-Net-I^t(a,l)_s(a,l), ARL-I^t(a,l) and ARL-I^t(a,l)_s(a,l) performed significantly better than the upper-bounds UI^t(a,l) and UI^t(a,l)_s(a,l). This is because our AU detector F_a has a less complex structure than the state-of-the-art JAA-Net and ARL. Our main goal is to propose an effective AU label transfer method rather than a complex fully-supervised AU detector. With a less complex F_a, our ADLD-Full still achieved better performance than JAA-Net and ARL. Besides, although WSC can refine the inaccurate pseudo AU labels, its results are worse than those of our ADLD-Full, which transfers accurate AU labels from the source domain. We also show the comparison results in Table 7 when the source domain datasets are DISFA and Pain. It can be observed that our ADLD-Full still achieved the highest average F1-frames of 65.3 and 74.7 for DISFA and Pain, respectively. This demonstrates that our framework works well in the scenario of using target-domain pseudo AU labels.
Adversarial Domain Adaptation
To evaluate the effectiveness of AU label transfer, we compared our ADLD with typical adversarial domain adaptation methods. These methods include DANN [22] and ADDA [23] which learn domain-invariant features, and DRIT [24] and T 2 Net [25] which translate the source images into target-style images. For a fair comparison, an AU detection network with the same structure as the basic model was applied to these methods.
Particularly, for DANN and ADDA, E_f is encouraged to learn a domain-invariant rich feature by a domain discriminator with the same structure as D_f^s. We implemented DANN and ADDA by employing and not employing target-domain landmark labels, respectively. Taking DANN as an example, we denote these variants as DANN-I^t(l)_s(a,l) and DANN-I^t_s(a,l), respectively. For DRIT, we used its original framework architecture to generate target-style images by transferring the style from the target images to the source images. Then we used the generated target-style images with inherited AU and landmark labels to train AU detection, in which we similarly obtained two variants DRIT-I^t(l)_s(a,l) and DRIT-I^t_s(a,l). We applied the same setting of DRIT to T^2Net, except that we simultaneously trained the image translation network and the AU detection network, following the original setting of T^2Net.

Evaluation on Target Domain EmotioNet. Tables 8 and 9 summarize the F1-frame results of these methods on EmotioNet. We can see that our method ADLD remarkably outperformed the state-of-the-art adversarial domain adaptation methods, including both the domain-invariant feature based and the image translation based methods. Compared to the average F1-frame (19.2, 25.5) of the lower-bound LI_s(a,l) for the source domains BP4D and GFT, DANN-I^t_s(a,l) and ADDA-I^t_s(a,l) only improved with small margins of (2.3, 0.4) and (2.9, 3.3), respectively, by using target-domain training images. Besides, DANN-I^t(l)_s(a,l) and ADDA-I^t(l)_s(a,l) overall performed worse than the lower-bound LI^t(l)_s(a,l). This is because enforcing domain invariance of the features input to the AU detector F_a may neglect AU-related information.
Moreover, DRIT-I^t_s(a,l), DRIT-I^t(l)_s(a,l), T^2Net-I^t_s(a,l) and T^2Net-I^t(l)_s(a,l) all performed much worse than our ADLD in both Tables 8 and 9. This demonstrates that only translating the image style has a limited contribution to target-domain AU detection, since major domain shifts, including distribution variations of pose and occlusion, are not reduced. In contrast, our method alleviates such problems by mapping images to a latent feature domain specialized for target-domain AU detection.

Evaluation on Target Domain VGGFace2. Table 10 shows the comparison results on the target domain VGGFace2. It can be seen that our ADLD significantly outperformed the other adversarial domain adaptation methods for both source domain datasets BP4D and GFT. Note that VGGFace2 shares the same testing set with EmotioNet, so there is a domain gap between the training and testing sets of VGGFace2. In this challenging case, our ADLD still achieved performance similar to the evaluation on EmotioNet in Table 8.
Ablation Study
In this section, we study the effectiveness of the main loss terms in Eq. (12) for our framework. Table 11 summarizes the structures and F1-frame results of different variants of our ADLD on EmotioNet. Fig. 4 visualizes the features of ADLD and its variants for three example pairs of input images, in which the unpaired source and target images exhibit different expressions, poses, illuminations, occlusions and backgrounds.

B-Net with L_a and L_l

The baseline network B-Net uses the same architecture as ADLD with only the losses L_a and L_l. It can be observed that B-Net failed to achieve good performance, in which its average F1-frame is just slightly higher than the 25.5 of the lower-bound LI^t(l)_s(a,l). As shown in Fig. 4, compared to the self-reconstructed latent feature G(z_l^t, z_t^t), the latent feature x̂^t of B-Net cannot preserve target-domain AU-free information such as the facial global pose. This is because the landmark-free feature z_t^t simply removes all facial shape information, including both the inner-landmarks and the global pose, without constraints from other losses. In this case, x̂^t is similar to G(z_l^s, z_t^s), and is effective for AU detection of the source domain rather than the target domain, which results in the low performance on EmotioNet.
Note that the landmark-related feature z_l^t, which highlights the locations of the landmarks, is adaptively obtained. If the i-th landmark is difficult to localize, the response σ(F_l^{(i, y_i^t)}(x^t)) at its location (q_{2i-1}^t d/l, q_{2i}^t d/l) will not be significantly higher than those at the other locations on its response map. By element-wise summing the response maps of all landmarks in Eq. (6), our learned z_l^t obtains wider responses around a landmark that is more difficult to localize, so as to capture more information and alleviate the influence of inaccurate localization. Another possible way is to manually generate a response map as the landmark-related feature, in which a predefined Gaussian distribution is used to generate responses around the predicted location of each landmark. In that case, landmarks with different localization difficulties are treated equally, which may cause the loss of useful information for challenging landmarks.
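For comparison, the manually generated alternative mentioned above (a fixed Gaussian around each predicted landmark) could be built as in the following sketch; sigma and the coordinate convention are illustrative assumptions:

```python
import torch

def gaussian_response_map(lm_xy, d, sigma=2.0):
    """Hand-crafted landmark-related map: a fixed Gaussian bump around each
    predicted landmark, treating all landmarks equally.

    lm_xy: (n, 2) landmark (x, y) coordinates already scaled to the d x d map.
    Returns a (1, d, d) response map.
    """
    xs = torch.arange(d, dtype=torch.float32).view(1, d).expand(d, d)
    ys = torch.arange(d, dtype=torch.float32).view(d, 1).expand(d, d)
    resp = torch.zeros(d, d)
    for x, y in torch.as_tensor(lm_xy, dtype=torch.float32).tolist():
        resp = resp + torch.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return resp.unsqueeze(0)
```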
L_r and L_cc

In our framework, the self-reconstruction loss L_r and the cross-cycle consistency loss L_cc are introduced to constrain the mapping from the source and target domains to the latent feature domain. It can be observed from Table 11 that B-Net+L_r increased the average F1-frame to 29.3 over B-Net. After applying L_cc, the result was further improved to 31.2.
Due to the supervision of L_r and L_cc, in Fig. 4 we can see that the learned x̂^t of B-Net+L_r+L_cc can coarsely inherit the source-domain AU-related inner-landmark information and preserve the target-domain AU-free global pose. However, the facial global contour and inner shape of x̂^t are not very clear. This is because using only L_a, L_l, L_r and L_cc cannot effectively enforce z_t^t to discard the inner-landmark information and keep the global pose. In this case, the learned latent feature domain has limited effectiveness for target-domain AU detection.
L^ad_l and L^ad_f
After adding the landmark adversarial loss L^ad_l for the landmark-free features, the average F1-frame was improved from 31.2 to 34.3. This profits from L^ad_l, which adversarially disentangles the facial inner-landmark information from z_t^t. When further using the feature adversarial loss L^ad_f, our ADLD achieved the best performance. L^ad_f is beneficial for the latent feature x̂^t to preserve the target-domain global pose and texture information from x^t.
In Fig. 4, we can observe that the facial inner shape of x̂^t is similar to that of G(z_l^s, z_t^s), and the facial global contour of x̂^t is similar to that of G(z_l^t, z_t^t). This demonstrates that the learned latent feature domain can preserve source-domain AU-related information and target-domain AU-free information, which is specialized for target-domain AU detection. With the latent feature domain, our method can exploit available and accurate source-domain AU labels and adapt to unconstrained conditions of the target domain such as large poses, partial occlusions and arbitrary backgrounds.
Latent Feature Domain for AU Detection
It can be seen from Fig. 4 that the latent feature domain has a similar structure to the domain of rich features, but with different details. If we directly input x^t to the AU detector F_a in Fig. 2(c), our ADLD only achieved an average F1-frame of 25.8, much worse than the 36.8 of using G(z_l^t, z_t^t). This demonstrates that the latent feature domain is not just obtained by a simple domain mapping, but instead is learned by disentangling landmark-free and landmark-related features and maximizing the performance of target-domain AU detection.
Moreover, since there are large gaps like pose differences between the constrained source domain and the unconstrained target domain, it is difficult to integrate the information from different domains into a realistic image. The latent feature domain has a larger capacity to combine landmark-related information with target-domain landmark-free information than the image domain. Besides, our goal is to achieve target-domain AU detection instead of synthesizing images. Image generation requires more complex network structures than feature translation, as each image pixel needs numerous computations.
Validation of Landmark Definition
To evaluate the effectiveness of our landmark definition in Fig. 3(b), we implemented a variant of our approach using the original landmark definition in Fig. 3(a). Since an alternative solution of defining AU-related landmarks is to add the predefined AU centers into the original landmark definition, we implemented another variant of our approach using this new definition. These three types of landmark definitions are illustrated in Fig. 5. We show the F1-frame results of our approach using different landmark definitions in Table 12.
It can be observed that our ADLD and ADLD-Full using the landmark definition in Fig. 5(b) both outperformed the variants using the other two landmark definitions. This is because the original landmark definition in Fig. 5(a) fails to capture accurate AU-related information. On the other hand, the landmark definition in Fig. 5(c) has redundant landmark information and thus limits the capability of focusing on AU-related information. Our proposed landmark definition in Fig. 5(b) is beneficial for capturing the most related AU information so as to facilitate target-domain AU detection. We also notice that the improvements of ADLD-Full over ADLD-Full_ori and ADLD-Full_add are smaller than the improvements of ADLD over ADLD_ori and ADLD_add. The main difference between ADLD-Full and ADLD is that the former has target-domain pseudo AU labels. Additionally, the landmark definition is related to the effectiveness of AU label transfer. Since the effect from pseudo AU labels and the effect from AU label transfer are integrated in ADLD-Full, the improvement from AU label transfer may be degraded. This also causes the performances with the landmark definitions in Fig. 5(a) and Fig. 5(c) to fluctuate between ADLD-Full and ADLD. We think the experimental results on ADLD are more convincing, and the landmark definition in Fig. 5(c) is potentially better than the landmark definition in Fig. 5(a) for AU label transfer.
Limitations
The disentanglement of AU-related information and AU-free information is a challenging issue in domain adaptation based AU detection, and it is hard to achieve a perfect disentanglement. In this section, we analyze the amount of AU information in the disentangled landmark-free feature and landmark-related feature. Since only the source domain has accurate AU labels, we used the trained ADLD model to extract the landmark-free features and landmark-related features of source-domain training samples, and then retrained the AU detector F_a by inputting the landmark-free feature and the landmark-related feature, denoted as F_a^(t) and F_a^(l), respectively. To evaluate whether the two AU detectors work better than a random guess, we also implemented F_a^(none) as a baseline by inputting a zero-valued feature to F_a. Their AU detection results are presented in Table 13.
We can see that the results of F_a^(t) were better than those of the random guess F_a^(none), and were significantly worse than those of F_a^(l) on the source domain. This demonstrates that the landmark-free feature does contain a little AU information. However, it is better to remove the source-domain landmark-free feature, since the performance damage it causes in the target domain, due to the domain gap, outweighs the little AU information it contains. On the contrary, it is better to add the target-domain landmark-free feature, since the target-domain texture context it brings in may well surpass the mismatch issue. This can be seen from the margin between the results of F_a^(l) and ADLD. In our ADLD framework, we combine the source-domain landmark-related feature with the target-domain landmark-free feature in the latent feature domain, in which the latter supplements useful domain-related texture information for target-domain AU detection.
CONCLUSION
In this paper, we proposed an end-to-end unconstrained facial AU detection framework by transferring the available and accurate AU labels from the constrained source domain to the unconstrained target domain. We proposed to map the source and target domains to a latent feature domain which is specialized for the target-domain AU detection. To achieve the domain mapping, we also proposed a novel landmark adversarial loss to disentangle the landmark-free feature and the landmark-related feature. Moreover, our framework can be naturally extended to the scenario with target-domain pseudo AU labels.
We compared our proposed framework with two lower-bounds and two upper-bounds of the basic model on the challenging benchmarks. The experimental results demonstrated that our framework soundly outperforms both the lower-bounds and the upper-bounds. In addition, we compared our method against state-of-the-art approaches involving fully-supervised AU detection methods using target-domain pseudo AU labels and adversarial domain adaptation methods. The results showed that our method performs better than all these previous works. We also conducted an ablation study which indicates that the loss terms in our framework are effective, and the learned latent feature domain combining source-domain AU-related information with target-domain AU-free information is beneficial for the target-domain AU detection.
We further conducted experiments to validate that our proposed landmark definition is beneficial for AU detection. Our method can be generalized as mapping the source and target domains to a latent feature domain where the source task-related feature and the target task-free feature are combined, by maximizing the performance of the target-domain task. We believe this idea is also promising for other domain adaptation problems.
Fig. 1. Illustration of mapping an image g^s in the source domain and an image g^t in the target domain into the latent feature domain. The rich features (x^s, x^t) are first disentangled into landmark-free features (z_t^s, z_t^t) and landmark-related features (z_l^s, z_l^t), and then the landmark-related features are swapped to generate the latent features (x̂^s, x̂^t). The latent feature domain is specialized for target-domain AU detection. The channels of features are summed element-wise for visualization, where the colors from blue to red in the color bar indicate rising feature values.
Fig. 2. The architecture of our ADLD framework, in which E_f, E_t, G, F_l, F_a and D_l are shared by the source-domain and target-domain input images.
Main notations of our framework (we only show the source-domain notations; the target-domain notations can be defined similarly): the location of the i-th source-domain landmark; d, the width of the landmark response map; m, the number of AUs; n, the number of landmarks; x^s, the source-domain rich feature; z_t^s, the landmark-free feature from x^s; z_l^s, the landmark-related feature from x^s; x̂^s, the source-domain latent feature; and the cross-cyclically reconstructed rich feature for x^s.
Fig. 3. Definition of AU-related landmarks, in which two landmarks in the same color correspond to the two centers of a certain AU. We replace these landmarks in the (a) original definition with their predefined AU centers in the (b) new definition.
Fig. 4. Visualization of features for B-Net, B-Net+L_r+L_cc and our ADLD with three input pairs of source BP4D [12] and target EmotioNet [13] images. Compared to the source images g^s, the target images g^t have different expressions and poses, and may be partially occluded. x^s and x^t are rich features, z_l^s and z_l^t are landmark-related features, z_t^s and z_t^t are landmark-free features, x̂^s and x̂^t are latent features, and G(z_l^s, z_t^s) and G(z_l^t, z_t^t) are self-reconstructed latent features. x̂^s, x̂^t, G(z_l^s, z_t^s) and G(z_l^t, z_t^t) from the latent feature domain are shown in the dotted boxes. We expect x̂^t to contain the global pose and texture preserved from x^t, and the AU-related inner-landmark information transferred from x^s.
Fig. 5. Different types of landmark definitions. (a) Original definition. (b) New definition by replacing the corresponding landmarks with the predefined AU centers. (c) New definition by adding the predefined AU centers.
TABLE 3
AU occurrence rates (%) in the training sets of the source domain. "-" denotes the dataset does not contain this AU. The AUs with occurrence rates larger than 6% are shown in bold.

AU           1     2     4     5    6     9    12    17    20   25    26
BP4D [12]    18.4  14.6  19.8  3.3  44.0  5.7  54.0  34.2  2.6  -     -
GFT [18]     4.1   14.7  4.1   2.5  29.2  1.5  30.3  28.7  -    -     -
DISFA [11]   4.3   3.6   12.2  0.8  7.2   3.1  12.9  4.4   2.0  26.2  8.8
Pain [50]    -     -     2.4   -    8.7   0.8  10.9  -     1.0  4.4   4.4
TABLE 4
F1-frame results of lower-bounds and upper-bounds of the basic model, as well as our approach, on the test sets of BP4D [12] and EmotioNet [13]. The best results are shown in bold.

                      BP4D (source domain)                        EmotioNet (target domain)
AU                    1     2     4     6     12    17    Avg    1     2     4     6     12    17    Avg
LI_s(a,l)             57.8  24.7  67.2  75.2  84.6  60.9  61.7   12.0  6.9   11.4  27.9  53.5  3.5   19.2
LI^t(l)_s(a,l)        65.9  39.5  59.8  78.4  75.7  62.4  63.6   19.0  8.7   21.5  38.1  58.4  7.3   25.5
ADLD                  50.5  35.7  61.8  74.1  75.2  69.0  61.0   19.8  25.2  31.0  58.2  78.3  8.6   36.8
UI^t(a,l)             5.1   2.8   35.9  73.5  81.3  0.7   33.2   14.7  11.4  41.5  49.4  75.8  11.4  34.0
UI^t(a,l)_s(a,l)      24.1  31.3  62.1  77.4  79.9  39.8  52.4   15.3  9.1   38.5  48.9  74.9  4.4   31.9
ADLD-Full             45.7  37.3  63.6  81.7  82.8  64.6  62.6   30.7  26.1  48.1  60.7  77.6  11.5  42.4

TABLE 5
F1-frame results of lower-bounds and upper-bounds of the basic model, as well as our approach, on the test sets of GFT [18] and EmotioNet [13].

                      GFT (source domain)            EmotioNet (target domain)
AU                    2     6     12    17    Avg    2     6     12    17    Avg
LI_s(a,l)             44.7  70.7  83.7  53.6  63.2   6.0   32.7  58.0  5.2   25.5
LI^t(l)_s(a,l)        47.2  76.8  79.5  58.2  65.4   8.6   44.2  69.2  6.4   32.1
ADLD                  39.8  79.3  81.4  54.9  63.8   17.4  59.3  80.2  9.5   41.6
UI^t(a,l)             1.0   69.9  73.4  9.0   38.3   14.8  53.6  80.4  11.0  39.9
UI^t(a,l)_s(a,l)      27.1  71.5  82.2  47.4  57.0   18.7  51.7  80.5  10.7  40.4
ADLD-Full             43.9  73.3  83.2  57.1  64.4   21.9  64.9  85.4  11.6  46.0
TABLE 6
F1-frame results of our approach and state-of-the-art fully-supervised AU detection methods on the target domain EmotioNet [13] when the source domain datasets are BP4D [12] and GFT [18], respectively.
TABLE 8
F1-frame results of our approach and state-of-the-art adversarial domain adaptation methods on the target domain EmotioNet [13] when the source domain datasets are BP4D [12] and GFT [18], respectively.

                      BP4D+EmotioNet                              GFT+EmotioNet
AU                    1     2     4     6     12    17    Avg    2     6     12    17    Avg
DANN-I^t_s(a,l)       12.8  6.9   18.9  30.7  53.1  6.3   21.5   8.9   34.1  55.7  5.1   25.9
DANN-I^t(l)_s(a,l)    16.8  12.7  25.8  28.9  62.5  9.5   26.0   8.7   40.3  63.9  6.4   29.8
ADDA-I^t_s(a,l)       13.8  6.1   21.4  28.5  57.4  5.1   22.1   11.0  37.6  61.0  5.5   28.8
ADDA-I^t(l)_s(a,l)    17.7  5.2   15.3  38.2  58.7  6.2   23.6   9.8   35.4  56.7  6.7   27.2
DRIT-I^t_s(a,l)       18.8  9.0   27.8  40.6  67.9  5.0   28.2   18.1  48.1  67.6  5.6   34.9
DRIT-I^t(l)_s(a,l)    20.4  7.7   30.9  44.2  67.5  8.3   29.8   18.3  52.3  73.5  5.6   37.4
T^2Net-I^t_s(a,l)     10.1  5.6   21.4  31.3  57.1  5.3   21.8   6.1   35.6  51.7  6.1   24.9
T^2Net-I^t(l)_s(a,l)  9.4   9.6   24.4  45.1  69.5  4.7   27.1   15.0  39.6  64.5  6.4   31.4
LI_s(a,l)             12.0  6.9   11.4  27.9  53.5  3.5   19.2   6.0   32.7  58.0  5.2   25.5
LI^t(l)_s(a,l)        19.0  8.7   21.5  38.1  58.4  7.3   25.5   8.6   44.2  69.2  6.4   32.1
ADLD                  19.8  25.2  31.0  58.2  78.3  8.6   36.8   17.4  59.3  80.2  9.5   41.6
TABLE 9
F1-frame results of our approach and state-of-the-art adversarial domain adaptation methods on the target domain EmotioNet [13] when the source domain datasets are DISFA [11] and Pain [50], respectively.

                      DISFA+EmotioNet                       Pain+EmotioNet
AU                    4     6     12    25    26    Avg    6     12    Avg
DRIT-I^t_s(a,l)       39.7  40.7  69.2  74.9  17.8  48.5   36.2  73.8  55.0
DRIT-I^t(l)_s(a,l)    38.0  49.0  64.4  76.4  19.2  49.4   37.0  74.8  55.9
T^2Net-I^t_s(a,l)     19.6  31.2  52.1  68.6  22.5  38.8   38.0  46.1  42.0
T^2Net-I^t(l)_s(a,l)  23.0  37.9  61.6  80.9  28.0  46.3   45.2  51.3  48.3
ADLD                  27.0  51.8  73.8  88.4  34.2  55.0   52.8  69.1  60.9
TABLE 10
F1-frame results of our approach and state-of-the-art adversarial domain adaptation methods on the target domain VGGFace2 [51] when the source domain datasets are BP4D [12] and GFT [18], respectively.

                      BP4D+VGGFace2                               GFT+VGGFace2
AU                    1     2     4     6     12    17    Avg    2     6     12    17    Avg
DRIT-I^t_s(a,l)       21.2  16.0  30.8  43.0  69.7  5.7   31.1   17.1  48.4  66.9  6.2   34.6
DRIT-I^t(l)_s(a,l)    21.8  22.3  36.6  44.1  63.9  8.2   32.8   18.6  52.9  68.1  7.3   36.7
T^2Net-I^t_s(a,l)     8.8   7.5   17.4  31.4  57.7  4.6   21.2   6.1   35.6  51.7  6.1   24.9
T^2Net-I^t(l)_s(a,l)  20.8  14.4  18.3  43.3  69.1  4.4   28.4   17.0  34.6  65.6  14.6  32.9
ADLD                  19.7  25.0  27.4  54.8  75.9  10.0  35.5   19.7  60.4  76.3  9.0   41.4
TABLE 11
F1-frame results for different variants of our ADLD on the target domain EmotioNet [13] when the source domain dataset is BP4D [12]. Except for ADLD (input x^t), the other methods input G(z_l^t, z_t^t) to F_a to predict the AU occurrence probabilities at test time, as illustrated in Fig. 2(c).
TABLE 12
F1-frame results of our approach using different landmark definitions on the target domain EmotioNet [13] when the source domain dataset is BP4D [12]. Taking ADLD as an example, we denote its variants using the landmark definitions in Fig. 5(a) and (c) as ADLD_ori and ADLD_add, respectively.
TABLE 13
F1-frame results of different AU detectors when the source domain dataset is BP4D [12] or GFT [18] and the target domain dataset is EmotioNet [13]. "Source" and "Target" denote the results on the test sets of the source domain and target domain, respectively. Since the source-domain landmark-related feature is transferred to the target domain in our ADLD framework, we also show the results of F_a^(l) on the target domain.
ACKNOWLEDGMENTS
Deep region and multi-label learning for facial action unit detection. K Zhao, W.-S Chu, H Zhang, IEEE Conference on Computer Vision and Pattern Recognition. IEEEK. Zhao, W.-S. Chu, and H. Zhang, "Deep region and multi-label learning for facial action unit detection," in IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2016, pp. 3391- 3399.
Eac-net: Deep nets with enhancing and cropping for facial action unit detection. W Li, F Abtahi, Z Zhu, L Yin, IEEE Transactions on Pattern Analysis and Machine Intelligence. 4011W. Li, F. Abtahi, Z. Zhu, and L. Yin, "Eac-net: Deep nets with enhancing and cropping for facial action unit detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 11, pp. 2583-2596, 2018.
Deep adaptive attention for joint facial action unit detection and face alignment. Z Shao, Z Liu, J Cai, L Ma, European Conference on Computer Vision. SpringerZ. Shao, Z. Liu, J. Cai, and L. Ma, "Deep adaptive attention for joint facial action unit detection and face alignment," in European Conference on Computer Vision. Springer, 2018, pp. 725-740.
Deep structure inference network for facial action unit recognition. C A Corneanu, M Madadi, S Escalera, European Conference on Computer Vision. SpringerC. A. Corneanu, M. Madadi, and S. Escalera, "Deep structure inference network for facial action unit recognition," in European Conference on Computer Vision. Springer, 2018, pp. 309-324.
Local relationship learning with person-specific shape regularization for facial action unit detection. X Niu, H Han, S Yang, Y Huang, S Shan, IEEE Conference on Computer Vision and Pattern Recognition. IEEEX. Niu, H. Han, S. Yang, Y. Huang, and S. Shan, "Local relationship learning with person-specific shape regularization for facial action unit detection," in IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2019, pp. 11 917-11 926.
Au r-cnn: Encoding expert prior knowledge into r-cnn for action unit detection. C Ma, L Chen, J Yong, Neurocomputing. 355C. Ma, L. Chen, and J. Yong, "Au r-cnn: Encoding expert prior knowledge into r-cnn for action unit detection," Neurocomputing, vol. 355, pp. 35-47, 2019.
Expression empowered residen network for facial action unit detection. S Jyoti, G Sharma, A Dhall, IEEE International Conference on Automatic Face & Gesture Recognition. IEEES. Jyoti, G. Sharma, and A. Dhall, "Expression empowered residen network for facial action unit detection," in IEEE International Conference on Automatic Face & Gesture Recognition. IEEE, 2019, pp. 1-8.
Facial action coding system: A technique for the measurement of facial movement. P Ekman, W V Friesen, Consulting Psychologists PressP. Ekman and W. V. Friesen, Facial action coding system: A technique for the measurement of facial movement. Consulting Psychologists Press, 1978.
P Ekman, W V Friesen, J C Hager, Facial action coding system. Research Nexus. P. Ekman, W. V. Friesen, and J. C. Hager, Facial action coding system. Research Nexus, 2002.
The extended cohn-kanade dataset (ck+): A complete dataset for action unit and emotion-specified expression. P Lucey, J F Cohn, T Kanade, J Saragih, Z Ambadar, I Matthews, IEEE Conference on Computer Vision and Pattern Recognition Workshops. IEEEP. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, "The extended cohn-kanade dataset (ck+): A com- plete dataset for action unit and emotion-specified expression," in IEEE Conference on Computer Vision and Pattern Recognition Workshops. IEEE, 2010, pp. 94-101.
Disfa: A spontaneous facial action intensity database. S M Mavadati, M H Mahoor, K Bartlett, P Trinh, J F Cohn, IEEE Transactions on Affective Computing. 42S. M. Mavadati, M. H. Mahoor, K. Bartlett, P. Trinh, and J. F. Cohn, "Disfa: A spontaneous facial action intensity database," IEEE Transactions on Affective Computing, vol. 4, no. 2, pp. 151-160, 2013.
Bp4d-spontaneous: a high-resolution spontaneous 3d dynamic facial expression database. X Zhang, L Yin, J F Cohn, S Canavan, M Reale, A Horowitz, P Liu, J M Girard, Image and Vision Computing. 3210X. Zhang, L. Yin, J. F. Cohn, S. Canavan, M. Reale, A. Horowitz, P. Liu, and J. M. Girard, "Bp4d-spontaneous: a high-resolution spontaneous 3d dynamic facial expression database," Image and Vision Computing, vol. 32, no. 10, pp. 692-706, 2014.
Emotionet: An accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild. C F Benitez-Quiroz, R Srinivasan, A M Martinez, IEEE Conference on Computer Vision and Pattern Recognition. IEEEC. F. Benitez-Quiroz, R. Srinivasan, and A. M. Martinez, "Emo- tionet: An accurate, real-time algorithm for the automatic annota- tion of a million facial expressions in the wild," in IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2016, pp. 5562- 5570.
Automatic analysis of facial actions: A survey. B Martinez, M F Valstar, B Jiang, M Pantic, IEEE Transactions on Affective Computing. 103B. Martinez, M. F. Valstar, B. Jiang, and M. Pantic, "Automatic analysis of facial actions: A survey," IEEE Transactions on Affective Computing, vol. 10, no. 3, pp. 325-347, 2019.
Transferring face verification nets to pain and expression regression. F Wang, X Xiang, C Liu, T D Tran, A Reiter, G D Hager, H Quon, J Cheng, A L Yuille, arXiv:1702.06925v1arXiv preprintF. Wang, X. Xiang, C. Liu, T. D. Tran, A. Reiter, G. D. Hager, H. Quon, J. Cheng, and A. L. Yuille, "Transferring face veri- fication nets to pain and expression regression," arXiv preprint arXiv:1702.06925v1, 2017.
Recognition of action units in the wild with deep nets and a new global-local loss. C F Benitez-Quiroz, Y Wang, A M Martinez, IEEE International Conference on Computer Vision. IEEEC. F. Benitez-Quiroz, Y. Wang, and A. M. Martinez, "Recognition of action units in the wild with deep nets and a new global-local loss," in IEEE International Conference on Computer Vision. IEEE, 2017, pp. 3990-3999.
Learning facial action units from web images with scalable weakly supervised clustering. K Zhao, W.-S Chu, A M Martinez, IEEE Conference on Computer Vision and Pattern Recognition. IEEEK. Zhao, W.-S. Chu, and A. M. Martinez, "Learning facial action units from web images with scalable weakly supervised cluster- ing," in IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2018, pp. 2090-2099.
Sayette group formation task (gft) spontaneous facial expression database. J M Girard, W.-S Chu, L A Jeni, J F Cohn, IEEE International Conference on Automatic Face and Gesture Recognition. IEEEJ. M. Girard, W.-S. Chu, L. A. Jeni, and J. F. Cohn, "Sayette group formation task (gft) spontaneous facial expression database," in IEEE International Conference on Automatic Face and Gesture Recog- nition. IEEE, 2017, pp. 581-588.
X2face: A network for controlling face generation using images, audio, and pose codes. O Wiles, A S Koepke, A Zisserman, European Conference on Computer Vision. SpringerO. Wiles, A. S. Koepke, and A. Zisserman, "X2face: A network for controlling face generation using images, audio, and pose codes," in European Conference on Computer Vision. Springer, 2018, pp. 690-706.
Self-supervised learning of a facial attribute embedding from video. British Machine Vision Conference. BMVA Press302--, "Self-supervised learning of a facial attribute embedding from video," in British Machine Vision Conference. BMVA Press, 2018, p. 302.
Self-supervised representation learning from videos for facial action unit detection. Y Li, J Zeng, S Shan, X Chen, IEEE Conference on Computer Vision and Pattern Recognition. IEEE933Y. Li, J. Zeng, S. Shan, and X. Chen, "Self-supervised representa- tion learning from videos for facial action unit detection," in IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2019, pp. 10 924-10 933.
Domainadversarial training of neural networks. Y Ganin, E Ustinova, H Ajakan, P Germain, H Larochelle, F Laviolette, M Marchand, V Lempitsky, Journal of Machine Learning Research. 171Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, "Domain- adversarial training of neural networks," Journal of Machine Learn- ing Research, vol. 17, no. 1, pp. 2096-2030, 2016.
Adversarial discriminative domain adaptation. E Tzeng, J Hoffman, K Saenko, T Darrell, IEEE Conference on Computer Vision and Pattern Recognition. IEEEE. Tzeng, J. Hoffman, K. Saenko, and T. Darrell, "Adversarial dis- criminative domain adaptation," in IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2017, pp. 2962-2971.
Diverse image-to-image translation via disentangled representations. H.-Y Lee, H.-Y Tseng, J.-B Huang, M Singh, M.-H Yang, European Conference on Computer Vision. SpringerH.-Y. Lee, H.-Y. Tseng, J.-B. Huang, M. Singh, and M.-H. Yang, "Diverse image-to-image translation via disentangled representa- tions," in European Conference on Computer Vision. Springer, 2018, pp. 36-52.
T2net: Synthetic-to-realistic translation for solving single-image depth estimation tasks. C Zheng, T.-J Cham, J Cai, European Conference on Computer Vision. SpringerC. Zheng, T.-J. Cham, and J. Cai, "T2net: Synthetic-to-realistic translation for solving single-image depth estimation tasks," in European Conference on Computer Vision. Springer, 2018, pp. 798- 814.
Face alignment across large poses: A 3d solution. X Zhu, Z Lei, X Liu, H Shi, S Z Li, IEEE Conference on Computer Vision and Pattern Recognition. IEEEX. Zhu, Z. Lei, X. Liu, H. Shi, and S. Z. Li, "Face alignment across large poses: A 3d solution," in IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2016, pp. 146-155.
Hand keypoint detection in single images using multiview bootstrapping. T Simon, H Joo, I Matthews, Y Sheikh, IEEE Conference on Computer Vision and Pattern Recognition. IEEET. Simon, H. Joo, I. Matthews, and Y. Sheikh, "Hand keypoint detection in single images using multiview bootstrapping," in IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2017, pp. 4645-4653.
Look at boundary: A boundary-aware face alignment algorithm. W Wu, C Qian, S Yang, Q Wang, Y Cai, Q Zhou, IEEE Conference on Computer Vision and Pattern Recognition. IEEEW. Wu, C. Qian, S. Yang, Q. Wang, Y. Cai, and Q. Zhou, "Look at boundary: A boundary-aware face alignment algorithm," in IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2018, pp. 2129-2138.
Generative adversarial nets. I Goodfellow, J Pouget-Abadie, M Mirza, B Xu, D Warde-Farley, S Ozair, A Courville, Y Bengio, Advances in Neural Information Processing Systems. Curran Associates, IncI. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems. Curran Associates, Inc., 2014, pp. 2672-2680.
Learning face representation from scratch. D Yi, Z Lei, S Liao, S Z Li, arXiv:1411.7923arXiv preprintD. Yi, Z. Lei, S. Liao, and S. Z. Li, "Learning face representation from scratch," arXiv preprint arXiv:1411.7923, 2014.
Multiple transfer learning and multi-label balanced training strategies for facial au detection in the wild. S Ji, K Wang, X Peng, J Yang, Z Zeng, Y Qiao, IEEE Conference on Computer Vision and Pattern Recognition Workshops. IEEES. Ji, K. Wang, X. Peng, J. Yang, Z. Zeng, and Y. Qiao, "Multiple transfer learning and multi-label balanced training strategies for facial au detection in the wild," in IEEE Conference on Computer Vision and Pattern Recognition Workshops. IEEE, 2020, pp. 414-415.
Simultaneous deep transfer across domains and tasks. E Tzeng, J Hoffman, T Darrell, K Saenko, IEEE International Conference on Computer Vision. IEEEE. Tzeng, J. Hoffman, T. Darrell, and K. Saenko, "Simultaneous deep transfer across domains and tasks," in IEEE International Conference on Computer Vision. IEEE, 2015, pp. 4068-4076.
Residual parameter transfer for deep domain adaptation. A Rozantsev, M Salzmann, P Fua, IEEE Conference on Computer Vision and Pattern Recognition. IEEEA. Rozantsev, M. Salzmann, and P. Fua, "Residual parameter transfer for deep domain adaptation," in IEEE Conference on Com- puter Vision and Pattern Recognition. IEEE, 2018, pp. 4339-4348.
Personalized multiple facial action unit recognition through generative adversarial recognition network. C Wang, S Wang, ACM International Conference on Multimedia. ACMC. Wang and S. Wang, "Personalized multiple facial action unit recognition through generative adversarial recognition network," in ACM International Conference on Multimedia. ACM, 2018, pp. 302-310.
Multi-label learning with missing labels for image annotation and facial action unit recognition. B Wu, S Lyu, B.-G Hu, Q Ji, Pattern Recognition. 487B. Wu, S. Lyu, B.-G. Hu, and Q. Ji, "Multi-label learning with miss- ing labels for image annotation and facial action unit recognition," Pattern Recognition, vol. 48, no. 7, pp. 2279-2289, 2015.
Zhiwen Shao received his B.Eng. degree in Computer Science and Technology from the Northwestern Polytechnical University, China in 2015. He received the Ph.D. degree from the Shanghai Jiao Tong University, China in 2020.
| [
"https://github.com/ZhiwenShao/ADLD.",
"https://github.com/ZhiwenShao/ADLD."
]
|
[
"Using Cross-Model EgoSupervision to Learn Cooperative Basketball Intention",
"Using Cross-Model EgoSupervision to Learn Cooperative Basketball Intention"
]
| [
"Gedas Bertasius \nUniversity of Pennsylvania\nUniversity of Pennsylvania\n\n",
"Jianbo Shi [email protected] \nUniversity of Pennsylvania\nUniversity of Pennsylvania\n\n"
]
| [
"University of Pennsylvania\nUniversity of Pennsylvania\n",
"University of Pennsylvania\nUniversity of Pennsylvania\n"
]
| []
| We present a first-person method for cooperative basketball intention prediction: we predict with whom the camera wearer will cooperate in the near future from unlabeled first-person images. This is a challenging task that requires inferring the camera wearer's visual attention, and decoding the social cues of other players. Our key observation is that a first-person view provides strong cues to infer the camera wearer's momentary visual attention, and his/her intentions. We exploit this observation by proposing a new cross-model EgoSupervision learning scheme that allows us to predict with whom the camera wearer will cooperate in the near future, without using manually labeled intention labels. Our cross-model EgoSupervision operates by transforming the outputs of a pretrained pose-estimation network, into pseudo ground truth labels, which are then used as a supervisory signal to train a new network for a cooperative intention task. We evaluate our method, and show that it achieves similar or even better accuracy than the fully supervised methods do. | 10.1109/iccvw.2017.278 | [
"https://arxiv.org/pdf/1709.01630v1.pdf"
]
| 20,743,048 | 1709.01630 | bff7c45f2d7af4e2b64b6efa53511c197bf8f2fa |
Using Cross-Model EgoSupervision to Learn Cooperative Basketball Intention
Gedas Bertasius
University of Pennsylvania
University of Pennsylvania
Jianbo Shi [email protected]
University of Pennsylvania
University of Pennsylvania
Using Cross-Model EgoSupervision to Learn Cooperative Basketball Intention
We present a first-person method for cooperative basketball intention prediction: we predict with whom the camera wearer will cooperate in the near future from unlabeled first-person images. This is a challenging task that requires inferring the camera wearer's visual attention, and decoding the social cues of other players. Our key observation is that a first-person view provides strong cues to infer the camera wearer's momentary visual attention, and his/her intentions. We exploit this observation by proposing a new cross-model EgoSupervision learning scheme that allows us to predict with whom the camera wearer will cooperate in the near future, without using manually labeled intention labels. Our cross-model EgoSupervision operates by transforming the outputs of a pretrained pose-estimation network, into pseudo ground truth labels, which are then used as a supervisory signal to train a new network for a cooperative intention task. We evaluate our method, and show that it achieves similar or even better accuracy than the fully supervised methods do.
Introduction
Consider a dynamic scene such as Figure 1, where you, as the camera wearer, are playing basketball. You need to make a decision with whom you will cooperate to maximize the overall benefit for your team. Looking ahead at your teammates, you make a conscious decision and then 2-3 seconds afterwards you perform a cooperative action such as passing the ball.
In a team sport such as basketball, an effective cooperation among teammates is essential. Thus, in this paper, we aim to investigate whether we can use a single first-person image to infer with whom the camera wearer will cooperate 2-3 seconds from now. This is a challenging task because predicting the camera wearer's cooperative intention requires 1) inferring his/her momentary visual attention, 2) decoding the dominant social signals expressed by other players who want to cooperate, and 3) knowing who your teammates are when the players are not wearing any team-specific uniforms.
Figure 1: With whom will I cooperate after 2-3 seconds? Given an unlabeled set of first-person basketball images, we predict with whom the camera wearer will cooperate 2 seconds from now. We refer to this problem as cooperative basketball intention prediction. (Panels: first-person input image, predicted cooperative intention, ground truth cooperative intention.)
To make this problem even more challenging, we ask: "Can we infer cooperative basketball intention without manually labeled first-person data?" Building an unsupervised learning framework is important because manually collecting basketball intention labels is a costly and time-consuming process. In the context of cooperative basketball intention, an annotator needs highly specific basketball domain knowledge. Such a requirement limits the scalability of the annotation process because such annotators are difficult to find and costly to employ.
However, we conjecture that we can learn cooperative basketball intention in an unsupervised fashion by exploiting the signal provided by the first-person camera. What people see reflects how they are going to act. A first-person camera placed on a basketball player's head allows us to indirectly tap into that person's mind and reason about his/her internal state based on what the camera wearer sees.
Figure 2: The illustration of our cross-model EgoSupervision training scheme. As our base model we use a multi-person pose estimation network from [6], which predicts 1) pose estimates of all people in a given first-person image and 2) the bounding boxes around each person. Next, we feed these outputs to an EgoTransformer, which transforms them such that the transformed output would approximately capture the camera wearer's attention and intentions. Then, we use such transformed output as a supervisory signal to train the network for our cooperative basketball intention task.
To do so, we propose a novel cross-model EgoSupervision learning scheme, which allows us to learn the camera wearer's intention without manually labeled intention data. Our cross-model EgoSupervision scheme works as follows. First, we transform the output of a pretrained pose-estimation network such that it approximately reflects the camera wearer's internal state, i.e., his/her visual attention and intentions. Then, we use this transformed output as a supervisory signal to train another network for our cooperative basketball intention task. We show that such a learning scheme allows us to train our model without manually annotated intention labels, and to achieve similar or even better results than the fully supervised methods do.
Related Work
First-Person Vision. In the past, most first-person methods have focused on first-person object detection [29,10,40,15,2] or activity recognition [44,43,38,31,35,13]. Several methods have employed first-person videos for video summarization [29,34], while recently the work in [46] proposed to detect the camera wearer's engagement from first-person videos. The work in [14] used a group of people wearing first-person cameras to infer their social interactions such as monologues, dialogues, or discussions. The method in [37] predicted physical forces experienced by the camera wearer, while the work in [26] recognized the activities performed in various extreme sports. Several recent methods [36,45] also predicted the camera wearer's movement trajectories. Finally, first-person cameras have also been used for various robotics applications [41,18].
In comparison to these prior methods, we propose a novel cooperative basketball intention prediction task, which allows us to study cooperative behaviors of basketball players. Furthermore, we note that these prior first-person methods (except [26]) rely on manually annotated labels for their respective tasks, whether object detection, activity recognition, intention prediction, or some other task. Instead, in this work, we demonstrate that we can solve a challenging cooperative basketball intention prediction task without using annotated first-person intention labels, which are time consuming and costly to obtain.
Knowledge Transfer across Models. With the introduction of supervised CNN models [27], there has been a lot of interest in adapting a generic set of features [11] to different tasks at hand [22,3,16,47,39,42]. Recently, generic image classification features were successfully used for tasks such as edge detection [3,47], object detection [16,39,42], and semantic segmentation [4,32,33,7]. More related to our work, a recent line of research investigated how to transfer knowledge across different models via a combination of parameter updates [1,12,24], transformation learning [28,17], network distillation [21], or cross-model supervision [23,19]. The most similar to our work are the methods in [23,19] that use cross-model supervision to transfer knowledge from one model to another.
All of the above methods focus on third-person data. In contrast, we show how to exploit a first-person view to solve a novel cooperative intention prediction task for the camera wearer without using manually labeled first-person data.
Learning Cooperative Basketball Intention
The goal of our cooperative basketball intention task is to predict with whom the camera wearer will cooperate in the near future. Formally, we aim to learn a function g(I_i) that takes a single first-person image I_i as an input and outputs a per-pixel likelihood map, where each pixel indicates the cooperation probability. Ideally, we would want such a function to produce high probability values at pixels around the person with whom the camera wearer will cooperate, and low probability values around all the other pixels.
We implement g(I_i) via a fully convolutional neural network based on the architecture of the multi-person pose estimation network in [6]. Let ŷ denote a per-pixel mask that is given to our network as a target label. We refer to ŷ as a pseudo ground truth because we obtain it automatically instead of relying on manually annotated intention labels. Then, we learn our cooperative basketball intention model by optimizing the following cross-entropy loss objective:
$$L^{(i)} = -\sum_{j=1}^{N} \left[ \hat{y}^{(i)}_j \log g_j(I_i) + \left(1 - \hat{y}^{(i)}_j\right) \log\left(1 - g_j(I_i)\right) \right], \qquad (1)$$
where ŷ^{(i)}_j is the pseudo ground truth value of image I_i at pixel j, g_j(I_i) refers to our network's output at pixel j, and N denotes the number of pixels in an image. We now explain how we obtain the pseudo ground truth data ŷ.
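For concreteness, Equation 1 is a standard per-pixel sigmoid cross-entropy between the network's output map and the pseudo ground truth mask. The snippet below is a minimal PyTorch sketch of this objective, not the authors' original Caffe implementation; the tensor names and shapes are assumptions.
```python
import torch.nn.functional as F

def intention_loss(logits, pseudo_gt):
    """Per-pixel cross-entropy of Eq. (1).

    logits:    (B, 1, H, W) raw per-pixel scores; sigmoid(logits) = g(I)
    pseudo_gt: (B, 1, H, W) pseudo ground truth mask with values in [0, 1]
    """
    # Applies the sigmoid internally and averages over all N pixels.
    return F.binary_cross_entropy_with_logits(logits, pseudo_gt)
```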
EgoTransformer
To construct a pseudo ground truth supervisory signal ŷ, we transform the output of a pretrained multi-person pose estimation network [6], such that it would approximately capture the camera wearer's internal state such as his/her visual attention and intentions. We do so using our proposed EgoTransformer scheme.
Let f(I_i) denote a pretrained fully convolutional network from [6] that takes a first-person image as an input and outputs 1) the pose part estimates of every person in the image, and 2) their bounding-box detections. We note that the pretrained network f was never trained on any first-person images. Then, formally, let B ∈ R^{n×5} denote the bounding boxes of the people detected by f. Each of the n detected bounding boxes is parameterized by 5 numbers (x, y, h, w, c) denoting the top-left bounding-box coordinates (x, y), the height h and width w of the bounding box, and its confidence value c. Additionally, let P ∈ R^{n×18×2} denote the predicted (x, y) locations of the 18 pose parts (see [6]) for each of the n detected people.
Then our goal is to come up with a transformation function T(B^{(i)}, P^{(i)}) that takes these two outputs and transforms them into a per-pixel pseudo ground truth mask ŷ^{(i)} for our cooperative basketball intention prediction task.
Figure 3: Qualitative comparison of the pseudo ground truth labels obtained via an EgoTransformer versus the actual ground truth (panels: first-person RGB, pseudo GT, ground truth). Note that while the pseudo ground truth is not always correct (see the third row), in most cases, it successfully assigns high values around the player with whom the camera wearer will cooperate (see the first two rows).
We do so by exploiting three different characteristics encoded in a first-person view: 1) egocentric location prior, 2) egocentric size prior, and 3) egocentric pose prior. All of these characteristics can be used to reason about the camera wearer's internal state.
For instance, the location where another person is detected in a first-person image can be used to assess how likely the camera wearer is looking at that person [31,2]. The size of another person in a first-person image can be used to infer how far the camera wearer is from that person, and hence, how likely will the camera wearer interact with that person (the nearer the more likely). Finally, most person-to-person interactions involve people looking at each other, which imposes a certain pose prior. We can then use such a pose prior to predict whether two people will cooperate with each other in the near future based on whether another person is looking at the camera wearer at present.
We express our pseudo ground truth data ŷ in terms of these three characteristics, using what we refer to as an EgoTransformer scheme:
$$\hat{y} = \sum_{j=1}^{n} V\!\left(B_j, \phi_{size}(B_j)\right) \cdot V\!\left(B_j, \phi_{pose}(B_j)\right) \cdot \phi_{loc}(B) \qquad (2)$$
where n denotes the number of detected bounding boxes in a given image, B_j depicts the j-th bounding box, and V is a function that takes two inputs, 1) a bounding box B_j and 2) a scalar value v, and outputs an H × W dimensional mask by assigning every pixel inside this bounding box B_j to v, and zeros to all the pixels outside B_j. Here, H and W depict the height and the width of the original input image. Finally, φ_size(B_j) ∈ R^{1×1} and φ_pose(B_j) ∈ R^{1×1} are scalars that capture the size and pose priors associated with a bounding box B_j, while φ_loc ∈ R^{H×W} is a first-person location prior of the same dimensions as the original input image. Intuitively, the formulation above operates by first assigning a specific value to each of the detected bounding boxes. This yields an H × W dimensional prediction map where every pixel that does not belong to any bounding box is assigned a zero value. Then, this prediction map is multiplied with the location prior φ_loc ∈ R^{H×W} (using element-wise multiplication). Finally, all the values are normalized to be in the range [0, 1], which produces our final pseudo ground truth labels. We now explain each of the components in more detail.
Figure 4: The qualitative cooperative basketball intention prediction results (panels: first-person RGB, our prediction, ground truth). Despite not using any manually annotated first-person labels during training, in most cases, our cross-model EgoSupervision method correctly predicts with whom the camera wearer will cooperate (the first two rows). In the third row, we also illustrate two cases where our method fails to produce correct predictions.
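The function V above simply "paints" a scalar value into the image region covered by a bounding box. A small NumPy sketch of such a helper (the function name and the exact rounding/clipping behavior are our own assumptions) could look as follows.
```python
import numpy as np

def V(box, value, H, W):
    """Rasterize a scalar `value` into an H x W mask over `box`.

    box: (x, y, h, w) with (x, y) the top-left corner, as in the text.
    Pixels inside the box receive `value`; all remaining pixels stay zero.
    """
    x, y, h, w = box
    mask = np.zeros((H, W), dtype=np.float32)
    x0, y0 = max(0, int(round(x))), max(0, int(round(y)))
    x1, y1 = min(W, x0 + int(round(w))), min(H, y0 + int(round(h)))
    mask[y0:y1, x0:x1] = value
    return mask
```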
Egocentric Location Prior. The location of the camera wearer's visual attention is essential for inferring his/her cooperative intentions. We know that a first-person camera is aligned with the person's head direction, and thus, it captures exactly what the camera wearer sees. As a result, the way the camera wearer positions himself with respect to other players affects the location where these players will be mapped in a first-person image.
Instead of assuming any specific location a priori (e.g., a center prior), as is done in [31,29], we find the egocentric location prior directly from the data. As before, let B ∈ R^{n×5} denote the bounding boxes detected by the pretrained network. Then we can compute φ_loc ∈ R^{H×W} as follows:
$$\phi_{loc}(B) = \left[\sum_{j=1}^{n} V\!\left(B^{(i)}_j, c^{(i)}_j\right)\right] \cdot \left[\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{n} V\!\left(B^{(i)}_j, c^{(i)}_j\right)\right]$$
where c^{(i)}_j is the predicted confidence of the j-th bounding box in the i-th image. Intuitively, the first term depicts an H × W dimensional mask that is obtained by assigning confidence values to all pixels in their respective bounding boxes in the current image, and zero values to the pixels outside the bounding boxes. The second term also depicts an H × W dimensional mask that is obtained using the same procedure, but across the entire training dataset rather than a single image. In other words, the second term captures the locations in a first-person image where the bounding box predictions are usually most dense.
We conjecture that φ_loc can then be used to approximate the camera wearer's visual attention location, which is essential for inferring the camera wearer's cooperative intentions.
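In code, the egocentric location prior amounts to a per-image confidence map multiplied element-wise by the average confidence map over the training set. The following is a hedged sketch that reuses the hypothetical V helper from above; the box data structure is assumed.
```python
import numpy as np

def confidence_map(boxes, H, W):
    # boxes: list of (x, y, h, w, c) detections for a single image
    m = np.zeros((H, W), dtype=np.float32)
    for (x, y, h, w, c) in boxes:
        m += V((x, y, h, w), c, H, W)
    return m

def location_prior(boxes_current, boxes_per_training_image, H, W):
    """phi_loc: current-image confidence map times the dataset-average map."""
    current = confidence_map(boxes_current, H, W)
    average = np.mean(
        [confidence_map(b, H, W) for b in boxes_per_training_image], axis=0)
    return current * average
```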
Egocentric Size Prior. Spatial 3D cues provide important information to infer the camera wearer's intentions [36,45]. For instance, the camera wearer is more likely to cooperate with a player who is near him/her. We propose to capture this intuition by exploiting an egocentric size prior. We know that the size of a bounding box in a first-person image is inversely related to the distance between the camera wearer and the person in the bounding box. Thus, let h_j be the height of the bounding box B_j. Then we express the egocentric size prior φ_size(B_j) ∈ R^{1×1} for a given bounding box as:
$$\phi_{size}(B_j) = \exp\left(-\frac{\sigma}{h_j}\right)$$
where σ denotes a hyperparameter controlling how much to penalize small bounding boxes. Such a formulation allows us to capture the intuition that the camera wearer is more likely to cooperate with players who are physically closer to him/her.
Figure 5: Several qualitative examples from the top 4 performing subjects in our conducted human study (panels: first-person RGB, Subject-1, Subject-2, Subject-3, Subject-5, ground truth). Each subject specified their prediction by clicking on the person with whom he/she thought the camera wearer was going to cooperate. We then placed a fixed-size Gaussian around the location of the click. Note that based on these results, we can conclude that some instances of this task are quite difficult even for humans, i.e. in these examples, there is no general consensus among the subjects' responses.
Egocentric Pose Prior. In basketball, people tend to look at each other to express their intentions before actually performing cooperative actions. Detecting whether a particular person is facing the camera wearer can be done by examining the x coordinates of the paired body parts, such as the eyes, arms, and legs, of a person detected in a first-person image. For instance, if a particular person is facing the camera wearer, then for most of his/her paired parts visible in a first-person image the following will be true: x(right_part) < x(left_part). In other words, the right parts of that person's body will have a smaller x coordinate in a first-person image than the left parts. We use this intuition to encode the egocentric pose prior φ_pose(B_j) ∈ R^{1×1} for a given bounding box B_j as follows:
$$\phi_{pose}(B_j) = \frac{1}{|\mathcal{P}|} \sum_{p \in \mathcal{P}} \mathbb{1}\{x(\text{right\_part}_p) < x(\text{left\_part}_p)\}$$
where P is the set of all paired parts and 1{x(right_part) < x(left_part)} is an indicator function that returns 1 if the x coordinate of the right part in a first-person image is smaller than the x coordinate of the left part. The computed value φ_pose(B_j) can then be viewed as a confidence that the person in the bounding box B_j is facing the camera wearer, which is an important cue for inferring the camera wearer's cooperative intentions.
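Both remaining priors are scalars per bounding box. The sketch below illustrates how they could be computed; the keypoint names and the list of left/right pairs are hypothetical placeholders (the paper uses the 18 keypoints of [6]), and σ = 10 is the value reported in the implementation details.
```python
import numpy as np

def size_prior(box_height, sigma=10.0):
    """phi_size(B_j) = exp(-sigma / h_j)."""
    return float(np.exp(-sigma / float(box_height)))

# Hypothetical left/right keypoint pairs (eyes, shoulders, hips, ...).
PAIRED_PARTS = [("right_eye", "left_eye"),
                ("right_shoulder", "left_shoulder"),
                ("right_hip", "left_hip")]

def pose_prior(keypoints):
    """phi_pose(B_j): fraction of visible pairs with x(right) < x(left).

    keypoints: dict mapping part name -> (x, y) image coordinates.
    """
    votes = [keypoints[r][0] < keypoints[l][0]
             for r, l in PAIRED_PARTS
             if r in keypoints and l in keypoints]
    return float(np.mean(votes)) if votes else 0.0
```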
Pseudo Ground Truth. We then combine all of the above-discussed components into a unified framework using Equation 2. Such a formulation allows us to automatically construct pseudo ground truth labels from the outputs of a pretrained multi-person pose estimation network. We illustrate several examples of our obtained pseudo ground truth labels in Figure 3. Notice that while our computed pseudo ground truth is not always correct, in many cases it correctly captures the player with whom the camera wearer will cooperate in the near future. In our experimental section, we will demonstrate that despite the imperfections of our pseudo ground truth labels, we can use them to obtain a model that is almost as good as the model trained in a fully supervised fashion using manually annotated cooperation labels.
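Putting the pieces together, the pseudo ground truth of Equation 2 can be assembled roughly as follows, reusing the helpers sketched above; the per-detection data structure is an assumption, not the authors' code.
```python
import numpy as np

def pseudo_ground_truth(detections, phi_loc, H, W):
    """detections: list of dicts with keys 'box' = (x, y, h, w, c) and
    'keypoints' (part name -> (x, y)), one entry per detected player."""
    y_hat = np.zeros((H, W), dtype=np.float32)
    for det in detections:
        x, y, h, w, c = det["box"]
        s = size_prior(h)
        p = pose_prior(det["keypoints"])
        # V(B_j, phi_size) * V(B_j, phi_pose) is non-zero only inside B_j,
        # so the product reduces to painting s * p into the box.
        y_hat += V((x, y, h, w), s * p, H, W)
    y_hat *= phi_loc                      # element-wise location prior
    if y_hat.max() > 0:
        y_hat /= y_hat.max()              # normalize to [0, 1]
    return y_hat
```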
Cross-Model EgoSupervision
After obtaining the pseudo ground truth data ŷ, we train our cooperative basketball intention FCN using the cross-model EgoSupervision scheme shown in Figure 2. We employ the multi-person pose estimation network from [6] as our base model, which is used to predict 1) the pose estimates of all people in a given image and 2) their bounding boxes. The parameters inside the base network are fixed throughout the entire training procedure. At each iteration, the outputs from the base network are fed to the EgoTransformer, which transforms them into the pseudo ground truth cooperative intention labels. These pseudo ground truth labels are then used as a supervisory signal to train our cooperative basketball intention FCN using the sigmoid cross-entropy per-pixel loss of Equation 1.
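The overall procedure can be summarized by the training loop sketched below. This is a hedged PyTorch-style illustration of the scheme described above, not the authors' implementation (which used Caffe [25]); the interfaces of base_net, ego_transformer (standing in for the pseudo ground truth assembly sketched earlier), intention_loss (from the earlier sketch), and the data loader are assumptions, while the SGD settings follow the implementation details reported below.
```python
import torch

def train_cross_model_egosupervision(base_net, intention_net, loader, epochs=1):
    base_net.eval()                         # pretrained pose network, kept frozen
    for p in base_net.parameters():
        p.requires_grad = False
    opt = torch.optim.SGD(intention_net.parameters(), lr=1e-7,
                          momentum=0.9, weight_decay=5e-4)
    for _ in range(epochs):
        for images in loader:               # unlabeled first-person images
            with torch.no_grad():
                poses, boxes = base_net(images)          # assumed interface
                pseudo_gt = ego_transformer(poses, boxes)  # Eq. (2)
            logits = intention_net(images)
            loss = intention_loss(logits, pseudo_gt)       # Eq. (1)
            opt.zero_grad()
            loss.backward()
            opt.step()
```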
Implementation Details
For all of our experiments, we used the Caffe deep learning library [25]. As our base FCN model we used the multi-person pose estimation network from [6]. Inspired by the success of this method, we also used the same architecture for our cooperative basketball intention FCN. During training, we optimized the network for 5000 iterations with a learning rate of 10^-7, a momentum of 0.9, a weight decay of 0.0005, and a batch size of 15. The weights inside the base FCN network were fixed throughout the entire training procedure. To compute the egocentric size prior mask we used σ = 10.
Table 1: Quantitative human study results on our cooperative basketball intention task. We ask 5 subjects to predict the player in the first-person image with whom they think the camera wearer will cooperate after 2 seconds, and compute the accuracy as the fraction of correct responses. Most subjects achieve an accuracy of about 90%; we conjecture that Subject-4 may be less familiar with the basketball game, which would explain the lower accuracy.
Human Subject    Accuracy
Subject-4        0.802
Subject-2        0.895
Subject-3        0.901
Subject-5        0.904
Subject-1        0.927
Cooperative Basketball Intention Dataset
We build upon the dataset from [5], which captures first-person basketball videos of 48 distinct college-level players in an unscripted basketball game. The work in [5] studies a basketball performance assessment problem, and provides 401 training and 343 testing examples of basketball cooperations among players from 10.3 hours of video.
To obtain ground truth labels for the specific players with whom the camera wearer cooperated, we look at the video segments corresponding to all such cooperations. We then identify the player with whom the camera wearer cooperated, go back to the frame about 2 seconds before the cooperation happens, and label that player with a bounding box. The ground truth data is then generated by placing a Gaussian inside the bounding box, scaled according to the height and width of the bounding box.
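The ground truth maps can be generated with a routine like the one below; it is a sketch, and the exact Gaussian width relative to the bounding box is an assumption since the paper only states that the Gaussian is scaled to the box.
```python
import numpy as np

def gaussian_gt(box, H, W):
    """Place an anisotropic Gaussian inside the labeled bounding box.

    box: (x, y, h, w) of the player the camera wearer cooperates with.
    """
    x, y, h, w = box
    cy, cx = y + h / 2.0, x + w / 2.0
    yy, xx = np.mgrid[0:H, 0:W]
    # Standard deviations proportional to box size (proportionality assumed).
    sy, sx = h / 4.0, w / 4.0
    return np.exp(-((yy - cy) ** 2 / (2 * sy ** 2) +
                    (xx - cx) ** 2 / (2 * sx ** 2)))
```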
Once again we note that these labels are used only for evaluation purposes, and to train the other baseline models. In comparison, our method learns to detect the players with whom the camera wearer will cooperate without relying on manually annotated intention labels.
Experimental Results
In this section, we present quantitative and qualitative results for our cooperative basketball intention prediction task. To compute the accuracy of each method, we select the player in the image with the maximum predicted probability as the final prediction, and then compute the fraction of correct predictions across the entire testing dataset.
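Evaluated this way, the metric reduces to a top-1 check per image. A minimal sketch follows; how per-player scores are aggregated from the per-pixel map (e.g., the maximum predicted value inside each detected bounding box) is our assumption.
```python
import numpy as np

def accuracy(per_image_player_scores, per_image_gt_player):
    """per_image_player_scores: list of 1D arrays, one score per candidate player.
    per_image_gt_player: list of indices of the ground truth cooperation partner."""
    correct = [int(np.argmax(scores) == gt)
               for scores, gt in zip(per_image_player_scores, per_image_gt_player)]
    return float(np.mean(correct))
```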
Human Study
First, to see how well humans can predict cooperative basketball intention from first-person images, we conduct a human study with 5 human subjects. Each subject is shown the 343 testing images one at a time, and asked to click on the player in the image with whom he/she thinks the camera wearer will cooperate 2 seconds from now. The accuracy of each subject is then evaluated as the fraction of correct responses. We present these results in Table 1, and demonstrate that this task is not trivial even for humans: most of the subjects achieve about 90% accuracy on our task, which is solid but not perfect. We also point out that we did not collect information on how familiar each subject was with basketball. However, based on the results, we conjecture that Subject-4, who achieved almost 10% lower accuracy than the other subjects, was probably not very familiar with basketball, which contributed to his lower performance. In Figure 5, we also visualize the qualitative examples that human subjects found the most difficult, i.e. in these instances, the predictions among the subjects differed substantially.
Table 2: The quantitative cooperative basketball intention results evaluated as the fraction of correct predictions. We compare our Cross-Model EgoSupervision (CMES) scheme with a variety of supervised methods (marked by ‡). These results indicate that even without using manually annotated intention labels, our method outperforms most supervised methods, and produces almost identical performance to our main baseline "MPP-finetuned".
Method                 Accuracy
DCL [30]               0.222
MPP-pretrained [6]     0.586
DeepLab ‡ [9]          0.644
Pseudo GT              0.665
ResNet-50 ‡ [20]       0.675
PSPNet ‡ [48]          0.695
ResNet-101 ‡ [20]      0.706
DeepLab-v2 ‡ [8]       0.757
MPP-finetuned ‡ [6]    0.778
CMES                   0.775
Quantitative Results
In Table 2, we present the quantitative cooperative basketball intention results of our method and several other baselines. As our baselines, we use a collection of methods that were successfully used for other computer vision tasks such as image classification, semantic segmentation, or saliency detection. These include 1) a Deep Contrast Saliency (DCL) method [30], 2-3) several variations of the highly successful DeepLab semantic segmentation systems [9,8] adapted to our task, 4-5) image classification ResNets [20] adapted to our task, 6) one of the top performing semantic segmentation systems, PSPNet [48], 7-8) a pretrained and a finetuned multi-person pose estimation (MPP) network [6], and 9) the pseudo ground truth obtained from our EgoTransformer. Note that our Cross-Model EgoSupervision (CMES) method is based on the MPP network architecture [6], and thus, as our main baseline we use the "MPP-finetuned" method, which uses the manually labeled bounding box intention labels to infer with whom the camera wearer will interact. In contrast to this baseline, our CMES method is trained only on the automatically generated pseudo ground truth labels. We note that the supervised methods employing manually labeled data are marked with ‡. We now discuss several interesting observations based on these results.
Comparison with the Supervised Methods. Based on the results, we observe that despite not using manually annotated bounding box intention labels, our method outperforms a number of supervised baselines and achieves almost equivalent results to our main baseline "MPP-finetuned", which was trained using manually annotated cooperative intention labels. Thus, these results indicate the effectiveness of our cross-model EgoSupervision scheme.
Comparison with the Pseudo Ground Truth. One interesting and somewhat surprising observation from Table 2 is that our cross-model EgoSupervision model achieves substantially better accuracy than the pseudo ground truth that was used to optimize it. We conjecture that this happens for the following reasons. The pseudo ground truth labels are constructed from three different signals: 1) an egocentric location prior, 2) an egocentric size prior, and 3) an egocentric pose prior. Note that our constructed pseudo ground truth does not incorporate any visual appearance information, i.e. it does not consider how the players look. In contrast, during training, our network learns the visual appearance cues that are indicative of the players with high pseudo ground truth values. Arguably, such visual cues provide a stronger signal for cooperative intention recognition, which then leads to substantially better performance than the pseudo ground truth labels.
Figure 6: The visualization of the activation values inside the second-to-last layer of our trained network (panels: first-person RGB, FCN activations, ground truth). Note that the network produces high activation values around the faces of the players in the camera wearer's field of view. This makes intuitive sense, as facial expressions provide the most informative cues for the cooperative basketball intention task.
Qualitative Results
In Figure 4, we present our qualitative results, which show that in most cases our model successfully learns to predict with whom the camera wearer will cooperate. Furthermore, to gain a better understanding of what the network has learned, in Figure 6 we visualize the activations inside the second-to-last layer of our FCN. Note that our network has high activation values around the faces of the people with whom the camera wearer intends to cooperate. This makes intuitive sense, as the face is probably the most useful cue for recognizing the camera wearer's intention to cooperate.
Ablation Experiments
In Table 3, we present results analyzing the behavior of our EgoTransformer scheme. Earlier we discussed that to implement our EgoTransformer scheme we exploit three characteristics: 1) an egocentric location prior φ_loc, 2) an egocentric size prior φ_size, and 3) an egocentric pose prior φ_pose. We want to investigate how much each of these priors affects 1) the quality of our generated pseudo ground truth data, and 2) the quality of our model trained using such pseudo ground truth. To do this, we run experiments with three baselines, where for each baseline we remove one of the φ_loc, φ_size, or φ_pose components. We denote these three baselines as "no φ_loc", "no φ_size", and "no φ_pose", respectively. Finally, we include the results of our model using the full EgoTransformer scheme.
Based on the results in Table 3, we first observe that each of these components has a significant impact on the quality of the pseudo ground truth that we obtain. Specifically, using our full model yields 9.4% better pseudo ground truth results than the second best baseline. Additionally, note that the network trained on the pseudo ground truth of our full model achieves 4.4% higher accuracy than the second best baseline. These results indicate that each component in our EgoTransformer scheme is crucial for learning a high quality cooperative intention model.
Conclusions
In this work, we present a new task of predicting cooperative basketball intention from a single first-person image. We demonstrate that a first-person image provides strong cues to infer the camera wearer's intentions based on what he/she sees. We use this observation to design a new cross-model EgoSupervision learning scheme that allows us to predict with whom the camera wearer will cooperate, without using manually labeled intention labels. We demonstrate that despite not using such labels, our method achieves similar or even better results than fully supervised methods.
We believe that our proposed cross-model EgoSupervision scheme could be applied to various other first-person vision tasks without the need to manually collect labels for each such task. In the long run, a learning scheme such as ours could effectively replace supervised methods, which require a costly and time-consuming annotation process.
Table 3: The quantitative ablation studies documenting the importance of each component in our EgoTransformer scheme. We separately remove each of φ_loc, φ_size, φ_pose and investigate how the accuracy changes. The second column in the table denotes the accuracy of the pseudo ground truth, while the third column depicts the accuracy of our trained model. Based on these results, we can conclude that each component of our EgoTransformer is essential for accurate cooperative basketball intention prediction.
[1] Y. Aytar and A. Zisserman. Tabula rasa: Model transfer for object category detection. In IEEE International Conference on Computer Vision, 2011.
[2] Gedas Bertasius, Hyun Soo Park, Stella X. Yu, and Jianbo Shi. First-person action-object detection with EgoNet. In Proceedings of Robotics: Science and Systems, July 2017.
[3] Gedas Bertasius, Jianbo Shi, and Lorenzo Torresani. DeepEdge: A multi-scale bifurcated deep network for top-down contour detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
[4] Gedas Bertasius, Jianbo Shi, and Lorenzo Torresani. Semantic segmentation with boundary neural fields. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[5] Gedas Bertasius, Stella X. Yu, Hyun Soo Park, and Jianbo Shi. Am I a baller? Basketball performance assessment from first-person videos. CoRR, abs/1611.05365, 2016.
[6] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multi-person 2D pose estimation using part affinity fields. CoRR, abs/1611.08050, 2016.
[7] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015.
[8] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv:1606.00915, 2016.
[9] Liang-Chieh Chen, Yi Yang, Jiang Wang, Wei Xu, and Alan L. Yuille. Attention to scale: Scale-aware semantic image segmentation. In CVPR, 2016.
[10] Dima Damen, Teesid Leelasawassuk, Osian Haines, Andrew Calway, and Walterio Mayol-Cuevas. You-do, I-learn: Discovering task relevant objects and their modes of interaction from multi-user egocentric video. In Proceedings of the British Machine Vision Conference, BMVA Press, 2014.
[11] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning (ICML), 2014.
[12] Lixin Duan, Dong Xu, and Ivor W. Tsang. Learning with augmented features for heterogeneous domain adaptation. In Proceedings of the International Conference on Machine Learning, pages 711-718, Edinburgh, Scotland, June 2012.
[13] Alireza Fathi, Ali Farhadi, and James M. Rehg. Understanding egocentric activities. In ICCV.
[14] Alireza Fathi, Jessica K. Hodgins, and James M. Rehg. Social interactions: A first-person perspective.
[15] Alireza Fathi, Xiaofeng Ren, and James M. Rehg. Learning to recognize objects in egocentric activities. In CVPR, pages 3281-3288, 2011.
[16] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[17] Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic flow kernel for unsupervised domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2066-2073, 2012.
[18] Ilaria Gori, J. K. Aggarwal, and Michael S. Ryoo. Building unified human descriptors for multi-type activity recognition. CoRR, abs/1507.02558, 2015.
[19] Saurabh Gupta, Judy Hoffman, and Jitendra Malik. Cross modal distillation for supervision transfer. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[21] Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. CoRR, abs/1503.02531, 2015.
[22] Judy Hoffman, Sergio Guadarrama, Eric S. Tzeng, Ronghang Hu, Jeff Donahue, Ross Girshick, Trevor Darrell, and Kate Saenko. LSDA: Large scale detection through adaptation. In Advances in Neural Information Processing Systems 27, pages 3536-3544, 2014.
[23] Judy Hoffman, Saurabh Gupta, and Trevor Darrell. Learning with side information through modality hallucination. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[24] Judy Hoffman, Erik Rodner, Jeff Donahue, Kate Saenko, and Trevor Darrell. Efficient learning of domain-invariant image representations. In International Conference on Learning Representations, 2013.
[25] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014.
[26] Kris Makoto Kitani, Takahiro Okabe, Yoichi Sato, and Akihiro Sugimoto. Fast unsupervised ego-action learning for first-person sports videos. In CVPR, 2011.
[27] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[28] B. Kulis, K. Saenko, and T. Darrell. What you saw is not what you get: Domain adaptation using asymmetric kernel transforms. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1785-1792, 2011.
[29] Yong Jae Lee and Kristen Grauman. Predicting important objects for egocentric video summarization. IJCV, 2015.
[30] G. Li and Y. Yu. Deep contrast learning for salient object detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 478-487, June 2016.
[31] Yin Li, Zhefan Ye, and James M. Rehg. Delving into egocentric actions. In CVPR.
[32] Guosheng Lin, Chunhua Shen, Ian D. Reid, and Anton van den Hengel. Efficient piecewise training of deep structured models for semantic segmentation. CoRR, abs/1504.01013, 2015.
[33] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. CoRR, 2014.
[34] Zheng Lu and Kristen Grauman. Story-driven summarization for egocentric video. In CVPR, 2013.
[35] Minghuang Ma and Kris Kitani. Going deeper into first-person activity recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[36] Hyun Soo Park, Jyh-Jing Hwang, Yedong Niu, and Jianbo Shi. Egocentric future localization. In CVPR, 2016.
[37] Hyun Soo Park, Jyh-Jing Hwang, and Jianbo Shi. Force from motion: Decoding physical sensation from a first person video. In CVPR, 2016.
[38] Hamed Pirsiavash and Deva Ramanan. Detecting activities of daily living in first-person camera views. In CVPR, 2012.
[39] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Neural Information Processing Systems (NIPS), 2015.
[40] Xiaofeng Ren and Chunhui Gu. Figure-ground segmentation improves handled object recognition in egocentric video. In CVPR, 2010.
[41] M. S. Ryoo, Thomas J. Fuchs, Lu Xia, J. K. Aggarwal, and Larry Matthies. Robot-centric activity prediction from first-person videos: What will they do to me? In ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 295-302, 2015.
[42] Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, and Yann LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. http://arxiv.org/abs/1312.6229.
[43] Suriya Singh, Chetan Arora, and C. V. Jawahar. First person action recognition using deep learned descriptors. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[44] Bilge Soran, Ali Farhadi, and Linda Shapiro. Action recognition in the presence of one egocentric and multiple static cameras. 2015.
[45] Shan Su, Jung Pyo Hong, Jianbo Shi, and Hyun Soo Park. Predicting behaviors of basketball players from first person videos. In CVPR, 2017.
[46] Yu-Chuan Su and Kristen Grauman. Detecting engagement in egocentric video. In European Conference on Computer Vision (ECCV), 2016.
[47] Saining Xie and Zhuowen Tu. Holistically-nested edge detection. In ICCV, 2015.
[48] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. CoRR, abs/1612.01105, 2016.
| []
|
[
"Constrained Probabilistic Mask Learning for Task-specific Undersampled MRI Reconstruction",
"Constrained Probabilistic Mask Learning for Task-specific Undersampled MRI Reconstruction"
]
| [
"Tobias Weber [email protected] \nDepartment of Statistics\nLMU Munich\n\n\nDepartment of Radiology\nUniversity Hospital\nLMU Munich\n\n\nMunich Center for Machine Learning (MCML)\n\n"
]
| [
"Department of Statistics\nLMU Munich\n",
"Department of Radiology\nUniversity Hospital\nLMU Munich\n",
"Munich Center for Machine Learning (MCML)\n"
]
| []
| Undersampling is a common method in Magnetic Resonance Imaging (MRI) to subsample the number of data points in k-space and thereby reduce acquisition times at the cost of decreased image quality. In this work, we directly learn the undersampling masks to derive task- and domain-specific patterns. To solve this discrete optimization challenge, we propose a general optimization routine called ProM: a fully probabilistic, differentiable, versatile, and model-free framework for mask optimization that enforces acceleration factors through a convex constraint. Analyzing knee, brain, and cardiac MRI datasets with our method, we discover that different anatomic regions reveal distinct optimal undersampling masks. Furthermore, ProM can create undersampling masks that maximize performance in downstream tasks like segmentation with networks trained on fully-sampled MRIs. Even with extreme acceleration factors, ProM yields reasonable performance while being more versatile than existing methods, paving the way for data-driven all-purpose mask generation. | null | [
"https://export.arxiv.org/pdf/2305.16376v1.pdf"
]
| 258,947,246 | 2305.16376 | 203f78b16206bcc05d5fa966d6fc488f9dbf26be |
Constrained Probabilistic Mask Learning for Task-specific Undersampled MRI Reconstruction
Tobias Weber [email protected]
Department of Statistics
LMU Munich
Department of Radiology
University Hospital
LMU Munich
Munich Center for Machine Learning (MCML)
Constrained Probabilistic Mask Learning for Task-specific Undersampled MRI Reconstruction
MRI undersampling · discrete optimization · bernoulli
Tobias Weber, Michael Ingrisch, Bernd Bischl, and David Rügamer. Abstract. Undersampling is a common method in Magnetic Resonance Imaging (MRI) to subsample the number of data points in k-space and thereby reduce acquisition times at the cost of decreased image quality. In this work, we directly learn the undersampling masks to derive task- and domain-specific patterns. To solve this discrete optimization challenge, we propose a general optimization routine called ProM: a fully probabilistic, differentiable, versatile, and model-free framework for mask optimization that enforces acceleration factors through a convex constraint. Analyzing knee, brain, and cardiac MRI datasets with our method, we discover that different anatomic regions reveal distinct optimal undersampling masks. Furthermore, ProM can create undersampling masks that maximize performance in downstream tasks like segmentation with networks trained on fully-sampled MRIs. Even with extreme acceleration factors, ProM yields reasonable performance while being more versatile than existing methods, paving the way for data-driven all-purpose mask generation.
Introduction
Undersampling is an important tool to speed up the acquisition time in magnetic resonance imaging (MRI) by selectively sampling data points in k-space. This can, however, result in decreased image quality and image artifacts. To address this issue, various techniques have been developed to enhance reconstructions and produce high-quality images from undersampled data. The majority of research focuses on improving the reconstruction from an undersampled MRI, e.g., via compressed sensing [16], or, more recently, deep learning. Examples include transformer-based approaches, e.g., for the reconstruction of radial trajectories [11] and sparse-view CTs [26], or diffusion models [7,8,19]. Further approaches include variational models to jointly synthesize and reconstruct MRI images [6], sharpening networks [10] to counter the absence of high-frequency features in undersampled MRIs, or learnable Fourier interpolation [9]. To account for meta information such as the manufacturer, [15] condition the reconstruction network on side information. Using adversarial methods, [4] is able to significantly enhance images with extreme acceleration factors.
Fig. 1. Visualization of the ProM optimization routine for the ACDC dataset with an acceleration factor of x8 for a 2D mask. The bottom row shows our Bernoulli mask distribution p_θ, where a lighter color implies a higher probability for sampling the respective entry in the cartesian k-space grid. Starting from a randomly initialized distribution, ProM gradually optimizes p_θ to maximize reconstruction quality while simultaneously increasing the sparsity of the masks. The resulting distribution converges to a domain-specific mask with desired acceleration factor that preserves most of the image's quality (top row). The original image is displayed on the right.
Rather than focusing on enhancing the image quality with a predetermined undersampling pattern, the challenge addressed in this paper lies in identifying the optimal mask in terms of reconstruction quality for a given undersampling ratio. In the literature, this is mainly addressed by combined approaches that simultaneously learn a reconstruction network and an undersampling mask [1,[23][24][25]27]. As in this work, the combined approach from [28] considers probabilistic masks but obtains sparsity via pruning. In contrast, and most closely related to our work, [20,21] propose a direct mask optimization scheme based on iterative gradient sampling (IGS), which repeatedly determines k-space elements that contribute the most to a loss criterion. They argue that undersampling masks could be optimized directly but "the undersampled pattern is binary, which cannot be trained by the gradient descent".
In this work, we propose ProM : A fully differentiable probabilistic framework for mask optimization. By framing the search for an optimal mask as a probabilistic optimization problem for a pre-specified acceleration factor, ProM is able to find the optimal undersampling distribution using ideas from relaxed categorical optimization in deep learning research. In particular, this allows tailored results to the given downstream task and anatomic region (Figure 1), making it a versatile and data-driven all-purpose mask generator.
Methods
Following [31], we reformulate the idea of learning sparse neural network weights and introduce a gradient descent routine to learn fully probabilistic undersampling masks. In the following, we describe our routine for a single image and later extend this idea to jointly optimize masks across a whole dataset.
Probabilistic Undersampling Masks
In the following, we define D as the number of elements on the 2-dimensional or 3-dimensional k-space grid and use a vectorized notation for all objects for simplicity. Thus, x_k = (x_k^(1), ..., x_k^(D))^⊤ ∈ C^D depicts an image residing in k-space. Partial acquisition is augmented by applying a binary mask m ∈ {0,1}^D to the fully-sampled x_k element-wise: x_k ⊙ m.
Instead of following a distinct sampling pattern for m, we assume that every element m^(i), i = 1, ..., D, in m is the result of an independent Bernoulli experiment of a random variable M^(i) and distribution defined by
$$p_\theta := P(M = m) = \prod_{i=1}^{D} \theta_i^{\,m^{(i)}} (1 - \theta_i)^{1 - m^{(i)}}, \qquad (1)$$
with M = (M^(1), ..., M^(D))^⊤ and θ_i ∈ (0, 1) the sampling probability of the i-th element. Eq. (1) can be thought of as a Bayesian prior for the image mask. Once a mask is sampled, the translation of the image from the under-sampled k-space to the complex image domain X ⊆ C^D is done by x_c = F^{-1}(x_k ⊙ m), where F^{-1} is the inverse Fourier transform matrix. This procedure amounts to a simple zero-filling reconstruction strategy. We strive to allow the optimization of the (posterior) sampling distribution (i.e., after accounting for the specific reconstruction task and data) for arbitrary differentiable loss functions L used in general vision deep learning. As these loss functions are typically designed to only work in real-valued space, we transform x_c into a real-valued representation using its magnitude image x̂ = |x_c| ∈ R^D. The quality of the reconstruction can then be assessed by L(x̂, x), where x is the fully-sampled original image. Given a data point x, our approach directly maximizes the posterior (or equally, minimizes the posterior empirical risk) by finding θ as
$$\arg\min_\theta \int \mathcal{L}(\hat{x}, x)\, \mathrm{d}p_\theta = \arg\min_\theta \mathbb{E}_{m \sim p_\theta}\big[\mathcal{L}(\hat{x}, x)\big] \approx \arg\min_\theta \frac{1}{L} \sum_{l=1}^{L} \mathcal{L}(\hat{x}_l, x), \qquad (2)$$
where we approximate the expectation using L Monte-Carlo samples x̂_l.
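To make Eqs. (1)–(2) concrete, the following minimal PyTorch sketch (not taken from the authors' code; the toy shapes, the MSE choice, and the helper names are assumptions) draws Bernoulli masks from p_θ, performs the zero-filled reconstruction x̂ = |F^{-1}(x_k ⊙ m)|, and averages the loss over L Monte-Carlo samples.

```python
import torch

def zero_filled_recon(x_k: torch.Tensor, m: torch.Tensor) -> torch.Tensor:
    """x_k: complex k-space data (H, W); m: binary mask (H, W). Returns the magnitude image x_hat."""
    x_c = torch.fft.ifft2(x_k * m)   # inverse Fourier transform of the masked (zero-filled) k-space
    return x_c.abs()

def mc_loss(theta: torch.Tensor, x_k: torch.Tensor, x: torch.Tensor, L: int = 4) -> torch.Tensor:
    """Monte-Carlo estimate of E_{m ~ p_theta}[ MSE(x_hat, x) ], cf. Eq. (2)."""
    losses = []
    for _ in range(L):
        m = torch.bernoulli(theta)   # one independent Bernoulli draw per k-space element, Eq. (1)
        losses.append(torch.mean((zero_filled_recon(x_k, m) - x) ** 2))
    return torch.stack(losses).mean()

# toy usage on random data
x = torch.rand(64, 64)                          # stand-in for a fully-sampled magnitude image
x_k = torch.fft.fft2(x.to(torch.complex64))     # emulated k-space
theta = torch.full((64, 64), 0.25)              # sampling probability per k-space element
print(mc_loss(theta, x_k, x).item())
```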
Without further constraints, the optimal solution of Eq. (2) is θ = 1, i.e., the fully-sampled image. We, therefore, introduce an undersampling constraint. This can be done similar to [31] by limiting the sum of all probabilities in p_θ to a pre-specified value S, i.e., Σ_{i=1}^D θ_i ≤ S. Practically speaking, S will result in the number of sampled k-space elements, as Σ_{i=1}^D θ_i is the expected value of ||m||_0. Given an acceleration factor α, we can define S = ⌊D/α⌋ and our final objective:
$$\arg\min_\theta \mathbb{E}_{m \sim p_\theta}\big[\mathcal{L}(\hat{x}, x)\big] \quad \text{s.t.} \quad \sum_{i=1}^{D} \theta_i \leq S \ \text{ and } \ S \in \{0, \dots, D\}. \qquad (3)$$
The constraint in Eq. 3, which can equally be expressed as an ℓ 1 -norm penalty for θ, has an intrinsic affinity to be sparse (c.f. Figure 2a).
Differentiability over Reparameterization
So far, the proposed approach is fully differentiable including F^{-1} and |·|, except for the stochastic sampling of m. In order to use modern autograd frameworks for stochastic masks, we apply the Gumbel-Softmax trick [12] tailored to the Bernoulli distribution [31]. Let ρ := log(θ / (1 − θ)) be the log odds-ratio for θ. Then, a "soft mask" m_soft allowing for differentiability can be obtained by sampling
$$m_{\text{soft}} \sim \mathbb{1}\big(\rho + g_1 - g_0 \geq 0\big) \approx \sigma\big((\rho + g_1 - g_0)\,\tau^{-1}\big), \qquad (4)$$
where g 1 , g 0 are independent and identically distributed samples from a Gumbel(0,1) distribution and the indicator function 1 is relaxed using a sigmoid function σ. A temperature parameter τ controls the softness of the discrete approximation and is annealed during optimization. Here, stochasticity is rerouted over the Gumbel samples and thus a computational graph is able to propagate gradients to θ. As the mask m needs to be strictly binary, which is not the case for m soft , we adopt the straight-through Gumbel estimator trick [12], yielding
$$m = \mathbb{1}(m_{\text{soft}} \geq 0.5) - \mathrm{sg}[m_{\text{soft}}] + m_{\text{soft}}, \qquad (5)$$
where 1 is applied element-wise and returns our final binary mask sample. sg denotes the stop gradient operation, which blocks gradients from backpropagation. In other words, Eq. (5) yields discrete values while we obtain gradients for its soft approximation.
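A minimal sketch of the straight-through Gumbel estimator of Eqs. (4)–(5) could look as follows; the clamping of θ and of the uniform draws is a numerical safeguard added here and is not part of the formulation above.

```python
import torch

def sample_mask_st(theta: torch.Tensor, tau: float) -> torch.Tensor:
    """Straight-through Gumbel estimator for a Bernoulli mask, cf. Eqs. (4)-(5)."""
    eps = 1e-6
    theta = theta.clamp(eps, 1.0 - eps)
    rho = torch.log(theta) - torch.log1p(-theta)                      # log odds-ratio
    u0 = torch.rand_like(theta).clamp(eps, 1.0 - eps)
    u1 = torch.rand_like(theta).clamp(eps, 1.0 - eps)
    g0, g1 = -torch.log(-torch.log(u0)), -torch.log(-torch.log(u1))   # two Gumbel(0, 1) draws
    m_soft = torch.sigmoid((rho + g1 - g0) / tau)                     # relaxed "soft mask", Eq. (4)
    m_hard = (m_soft >= 0.5).float()
    # forward pass returns the binary mask, backward pass uses the gradients of m_soft, Eq. (5)
    return m_hard - m_soft.detach() + m_soft

theta = torch.full((8, 8), 0.3, requires_grad=True)
m = sample_mask_st(theta, tau=0.5)
m.sum().backward()                                                    # gradients reach theta
print(m.sum().item(), theta.grad.abs().sum().item() > 0)
```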
Constrained Optimization via Projected Gradient Descent
The optimization problem in Eq. (3) cannot be solved effectively with standard gradient descent. Instead, we follow [31] and use a projected gradient approach by first updating the unconstrained parameter vector θ̃ = [θ̃_1, ..., θ̃_D] using θ̃ = θ − η ∇_θ E_{m∼p_θ}[L(x̂, x)] with η being the learning rate, and then projecting θ̃ into the space of valid elements by solving
$$\sum_{i=1}^{D} \min\big(1, \max(0, \tilde{\theta}_i - \lambda)\big) = S \qquad (6)$$
for λ ∈ R, yielding
$$\theta = \min\big(1, \max(0, \tilde{\theta} - \max(0, \lambda)\mathbb{1})\big). \qquad (7)$$
A solution to Eq. (6) can be obtained using a convex solver or root-finding method such as bisection search. To foster exploration at the beginning of the training and allow for exploitation at later stages, we anneal S during optimization. First, exploration iterations with S = D allow for unrestricted optimization. Then, an annealing phase following the schedule of [32] decreases S to meet the desired acceleration factor. Finally, the exploitation phase optimizes θ under the nominal constrained S (c.f. Figure 2b). Our procedure is summarized in Algorithm 1 for a single image x k . The same procedure can be applied to a whole dataset by iteratively taking random batches for the optimization of θ. Note that the only trainable parameters in ProM are the D parameters θ. Optimization can thus be done in a matter of seconds (single image) or a few minutes (dataset).
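The projection of Eqs. (6)–(7) amounts to a monotone root-finding problem in λ, which a simple bisection solves; the sketch below is an illustration under bracket and iteration-count choices of our own, not the authors' implementation.

```python
import torch

def project_theta(theta_tilde: torch.Tensor, S: float, iters: int = 50) -> torch.Tensor:
    """Project theta_tilde onto {0 <= theta <= 1, sum(theta) <= S}, cf. Eqs. (6)-(7)."""
    clipped = theta_tilde.clamp(0.0, 1.0)
    if clipped.sum() <= S:                        # constraint already met: lambda = 0
        return clipped
    # f(lam) = sum_i min(1, max(0, theta_tilde_i - lam)) - S is non-increasing in lam,
    # so the root of Eq. (6) can be bracketed and found by bisection.
    lo = torch.tensor(0.0)
    hi = theta_tilde.max().clamp(min=1.0)         # at lam = max(theta_tilde) the sum is zero
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (theta_tilde - mid).clamp(0.0, 1.0).sum() > S:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return (theta_tilde - lam).clamp(0.0, 1.0)    # Eq. (7)

theta_tilde = torch.rand(16, 16) * 2.0 - 0.5      # arbitrary unconstrained update
theta = project_theta(theta_tilde, S=16 * 16 / 8)  # acceleration factor alpha = 8
print(theta.sum().item())                          # approximately S
```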
Experiments
We investigate the performance of ProM using slices from ACDC [5], BraTS [2,3,17] and the fastMRI Knee [29] dataset:
ACDC are cardiac MRIs with 100 train and 50 test subjects. We extract the end-diastolic frame in 256px resolution including segmentation labels of the left and right ventricular cavity as well as the left ventricular myocardium, yielding 548 train and 338 test slices. k-space data is emulated via Fourier transform.
BraTS contains brain MRIs with T2-FLAIR, T1-, T1Gd- and T2-weighted modalities. The goal of the segmentation is to determine the classes of whole, core, and enhancing tumors. The dataset is split into 387 train and 97 test subjects. We extract 19,350 train and 4,850 test slices using slice indices between 60 and 110 with 256px resolution. k-space data is emulated as in ACDC.
fastMRI Knee includes raw k-space data of single coil knee MRI. To focus on pathologies, we extract the annotated subset of fastMRI+ [30] amounting to 8,057 train and 1,524 test slices with a center-crop to 320px resolution.
For ProM we use 2500 iterations with a learning rate of 0.01 in the Adam [13] optimizer. This configuration provided an ideal balance between runtime and convergence. We use batches of size 32 and draw L = 4 Bernoulli samples for each sample, yielding a total batch size of 128. We found that similar to [14], a low number of Monte Carlo samples is sufficient if the batch size is large. The temperature τ follows a linearly decreasing schedule from 1 to 0.03 in the last step, which is in line with [31]. For reconstruction, we use the mean squared error for L. The final masks are obtained by applying Bayesian model averaging on 10 different ProM solutions. Optimization is done in PyTorch v.1.13 [18] on an NVIDIA A100 GPU. We compare our approach against an equispaced mask with fully-sampled central region of 4% [29] and random offset, a 2D variable density Gaussian mask, and the most recent mask optimization approach Iterative Gradients Sampling (IGS; [21]). Reported metrics represent the mean across 10 randomly initialized runs per method.
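For orientation, the reported settings can be collected as below; the dictionary layout and the helper name `temperature` are ours, and only the numerical values are taken from the text.

```python
# Optimisation settings reported in the text (values only; structure is ours).
PROM_CONFIG = dict(
    iterations=2500,   # gradient steps
    lr=0.01,           # Adam learning rate
    batch_size=32,     # images per batch
    mc_samples=4,      # L Bernoulli draws per image -> effective batch of 128
    tau_start=1.0,     # initial Gumbel-Softmax temperature
    tau_end=0.03,      # temperature in the last step
)

def temperature(step: int, cfg: dict = PROM_CONFIG) -> float:
    """Linearly decreasing Gumbel-Softmax temperature tau over the run."""
    frac = step / max(cfg["iterations"] - 1, 1)
    return cfg["tau_start"] + frac * (cfg["tau_end"] - cfg["tau_start"])

print(temperature(0), temperature(PROM_CONFIG["iterations"] - 1))  # 1.0 ... 0.03
```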
Domain-specific Masks
Each anatomic region (as a composition of different elemental shapes) yields a distinct k-space representation. Our first experiment not only proves that this is the case in practice but also shows that a data-driven optimization routine for masks such as ProM is indeed necessary to facilitate optimal reconstruction (c.f. Figure 3 for the results of ProM trained on the three different datasets). For example, the cardiac ACDC consists dominantly of elliptic primitives, which results in a completely different optimal mask than fastMRI Knee with a lot of vertical lines and some horizontal elements in image space.
Reconstruction Quality
To assess the reconstruction quality of ProM we evaluate the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and normalized mean squared error (NMSE) for acceleration factors α from x4 to x32 on the test set of fastMRI Knee. Results (Table 1) show that the IGS method works well; the additional flexibility of ProM to operate in 2D, however, allows to obtain superior results. This advantage increases for higher α. As an instance, in the case of factor x4, the SSIM / NMSE of ProM is 5.00% / 19.23% better compared to IGS, with the improvement increasing to 15.64% / 134.78% for factor x32. Moreover, when compared to more conventional masking approaches, the gain in quality of ProM is even larger. In addition, Figure 4 shows that the ProM mask does not introduce noise or artifacts, unlike the other evaluated methods. However, even with a custom-tailored undersampling strategy, high-frequency details are omitted to a certain degree.
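As a reference for the reported metrics, PSNR and NMSE of a single magnitude image can be computed as sketched below (NumPy only); taking the reference maximum as the peak value is an assumption, and SSIM would typically come from a library such as scikit-image.

```python
import numpy as np

def psnr(x: np.ndarray, x_hat: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB; the peak is taken as the maximum of the reference image."""
    mse = np.mean((x - x_hat) ** 2)
    return float(20.0 * np.log10(x.max()) - 10.0 * np.log10(mse))

def nmse(x: np.ndarray, x_hat: np.ndarray) -> float:
    """Normalised mean squared error ||x - x_hat||^2 / ||x||^2."""
    return float(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

x = np.random.rand(320, 320)                      # toy reference image
x_hat = x + 0.01 * np.random.randn(320, 320)      # toy reconstruction
print(psnr(x, x_hat), nmse(x, x_hat))
```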
Zero-shot Undersampled Segmentation
Visual reconstruction quality does not necessarily correlate with performance in downstream tasks such as segmentation [20]. We now investigate the quality of ProM in a transfer learning scenario, where we apply a frozen segmentation network trained on fully-sampled MRIs and learn a mask to maximize its performance with undersampled MRIs. For this, we choose a standard U-net [22] implementation and training routine with channel multipliers of (16, 32, 64, 128, 256). We substitute L and use the trained segmentation network net as a proxy paired with a combination of Dice and cross-entropy loss L_seg, i.e. L(x̂, x) becomes L_seg(net(x̂), x_seg), where x_seg is the segmentation label of x. Additionally, for the task of segmentation, we introduce a 1D variant of ProM to better understand its gain in performance compared to other methods.
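A sketch of the substituted objective L_seg(net(x̂), x_seg) is given below, assuming an equally weighted soft-Dice plus cross-entropy combination (the exact weighting is not specified here) and a frozen network `net` that outputs per-class logits.

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Combined soft-Dice + cross-entropy segmentation loss.
    logits: (B, C, H, W) raw network outputs; target: (B, H, W) integer class labels."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(0, 2, 3))
    denom = probs.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
    dice = (2.0 * inter + eps) / (denom + eps)        # per-class soft Dice
    return ce + (1.0 - dice.mean())

def seg_proxy_loss(net, x_hat: torch.Tensor, x_seg: torch.Tensor) -> torch.Tensor:
    # L(x_hat, x) -> L_seg(net(x_hat), x_seg): the frozen U-net acts as the task-specific proxy
    return dice_ce_loss(net(x_hat), x_seg)

# toy check with random tensors in place of a trained U-net output
logits = torch.randn(2, 4, 64, 64)
labels = torch.randint(0, 4, (2, 64, 64))
print(dice_ce_loss(logits, labels).item())
```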
The segmentation performance is evaluated in Table 2 via Dice score and Intersection-over-Union (IuO). Examples are visualized in Figure 5. The fullysampled MRI achieve a macro-averaged Dice score of 0.855 and an IoU of 0.763 for ACDC as well as 0.772 and 0.710 for BraTS, respectively. While 1D ProM does not surpass the performance of IGS, our contention is that 1D ProM would benefit from optimized parameters specifically tailored for 1D masking. 2D ProM achieves competitive segmentation performance for α x8 and is overall notably better for extreme α such as x32 with a Dice score / IoU of 0.727 / 0.606 in ACDC and 0.706 / 0.608 in BraTS. The Gaussian mask appears to be an efficient and simple method to achieve competitive performance for smaller α. Note that the ProM mask determined by L seg differs substantially from the one obtained to maximize reconstruction quality (Figure 3), emphasizing the fact that an optimal mask does not only vary with the dataset but also with the task.
Conclusion
This paper proposes ProM as a general framework for data-driven and probabilistic undersampling mask learning. We evaluated ProM on cardiac, brain, and knee MRIs. Our approach shows promising performance, even when reaching acceleration factors of x16 and x32. In clinical practice, ProM would allow for significantly reduced acquisition times, e.g., for high-speed interventions or pathology localization. Our method further allows deriving these optimal masks in a matter of seconds from a single data sample without requiring a large computing infrastructure. As there are no shape requirements for ProM, future research directions could investigate patterns for radial masks or the reconstruction of multi-coil images. Additionally, custom masks for certain scanner parameters, acquisition protocols, or manufacturers could be derived.
Thus, for λ ≥ 0,
$$g(\lambda) = \mathcal{L}(\theta, \lambda) = \tfrac{1}{2}\left\|[s - \lambda\mathbb{1}]_- + [\tilde{\theta} - (\lambda+1)\mathbb{1}]_+\right\|^2 + \lambda(\mathbb{1}^\top\tilde{\theta} - \theta) - \tfrac{D}{2}\lambda^2 \qquad (12)$$
$$= \tfrac{1}{2}\left\|[s - \lambda\mathbb{1}]_-\right\|^2 + \tfrac{1}{2}\left\|[\tilde{\theta} - (\lambda+1)\mathbb{1}]_+\right\|^2 + \lambda(\mathbb{1}^\top\tilde{\theta} - \theta) - \tfrac{D}{2}\lambda^2 \qquad (13)$$
and
$$g'(\lambda) = \mathbb{1}^\top[\lambda\mathbb{1} - \tilde{\theta}]_+ + \mathbb{1}^\top[(\lambda+1)\mathbb{1} - \theta]_- + (\mathbb{1}^\top\tilde{\theta} - \theta) - D\lambda \qquad (15)$$
$$= \mathbb{1}^\top \min\big(1, \max(0, \tilde{\theta} - \lambda\mathbb{1})\big) - S = \Big[\textstyle\sum_{i=1}^{D} \min\big(1, \max(0, \tilde{\theta}_i - \lambda)\big)\Big] - S. \qquad (16)$$
With g'(λ) being a monotone function, a solution λ*_1 of g'(λ) = 0 can be obtained by e.g. a convex solver or a bisection method. The maximum of g(λ) with λ ≥ 0 is at λ*_2 = max(0, λ*_1). Eventually,
$$\theta^* = \mathbb{1}_{\,s - \lambda_2^*\mathbb{1} \,\geq\, \mathbb{1}} + (s - \lambda_2^*\mathbb{1})\,\mathbb{1}_{\,\mathbb{1} \,>\, s - \lambda_2^*\mathbb{1} \,>\, 0} \qquad (18)$$
$$= \min\big(1, \max(0, \tilde{\theta} - \lambda_2^*\mathbb{1})\big) \qquad (19)$$
$$= \min\big(1, \max(0, \tilde{\theta} - \max(0, \lambda_1^*)\mathbb{1})\big). \qquad (20)$$
Fig. 2. Optimization phases of ProM for a fastMRI Knee sample with acceleration factor x16. (a) Histograms of probabilities in θ during different optimization phases: starting from a random initialization, the sparsification constraint leads to a distribution with probabilities that tend to be close to zero or one. (b) Sparsity versus reconstruction quality: the number of active mask elements in m is bounded by the continuously annealed dense rate S/D. During optimization, the average SSIM metric (measuring reconstruction) plummets when θ is limited by the constraint (at around 20% progress) but roughly stays constant or even slightly improves while the number of active elements is further restricted.
Fig. 3. Optimized undersampling 2D mask distributions for ACDC, BraTS and fastMRI Knee (rows) with varying acceleration factors (columns). Different anatomic regions have a distinct optimal distribution.
Fig. 4. Reconstructions of a fastMRI Knee sample. Equidistant spacing and IGS display strong infolding artifacts, which are less pronounced with Gaussian sampling. The reconstruction with the data-driven mask of ProM is much closer to the fully-sampled image and reduces image noise as well as artifacts very clearly.
Fig. 5. Segmentation with a pre-trained U-net using L_seg in x16 accelerated ProM. The contours in ACDC correspond to the left and right ventricular cavity as well as the left ventricular myocardium. BraTS markings in the T1Gd sample imply whole, core and enhancing tumor.
Table 1. Quality of reconstruction for the fastMRI Knee dataset: PSNR (↑), SSIM (↑) and NMSE (↓) at acceleration factors x4, x8, x16 and x32.
Table 2. Segmentation metrics using a pre-trained U-net on ACDC and BraTS, showing that ProM (2D) excels especially for extreme acceleration factors.

                      ACDC                                         BraTS
             Equi.   Gauss.  IGS     ProM(1D) ProM(2D)    Equi.   Gauss.  IGS     ProM(1D) ProM(2D)
x8   Dice ↑  0.671   0.847   0.828   0.762    0.839       0.596   0.733   0.716   0.646    0.739
     IoU  ↑  0.546   0.752   0.726   0.650    0.742       0.489   0.638   0.619   0.542    0.646
x16  Dice ↑  0.645   0.644   0.745   0.717    0.789       0.589   0.597   0.651   0.537    0.735
     IoU  ↑  0.517   0.534   0.627   0.599    0.679       0.481   0.494   0.546   0.426    0.640
x32  Dice ↑  0.644   0.399   0.592   0.587    0.727       0.580   0.315   0.537   0.483    0.706
     IoU  ↑  0.517   0.301   0.466   0.460    0.606       0.472   0.226   0.428   0.374    0.608
Acknowledgments. This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) as part of BERD@NFDI - grant number 460037581. The authors gratefully acknowledge LMU Klinikum for providing computing resources on their Clinical Open Research Engine (CORE).

Algorithm 1 (per-iteration steps of ProM): compute the batched loss and apply gradients to θ; anneal the budget towards the target acceleration factor, S ← anneal(α); and project θ back onto the constraint set, θ ← project(θ, S), via Eqs. (6) and (7); end for. The S annealing schedule takes as inputs the acceleration factor α, the annealing start iteration i_min, the annealing end iteration i_max, the current iteration i_cur, and the target dense rate d_target = 1/α.

Proof of Constraint Projection. The proof for Eq. (6) and Eq. (7) is taken and adapted from [31]. Transforming the updated parameters θ̃ ∈ R^D into θ, which fulfills the sparsification constraint, can be described as a least-squares convex problem, which can be solved by the Lagrangian multiplier method with λ ≥ 0 and 0 ≤ θ_i ≤ 1. Minimizing w.r.t. θ results in θ = 1_{s − λ1 ≥ 1} + (s − λ1) 1_{1 > s − λ1 > 0}.
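Returning to the S annealing schedule described above, a small sketch is given below; the cubic shape of the ramp mirrors the pruning schedule of [32], but the exact functional form, the example phase boundaries, and the helper name `anneal_S` are assumptions rather than released code.

```python
def anneal_S(i_cur: int, i_min: int, i_max: int, alpha: float, D: int) -> float:
    """Annealed sparsity budget S at iteration i_cur.

    Exploration  (i_cur < i_min):          unrestricted, S = D.
    Annealing    (i_min <= i_cur < i_max): dense rate ramps from 1 down to d_target = 1/alpha.
    Exploitation (i_cur >= i_max):         nominal constraint S = D / alpha.
    """
    d_target = 1.0 / alpha
    if i_cur < i_min:
        return float(D)
    if i_cur >= i_max:
        return d_target * D
    frac = (i_cur - i_min) / float(i_max - i_min)
    dense_rate = d_target + (1.0 - d_target) * (1.0 - frac) ** 3   # cubic ramp in the spirit of [32]
    return dense_rate * D

# example with illustrative phase boundaries: acceleration factor 8 on a 256x256 grid
print([round(anneal_S(i, 500, 2000, 8, 256 * 256)) for i in (0, 500, 1250, 2000, 2499)])
```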
Learning-based optimization of the under-sampling pattern in MRI. C D Bahadir, A V Dalca, M R Sabuncu, Information Processing in Medical Imaging: 26th International Conference, IPMI. Bahadir, C.D., Dalca, A.V., Sabuncu, M.R.: Learning-based optimization of the under-sampling pattern in MRI. In: Information Processing in Medical Imaging: 26th International Conference, IPMI. pp. 780-792 (2019)
Advancing the cancer genome atlas glioma mri collections with expert segmentation labels and radiomic features. S Bakas, H Akbari, A Sotiras, M Bilello, M Rozycki, Scientific data. 41Bakas, S., Akbari, H., Sotiras, A., Bilello, M., Rozycki, M., et al.: Advancing the cancer genome atlas glioma mri collections with expert segmentation labels and radiomic features. Scientific data 4(1), 1-13 (2017)
Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the brats challenge. S Bakas, M Reyes, A Jakab, S Bauer, M Rempfler, arXiv:1811.02629[cs,stat]Bakas, S., Reyes, M., Jakab, A., Bauer, S., Rempfler, M., et al.: Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the brats challenge. arXiv:1811.02629 [cs, stat] (2018)
Towards ultrafast mri via extreme k-space undersampling and superresolution. A Belov, J Stadelmann, S Kastryulin, D V Dylov, Medical Image Computing and Computer Assisted Intervention, MICCAI. Belov, A., Stadelmann, J., Kastryulin, S., Dylov, D.V.: Towards ultrafast mri via extreme k-space undersampling and superresolution. In: Medical Image Computing and Computer Assisted Intervention, MICCAI. pp. 254-264 (2021)
Deep learning techniques for automatic mri cardiac multi-structures segmentation and diagnosis: is the problem solved?. O Bernard, A Lalande, C Zotti, F Cervenansky, X Yang, P A Heng, I Cetin, K Lekadir, O Camara, M A G Ballester, IEEE Transactions on Medical Imaging. 3711Bernard, O., Lalande, A., Zotti, C., Cervenansky, F., Yang, X., Heng, P.A., Cetin, I., Lekadir, K., Camara, O., Ballester, M.A.G., et al.: Deep learning techniques for automatic mri cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE Transactions on Medical Imaging 37(11), 2514-2525 (2018)
A learnable variational model for joint multimodal MRI reconstruction and synthesis. W Bian, Q Zhang, X Ye, Y Chen, Medical Image Computing and Computer Assisted Intervention, MICCAI. Bian, W., Zhang, Q., Ye, X., Chen, Y.: A learnable variational model for joint multimodal MRI reconstruction and synthesis. In: Medical Image Computing and Computer Assisted Intervention, MICCAI. pp. 354-364 (2022)
Score-based diffusion models for accelerated mri. H Chung, J C Ye, Medical Image Analysis. 80102479Chung, H., Ye, J.C.: Score-based diffusion models for accelerated mri. Medical Image Analysis 80, 102479 (2022)
Adaptive diffusion priors for accelerated mri reconstruction. S U Dar, Ş Öztürk, Y Korkmaz, G Elmas, M Özbey, arXiv:2207.05876cs, eessDar, S.U.,Öztürk, Ş., Korkmaz, Y., Elmas, G.,Özbey, M., et al.: Adaptive diffusion priors for accelerated mri reconstruction. arXiv:2207.05876 [cs, eess] (2022)
Mri reconstruction by completing under-sampled k-space data with learnable fourier interpolation. Q Ding, X Zhang, Medical Image Computing and Computer Assisted Intervention, MICCAI. Ding, Q., Zhang, X.: Mri reconstruction by completing under-sampled k-space data with learnable fourier interpolation. In: Medical Image Computing and Computer Assisted Intervention, MICCAI. pp. 676-685 (2022)
Invertible sharpening network for mri reconstruction enhancement. S Dong, E Z Chen, L Zhao, X Chen, Y Liu, T Chen, S Sun, Medical Image Computing and Computer Assisted Intervention, MICCAI. Dong, S., Chen, E.Z., Zhao, L., Chen, X., Liu, Y., Chen, T., Sun, S.: Invertible sharpening network for mri reconstruction enhancement. In: Medical Image Com- puting and Computer Assisted Intervention, MICCAI. pp. 582-592 (2022)
A projection-based k-space transformer network for undersampled radial mri reconstruction with limited training subjects. C Gao, S F Shih, J P Finn, X Zhong, Medical Image Computing and Computer Assisted Intervention, MICCAI. Gao, C., Shih, S.F., Finn, J.P., Zhong, X.: A projection-based k-space transformer network for undersampled radial mri reconstruction with limited training subjects. In: Medical Image Computing and Computer Assisted Intervention, MICCAI. pp. 726-736 (2022)
Categorical reparameterization with gumbel-softmax. E Jang, S Gu, B Poole, 5th International Conference on Learning Representations, ICLR. Jang, E., Gu, S., Poole, B.: Categorical reparameterization with gumbel-softmax. In: 5th International Conference on Learning Representations, ICLR (2017)
Adam: A method for stochastic optimization. D P Kingma, J Ba, 3rd International Conference on Learning Representations. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: 3rd Inter- national Conference on Learning Representations, ICLR (2015)
Auto-encoding variational bayes. D P Kingma, M Welling, 2nd International Conference on Learning Representations, ICLR. Kingma, D.P., Welling, M.: Auto-encoding variational bayes. In: 2nd International Conference on Learning Representations, ICLR (2014)
Undersampled mri reconstruction with side information-guided normalisation. X Liu, J Wang, C Peng, S S Chandra, F Liu, S K Zhou, Medical Image Computing and Computer Assisted Intervention, MICCAI. Liu, X., Wang, J., Peng, C., Chandra, S.S., Liu, F., Zhou, S.K.: Undersampled mri reconstruction with side information-guided normalisation. In: Medical Image Computing and Computer Assisted Intervention, MICCAI. pp. 323-333 (2022)
Sparse MRI: The application of compressed sensing for rapid mr imaging. M Lustig, D Donoho, J M Pauly, Magnetic Resonance in Medicine. 586Lustig, M., Donoho, D., Pauly, J.M.: Sparse MRI: The application of compressed sensing for rapid mr imaging. Magnetic Resonance in Medicine 58(6), 1182-1195 (2007)
The multimodal brain tumor image segmentation benchmark (BRATS). B H Menze, A Jakab, S Bauer, J Kalpathy-Cramer, K Farahani, J Kirby, Y Burren, N Porz, J Slotboom, R Wiest, IEEE Transactions on Medical Imaging. 3410Menze, B.H., Jakab, A., Bauer, S., Kalpathy-Cramer, J., Farahani, K., Kirby, J., Burren, Y., Porz, N., Slotboom, J., Wiest, R., et al.: The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging 34(10), 1993-2024 (2014)
Pytorch: An imperative style, highperformance deep learning library. A Paszke, S Gross, F Massa, A Lerer, J Bradbury, G Chanan, T Killeen, Z Lin, N Gimelshein, L Antiga, Advances in neural information processing systems. 32Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: Pytorch: An imperative style, high- performance deep learning library. Advances in neural information processing sys- tems 32 (2019)
Towards performant and reliable undersampled mr reconstruction via diffusion model sampling. C Peng, P Guo, S K Zhou, V M Patel, R Chellappa, Medical Image Computing and Computer Assisted Intervention, MICCAI. Peng, C., Guo, P., Zhou, S.K., Patel, V.M., Chellappa, R.: Towards performant and reliable undersampled mr reconstruction via diffusion model sampling. In: Medical Image Computing and Computer Assisted Intervention, MICCAI. pp. 623-633 (2022)
Optimal MRI undersampling patterns for ultimate benefit of medical vision tasks. A Razumov, O Y Rogov, D V Dylov, arXiv:2108.04914cs, eessRazumov, A., Rogov, O.Y., Dylov, D.V.: Optimal MRI undersampling patterns for ultimate benefit of medical vision tasks. arXiv:2108.04914 [cs, eess] (2021)
Optimal mri undersampling patterns for pathology localization. A Razumov, O Y Rogov, D V Dylov, Medical Image Computing and Computer Assisted Intervention, MICCAI. Razumov, A., Rogov, O.Y., Dylov, D.V.: Optimal mri undersampling patterns for pathology localization. In: Medical Image Computing and Computer Assisted Intervention, MICCAI. pp. 768-779 (2022)
U-net: Convolutional networks for biomedical image segmentation. O Ronneberger, P Fischer, T Brox, Medical Image Computing and Computer-Assisted Intervention, MICCAI. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomed- ical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention, MICCAI. pp. 234-241 (2015)
B-spline parameterized joint optimization of reconstruction and k-space trajectories (bjork) for accelerated 2d mri. G Wang, T Luo, J F Nielsen, D C Noll, J A Fessler, IEEE Transactions on Medical Imaging. 419Wang, G., Luo, T., Nielsen, J.F., Noll, D.C., Fessler, J.A.: B-spline parameterized joint optimization of reconstruction and k-space trajectories (bjork) for accelerated 2d mri. IEEE Transactions on Medical Imaging 41(9), 2318-2330 (2022)
Pilot: Physics-informed learned optimized trajectories for accelerated mri. T Weiss, O Senouf, S Vedula, O Michailovich, M Zibulevsky, A Bronstein, arXiv:1909.05773Weiss, T., Senouf, O., Vedula, S., Michailovich, O., Zibulevsky, M., Bronstein, A.: Pilot: Physics-informed learned optimized trajectories for accelerated mri. arXiv:1909.05773 (2019)
Joint learning of cartesian under sampling andre construction for accelerated mri. T Weiss, S Vedula, O Senouf, O Michailovich, M Zibulevsky, A Bronstein, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEEWeiss, T., Vedula, S., Senouf, O., Michailovich, O., Zibulevsky, M., Bronstein, A.: Joint learning of cartesian under sampling andre construction for accelerated mri. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 8653-8657. IEEE (2020)
A transformer-based iterative reconstruction model for sparse-view ct reconstruction. W Xia, Z Yang, Q Zhou, Z Lu, Z Wang, Y Zhang, Medical Image Computing and Computer Assisted Intervention, MICCAI. Xia, W., Yang, Z., Zhou, Q., Lu, Z., Wang, Z., Zhang, Y.: A transformer-based iterative reconstruction model for sparse-view ct reconstruction. In: Medical Image Computing and Computer Assisted Intervention, MICCAI. pp. 790-800 (2022)
Puert: Probabilistic under-sampling and explicable reconstruction network for cs-mri. J Xie, J Zhang, Y Zhang, X Ji, IEEE Journal of Selected Topics in Signal Processing. 164Xie, J., Zhang, J., Zhang, Y., Ji, X.: Puert: Probabilistic under-sampling and expli- cable reconstruction network for cs-mri. IEEE Journal of Selected Topics in Signal Processing 16(4), 737-749 (2022)
2d probabilistic undersampling pattern optimization for mr image reconstruction. S Xue, Z Cheng, G Han, C Sun, K Fang, Medical Image Analysis. 77102346Xue, S., Cheng, Z., Han, G., Sun, C., Fang, K., et al.: 2d probabilistic undersam- pling pattern optimization for mr image reconstruction. Medical Image Analysis 77, 102346 (2022)
J Zbontar, F Knoll, A Sriram, T Murrell, Z Huang, M J Muckley, A Defazio, R Stern, P Johnson, M Bruno, arXiv:1811.08839fastmri: An open dataset and benchmarks for accelerated mri. physics, statZbontar, J., Knoll, F., Sriram, A., Murrell, T., Huang, Z., Muckley, M.J., De- fazio, A., Stern, R., Johnson, P., Bruno, M., et al.: fastmri: An open dataset and benchmarks for accelerated mri. arXiv:1811.08839 [physics, stat] (2019)
fastmri+: Clinical pathology annotations for knee and brain fully sampled multi-coil mri data. R Zhao, B Yaman, Y Zhang, R Stewart, A Dixon, F Knoll, Z Huang, Y W Lui, M S Hansen, M P Lungren, arXiv:2109.03812Zhao, R., Yaman, B., Zhang, Y., Stewart, R., Dixon, A., Knoll, F., Huang, Z., Lui, Y.W., Hansen, M.S., Lungren, M.P.: fastmri+: Clinical pathology annotations for knee and brain fully sampled multi-coil mri data. arXiv:2109.03812 [physics] (2021)
Effective sparsification of neural networks with global sparsity constraint. X Zhou, W Zhang, H Xu, T Zhang, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionZhou, X., Zhang, W., Xu, H., Zhang, T.: Effective sparsification of neural networks with global sparsity constraint. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 3599-3608 (2021)
To prune, or not to prune: Exploring the efficacy of pruning for model compression. M Zhu, S Gupta, 6th International Conference on Learning Representations, ICLR, Workshop Track Proceedings. Zhu, M., Gupta, S.: To prune, or not to prune: Exploring the efficacy of pruning for model compression. In: 6th International Conference on Learning Representations, ICLR, Workshop Track Proceedings (2018)
| []
|
[
"NNLO QCD predictions for Z-boson production in association with a charm jet within the LHCb fiducial region",
"NNLO QCD predictions for Z-boson production in association with a charm jet within the LHCb fiducial region"
]
| [
"R Gauld \nMax Planck Institute for Physics\nFöhringer Ring 680805MünchenGermany\n",
"A Gehrmann-De Ridder \nInstitute for Theoretical Physics\nETH\n8093ZürichSwitzerland\n\nDepartment of Physics\nUniversity of Zürich\n8057ZürichSwitzerland\n",
"E W N Glover \nInstitute for Particle Physics Phenomenology\nDurham University\nDH1 3LEDurhamUK\n\nDepartment of Physics\nDurham University\nDH1 3LEDurhamUK\n",
"A Huss \nTheoretical Physics Department\nCERN\n1211Geneva 23Switzerland\n",
"A Rodriguez Garcia \nInstitute for Theoretical Physics\nETH\n8093ZürichSwitzerland\n",
"G Stagnitto \nDepartment of Physics\nUniversity of Zürich\n8057ZürichSwitzerland\n"
]
| [
"Max Planck Institute for Physics\nFöhringer Ring 680805MünchenGermany",
"Institute for Theoretical Physics\nETH\n8093ZürichSwitzerland",
"Department of Physics\nUniversity of Zürich\n8057ZürichSwitzerland",
"Institute for Particle Physics Phenomenology\nDurham University\nDH1 3LEDurhamUK",
"Department of Physics\nDurham University\nDH1 3LEDurhamUK",
"Theoretical Physics Department\nCERN\n1211Geneva 23Switzerland",
"Institute for Theoretical Physics\nETH\n8093ZürichSwitzerland",
"Department of Physics\nUniversity of Zürich\n8057ZürichSwitzerland"
]
| [
"Eur. Phys. J. C"
]
| We compute next-to-next-to-leading order (NNLO) QCD corrections to neutral vector boson production in association with a charm jet at the LHC. This process is studied in the forward kinematics at √ s = 13 TeV, which may provide valuable constraints on the intrinsic charm component of the proton. A comparison is performed between fixed order and NLO predictions matched to a parton shower showing mutual compatibility within the respective uncertainties. NNLO corrections typically lead to a reduction of theoretical uncertainties by a factor of two and the perturbative convergence is further improved through the introduction of a theory-inspired constraint on the transverse momentum of the vector boson plus jet system. A comparison between these predictions with data will require an alignment of a flavour-tagging procedure in theory and experiment that is infrared and collinear safe. | 10.1140/epjc/s10052-023-11530-x | [
"https://export.arxiv.org/pdf/2302.12844v1.pdf"
]
| 257,219,523 | 2302.12844 | a5b327fa0189eb64c2ae40781e00c01551cfd178 |
NNLO QCD predictions for Z-boson production in association with a charm jet within the LHCb fiducial region
2023
R Gauld
Max Planck Institute for Physics
Föhringer Ring 680805MünchenGermany
A Gehrmann-De Ridder
Institute for Theoretical Physics
ETH
8093ZürichSwitzerland
Department of Physics
University of Zürich
8057ZürichSwitzerland
E W N Glover
Institute for Particle Physics Phenomenology
Durham University
DH1 3LEDurhamUK
Department of Physics
Durham University
DH1 3LEDurhamUK
A Huss
Theoretical Physics Department
CERN
1211Geneva 23Switzerland
A Rodriguez Garcia
Institute for Theoretical Physics
ETH
8093ZürichSwitzerland
G Stagnitto
Department of Physics
University of Zürich
8057ZürichSwitzerland
NNLO QCD predictions for Z-boson production in association with a charm jet within the LHCb fiducial region
Eur. Phys. J. C
Eur. Phys. J. C 83, 336 (2023). https://doi.org/10.1140/epjc/s10052-023-11530-x. Received: 10 March 2023 / Accepted: 18 April 2023 / Published online: 27 April 2023. Regular Article - Theoretical Physics
We compute next-to-next-to-leading order (NNLO) QCD corrections to neutral vector boson production in association with a charm jet at the LHC. This process is studied in the forward kinematics at √ s = 13 TeV, which may provide valuable constraints on the intrinsic charm component of the proton. A comparison is performed between fixed order and NLO predictions matched to a parton shower showing mutual compatibility within the respective uncertainties. NNLO corrections typically lead to a reduction of theoretical uncertainties by a factor of two and the perturbative convergence is further improved through the introduction of a theory-inspired constraint on the transverse momentum of the vector boson plus jet system. A comparison between these predictions with data will require an alignment of a flavour-tagging procedure in theory and experiment that is infrared and collinear safe.
Introduction
The study of scattering processes that involve the direct production of (heavy)-flavoured jets, i.e. those consistent with originating from charm (c) or bottom (b) quarks, in association with a leptonically decaying vector boson is essential for collider physics phenomenology. They form a major background for several Standard Model (SM) physics processes, including the production of a Higgs boson in association with a gauge boson where the Higgs boson decays into heavy-flavoured jets [1][2][3][4][5], as well as signals expected in models of physics beyond the SM (BSM) [6,7]. Furthermore, they can provide unique information on the distribution of flavoured partons inside the proton [8][9][10]. Focussing on the process of Z plus flavoured jet at the Large Hadron Collider (LHC), several measurements have been performed by the ATLAS, CMS and the LHCb collaborations at 7 and 8 TeV proton-proton collision energies [11][12][13][14][15][16]. Recent studies at 13 TeV by the CMS collaboration [17] presented measurements of observables related to the production of c and/or b-quark jets in a sample containing a Z-boson produced in association with at least one jet.
The production of a leptonically decaying Z-boson in association with a charm jet, and particularly at forward kinematics [18] which is the focus of this work, could yield a unique probe of the charm content of the proton [18][19][20], provided that precise predictions and measurements of flavour-sensitive Z + c-jet observables are available and can be compared at a similar level of accuracy. The LHCb collaboration has recently analysed events containing a Z-boson and a charm jet in the forward region of phase space in proton-proton collisions [9]. These measurements simultaneously provide direct access to the small- and large-x regions of the c-quark parton distribution function (PDF) that is not well explored by other experiments. Specifically, LHCb has presented results [9] for the ratio of production cross sections R_cj = σ(Z + c-jet)/σ(Z + jet). This ratio is measured differentially as a function of the rapidity of the Z-boson y_Z in the range 2.0 < y_Z < 4.5. The experimental result for the ratio R_cj has been compared with several SM predictions obtained at NLO QCD accuracy interfaced with a parton shower (NLO+PS), each using different input PDF sets. It is demonstrated that the most forward y_Z region is particularly sensitive to the theoretical modelling of the charm quark PDF in these sets, with the best agreement between theory and data obtained by choosing a PDF set with a valence-like intrinsic (non-perturbative) charm quark component. The presence or absence of this component is a long standing theoretical issue [21,22]. Recently, the NNPDF collaboration has claimed evidence for an intrinsic charm quark component in the proton [23], with a local significance at the 2.5σ level for momentum fractions in the region 0.3 ≲ x ≲ 0.6. By including the LHCb data for R_cj in a reweighting of their fit, and adopting a theory prediction based on NLO+PS, the local significance further increases to about 3.0σ. Other PDF fitting collaborations have independently investigated the possible presence of intrinsic charm in the proton [24]: for instance, a recent analysis by the CTEQ-TEA collaboration [25] concludes that finding evidence for nonperturbative charm continues to be elusive, by highlighting challenging aspects that must be confronted in extracting nonperturbative charm in PDF fits.
State-of-the-art predictions for this kind of processes featuring the associated production of a vector boson with one or more flavoured jets has reached next-to-next-to-leading order (NNLO) accuracy in QCD calculations with massless quarks [26][27][28]. Given the importance of the LHCb data for PDF extractions, it is highly desirable to have fixed-order predictions for the Z + c-jet process in the forward region, in order to incorporate data for R c j or other flavour-sensitive observables in global PDF analyses based on collinear factorisation. In this paper we focus on a description of Z + c-jet production within the LHCb fiducial region following the approach of [29] to define flavoured jets, and provide a detailed comparison of (new) fixed-order predictions up to NNLO QCD accuracy as well as those based on NLO+PS for a variety of differential distributions.
Despite our focus on the forward kinematics relevant to the LHCb detector, in this work we refrain from performing a comparison to the available LHCb data [9]. This is due to a significant contamination of the observable R c j measured by LHCb from Multiple Particle Interactions (MPI), which should be removed/subtracted before considering this data in a (single parton scattering) collinear PDF fit. Moreover, the experimental definition of jet flavour in [9] is not infrared and collinear (IRC) safe, rendering a massless fixed-order calculation ill defined for the experimental set-up. With the goal in mind to provide constraining information on the potential presence of an intrinsic charm quark component within the proton, it is critical that the definition of the presented data is IRC safe such that a massless calculation (where collinear factorisation for the charm-quark PDF has been performed) can be applied. We elaborate more on these issues in Appendices A and B.
The structure of the paper is as follows. In Sect. 2, we present the main ingredients entering the computation of observables associated with Z +c-jet production both at pure fixed-order perturbative QCD as well as using the NLO+PS framework. We further comment on the proposal of [29], the flavour dressing algorithm, which allows flavour to be assigned to anti-k T jets in an IRC safe way. In Sect. 3, after having described the numerical set-up and defined the scale variation prescriptions, we present for the first time fixed-order predictions up to next-to-next-to-leading order (NNLO) in QCD for several observables related to the Z + c-jet process computed for the LHCb experimental fiducial region at 13 TeV. We compare these fixed-order predictions with NLO predictions matched to a parton shower at the parton level using the flavour dressing procedure to define flavoured jets. We further investigate the impact of an additional constraint on the transverse momentum of the vector boson plus jet system in the computations. We shall find that the inclusion of this theoretically motivated cut, brings NNLO and NLO+PS predictions closer and improves the perturbative convergence of the fixed-order results for a large fraction of the flavour-sensitive observables and for most of the kinematical range studied. We also present predictions for the ratio R c j . In Sect. 4, we summarise our findings and discuss the prospects of a direct comparison between theory and data in the future. The Appendices contain a discussion of the IRC safety of the current experimental definition (Appendix A) and the role of MPI in the current experimental set-up (Appendix B).
Details of the calculation
In this paper, one of the main goals is to present fixed-order predictions including QCD corrections up to O(α_s^3) for observables related to the process pp → ℓℓ̄ + c-jet + X, yielding a final state that contains a pair of charged leptons and at least one identified charm jet.
The computation of higher order corrections to observables with identified flavour poses several challenges as compared to flavour-blind cross sections: it requires a complete flavour tracking of the particles in all subprocesses (which are inputs to the flavour-dependent jet reconstruction/tagging), and additionally to those appearing in all subtraction terms (for a calculation based on subtraction). A flavour tracking procedure at parton level has been pioneered for the computation of b-jet flavoured observables in [30] and [26] and implemented within the NNLOJET parton-level generator that is used here. Within this framework, which employs the antenna subtraction method to capture the infrared behaviour of matrix-elements yielding fully differential cross section predictions at NNLO level, the flavour tracking procedure has the crucial property that it can be applied to any flavour-blind computation already present in the NNLOJET code. For example, the computation of Z + b-jet observables in [26] relied on the use of the existing flavour blind Z + jet computation presented in [31] and used the flavour-k T algorithm to select b-jets. To compute observables related to the Z + c-jet production process in this work, we adopt a similar strategy by using the Z + jet computation including up to O(α 3 s ) corrections, and then apply the flavour tracking procedure as was done for the Z + b-jet process.
As was the case in [26], the prediction of scattering processes with heavy-flavour jets can be further improved by exactly including the contribution from a massive heavy-flavour quark at fixed order-i.e. the resultant prediction is made in a general mass variable flavour number scheme. We follow a similar procedure here and include mass corrections up to O(α 2 s m c ) in the fixed-order distributions which are labelled as NLO and NNLO.
A further complication which is encountered in a calculation of the type presented here-the calculation of QCD corrections to a scattering process with flavoured jets based on massless quarks-is that an IRC safe definition of jet flavour must be used. A first solution to this problem was introduced in [32], with the formulation of the IRC safe flavour-k T algorithm. This algorithm features a k T -like clustering sequence, and introduces a specific flavour-dependent clustering sequence to achieve IRC flavour safety. However, since the algorithm requires the knowledge of the flavour of all the particles in the event at each step of the clustering, it is challenging to realise experimentally, and so far has not been implemented in experimental analyses. Therefore, the kinematics and flavour of the jets obtained with the flavour-k T algorithm are not compatible with those obtained in experiment. Various alternatives to the use of flavour k T in theoretical predictions have recently been proposed [33][34][35].
In the present analysis, we will adopt the flavour dressing algorithm of [29]. This approach is particularly suitable as it enables to assign flavour quantum numbers to a set of flavour blind jets, obtained with any jet clustering algorithm. This allows us to apply it to jets reconstructed with the antik T algorithm [36], i.e. the same algorithm which is used in experiment to define the kinematics of the jets (although the flavour assignment procedure is different, as detailed in Appendix A). Here, we stress the fact that in the flavour dressing algorithm the flavour assignment is entirely factorised from the initial jet reconstruction, hence the kinematics of the jets is not affected. Such a key property is relevant in the present context, since it ensures that for a ratio observable such as σ (Z + c-jet)/σ (Z + jet) both the numerator and the denominator feature the same sample of anti-k T jets.
A comparison of the fixed-order predictions as described above will also be made to several NLO predictions matched with a parton shower. Those NLO+PS predictions are obtained either with the MadGraph5_aMC@NLO (v. 2.7.3) [37] framework interfaced to Pythia8 (v. 8.243) [38] (default p T -ordered parton shower) or Herwig7 (v. 7.2.2) [39][40][41] (default angular-ordered parton shower), and using the same flavour dressing procedure to define flavoured jets as for the fixed-order predictions. To allow for a more direct comparison, those predictions are obtained at the parton level where neither the impact of MPI or hadronisation effects are included. A discussion on the important role of MPI for the Z + c-jet process in the forward region is provided in Appendix B. Hadronisation effects, on the other hand, were found not to impact the considered observables in any significant manner.
Numerical results
Numerical set-up and scale variation prescription
In this section, we review the calculational set-up as well as the kinematical constraints imposed to obtain the fiducial cross sections for Z + c-jet production. To select our final-state events, we focus on the forward region with fiducial cuts mirroring those of the LHCb measurement [9] at √s = 13 TeV.
In particular, the following fiducial cuts for jets and charged leptons are applied: 20 GeV < p_{T,j} < 100 GeV, 2.2 < η_j < 4.2, p_{T,ℓ} > 20 GeV, 2.0 < y_ℓ < 4.5, M_{ℓℓ̄} ∈ [60, 120] GeV and ΔR(j, ℓ) > 0.5. The jets are reconstructed with the anti-k_T algorithm [36] with R = 0.5. As discussed in Sect. 2, the selection of c-jets is performed using the flavour dressing procedure described in [29]. The algorithm proceeds in two stages with internal parameters that control the overall flavour-tagging procedure: a flavour clustering stage that employs a Soft-Drop-inspired criterion [42] with parameters z_cut, R_cut, and β, followed by a flavour dressing stage based on the flavour-k_T distance measure [32] with a parameter α. In the present calculation, we set the parameters to their default values [29]:
z cut = 0.1, R cut = 0.1, β = 2, α = 2.
In addition, events are only retained if the flavour-tagged c-jet is the jet carrying the largest transverse momentum of reconstructed jets passing the selection cuts.
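For orientation, the fiducial selection can be sketched as plain predicates; the event representation (flat dictionaries of jet and lepton kinematics), the use of the lepton rapidity in the ΔR computation, and the `is_c_tagged` flag are illustrative assumptions, and the flavour dressing and jet clustering themselves are not reproduced here.

```python
import math

def delta_r(eta1: float, phi1: float, eta2: float, phi2: float) -> float:
    dphi = math.remainder(phi1 - phi2, 2.0 * math.pi)   # wrap the azimuthal difference into (-pi, pi]
    return math.hypot(eta1 - eta2, dphi)

def select_event(jets, leptons, m_ll):
    """Apply the fiducial cuts of Sect. 3.1 and require the c-tagged jet to lead in pT.

    jets:    list of dicts with keys "pt" [GeV], "eta", "phi", "is_c_tagged"
    leptons: list of dicts with keys "pt" [GeV], "y", "phi"
    m_ll:    invariant mass of the dilepton pair in GeV
    """
    sel_jets = [j for j in jets if 20.0 < j["pt"] < 100.0 and 2.2 < j["eta"] < 4.2]
    sel_leps = [l for l in leptons if l["pt"] > 20.0 and 2.0 < l["y"] < 4.5]
    if len(sel_leps) < 2 or not sel_jets or not (60.0 < m_ll < 120.0):
        return False
    if any(delta_r(j["eta"], j["phi"], l["y"], l["phi"]) <= 0.5
           for j in sel_jets for l in sel_leps):
        return False
    leading = max(sel_jets, key=lambda j: j["pt"])       # event kept only if the c-jet leads
    return bool(leading.get("is_c_tagged", False))
```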
We provide predictions for proton-proton collisions at √s = 13 TeV and use the PDF4LHC21 Monte Carlo PDF set [43], with α_s(M_Z) = 0.118 and n_f^max = 5, where both the PDF and α_s values are accessed via LHAPDF [44]. For the electroweak input parameters, the results are obtained in the G_μ-scheme, using a complex mass scheme for the unstable internal particles, and we adopt the following values for the input parameters: M_Z^os = 91.1876 GeV, Γ_Z^os = 2.4952 GeV, M_W^os = 80.379 GeV, Γ_W^os = 2.085 GeV, and G_μ = 1.1663787 × 10^−5 GeV^−2. For differential distributions, the impact of missing higher-order corrections is assessed using the conventional 7-point scale variation prescription: the values of factorisation (μ_F) and renormalisation (μ_R) scales are varied independently by a factor of two around the central scale μ_0 ≡ E_{T,Z}, with the additional constraint that 1/2 ≤ μ_F/μ_R ≤ 2. When considering theoretical predictions for the ratio of distributions, we estimate the uncertainties in an uncorrelated way between the numerator and denominator, i.e. by considering
$$R_{cj}(\mu_R, \mu_F; \mu_R', \mu_F') = \frac{\sigma^{Z+c\text{-jet}}(\mu_R, \mu_F)}{\sigma^{Z+\text{jet}}(\mu_R', \mu_F')}\,, \qquad (1)$$
providing a total of 31-points when dropping the extreme variations in any pair of scales.
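As a small cross-check of the counting, the 7-point and 31-point prescriptions can be enumerated as below; the cross-section values entering the envelope would come from the actual NNLOJET runs and are not modelled here.

```python
from itertools import product

FACTORS = (0.5, 1.0, 2.0)   # scale factors k = mu / mu_0 around the central scale mu_0 = E_T,Z

def seven_point():
    """7-point prescription: (kR, kF) in {1/2, 1, 2}^2 with 1/2 <= kF/kR <= 2."""
    return [(kr, kf) for kr, kf in product(FACTORS, repeat=2) if 0.5 <= kf / kr <= 2.0]

def thirty_one_point():
    """Uncorrelated variations (kR, kF; kR', kF') for the ratio of Eq. (1): keep all 3^4
    combinations except those where a pair of scales differs by more than a factor of two,
    i.e. where both 1/2 and 2 appear among the four factors -- this leaves 31 points."""
    return [t for t in product(FACTORS, repeat=4) if not (min(t) == 0.5 and max(t) == 2.0)]

def envelope(values):
    """Scale uncertainty quoted as the envelope (min, max) of the varied predictions."""
    return min(values), max(values)

assert len(seven_point()) == 7 and len(thirty_one_point()) == 31
```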
Z + c-jet distributions
We here present results for the Z + c-jet process at √ s = 13 TeV and choose to focus on the following observables: the leading flavoured jet transverse momentum p c-jet T ( Fig. 1), the leading flavoured jet pseudorapidity η c-jet (Fig. 2), and the rapidity of the Z-boson y Z , reconstructed from the two final-state opposite-charge leptons (Fig. 3). Besides presenting results for the LHCb kinematical set-up as indicated in Sect. 3.1, we also explore the impact of the introduction of a cut on the transverse momentum of the Z + jet system:
$$p_T(Z + \text{jet}) < p_T^{\text{jet}}\,, \qquad (2)$$
with the leading jet in the acceptance region. The theoretical motivation behind this cut is to discard those contributions where the flavoured jet is not the jet with the largest transverse momentum in the event, i.e. cases where the hardest jet was disregarded because it fell outside of the LHCb acceptance. At Born level, the p_T of the Z + jet system vanishes, hence the cut in (2) limits the hard QCD radiation outside the LHCb acceptance in a dynamical way. For each of the figures presented in the remainder of this section, the left sides present results without the additional cut of (2), whereas the right sides present results where the additional cut of Eq. (2) is applied.
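As an illustration of the constraint in Eq. (2), the transverse momentum of the Z + jet system can be built from the dilepton and leading-jet kinematics as sketched below; the (pt, phi) tuple representation of the objects is an assumption.

```python
import math

def pt_z_plus_jet(lep1, lep2, jet):
    """Transverse momentum of the Z + leading-jet system from (pt [GeV], phi) tuples."""
    px = sum(pt * math.cos(phi) for pt, phi in (lep1, lep2, jet))
    py = sum(pt * math.sin(phi) for pt, phi in (lep1, lep2, jet))
    return math.hypot(px, py)

def passes_system_pt_cut(lep1, lep2, jet):
    # Eq. (2): p_T(Z + jet) < p_T^jet, with the leading jet inside the acceptance
    return pt_z_plus_jet(lep1, lep2, jet) < jet[0]
```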
The theory predictions that do include the additional kinematic constraint of Eq. (2) are shown in Figs. 1b, 2b, and 3b. We observe that the constraint leads to a slight reduction of the fiducial cross section and produces no significant change in the shape of the distributions with the exception of the lowp c-jet T region. The LO, NLO, and NNLO results display an improved mutual compatibility across all considered observables, indicating a better perturbative convergence in the presence of this kinematic constraint. Qualitatively, the comparison between the NNLO and NLO+PS results in the third panel of these figures is similar to that of the case without the kinematic constraint which was already discussed.
Overall, for all considered set-ups, we find that the NNLO corrections bring a new level of precision for the considered Z + c-jet observables. As compared to the corresponding NLO(+PS) results, the uncertainties of the NNLO predictions are reduced by a factor of two or more. It is also reassuring that the NNLO corrections lead to predictions that tend to lie between the two different NLO+PS predictions-the latter differ in the treatment of O(α 3 s ) terms, which begin at the NNLO level. As the perturbative convergence of the predictions appears to be improved by applying the kinematic constraint of Eq. (2), with only a small reduction in the cross section, it seems to be well motivated when directly considering Z + c-jet observables.
The σ (Z + jet) cross-section and the ratio R c j
In Sect. 3.2 we have presented IRC safe predictions for the rates of Z + c-jet production within the LHCb fiducial region. Instead, in this subsection we will consider the Z + jet process (i.e. the flavour inclusive one) and then subsequently the ratio observable R_cj = σ(Z + c-jet)/σ(Z + jet). As the experimental measurement of R_cj is performed differentially in the rapidity of the Z-boson y_Z [9], our theory predictions will also focus on this same quantity. We again note that we do not perform any comparison to the available data for reasons of consistency, as detailed in Appendix A and B. The theory predictions for Z + jet production are presented in Fig. 4, with the same structure for the plots as shown in Sect. 3.2, i.e. with the "no-cut" and "with-cut" cases shown on the left and right parts of the figure respectively, and with three panels for each sub-figure. As compared to the Z + c-jet predictions, heavy-flavour mass corrections have not been included for these predictions. While this could be achieved following the procedure outlined in [45], the numerical impact of such corrections for the flavour inclusive process is negligible (sub-percent).
The theory predictions without the extra kinematic cut of Eq. (2) are shown in Fig. 4a). The NNLO result is observed to be contained within the scale variation band of the NLO one, except in the large-y Z region, where it also features a different behaviour compared to the NLO+PS results. We also find very good agreement between the two NLO+PS results with Pythia8 and Herwig7.
The impact of applying the additional kinematic cut is shown in Fig. 4b. The cut leads to negative NNLO corrections in the (relatively) low y Z region, resulting in a NNLO predic-tion lying outside the scale variation band of the NLO result, except in the large-y Z region. Examination of the upper panel of Fig 4b shows that the cut has the effect of moving all the curves closer to the LO result, with positive NLO corrections and negative NNLO corrections. The inclusion of this cut thus appears to degrade the perturbative stability for the flavour-inclusive set-up.
We now consider the ratio observable R c j , differential in y Z . The result is constructed using the same inputs which lead to the distributions for Z + c-jet results in Fig. 3 and Z + jet in Fig. 4, but including the uncorrelated uncertainty prescription defined in Eq. (1). The predictions for R c j are displayed in Fig. 5; again the "no-cut" and "with-cut" cases are shown on the left and right sides of the figure, respectively. Focussing first on the fixed-order results, the NNLO corrections are observed to be positive and of the order (10-20)%, with the largest value observed at large-y Z values. That behaviour is observed for both the "no-cut" (left) and "with-cut" (right) cases. Overall, the inclusion of the cut on the Z + jet system does not significantly impact the perturbative behaviour of the fixed-order prediction for the ratio. This is a consequence of the fact that the reduction of uncertainties in the numerator for Z + c-jet is then compensated by increased uncertainties in the denominator when the cut is applied. However, by inspecting the lowest panels of Fig. 5, we observe slightly better agreement between NNLO and the two NLO+PS predictions when the cut has been applied.

Fig. 5 Comparison of parton-level predictions for the ratio R c j = σ Z+c-jet / σ Z+jet differential in the rapidity y Z of the system of the final-state leptons: fixed-order predictions at LO (green), NLO (blue) and NNLO (red); NLO+PS predictions with Pythia8 (orange) or Herwig7 (purple) as parton showers. A dynamical cut on the transverse momentum of the Z + jet system is further applied in b.
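As an illustration of how such a ratio and its uncertainty can be assembled from binned predictions, the following minimal Python sketch combines the scale-variation envelopes of numerator and denominator in quadrature. The quadrature rule is an assumption standing in for Eq. (1), which is not reproduced here, and all numerical inputs are purely illustrative placeholders rather than the actual predictions.

```python
import numpy as np

def envelope(nominal, variations):
    """Relative (down, up) scale-variation envelope around the nominal."""
    lo = np.min(variations, axis=0) / nominal - 1.0   # <= 0 if variations bracket the nominal
    hi = np.max(variations, axis=0) / nominal - 1.0   # >= 0 if variations bracket the nominal
    return lo, hi

def ratio_uncorrelated(num, num_vars, den, den_vars):
    """R = num/den per y_Z bin; numerator and denominator envelopes are treated
    as uncorrelated and combined in quadrature (assumed stand-in for Eq. (1))."""
    r = num / den
    nlo, nhi = envelope(num, num_vars)
    dlo, dhi = envelope(den, den_vars)
    rel_dn = np.sqrt(nlo**2 + dhi**2)   # numerator down and denominator up both push R down
    rel_up = np.sqrt(nhi**2 + dlo**2)   # numerator up and denominator down both push R up
    return r, r * (1.0 - rel_dn), r * (1.0 + rel_up)

# purely illustrative y_Z-binned cross sections (not the actual predictions)
sig_zc   = np.array([10.0, 9.0, 7.5, 5.0])
sig_zc_v = np.array([[9.5, 8.6, 7.1, 4.7], [10.6, 9.5, 7.9, 5.4]])
sig_zj   = np.array([160.0, 150.0, 120.0, 80.0])
sig_zj_v = np.array([[152.0, 143.0, 114.0, 75.0], [168.0, 158.0, 127.0, 85.0]])
central, lower, upper = ratio_uncorrelated(sig_zc, sig_zc_v, sig_zj, sig_zj_v)
print(central, lower, upper)
```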
Finally, as a result of the uncorrelated prescription for uncertainties, we note that the relative theory uncertainties for the NNLO predictions of the ratio are increased as compared to the individual predictions for Z + c-jet and Z + jet. The sensitivity to the input PDFs is also typically reduced for such an observable due to correlations between the PDF dependence of the numerator and denominator. With the aim of reducing the theory uncertainties due to missing higher-order corrections, one could therefore consider including absolute Z + c-jet cross-section data rather than that for the ratio R c j in a collinear PDF fit. Given, however, that several experimental uncertainties are correlated between numerator and denominator (and therefore cancel in the ratio), and that a treatment of the MPI contribution to the observable should also be considered, it is overall not clear which observable is the most sensitive for constraining PDFs.
Conclusions
In this paper, we have studied the associated production of a Z-boson with a charm-jet at the LHC at 13 TeV in the forward region. We computed NNLO predictions at O(α 3 s ) for a set of differential observables related to the Z + c-jet process, using the flavour dressing procedure to define charm-tagged jets. NNLO corrections are found to be at the level of (10-20)%; they can impact the shapes of distributions, with the high-y Z and low-p c-jet T regions receiving enhanced corrections. The residual uncertainties as estimated through scale variations are typically ±5% or smaller, a factor of two reduction compared to the respective NLO uncertainty estimate. Additionally, comparisons to two different NLO+PS predictions based on the Herwig7 (angular ordering) and Pythia8 (p T ordering)
showers have been performed. The two predictions can differ by up to 10% but remain mutually compatible within their respective uncertainties, which are NLO-like and thus at the level of ±10%. The NNLO distributions are found to lie in between the two NLO+PS predictions, hinting at an insensitivity to missing higher-order effects as modelled by the showers. Moreover, we have found that a theory-inspired constraint on the transverse momentum of the Z + jet system improves the perturbative convergence of all considered distributions.
We further considered the ratio R c j = σ Z+c-jet /σ Z+jet , which has been measured by the LHCb experiment. In this context, the usage of the flavour dressing procedure ensures the same kinematic reconstruction of jets entering the numerator and denominator, thus allowing for a faithful theoretical definition. The pattern of higher-order corrections mimics that of the Z + c-jet process, with enhanced NNLO corrections of up to 20% in the high-y Z region, albeit with larger uncertainties due to the de-correlated scale prescription for the ratio.
A direct comparison to the available LHCb data was not performed due to IRC unsafety issues and an unexpectedly large contamination from MPI, both of which are discussed in the Appendices. In the future, a fair comparison between experimental measurements and theory predictions will require a detailed study of the experimental feasibility of the flavour dressing algorithm (or other IRC safe variants). In case a direct application of an IRC-safe flavour definition is prohibitively challenging, it would be highly desirable for experimental measurements to carry out an unfolding to an IRC-safe definition of jet flavour. Only a joint effort of the theory and experimental communities will make it possible to fully exploit the huge amount of data that the LHC will provide in the coming decades, and to use flavour signatures as a powerful window into short-distance interactions from GeV to TeV energy scales.
Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors' comment: For this publication only computer-generated pseudo data were used. These have been obtained with either publicly available Monte Carlo event generators or with a private code.]
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funded by SCOAP 3. SCOAP 3 supports the goals of the International Year of Basic Sciences for Sustainable Development.
Appendix A: IRC safety
According to Ref. [9], as part of the LHCb measurement, charm jets are defined in the following way. First, jets are reconstructed using the anti-k T algorithm [36] with R = 0.5, and the leading jet passing the fiducial selection is considered (see also Sect. 3.1 for the definition of the fiducial selection). From this, the leading jet is considered to be a charm jet (at truth/unfolded level) if it additionally satisfies the criterion: p T,c hadron > 5 GeV, and ΔR( j, c hadron) < 0.5. That is to say, the jet is considered to be charm tagged if there is at least one c hadron satisfying these selections.
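For concreteness, the tagging criterion quoted above can be phrased as the short Python sketch below. The dictionary-based particle records and field names are hypothetical, and this reflects the experimental (IRC-unsafe) definition rather than the flavour dressing procedure used for the predictions in this work.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance with the phi difference wrapped into [-pi, pi)."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def is_charm_tagged(jet, c_hadrons, pt_min=5.0, dr_max=0.5):
    """The jet is charm tagged if at least one c hadron with p_T > pt_min (GeV)
    lies within dr_max of the jet axis."""
    return any(
        h["pt"] > pt_min
        and delta_r(jet["eta"], jet["phi"], h["eta"], h["phi"]) < dr_max
        for h in c_hadrons
    )

# toy example with hypothetical particle records
jet = {"pt": 25.0, "eta": 3.1, "phi": 0.4}
c_hadrons = [{"pt": 6.2, "eta": 3.0, "phi": 0.5}]
print(is_charm_tagged(jet, c_hadrons))   # True
```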
Such an experimental definition of jet flavour is collinear unsafe. This is not an LHCb specific issue (or even particular to this measurement), but is also relevant in one way or another for the definitions of jet flavour commonly adopted by LHC collaborations [46][47][48][49]. We briefly explain why the observable definition taken in Ref. [9] is IRC unsafe, then discuss the implications of this for data interpretation.
In Fig. 6 we depict three specific kinematic configurations which lead to different sources of IRC sensitivity (i.e. those which are IRC unsafe). In each case, the configurations correspond to those encountered in a parton-level fixed-order prediction for the Z + c-jet process. The flavoured quarks are depicted as red lines, i.e. representing charm quarks for the process under consideration here.
Fig. 6 Examples of configurations which lead to an IRC sensitivity: a a double (even) tagged jet; b a tag with a p min T,q requirement; c a jet tag with a soft sensitivity (in the absence of a p min T,q requirement).

(a) The first configuration depicts the production of the lepton pair, recoiling against a hard q q̄ pair produced in a collinear configuration (i.e. at least one, or both, of the quarks are hard but p q · p q̄ → 0). When the anti-k T algorithm is applied, the q and q̄ are reconstructed inside the same jet. According to the prescription to assign jet flavour, as this jet contains at least one quark it will be assigned a quark flavour tag (e.g. charm for q = c). This introduces a collinear sensitivity, as it is distinguished from the case where the hard gluon does not split into a collinear q q̄ pair. In that case, the jet (composed of a single hard gluon) clearly carries zero quark flavour. This could be overcome by accounting for the total quark flavour in the jet, such as assigning quantum numbers q(q̄) = +(−)1 and summing them to obtain the net flavour (alternatively, one can consider flavoured jets as those with an overall odd number of q and q̄); a minimal sketch of this bookkeeping is given after this list.

(b) The second configuration depicts the production of the lepton pair, recoiling against a hard qg pair in a collinear configuration (i.e. at least one, or both, of the quark and gluon is hard but p q · p g → 0). Again, when the anti-k T algorithm is applied, both q and g are reconstructed inside the same jet. As the tagging prescription requires the presence of a c-hadron with p T,c > 5 GeV, it is possible that the outgoing quark does not satisfy this criterion (depending on the momentum sharing with the gluon). This introduces a collinear sensitivity, as the p T,c requirement may distinguish this configuration from the case where the hard initial quark does not split to the collinear qg pair.

(c) The collinear sensitivity discussed above can be overcome by removing the p min T,c requirement. However, this would introduce a new problem, which is depicted in the third configuration. In this case, the culprit is a soft gluon which subsequently splits to a q q̄ pair at wide angles. It is possible that one of the quarks (p q in the figure) is produced close in ΔR to a hard parton (p j ), i.e. ΔR( j, q) < 0.5. This would introduce a soft sensitivity, as the flavour of the jet would be altered by the presence of the soft quark.
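The sketch below illustrates only the net-flavour bookkeeping mentioned in configuration (a); the string-based particle labels are hypothetical, and this accounting alone does not cure the soft sensitivity of configuration (c), which requires a flavour-aware jet definition such as flavour dressing.

```python
def net_charm_flavour(constituents):
    """Sum +1 for each c and -1 for each cbar among the jet constituents; the jet
    is considered charm flavoured only if the net (equivalently, odd) count is
    non-zero."""
    return sum(+1 if p == "c" else -1 if p == "cbar" else 0 for p in constituents)

# a collinear c-cbar pair from a gluon splitting cancels: the jet stays unflavoured
print(net_charm_flavour(["g", "c", "cbar"]))   # 0 -> not charm tagged
print(net_charm_flavour(["c", "g"]))           # 1 -> charm tagged
```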
From a purely experimental point of view, it is clear that the current definition of tagging heavy-flavour jets is a sensible one. Identifying those jets with multiple tags (as opposed to at least one) would require a more careful accounting of the experimental (in)efficiency and mistag rates. It is also extremely difficult to distinguish between the signature of one or two collinear heavy-flavour objects (e.g. a bunch of displaced tracks appearing to originate from a single displaced vertex). Furthermore, removing the p min T,c requirement would mean accounting for a region where it is experimentally challenging to identify displaced vertices. However, this choice has serious ramifications for the theory predictions, and importantly for the interpretation of the data.
Theoretical predictions of charm jet observables which are not IRC safe are logarithmically sensitive to the mass of the charm quark m c . The corresponding fixed-order predictions for such observables therefore include corrections which depend logarithmically on the charm quark mass. If the observable/process under consideration involves energy scales which are large compared to the quark mass (e.g. the transverse energy/momentum of a jet or a boson), the logarithmic corrections become large (due to the separation of scales) and thus limit the theory precision/perturbative stability. The m c → 0 limit of such predictions is not well defined (it is divergent), meaning that a calculation based on massless quarks of such observables does not exist. The implications of this are that fixed-order predictions must be performed in a scheme where the charm quark is massive, i.e. in a fixed-flavour-number scheme with n max f = 3, where mass factorisation is not performed for the charm quark, and it is decoupled from the running of α s . Note that the perturbative charm-quark PDF does still exist in the massive scheme (where a logarithmic sensitivity to the charm-quark mass exists, see for example [50][51][52][53][54][55]). Practically, it is generated numerically after integration over the phase-space of the massive quark during the calculation.
The requirement that a fixed-order prediction must be carried out with a massive calculation is problematic for observables which are designed to be sensitive to the nature of the charm-quark PDF. Such observables contain a logarithmic sensitivity to the charm-quark mass as a result of the IRC-unsafe configurations which were highlighted above (which limits the theory precision/predictability); at the same time, the observables are (by design) directly sensitive to the large logarithmic corrections associated with the perturbative charm-quark PDF, which have not been resummed, while the non-perturbative component of the charm-quark PDF (which one aims to probe) does not exist in this scheme.
Instead, the definition of jet flavour used in this work, given its IRC safety, removes all logarithmic sensitivity on the charm quark mass which results from the jet flavour assignment procedure. This also allows for the application of a massless calculation, where mass factorisation for the charm quark is performed and where remaining logarithmic sensitivity to the quark mass is resummed.
Appendix B: Multiple particle interactions
During the high-energy scattering of two protons, there is a probability for multiple hard interactions (i.e. more than one) to occur.
For the LHCb kinematics defined at the beginning of Sect. 3.1, and also applying the (IRC unsafe) definition of jet flavour as in [9], we observe a large contribution to the production of a Z boson and a c-jet from MPI. In Fig. 7a we show the cross-section for Z + c-jet production after fiducial cuts, which is plotted differentially with respect to the Z rapidity y Z . The predictions are provided at NLO+PS accuracy for Z+1 j events generated with MadGraph5_aMC@NLO interfaced with Pythia8 and Herwig7, where the role of MPI is subsequently modelled by the two different Monte Carlo generators. We show the predictions obtained when including/excluding the MPI contributions, which lead to a large (and rapidity dependent) effect on the resultant distribution. In the central rapidity region, the effect is of order 10%, increasing up to 20% at larger rapidities, and the effect is very similar within Pythia8 and Herwig7. In Fig. 7b we further show the same plot for the unflavoured process Z + jet. We note a constant shift of the y Z distribution of order 10% due to MPI effects. The 10-20% effects observed in Fig. 7a for Z + c-jet could be explained with an interplay of a constant 10% effect (impacting both Z + c-jet and Z + jet processes) and an additional 10% flavour-dependent effect. Finally, we consider the ratio R c j , even if its behaviour w.r.t. MPI effects can be straightforwardly inferred from Fig. 7a, b. In particular, we observe a partial cancellation of MPI effects, leading to a difference between predictions which is negligible at y Z ∼ 2.0 and that increases monotonically up to 10% at larger rapidities. Hence, given such a partial cancellation in the ratio, the inclusion of MPI effects seems to be particularly relevant for 3.5 ≲ y Z ≲ 4.5. Such a region is hugely important for the determination of the intrinsic charm in the proton, given that the LHCb measurement in the bin y Z ∈ [3.5, 4.5] is highly correlated to the charm PDF in the region of the valence peak x ≃ 0.45 [23].

Fig. 7 Effect of MPI contributions on the Z rapidity distribution y Z in the Z + c-jet process (a), in the Z + jet process (b) and in the ratio of the two (c). NLO+PS predictions are obtained with Pythia8 (orange) or Herwig7 (purple) as parton showers. In the upper panels, predictions including (excluding) MPI contributions are depicted in darker (lighter) colours. The lower panels show the ratios of curves with and without MPI effects.
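The size of the MPI contamination quoted above corresponds to the bin-by-bin ratios shown in the lower panels of Fig. 7. A minimal sketch of this bookkeeping is given below; the numbers are purely illustrative placeholders chosen to mimic the quoted 10-20% effects, not the actual NLO+PS curves.

```python
import numpy as np

def mpi_factor(with_mpi, without_mpi):
    """Bin-by-bin ratio of predictions with and without MPI, as plotted in the
    lower panels of Fig. 7."""
    return np.asarray(with_mpi) / np.asarray(without_mpi)

# illustrative y_Z-binned cross sections (arbitrary units, not the real curves)
zc_mpi, zc_nompi = np.array([11.0, 10.8, 10.0, 8.4]), np.array([10.0, 9.8, 8.8, 7.0])
zj_mpi, zj_nompi = np.array([176.0, 170.0, 150.0, 120.0]), np.array([160.0, 155.0, 136.0, 109.0])

f_zc = mpi_factor(zc_mpi, zc_nompi)    # ~1.10-1.20, rapidity dependent
f_zj = mpi_factor(zj_mpi, zj_nompi)    # ~1.10, roughly flat
print(f_zc, f_zj, f_zc / f_zj)         # residual MPI effect surviving in the ratio R_cj
```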
As these MPI contributions arise from the overlap of multiple hard interactions, they are not taken into account by a theoretical description based solely on single parton scattering contributions (i.e. those which form the basis of a collinear PDF fit). If the data is to be considered for such a fit, it would be necessary to first remove/subtract the MPI component. Due to the complicated (overlapping) nature of multiple hard interactions, the removal of this component may rely on theoretical modelling. As the theoretical description of inclusive charm-quark production within the LHCb acceptance suffers from uncertainties far in excess of 50% (see, for example, Fig. 6 of Ref. [56] for p D T ∼ 5 GeV), the modelling of this contribution (and the associated uncertainty) will require quite some care. We note that such an uncertainty (optimistically, 50% of the MPI contribution as shown in Fig. 7) would already be in excess of the systematic uncertainty quoted for the ratio observable in [9]. It may be possible to overcome this issue if a data-driven approach can be achieved.
Fig. 1 Comparison of parton-level predictions for the leading flavoured jet transverse momentum p c-jet T in the Z + c-jet process: fixed-order predictions at LO (green), NLO (blue) and NNLO (red); NLO+PS predictions with Pythia8 (orange) or Herwig7 (purple) as parton showers. A dynamical cut on the transverse momentum of the Z + jet system is further applied in b.
Fig. 2 Comparison of parton-level predictions for the leading flavoured jet pseudo-rapidity η c-jet in the Z + c-jet process: fixed-order predictions at LO (green), NLO (blue) and NNLO (red); NLO+PS predictions with Pythia8 (orange) or Herwig7 (purple) as parton showers. A dynamical cut on the transverse momentum of the Z + jet system is further applied in b.

The left-hand sides of these figures show the case in which no additional cut has been imposed, such that the impact of the cut can be seen by comparing the left and right hand sides of the figures. To highlight the size and shape of the fixed-order results at each perturbative order, and to best compare fixed-order results with and without matching to PS, all figures illustrating the results in this section are composed of three panels: the top panel shows the absolute predictions at fixed order (LO, NLO, NNLO) and for NLO+PS, where the PS is modelled by Pythia8; the middle panel shows the ratio of the NNLO to the fixed-order NLO result, while the lower panel shows the ratio to NNLO of the NLO+PS results, where the PS is modelled by either Pythia8 or Herwig7. As noted in Sect. 2, fixed-order predictions labelled as NLO and NNLO in all the figures include QCD corrections up to O(α 2 s ) (NLO) and O(α 3 s ) (NNLO) obtained with massless c-quarks in the computations, and both additionally include the exact charm-quark mass corrections up to O(α 2 s ). The theory predictions which do not include the additional kinematic constraint of Eq. (2) are shown in Figs. 1a, 2a, and 3a. The NNLO corrections are observed to be of the order (10-20)% as compared to the NLO prediction, and typically lie outside of the scale variation band of the NLO result.
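The three-panel layout described above could be assembled, for instance, with the following matplotlib sketch; the binned values are illustrative placeholders only, not the actual predictions.

```python
import numpy as np
import matplotlib.pyplot as plt

# illustrative y_Z-binned predictions (arbitrary units), standing in for the real curves
x    = np.array([2.25, 2.75, 3.25, 3.75, 4.25])
lo   = np.array([8.0, 7.6, 6.5, 4.8, 2.9])
nlo  = np.array([9.6, 9.1, 7.8, 5.6, 3.3])
nnlo = np.array([10.7, 10.2, 8.8, 6.4, 3.9])
nlops_py8, nlops_hw7 = 1.05 * nlo, 0.97 * nlo

fig, (ax0, ax1, ax2) = plt.subplots(
    3, 1, sharex=True, figsize=(5, 7), gridspec_kw={"height_ratios": [2, 1, 1]})

# top panel: absolute predictions at each order plus one NLO+PS curve
for y, lab in [(lo, "LO"), (nlo, "NLO"), (nnlo, "NNLO"), (nlops_py8, "NLO+PS (Pythia8)")]:
    ax0.step(x, y, where="mid", label=lab)
ax0.set_ylabel(r"$d\sigma/dy_Z$ [a.u.]"); ax0.legend()

# middle panel: NNLO / NLO
ax1.step(x, nnlo / nlo, where="mid"); ax1.axhline(1.0, ls=":"); ax1.set_ylabel("NNLO/NLO")

# lower panel: NLO+PS / NNLO for both showers
ax2.step(x, nlops_py8 / nnlo, where="mid", label="Pythia8")
ax2.step(x, nlops_hw7 / nnlo, where="mid", label="Herwig7")
ax2.axhline(1.0, ls=":"); ax2.set_ylabel("NLO+PS/NNLO"); ax2.set_xlabel(r"$y_Z$"); ax2.legend()

plt.tight_layout(); plt.savefig("zc_yZ_three_panels.png")
```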
Fig. 3 Comparison of parton-level predictions for the rapidity distribution of the lepton pair y Z in the Z + c-jet process: fixed-order predictions at LO (green), NLO (blue) and NNLO (red); NLO+PS predictions with Pythia8 (orange) or Herwig7 (purple) as parton showers. A dynamical cut on the transverse momentum of the Z + jet system is further applied in b.
Fig. 4 Comparison of parton-level predictions for the rapidity distribution of the lepton pair y Z for the unflavoured process Z + jet: fixed-order predictions at LO (green), NLO (blue) and NNLO (red); NLO+PS predictions with Pythia8 (orange) or Herwig7 (purple) as parton showers. A dynamical cut on the transverse momentum of the Z + jet system is further applied in b.
Observation of H → bb decays and V H production with the ATLAS detector. M Aaboud, ATLAS CollaborationarXiv:1808.08238Phys. Lett. B. 786ATLAS Collaboration, M. Aaboud et al., Observation of H → bb decays and V H production with the ATLAS detector. Phys. Lett. B 786, 59-86 (2018). arXiv:1808.08238
Observation of Higgs boson decay to bottom quarks. A M Sirunyan, CMS CollaborationarXiv:1808.08242Phys. Rev. Lett. 12112121801CMS Collaboration, A.M. Sirunyan et al., Observation of Higgs boson decay to bottom quarks. Phys. Rev. Lett. 121(12), 121801 (2018). arXiv:1808.08242
Measurement of VH, H→ bb production as a function of the vector-boson transverse momentum in 13 TeV pp collisions with the ATLAS detector. M Aaboud, ATLAS CollaborationarXiv:1903.04618JHEP. 05141ATLAS Collaboration, M. Aaboud et al., Measurement of VH, H→ bb production as a function of the vector-boson transverse momentum in 13 TeV pp collisions with the ATLAS detector. JHEP 05, 141 (2019). arXiv:1903.04618
Measurements of W H and Z H production in the H → bb decay channel in pp collisions at 13 TeV with the ATLAS detector. G Aad, ATLAS CollaborationarXiv:2007.02873Eur. Phys. J. C. 812178ATLAS Collaboration, G. Aad et al., Measurements of W H and Z H production in the H → bb decay channel in pp collisions at 13 TeV with the ATLAS detector. Eur. Phys. J. C 81(2), 178 (2021). arXiv:2007.02873
Measurement of the associated production of a Higgs boson decaying into b-quarks with a vector boson at high transverse momentum in pp collisions at √ s = 13 TeV with the ATLAS detector. G Aad, ATLAS CollaborationarXiv:2008.02508Phys. Lett. B. 816136204ATLAS Collaboration, G. Aad et al., Measurement of the asso- ciated production of a Higgs boson decaying into b-quarks with a vector boson at high transverse momentum in pp collisions at √ s = 13 TeV with the ATLAS detector. Phys. Lett. B 816, 136204 (2021). arXiv:2008.02508
Search for supersymmetry in proton-proton collisions at 13 TeV in final states with jets and missing transverse momentum. C M S Collaboration, A M Sirunyan, arXiv:1908.04722JHEP. 10244C.M.S. Collaboration, A.M. Sirunyan et al., Search for supersym- metry in proton-proton collisions at 13 TeV in final states with jets and missing transverse momentum. JHEP 10, 244 (2019). arXiv:1908.04722
Search for new phenomena in final states with b-jets and missing transverse momentum in √ s = 13 TeV pp collisions with the ATLAS detector. G Aad, ATLAS CollaborationarXiv:2101.12527JHEP. 05937. ATLAS Collaboration, G. Aad et al., Search for new phenomena in final states with b-jets and missing transverse momentum in √ s = 13 TeV pp collisions with the ATLAS detector. JHEP 05, 093 (2021). arXiv:2101.12527
Measurement of the production of a W boson in association with a charm quark in pp collisions at √ s = 7 TeV with the ATLAS detector. G Aad, ATLAS CollaborationarXiv:1402.6263JHEP. 0568ATLAS Collaboration, G. Aad et al., Measurement of the produc- tion of a W boson in association with a charm quark in pp collisions at √ s = 7 TeV with the ATLAS detector. JHEP 05, 068 (2014). arXiv:1402.6263
LHCb Collaboration, R. Aaij et al., Study of Z Bosons Produced in Association with Charm in the Forward Region. Phys. Rev. Lett. 128(8), 082001 (2022). arXiv:2109.08084
CMS Collaboration, Measurement of the production cross section of a W boson in association with a charm quark in proton-proton collisions at √s = 13 TeV. Tech. Rep. CMS-PAS-SMP-21-005, CERN, Geneva (2022)
Measurement of differential production cross-sections for a Z boson in association with b-jets in 7 TeV proton-proton collisions with the ATLAS detector. G Aad, ATLAS CollaborationarXiv:1407.3643JHEP. 10141ATLAS Collaboration, G. Aad et al., Measurement of differential production cross-sections for a Z boson in association with b-jets in 7 TeV proton-proton collisions with the ATLAS detector. JHEP 10, 141 (2014). arXiv:1407.3643
Measurement of the production cross sections for a Z boson and one or more b jets in pp collisions at sqrt(s) = 7 TeV. S Chatrchyan, C.M.S. CollaborationarXiv:1402.1521JHEP. 06120C.M.S. Collaboration, S. Chatrchyan et al., Measurement of the production cross sections for a Z boson and one or more b jets in pp collisions at sqrt(s) = 7 TeV. JHEP 06, 120 (2014). arXiv:1402.1521
Measurement of the Z+b-jet cross-section in pp collisions at √ s = 7 TeV in the forward region. R Aaij, LHCb CollaborationarXiv:1411.1264JHEP. 0164LHCb Collaboration, R. Aaij et al., Measurement of the Z+b-jet cross-section in pp collisions at √ s = 7 TeV in the forward region. JHEP 01, 064 (2015). arXiv:1411.1264
CMS Collaboration, A.M. Sirunyan et al., Measurement of associated Z + charm production in proton-proton collisions at √s = 8 TeV. Eur. Phys. J. C 78(4), 287 (2018). arXiv:1711.02143
Measurements of the associated production of a Z boson and b jets in pp collisions at √ s = 8 TeV. V Khachatryan, CMS CollaborationarXiv:1611.06507Eur. Phys. J. C. 7711751CMS Collaboration, V. Khachatryan et al., Measurements of the associated production of a Z boson and b jets in pp collisions at √ s = 8 TeV. Eur. Phys. J. C 77(11), 751 (2017). arXiv:1611.06507
Measurements of the production cross-section for a Z boson in association with b-jets in proton-proton collisions at √ s = 13 TeV with the ATLAS detector. G Aad, ATLAS CollaborationarXiv:2003.11960JHEP. 0744ATLAS Collaboration, G. Aad et al., Measurements of the pro- duction cross-section for a Z boson in association with b-jets in proton-proton collisions at √ s = 13 TeV with the ATLAS detec- tor. JHEP 07, 044 (2020). arXiv:2003.11960
Measurement of the associated production of a Z boson with charm or bottom quark jets in proton-proton collisions at √ s=13 TeV. A M Sirunyan, CMS CollaborationarXiv:2001.06899Phys. Rev. D. 102332007CMS Collaboration, A.M. Sirunyan et al., Measurement of the associated production of a Z boson with charm or bottom quark jets in proton-proton collisions at √ s=13 TeV. Phys. Rev. D 102(3), 032007 (2020). arXiv:2001.06899
Direct probe of the intrinsic charm content of the proton. T Boettcher, P Ilten, M Williams, arXiv:1512.06666Phys. Rev. D. 93774008T. Boettcher, P. Ilten, M. Williams, Direct probe of the intrinsic charm content of the proton, Phys. Rev. D 93(7), 074008 (2016). arXiv:1512.06666
Phenomenological implications of the intrinsic charm in the Z boson production at the LHC. G Bailas, V P Goncalves, arXiv:1512.06007Eur. Phys. J. C. 763105G. Bailas, V.P. Goncalves, Phenomenological implications of the intrinsic charm in the Z boson production at the LHC. Eur. Phys. J. C 76(3), 105 (2016). arXiv:1512.06007
Probing proton intrinsic charm in photon or Z boson production accompanied by heavy jets at the LHC. A V Lipatov, G I Lykasov, Y Y Stepanenko, V A Bednyakov, arXiv:1606.04882Phys. Rev. D. 94553011A.V. Lipatov, G.I. Lykasov, Y.Y. Stepanenko, V.A. Bednyakov, Probing proton intrinsic charm in photon or Z boson production accompanied by heavy jets at the LHC. Phys. Rev. D 94(5), 053011 (2016). arXiv:1606.04882
The Intrinsic Charm of the Proton. S J Brodsky, P Hoyer, C Peterson, N Sakai, Phys. Lett. B. 93S.J. Brodsky, P. Hoyer, C. Peterson, N. Sakai, The Intrinsic Charm of the Proton. Phys. Lett. B 93, 451-455 (1980)
A review of the intrinsic heavy quark content of the nucleon. S J Brodsky, A Kusina, F Lyonnet, I Schienbein, H Spiesberger, R Vogt, arXiv:1504.06287Adv. High Energy Phys. 2015231547S.J. Brodsky, A. Kusina, F. Lyonnet, I. Schienbein, H. Spies- berger, R. Vogt, A review of the intrinsic heavy quark content of the nucleon. Adv. High Energy Phys. 2015, 231547 (2015). arXiv:1504.06287
Evidence for intrinsic charm quarks in the proton. R D Ball, NNPDF CollaborationA Candido, NNPDF CollaborationJ Cruz-Martinez, NNPDF CollaborationS Forte, NNPDF CollaborationT Giani, NNPDF CollaborationF Hekhorn, NNPDF CollaborationK Kudashkin, NNPDF CollaborationG Magni, NNPDF CollaborationJ Rojo, NNPDF CollaborationarXiv:2208.08372Nature. 6087923NNPDF Collaboration, R.D. Ball, A. Candido, J. Cruz-Martinez, S. Forte, T. Giani, F. Hekhorn, K. Kudashkin, G. Magni, J. Rojo, Evidence for intrinsic charm quarks in the proton. Nature 608(7923), 483-487 (2022). arXiv:2208.08372
CT14 Intrinsic Charm Parton Distribution Functions from CTEQ-TEA Global Analysis. T.-J Hou, S Dulat, J Gao, M Guzzi, J Huston, P Nadolsky, C Schmidt, J Winter, K Xie, C P Yuan, arXiv:1707.00657JHEP. 0259T.-J. Hou, S. Dulat, J. Gao, M. Guzzi, J. Huston, P. Nadolsky, C. Schmidt, J. Winter, K. Xie, C.P. Yuan, CT14 Intrinsic Charm Parton Distribution Functions from CTEQ-TEA Global Analysis. JHEP 02, 059 (2018). arXiv:1707.00657
M Guzzi, T J Hobbs, K Xie, J Huston, P Nadolsky, C P Yuan, arXiv:2211.01387The persistent nonperturbative charm enigma. M. Guzzi, T.J. Hobbs, K. Xie, J. Huston, P. Nadolsky, C.P. Yuan, The persistent nonperturbative charm enigma. arXiv:2211.01387
Predictions for Z -Boson Production in Association with a b-Jet at O(α 3 s ). R Gauld, A Gehrmann-De Ridder, E W N Glover, A Huss, I Majer, arXiv:2005.03016Phys. Rev. Lett. 12522222002R. Gauld, A. Gehrmann-De Ridder, E.W.N. Glover, A. Huss, I. Majer, Predictions for Z -Boson Production in Association with a b-Jet at O(α 3 s ). Phys. Rev. Lett. 125(22), 222002 (2020). arXiv:2005.03016
NNLO QCD predictions for W+c-jet production at the LHC. M Czakon, A Mitov, M Pellen, R Poncelet, arXiv:2011.01011JHEP. 06100M. Czakon, A. Mitov, M. Pellen, R. Poncelet, NNLO QCD pre- dictions for W+c-jet production at the LHC. JHEP 06, 100 (2021). arXiv:2011.01011
Next-to-next-toleading order QCD corrections to Wbb − production at the LHC. H B Hartanto, R Poncelet, A Popescu, S Zoia, arXiv:2205.01687Phys. Rev. D. 106774016H.B. Hartanto, R. Poncelet, A. Popescu, S. Zoia, Next-to-next-to- leading order QCD corrections to Wbb − production at the LHC. Phys. Rev. D 106(7), 074016 (2022). arXiv:2205.01687
R. Gauld, A. Huss, G. Stagnitto, Flavor identification of reconstructed hadronic jets. Phys. Rev. Lett. 130(16), 161901 (2023). https://doi.org/10.1103/PhysRevLett.130.161901
Associated production of a Higgs boson decaying into bottom quarks and a weak vector boson decaying leptonically at NNLO in QCD. R Gauld, A Gehrmann-De Ridder, E W N Glover, A Huss, I Majer, arXiv:1907.05836JHEP. 102R. Gauld, A. Gehrmann-De Ridder, E.W.N. Glover, A. Huss, I. Majer, Associated production of a Higgs boson decaying into bottom quarks and a weak vector boson decaying leptonically at NNLO in QCD. JHEP 10, 002 (2019). arXiv:1907.05836
Precise QCD predictions for the production of a Z boson in association with a hadronic jet. A Gehrmann-De Ridder, T Gehrmann, E W N Glover, A Huss, T A Morgan, arXiv:1507.02850Phys. Rev. Lett. 117222001A. Gehrmann-De Ridder, T. Gehrmann, E.W.N. Glover, A. Huss, T.A. Morgan, Precise QCD predictions for the production of a Z boson in association with a hadronic jet. Phys. Rev. Lett. 117(2), 022001 (2016). arXiv:1507.02850
Infrared safe definition of jet flavor. A Banfi, G P Salam, G Zanderighi, arXiv:hep-ph/0601139Eur. Phys. J. C. 47A. Banfi, G.P. Salam, G. Zanderighi, Infrared safe definition of jet flavor. Eur. Phys. J. C 47, 113-124 (2006). arXiv:hep-ph/0601139
A fragmentation approach to jet flavor. S Caletti, A J Larkoski, S Marzani, D Reichelt, arXiv:2205.01117JHEP. 10158S. Caletti, A.J. Larkoski, S. Marzani, D. Reichelt, A fragmentation approach to jet flavor. JHEP 10, 158 (2022). arXiv:2205.01117
Practical jet flavour through NNLO. S Caletti, A J Larkoski, S Marzani, D Reichelt, arXiv:2205.01109Eur. Phys. J. C. 827632S. Caletti, A.J. Larkoski, S. Marzani, D. Reichelt, Practical jet flavour through NNLO. Eur. Phys. J. C 82(7), 632 (2022). arXiv:2205.01109
M Czakon, A Mitov, R Poncelet, arXiv:2205.11879Infrared-safe flavoured anti-k T jets. M. Czakon, A. Mitov, R. Poncelet, Infrared-safe flavoured anti-k T jets. arXiv:2205.11879
The anti-k t jet clustering algorithm. M Cacciari, G P Salam, G Soyez, arXiv:0802.1189JHEP. 0463M. Cacciari, G.P. Salam, G. Soyez, The anti-k t jet clustering algo- rithm. JHEP 04, 063 (2008). arXiv:0802.1189
The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations. J Alwall, R Frederix, S Frixione, V Hirschi, F Maltoni, O Mattelaer, H S Shao, T Stelzer, P Torrielli, M Zaro, arXiv:1405.0301JHEP. 0779J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mat- telaer, H.S. Shao, T. Stelzer, P. Torrielli, M. Zaro, The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations. JHEP 07, 079 (2014). arXiv:1405.0301
An introduction to PYTHIA 8.2. T Sjöstrand, S Ask, J R Christiansen, R Corke, N Desai, P Ilten, S Mrenna, S Prestel, C O Rasmussen, P Z Skands, arXiv:1410.3012Comput. Phys. Commun. 191T. Sjöstrand, S. Ask, J.R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C.O. Rasmussen, P.Z. Skands, An intro- duction to PYTHIA 8.2. Comput. Phys. Commun. 191, 159-177 (2015). arXiv:1410.3012
Herwig++ Physics and Manual. M Bahr, arXiv:0803.0883Eur. Phys. J. C. 58M. Bahr et al., Herwig++ Physics and Manual. Eur. Phys. J. C 58, 639-707 (2008). arXiv:0803.0883
Herwig 7.0/Herwig++ 3.0 release note. J Bellm, arXiv:1512.01178Eur. Phys. J. C. 764196J. Bellm et al., Herwig 7.0/Herwig++ 3.0 release note. Eur. Phys. J. C 76(4), 196 (2016). arXiv:1512.01178
Herwig 7.2 release note. J Bellm, arXiv:1912.06509Eur. Phys. J. C. 805452J. Bellm et al., Herwig 7.2 release note. Eur. Phys. J. C 80(5), 452 (2020). arXiv:1912.06509
A.J. Larkoski, S. Marzani, G. Soyez, J. Thaler, Soft Drop. JHEP 05, 146 (2014). arXiv:1402.2657
The PDF4LHC21 combination of global PDF fits for the LHC Run III. R D Ball, PDF4LHC Working Group CollaborationarXiv:2203.05506J. Phys. G. 49880501PDF4LHC Working Group Collaboration, R.D. Ball et al., The PDF4LHC21 combination of global PDF fits for the LHC Run III. J. Phys. G 49(8), 080501 (2022). arXiv:2203.05506
LHAPDF6: parton density access in the LHC precision era. A Buckley, J Ferrando, S Lloyd, K Nordström, B Page, M Rüfenacht, M Schönherr, G Watt, arXiv:1412.7420Eur. Phys. J. C. 75132A. Buckley, J. Ferrando, S. Lloyd, K. Nordström, B. Page, M. Rüfenacht, M. Schönherr, G. Watt, LHAPDF6: parton density access in the LHC precision era. Eur. Phys. J. C 75, 132 (2015). arXiv:1412.7420
A massive variable flavour number scheme for the Drell-Yan process. R Gauld, arXiv:2107.01226SciPost Phys. 12124R. Gauld, A massive variable flavour number scheme for the Drell- Yan process. SciPost Phys. 12(1), 024 (2022). arXiv:2107.01226
Identification of beauty and charm quark jets at LHCb. R Aaij, LHCb CollaborationarXiv:1504.07670JINST. 10066013LHCb Collaboration, R. Aaij et al., Identification of beauty and charm quark jets at LHCb. JINST 10(06), P06013 (2015). arXiv:1504.07670
Performance of b-Jet Identification in the ATLAS Experiment. G Aad, ATLAS CollaborationarXiv:1512.01094JINST. 11044008ATLAS Collaboration, G. Aad et al., Performance of b-Jet Identi- fication in the ATLAS Experiment. JINST 11(04), P04008 (2016). arXiv:1512.01094
Identification of heavyflavour jets with the CMS detector in pp collisions at 13 TeV. A M Sirunyan, CMS CollaborationarXiv:1712.07158JINST. 13055011CMS Collaboration, A.M. Sirunyan et al., Identification of heavy- flavour jets with the CMS detector in pp collisions at 13 TeV. JINST 13(05), P05011 (2018). arXiv:1712.07158
Identification of charm jets at LHCb. R Aaij, LHCb CollaborationarXiv:2112.08435JINST. 17022028LHCb Collaboration, R. Aaij et al., Identification of charm jets at LHCb. JINST 17(02), P02028 (2022). arXiv:2112.08435
Complete O (alpha-s) corrections to heavy flavor structure functions in electroproduction. E Laenen, S Riemersma, J Smith, W L Van Neerven, Nucl. Phys. B. 392E. Laenen, S. Riemersma, J. Smith, W.L. van Neerven, Complete O (alpha-s) corrections to heavy flavor structure functions in elec- troproduction. Nucl. Phys. B 392, 162-228 (1993)
Rates for inclusive deep inelastic electroproduction of charm quarks at HERA. S Riemersma, J Smith, W L Van Neerven, arXiv:hep-ph/9411431Phys. Lett. B. 347S. Riemersma, J. Smith, W.L. van Neerven, Rates for inclusive deep inelastic electroproduction of charm quarks at HERA. Phys. Lett. B 347, 143-151 (1995). arXiv:hep-ph/9411431
Heavy quark correlations in deep inelastic electroproduction. B W Harris, J Smith, arXiv:hep-ph/9503484Nucl. Phys. B. 452B.W. Harris, J. Smith, Heavy quark correlations in deep inelas- tic electroproduction. Nucl. Phys. B 452, 109-160 (1995). arXiv:hep-ph/9503484
Charm electroproduction viewed in the variable flavor number scheme versus fixed order perturbation theory. M Buza, Y Matiounine, J Smith, W L Van Neerven, arXiv:hep-ph/9612398Eur. Phys. J. C. 1M. Buza, Y. Matiounine, J. Smith, W.L. van Neerven, Charm elec- troproduction viewed in the variable flavor number scheme versus fixed order perturbation theory. Eur. Phys. J. C 1, 301-320 (1998). arXiv:hep-ph/9612398
Soft gluon resummation for heavy quark electroproduction. E Laenen, S.-O Moch, arXiv:hep-ph/9809550Phys. Rev. D. 5934027E. Laenen, S.-O. Moch, Soft gluon resummation for heavy quark electroproduction. Phys. Rev. D 59, 034027 (1999). arXiv:hep-ph/9809550
On the next-tonext-to-leading order QCD corrections to heavy-quark production in deep-inelastic scattering. H Kawamura, N A Lo Presti, S Moch, A Vogt, arXiv:1205.5727Nucl. Phys. B. 864H. Kawamura, N.A. Lo Presti, S. Moch, A. Vogt, On the next-to- next-to-leading order QCD corrections to heavy-quark production in deep-inelastic scattering. Nucl. Phys. B 864, 399-468 (2012). arXiv:1205.5727
Charm production in the forward region: constraints on the small-x gluon and backgrounds for neutrino astronomy. R Gauld, J Rojo, L Rottoli, J Talbert, arXiv:1506.08025JHEP. 119R. Gauld, J. Rojo, L. Rottoli, J. Talbert, Charm production in the forward region: constraints on the small-x gluon and backgrounds for neutrino astronomy. JHEP 11, 009 (2015). arXiv:1506.08025
Connecting Pixels to Privacy and Utility: Automatic Redaction of Private Information in Images
Tribhuvanesh Orekondy [email protected]
Max Planck Institute for Informatics Saarland Informatics Campus Saabrücken
Germany
Mario Fritz [email protected]
Max Planck Institute for Informatics Saarland Informatics Campus Saabrücken
Germany
Bernt Schiele [email protected]
Max Planck Institute for Informatics Saarland Informatics Campus Saabrücken
Germany
Images convey a broad spectrum of personal information. If such images are shared on social media platforms, this personal information is leaked which conflicts with the privacy of depicted persons. Therefore, we aim for automated approaches to redact such private information and thereby protect privacy of the individual.By conducting a user study we find that obfuscating the image regions related to the private information leads to privacy while retaining utility of the images. Moreover, by varying the size of the regions different privacy-utility tradeoffs can be achieved. Our findings argue for a "redaction by segmentation" paradigm.Hence, we propose the first sizable dataset of private images "in the wild" annotated with pixel and instance level labels across a broad range of privacy classes. We present the first model for automatic redaction of diverse private information. It is effective at achieving various privacyutility trade-offs within 83% of the performance of redactions based on ground-truth annotation.
Introduction
More and more visual data is captured and shared on the Internet. Images and video contain a wide range of private information that may be shared unintentionally, such as an email address, picture id or fingerprint (see Figure 1). Consequently, there is a growing interest within the computer vision community [4,16,20,22,38,40] to assess the amount of leaked information, understand implications on privacy and ultimately control and enforce privacy again. Yet, we are missing an understanding of how image content relates to private information and how automated redaction can be approached.
Therefore, we address two important questions in this context. First, how can private information be redacted while maintaining an intelligible image? We investigate this question in a user study with highly encouraging results: we can redact private information in images while preserving its utility. Furthermore, varying the amount of pixels redacted results in different privacy vs. utility tradeoffs. We conclude that redaction by segmentation is a valid approach to perform visual redactions.
We ask a second question in this paper: What kind of privacy-utility trade-offs can be achieved by automatic redaction schemes? Based on our first finding, we approach this as a pixel labeling task on multiple privacy classes (which we refer to as privacy attributes). Segmenting privacy attributes in images presents a new challenge of reasoning about regions including multiple modalities. For instance, in Figure 1, identifying the name and datetime requires mapping the relevant pixels to the text domain for understanding, while identifying the student id requires reasoning over both visual and text domains. Our automated methods address these challenges and localize these privacy attributes for redaction via segmentation. By performing both quantitative and human evaluation, we find these automated methods to be effective in segmentation as well as privacy-utility metrics.
Our model and evaluation for automatic redaction is facilitated by a new dataset that extends the Visual Privacy (VISPR) dataset [38] to include high-quality pixel and instance-level annotations. To this end, we propose a dataset containing 8.5k images annotated with 47.6k instances over 24 privacy attributes. We will make the dataset publicly available for future research.
Related Work
Text Sanitation Redaction techniques are primarily studied in the context of confidential text documents, wherein certain sensitive entities need to be removed. Studies focus on identification of such entities [5,8,9,41,42,43] and methods to prevent over-sanitation [5,41]. However, unlike these works which have access to dense structured text data (e.g. documents), we deal with unstructured pixel-level representations of such entities.
Image Perturbations for Privacy Adversarial perturbations [15,18,36] are suggested to evade person identification [22,45]. However, these methods typically assume a white-box CNN-based adversary for the specific task of face recognition. In contrast, we propose redacting content at the expense of some utility to achieve better privacy (measured against humans) across a broad range of privacy classes. [4] proposes de-identifying people by generating an alternate appearance for the person. We study a more fundamental problem of identifying such regions where such methods could be directly applicable. [27,29] study obfuscation of private content, but are limited to constrained surveillance videos and non-automated methods.
Private Information Recognition Many existing studies focus on either detecting faces [47,49], license plates [6,53,54], relationships [46,50], age [3] or occupations [44]. Research in determining privacy risk across a broad range of privacy classes are typically treated as a classification problem [38,48,51]. However, many studies [2,11] demonstrate a "privacy paradox" -users share such images in spite of knowing the privacy risks. Hence in this work, we propose a middle ground for reducing privacy leakage, such that users can still share images by redacting private content while preserving its utility.
Visual Privacy Datasets
PicAlert [52] and YourAlert [51] propose datasets with user-classified privacy labels. VISPR [38] provides a more exhaustive dataset of 22k images annotated with a broad range of image-level privacy labels. The PEViD video dataset [28] provides person-centric bounding box annotation over 20 video sequences in a constrained setting. In contrast, our dataset based on VISPR images provides pixel-level annotation from a diverse set of privacy classes.
Segmentation Identifying pixel-level labels from images is a well-studied problem in computer vision. However, most methods [32,34] and datasets [10,13,33] focus on segmenting common objects in visual scenes. We however focus on identifying private regions in a privacy-utility framework, which introduces many new challenges.
The Visual Redactions Dataset
In this section we present our pixel-label visual privacy dataset as an extension to the VISPR dataset [38]. We begin with a discussion on how images (Section 3.1) and attributes (Section 3.2) were selected for the task. This is followed by the annotation procedure (Section 3.3) and a brief analysis (Section 3.4) of the dataset.
Selecting Images for Pixel-level Annotation
The VISPR dataset contains 22k real-world, user-uploaded, publicly available Flickr images, which makes it a great starting point for addressing the visual redaction problem "in the wild". 10k of these images are annotated as safe. From the remaining 12k images we pixel-annotate the subset of 8,473 images that contain at most 5 people. The main reason to focus on this subset was to reduce the annotation cost while maximizing the amount of non-person pixels. We preserve the identical 45-20-35 train-val-test split of these images as in the VISPR dataset.
Shortlisting Privacy Attributes
The 22k images in the multilabel VISPR dataset are annotated using 68 image-level privacy attributes (∼5.2 attributes per image). These privacy attributes are compiled from multiple privacy-relevant sources -the US Privacy Act of 1974, EU Data Protection Directive 95/46/EC and various social network website rules. Additionally, they cover a diverse range of private information that can be leaked in images (e.g. face, tattoo, physical disability, personal relationships, passport, occupation). Therefore, we use these as a starting point for redactions in images. We select 42 out of 67 privacy attributes (excluding attribute safe) for three reasons. First, for 11 attributes (e.g. religion, occupation, sports) typically the entire image is linked to the attribute (e.g. scene with church or sport stadium). In such cases, the solution to keeping the information private is to not share such images (as proposed in [38]). We instead focus on attributes which can be localized for redaction, such that the image might still be useful. Second, 8 attributes were extremely tedious to annotate, because of their strong cooccurrence with crowd-scenes (e.g. political and general opinion, occupation) or the effort required to outline them (e.g. hair color). Third, 6 attributes (e.g. place of birth, email content, national id) contained under 30 examples for training. In spite of filtering such attributes, we still cover a broad spectrum of information to help de-identify people in images (such as by obfuscating faces or names). We further merge few groups among these 42 attributes: (i) when they occur as a complete and partial version (e.g. (complete face, partial face) merged into face) (ii) when they localize to the same region (e.g. (race, skin color, gender, relationships) merged into person). As a result, we work with 24 localizable privacy attributes in our dataset representative of 42 of the original 67 VISPR privacy attributes (see Figure 2 for the complete list).
Dataset Annotation
In this section, we discuss the annotation procedure.
Annotation Tool and Instructions We use a customized version of the VGG Image Annotator tool [12].

Consensus and Agreement Measure Agreement is calculated w.r.t. images annotated by one of the authors. We measure agreement using Mean Intersection over Union (mIoU), tp/(tp + fp + fn), averaged over images.

Consensus Experiment and Annotating person We observed 93.8% agreement in the consensus task of annotating instances of person in 272 images. Annotators separately annotated person in the remaining images. With an annotation effort of ∼240 hours, we obtain 13,171 person instances annotated over 5,920 images.
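For concreteness, the agreement measure can be computed from binary masks as in the following sketch (the helper names are hypothetical, not the annotation tool's own code):

```python
import numpy as np

def mask_iou(pred, gt):
    """IoU = tp / (tp + fp + fn) for two binary masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / float(tp + fp + fn) if (tp + fp + fn) else 1.0

def mean_iou(mask_pairs):
    """Agreement score: IoU averaged over annotated images."""
    return float(np.mean([mask_iou(p, g) for p, g in mask_pairs]))

# toy 4x4 example: annotator mask vs. reference mask
a = np.zeros((4, 4), bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), bool); b[1:3, 1:4] = True
print(mean_iou([(a, b)]))   # 4 / 6 ≈ 0.67
```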
Annotating face We observed an agreement of 86.2% (lower due to small sizes of instances) in the consensus task for annotating face in 100 images. Using the 5,920 images of people as a starting point, annotators annotated faces in separate sets of images. In ∼60 hours, we gather 8,996 instances of faces.
Annotating Remaining Attributes Images for each of the remaining 22 attributes are annotated together successively by at most a single annotator. 8 of the text-based attributes (e.g. name, phone no) are annotated using 4-sided polygons or bounding boxes. Over ∼220 hours, we gather annotation of 26,676 instances.
Text Annotations
We augment all images in the dataset with text detections obtained using the Google Cloud Vision API to aid localization of text-based attributes. This is provided as OCR and bounding box annotation in a structured hierarchy of text elements, in the order: characters, words, paragraphs, blocks and pages. In addition, we gather face and landmark bounding box detections using the same API.
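A sketch of how such word-level boxes and OCR can be retrieved with the google-cloud-vision Python client is shown below; exact class names vary between client-library versions, so this should be read as an assumption about the interface rather than the exact pipeline used here.

```python
# Sketch assuming the google-cloud-vision Python client; treat as illustrative only.
from google.cloud import vision

def word_boxes(image_path):
    """Return (text, bounding-box vertices) for every detected word, walking the
    page -> block -> paragraph -> word -> symbol hierarchy of the response."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.document_text_detection(image=image)
    words = []
    for page in response.full_text_annotation.pages:
        for block in page.blocks:
            for paragraph in block.paragraphs:
                for word in paragraph.words:
                    text = "".join(symbol.text for symbol in word.symbols)
                    verts = [(v.x, v.y) for v in word.bounding_box.vertices]
                    words.append((text, verts))
    return words
```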
Summary With an annotation effort of ∼800 hours concentrated over four months with five annotators (excluding the authors), we propose the first sizable pixel-labeled privacy dataset of 8,473 images annotated with ∼47.6k instances using 24 privacy attributes.
Dataset Analysis and Challenges
We now present a brief analysis of the dataset and the new challenges it presents for segmentation tasks. Examples of the proposed attributes and their distribution among the 8k images in the dataset are presented in Figure 2.
Popular datasets [10,13,33] provide pixel-level annotation of various common visual objects. These objects are common in visual scenes, such as vehicles (car, bicycle), animals (dog, sheep) or household items (chair, table). Common to all these objects are their distinctive visual cues. In Figure 2, one can notice similar cues among the VISUAL attributes, but they are not evident in the others. Recognizing TEXTUAL attributes (such as names or phone numbers) in images instead requires detecting and parsing text information and additionally associating it with prior knowledge. While some of the MULTIMODAL attributes can be associated with visual cues, often the text content greatly helps disambiguate instances (a card-like object could be a student id or driv lic).
We also observe a strong correlation between modalities and sizes of instances. We find TEXTUAL instances to occupy on average less than 1% of pixels in images, while the MULTIMODAL attributes predominantly occur as close-up photographs occupying 45% of the image area on average. Consequently, the privacy attributes pose challenges from multiple modalities and require specialized methods to individually address them. Moreover, they provide different insights due to the variance in sizes. Hence, going forward, we treat the modes TEXTUAL, VISUAL and MULTIMODAL as categories to aid analysis and addressing challenges presented by them.
Applicability to other problems We believe the proposed dataset could be beneficial to many other problems apart from visual redactions. In visual privacy, it complements datasets to perform tasks such as person de-identification [4,16]. Outside of the privacy domain, we also provide a sizable face segmentation dataset with 9k face instances, compared to 2.9k in Labeled Faces in the Wild [23] and 200 in FASSEG [24].
Understanding Privacy and Utility w.r.t. Redacted Pixels
In this section, we study how redacting ground-truth pixels of attributes influences privacy and utility of the image by conducting a user study on Amazon Mechanical Turk (AMT). We will also use the results from this study as a reference point for evaluating our proposed automated methods in Section 6.2.
Generating Redactions
Given an image I a containing attribute a, we generate a ground-truth redacted version of the image Iā by simply blacking-out pixels corresponding to a in the ground-truth.
Spatially extending a
We now want to redact fewer or more pixels in image Iā to understand how this influences the image's privacy and utility. We generate multiple versions of the ground-truth redacted image {I s a : s ∈ S} at different scales of redaction, such that I ns a contains n times as many blacked-out pixels as I s a . We achieve different scales of redactions by dilating/eroding the ground-truth binary mask of a, as shown in Figure 3. We use seven scales S = {0.0, 0.25, 0.5, 1.0, 2.0, 4.0, inf}, where I 0 a is the unredacted image, I 1 a (= Iā) is the GT redacted image and I inf a is a completely blacked-out image.
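A minimal sketch of this procedure is given below, using iterative morphological dilation/erosion from scipy to grow or shrink the ground-truth mask until it covers roughly s times as many pixels; the exact structuring element and stopping rule used for the study are not specified in the text, so these are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def scale_mask(mask, scale, max_iters=500):
    """Grow (scale > 1) or shrink (scale < 1) a binary attribute mask by iterative
    dilation/erosion until it covers roughly `scale` times as many pixels as the
    ground truth."""
    target = scale * mask.sum()
    out = mask.copy()
    grow = scale > 1.0
    for _ in range(max_iters):
        reached = out.sum() >= target if grow else out.sum() <= target
        if reached or out.sum() == 0:
            break
        out = binary_dilation(out) if grow else binary_erosion(out)
    return out

def redact(image, mask):
    """Black out the pixels selected by the (possibly rescaled) mask."""
    redacted = image.copy()
    redacted[mask] = 0
    return redacted

# toy example: a 100x100 image with a 10x10 ground-truth region for attribute a
img = np.random.randint(0, 255, (100, 100, 3), dtype=np.uint8)
gt = np.zeros((100, 100), bool)
gt[40:50, 40:50] = True
for s in (0.25, 0.5, 1.0, 2.0, 4.0):
    m = scale_mask(gt, s)
    _ = redact(img, m)
    print(s, round(m.sum() / gt.sum(), 2))
```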
User Study
We create an AMT project of 1,008 tasks (24 attributes × 6 images × 7 scales), each to be answered by 5 unique workers from a pool of 29 qualified workers. Each task contains 2 yes/no questions based on an image I s a , one each for Privacy and Utility. We consider privacy and utility w.r.t. the redacted image I s a , as defined below.

Defining Privacy To understand if attribute a has been successfully redacted in I s a , we pose the privacy question in the form: "Is a visible in the image?". We also provide a brief description of the attribute a along with examples. We consider I s a to be private if a majority of the users respond no.

Defining Utility To understand the utility of an image, we pose the question: "Is the image intelligible, so that it can be shared on social networking websites? i.e. does this image convey the main content of the original image (i.e., the image without the black patch)?". As a result, we define the utility of an image independent of its aesthetic value and instead associate it with the semantic information. We consider I s a to have utility if a majority of the users respond yes.
Measuring Privacy and Utility
We label each of the 1,008 redacted images (spanning the different redaction scales) with privacy and utility as discussed above. For any given redaction scale s, we aggregate privacy/utility scores simply as the percentage of images considered private/useful. Consequently, an ideal visual redaction has both high privacy and high utility.
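A minimal sketch of this aggregation follows, assuming each redacted image's worker responses are available as lists of "yes"/"no" strings (the data layout and field names here are our own assumptions):

```python
from collections import defaultdict

def majority(answers):
    """Majority vote over a list of 'yes'/'no' worker answers."""
    yes = sum(a == "yes" for a in answers)
    return yes > len(answers) / 2

def aggregate(responses):
    """responses: list of dicts with keys 'scale', 'privacy_answers',
    'utility_answers' per redacted image. Returns {scale: (privacy %, utility %)}."""
    buckets = defaultdict(list)
    for r in responses:
        # Private if the majority says the attribute is NOT visible;
        # useful if the majority says the image is intelligible.
        private = not majority(r["privacy_answers"])
        useful = majority(r["utility_answers"])
        buckets[r["scale"]].append((private, useful))
    return {s: (100.0 * sum(p for p, _ in v) / len(v),
                100.0 * sum(u for _, u in v) / len(v))
            for s, v in buckets.items()}

if __name__ == "__main__":
    demo = [{"scale": 1.0, "privacy_answers": ["no"] * 4 + ["yes"],
             "utility_answers": ["yes"] * 5},
            {"scale": 0.5, "privacy_answers": ["yes"] * 5,
             "utility_answers": ["yes"] * 5}]
    print(aggregate(demo))
```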
Analysis
We now discuss results over modes and various sizes (i.e., the relative size of a in I_a), based on the privacy-utility scores shown in Figure 4.
Privacy is a Step Function
We observe in Figure 4, across all plots, that a minimum number of pixels of attribute a needs to be removed to effectively redact it from the image. This minimum corresponds exactly to the ground-truth redaction (s = 1): redacting fewer pixels than this makes the image non-private, and redacting more pixels achieves only marginal privacy gains. More specifically, we achieve 94% privacy with ground-truth redactions. The imperfect privacy score is predominantly (5/9 failure cases) due to workers overlooking important details in the question. Apart from this, other cases involve contextual cues revealing the attribute (e.g., the shadow of a wheelchair) and regions that were not annotated (e.g., the outline of a person at a distance).
Gradual Loss in Utility: From Figure 4 OVERALL, we find utility decreases gradually as the size of the redacted region increases. Another interesting observation is that utility strongly depends on the size of a in the image. In the bottom row of Figure 4, we see that for smaller GT regions (a = 0-10% of the image), we still obtain high utility at larger dilations. However, once the area of the GT region grows beyond 50% of the image, redaction entails blacking out the majority of the image pixels and hence yields zero utility.
Privacy and Utility: What can we take away from this when proposing automated methods that preserve privacy while retaining utility? Due to the correlation between modes and sizes, we can predict more pixels for smaller attributes with minimal loss of utility. For instance, for TEXTUAL attributes, we can predict 4x as many pixels as the ground truth for redaction. However, for larger ground-truth regions (>50% of the image), both privacy and utility are step functions, and redaction hence becomes a choice between privacy and utility.
GT Segmentations are a Good Proxy: In general, for images over all attributes and sizes (Figure 4 OVERALL), we see that we can already achieve high privacy while retaining considerable utility of the image. Moreover, we obtain near-perfect privacy with the highest utility in all cases at s = 1, the ground-truth redaction. This justifies addressing privacy attribute redaction as a segmentation task.
Pixel-Labeling of Private Regions
In Section 3 we discussed the challenges posed by attributes occurring across multiple modalities (TEXTUAL, VISUAL, MULTIMODAL). In Section 4, we motivated how ground-truth segmentations in our dataset make a good proxy for visual redactions. In this section we propose automated methods to perform pixel-level labeling (semantic segmentation) of privacy attributes in images, with an emphasis on methods tackling each modality.
We begin with a simple baseline, Nearest Neighbor (NN): a 2048-dim feature is extracted using ResNet-50 for each image. At test time, we predict the segmentation mask of the closest training image in terms of L2 distance.
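A minimal sketch of this baseline is shown below, assuming precomputed training masks; the torchvision weight-loading API and the preprocessing pipeline are our own assumptions rather than the exact setup used in the paper.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Global-average-pooled ResNet-50 features (2048-d), ImageNet-pretrained.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()   # drop the classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def feat(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return backbone(x).squeeze(0).numpy()        # shape (2048,)

def nn_predict(test_path, train_paths, train_masks):
    """Return the segmentation mask of the L2-nearest training image."""
    q = feat(test_path)
    train_feats = np.stack([feat(p) for p in train_paths])
    idx = int(np.argmin(np.linalg.norm(train_feats - q, axis=1)))
    return train_masks[idx]
```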
Methods for TEXTUAL-centric attributes
To facilitate segmenting textual attributes, for each image we first obtain an ordered sequence of bounding box detections of words and their OCR using the Google Cloud Vision API (as discussed in Section 3.3).
Proxy GT: We represent the n words in an image as a sequence [(w_i, b_i, y_i)] for i = 1, ..., n, where w_i is the word text, b_i is the bounding box and y_i is the label. We use 9 labels (8 TEXTUAL attributes + safe). We assign each y_i in the sequence the ground-truth attribute that maximally overlaps with b_i, or the safe label in case of zero overlap. At test time, we segment the pixels in region b_i if a non-safe label is predicted for word w_i. For the test set, we refer to predictions from this proxy dataset as PROXY to obtain an upper bound for our methods on these text detections.

Named Entity Recognition (NER): We use the popular Stanford NER CRFClassifier [14] to label each word of the sequence with one of a set of recognized entity classes (e.g., person, organization, etc.). We use the model trained on case-invariant text to predict one of seven entity classes.

Rule-based Classification (RULES): We use the following rules to label words in the sequence: (i) name: if it exists in a set of 241k names obtained from the US Census Bureau website; (ii) location, landmark, home address: if it exists in a set of 3.7k names of cities and countries from Wikipedia's list of locations with a population of more than 110k; (iii) datetime, phone no, birth dt: if the word contains a digit; (iv) emailadd: if the word contains the symbol @, we predict this word and the adjacent words, assuming an address of the form text@text.
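The proxy labeling step can be sketched as follows, assuming OCR word boxes and per-attribute ground-truth masks are available (the data structures and the box convention here are assumptions):

```python
import numpy as np

def box_overlap(mask, box):
    """Number of ground-truth pixels of `mask` inside box = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return int(mask[y0:y1, x0:x1].sum())

def proxy_labels(word_boxes, gt_masks):
    """word_boxes: list of (x0, y0, x1, y1) for detected words.
    gt_masks: dict attribute -> binary mask (H x W).
    Returns one label per word: the attribute with maximal overlap, else 'safe'."""
    labels = []
    for box in word_boxes:
        overlaps = {a: box_overlap(m, box) for a, m in gt_masks.items()}
        best = max(overlaps, key=overlaps.get) if overlaps else None
        labels.append(best if best and overlaps[best] > 0 else "safe")
    return labels

if __name__ == "__main__":
    h = w = 100
    gt = {"name": np.zeros((h, w), bool), "phone_no": np.zeros((h, w), bool)}
    gt["name"][10:20, 10:50] = True
    words = [(12, 11, 40, 19), (60, 60, 90, 70)]   # one word on the name, one elsewhere
    print(proxy_labels(words, gt))                 # ['name', 'safe']
```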
Sequence Labeling (SEQ) We train a sequence labeler similar to [19,31,35] as shown in Figure 5. We preprocess by replacing all digits with 0s and stem each word to reduce the size of the vocabulary. We tokenize the words in the training sequences using a vocabulary of size 4,149 (number of words with at least 4 occurrences). We embed the words using 100-d GloVe embeddings [39]. To capture the temporal nature, we use two-level Bidirectional LSTMs. At each time-step, we obtain a joint embedding by elementwise multiplication of: the text embedding (256-d output of the LSTM) and the image embedding (2048-d ResNet-50 [17] feature reduced to 256-d using an FC layer). We classify this joint embedding into 9 labels using an FC layer followed by softmax activation.
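The sketch below is a minimal PyTorch rendering in the spirit of this architecture; the omission of GloVe initialization, stemming and digit normalization, as well as any hyper-parameter not stated in the text, are simplifications and assumptions on our part.

```python
import torch
import torch.nn as nn

class SeqLabeler(nn.Module):
    """Label each OCR word with one of 9 classes (8 textual attributes + safe),
    conditioning on a global image feature."""
    def __init__(self, vocab_size, n_labels=9, emb_dim=100,
                 lstm_dim=128, img_dim=2048, joint_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)     # init from GloVe in practice
        self.lstm = nn.LSTM(emb_dim, lstm_dim, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.text_proj = nn.Linear(2 * lstm_dim, joint_dim)
        self.img_proj = nn.Linear(img_dim, joint_dim)
        self.classifier = nn.Linear(joint_dim, n_labels)

    def forward(self, word_ids, img_feat):
        # word_ids: (B, T) token ids; img_feat: (B, img_dim), e.g. a ResNet-50 feature
        h, _ = self.lstm(self.embed(word_ids))             # (B, T, 2 * lstm_dim)
        text = self.text_proj(h)                           # (B, T, joint_dim)
        img = self.img_proj(img_feat).unsqueeze(1)         # (B, 1, joint_dim)
        joint = text * img                                 # element-wise fusion
        return self.classifier(joint)                      # (B, T, n_labels) logits

if __name__ == "__main__":
    model = SeqLabeler(vocab_size=4150)
    logits = model(torch.randint(0, 4150, (2, 12)), torch.randn(2, 2048))
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, 9),
                                 torch.randint(0, 9, (2 * 12,)))
    print(logits.shape, float(loss))
```

The softmax of the paper is applied implicitly here through the cross-entropy loss during training.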
Methods for VISUAL-centric attributes
Recent deep-learning segmentation methods have proven effective at localizing objects based on their visual cues. We propose using a state-of-the-art segmentation method in addition to a few pretrained models for VISUAL attributes.
Pretrained Models (PTM) We use pretrained methods to classify three classes typically encountered in popular visual scene datasets. (i) face: We use bounding box face detections obtained using the Google Cloud Vision API. (ii) person: We use the state-of-the-art segmentation method FCIS [32] to predict pixels of COCO class "person" (iii) lic plate: We use OpenALPR [37] to detect license plates in images.
FCIS: We retrain all layers of the FCIS model [32] for our task and dataset. We train for 30 epochs with learning rate 0.0005 over the trainval examples and their horizontally mirrored versions. We fine-tune from the model provided by the authors, which was trained for segmentation on MS-COCO [33]. We obtained the best results using the default hyper-parameters.
Methods for MULTIMODAL-centric attributes
Recognizing MULTIMODAL attributes (e.g., driv lic, receipt) requires reasoning over both the visual and textual domains. We treat this as a classification problem due to: (i) the limited number of training examples (∼125 per multimodal attribute), and (ii) the large size of these attributes (∼45% of the image area), which provides only ∼10% utility even after GT-based redaction (Section 4.2).
Weakly Supervised Labeling (WSL): We propose learning a multilabel classifier based on visual-only (WSL:I) and visual+text content (WSL:I+T). If the class probability of an attribute is beyond a certain threshold, we predict all pixels in the image for that attribute. WSL:I is the same approach used in [38]: a multilabel ResNet-50 [17] classifier. In the case of WSL:I+T, we obtain a multimodal embedding by concatenating visual and text representations. We obtain the visual representation (identical to WSL:I) with a ResNet-50 architecture. We obtain the text representation by encoding all words in the image. We tried three such variants: (i) Bag-of-Words (BOW) encoding: words in the image are represented as a one-hot vector over a vocabulary of size 1,751. (ii) LSTM encoding: identical to SEQ, we encode the word sequence using an LSTM with 128 hidden units and use the output of the last cell as the text representation. (iii) Conv1D encoding: we use 1D convolutions to encode the word sequence (typically used for sentence classification tasks [25]), followed by max pooling to obtain a fixed-size text representation. In all three cases, we reduce the text representation to 512-d using an FC+ReLU layer. We report BOW encoding results for WSL:I+T in the rest of the paper, since this provided the best results.

Salient Object Prediction (SAL): Using WSL:I+T as the base classifier, we use the salient object as an approximation of the attribute's location. We obtain class-agnostic saliency using a DeepLab-v2 ResNet [7,21].
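A minimal sketch of a WSL:I+T-style classifier with the BOW text branch is given below; for brevity the image feature is treated as a precomputed input rather than training the full ResNet-50 end to end, and all layer sizes beyond those stated above are assumptions.

```python
import torch
import torch.nn as nn

class WSLImageText(nn.Module):
    """Multilabel classifier over MULTIMODAL attributes from a visual feature
    and a bag-of-words encoding of the OCR'd text."""
    def __init__(self, n_attrs=7, img_dim=2048, vocab_size=1751, text_dim=512):
        super().__init__()
        self.text_enc = nn.Sequential(nn.Linear(vocab_size, text_dim), nn.ReLU())
        self.head = nn.Linear(img_dim + text_dim, n_attrs)

    def forward(self, img_feat, bow):
        z = torch.cat([img_feat, self.text_enc(bow)], dim=1)
        return self.head(z)            # logits; apply sigmoid for probabilities

def predict_masks(logits, image_shape, threshold=0.5):
    """If an attribute's probability exceeds the threshold,
    predict all pixels of the image for that attribute."""
    probs = torch.sigmoid(logits)
    h, w = image_shape
    return {k: torch.ones(h, w) * p for k, p in enumerate(probs[0]) if p > threshold}

if __name__ == "__main__":
    model = WSLImageText()
    logits = model(torch.randn(1, 2048), torch.rand(1, 1751))
    loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (1, 7)).float())
    print(logits.shape, float(loss), len(predict_masks(logits, (240, 320))))
```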
Weakly Supervised Iterative Refinement (IR)
For document-like objects, the text regions tend to be densely clustered in images. Hence, after classification using WSL:I+T, we refine the convex hull of the text regions using DenseCRF [30] to "spill into" the document region.
Experiments and Discussion
In this section, we discuss segmentation performance (Section 6.1) and privacy-vs-utility performance (Section 6.2) of our proposed methods.
Evaluating Segmentation Performance
We now evaluate the methods proposed in Section 5 in terms of their segmentation performance using Mean Average Precision (mAP), as suggested in Pascal VOC [13]. This is calculated by averaging the area under the precision-recall curves over the privacy attributes. We use 50 thresholds uniformly spaced between 0 and 1 to obtain each curve. At each threshold t, we: (i) binarize the prediction score masks per image by thresholding pixel-level scores at t, and (ii) aggregate pixel-level TP, FP, FN counts (normalized by image size) per attribute over all images to obtain attribute-level precision and recall. We ignore GT masks containing fewer than 25² pixels during evaluation (<1% of GT masks). Table 1 presents the quantitative results of the proposed methods on the test set. Qualitative results in Figure 6 are based on an ENSEMBLE, using predictions of SEQ for TEXTUAL, FCIS for VISUAL, and WSL:I+T for MULTIMODAL attributes. We generally observe that NN underperforms simple baselines across all modalities, highlighting the difficulty and diversity presented by the dataset.
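As a reference, the following is a minimal sketch of this per-attribute evaluation (our own simplification; the rule ignoring GT masks under 25² pixels is omitted). mAP is then the mean of these per-attribute APs.

```python
import numpy as np

def average_precision(preds, gts, n_thresh=50):
    """preds, gts: lists of (H, W) arrays for one attribute; preds hold scores
    in [0, 1], gts are binary masks. Returns the area under the P-R curve."""
    thresholds = np.linspace(0.0, 1.0, n_thresh)
    precisions, recalls = [], []
    for t in thresholds:
        tp = fp = fn = 0.0
        for score, gt in zip(preds, gts):
            norm = score.size                    # normalize counts by image size
            pred = score >= t
            tp += np.logical_and(pred, gt).sum() / norm
            fp += np.logical_and(pred, ~gt).sum() / norm
            fn += np.logical_and(~pred, gt).sum() / norm
        precisions.append(tp / (tp + fp) if tp + fp > 0 else 1.0)
        recalls.append(tp / (tp + fn) if tp + fn > 0 else 0.0)
    # sort by recall and make precision monotonically decreasing (VOC-style)
    order = np.argsort(recalls)
    r = np.asarray(recalls)[order]
    p = np.maximum.accumulate(np.asarray(precisions)[order][::-1])[::-1]
    return float(np.sum(np.diff(r) * (p[1:] + p[:-1]) / 2.0))   # trapezoidal area

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = np.zeros((50, 50), bool); gt[10:30, 10:30] = True
    score = gt * 0.9 + rng.random((50, 50)) * 0.1   # near-perfect predictor
    print(average_precision([score], [gt]))          # close to 1.0
```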
TEXTUAL: We observe: (i) Patterns, frequency and context: SEQ achieves the best overall score, justifying the need for specialized methods to tackle text attributes. It is reasonably effective at detecting datetime (Fig. 6a), emailadd and phone no due to the patterns they often display. We additionally find that SEQ detects attributes which often require prior knowledge (e.g., name, location). The common success modes in such cases are when the words are popular entities (e.g., "Berlin" in Fig. 6a) or have discriminative visual/textual context (e.g., detecting home addr in Fig. 6b).
(ii) Challenges imposed by text detections: PROXY represents an upper bound for our textual methods. Its low scores highlight the difficulty of text detection, and this is especially severe for scene and handwritten text, a frequent case in our dataset (e.g., Fig. 6e,f). Moreover, our text detections do not perfectly overlap with the ground-truth annotations. Since text regions are small, we additionally pay a high performance penalty even for correct detections (e.g., IoU=0.42 for home addr in Fig. 6b). Furthermore, even in the case of correct text detections, we observe failures in OCR, which affects the quality of input for the dependent methods. This can be observed in the under-performance of NER, which is typically very effective on clean, sanitized text.
VISUAL: We observe: (i) The unreasonable effectiveness of FCIS: We obtain the highest score in the VISUAL category using FCIS. We find FCIS to be highly effective at localizing visual objects commonly encountered in other datasets (e.g., person, face). Moreover, it achieves reasonable performance even when there is little training data (e.g., only <60 examples of fingerpr and phy disb, see Fig. 6d). The common failure modes are either difficult examples (e.g., face in Fig. 6e) or uncommon visual objects (e.g., signtr in Fig. 6b). (ii) Comparison with baselines: PTM achieves comparable results for person, due to the Flickr images used to train both models. However, it underperforms for face (detections are not precise enough) and lic plate (poor performance in the wild).

MULTIMODAL: We observe: (i) WSL:I is a good simple baseline: WSL:I achieves reasonable performance (45.4) for multimodal attributes compared to the other modes (1.5 in text and 20.8 in visual), although the prediction spans the entire image. This is attributed to the large size of MULTIMODAL instances found in images. (ii) Multimodal reasoning helps: We find WSL:I+T improves performance over WSL:I by 20%, justifying the need for methods that perform multimodal reasoning to detect these attributes. This is particularly necessary to disambiguate similar-looking visual objects (e.g., the card-like objects driv lic and stud id, Fig. 6b). (iii) Precision-recall trade-off: We find that the precision of WSL:I+T can be improved for some attributes (e.g., cr card, ticket) by IR, which, instead of the entire image, predicts only the smoothed hull of the text regions. We observe FCIS achieves the best overall score in this category due to its higher precision.
Privacy vs. Utility Trade-off by Automatic Redaction
In the previous section, we evaluated our approaches w.r.t. segmentation quality. Now, we ask how effective are redactions based on our proposed methods in terms of privacy and utility?
To answer this, we once again run the user study in Section 4.2 on AMT, but now by redacting proposed pixels of our automated method over those exact images. To vary the number of predicted pixels, we vary the threshold to binarize the predicted score masks over attributes. As a result, we obtain 6-8 redacted versions for each of the 144 images (24 attributes × 6 images). Each image is labeled by 5 unique qualified AMT workers.
Results: We obtain privacy-utility scores for each threshold and plot them as a curve in Figure 7. We also plot the scores obtained for different dilations of the redacted ground-truth annotated regions. It should be noted that perfect redactions are unavailable to us, and we use these ground-truth based redactions (or manual redactions) only as a reference. We evaluate performance by calculating the area under the curve (AUC). We observe: (i) Overall, our method obtains a privacy-utility score of 65%, a relative performance of 83% compared to redactions using the ground-truth annotations from the dataset. (ii) MULTIMODAL attributes present a hard choice between privacy and utility, as these regions are often large. We find the slightly lower AUC(gt) to be an artifact of sampling.
(iii) Although we obtain a low mAP for TEXTUAL attributes, we observe an 81% privacy-utility score. This occurs because we can now over-predict regions, exhibiting low precision and high recall w.r.t. segmentation, yet retaining high utility due to their small size. Consequently, we can predict more text pixels "for free". Based on these observations, we find that the automatic redactions of our models trained on the proposed dataset show highly promising results: they closely mimic the performance achieved by redacting ground-truth regions across a broad range of private information.
Conclusion
We proposed a redaction-by-segmentation approach to aid users in selectively sanitizing images of private content. To learn automated approaches for this task, we proposed the first sizable visual redactions dataset, containing images with pixel-level annotations of 24 privacy attributes. By conducting a user study, we showed that redacting ground-truth regions in this dataset provides near-perfect privacy while preserving the image's utility. We then presented automated approaches to segment privacy attributes in images and observed that we can already reasonably segment these attributes. By performing a privacy-vs-utility evaluation of our automated approach, we achieved a highly encouraging 83% performance w.r.t. GT-based redactions.
Appendices A. Contents
The appendix contains:
• Detailed descriptions, examples and auxiliary analysis of the 24 privacy attributes discussed in Section 3.2
• Precision-Recall curves for the methods discussed in Table 1
• Qualitative results to supplement Figure 6
• Implementation details and qualitative results to supplement Section 4 and Section 6.2
B. Privacy Attributes
In this section, we provide detailed descriptions and examples of the 24 Privacy Attributes used in the proposed dataset. We also present a brief supplementary analysis of the conditional co-occurrence of these attributes in the dataset.

Detailed Descriptions and Instructions: In Figures 10-12, we provide detailed descriptions and examples of the 24 privacy attributes grouped by category, as discussed in Section 3.2. The descriptions briefly summarize the instructions provided to the annotators. The figures display instance-agnostic ground-truth annotations of the respective attributes. Ground-truth annotations are stored in a format similar to MS-COCO [33].
TEXTUAL, signtr and handwrit attributes are annotated using 4-sided polygons or bounding boxes. For TEXTUAL attributes, only Latin-based words understandable by English speakers are annotated. For the remaining attributes, the objects are enclosed in a polygon; in case of severe occlusion, an object is enclosed using multiple polygons.

Auxiliary Privacy Attribute Analysis: Figure 8 represents the conditional co-occurrence matrix (i.e., the probability that attribute X occurs in an image containing attribute Y) of the 24 privacy attributes in images. The privacy attributes along the rows and columns are sorted by category. From this plot, we find: (i) images of MULTIMODAL attributes often appear alongside a variety of TEXTUAL attributes (bottom-left block of the matrix); (ii) however, the contrary is not true: TEXTUAL attributes do not frequently occur only in the presence of MULTIMODAL attributes (top-right block of the matrix); (iii) person and face occur frequently alongside other VISUAL attributes, as they are central to many common visual scenes (central block of the matrix).
C. Precision Recall Curves
The Precision-Recall curves of the methods proposed in Table 1 are presented in Figure 9. The first column represents the averaged category performance. Following [13], we correct the curves to have monotonically decreasing precision by setting the precision at recall r to be the highest precision at any recall r' ≥ r. Moreover, the precision at r = 0 is extrapolated as the highest precision at any r ≥ 0. We calculate Average Precision as the area under this curve using the trapezoidal rule.
Auxiliary Discussion: From the PR curves in Figure 9, we observe: (i) the under-performance of NN indicates the diversity and difficulty of the dataset. (ii) TEXTUAL: we find the best performance using SEQ; PROXY denotes a rough upper bound. We find SEQ obtains slightly higher recall as it predicts overlapping masks. (iii) VISUAL: we find FCIS achieves the best performance. For person, we find a similar curve with PTM, since both share the same architecture and were trained on images from the same domain (Flickr). (iv) MULTIMODAL: FCIS achieves slightly higher category performance compared to the others. WSL:I+T generally achieves better recall across all attributes. IR/SAL improve the precision of WSL:I+T by trading off recall.
D. Qualitative Results for Segmentation
We present qualitative results in Figure 16 to supplement the results in Figure 6 and the discussion in Section 6.1. We present the qualitative results per attribute, sorted by their Intersection over Union (IoU) scores. Hence, figures at the top represent common success modes and figures at the bottom represent common failure modes. These results were obtained using ENSEMBLE by choosing the operating point with the highest IoU score per mode.

E. Privacy vs. Utility Trade-off
In this section, we provide implementation details for the redaction scaling strategy used for ground-truth redactions (Section 4.1) and predicted redactions (Section 6.1). In both cases, we black out the relevant pixels. For phy disb, we black out a bounding-box region, since we observed that the silhouette is a strong visual indicator of the attribute. In addition, we provide qualitative results for these strategies in Figures 17 and 18 to supplement Figure 3.
Scaling Ground-truth Redactions
We scale ground-truth redactions using superpixels so that the scaled masks roughly adhere to edges and object boundaries. The downscaled image is first represented using 3,000-5,000 superpixels generated using SLIC0 [1]. We represent the ground-truth binary mask per attribute using a 0-1 labeling over the graph of superpixels, where 1 indicates that the node (superpixel) belongs to the redaction. To dilate, we iteratively add 0-nodes with the largest number of adjacent 1-nodes. To erode, we perform the same operation with an inverted ground-truth binary mask. We parameterize the scaling using s ∈ S (where S = {0.0, 0.25, 0.5, 1.0, 2.0, 4.0, ∞}), representing the dilation/erosion factor of the ground-truth mask.
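A minimal sketch of the dilation half of this procedure is given below, using skimage's SLIC with the zero-parameter variant as a stand-in for SLIC0; the adjacency construction, initialization rule and stopping criterion are simplified assumptions (erosion would apply the same loop to the inverted mask).

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_dilate(image, gt_mask, scale, n_segments=3000):
    """Grow (scale > 1) a ground-truth mask along superpixel boundaries until it
    covers roughly `scale` times the original number of pixels."""
    labels = slic(image, n_segments=n_segments, slic_zero=True)   # SLIC0 variant
    n = int(labels.max()) + 1
    # superpixels start 'on' if most of their pixels lie in the GT mask
    on = np.zeros(n, bool)
    for sp in range(n):
        sel = labels == sp
        on[sp] = sel.any() and gt_mask[sel].mean() > 0.5
    # adjacency from horizontally / vertically neighbouring pixels
    adj = [set() for _ in range(n)]
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        for u, v in zip(a.ravel(), b.ravel()):
            if u != v:
                adj[u].add(v); adj[v].add(u)
    target = scale * gt_mask.sum()
    sizes = np.bincount(labels.ravel(), minlength=n)
    while sizes[on].sum() < target and not on.all():
        # turn on the 'off' superpixel with the most 'on' neighbours
        cand = [(sum(on[list(adj[i])]), i) for i in range(n) if not on[i]]
        on[max(cand)[1]] = True
    return on[labels]            # pixel-level redaction mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((80, 80, 3))
    gt = np.zeros((80, 80), bool); gt[30:50, 30:50] = True
    mask = superpixel_dilate(img, gt, scale=2.0, n_segments=200)
    print(int(gt.sum()), int(mask.sum()))
```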
Scaling Predicted Redactions
From the ENSEMBLE method, we obtain softmax probability score masks R of size w×h×k for the k attributes of each image. We compute multiple thresholds per attribute to binarize the score masks, such that at threshold t ∈ T, t times the number of ground-truth attribute pixels are redacted over the entire test set of images. We use T = {0.25, 0.5, 1.0, 2.0, 4.0, 8.0}. For TEXTUAL attributes, we use an additional threshold such that all detected text is redacted.
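The per-attribute threshold selection can be sketched as follows, assuming score and ground-truth masks for one attribute over the test set are available (the data layout is an assumption):

```python
import numpy as np

def thresholds_for_budgets(score_masks, gt_masks, budgets=(0.25, 0.5, 1, 2, 4, 8)):
    """score_masks, gt_masks: lists of (H, W) arrays for one attribute over the
    test set. For each budget t, return the score threshold at which roughly
    t times the total number of ground-truth pixels would be redacted."""
    scores = np.concatenate([s.ravel() for s in score_masks])
    n_gt = sum(int(g.sum()) for g in gt_masks)
    scores_sorted = np.sort(scores)[::-1]           # descending scores
    out = {}
    for t in budgets:
        k = min(int(round(t * n_gt)), len(scores_sorted))
        out[t] = float(scores_sorted[k - 1]) if k > 0 else 1.0
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gts = [np.zeros((40, 40), bool) for _ in range(3)]
    for g in gts:
        g[5:15, 5:15] = True
    preds = [rng.random((40, 40)) for _ in range(3)]
    print(thresholds_for_budgets(preds, gts))
```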
Qualitative Results and Auxiliary Discussion: Figures 17 and 18 display examples of common success and failure modes w.r.t. the attribute mentioned. All images in these figures are from the test set. P and U indicate the privacy and utility scores, which are simply the percentage of the ∼5 AMT workers who agree with the privacy and utility questions, respectively. A high P indicates the image is private w.r.t. attribute a, and a high U indicates the image is intelligible. In these figures, we find: (i) for small private regions, we can redact more pixels without affecting utility (Figure 17, location and face); (ii) MULTIMODAL attributes often present a hard choice between privacy and utility (Figure 17, mail); (iii) failures of text detection or OCR on handwritten text are a common failure mode for automatic redactions (Figure 18, home addr); (iv) some difficult MULTIMODAL attributes (Figure 18, stud id) can be detected only at high thresholds, entailing complete redaction of many false-positive images too; (v) Figure 18, fingerpr, represents one of the failure cases for ground-truth redaction discussed in Section 4.3, where AMT workers overlook details in the question. In this particular case, the workers were asked to only consider fingerprints from fingertips; however, even at s = 1, where the fingertips are redacted, many workers incorrectly answer that fingerprints are visible.
Figure 1: Users often share images containing private information on the Internet, which poses a privacy risk. For example, in the top row, the user might unintentionally leak their fingerprint. We present methods to aid users in automatically redacting such content by proposing privacy-sensitive regions in images.

Figure 2: Examples and distribution of privacy attributes in the dataset.

Five expert annotators draw polygons around instances based on an instruction manual. A summary of instructions, definitions of attributes and examples is provided in the supplementary material.

Figure 3: Dilation/Erosion of attribute fingerprint.

Figure 4: Privacy and Utility using various scales of ground-truth redaction over (top row) modes and (bottom row) sizes.

Figure 5: Architecture used to perform Sequence Labeling.

Figure 6: Qualitative examples from our method.

Figure 7: Comparing redactions using predicted and ground-truth segmentations.

Figure 9: Precision-Recall curves for the methods in Table 1.

Figure 13: Qualitative results per attribute. In each pair of images, the top is the ground-truth segmentation and the bottom is the prediction. Pairs of images in each column are sorted by IoU score (high to low).

Figure 14: Qualitative results per attribute (same format as Figure 13).

Figure 15: Qualitative results per attribute (same format as Figure 13).

Figure 16: Qualitative results per attribute (same format as Figure 13).

Figure 17: Common Success Modes of Automatic Redactions. GT-based are ground-truth regions scaled and redacted as discussed previously. Predicted are automatic redactions generated by the ENSEMBLE method. P indicates the privacy score and U the utility score; in both cases, higher is better. Scores are indicated in green in case of majority agreement and red otherwise.

Figure 18: Common Failure Modes of Automatic Redactions (same conventions as Figure 17).
TEXTUAL
Method | mAP | location | home addr | name | birth dt | phone no | landmark | datetime | emailadd
PROXY | 45.0 | 31.7 | 37.8 | 48.7 | 52.5 | 52.6 | 33.6 | 52.4 | 50.8
NN | 0.5 | 0.2 | 0.4 | 0.1 | 0.6 | 0.0 | 2.0 | 0.5 | 0.0
NER | 3.0 | 6.0 | 1.7 | 4.4 | 0.5 | 0.0 | 0.5 | 10.9 | 0.0
RULES | 4.2 | 3.1 | 0.5 | 2.8 | 0.6 | 1.4 | 1.2 | 6.4 | 17.5
FCIS | 7.2 | 4.3 | 0.2 | 9.8 | 0.1 | 2.5 | 27.6 | 12.9 | 0.0
SEQ | 26.8 | 18.4 | 19.4 | 19.1 | 25.1 | 45.8 | 13.9 | 33.4 | 38.9

VISUAL
Method | mAP | face | lic plate | person | nudity | handwrit | phy disb | med hist | fingerpr | signtr
NN | 13.5 | 8.4 | 11.4 | 33.1 | 6.0 | 32.1 | 11.4 | 7.4 | 11.7 | 0.1
WSL:I | 20.8 | 5.0 | 4.3 | 30.3 | 16.4 | 49.9 | 13.7 | 37.7 | 28.8 | 1.3
PTM | 16.4 | 47.6 | 11.6 | 88.3 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
FCIS | 68.3 | 83.8 | 77.9 | 87.0 | 69.7 | 80.7 | 59.0 | 45.8 | 68.1 | 42.6

MULTIMODAL
Method | mAP | cr card | passport | driv lic | stud id | mail | receipt | ticket
NN | 21.5 | 9.8 | 44.2 | 17.9 | 13.0 | 15.7 | 16.8 | 33.3
WSL:I+T | 56.2 | 29.7 | 67.4 | 82.4 | 58.4 | 43.3 | 54.5 | 57.8
SAL | 36.2 | 55.9 | 37.2 | 23.8 | 30.4 | 8.1 | 42.5 | 55.1
IR | 53.6 | 41.7 | 51.2 | 67.8 | 48.1 | 36.9 | 57.2 | 72.5
FCIS | 59.2 | 53.2 | 76.3 | 66.5 | 50.3 | 33.1 | 59.4 | 75.4

Table 1: Quantitative results of our methods for segmenting privacy regions. Bold numbers denote the highest and italicized numbers the second-highest scores in the columns.
Figure 10: Descriptions and examples of TEXTUAL privacy attributes. For readability, we display images where attributes are salient. (The Example column of the original figure contains images and is omitted here.)

Location (location): Region of the image depicting where the photographer might have visited. Includes the following cases: street signs, addresses, GPS co-ordinates, flags.
Home Address (home addr): Someone's home address based on the context, such as on an identity card or mail.
Name (name): Someone's name such as on a name-tag or identity card. Any recognizable name in Latin-based text is included, including that of popular figures.
Birth Date (birth dt): Someone's date of birth (day, month and/or year) determined based on context, such as on identity cards or passports.
Phone no. (phone no): A syntactically-correct phone number (either personal or business), determined either based on context or pattern.
Landmark (landmark): Name of a store, restaurant or a business such as on a store front or a receipt.
Date/Time (datetime): A date or time, such as revealing a time-frame when the photograph might have been captured.
Email address (emailadd): A syntactically-correct email address.

Figure 11: Descriptions and examples of VISUAL attributes. For readability, we display images where attributes are salient.

Face (face): Region indicating a person's face, containing all visible facial landmarks discussed in [26]. Regions occluded by hair or masks are excluded.
License Plate (lic plate): Region containing a license plate or vehicle registration or identification number in any language/country. We consider any motorized vehicle (e.g. cars, motorbike, train).
Person (person): Region indicating any part of a person or their reflections. Includes the person's body along with wearables (e.g. hats, goggles, backpacks). Excludes objects the person is holding (e.g. shopping bag, guitar).
Nudity (nudity): Torso and thigh region of a person, if skin is completely/partially visible in this region.
Handwriting (handwrit): Someone's handwritten text in any language.
Physical Disability (phy disb): Region indicating either a) special equipment used by a physically disabled person (e.g. wheelchair) or b) the region around limbs, if limbs are absent.
Medical History (med hist): Any pharmaceutical consumable such as pills, capsules or syrups (including their containers and packaging).
Fingerprint (fingerpr): Someone's finger-tips if ridges are clearly visible upon zooming in, or fingerprint impressions on any surface.
Signature (signtr): Region indicating someone's signature.

Figure 12: Descriptions and examples of MULTIMODAL attributes. For readability, we display images where attributes are salient.

Credit Card (cr card): Either front, rear or any details of a credit card or similar monetary instrument.
Passport (passport): Any page (including cover) of a passport.
Drivers License (driv lic): Front, rear or written details of a Drivers License or driving permit.
Student ID (stud id): Front or rear of a student identity card.
Mail (mail): Mail including hand-written letters, post-cards or packages.
Receipt (receipt): A document indicating a financial transaction, such as receipts or checks.
Ticket (ticket): A ticket, such as for travel, a concert or a sports match.
Acknowledgement: This research was partially supported by the German Research Foundation (DFG CRC 1223). We thank Anna Khoreva and Alina Dima for feedback on the paper.
[1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. TPAMI, 2012.
[2] A. Acquisti and R. Gross. Imagined communities: Awareness, information sharing, and privacy on the Facebook. In PET, 2006.
[3] C. Bauckhage, A. Jahanbekam, and C. Thurau. Age recognition in the wild. In ICPR, 2010.
[4] K. Brkic, I. Sikiric, T. Hrkac, and Z. Kalafatic. I know that person: Generative full body and face de-identification of people in images. In CVPRW, 2017.
[5] V. T. Chakaravarthy, H. Gupta, P. Roy, and M. K. Mohania. Efficient techniques for document sanitization. In CIKM, 2008.
[6] S.-L. Chang, L.-S. Chen, Y.-C. Chung, and S.-W. Chen. Automatic license plate recognition. IEEE Trans. Intelligent Transportation Systems, 2004.
[7] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015.
[8] R. Chow, P. Golle, and J. Staddon. Detecting privacy leaks using corpus-based association rules. In KDD, 2008.
[9] R. Chow, I. Oberst, and J. Staddon. Sanitization's slippery slope: the design and study of a text revision assistant. In SOUPS, 2009.
[10] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The Cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
[11] B. Debatin, J. P. Lovejoy, A.-K. Horn, and B. N. Hughes. Facebook and online privacy: Attitudes, behaviors, and unintended consequences. Journal of Computer-Mediated Communication, 2009.
[12] A. Dutta, A. Gupta, and A. Zissermann. VGG image annotator (VIA), 2016. http://www.robots.ox.ac.uk/~vgg/software/via/ Accessed: 2017-11-08.
[13] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The Pascal visual object classes (VOC) challenge. IJCV, 2010.
[14] J. R. Finkel, T. Grenager, and C. D. Manning. Incorporating non-local information into information extraction systems by Gibbs sampling. In ACL, 2005.
[15] I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.
[16] E. T. Hassan, R. Hasan, P. Shaffer, D. Crandall, and A. Kapadia. Cartooning for enhanced privacy in lifelogging and streaming videos. In CVPRW, 2017.
[17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[18] J. Hendrik Metzen, M. Chaithanya Kumar, T. Brox, and V. Fischer. Universal adversarial perturbations against semantic image segmentation. In ICCV, 2017.
[19] Z. Huang, W. Xu, and K. Yu. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991, 2015.
[20] S. Joon Oh, R. Benenson, M. Fritz, and B. Schiele. Faceless person recognition; privacy implications in social media. In ECCV, 2016.
[21] S. Joon Oh, R. Benenson, A. Khoreva, Z. Akata, M. Fritz, and B. Schiele. Exploiting saliency for object segmentation from image level labels. In CVPR, 2017.
[22] S. Joon Oh, M. Fritz, and B. Schiele. Adversarial image perturbation for privacy protection - a game theory perspective. In ICCV, 2017.
[23] A. Kae, K. Sohn, H. Lee, and E. Learned-Miller. Augmenting CRFs with Boltzmann machine shape priors for image labeling. In CVPR, 2013.
[24] K. Khan, M. Mauro, and R. Leonardi. Multi-class semantic segmentation of faces. In ICIP, 2015.
[25] Y. Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014.
[26] M. Koestinger, P. Wohlhart, P. M. Roth, and H. Bischof. Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization. In ICCVW, 2011.
[27] P. Korshunov, C. Araimo, F. D. Simone, C. Velardo, J.-L. Dugelay, and T. Ebrahimi. Subjective study of privacy filters in video surveillance. MMSP, 2012.
[28] P. Korshunov and T. Ebrahimi. PEViD: privacy evaluation video dataset. In SPIE, 2013.
[29] P. Korshunov and T. Ebrahimi. Using warping for privacy protection in video surveillance. DSP, 2013.
[30] P. Krähenbühl and V. Koltun. Efficient inference in fully connected CRFs with Gaussian edge potentials. In NIPS, 2011.
[31] G. Lample, M. Ballesteros, S. Subramanian, K. Kawakami, and C. Dyer. Neural architectures for named entity recognition. In NAACL, 2016.
[32] Y. Li, H. Qi, J. Dai, X. Ji, and Y. Wei. Fully convolutional instance-aware semantic segmentation. In CVPR, 2017.
[33] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[34] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[35] X. Ma and E. Hovy. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In ACL, 2016.
[36] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. In CVPR, 2017.
[37] OpenALPR. https://github.com/openalpr/openalpr Accessed: 2017-11-08.
[38] T. Orekondy, B. Schiele, and M. Fritz. Towards a visual privacy advisor: Understanding and predicting privacy risks in images. In ICCV, 2017.
[39] J. Pennington, R. Socher, and C. D. Manning. GloVe: Global vectors for word representation. In EMNLP, 2014.
[40] N. Raval, A. Machanavajjhala, and L. P. Cox. Protecting visual secrets using adversarial nets. In CVPRW, 2017.
[41] D. Sánchez and M. Batet. Toward sensitive document release with privacy guarantees. Engineering Applications of AI, 2017.
[42] D. Sánchez, M. Batet, and A. Viejo. Detecting sensitive information from textual documents: An information-theoretic approach. In MDAI, 2012.
[43] D. Sánchez, M. Batet, and A. Viejo. Automatic general-purpose sanitization of textual documents. IEEE Transactions on Information Forensics and Security, 2013.
[44] M. Shao, L. Li, and Y. Fu. What do you do? Occupation recognition in a photo via social context. In ICCV, 2013.
[45] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In ACM CCS, 2016.
[46] Q. Sun, B. Schiele, and M. Fritz. A domain based approach to social relation recognition. In CVPR, 2017.
[47] X. Sun, P. Wu, and S. C. H. Hoi. Face detection using deep learning: An improved Faster RCNN approach. CoRR, 2017.
[48] A. Tonge and C. Caragea. Privacy prediction of images shared on social media sites using deep features. arXiv preprint arXiv:1510.08583, 2015.
[49] P. A. Viola and M. J. Jones. Robust real-time face detection. IJCV, 2001.
[50] G. Wang, A. C. Gallagher, J. Luo, and D. A. Forsyth. Seeing people in social context: Recognizing people and social relationships. In ECCV, 2010.
[51] E. S. Xioufis, S. Papadopoulos, A. Popescu, and Y. Kompatsiaris. Personalized privacy-aware image classification. In ICMR, 2016.
[52] S. Zerr, S. Siersdorfer, J. Hare, and E. Demidova. I know what you did last summer!: Privacy-aware image classification and search. In ACM SIGIR, 2012.
[53] H. Zhang, W. Jia, X. He, and Q. Wu. Learning-based license plate detection using global and local features. In International Conference on Pattern Recognition (ICPR), 2006.
[54] W. Zhou, H. Li, Y. Lu, and Q. Tian. Principal visual word discovery for automatic license plate detection. IEEE Transactions on Image Processing, 2012.
| [
"https://github.com/openalpr/openalpr"
]
|
[
"Re-thinking Spatial Confounding in Spatial Linear Mixed Models",
"Re-thinking Spatial Confounding in Spatial Linear Mixed Models"
]
| [
"Kori Khan \nDepartment of Statistics\nIowa State University\n\n",
"Candace Berret \nDepartment of Statistics\nBrigham Young University\n\n"
]
| [
"Department of Statistics\nIowa State University\n",
"Department of Statistics\nBrigham Young University\n"
]
| []
| In the last two decades, considerable research has been devoted to a phenomenon known as spatial confounding. Spatial confounding is thought to occur when there is collinearity between a covariate and the random effect in a spatial regression model. This collinearity is considered highly problematic when the inferential goal is est-imating regression coefficients, and various methodologies have been proposed to "alleviate" it. Recently, it has become apparent that many of these methodologies are flawed, yet the field continues to expand. In this paper, we offer the first attempt to synthesize work in the field of spatial confounding. We propose that there are at least two distinct phenomena currently conflated with the term spatial confounding. We refer to these as the analysis model and the data generation types of spatial confounding. We show that these two issues can lead to contradicting conclusions about whether spatial confounding exists and whether methods to alleviate it will improve inference. Our results also illustrate that in most cases, traditional spatial linear mixed models do help to improve inference of regression coefficients. Drawing on the insights gained, we offer a path forward for research in spatial confounding. | null | [
"https://export.arxiv.org/pdf/2301.05743v1.pdf"
]
| 255,942,779 | 2301.05743 | aedaef622b205573feb3482484e42afd5497e649 |
Re-thinking Spatial Confounding in Spatial Linear Mixed Models
Kori Khan
Department of Statistics
Iowa State University
Candace Berret
Department of Statistics
Brigham Young University
Re-thinking Spatial Confounding in Spatial Linear Mixed Models
In the last two decades, considerable research has been devoted to a phenomenon known as spatial confounding. Spatial confounding is thought to occur when there is collinearity between a covariate and the random effect in a spatial regression model. This collinearity is considered highly problematic when the inferential goal is est-imating regression coefficients, and various methodologies have been proposed to "alleviate" it. Recently, it has become apparent that many of these methodologies are flawed, yet the field continues to expand. In this paper, we offer the first attempt to synthesize work in the field of spatial confounding. We propose that there are at least two distinct phenomena currently conflated with the term spatial confounding. We refer to these as the analysis model and the data generation types of spatial confounding. We show that these two issues can lead to contradicting conclusions about whether spatial confounding exists and whether methods to alleviate it will improve inference. Our results also illustrate that in most cases, traditional spatial linear mixed models do help to improve inference of regression coefficients. Drawing on the insights gained, we offer a path forward for research in spatial confounding.
Introduction
In myriad applications, the use of standard regression models for spatially referenced data can result in spatial dependence in the residuals. For the better part of a century, the solution to this problem was to use a spatial regression model. In these models, a spatial random effect is introduced to account for the residual spatial dependence and thereby (theoretically) improve inference, whether the inferential goal was associational or predictive.
This practice continued, unchallenged, until about two decades ago. At that time, a phenomenon now known as spatial confounding was introduced by Reich et al. (2006) and Hodges and Reich (2010) (see also, Paciorek, 2010). Where, historically, spatial statisticians believed that incorporating spatial dependence with spatial regression models would improve inference, those interested in spatial confounding now suggest that incorporating spatial dependence with traditional models will distort inference. Originally focused on settings where the estimation of individual covariate effects was important, interest in spatial confounding has since expanded to other inferential focuses (e.g., Page et al., 2017; Papadogeorgou et al., 2019). Spatial confounding is typically described as occurring when there is multicollinearity between a spatially-referenced covariate and a spatial random effect. It is thought to be quite problematic. For example, Marques et al. (2022) states spatial confounding can lead to "severely biased" regression coefficients, Reich et al. (2006) claims that it can lead to "large changes" in these estimates, and Prates et al. (2019) argues that both the "sign and relevance of covariates can change drastically" in the face of spatial confounding.
Despite the fact that many of these claims are not empirically supported, research into spatial confounding and methods to alleviate it has exploded (e.g., Hanks et al., 2015). Recently, many of the methods designed to alleviate spatial confounding have been shown to lead to counterintuitive results by Khan and Calder (2020) and have even been classified as "bad statistical practice" (Zimmerman and Ver Hoef, 2021). Yet, efforts to study and alleviate spatial confounding continue without any attempt to address these observations, increasingly influencing new fields of study such as causal inference and even criminology (Reich et al., 2021; Kelling et al., 2021). In this paper, we (1) synthesize the existing body of work in spatial confounding, reviewing it in the context of historical teachings from spatial statistics; (2) characterize two distinct albeit related phenomena currently conflated with the term spatial confounding; and (3) show, through theoretical and simulation results, that these two issues can lead to contradicting conclusions about whether spatial confounding exists and whether methods to alleviate it will improve inference. Importantly, by examining spatial confounding in this way, these three key understandings show how ignoring the nuances of "spatial confounding" can lead to methodologies that distort inferences in the very settings for which they are designed to be used.
The rest of this paper is organized as follows: In Section 2, we introduce the analytical set-up for the rest of the paper. Using this set-up, we provide an overview of spatial confounding in the broader context of spatial statistics. Section 3 provides a framework for understanding the two types of spatial confounding and illustrates how current (and past) research fits into this scheme. It also explores how efforts to mitigate spatial confounding can be organized into this framework. Section 4 introduces theoretical results assessing the impact of both sources of spatial confounding on bias for a regression coefficient. In Section 5, we use simulation studies to explore settings that have been identified in the literature as situations in which spatial confounding will lead to increased bias in regression coefficient settings. We illustrate that in these cases, traditional spatial analysis models often outperform both non-spatial models and models designed to alleviate spatial confounding. Finally, in Section 6, we propose a clear path towards resolving the contradictions explored in this paper.
Background
We begin by introducing the analytical set-up that will be used throughout the rest of the paper. We then use it to provide a brief history of how spatial confounding became a topic of concern in spatial statistics research and explore where it has gone since.
Analytical Set-Up
Throughout this paper, we distinguish between a data generating model and an analysis model. The former is a model meant to approximate how the data likely arose; while the latter is a model used to analyze the observed data.
Spatial regression models are traditionally used when there is residual spatial dependence after accounting for measured variables. Residual spatial dependence is thought to be the result of either an unobserved, spatially varying variable or an unobserved spatial process (Waller and Gotway, 2004). To define a data generating model, we focus on the former as this most closely matches the intuition motivating efforts to mitigate spatial confounding (see e.g., Reich et al., 2006;Paciorek, 2010;Dupont et al., 2022;Page et al., 2017).
Specifically, we assume y_i is observed at location s_i ∈ R² for i = 1, ..., n and it can be modeled as follows:
Generating Model: y_i(s_i) = β_0 + β_x x_i(s_i) + β_z z_i(s_i) + ε_i,   (1)
where x(s) = (x_1(s_1), ..., x_n(s_n))^T and z(s) = (z_1(s_1), ..., z_n(s_n))^T are each univariate variables, ε = (ε_1, ..., ε_n)^T is the vector of errors with mean 0 and variance-covariance matrix σ²I, and φ = (β_0, β_x, β_z, σ²)^T is unknown. Throughout this paper, we assume that x(s) and y(s) are observed and z(s) is unobserved. We also assume that the primary inferential interest is in β_x. We consider three possible approaches to modeling the relationship between y(s) and x(s): 1) a non-spatial linear approach, 2) a "traditional" spatial approach, and 3) an "adjusted" spatial approach. Each framework is associated with one or more analysis models that can be fit to the observed y(s) and x(s).
Non-Spatial Analysis Model:
y_i(s_i) = β_0 + β_x^NS x_i(s_i) + ε_i   (2)
Spatial Analysis Model:
y_i(s_i) = β_0 + β_x^S x_i(s_i) + g(s_i) + ε_i   (3)

Adj. Spatial Analysis Model:
ỹ_i(s_i) = β_0 + β_x^AS x̃_i(s_i) + h(s_i) + ε_i   (4)
For (2)-(4), the ε_i are i.i.d. with mean 0 and unknown variance σ². The regression coefficients β_0, β_x^NS, β_x^S, and β_x^AS are unknown. We note that σ² and β_0 will vary based on the analysis model chosen. In other words, to be precise, we would use notation such as β_0^NS, β_0^S, and β_0^AS. As our primary interest is β_x, we refrain from doing so for the sake of simplicity.
The spatial random effects g(s) and h(s) are assumed to have mean zero and unknown, positive-definite variance-covariance matrices. We note that models relying on Gaussian Markov Random Fields (GMRFs) can be considered as special cases of this if the variance-covariance matrices are defined to be pseudoinverses of the singular precisions (Paciorek, 2009). The tildes over y (s) and x (s) in (4) reflect that they may be functions of the originally observed y (s) and x (s) respectively. In future sections, we distinguish between a realization of the variables x (s) and z (s) and the stochastic processes that could have generated such realizations. We use capital letters (e.g., X(s) and Z(s) ) to refer to stochastic processes and lower case letters to indicate a realization of the variables (e.g., x(s) and z(s) ). After this, we drop notation indicating the dependence on spatial location unless it is needed for clarity.
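To make the set-up concrete, the following is a minimal simulation sketch (our own illustration, not taken from the paper): data are generated from (1) with a spatially smooth unobserved z that is correlated with x, and the non-spatial analysis model (2) is fit by ordinary least squares, so the distortion in the estimate of β_x caused by omitting z can be seen directly. All parameter values and the exponential covariance are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
s = rng.uniform(0, 1, size=(n, 2))                   # locations in the unit square

# exponential covariance for spatially smooth fields
d = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=-1)
K = np.exp(-d / 0.2)
L = np.linalg.cholesky(K + 1e-8 * np.eye(n))

z = L @ rng.standard_normal(n)                       # unobserved spatial variable
x = 0.7 * z + 0.7 * (L @ rng.standard_normal(n))     # observed covariate, correlated with z

beta0, beta_x, beta_z, sigma = 1.0, 2.0, 1.5, 0.5
y = beta0 + beta_x * x + beta_z * z + sigma * rng.standard_normal(n)   # generating model (1)

# non-spatial analysis model (2): OLS of y on x
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print("true beta_x:", beta_x, " OLS estimate:", round(float(beta_hat[1]), 3))
# The gap reflects omitted-variable bias from the unobserved, spatially structured z.
```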
Spatial Models
When there is residual spatial dependence, the conventional wisdom in the spatial statistics literature is that a model which accounts for this spatial dependence will offer better inference than a model which does not account for it (Cressie, 1993; Bivand et al., 2008; Waller and Gotway, 2004). Historically, this view first appeared in the context of geostatistics and interpolation efforts. There, the goal was to improve predictions for the values of a stochastic process at unobserved locations (e.g., Wikle, 2010). In other words, β_x^S in (3) was merely a tool to de-trend the data, and the primary interest was often estimating the variance-covariance matrix of the spatial random effect. The idea that accounting for spatial dependence improves inference later inspired many popular spatial models proposed for areal data. These models were often developed with the goal, either implicit or explicit, of ensuring that β_x^S in (3) was "close" to β_x in (1) (Besag et al., 1991; Hodges and Reich, 2010; Clayton et al., 1993). In recent decades, the lines delineating methods for geostatistical data and areal data have become blurred with advancements in computing and the popular class of models proposed by Diggle et al. (1998). However, across analysis goals and types of data, the consensus continued to be that models accounting for spatial dependence should be preferred over models that did not account for spatial dependence.
Recently, however, this view has shifted. The challenge to the prevailing view arose in a line of research about a phenomenon now known as "spatial confounding". Clayton et al. (1993) is often referenced as the first article to describe spatial confounding. These authors noticed what they referred to as "confounding by location": estimates for regression coefficients changed when a spatial random effect was added to the analysis model. Clayton et al. (1993) interpreted this as a favorable change, one in which the estimates of the association between a response and an observed covariate were adjusted to account for an unobserved spatially-varying confounder (see also, Hodges and Reich, 2010). The modern conceptualization of spatial confounding arose in work by Reich et al. (2006) and Hodges and Reich (2010). These articles were the first to suggest that fitting spatial models could induce bias in the estimates of the regression coefficients and an "over-inflation" of the uncertainty associated with these estimates. Spatial confounding is almost always introduced as an issue of multicollinearity between a spatially varying covariate and a spatial random effect in a spatial analysis model (Reich et al., 2006; Hodges and Reich, 2010; Hefley et al., 2017; Reich et al., 2021; Dupont et al., 2022; Thaden and Kneib, 2018). This statement is often deemed sufficient to identify the phenomenon of spatial confounding. However, there is no consensus on a formal definition for spatial confounding. While there have been two previous efforts to formalize spatial confounding, both were definitions considering special cases of a broader phenomenon (Thaden and Kneib, 2018; Khan and Calder, 2020).
Spatial Confounding
Despite the ambiguity of spatial confounding as a concept, researchers using the term have developed shared expectations for the phenomenon. These expectations have, in turn, shaped multiple methods aimed at alleviating spatial confounding. Some researchers have noticed inconsistencies and contradictions arising in some of the conclusions reached by the spatial confounding literature. For example, Hanks et al. (2015) and Nobre et al. (2021) have both observed that a distortion in inference for β x can occur in the absence of stochastic dependence between x and z, contradicting some stated expectations for spatial confounding. These inconsistencies have largely remained unresolved even as research on spatial confounding has increasingly begun influencing other lines of work, such as causal inference (e.g., Reich et al., 2021;Papadogeorgou et al., 2019).
We propose that some of these contradictions arise because at least two distinct categories of issues are being studied by researchers in spatial confounding. Loosely speaking, we can think of these categories as encompassing a data generation phenomenon and an analysis model phenomenon. Importantly, once teased apart, these two issues can lead to different conclusions about whether spatial confounding is present and whether spatial analysis models should be adjusted.
Types of Spatial Confounding
As previously noted, spatial confounding is typically described as an issue of multicollinearity between a spatially varying covariate and a spatial random effect in a spatial analysis model. It appears, however, that researchers can disagree about the source of multicollinearity as well as what it means for a covariate to be spatially varying (in a problematic sense). In this section, we tease apart what we refer to as data generation spatial confounding and analysis model spatial confounding. In Figure 1, we summarize how the problematic relationships that are thought to cause spatial confounding differ by type of spatial confounding, and we elaborate on these relationships shortly. We emphasize that this framework is not currently in use. Instead, it is a novel attempt meant to help organize some of the existing conceptualizations of spatial confounding in the literature. Importantly, many articles can have references to both types of spatial confounding within them. In the following discussion, we sort works based on the primary focus of the article.
Figure 1: Primary Source of Spatial Confounding by Type. Analysis model spatial confounding: the problematic relationship is between x and $\hat{\Sigma}^{-1}_g$ in the spatial analysis model (3). Data generation spatial confounding: the problematic relationship is between X and Z (and/or x and z) in the generating model (1).

Analysis Model Spatial Confounding

Reich et al. (2006) and Hodges and Reich (2010) are the works that introduced the modern conceptualization of spatial confounding. These papers, and many of the works they inspired, focused on what we will refer to as analysis model spatial confounding. Research motivated by the analysis model issue often does not consider how y or x were generated. In other words, these works do not assume there is a missing z or a data generation model of the form (1) (Hodges and Reich, 2010). Instead, this conceptualization of spatial confounding focuses on the relationship between an observed x and the spatial random effect in a spatial analysis model (3). In this line of work, the problematic multicollinearity is understood to be between x and the low-frequency eigenvectors of the precision matrix $\hat{\Sigma}^{-1}_g$ of the spatial random effect. Similarly, x is considered spatially varying (in a problematic sense) if it is highly correlated with such a low-frequency eigenvector of $\hat{\Sigma}^{-1}_g$. We note that, in the spatial confounding literature, no one has precisely defined what it means for an eigenvector to be low-frequency (but see, Reich and Hodges, 2008). However, when displayed graphically, they tend to show spatial patterns where nearby things are more similar than others. Thus, the problematic relationship which causes analysis model spatial confounding is thought to be primarily between x and $\hat{\Sigma}^{-1}_g$, as summarized in Figure 1. There are several common beliefs underlying work focused on this conceptualization of spatial confounding. First, spatial confounding occurs as a result of fitting a spatial analysis model. While a distortion to inference can be expected in any spatial analysis model (Hodges and Reich, 2010), it is plausible that the degree of distortion may vary based on the particular analysis model chosen (see e.g., Hefley et al., 2017). Second, efforts should be taken to determine whether spatial confounding needs to be adjusted for in the analysis model. In this line of work, many authors acknowledge that it is not clear when spatial confounding needs to be accounted for (Prates et al., 2019; Hanks et al., 2015; Hui and Bondell, 2021). In other words, there is at least an implicit understanding that spatial analysis models may still be preferable over adjusted spatial analysis models at times. Finally, determining whether spatial confounding exists will involve studying characteristics of the observed data (in particular x) along with properties of the chosen analysis model.
Data Generation Spatial Confounding
In work that focuses on data generation spatial confounding, researchers often do assume that y is generated from a model of the form (1). In the context of our analytical set-up, the interest is typically on how the relationship between X and Z (or alternatively x and z) impacts inference on $\beta^S_x$ when a spatial analysis model of the form (3) is used to fit the data (Paciorek, 2010). In this line of work, spatial confounding is still often defined as an issue of multicollinearity (Dupont et al., 2022; Thaden and Kneib, 2018). However, the source of the multicollinearity and the definition of spatially varying (in the problematic sense) are not always clear. Paciorek (2010) has shaped much of the current work focused on data generation spatial confounding, as well as many of the most recent methods designed to alleviate spatial confounding (see e.g., Dupont et al., 2022; Marques et al., 2022). In that article and the many that followed, researchers make assumptions about the variables (x and z) or the stochastic processes that generated them (X and Z).
Researchers who focus on X and Z often assume that these processes are generated from spatial random fields parameterized by some set of known parameters (Paciorek, 2010;Page et al., 2017;Nobre et al., 2021). X and Z are typically assumed to be generated in such a way that X has two components of spatial structure: 1) one that is shared with Z (the confounded component), and 2) one that is not shared with Z (the unconfounded component). Based on characteristics of these assumed processes, theoretical results or observations have been used to identify when fitting a spatial analysis model of the form (3) will distort inference on β x (Paciorek, 2010;Nobre et al., 2021). In other words, the problematic relationship is between X and Z, as summarized in Figure 1.
Most of the theoretical results related to the data generation source of spatial confounding focus on X and Z. However, when it comes to methods designed to alleviate spatial confounding, there can be assumptions made about x and z. For example, Dupont et al. (2022) and Thaden and Kneib (2018) assume that x is a linear combination of z and Gaussian noise. In these cases, z is either chosen to have a fixed spatial structure or is generated from a spatial random field or process. This focus suggests that the problematic multicollinearity is between x and z, as summarized in Figure 1. The fact that some methods designed to alleviate spatial confounding focus on situations where x and z are collinear lends support to this idea. However, the theoretical results in this line of work are usually not related to characteristics of the observed realization x (or z), and the assumptions made in the theoretical results do not always ensure empirical collinearity between a given set of realizations x and z. In a similar manner, the characteristics of a particular realization x are not assessed in determining whether it is spatially dependent in a problematic sense. It is possible that the underlying belief is that if x and z are collinear and "spatial", then there will be collinearity between x and a spatial random effect in an analysis model. However, papers in this line of work spend very little time discussing the impact of spatial analysis models. For example, Thaden and Kneib (2018) define spatial confounding as occurring when: 1) X and Z are stochastically dependent, 2) $E(Y \mid X, Z) \neq E(Y \mid X)$, and 3) Z has a "spatial" structure. Notice that, in this definition, the emphasis is on the relationship between X, Z, and Y, and it mirrors more general definitions of confounders in causal inference research. It is not entirely clear what it means for X to have spatial structure or why it is problematic for X to have such a structure. More importantly, by this definition, spatial confounding exists regardless of the analysis model chosen.
We note that not every paper completely ignores the analysis model. For example, Dupont et al. (2022) explicitly stated they were viewing spatial confounding from the perspective of fitting spatial models via thin plate splines. While they stated that the smoothing that comes from fitting a spatial model contributes to the problem, the emphasis seemed to still be on the relationship between x and z. For example, the authors emphasized "if the correlation between the covariate and the spatial confounder is high, the smoothing applied to the spatial term in the model can disproportionately affect the estimate of the covariate effect." In other words, it did not appear that the smoothing alone was problematic. It is for this reason we group this work here, rather than with analysis model spatial confounding, although we note this work clearly has elements of both types of spatial confounding.
We take a moment to highlight several notable beliefs commonly found in the data generation spatial confounding line of work. First, the primary source of spatial confounding comes from the (potentially unknown) process that generated the data rather than the process of fitting a model. Second, fitting a spatial analysis model will lead to distortion in inferences when spatial confounding is present. However, here, spatial analysis models (whether of the form (3), a generalized additive model (GAM), or something else) are often treated as interchangeable. There is often no exploration of the impact of a particular choice of spatial model on inference, and inferior inferences for one type of spatial model are assumed to hold for other spatial models. Finally, it seems researchers assume the observed data (i.e., y and x) do not give insight into whether spatial confounding is present or should be accounted for in analyses.
Approaches to Alleviating Spatial Confounding
There have been numerous methods designed to alleviate spatial confounding. In this sub-section, we take a moment to point out that most of them can be categorized as being motivated by either the analysis model or data generation type of spatial confounding.
The first methods to alleviate spatial confounding were motivated by the analysis model source of spatial confounding. For areal analyses, Reich et al. (2006) and Hodges and Reich (2010) first proposed a methodology sometimes known as restricted spatial regression. This method suggested to, in a sense, replace the spatial random effect g(s) in a spatial analysis model with a new spatial random effect h(s) in an adjusted spatial analysis model. This new spatial random effect is projected onto the orthogonal complement of the column space of x. By "smoothing" orthogonally to the fixed effects, this methodology aimed to alleviate collinearity between x and the estimated variance-covariance matrix of h(s). In doing so, it directly addresses the analysis model source of confounding. This approach motivated and continues to motivate many further methodologies designed to alleviate spatial confounding. Most of these methods continue to involve changing the spatial random effect (or the analogue of it for other models) in the spatial analysis model. In other words, the adjustment from a model of the form (3) to (4) primarily involves replacing the spatial random effect, and the data remain unaltered. As noted previously, these adjusted analysis models are typically offered with the caveat that there may be some situations when traditional analysis models would be more appropriate (although it is currently unclear when that is).
We do not explore methods influenced by analysis model spatial confounding in the rest of this paper. Most of these methods have been influenced by restricted spatial regression analysis models. Recently, these models have been shown to perform poorly. Khan and Calder (2020) demonstrated that inference on $\beta_x$ is often worse with restricted spatial regression analysis models than with non-spatial analysis models. Zimmerman and Ver Hoef (2021) subsequently offered a more in-depth, thorough review of restricted spatial regression analysis models. These authors showed that smoothing orthogonally to the fixed effects distorted inference for a variety of inferential goals and concluded that employing such analysis models was "bad statistical practice."
Researchers motivated by data generation spatial confounding rely heavily on assumptions about how the data arose when developing methodology to alleviate spatial confounding. Thus, there can be various formulations. We focus on two methodologies proposed by Thaden and Kneib (2018) and Dupont et al. (2022) as illustrative examples of such approaches (described in more detail in Section 4.2). In both these works, the authors assume that the observed data are truly from a model with a form similar to (1) (in simulation studies, Dupont et al. (2022) introduced another unobserved spatial random effect to this model) and that $x = \beta_z z + \epsilon_x$, where $\epsilon_x$ is Gaussian noise. Based on these assumptions, the authors proposed methodologies to alleviate spatial confounding that replace (or are equivalent to replacing) either y or x in the analysis model. The details of these approaches are given in Section 4.2.
Recall, Thaden and Kneib (2018) offered little discussion of the impact of the spatial analysis model on inference, and Dupont et al. (2022) felt that their proposed methodology would work in settings beyond the thin plate splines setting they explored. Subsequent work has claimed both approaches are useful for other types of spatial models (Schmidt, 2021;Dupont et al., 2022). As discussed in Section 3.2, this is characteristic of work motivated by data generation spatial confounding. The unspoken belief is that something must be known about how the data were generated to appropriately analyze it. If the data were truly generated in line with the assumptions made, the proposed methodologies should be superior to traditional spatial regression analysis models (and non-spatial analysis models).
In the rest of this paper, we give theoretical results that show that both the analysis model and data generation types of spatial confounding can impact inference, sometimes in competing ways. Importantly, we also show that methods designed to alleviate spatial confounding that focus on only one type of spatial confounding can, in some cases, distort inference more than a spatial regression model.
Two Views of Spatial Confounding Bias
In this section, we introduce theoretical results exploring the bias in estimates of β x for various analysis models. We compare and contrast results derived with an emphasis on data generation and analysis model spatial confounding.
Throughout all sub-sections, we assume that data are originally generated from a model of the form (1). We consider a non-spatial analysis model of the form (2), spatial analysis models of the form (3), and adjusted spatial analysis models of the form (4). For the last category, we focus on the geoadditive structural equation modeling (GSEM) and Spatial+ approaches developed by Thaden and Kneib (2018) and Dupont et al. (2022), respectively.
Bias: Non-Spatial and Spatial Analysis Models
In this sub-section, we consider how the data generation and analysis model types of spatial confounding may impact bias in the estimation of $\beta_x$. We consider this for the non-spatial and spatial analysis models. To do so, we follow the set-up explored in Paciorek (2010). This article has shaped much of the current work focused on the data generation issue, as well as many of the most recent methods designed to alleviate spatial confounding (see e.g., Dupont et al., 2022; Marques et al., 2022).

Mirroring the work in Paciorek (2010), we begin by assuming that our response variable was generated from a model of the form Equation (1). However, instead of a particular set of realizations for x and z, we use the processes X and Z:
$$Y(s_i) = \beta_0 + \beta_x X(s_i) + \beta_z Z(s_i) + \epsilon_i, \qquad (5)$$
where $\epsilon_i$ is defined as in (1). We assume that X and Z are each generated from Gaussian random processes with positive-definite, symmetric covariance structures. In Paciorek (2010), the author considered two settings: one in which he stated there was no confounding in the data generation process and one in which he stated there was confounding in the data generation process. We restrict our attention to the situation where there is confounding in the data generation process. Throughout this section, we assume X and Z are generated from Gaussian processes with Matérn spatial correlations:
$$C(h \mid \theta, \nu) = \frac{1}{\Gamma(\nu)\, 2^{\nu-1}} \left( \frac{2\sqrt{\nu}\, h}{\theta} \right)^{\nu} K_{\nu}\!\left( \frac{2\sqrt{\nu}\, h}{\theta} \right), \qquad (6)$$
where h is the Euclidean distance between two locations, $K_\nu$ is the modified Bessel function of the second kind, $\nu$ is the smoothness parameter, and $\theta$ is the spatial range. We allow
$$X = X_c + X_u, \quad \text{where } \operatorname{Cov}(X) = \sigma_c^2 C(\theta_c) + \sigma_u^2 C(\theta_u), \quad \operatorname{Cov}(Z) = \sigma_z^2 C(\theta_c), \quad \operatorname{Cov}(X, Z) = \rho \sigma_c \sigma_z C(\theta_c).$$
We assume that C (θ c ) and C (θ u ) are each members of (6) with the same ν and potentially different spatial range parameters. We stress that the source of confounding here is ρ, and the spatial aspect of the confounding is the shared spatial correlation functions in C (θ c ) and C (θ u ). There is no guarantee that a particular set of realizations x and z will be collinear or share specific spatial patterns.
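To make this construction concrete, the following minimal R sketch (our own illustration, not code from any of the cited papers) implements the Matérn correlation in (6) and draws one joint realization of X and Z with the covariances above; the grid of locations, the parameter values, and the helper names (matern_cor, sim_xz) are assumptions we introduce for illustration.

# Matern correlation as parameterized in (6); C(0) = 1 by convention
matern_cor <- function(h, theta, nu) {
  out <- (1 / (gamma(nu) * 2^(nu - 1))) *
    (2 * sqrt(nu) * h / theta)^nu *
    besselK(2 * sqrt(nu) * h / theta, nu)
  out[h == 0] <- 1
  out
}

set.seed(1)
coords <- expand.grid(lon = seq(0, 1, length.out = 10),
                      lat = seq(0, 1, length.out = 10))
h <- as.matrix(dist(coords))            # pairwise Euclidean distances
n <- nrow(coords)

# one joint draw of (X, Z) with Cov(X) = sigma_c^2 C(theta_c) + sigma_u^2 C(theta_u),
# Cov(Z) = sigma_z^2 C(theta_c), and Cov(X, Z) = rho * sigma_c * sigma_z * C(theta_c)
sim_xz <- function(theta_c, theta_u, sigma_c, sigma_u, sigma_z, rho, nu = 2) {
  C_c <- matern_cor(h, theta_c, nu)
  C_u <- matern_cor(h, theta_u, nu)
  L_c <- t(chol(C_c + 1e-8 * diag(n)))  # small jitter for numerical stability
  L_u <- t(chol(C_u + 1e-8 * diag(n)))
  w_c <- L_c %*% rnorm(n)               # shared (confounded) component
  w_u <- L_u %*% rnorm(n)               # unconfounded component
  w_z <- L_c %*% rnorm(n)               # independent piece of Z
  x <- sigma_c * w_c + sigma_u * w_u
  z <- sigma_z * (rho * w_c + sqrt(1 - rho^2) * w_z)
  data.frame(coords, x = as.numeric(x), z = as.numeric(z))
}

dat <- sim_xz(theta_c = 0.5, theta_u = 0.1,
              sigma_c = 1, sigma_u = 1, sigma_z = 1, rho = 0.7)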
Data Generation Confounding
We first explore bias from the perspective of data generation spatial confounding. Work on data generation confounding tends to treat X and Z stochastically when deriving bias terms (Paciorek, 2010;Page et al., 2017). When considering bias for a spatial regression analysis model, generalized least squares estimators are used. We adopt this approach here.
In Remark 1 we calculate the bias terms $\mathrm{Bias}(\hat{\beta}^{NS}_x \mid X^*) = \beta_x - E(\hat{\beta}^{NS}_x \mid X^*)$ for a non-spatial regression analysis model, and $\mathrm{Bias}(\hat{\beta}^{S}_x \mid X^*) = \beta_x - E(\hat{\beta}^{S}_x \mid X^*)$ for a spatial regression analysis model of the form (3). Here, $X^* = [\mathbf{1} \;\; X]$.
Remark 1.
Let the data generating model be of the form (5) with $X = X_c + X_u$ and Z having the following characteristics:

1. $\operatorname{Cov}(X) = \sigma_c^2 C(\theta_c) + \sigma_u^2 C(\theta_u)$,
2. $\operatorname{Cov}(Z) = \sigma_z^2 C(\theta_c)$, and
3. $\operatorname{Cov}(X, Z) = \rho \sigma_c \sigma_z C(\theta_c)$,

where $C(\theta_c)$ and $C(\theta_u)$ are of the form (6) with the same $\nu$. If a non-spatial analysis model of the form (2) is employed with variance parameters assumed known, then $\mathrm{Bias}(\hat{\beta}^{NS}_X \mid X^*) = \beta_x - E(\hat{\beta}^{NS}_X \mid X^*)$ can be expressed as:

$$\beta_z \rho \frac{\sigma_z}{\sigma_c} \left[ \left(X^{*T} X^{*}\right)^{-1} X^{*T} K \left(X - \mu_x \mathbf{1}\right) \right]_2 \qquad (7)$$

If instead, a spatial analysis model of the form (3) is employed with variance parameters assumed known, then $\mathrm{Bias}(\hat{\beta}^{S}_X \mid X^*) = \beta_x - E(\hat{\beta}^{S}_X \mid X^*)$ can be expressed as:

$$\beta_z \rho \frac{\sigma_z}{\sigma_c} \left[ \left(X^{*T} \Sigma^{-1} X^{*}\right)^{-1} X^{*T} \Sigma^{-1} K \left(X - \mu_x \mathbf{1}\right) \right]_2 \qquad (8)$$

where $K = p_c \left( p_c I + (1 - p_c)\, C(\theta_u)\, C(\theta_c)^{-1} \right)^{-1}$, $p_c = \frac{\sigma_c^2}{\sigma_c^2 + \sigma_u^2}$, $\Sigma = \beta_z^2 \sigma_z^2 C(\theta_c) + \sigma^2 I$, and $[\,\cdot\,]_2$ indicates the second element of the vector.
Proof. See Appendix B.1 and Appendix B.2 for derivations.
We note that (8) is equivalent to Equation (6) in Paciorek (2010) when $\nu = 2$. The bias terms (7) and (8) are very complicated. We take a moment to point out several things. First, for the spatial model the "true" precision of Y (conditional on X), $\Sigma^{-1}$, is used, effectively ignoring the impact of the particular analysis model chosen. As we have discussed, this is very common in explorations of bias influenced by the data generation spatial confounding. However, we note that Paciorek (2010) did include a brief description of the impact of analysis models in Section 2.1 of that paper. Second, it is difficult to derive insights from these forms of bias. They are heavily dependent not only on the spatial range parameters and the various other variance parameters, but also on the distributional assumptions on X.
In Paciorek (2010), he measured the bias due to spatial confounding with the term $c^{S}(X) = \left(X^{*T} \Sigma^{-1} X^{*}\right)^{-1} X^{*T} \Sigma^{-1} K \left(X - \mu_x \mathbf{1}\right)$ from Remark 1. Here, we also introduce the non-spatial equivalent $c^{NS}(X) = \left(X^{*T} X^{*}\right)^{-1} X^{*T} K \left(X - \mu_x \mathbf{1}\right)$ from Remark 1.
More specifically, he considered $E_X(c^S(X))$. To control for the influence of the marginal variance parameters and $\beta_z$, he calculated (via simulations) $E_X(c^S(X))$ for various values of $p_c$ (defined in Remark 1) and the term $p_z = \frac{\beta_z^2 \sigma_z^2}{\beta_z^2 \sigma_z^2 + \sigma^2}$.
He did this for the case where C (θ c ) and C (θ u ) are members of (6) with ν = 2.
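Such expectations can be approximated by Monte Carlo. The R sketch below is our own illustration (loosely patterned after the same calculation, not copied from the publicly posted code referenced in the next paragraph); it estimates the slope component of $c^{S}(X)$ and $c^{NS}(X)$ averaged over draws of X. The normalizations $\sigma_c^2 + \sigma_u^2 = 1$ and $\beta_z^2\sigma_z^2 + \sigma^2 = 1$ (so that $p_c$ and $p_z$ fully determine the variance parameters), the grid, and all object names are assumptions we make for illustration.

# Monte Carlo estimate of E_X(c^S(X)) and E_X(c^NS(X)) under Remark 1
matern_cor <- function(h, theta, nu) {
  out <- (1 / (gamma(nu) * 2^(nu - 1))) *
    (2 * sqrt(nu) * h / theta)^nu * besselK(2 * sqrt(nu) * h / theta, nu)
  out[h == 0] <- 1
  out
}

set.seed(2)
coords <- expand.grid(seq(0, 1, length.out = 10), seq(0, 1, length.out = 10))
h <- as.matrix(dist(coords)); n <- nrow(coords)

bias_multipliers <- function(p_c, p_z, theta_c, theta_u, nu = 2, mu_x = 0, nsim = 200) {
  C_c <- matern_cor(h, theta_c, nu); C_u <- matern_cor(h, theta_u, nu)
  K      <- p_c * solve(p_c * diag(n) + (1 - p_c) * C_u %*% solve(C_c))
  Sigma  <- p_z * C_c + (1 - p_z) * diag(n)     # beta_z^2 sigma_z^2 C(theta_c) + sigma^2 I
  SigInv <- solve(Sigma)
  CovX   <- p_c * C_c + (1 - p_c) * C_u         # sigma_c^2 C(theta_c) + sigma_u^2 C(theta_u)
  L      <- t(chol(CovX + 1e-8 * diag(n)))
  cS <- cNS <- numeric(nsim)
  for (b in 1:nsim) {
    X  <- mu_x + as.numeric(L %*% rnorm(n))
    Xs <- cbind(1, X)
    # second element = the component of c^S(X) / c^NS(X) corresponding to the slope
    cS[b]  <- (solve(t(Xs) %*% SigInv %*% Xs, t(Xs) %*% SigInv %*% K %*% (X - mu_x)))[2]
    cNS[b] <- (solve(t(Xs) %*% Xs, t(Xs) %*% K %*% (X - mu_x)))[2]
  }
  c(E_cS = mean(cS), E_cNS = mean(cNS), p_c = p_c)
}

bias_multipliers(p_c = 0.5, p_z = 0.5, theta_c = 0.6, theta_u = 0.1)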
The results, replicated from his code available at https://www.stat.berkeley.edu/~paciorek/research/code/code.html, suggested that a spatial regression analysis model could result in reduced bias relative to a non-spatial analysis when $\theta_u < \theta_c$. On the other hand, a spatial regression analysis could also increase bias relative to a non-spatial analysis when $\theta_u > \theta_c$. To see this, note that Paciorek (2010) stated that $E_X(c^{NS}(X)) \approx p_c$. It can also be shown that $E_X(c^{S}(X)) \approx p_c$ when $\theta_c = \theta_u$. Figure 2 provides images of $E_X(c^{S}(X))$ for 100 locations on a grid of the unit square for different fixed values of $p_c$, $p_z$, $\theta_c$, and $\theta_u$ when $\nu = 2$. The upper left subplot of the image matrix provides a colored image of $E_X(c^{S}(X))$ when $p_c = p_z = 0.1$ and $\theta_c$ varies from 0 to 1 (x-axis) and $\theta_u$ varies from 0 to 1 (y-axis). As $\theta_c$ increases, holding all else constant, $E_X(c^{S}(X))$ decreases. In contrast, as $\theta_u$ increases, holding all else constant, $E_X(c^{S}(X))$ increases. Moving to the other subplots within the image matrix shows the same colored representation of $E_X(c^{S}(X))$, but for different values of $p_c$ and $p_z$. As either $p_c$ or $p_z$ increases, $E_X(c^{S}(X))$ also increases. Notice, however, that for any given value of $p_c$ and $p_z$, we see the same behavior for $E_X(c^{S}(X))$ as $\theta_u$ and $\theta_c$ change that we saw in the first subplot considered. Namely, reduced bias is observed when $\theta_u < \theta_c$.
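A quick way to see why $E_X(c^{S}(X)) \approx p_c$ when $\theta_c = \theta_u$ (our own check, using the notation of Remark 1): in that case $C(\theta_u) C(\theta_c)^{-1} = I$, so

$$K = p_c \left( p_c I + (1 - p_c) I \right)^{-1} = p_c I,$$

and since $X - \mu_x \mathbf{1} = -\mu_x \cdot \mathbf{1} + 1 \cdot X$ lies in the column space of $X^* = [\mathbf{1}\;\; X]$, the (generalized) least squares projection reproduces it exactly, with slope coefficient equal to 1. Hence, at least in this calculation with known variance parameters, the slope component of $c^{S}(X)$ equals $p_c$ exactly, and the same argument applies to $c^{NS}(X)$.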
Figure 2: This image depicts $E_X(c^{S}(X))$ for locations on a grid of the unit square when $C(\theta_u)$ and $C(\theta_c)$ belong to the Matérn class with $\nu = 2$. Recall, $E_X(c^{NS}(X)) \approx p_c$, and $E_X(c^{S}(X)) \approx p_c$ when $\theta_c = \theta_u$. Thus, terms lower than the diagonal represent a reduction in bias by modeling the residual spatial dependence. This image was created from Christopher Paciorek's code using the fields and lattice packages (Sarkar, 2008; Douglas Nychka et al., 2021).
Paciorek (2010) explicitly acknowledged that the case where $\theta_u > \theta_c$ is likely of limited interest in real applications. However, this case has increasingly influenced further research in spatial confounding. Or rather, the fact that bias for a spatial analysis model can be increased relative to the bias for a non-spatial model has influenced further research. Other papers often use this observation to support statements suggesting that spatial confounding occurs when the "spatial range of the observed risk factors is larger than the unobserved counterpart" (Marques et al., 2022). However, it is rarely acknowledged that these simulations considered only a very specific case (there are exceptions, see e.g., Keller and Szpiro, 2020b).
In the context of data generation spatial confounding, this can be problematic because the behavior of bias from spatial confounding is so dependent on the distributional assumptions for X. To illustrate this issue, we now repeat the simulation study for the case when $C(\theta_c)$ and $C(\theta_u)$ are members of (6) with $\nu = .5$. Here, the spatial process is less smooth than the case considered in Paciorek (2010). As in Paciorek (2009), for this case $E_X(c^{NS}(X)) \approx p_c$, and $E_X(c^{S}(X)) \approx p_c$ when $\theta_c = \theta_u$. For small values of $\theta_u$ and $\theta_c$, it turns out that the images can look fairly flat. To better illustrate trends, we consider values of $\theta_u$ and $\theta_c$ up to 10. In Figure 3, we see the bias modification term is almost always equal to the non-spatial equivalent. It appears that bias reduction can occur when $\theta_u$ is less than 2, regardless of the value of $\theta_c$. Similarly, bias can be increased when $\theta_c$ is less than 2, across all values of $\theta_u$. In other words, there is no longer strong evidence to support statements that spatial confounding impacts bias when the "spatial range of the observed risk factors is larger than the unobserved counterpart."

Figure 3: This image depicts $E_X(c^{S}(X))$ for locations on a grid of the unit square when $C(\theta_u)$ and $C(\theta_c)$ belong to the Matérn class with $\nu = .5$. Again, $E_X(c^{NS}(X)) \approx p_c$, and $E_X(c^{S}(X)) \approx p_c$ when $\theta_c = \theta_u$. Thus, terms lower than the diagonal represent a reduction in bias by modeling the residual spatial dependence. This image was created from an adaptation of Christopher Paciorek's code using the fields and lattice packages (Sarkar, 2008; Douglas Nychka et al., 2021).
Importantly, these examples illustrate how sensitive our conclusions about the impact of spatial confounding are to the distributional assumptions we make about X and Z.
Analysis Model Source of Spatial Confounding
In this sub-section, we focus on the analysis model type of spatial confounding. In order to make our results comparable to the setting explored in Section 4.1.1, we assume that for a particular set of realizations x and z, the response y is generated from a model of the form Equation (1). We can assume that the processes X and Z are generated as before. However, the results in this section do not depend on any distributional assumptions about X and Z. Unlike in Section 4.1.1, we assume that all variance parameters are unknown. As we will see, this results in conceptualizing spatial confounding by the relationships that x, y, and z have with the eigenvectors of an estimated precision matrix $\hat{\Sigma}^{-1}$.
We consider both the non-spatial analysis model and the class of spatial analysis models of the form (3). First, in Lemma 1, we derive the bias term that results from fitting a non-spatial analysis model; the resulting bias for $\hat{\beta}^{NS}_x$ is:
$$\beta_z \frac{\|\mathbf{1}\|^2 \langle x, z\rangle - \langle x, \mathbf{1}\rangle \langle z, \mathbf{1}\rangle}{\|\mathbf{1}\|^2 \|x\|^2 - \langle x, \mathbf{1}\rangle^2},$$
where $\langle \cdot, \cdot \rangle$ is the standard Euclidean inner product and $\|\cdot\|$ represents the norm induced by it.
Because this ends up being a special case of the bias for the GLS estimators discussed next, we delay a discussion of these terms. Now, we assume that we fit a spatial analysis model of the form (3). In Lemma 2, we show that the resulting bias for $\hat{\beta}^{S}_x$ is:
$$\beta_z \frac{\|\mathbf{1}\|^2_{\hat{\Sigma}^{-1}} \langle x, z\rangle_{\hat{\Sigma}^{-1}} - \langle x, \mathbf{1}\rangle_{\hat{\Sigma}^{-1}} \langle z, \mathbf{1}\rangle_{\hat{\Sigma}^{-1}}}{\|\mathbf{1}\|^2_{\hat{\Sigma}^{-1}} \|x\|^2_{\hat{\Sigma}^{-1}} - \langle x, \mathbf{1}\rangle^2_{\hat{\Sigma}^{-1}}}. \qquad (9)$$
Proof. See Appendix B.4 for the calculations.
Here, the estimate of the precision matrix $\Sigma^{-1}$ is $\hat{\Sigma}^{-1}$. We define the inner product $\langle m, n\rangle_{\hat{\Sigma}^{-1}} = m^T \hat{\Sigma}^{-1} n$ for $m, n \in \mathbb{R}^n$, and we let $\|\cdot\|_{\hat{\Sigma}^{-1}}$ be the norm induced by it (see Appendix A for more details). We do not make any assumptions about how the term $\hat{\Sigma}^{-1}$ is estimated (e.g., Bayesian vs. residual maximum likelihood), but we acknowledge that two different methods of fitting the same analysis model could result in different $\hat{\Sigma}^{-1}$. Finally, we note in Remark 2 that the bias term in Lemma 1 is a special case of the bias term in Lemma 2. The bias term in (9) is a function of $\beta_z$, which makes intuitive sense. Although this will, of course, not be known, we note that its impact on inference is the same across all analysis models belonging to the spatial analysis models as well as the non-spatial analysis model. For the moment, we focus on the other terms. We begin with the numerator of (9) (ignoring $\beta_z$):
$$\|\mathbf{1}\|^2_{\hat{\Sigma}^{-1}} \langle x, z\rangle_{\hat{\Sigma}^{-1}} - \langle x, \mathbf{1}\rangle_{\hat{\Sigma}^{-1}} \langle z, \mathbf{1}\rangle_{\hat{\Sigma}^{-1}}. \qquad (10)$$
Broadly speaking, this term tends to get smaller when one of two things happens. The first situation occurs when the low frequency eigenvectors of $\hat{\Sigma}^{-1}$ are "flat". We say an eigenvector is "flat" if there is a small angle (with respect to the Euclidean norm) between it and the column vector of ones, $\mathbf{1}$. We say an eigenvector is "low-frequency" if its associated eigenvalue is less than 1.
When this occurs, all terms involving $\mathbf{1}$ (i.e., $\|\mathbf{1}\|^2_{\hat{\Sigma}^{-1}}$, $\langle x, \mathbf{1}\rangle_{\hat{\Sigma}^{-1}}$, and $\langle z, \mathbf{1}\rangle_{\hat{\Sigma}^{-1}}$) will become smaller in magnitude. To illustrate this, we randomly generate locations for 140 observations on a $[0, 10] \times [0, 10]$ window. Using these locations, we represent different potential $\hat{\Sigma}^{-1}$'s by calculating the inverses of variance-covariance matrices for members of the Matérn class. In Figure 4, we use colors to denote three possible values of $\nu$: $\nu = .5$ (the exponential), $\nu = 1$ (Whittle), and $\nu = 2$. For fixed $\nu$, we then calculate various variance-covariance matrices by allowing $\theta$ to vary. For the inverse of each unique matrix, we calculate $\|\mathbf{1}\|_{\hat{\Sigma}^{-1}}$. For fixed $\nu$, we can expect the lowest frequency eigenvectors of the eigendecomposition of the associated $\hat{\Sigma}^{-1}$ to become flatter as $\theta$ increases. In Figure 4(a), we can see that $\|\mathbf{1}\|_{\hat{\Sigma}^{-1}}$ decreases in magnitude as $\theta$ increases for all values of $\nu$. This trend will also be seen in cross-products involving $\mathbf{1}$, as can be seen in Figure 4(b). Note that in these plots, the black line denotes the Euclidean norm. Almost all of the values of $\|\mathbf{1}\|_{\hat{\Sigma}^{-1}}$ are less than this in magnitude. In many practical situations where spatial covariance matrices are employed, this will tend to happen.
The second situation that will tend to decrease the magnitude of (10) occurs when there are small angles (again with respect to the Euclidean norm) between either x or z and low frequency eigenvectors of $\hat{\Sigma}^{-1}$. When this occurs, we say that x (or z) is spatially smooth with respect to $\hat{\Sigma}^{-1}$. Recall, for us, low frequency eigenvectors are those with associated eigenvalues less than 1. We note that (10) is symmetric in x and z. As just one of these variables becomes more correlated with a low frequency eigenvector, all terms involving it will tend to decrease in magnitude. Both variables being correlated with low frequency eigenvectors will tend to be associated with a further reduction in the magnitude of the bias. As an illustration, we again use the 140 locations just discussed. We generate a realization x from an exponential process ((6) with $\nu = .5$) at these locations with $\theta = 10$. In Figure 4(c), we illustrate how this realization appears spatially smooth. Because x is spatially smooth, it will often be correlated with low frequency eigenvectors. Unsurprisingly, $\|x\|_{\hat{\Sigma}^{-1}}$ is always smaller than the corresponding Euclidean norm. We note that if either x or z is linearly dependent with $\mathbf{1}$, then the bias term in (10) will be 0. Thus, the flatter x and z become, the smaller the bias.
Figure 4: Illustrations of components of (10): a) $\|\mathbf{1}\|_{\hat{\Sigma}^{-1}}$, b) $\langle x, \mathbf{1}\rangle_{\hat{\Sigma}^{-1}}$, c) illustration of x, d) $\|x\|_{\hat{\Sigma}^{-1}}$. All plots were made with Wickham (2016).
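The quantities summarized in Figure 4 can be reproduced along the following lines. This R sketch is our own illustration (the random locations, the exponential choice $\nu = .5$, and the object names are assumptions); it compares $\|\mathbf{1}\|_{\hat{\Sigma}^{-1}}$, $\langle x, \mathbf{1}\rangle_{\hat{\Sigma}^{-1}}$, and $\|x\|_{\hat{\Sigma}^{-1}}$ to their Euclidean counterparts as the range $\theta$ grows.

set.seed(3)
n <- 140
coords <- cbind(runif(n, 0, 10), runif(n, 0, 10))
h <- as.matrix(dist(coords))
ones <- rep(1, n)

# a spatially smooth realization x from an exponential process with theta = 10
C10 <- exp(-h / 10)
x <- as.numeric(t(chol(C10 + 1e-8 * diag(n))) %*% rnorm(n))

# norms and cross-products induced by candidate precisions Sigma_hat^{-1} = C(theta)^{-1}
norms_by_theta <- sapply(c(0.5, 1, 2, 5, 10), function(theta) {
  Qhat <- solve(exp(-h / theta) + 1e-8 * diag(n))  # exponential covariance, then invert
  c(theta        = theta,
    norm1_Q      = sqrt(drop(t(ones) %*% Qhat %*% ones)),
    cross_x1_Q   = drop(t(x) %*% Qhat %*% ones),
    normx_Q      = sqrt(drop(t(x) %*% Qhat %*% x)),
    norm1_euclid = sqrt(sum(ones^2)),
    normx_euclid = sqrt(sum(x^2)))
})
round(t(norms_by_theta), 2)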
The behavior of (10) supports the traditional view that fitting a spatial analysis model helps improve inference on $\beta_x$. When the low frequency eigenvectors of $\hat{\Sigma}^{-1}$ mirror the patterns of either x or z, fitting a spatial analysis model will tend to result in better estimates of $\beta_x$ than a non-spatial model. It also highlights that what it means to be "spatially smooth" for the purposes of bias reduction depends on the analysis model chosen. To see this, note that in Figure 4 a), b), and d) the magnitudes can be quite different for different choices of $\hat{\Sigma}^{-1}$, particularly when $\theta$ is small. Recall from our discussion in Section 3.2, researchers are sometimes concerned with collinearity between x and z as a possible source of confounding bias. We note that when $z = \alpha x$, for $\alpha \neq 0$, then (10) is always less than or equal to $\alpha \|\mathbf{1}\|^2_{\hat{\Sigma}^{-1}} \|x\|^2_{\hat{\Sigma}^{-1}}$. If x is correlated with low-frequency eigenvectors or the low-frequency eigenvectors are flat, this term will typically be smaller for a spatial analysis model than for a non-spatial analysis model. In other words, this suggests spatial analysis models can still reduce bias relative to a non-spatial analysis model when x and z are collinear so long as at least one of them is spatially smooth.
We now turn our attention to the denominator of (9):
$$\|\mathbf{1}\|^2_{\hat{\Sigma}^{-1}} \|x\|^2_{\hat{\Sigma}^{-1}} - \langle x, \mathbf{1}\rangle^2_{\hat{\Sigma}^{-1}} = \|\mathbf{1}\|^2_{\hat{\Sigma}^{-1}} \|x\|^2_{\hat{\Sigma}^{-1}} \left(\sin \varphi_{x,\mathbf{1}}\right)^2,$$
where $\varphi_{x,\mathbf{1}}$ is the angle between x and $\mathbf{1}$ with respect to the Riemannian metric induced by $\hat{\Sigma}^{-1}$ (see Appendix A for more details). The term $\sin \varphi_{x,\mathbf{1}}$ will be minimized when x is linearly dependent with $\mathbf{1}$, and it will be maximized when x is perpendicular (with respect to the Riemannian metric induced by $\hat{\Sigma}^{-1}$) to $\mathbf{1}$. In other words, because we are considering the denominator of the bias, the flatter x becomes, the larger the bias. This behavior supports the insights from research into the analysis model source of spatial confounding: x which are "too" spatially smooth can distort inference on $\beta_x$. Pulling these insights together, we see that, generally speaking, bias will decrease with a spatial analysis model in settings where x and z are spatially smooth or cases in which the low frequency eigenvectors of $\hat{\Sigma}^{-1}$ are flat. We emphasize again that what it means to be spatially smooth depends on the relationship of x and z with the low frequency eigenvectors of $\hat{\Sigma}^{-1}$. However, for cases when x is not only spatially smooth, but flat, the numerator and denominator of (9) work in opposite directions. At the extreme, when x is collinear with $\mathbf{1}$, the bias will be 0 (where we use the mathematical convention that $0/0 = 0$). However, as x becomes flatter, it is possible that the denominator will shrink faster than the numerator in some settings. In this case, the flatness of x can effectively serve to increase the bias. This reinforces the observations made by researchers influenced by analysis model spatial confounding. Finally, we note that for the case of collinearity between x and z (i.e., returning to $z = \alpha x$, $\alpha \neq 0$), the overall bias term is $\beta_z \alpha$ for both spatial analysis models and non-spatial analysis models. This suggests, contrary to some research in data generation spatial confounding, that bias induced by collinearity between x and z is not exacerbated by fitting a spatial analysis model.
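To spell out this last claim (our own one-line check, using the notation above): substituting $z = \alpha x$ with $\alpha \neq 0$ into (9), bilinearity of the inner product gives

$$\|\mathbf{1}\|^2_{\hat{\Sigma}^{-1}} \langle x, \alpha x\rangle_{\hat{\Sigma}^{-1}} - \langle x, \mathbf{1}\rangle_{\hat{\Sigma}^{-1}} \langle \alpha x, \mathbf{1}\rangle_{\hat{\Sigma}^{-1}} = \alpha \left( \|\mathbf{1}\|^2_{\hat{\Sigma}^{-1}} \|x\|^2_{\hat{\Sigma}^{-1}} - \langle x, \mathbf{1}\rangle^2_{\hat{\Sigma}^{-1}} \right),$$

so the ratio in (9) equals $\alpha$ and the bias equals $\beta_z \alpha$ for any choice of $\hat{\Sigma}^{-1}$, including $\hat{\Sigma}^{-1} = I$ (the non-spatial case). Collinearity between x and z therefore contributes the same bias to the spatial and non-spatial analysis models.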
Bias: Adjusted Spatial Analysis Models
In this section, we consider the impact of the analysis model on inference on $\beta_x$ for the GSEM and Spatial+ approaches referenced in Section 3.3. Recall, these models were developed to improve inference on $\beta_x$ when certain assumptions about the data generation process are assumed to be true. For the GSEM and Spatial+ methods, these assumptions include that $x = \beta_z z + \epsilon_x$. When this is the case, the data generation source of spatial confounding suggests that fitting GSEM or Spatial+ will reduce bias relative to a spatial analysis model or non-spatial analysis model.
We take a moment to give details on both approaches. The GSEM approach, summarized in Adjusted Spatial Analysis Method 1, is equivalent to replacing y and x with $r_y$ and $r_x$ (Thaden and Kneib, 2018). This latter set of variables is defined to be the residuals, respectively, from spatial analysis models using y and x as the response variable with no covariates. These residuals are then used to fit a non-spatial analysis model of the form (2), and inference for $\beta_x$ is based on the outcome. We note that while Thaden and Kneib (2018) did claim that the GSEM approach is equivalent to these steps, their work did not explore this equivalence. Dupont et al. (2022) utilized the approach for GSEM described in Adjusted Spatial Analysis Method 1 and found that the GSEM approach improved inference only when smoothing was used in Steps 1 and 2, and we adopt this convention from here on out. The Spatial+ approach, summarized in Adjusted Spatial Analysis Method 2, involves replacing x with $r_x$. The analysis model used for inference is then a spatial regression analysis model with response y and covariate $r_x$.
Adjusted Spatial Analysis Method 1 (GSEM). The GSEM approach can be summarized as follows:

1. Define $r_x$ to be the residuals from a spatial regression model with x as the response and only an intercept.
2. Define $r_y$ to be the residuals from a spatial regression model with y as the response and only an intercept.
3. Fit an analysis model of the form (2) with response $r_y$ and covariate $r_x$.

Adjusted Spatial Analysis Method 2 (Spatial+). The Spatial+ approach can be summarized as follows:

1. Define $r_x$ to be the residuals from a spatial regression model with x as the response and only an intercept.
2. Fit an analysis model of the form (4) with response y and covariate $r_x$.

Both GSEM and Spatial+ can be framed as special cases of adjusted spatial regression analysis models of the form (4). To see this, we assume, unless otherwise stated, that every step of these methods uses spatial analysis models of the form (3) (i.e., we use models of the form (3) to find $r_y$ and $r_x$ in Adjusted Spatial Analysis Method 1 and Adjusted Spatial Analysis Method 2, as outlined in Section 4.2). In Theorem 1, we consider the bias in estimating $\beta_x$ when $r_x$ replaces x in a final analysis model; the resulting bias is:
$$\beta_z \frac{\|\mathbf{1}\|^2_{\hat{\Sigma}^{-1}} \langle x, z\rangle_{\hat{\Sigma}^{-1}} - \langle x, \mathbf{1}\rangle_{\hat{\Sigma}^{-1}} \langle z, \mathbf{1}\rangle_{\hat{\Sigma}^{-1}}}{\|\mathbf{1}\|^2_{\hat{\Sigma}^{-1}} \|x\|^2_{\hat{\Sigma}^{-1}} - \langle x, \mathbf{1}\rangle^2_{\hat{\Sigma}^{-1}}}, \qquad (11)$$
Proof. See Appendix B.5 for the proof.
When the final analysis model is non-spatial, as in the GSEM method, the corresponding bias is:

$$\beta_z \frac{\|\mathbf{1}\|^2 \langle x, z\rangle - \langle x, \mathbf{1}\rangle \langle z, \mathbf{1}\rangle}{\|\mathbf{1}\|^2 \|x\|^2 - \langle x, \mathbf{1}\rangle^2}. \qquad (12)$$

Proof. Consider the case when $\hat{\Sigma}^{-1} = I$.
The bias for the GSEM method is equivalent to the bias for a non-spatial analysis model (given in (12)). Thus, an immediate insight is that the GSEM approach will result in inference on β x equivalent to that of a non-spatial analysis using the originally observed y and x. We note that Dupont et al. (2022) showed, theoretically, that the bias from the GSEM methodology would be equivalent to the bias from a non-spatial model in the context of thin plate splines when no smoothing occurs. However, their simulations did not find this to be true when smoothing was used. There is a close connection between thin plate splines and mixed models (Ruppert et al., 2003). Here, our mixed model results are most akin to a thin plate spline model where smoothing is used, and thus our results are at odds with those in Dupont et al. (2022). Whereas Dupont et al. (2022)'s simulations suggest that the GSEM methodology would improve inference relative to the spatial model when smoothing is used, our results suggest the opposite. As discussed previously, the non-spatial bias will tend to be larger than the bias from a spatial analysis model when x is spatially smooth. Thus, in cases where x is spatially smooth, the GSEM method can lead to inferior inference compared to a spatial analysis model.
On the other hand, the bias for the Spatial+ method is of the same form as the bias for a spatial analysis model (given in (11)). All of the discussion involving the behavior of these terms in Section 3.1 is relevant here. Interpreting the impact of performing the Spatial+ method relative to the performance of a spatial analysis model is difficult, however. For example, consider comparing the Spatial+ method to the spatial analysis model employed in step 2 of Adjusted Spatial Analysis Method 2. The difference between using $r_x$ and x in this spatial analysis model boils down to the estimated $\hat{\Sigma}^{-1}$. If $r_x$ is defined to be the residuals from a spatial analysis model of the form (3), then $r_x = x - \delta \mathbf{1}$ for $\delta \geq 0$. The fact that $r_x$ is a translation of x suggests that the estimated covariances will likely be similar when the analysis uses a positive-definite covariance structure (we would expect the largest difference to be in the estimation of $\beta_0$). This insight will not necessarily hold for models employing GMRFs, where the precision is singular. Proving these insights theoretically is difficult. We rely on simulation studies in Section 5 to explore these ideas more thoroughly. If these intuitions hold, however, then the Spatial+ method will yield almost equivalent inference to a traditional spatial analysis model with positive-definite covariance structures.
Importantly, we emphasize that these results suggest adjusted spatial analysis models will not improve inference for regression coefficients, even in the settings they were designed for, when spatial linear mixed models are used to fit them.
Simulation Studies
In all of the following simulation studies, we consider settings that have been identified in the literature as times when spatial confounding can distort inference for a regression coefficient. For each of these settings, we consider the absolute value of the bias for a regression coefficient for non-spatial, spatial, and adjusted spatial analysis models. Each simulation study is designed to explore whether insights from analysis model spatial confounding explored in Section 3.1 or the data generation spatial confounding explored in Section 3.2 have any relevance to the patterns of bias observed for estimates of regression coefficients.
The results of this paper have primarily focused on spatial linear mixed models that involve positive-definite covariance structures. However, the intrinsic conditional autoregressive (ICAR) model plays an important role in the spatial confounding literature. It was the model first considered in Hodges and Reich (2010) and Reich et al. (2006) in the modern introduction to the phenomenon of spatial confounding. As referenced previously, the Spatial+ methodology was originally developed for the thin plate spline setting, but the authors stated that the methodology should extend to the ICAR model (Besag et al., 1991). The methodology in Thaden and Kneib (2018) was also originally proposed for areal data where the ICAR model is traditionally used, and the spatial model these authors considered is thought to be equivalent to the ICAR model. Thus, in these simulation studies, we consider both geostatistical data fit to the class of models considered in our results (referred to as the "Geostatistical data setting") as well as areal data fit to an ICAR model (referred to as the "Areal data setting" because the ICAR model employs a GMRF).
Because we have not defined the ICAR model previously, we take a moment to do so here. The ICAR model incorporates spatial dependence for areal data with the introduction of an underlying, undirected graph G = (V, E). Non-overlapping spatial regions that partition the study area are represented by vertices, V = {1, . . . , n}, and edges E are defined so that each pair (i, j) represents the proximity between region i and region j. We represent G by its n × n binary adjacency matrix A with entries defined such that $\operatorname{diag}(A) = 0$ and $A_{i,j} = 1_{\{(i,j) \in E,\, i \neq j\}}$. The ICAR model could be considered a generalization of the spatial analysis model of the form (3), by stating that the spatial random effect has a distribution proportional to a multivariate normal distribution with mean 0 and precision matrix $\tau^2 \left(\operatorname{diag}(A\mathbf{1}) - A\right) = \tau^2 Q$, where $\tau^2$ controls the rate of decay for the spatial dependence and Q is the graph Laplacian.
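For concreteness, a minimal R sketch of these objects for the 11 × 11 grid used below; the rook-neighbour definition of adjacency and all object names are our assumptions.

# rook-neighbour adjacency matrix A for an 11 x 11 grid of cells
nr <- 11; nc <- 11; n <- nr * nc
cell_row <- rep(1:nr, times = nc)
cell_col <- rep(1:nc, each = nr)
A <- matrix(0, n, n)
for (i in 1:n) {
  for (j in 1:n) {
    if (i != j &&
        abs(cell_row[i] - cell_row[j]) + abs(cell_col[i] - cell_col[j]) == 1) {
      A[i, j] <- 1
    }
  }
}

# graph Laplacian Q = diag(A %*% 1) - A; the ICAR precision is tau2 * Q
Q <- diag(rowSums(A)) - A
tau2 <- 1                      # placeholder value; tau2 is estimated in the analyses
Prec_phi <- tau2 * Q           # rank n - 1 for a connected graph (improper prior)
qr(Q)$rank                     # should be n - 1 = 120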
This precision matrix is not of full rank, so we use a Bayesian analysis to fit all the relevant spatial models. We note there is a close connection between certain types of Bayesian analysis in this setting and modeling spatial random effects through the use of a smoothing penalty, as is done in the thin plate spline setting (Dupont et al., 2022; Rue and Held, 2005; Kimeldorf and Wahba, 1970). Because the graphs we consider are connected, there is an implicit intercept present in the ICAR model (Paciorek, 2009). Therefore, we omit an intercept from our spatial analysis models. For a Bayesian analysis, $\sigma^2$ and $\tau^2$ require priors. Here, we give them Inverse-Gamma priors with both hyperparameters equal to 0.01. Finally, to make the non-spatial model comparable, we also use a Bayesian analysis, giving the $\sigma^2$ parameter an Inverse-Gamma prior with the same hyperparameters as the spatial model. All models are fit using Markov chain Monte Carlo (MCMC) algorithms with Gibbs updates. All MCMC chains are run for 80,000 iterations with a 20,000-iteration burn-in.
Non-spatial and Spatial Analysis Models
In this sub-section, we use simulation studies to compare a spatial and a non-spatial model. For the spatial linear mixed model setting, we simulate data to ensure that spatial confounding from a data generation perspective is present. For the Gaussian Markov random field setting, we simulate data to ensure that spatial confounding from an analysis model perspective is present.
Geostatistical Data Setting
In this subsection, we simulate data to replicate the setting explored in Section 3.2. The data are all generated from a model of the form (1) as follows:
$$y_i = 0.3 + x_i + 2z_i + \epsilon_i,$$

where $\epsilon_i$ are independently simulated from a normal distribution with mean 0 and variance 0.1.
The 200 locations of the data are randomly generated on a $[0, 10] \times [0, 10]$ window one time, and these locations are then held fixed. The realizations x and z are simulated from mean zero spatial processes, denoted respectively X and Z, with spatial covariance structures defined by $C(h, \theta) = 0.1 \exp\{-h/\theta\}$ for Euclidean distance h (i.e., the exponential field).
We define $X = X_c + X_u$ and Z as follows: $\operatorname{Cov}(X) = C(\theta_c) + C(\theta_u)$, $\operatorname{Cov}(Z) = C(\theta_c)$, and $\operatorname{Cov}(X, Z) = \rho C(\theta_c)$. We generate 1000 datasets for each $(\theta_u, \theta_c, \rho) \in \{1, 5, 10\} \times \{1, 5, 10\} \times \{-.9, -.6, -.3, 0, .3, .6, .9\}$. For each dataset, we then fit a non-spatial analysis model of the form (2) and a spatial analysis model of the form (3). For the latter, g() is assumed to have spatial structure defined by $C(h, \theta) = \sigma_s^2 \exp\{-h/\theta\} + \sigma^2 I$, with unknown $\theta$, $\sigma_s^2$, and $\sigma^2$. Both analysis models are fit via residual maximum likelihood (REML).
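The following R sketch shows, under our own choices of seed and parameter values, how a single replicate of this design can be generated and how the two analysis models can be fit; using nlme::gls with an exponential correlation and a nugget is one way (an assumption on our part) to fit the spatial linear mixed model by REML.

set.seed(4)
n <- 200
dat <- data.frame(lon = runif(n, 0, 10), lat = runif(n, 0, 10))
h <- as.matrix(dist(dat))

theta_c <- 5; theta_u <- 1; rho <- 0.6
C_c <- 0.1 * exp(-h / theta_c)            # shared exponential covariance
C_u <- 0.1 * exp(-h / theta_u)
L_c <- t(chol(C_c + 1e-8 * diag(n)))
L_u <- t(chol(C_u + 1e-8 * diag(n)))
w_c <- L_c %*% rnorm(n); w_u <- L_u %*% rnorm(n); w_z <- L_c %*% rnorm(n)

dat$x <- as.numeric(w_c + w_u)                               # Cov(X) = C(theta_c) + C(theta_u)
dat$z <- as.numeric(rho * w_c + sqrt(1 - rho^2) * w_z)       # Cov(Z) = C(theta_c), Cov(X,Z) = rho C(theta_c)
dat$y <- 0.3 + dat$x + 2 * dat$z + rnorm(n, sd = sqrt(0.1))  # z is then withheld from the analysis models

# non-spatial analysis model (2)
fit_ns <- lm(y ~ x, data = dat)

# spatial analysis model (3): exponential correlation plus nugget, fit by REML
fit_s <- nlme::gls(y ~ x, data = dat,
                   correlation = nlme::corExp(form = ~ lon + lat, nugget = TRUE),
                   method = "REML")

c(beta_x_ns = coef(fit_ns)["x"], beta_x_s = coef(fit_s)["x"], truth = 1)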
We consider the absolute value of the bias for $\beta_x$ for both the non-spatial and spatial analysis models. Recall from Section 3.2 that many researchers use results from Paciorek (2010) (visualized in Figure 2) to support statements that fitting a spatial analysis model will lead to increased bias whenever $\theta_c < \theta_u$. In Figure 5a), we can see that the absolute bias tends to be larger for non-spatial models than for spatial models for all possible combinations of $\theta_c$ and $\theta_u$. These results support our findings in Section 3.1 (as well as findings in Section 2.1 of Paciorek (2009) regarding a spatial model fit via REML) that spatial analysis models will tend to reduce bias relative to non-spatial models. The discrepancy may simply be due to the fact that Paciorek (2009)'s original observation was made about a different type of spatial structure. However, when we repeated his results for the exponential spatial structure used here (visualized in Figure 3), the data generation focus on spatial confounding suggested that bias for spatial analysis models would increase (relative to non-spatial models) when $\theta_c < 2$. However, this did not appear to be true here in these simulations. Importantly, this is evidence that focusing on the data generation source of spatial confounding alone may not be able to explain bias in regression coefficients.
Of course, Figure 5a) considers how bias behaved across all datasets. If we compare the absolute value of the bias from a non-spatial analysis model and a spatial analysis model for a fixed dataset, the spatial analysis model resulted in less bias approximately 72% of the time. This remained true across all possible combinations of $\theta_c$ and $\theta_u$ (with the percentage of times the spatial bias was preferable varying from 62% to 77%). There did not appear to be a strong relationship in the patterns of bias as a function of $\rho$. Across all combinations of $\theta_c$ and $\theta_u$, the maximum absolute bias observed for the spatial model was 0.74. On the other hand, the maximum absolute bias for the non-spatial model was 1.6, and approximately 1.1% of the time the non-spatial analysis model resulted in an absolute value of bias over 1. To get a feel for the more general trends, we consider cases in which the bias for an analysis model accounted for over a 25% change in $\beta_x$ (i.e., when $\left|\hat{\beta}_x - \beta_x\right| / \beta_x > .25$). Such cases occurred approximately 15% of the time for the spatial analysis model and 42% of the time for the non-spatial model. When these more extreme cases of bias occurred for the spatial analysis model varied across particular combinations of $\theta_u$ and $\theta_c$. In particular, when $\theta_u$ is 1, the bias from a spatial analysis model accounted for a 25% change in estimates of $\beta_x$ less frequently (less than 1% of the time) compared to over 20% of the time for cases when $\theta_u > 1$. The insights from data generation spatial confounding were not particularly relevant in predicting the performance (with respect to absolute bias) of the spatial analysis model.
In summary, these results suggest that, on average, in the presence of either a spatially smooth x or residual spatial dependence (z), a spatial analysis model will result in less bias than a non-spatial analysis model. Perhaps more importantly, the magnitude of the bias when things go wrong is much larger for the non-spatial analysis model than for the spatial analysis model. We emphasize that these results suggest that the spatial analysis model outperforms the non-spatial analysis model in settings where the data generation spatial confounding focus suggests the opposite should happen.
Areal Data Setting
In the second setting, we work with areal data on an 11 × 11 grid on the unit square. Recall, work in analysis model spatial confounding suggests that a covariate which is collinear with low-frequency eigenvectors of the precision matrix of the spatial random effect could induce bias in the estimation of $\beta_x$. This is thought to be true regardless of whether there is a "missing" spatially dependent covariate. Here, we attempt to explore whether that is the case by simulating datasets with both a spatially-smooth covariate and with a covariate without much spatial structure.
For all simulated datasets, the response y is generated from a model of the form (1) as follows:
$$y_i = 0.3 + 3x_i + \epsilon_i,$$
where each $\epsilon_i$ is independently distributed from a normal distribution with mean 0 and variance 1. We explicitly leave out any residual spatial dependence from the data generation model in order to explore the impact of a covariate alone. We consider two possible choices of x: one in which x is spatially-smooth from an analysis model spatial confounding perspective and one in which it is not. For the latter category, we simply generate x from a normal distribution with mean 0 and variance $\sqrt{.06}$ once, as depicted in the left plot of Figure 6. We hold this vector fixed and simulate the response variable 100 times. In order to generate the spatially-smooth covariate, we use the eigenvectors of the graph Laplacian Q. For the ICAR model, there is not a variance-covariance matrix, but rather the singular precision matrix. However, we can treat this as the pseudo-inverse of a variance-covariance matrix (Paciorek, 2009). In this case, then, if x is strongly correlated with a low-frequency eigenvector of Q, the spatial analysis model may perform more poorly than the non-spatial model. Thus, we let x be the eigenvector of Q associated with the smallest non-zero eigenvalue, depicted in the right plot of Figure 6. As before, we hold this vector fixed for 100 simulated datasets.
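A brief R sketch of this construction (repeating the grid adjacency from the earlier sketch; the rook-neighbour choice, the seed, and our reading of the covariate variance are assumptions): the spatially smooth covariate is the eigenvector of the graph Laplacian Q associated with its smallest non-zero eigenvalue.

# adjacency and Laplacian for the 11 x 11 grid (rook neighbours), as in the earlier sketch
nr <- 11; nc <- 11; n <- nr * nc
cr <- rep(1:nr, times = nc); cc <- rep(1:nc, each = nr)
A <- outer(1:n, 1:n, Vectorize(function(i, j)
  as.numeric(i != j && abs(cr[i] - cr[j]) + abs(cc[i] - cc[j]) == 1)))
Q <- diag(rowSums(A)) - A

# eigen() returns eigenvalues in decreasing order; for a connected graph the last
# eigenvalue is (numerically) zero, so the second-to-last eigenvector is the
# "low-frequency" eigenvector associated with the smallest non-zero eigenvalue
eig <- eigen(Q, symmetric = TRUE)
x_smooth <- eig$vectors[, n - 1]

set.seed(5)
x_rough <- rnorm(n, mean = 0, sd = .06^(1/4))   # our reading of "variance sqrt(.06)"
y_smooth <- 0.3 + 3 * x_smooth + rnorm(n)       # one simulated response for each covariate
y_rough  <- 0.3 + 3 * x_rough  + rnorm(n)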
For each of the 200 datasets, we consider 2 analysis models: 1) a non-spatial analysis model, and 2) a spatial analysis model. Here, the spatial analysis model is the ICAR model. We use a Bayesian approach for both the spatial and the non-spatial analysis models, as described in the introduction of this section.
When x is not spatially smooth (i.e., randomly generated from a normal distribution), the spatial analysis and non-spatial analysis models gave relatively similar inferences for $\beta_x$. In the left hand plot of Figure 7, we see that across datasets, the absolute bias was relatively similar for both analysis models. In the right hand plot of Figure 7, we see that for individual datasets, the inference was also fairly similar for the two analysis models as well. In this plot, the absolute bias for the spatial analysis model is on the x-axis and the absolute bias for the non-spatial analysis model is on the y-axis. Datasets for which the covariate is not spatially smooth are colored red. The black dashed line represents when the spatial and non-spatial analysis models had equivalent bias, while the gray dashed lines represent where the absolute bias differed by 1 between the analysis models. A triangular shape indicates that the spatial analysis model had a smaller absolute bias than the non-spatial analysis model. The spatial model resulted in less absolute bias 49% of the time.

Figure 6: On the left, x randomly generated from a normal distribution; on the right, x defined as the low-frequency eigenvector of Q ("spatially smooth"). Plots made with the raster package (Hijmans, 2022).

When x is spatially smooth (i.e., it is the low frequency eigenvector of Q), the story does change a bit. In the left hand plot of Figure 7, we see that across datasets, the absolute bias of the spatial model has a slightly more right-skewed distribution. In the right hand plot of Figure 7, we see that for individual datasets, the inference was still fairly similar for the two analysis models. In this plot, recall, the absolute bias for the spatial analysis model is on the x-axis and the absolute bias for the non-spatial analysis model is on the y-axis. Datasets for which the covariate is spatially smooth are colored blue. The spatial model resulted in less absolute bias 39% of the time.
In sum, there is evidence that a spatially-smooth covariate (i.e., one correlated with a low frequency eigenvector of the graph Laplacian in this setting) may cause the spatial analysis model to have higher absolute bias than the non-spatial analysis model. However, the impact may not be particularly large in magnitude. Here we considered a covariate that was perfectly correlated with the eigenvector associated with the smallest (non-zero) eigenvalue. This is essentially a worst-case scenario, and the spatial analysis model and non-spatial analysis models stilled yielded similar inference. In fact, as seen in Figure 7, when the absolute bias for the non-spatial and spatial model differed by more than 1, the non-spatial model tended to have the higher absolute bias.
Spatial Analysis and Adjusted Spatial Analysis Models
In this sub-section, we generate data to replicate the settings explored in Thaden and Kneib (2018) and Dupont et al. (2022). For the geostatistical data setting, we seek to explore whether a spatial analysis model of the form (3) induces more bias than two adjusted spatial analysis models of the form (4). We restrict our attention to the spatial linear mixed model setting, as the impropriety of the ICAR model would require careful consideration of how to define a residual in a model with no covariates.
Geostatistical Data Setting
In this setting, the 400 locations of the data are randomly generated on a $[0, 10] \times [0, 10]$ window one time, and these locations are then held fixed throughout the subsequent simulations. Thaden and Kneib (2018) and Dupont et al. (2022) studied similar set-ups. Both papers considered settings in which:
$$x = 0.5 z + \epsilon_x.$$
Thaden and Kneib (2018) chose z to be fixed to three possible spatial patterns. Dupont et al. (2022) generated z from an exponential process (i.e., a spatial structure of the form (6) with $\nu = .5$) with $\theta = 5$ and then replaced z with the fitted values of a spatial thin plate regression spline fitted to it. This approach was meant to ensure that both the response variable and the covariates can be described by thin plate splines, and therefore eliminate bias due to model misspecification. Their supplemental materials suggested that simply using the Gaussian processes yielded similar results, though (see Web Appendix F of Dupont et al. (2022)). Because we find covariates defined to be the fitted values from a thin plate spline to be very restrictive, we adopt the convention of the supplemental material from Dupont et al. (2022) here (i.e., a spatial structure of the form (6) with $\nu = .5$ and $\theta = 5$). We chose $\epsilon_x$ to be independently distributed from a normal with mean 0 and variance $\sigma_x^2 = 0.1$. The response y is generated from a model of the form (1) as follows:
$$y_i = 0.3 + 2x_i - z_i + \epsilon_i,$$
where i are independently generated from a normal distribution with mean 0 and variance σ 2 y = 0.1. We consider 4 analysis models: 1) a non-spatial analysis model of the form (2) ("NS"), 2) a spatial analysis model of the form (4) ("S"), 3) the GSEM adjusted spatial analysis model, and 4) a Spatial+ adjusted spatial analysis model. All models are fit with REML and the spatial random effects are all represented with a spatial structure defined by C (d, θ) = σ 2 s exp{ −h θ } + σ 2 I, with unknown θ, σ 2 s , and σ 2 . As predicted in Section 4.2, the GSEM model yields the same inference as the non-spatial analysis model for all simulated datasets. Similarly, the Spatial+ model yields essentially the same observed biases as the spatial analysis model. In Figure 8a) we plot the absolute value for the observed bias across all analysis models. The spatial analysis model and Spatial+ model tend to result in significantly less bias than the GSEM and non-spatial model. We note that for a fixed data set, the spatial analysis model always produced less bias than the non-spatial and GSEM. In summary, we find that even if there is high collinearity between x and z, the spatial linear mixed model significantly improves inference on β x relative to a non-spatial model. Additionally, the GSEM and Spatial+ methodologies in this context do not improve inference for β x . Importantly, this again illustrates that insights from a data generation perspective of spatial confounding may not be particularly useful in explaining the patterns of bias.
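Below is a minimal sketch of this data-generating process in Python/NumPy (the original simulations were run in R). The unit marginal variance for z, the random seed, and the ordinary least squares comparison at the end are illustrative assumptions; fitting the spatial linear mixed model via REML is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
coords = rng.uniform(0, 10, size=(n, 2))              # fixed locations on [0,10] x [0,10]

# Exponential covariance (Matern with nu = 0.5), range theta = 5
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
theta = 5.0
Sigma_z = np.exp(-d / theta)

# Unobserved spatial confounder z, covariate x, and response y
z = np.linalg.cholesky(Sigma_z + 1e-8 * np.eye(n)) @ rng.standard_normal(n)
x = 0.5 * z + rng.normal(0.0, np.sqrt(0.1), size=n)    # sigma_x^2 = 0.1
y = 0.3 + 2.0 * x - z + rng.normal(0.0, np.sqrt(0.1), size=n)

# Naive non-spatial (OLS) estimate of beta_x, shown only for reference
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print("OLS estimate of beta_x:", beta_hat[1])
```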
Approaches for Fitting Adjusted Spatial Analysis Models
The Spatial+ and GSEM models were proposed in papers that did not utilize spatial linear mixed models in their simulation studies. Instead, both considered spatial analysis models fit with the R package mgcv. The Spatial+ method employed thin plate splines, while the GSEM model utilized a smoothing penalty that is supposed to be equivalent to the ICAR model. In this subsection, we consider both the geostatistical data setting and the areal data setting, and we now fit our spatial models with the mgcv package, utilizing thin plate splines as in Dupont et al. (2022). A conceptual sketch of the two-stage Spatial+ fit is given below.
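As a rough illustration of the two-stage Spatial+ idea (not the mgcv implementation), the sketch below swaps the thin plate spline basis for a simple Gaussian radial basis with a fixed ridge penalty; the basis, knot placement, bandwidth, and penalty value are all assumptions made only for the example.

```python
import numpy as np

def rbf_basis(coords, knots, bandwidth=2.0):
    """Gaussian radial basis evaluated at coords; a crude stand-in for the
    thin plate spline basis used by mgcv."""
    d = np.linalg.norm(coords[:, None, :] - knots[None, :, :], axis=-1)
    return np.exp(-(d / bandwidth) ** 2)

def spatial_plus_beta(coords, x, y, knots, lam=1.0):
    """Two-stage Spatial+ style fit: (1) regress x on the spatial basis and keep
    the residuals r_x; (2) regress y on [1, r_x, spatial basis] with a ridge
    penalty on the basis columns, and return the coefficient on r_x."""
    B = rbf_basis(coords, knots)
    # Stage 1: spatial fit for x, residuals r_x
    P1 = np.column_stack([np.ones(len(x)), B])
    pen1 = lam * np.eye(P1.shape[1]); pen1[0, 0] = 0.0
    g1 = np.linalg.solve(P1.T @ P1 + pen1, P1.T @ x)
    r_x = x - P1 @ g1
    # Stage 2: outcome model with r_x replacing x
    P2 = np.column_stack([np.ones(len(y)), r_x, B])
    pen2 = lam * np.eye(P2.shape[1]); pen2[:2, :2] = 0.0
    g2 = np.linalg.solve(P2.T @ P2 + pen2, P2.T @ y)
    return g2[1]                        # estimate of beta_x

# Example usage with the simulated coords, x, y from the previous sketch:
# knots = np.array(np.meshgrid(np.linspace(0, 10, 6), np.linspace(0, 10, 6))).reshape(2, -1).T
# print(spatial_plus_beta(coords, x, y, knots))
```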
Geostatistical Data Setting
Here, we simulate data exactly as in Section 5.2. We explore fitting 4 analysis models. The first three are: 1) a spatial analysis model ("S PS"), 2) a GSEM analysis model ("GSEM PS"), and 3) a Spatial+ model ("Spatial+ PS"). We fit all associated spatial models with the default settings of the mgcv package, as in Thaden and Kneib (2018) and Dupont et al. (2022). The default settings involve using penalized thin plate splines. We note that manually increasing the number of knots did not substantially change results for a simulated dataset, so we simply used the default selections. For comparison, we include a fourth analysis model ("S"): the same spatial analysis model considered in Section 5.2, fit via REML. We plot the absolute value of the biases for all analysis models in Figure 9. If we only considered models fit with the mgcv package, we would note that both the Spatial+ and GSEM analysis models tend to have slightly smaller bias than the spatial model. However, the spatial analysis model fit via REML outperforms all of these models. This suggests, as did our results in Section 3.1 and Section 3.3, that collinearity between x and z is not necessarily problematic in the spatial linear mixed model setting. In fact, relative to both a non-spatial model (explored in Section 5.2) and the two adjusted spatial analysis models here, the spatial analysis model is the best at reducing absolute bias for estimates of β_x. In the context of the broader literature, we note that the finding that Spatial+ and GSEM improve inference relative to penalized thin plate splines agrees with the simulation results in the Spatial+ paper. In other words, it is possible that the smoothing in this setting, in combination with collinearity between x and z, may increase absolute bias in the estimates of β_x. This problem may be mitigated by the Spatial+ and GSEM approaches, but these still result in increased bias relative to a spatial linear mixed model.
Areal Data Setting
In the second setting, we work with areal data on an 11 x 11 grid on the unit square as in Section 5.1.2. Following the simulation studies in Thaden and Kneib (2018), we hold z fixed for the generation of all data. We then define the rest of the model as in Thaden and Kneib (2018) as follows:

$$x = 0.5z + \epsilon_x,$$

where $\epsilon_x$ is independently distributed from a normal distribution with mean 0 and variance $\sigma^2_x$. The response y is generated from a model of the form (1) as follows:

$$y_i = 0.3 + 3x_i - z_i + \epsilon_i,$$

where each $\epsilon_i$ is independently distributed from a normal distribution with mean 0 and variance $\sigma^2_y$. In Thaden and Kneib (2018), the authors observed the largest differences between the GSEM analysis model and the spatial analysis model for $(\sigma_x, \sigma_y) \in \{(0.15, 1), (0.15, 0.15)\}$. We consider these two settings, and choose z to be the eigenvector of Q associated with the smallest non-zero eigenvalue as in Section 5.1.2 (depicted in Figure 6b). We choose this value to allow for some comparisons to our findings in Section 5.1.2. Although the exact x and its sample variance will, of course, differ, x will still tend to be highly collinear with the lowest-frequency eigenvector of the graph Laplacian.
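A minimal sketch of this areal data-generating step is given below; it assumes the grid_laplacian helper from the earlier sketch is in scope, and the random seed and the particular (σ_x, σ_y) pair shown are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
Q = grid_laplacian(11, 11)                      # helper from the earlier sketch
z = np.linalg.eigh(Q)[1][:, 1]                  # eigenvector for the smallest non-zero eigenvalue

sigma_x, sigma_y = 0.15, 1.0                    # one of the two settings considered
x = 0.5 * z + rng.normal(0.0, sigma_x, size=z.size)
y = 0.3 + 3.0 * x - z + rng.normal(0.0, sigma_y, size=z.size)
```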
For both combinations of $(\sigma_x, \sigma_y)$, we simulate 100 datasets for analysis. We consider four analysis models. The first three are again: 1) a spatial analysis model ("S PS"), 2) a GSEM analysis model ("GSEM PS"), and 3) a Spatial+ model ("Spatial+ PS"). All of these models are again fit with the mgcv package, utilizing thin plate splines as in Section 5.3.1. For comparison, we also consider the ICAR model ("S"). In Figure 10, we summarize the results for the absolute bias. The absolute bias, unsurprisingly, increases for all analysis models as the ratio $\sigma_x/\sigma_y$ decreases. For both combinations of $(\sigma_x, \sigma_y)$, the ICAR analysis model tends to have the smallest absolute bias, followed by the penalized thin plate analysis model. The GSEM and Spatial+ analysis models gave similar inferences, and their absolute biases tended to be higher than those of either unadjusted spatial model. We take a moment to compare our results from the ICAR analysis model here to the ICAR analysis model in Section 5.1.2. Here, the setting most similar to that in the previous section is when $\sigma_y = 1$, and in both simulation studies the bias we observe is very similar. For example, here the mean of the absolute biases was approximately 2.4 and the median of the absolute biases was approximately 1.9. In Section 5.1.2, the mean of the absolute biases was approximately 2.1 and the median was approximately 1.9. Recall that in Section 5.1.2 there was no missing confounder (z) or residual spatial dependence in the data generation model. For the ICAR analysis model, the fact that we observe similar bias patterns in Section 5.1.2 and Section 5.3.2 offers evidence that the collinearity between x and z is not the sole source of the bias we observe here. Instead, it would seem the collinearity between x and the graph Laplacian is the primary driver of bias.
We note that the fact that the Spatial+ and GSEM approaches yield larger absolute biases (although the differences are not huge) than either the ICAR analysis model or the penalized thin plate splines directly contradicts the findings in Thaden and Kneib (2018) and Dupont et al. (2022). This offers, at least, some evidence that these approaches may not be entirely appropriate in the Bayesian context.
Discussion
In this paper, we have synthesized the broad, and often muddled, literature on spatial confounding. We have introduced two broad focuses in the spatial confounding literature: the analysis model focus and the data generation focus. Using the spatial linear mixed model, we have shown how papers focused on the former category often conceptualize the problem of spatial confounding as originating from the relationship between an observed covariate x and the estimated precision matrix $\hat\Sigma^{-1}_g$ of a spatial random effect. We then showed how papers focused on the latter category typically identify the problem of spatial confounding as originating from the relationship between an observed covariate (x) and a collinear, unobserved covariate (z).
Our results highlight two important conclusions: 1) the original conceptualization of spatial confounding as "problematic" may not have been entirely correct, and 2) the analysis model and data generation perspectives of spatial confounding can lead to directly contradictory conclusions about whether spatial confounding exists and whether it adversely impacts inference on regression coefficients. With respect to the first point, the modern conceptualization of spatial confounding arose in work by Reich et al. (2006) and Hodges and Reich (2010). In our proposed framework, these papers focused on an analysis model type of spatial confounding. In the context of an ICAR model, they argued that whenever x was collinear with a low-frequency eigenvector of the graph Laplacian Q, the regression coefficients would be biased (relative to the regression coefficients obtained from a non-spatial model). Our results suggest that, in general, collinearity between x and low-frequency eigenvectors of the graph Laplacian helps to reduce bias in regression coefficients. It is only in relatively extreme cases, where x is "flat" and there is no spatially smooth residual dependence, that bias for regression coefficients can increase. In our simulation study, we produced such a setting by generating x to be perfectly correlated with a low-frequency eigenvector of the graph Laplacian. Even in this extreme scenario, however, the bias seen in a spatial analysis model was not that much different from the bias seen in a non-spatial analysis model.
Turning our attention to the second point, the data generation perspective of spatial confounding often relies on very specific assumptions about the processes that generated x and z, or on very specific assumptions about the relationship between these variables (i.e., that x is a combination of z and some Gaussian noise). Our results suggested that many of the scenarios that are identified as problematic from a data generation perspective are not problematic (at all) from an analysis model perspective. This is potentially problematic because many of these papers propose methods to "alleviate" spatial confounding based on the perceived problem (the relationship between x and z). In our simulation studies, we studied scenarios identified in the literature as being problematic from a data generation focus on spatial confounding. We considered two settings: a geostatistical data setting and an areal data setting. For the geostatistical data setting, a spatial analysis model fit with REML tended to outperform a non-spatial analysis model in all cases. Additionally, this spatial model either outperformed or was equivalent to the inference derived from the adjusted spatial analysis models. For the areal data setting, we found that the adjusted spatial analysis models increased the absolute bias relative to two types of spatial analysis model. In other words, focusing on the relationship between x and z (or the processes that generated them) did not help identify settings where a spatial analysis model distorted inference. Using these insights to "adjust" for spatial confounding led to inferences on regression coefficients that were worse than those from a standard spatial analysis model.
Taken together, the results and simulation studies in this paper offer support for the conventional wisdom of spatial statistics: accounting for residual spatial dependence tends to improve inference on regression coefficients. However, spatial analysis models are not interchangeable: the analysis model and the method used to fit it matter. For example, Dupont et al. (2022) correctly identified settings in which collinearity between x and z could lead to bias when penalized thin plate splines were used. In those settings, the Spatial+ methodology does reduce bias relative to the spatial penalized thin plate splines model. However, a spatial linear mixed model fit via REML outperforms both of these models with respect to bias. Importantly, the Spatial+ methodology did not continue to improve inference for a Bayesian approach. In order to avoid the pitfalls that currently plague the field of spatial confounding, future work motivated by spatial confounding needs to be more careful to precisely define both what is being studied and the analysis model being utilized.
A  Useful Facts about Differential Geometry and Linear Algebra
A.1 Notation for metrics and norms

Definition 1 (Standard Euclidean Inner Product and Norm). We use the notation $\langle \cdot, \cdot \rangle$ to denote the standard Euclidean inner product on the vector space $\mathbb{R}^n$. The notation $\|\cdot\|$ is then used to refer to the norm induced by this metric. Specifically, for $a, b \in \mathbb{R}^n$:

$$\langle a, b \rangle = a^\top b, \qquad \|a\| = \sqrt{a^\top a}.$$
Notational Convention 1 (Angles with respect to the Standard Euclidean Inner Product). Given $a, b \in \mathbb{R}^n$, we use $\theta_{a,b}$ to refer to the angle between these two vectors with respect to the standard Euclidean norm. Specifically:

$$\theta_{a,b} = \arccos\left(\frac{\langle a, b \rangle}{\|a\|\,\|b\|}\right).$$
Notational Convention 2 (Spectral Decomposition of $\hat\Sigma^{-1}$). Let $\hat\Sigma^{-1}$ be an $n \times n$ real, symmetric, positive definite matrix. We define $U D U^\top = \hat\Sigma^{-1}$ to be the spectral decomposition of $\hat\Sigma^{-1}$, with $D$ a diagonal matrix with diagonal $d_1 \geq \ldots \geq d_n > 0$.

Notational Convention 3 (Angles between a Vector and the Eigenvectors of $\hat\Sigma^{-1}$). Let $\hat\Sigma^{-1}$ be an $n \times n$ real, symmetric, positive definite matrix, and let $v$ be an arbitrary vector in $\mathbb{R}^n$. Let $U D U^\top = \hat\Sigma^{-1}$ be the spectral decomposition of $\hat\Sigma^{-1}$ as defined in Notational Convention 2. In this paper, we use the notation $\theta_{v,U}$ to define an $n \times 1$ vector whose $i$th element is the angle $\theta_{v,u_i}$ (with respect to the Euclidean norm as in Notational Convention 1) between $v$ and the $i$th column $u_i$ of $U$.
Definition 2 (Precision Matrix Induced Inner Product and Norm). Given an $n \times n$ real, symmetric, positive definite matrix $\hat\Sigma^{-1}$, we use the notation $\langle \cdot, \cdot \rangle_{\hat\Sigma^{-1}}$ to denote the inner product defined by the matrix on the vector space $\mathbb{R}^n$. The notation $\|\cdot\|_{\hat\Sigma^{-1}}$ is then used to refer to the norm induced by this inner product. More precisely, for $a, b \in \mathbb{R}^n$:

$$\langle a, b \rangle_{\hat\Sigma^{-1}} = a^\top \hat\Sigma^{-1} b, \qquad \|a\|_{\hat\Sigma^{-1}} = \sqrt{a^\top \hat\Sigma^{-1} a}.$$
Notational Convention 4 (Angles with respect to the Precision Matrix Induced Inner Product). Given $a, b \in \mathbb{R}^n$, we use $\phi_{a,b}$ to refer to the angle between them with respect to the precision matrix induced inner product. Technically, it would be more appropriate to use $\phi^{\hat\Sigma^{-1}}_{a,b}$; however, we drop the dependency on $\hat\Sigma^{-1}$ unless it is required for ease of reading. Specifically:

$$\phi_{a,b} = \arccos\left(\frac{\langle a, b \rangle_{\hat\Sigma^{-1}}}{\|a\|_{\hat\Sigma^{-1}}\,\|b\|_{\hat\Sigma^{-1}}}\right).$$
A.2 Useful Facts
Fact 1 (Re-expression of the Standard Euclidean Inner Product). For $a, b \in \mathbb{R}^n$, the standard Euclidean metric defined in Definition 1 can be re-expressed as $\langle a, b \rangle = \|a\|\,\|b\| \cos(\theta_{a,b})$, where $\theta_{a,b}$ is the angle between $a$ and $b$ with respect to the standard Euclidean metric as defined in Notational Convention 1.

Fact 2 (Re-expression of the Precision Matrix Induced Inner Product). For $a, b \in \mathbb{R}^n$, the precision matrix induced inner product defined in Definition 2 can be re-expressed as $\langle a, b \rangle_{\hat\Sigma^{-1}} = \|a\|_{\hat\Sigma^{-1}}\,\|b\|_{\hat\Sigma^{-1}} \cos(\phi_{a,b})$, where $\phi_{a,b}$ is the angle between $a$ and $b$ with respect to the precision matrix induced metric as defined in Notational Convention 4.
Fact 3 (Preservation of Angles with Eigenvectors of $\hat\Sigma^{-1}$). Suppose $\hat\Sigma^{-1}$ is an $n \times n$ real, symmetric, positive definite matrix, and $v_s$ is an arbitrary unit vector with respect to the norm $\|\cdot\|_{\hat\Sigma^{-1}}$ as defined in Definition 2 (i.e., $\|v_s\|_{\hat\Sigma^{-1}} = 1$). Let $v_\alpha = \alpha v_s$ for $\alpha > 0$. Define $\theta_{v_s,U}$ and $\theta_{v_\alpha,U}$ as in Notational Convention 3. It is the case that $\theta_{v_s,U} \equiv \theta_{v_\alpha,U}$.
Fact 4 (Re-Expression of the Precision Matrix Induced Norm). For a given vector $v \in \mathbb{R}^n$ and an $n \times n$ real, symmetric, positive definite matrix $\hat\Sigma^{-1}$, it is possible to re-express $\|v\|_{\hat\Sigma^{-1}}$ as a function of the sample mean $\bar v$, the sample variance $s^2_v$, and a unique set of $n$ angles (defined with respect to the standard Euclidean norm). Let $U D U^\top = \hat\Sigma^{-1}$ be the spectral decomposition of $\hat\Sigma^{-1}$ with $D$ a diagonal matrix with diagonal $d_1 \geq \ldots \geq d_n > 0$. Define $\theta_{v,U}$ to be an $n \times 1$ vector whose $i$th element is the angle $\theta_{v,u_i}$ (with respect to the Euclidean norm as in Notational Convention 1) between $v$ and the $i$th column $u_i$ of $U$. Then

$$\|v\|_{\hat\Sigma^{-1}} = \sqrt{\left[(n-1)\,s^2_v + n\bar v^2\right] \sum_{i=1}^n \cos^2(\theta_{v,u_i})\, d_i}.$$
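To make Fact 4 concrete, the following small numeric check (Python/NumPy; the random positive definite matrix and the vector are illustrative) compares the direct computation of the precision-induced norm with the re-expression above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
Prec = A @ A.T + n * np.eye(n)            # a symmetric positive definite "precision" matrix
v = rng.standard_normal(n)

# Direct computation of the precision-induced norm
norm_direct = np.sqrt(v @ Prec @ v)

# Re-expression via the sample mean/variance of v and angles to the eigenvectors of Prec
d, U = np.linalg.eigh(Prec)               # eigenvalues d_i and orthonormal eigenvectors u_i
cos_theta = (U.T @ v) / np.linalg.norm(v) # cos(theta_{v, u_i}) since each u_i has unit norm
v_bar, s2_v = v.mean(), v.var(ddof=1)
norm_fact4 = np.sqrt(((n - 1) * s2_v + n * v_bar**2) * np.sum(cos_theta**2 * d))

print(norm_direct, norm_fact4)            # the two values agree
```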
For the following, recall that $r_x = x - 1\,(1^\top \hat\Sigma^{-1} 1)^{-1} 1^\top \hat\Sigma^{-1} x$, and denote $r_x^* = [1 \;\; r_x]$.
$$\begin{aligned}
\mathbb{E}\big[\hat\beta_h\big] &= \big(r_x^{*\top}\hat\Sigma^{-1}r_x^{*}\big)^{-1} r_x^{*\top}\hat\Sigma^{-1}\big(\beta_0 1 + \beta_x x + \beta_z z\big) \\
&= \big(r_x^{*\top}\hat\Sigma^{-1}r_x^{*}\big)^{-1} r_x^{*\top}\hat\Sigma^{-1}\big(\beta_0 1 + \beta_x x\big) + \big(r_x^{*\top}\hat\Sigma^{-1}r_x^{*}\big)^{-1} r_x^{*\top}\hat\Sigma^{-1}\big(\beta_z z\big) \\
&= A(r_x, x) + B(r_x, x) \qquad (13)
\end{aligned}$$
The first term $A(r_x, x)$ is no longer simply $\beta$ as it was in Appendix B.4. For clarity, we consider the two terms $A(r_x, x)$ and $B(r_x, x)$ separately.

Simplifying $A(r_x, x)$. In this sub-section, we focus on the first term of (13). We employ the notation of Definition 2 in this section.
$$\begin{aligned}
A(r_x, x) &= \big(r_x^{*\top}\hat\Sigma^{-1}r_x^{*}\big)^{-1} r_x^{*\top}\hat\Sigma^{-1}\big(\beta_0 1 + \beta_x x\big) \\
&= \begin{bmatrix} 1^\top\hat\Sigma^{-1}1 & 1^\top\hat\Sigma^{-1}r_x \\ r_x^\top\hat\Sigma^{-1}1 & r_x^\top\hat\Sigma^{-1}r_x \end{bmatrix}^{-1}
\left( r_x^{*\top}\hat\Sigma^{-1}1\,\beta_0 + r_x^{*\top}\hat\Sigma^{-1}x\,\beta_x \right) \\
&= \frac{1}{1^\top\hat\Sigma^{-1}1\; r_x^\top\hat\Sigma^{-1}r_x - 1^\top\hat\Sigma^{-1}r_x\; r_x^\top\hat\Sigma^{-1}1}
\left( \begin{bmatrix} r_x^\top\hat\Sigma^{-1}r_x\; 1^\top\hat\Sigma^{-1}1 - 1^\top\hat\Sigma^{-1}r_x\; r_x^\top\hat\Sigma^{-1}1 \\ 1^\top\hat\Sigma^{-1}1\; r_x^\top\hat\Sigma^{-1}1 - r_x^\top\hat\Sigma^{-1}1\; 1^\top\hat\Sigma^{-1}1 \end{bmatrix}\beta_0
+ \begin{bmatrix} r_x^\top\hat\Sigma^{-1}r_x\; 1^\top\hat\Sigma^{-1}x - 1^\top\hat\Sigma^{-1}r_x\; r_x^\top\hat\Sigma^{-1}x \\ 1^\top\hat\Sigma^{-1}1\; r_x^\top\hat\Sigma^{-1}x - r_x^\top\hat\Sigma^{-1}1\; 1^\top\hat\Sigma^{-1}x \end{bmatrix}\beta_x \right) \\
&= \frac{1}{\|1\|^2_{\hat\Sigma^{-1}}\|r_x\|^2_{\hat\Sigma^{-1}} - \langle r_x, 1\rangle^2_{\hat\Sigma^{-1}}}
\left( \begin{bmatrix} \|1\|^2_{\hat\Sigma^{-1}}\|r_x\|^2_{\hat\Sigma^{-1}} - \langle r_x, 1\rangle^2_{\hat\Sigma^{-1}} \\ 0 \end{bmatrix}\beta_0
+ \begin{bmatrix} \|r_x\|^2_{\hat\Sigma^{-1}}\langle x, 1\rangle_{\hat\Sigma^{-1}} - \langle r_x, 1\rangle_{\hat\Sigma^{-1}}\langle r_x, x\rangle_{\hat\Sigma^{-1}} \\ \|1\|^2_{\hat\Sigma^{-1}}\langle r_x, x\rangle_{\hat\Sigma^{-1}} - \langle r_x, 1\rangle_{\hat\Sigma^{-1}}\langle x, 1\rangle_{\hat\Sigma^{-1}} \end{bmatrix}\beta_x \right)
\end{aligned}$$
Figure 2: This image depicts $E_X(c_S(X))$ for 100 locations on a grid of the unit square when $C(\theta_u)$ and $C(\theta_c)$ belong to the Matérn class.

Lemma 1. Let the data generating model be of the form (1) with y and x known. If a non-spatial analysis model of the form (2) is fit, then $\mathrm{Bias}(\hat\beta^{NS}_x) = \beta_x - \mathbb{E}[\hat\beta^{NS}_x]$ can be expressed as:

Lemma 2. Let the data generating model be of the form Equation (1) with y and x known. If a spatial analysis model of the form (3) is fit, and results in the positive definite estimate $\hat\Sigma$, then the bias $\mathrm{Bias}(\hat\beta^{S}_x) = \beta_x - \mathbb{E}[\hat\beta^{S}_x]$ can be expressed as:

Theorem 1. Let the data generating model be of the form Equation (1) with y and x known. We assume that $r_x$ are the residuals from a spatial analysis model of the form (4) with response x and only an intercept. If the final analysis model is a spatial analysis model of the form (3) with $x = r_x$ and results in the estimate $\hat\Sigma$, then the bias $\mathrm{Bias}(\hat\beta^{AS}_x) = \beta_x - \mathbb{E}[\hat\beta^{AS}_x]$ can be expressed as:

Lemma 3. If a non-spatial final analysis model of the form (2) is used instead of a spatial analysis model in Theorem 1, then $\mathrm{Bias}(\hat\beta^{AS}_x)$ is:

Figure 5: Boxplots of the absolute value of the observed bias, i.e., $|\mathrm{Bias}(\hat\beta_x)|$. Plot made with ggplot2 (Wickham, 2016).

Figure 6: Visualization of a) x generated from a normal distribution ("random"), b) x defined as the low-frequency eigenvector of Q ("spatially smooth"). Plots made with the raster package (Hijmans, 2022).

Figure 8: Boxplots of the absolute value of the observed bias. All plots were made with ggplot2 (Wickham, 2016).

Figure 9: Boxplots of the absolute value of the observed bias. Plot made with the ggplot2 package (Wickham, 2016).

Figure 10: Boxplots of the absolute value of the observed bias. On the left hand side $(\sigma_x, \sigma_y) = (0.15, 0.15)$, and on the right hand side $(\sigma_x, \sigma_y) = (0.15, 1)$. Plot made with the ggplot2 package (Wickham, 2016).
Now, we restrict our attention to the second component. To simplify further, we note that $r_x = x - \alpha 1$ with $\alpha = \langle x, 1 \rangle_{\hat\Sigma^{-1}} / \|1\|^2_{\hat\Sigma^{-1}} \geq 0$, and we can therefore use the observations enumerated in Fact 5, making use of the identities in Fact 5 for both the numerator and the denominator.

Simplifying $B(r_x, x)$. In this sub-section, we focus on the second term of (13). We again employ the notation of Definition 2. We note that the term is equivalent to the second term of Appendix B.4 with $r_x$ replacing $x$; therefore, we can borrow the result there. Restricting our attention to the second component and again using Fact 5, we can simplify further.

Bias for $\beta_x$. Combining our results from (14) and (15):
Alleviating confounding in spatio-temporal areal models with an application on crimes against women in india. A Adin, T Goicoa, J S Hodges, P M Schnell, M D Ugarte, Statistical Modelling. Adin, A., Goicoa, T., Hodges, J. S., Schnell, P. M., and Ugarte, M. D. (2021). Alleviating confounding in spatio-temporal areal models with an application on crimes against women in india. Statistical Modelling, page 1471082X211015452.
Mspock: Alleviating spatial confounding in multivariate disease mapping models. D R Azevedo, M O Bandyopadhyay, D , Journal of Agricultural, Biological and Environmental Statistics. Azevedo, D. R., , M. O., and Bandyopadhyay, D. (2021). Mspock: Alleviating spatial confounding in multivariate disease mapping models. Journal of Agricultural, Biological and Environmental Statistics, pages 1-28.
Alleviating spatial confounding in frailty models. D R Azevedo, M O Prates, D Bandyopadhyay, Biostatistics. Azevedo, D. R., Prates, M. O., and Bandyopadhyay, D. (2022). Alleviating spatial confounding in frailty models. Biostatistics.
Bayesian image restoration, with two applications in spatial statistics. J Besag, J York, A Mollié, Annals of the institute of statistical mathematics. 431Besag, J., York, J., and Mollié, A. (1991). Bayesian image restoration, with two applications in spatial statistics. Annals of the institute of statistical mathematics, 43(1):1-20.
Applied spatial data analysis with R. R S Bivand, E J Pebesma, V Gómez-Rubio, E J Pebesma, Springer747248717Bivand, R. S., Pebesma, E. J., Gómez-Rubio, V., and Pebesma, E. J. (2008). Applied spatial data analysis with R, volume 747248717. Springer.
An adjusted parameter estimation for spatial regression with spatial confounding. Y.-H Chiou, H.-D Yang, Chen , C.-S , Stochastic Environmental Research and Risk Assessment. 338Chiou, Y.-H., Yang, H.-D., and Chen, C.-S. (2019). An adjusted parameter estimation for spatial regression with spatial confounding. Stochastic Environmental Research and Risk Assessment, 33(8):1535-1551.
Spatial correlation in ecological analysis. D G Clayton, L Bernardinelli, C Montomoli, International Journal of Epidemiology. 226Clayton, D. G., Bernardinelli, L., and Montomoli, C. (1993). Spatial correlation in ecological analysis. International Journal of Epidemiology, 22(6):1193-1202.
Statistics for Spatial Data. N Cressie, WileyCressie, N. (1993). Statistics for Spatial Data. Wiley.
Model-based geostatistics. P J Diggle, J Tawn, R Moyeed, Journal of the Royal Statistical Society: Series C (Applied Statistics). 473Diggle, P. J., Tawn, J., and Moyeed, R. (1998). Model-based geostatistics. Journal of the Royal Statistical Society: Series C (Applied Statistics), 47(3):299-350.
fields: Tools for spatial data. Douglas Nychka, Reinhard Furrer, John Paige, Stephan Sain, R package version 14.0Douglas Nychka, Reinhard Furrer, John Paige, and Stephan Sain (2021). fields: Tools for spatial data. R package version 14.0.
Spatial+: a novel approach to spatial confounding. E Dupont, S N Wood, Augustin , N H , Biometrics. Dupont, E., Wood, S. N., and Augustin, N. H. (2022). Spatial+: a novel approach to spatial confounding. Biometrics.
Restricted spatial regression in practice: geostatistical models, confounding, and robustness under model misspecification. E M Hanks, E M Schliep, M B Hooten, J A Hoeting, Environmetrics. 264Hanks, E. M., Schliep, E. M., Hooten, M. B., and Hoeting, J. A. (2015). Restricted spatial regression in practice: geostatistical models, confounding, and robustness under model misspecification. Environmetrics, 26(4):243-254.
The bayesian group lasso for confounded spatial data. T J Hefley, M B Hooten, E M Hanks, R E Russell, D P Walsh, Journal of Agricultural, Biological and Environmental Statistics. 221Hefley, T. J., Hooten, M. B., Hanks, E. M., Russell, R. E., and Walsh, D. P. (2017). The bayesian group lasso for confounded spatial data. Journal of Agricultural, Biological and Environmental Statistics, 22(1):42-59.
raster: Geographic Data Analysis and Modeling. R package version 3. R J Hijmans, Hijmans, R. J. (2022). raster: Geographic Data Analysis and Modeling. R package version 3.5-15.
Adding spatially-correlated errors can mess up the fixed effect you love. J S Hodges, B J Reich, The American Statistician. 644Hodges, J. S. and Reich, B. J. (2010). Adding spatially-correlated errors can mess up the fixed effect you love. The American Statistician, 64(4):325-334.
Dimension reduction and alleviation of confounding for spatial generalized linear mixed models. J Hughes, M Haran, Journal of the Royal Statistical Society: Series B (Statistical Methodology). 751Hughes, J. and Haran, M. (2013). Dimension reduction and alleviation of confounding for spatial generalized linear mixed models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75(1):139-159.
Spatial confounding in generalized estimating equations. F K Hui, H D Bondell, The American Statistician. Hui, F. K. and Bondell, H. D. (2021). Spatial confounding in generalized estimating equations. The American Statistician, pages 1-10.
Selecting a scale for spatial confounding adjustment. J P Keller, A A Szpiro, Journal of the Royal Statistical Society: Series A (Statistics in Society). 1833Keller, J. P. and Szpiro, A. A. (2020a). Selecting a scale for spatial confounding adjustment. Journal of the Royal Statistical Society: Series A (Statistics in Society), 183(3):1121-1143.
Selecting a scale for spatial confounding adjustment. J P Keller, A A Szpiro, Journal of the Royal Statistical Society. Series A. 18331121Statistics in Society)Keller, J. P. and Szpiro, A. A. (2020b). Selecting a scale for spatial confounding adjustment. Journal of the Royal Statistical Society. Series A,(Statistics in Society), 183(3):1121.
Modeling the social and spatial proximity of crime: domestic and sexual violence across neighborhoods. C Kelling, C Graif, G Korkmaz, M Haran, Journal of quantitative criminology. 372Kelling, C., Graif, C., Korkmaz, G., and Haran, M. (2021). Modeling the social and spatial proximity of crime: domestic and sexual violence across neighborhoods. Journal of quantitative criminology, 37(2):481-516.
Restricted spatial regression methods: Implications for inference. K Khan, C A Calder, Journal of the American Statistical Association. Khan, K. and Calder, C. A. (2020). Restricted spatial regression methods: Implications for inference. Journal of the American Statistical Association, pages 1-13.
Spline functions and stochastic processes. G S Kimeldorf, G Wahba, Sankhyā: The Indian Journal of Statistics, Series A. Kimeldorf, G. S. and Wahba, G. (1970). Spline functions and stochastic processes. Sankhyā: The Indian Journal of Statistics, Series A, pages 173- 180.
Mitigating spatial confounding by explicitly correlating gaussian random fields. I Marques, T Kneib, N Klein, Environmetrics. 2727Marques, I., Kneib, T., and Klein, N. (2022). Mitigating spatial confounding by explicitly correlating gaussian random fields. Environmetrics, page e2727.
On the effects of spatial confounding in hierarchical models. W S Nobre, A M Schmidt, J B Pereira, International Statistical Review. 892Nobre, W. S., Schmidt, A. M., and Pereira, J. B. (2021). On the effects of spatial confounding in hierarchical models. International Statistical Review, 89(2):302-322.
Technical vignette 5: Understanding intrinsic gaussian markov random field spatial models, including intrinsic conditional autoregressive models. C Paciorek, Technical reportPaciorek, C. (2009). Technical vignette 5: Understanding intrinsic gaussian markov random field spatial models, including intrinsic conditional autoregressive models. Technical report.
The importance of scale for spatial-confounding bias and precision of spatial regression estimators. C J Paciorek, Statistical Science: A Review Journal of the Institute of Mathematical Statistics. 251107Paciorek, C. J. (2010). The importance of scale for spatial-confounding bias and precision of spatial regression estimators. Statistical Science: A Review Journal of the Institute of Mathematical Statistics, 25(1):107.
Estimation and prediction in the presence of spatial confounding for spatial linear models. G L Page, Y Liu, Z He, D Sun, Scandinavian Journal of Statistics. 443Page, G. L., Liu, Y., He, Z., and Sun, D. (2017). Estimation and prediction in the presence of spatial confounding for spatial linear models. Scandinavian Journal of Statistics, 44(3):780-797.
Adjusting for unmeasured spatial confounding with distance adjusted propensity score matching. G Papadogeorgou, C Choirat, C M Zigler, Biostatistics. 202Papadogeorgou, G., Choirat, C., and Zigler, C. M. (2019). Adjusting for unmeasured spatial confounding with distance adjusted propensity score matching. Biostatistics, 20(2):256-272.
Alleviating spatial confounding for areal data problems by displacing the geographical centroids. M O Prates, R M Assuncao, E C Rodrigues, Bayesian Analysis. 142Prates, M. O., Assuncao, R. M., and Rodrigues, E. C. (2019). Alleviating spatial confounding for areal data problems by displacing the geographical centroids. Bayesian Analysis, 14(2):623-647.
Identification of the variance components in the general two-variance linear model. B J Reich, J S Hodges, Journal of Statistical Planning and Inference. 1386Reich, B. J. and Hodges, J. S. (2008). Identification of the variance components in the general two-variance linear model. Journal of Statistical Planning and Inference, 138(6):1592-1604.
Effects of residual smoothing on the posterior of the fixed effects in disease-mapping models. B J Reich, J S Hodges, V Zadnik, Biometrics. 624Reich, B. J., Hodges, J. S., and Zadnik, V. (2006). Effects of residual smoothing on the posterior of the fixed effects in disease-mapping models. Biometrics, 62(4):1197-1206.
A review of spatial causal inference methods for environmental and epidemiological applications. B J Reich, S Yang, Y Guan, A B Giffin, M J Miller, A Rappold, International Statistical Review. Reich, B. J., Yang, S., Guan, Y., Giffin, A. B., Miller, M. J., and Rappold, A. (2021). A review of spatial causal inference methods for environmental and epidemiological applications. International Statistical Review.
H Rue, L Held, Gaussian Markov Random Fields: Theory and Applications. CRC pressRue, H. and Held, L. (2005). Gaussian Markov Random Fields: Theory and Applications. CRC press.
D Ruppert, M P Wand, R J Carroll, Semiparametric regression. Number 12. Cambridge university pressRuppert, D., Wand, M. P., and Carroll, R. J. (2003). Semiparametric regression. Number 12. Cambridge university press.
Lattice: Multivariate Data Visualization with R. D Sarkar, 978-0-387-75968-5SpringerNew YorkSarkar, D. (2008). Lattice: Multivariate Data Visualization with R. Springer, New York. ISBN 978-0-387-75968-5.
Discussion on "spatial+: A novel approach to spatial confounding" by emiko dupont, simon n. wood, and nicole h. augustin. A M Schmidt, Biometrics. Schmidt, A. M. (2021). Discussion on "spatial+: A novel approach to spatial confounding" by emiko dupont, simon n. wood, and nicole h. augustin. Biometrics.
Structural equation models for dealing with spatial confounding. H Thaden, T Kneib, The American Statistician. 723Thaden, H. and Kneib, T. (2018). Structural equation models for dealing with spatial confounding. The American Statistician, 72(3):239-252.
Applied Spatial Statistics for Public Health Data. L A Waller, C A Gotway, John Wiley & Sons368Waller, L. A. and Gotway, C. A. (2004). Applied Spatial Statistics for Public Health Data, volume 368. John Wiley & Sons.
ggplot2: Elegant Graphics for Data Analysis. H Wickham, Springer-VerlagNew YorkWickham, H. (2016). ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York.
Low-rank representations for spatial processes. C K Wikle, Handbook of spatial statistics. CRC PressWikle, C. K. (2010). Low-rank representations for spatial processes. In Handbook of spatial statistics, pages 114-125. CRC Press.
Estimation and selection for spatial confounding regression models. H.-D Yang, Y.-H Chiou, C.-S Chen, Communications in Statistics-Theory and Methods. Yang, H.-D., Chiou, Y.-H., and Chen, C.-S. (2021). Estimation and selection for spatial confounding regression models. Communications in Statistics-Theory and Methods, pages 1-17.
On deconfounding spatial confounding in linear models. D L Zimmerman, J M Ver Hoef, The American Statistician. Zimmerman, D. L. and Ver Hoef, J. M. (2021). On deconfounding spatial confounding in linear models. The American Statistician, pages 1-9.
Fact 5 (Inner Products of Certain Kinds of Differences between Vectors). Given an inner product $\langle \cdot, \cdot \rangle_*$ on the vector space $\mathbb{R}^n$ and a vector $a \in \mathbb{R}^n$, let $b =$
$$\mathbb{E}\big[\hat\beta^{AS}\big] - \beta_x = A_2(r_x, x) - \beta_x + B_2(r_x, x)$$
| []
|
[
"TINY TRANSDUCER: A HIGHLY-EFFICIENT SPEECH RECOGNITION MODEL ON EDGE DEVICES",
"TINY TRANSDUCER: A HIGHLY-EFFICIENT SPEECH RECOGNITION MODEL ON EDGE DEVICES"
]
| [
"Yuekai Zhang \nTencent Technology Co\nLtd, BeijingChina\n\nThe Johns Hopkins Univeristy\nBaltimoreMDUSA\n",
"Sining Sun \nTencent Technology Co\nLtd, BeijingChina\n",
"Long Ma \nTencent Technology Co\nLtd, BeijingChina\n"
]
| [
"Tencent Technology Co\nLtd, BeijingChina",
"The Johns Hopkins Univeristy\nBaltimoreMDUSA",
"Tencent Technology Co\nLtd, BeijingChina",
"Tencent Technology Co\nLtd, BeijingChina"
]
| []
| This paper proposes an extremely lightweight phonebased transducer model with a tiny decoding graph on edge devices. First, a phone synchronous decoding (PSD) algorithm based on blank label skipping is first used to speed up the transducer decoding process. Then, to decrease the deletion errors introduced by the high blank score, a blank label deweighting approach is proposed. To reduce parameters and computation, deep feedforward sequential memory network (DFSMN) layers are used in the transducer encoder, and a CNN-based stateless predictor is adopted. SVD technology compresses the model further. WFST-based decoding graph takes the context-independent (CI) phone posteriors as input and allows us to flexibly bias user-specific information. Finally, with only 0.9M parameters after SVD, our system could give a relative 9.1% -20.5% improvement compared with a bigger conventional hybrid system on edge devices. | 10.1109/icassp39728.2021.9413854 | [
"https://arxiv.org/pdf/2101.06856v2.pdf"
]
| 231,632,589 | 2101.06856 | 9d76e5d19f42b6acb506f2431f6695592ecca4c7 |
TINY TRANSDUCER: A HIGHLY-EFFICIENT SPEECH RECOGNITION MODEL ON EDGE DEVICES
Yuekai Zhang
Tencent Technology Co
Ltd, BeijingChina
The Johns Hopkins Univeristy
BaltimoreMDUSA
Sining Sun
Tencent Technology Co
Ltd, BeijingChina
Long Ma
Tencent Technology Co
Ltd, BeijingChina
TINY TRANSDUCER: A HIGHLY-EFFICIENT SPEECH RECOGNITION MODEL ON EDGE DEVICES
Index Terms - Transducer, on-device model, phone synchronous decoding
This paper proposes an extremely lightweight phone-based transducer model with a tiny decoding graph on edge devices. First, a phone synchronous decoding (PSD) algorithm based on blank label skipping is used to speed up the transducer decoding process. Then, to decrease the deletion errors introduced by the high blank score, a blank label deweighting approach is proposed. To reduce parameters and computation, deep feedforward sequential memory network (DFSMN) layers are used in the transducer encoder, and a CNN-based stateless predictor is adopted. SVD technology compresses the model further. A WFST-based decoding graph takes the context-independent (CI) phone posteriors as input and allows us to flexibly bias user-specific information. Finally, with only 0.9M parameters after SVD, our system could give a relative 9.1%-20.5% improvement compared with a bigger conventional hybrid system on edge devices.
INTRODUCTION
Recently, end-to-end (E2E) models [1,2,3,4,5] for automatic speech recognition (ASR) have become popular in the ASR community. Compared with conventional ASR systems [6,7], which include three components, an acoustic model (AM), a pronunciation model (PM), and a language model (LM), E2E models have only a single end-to-end trained neural model, yet with performance comparable to the conventional systems. Thus, E2E models are gradually replacing traditional hybrid models in industry [5,8]. Another line of research focuses on deploying ASR systems on devices such as cellphones, tablets, and embedded devices [8,9,10,11]. However, deployment of E2E models on devices presents several challenges. First, on-device ASR tasks usually require a streamable E2E model with low latency. Popular E2E models such as attention-based encoder-decoder (AED) models [12,13,14] have shown state-of-the-art performance on many tasks, but the attention mechanism is naturally unfriendly to online ASR. Second, customizability is desired in many on-device ASR scenarios: the model should perform well on user-specific information such as contacts' phone numbers and favorite song names. In [15], shallow fusion is combined with the E2E model's prediction during decoding. In [16], text-to-speech (TTS) technology is utilized to generate training samples from text-only data. However, these approaches all need to retrain the acoustic model or language model (LM). Finally, especially on edge devices where memory and computing resources are highly constrained, ASR systems have to be very compact (e.g., embedded devices for vehicles can only attribute a low memory and computing budget to ASR). To satisfy the above requirements, we present a highly-efficient ASR system, suitable for ASR tasks with insufficient computing resources. Our proposed system consists of a lightweight phone-based speech transducer and a tiny decoding graph. The transducer converts speech features to phone sequences. The decoding graph, composed of a lexicon and a grammar FST and named the LG graph, maps phone posteriors to word sequences. On the one hand, compared with conventional senone-based acoustic modeling, the phone-based speech transducer simplifies the acoustic modeling process. On the other hand, combining it with the LG graph makes it easy to fuse a language model or bias user-specific information into the decoding graph. Within our proposed architecture, we first adopt a phone synchronous decoding (PSD) algorithm based on the transducer with a blank skipping strategy, improving decoding speed dramatically with no recognition performance drop. Then, to alleviate the deletion errors caused by the over-scored blank prediction, we propose a blank label deweighting approach during speech transducer decoding, which reduces the deletion errors significantly in our experiments. To reduce model parameters and computation, a deep feedforward sequential memory network (DFSMN) is used to replace the RNN encoder, and a causal 1-D CNN-based (Conv1d) stateless predictor [17,18] is adopted. Finally, we apply singular value decomposition (SVD) to our speech transducer to further compress the model. Our tiny transducer achieves promising performance with only 0.9M parameters.
(* Work performed during an internship at Tencent. † The first two authors contributed equally to this work.)
TINY TRANSDUCER

The RNN-T model was proposed in [19] as an improvement of connectionist temporal classification (CTC) [20], which removes the strong prediction independence assumption of CTC. RNN-T includes three parts: an encoder, a predictor, and a joint network. Traditionally, both the encoder and the predictor consist of multi-layer recurrent neural networks such as LSTMs, resulting in high computation on devices. In this work, a DFSMN-based encoder and a causal Conv1d stateless predictor are used to achieve efficient computation on devices. Fig. 1 illustrates the architecture of our transducer model.
Streamable DFSMN Encoder
Due to the limited computation resources, the popular streaming architecture, the LSTM, is replaced with DFSMN layers. DFSMN combines FSMN [21] with low-rank matrix factorization [22,23] to reduce network parameters. To keep a good trade-off between model accuracy and latency, we set the number of left context frames in each DFSMN layer to eight, while the right context includes two frames. In this way, the deeper layers have a wider receptive field with more future information. Additionally, two CNN layers, each with stride two, are inserted before the DFSMN layers to perform subsampling, which gives an overall subsampling factor of four.
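The sketch below illustrates the memory-block idea behind an FSMN/DFSMN layer with eight left-context and two right-context taps. It is a simplified stand-in for the full DFSMN layer of [21,22] (no low-rank projections or skip connections), and the tap weights and dimensions are illustrative.

```python
import numpy as np

def fsmn_memory_block(h, left=8, right=2, taps=None):
    """Simplified FSMN-style memory block: each output frame is the current
    hidden vector plus a weighted sum of `left` past and `right` future frames.
    h has shape (T, D); taps has shape (left + 1 + right, D)."""
    T, D = h.shape
    if taps is None:
        taps = np.full((left + 1 + right, D), 1.0 / (left + 1 + right))
    padded = np.pad(h, ((left, right), (0, 0)))
    out = np.zeros_like(h)
    for t in range(T):
        window = padded[t : t + left + 1 + right]      # frames t-left ... t+right
        out[t] = h[t] + np.sum(taps * window, axis=0)  # residual + memory term
    return out

# Example: 100 frames of a 400-dim hidden sequence (the small-model encoder dim)
hidden = np.random.default_rng(0).standard_normal((100, 400)).astype(np.float32)
memory_out = fsmn_memory_block(hidden, left=8, right=2)
```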
Causal Conv1d Stateless Predictor
The predictor network has only one causal Conv1d layer. It takes the M previous predictions as input; we set M to four in our experiments. Formally, the predictor output at step u is

$$h^{pred}_u = \mathrm{Conv1d}\big(\mathrm{Embed}(y_{u-M}, \ldots, y_{u-1})\big) \qquad (1)$$
where Embed() maps the predicted labels to the corresponding embeddings. Fig. 1 also shows our Conv1d predictor.
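As an illustration of the stateless predictor, the sketch below embeds the last M = 4 labels and mixes them with a single causal convolution over that fixed window. This is not the authors' implementation; the vocabulary size (210 CI phones plus blank), embedding dimension, and output dimension are illustrative assumptions.

```python
import numpy as np

class Conv1dStatelessPredictor:
    """Stateless predictor: embed the last M labels and combine them with a
    single causal 1-D convolution (a learned per-position projection and sum)."""
    def __init__(self, vocab_size=211, embed_dim=64, out_dim=100, M=4, seed=0):
        rng = np.random.default_rng(seed)
        self.M = M
        self.embedding = rng.standard_normal((vocab_size, embed_dim)) * 0.1
        self.kernel = rng.standard_normal((M, embed_dim, out_dim)) * 0.1

    def __call__(self, history):
        """history: list of the last M label ids (previous phones or blank)."""
        assert len(history) == self.M
        emb = self.embedding[np.array(history)]          # (M, embed_dim)
        # Causal conv over the fixed window == sum of per-position projections
        return np.einsum("me,meo->o", emb, self.kernel)  # (out_dim,)

predictor = Conv1dStatelessPredictor()
h_pred = predictor([0, 0, 5, 17])    # e.g., blanks followed by two phone ids
```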
DECODING WITH TINY TRANSDUCER
In this work, we choose to use CI phones as prediction units.
Combining the transducer with a traditional WFST decoder allows us to flexibly inject biased contextual information into the decoding graph without retraining the acoustic model or the LM. During the decoding process, the CI phone posterior probabilities from the transducer model are the WFST decoder's input. Our WFST decoder includes two separate WFSTs: the lexicon (L) and the language model, or grammar (G). The final search graph (LG) can be presented as follows:

$$LG = \min\big(\det(L \circ G)\big) \qquad (2)$$

where min and det represent the minimize and determinize operations, respectively. To further speed up the decoding process and reduce parameters, we introduce our PSD algorithm and SVD technology in this section.
Phone Synchronous Decoding with Blank Skipping
The PSD algorithm was first used in [24] to speed up decoding and reduce memory usage with a CTC lattice. A CTC model's peaky posterior property allows the PSD algorithm to ignore blank prediction frames and compress the search space. We found that the same peaky posterior property also exists in an RNN-T model: in the transducer lattice, most frames are aligned with blank symbols. Motivated by this, we present a PSD algorithm based on the RNN-T lattice. The decoding formulation for the RNN-T model using phones as prediction units is derived as follows:

$$w^* = \arg\max_w \{P(w)\,p(x|w)\} = \arg\max_w \{P(w)\,p(x|p_w)\} = \arg\max_w \left\{P(w)\,\frac{P(x)\,p(p_w|x)}{P(p_w)}\right\} \qquad (3)$$
where x is the acoustic feature sequence, and w and p_w are the word sequence and the corresponding phone sequence. Since P(x) does not depend on w, Equation (3) can be further simplified to:

$$w^* = \arg\max_w \left\{\frac{P(w)}{P(p_w)} \max_{p_w} p(p_w|x)\right\} \qquad (4)$$
We denote the standard decoding method as the frame synchronous decoding (FSD) algorithm. When using the Viterbi beam search algorithm, the FSD Viterbi beam search can be written, from Equation (4), as:

$$w^* = \arg\max_w \left\{\frac{P(w)}{P(p_w)} \max_{\pi:\,\pi\in\mathcal{L}'^{T},\,\beta(\pi_{1:T})=p_w} \left\{\prod_{t \notin U} y^t_{\pi_t} \times \prod_{t \in U} y^t_{\text{blank}}\right\}\right\} \qquad (5)$$
where π is a possible alignment path, $\mathcal{L}'$ is the CI phone set plus the blank symbol, and $y^t_k$ represents the posterior probability of RNN-T output unit k at time t. U is the set of time steps at which $y^t_{\text{blank}}$ is close to one; the size of U can be controlled by setting a threshold on the blank label's posterior $y^t_{\text{blank}}$. Since $\pi_t = \text{blank}$ does not change the corresponding output phone sequence $\beta(\pi_{1:T})$, and assuming all competing alignment paths share similar blank frame positions, we can ignore the scores of the blank frames. The equation below formulates our PSD algorithm on the RNN-T lattice:
$$w^* = \arg\max_w \left\{\frac{P(w)}{P(p_w)} \max_{\pi:\,\pi\in\mathcal{L}'^{T},\,\beta(\pi_{1:T})=p_w} \prod_{t \notin U} y^t_{\pi_t}\right\} \qquad (6)$$
In this way, the PSD method avoids redundant searches due to the large number of blank frames. The PSD algorithm is summarized in Algorithm 1. Note that we break the transducer lattice rule slightly in decoding: one frame only outputs one phone label or blank [25].

To reduce the deletion errors caused by high blank label scores, we combine the blank frame skipping strategy with a blank label deweighting technique in Algorithm 1. We first deweight the blank scores by subtracting a deweighting factor in the log domain. Then frames with blank scores above a predefined threshold are filtered out. The results in Section 4.4 show that the deweighting method reduces the deletion errors significantly. By changing the blank threshold, we can control how many blank frames are skipped.
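A minimal sketch of the blank skipping and blank deweighting step is shown below (Python/NumPy). The posterior matrix, deweight value, and threshold are illustrative, the WFST search itself is not shown, and the ordering used here (thresholding on the raw blank posterior, then deweighting the retained blank scores) is one reading of Algorithm 1.

```python
import numpy as np

def psd_filter(posteriors, blank_id=0, beta_blank=2.0, gamma_blank=0.95):
    """Skip frames whose blank posterior exceeds gamma_blank, and deweight the
    blank label of the remaining frames by subtracting beta_blank in the log
    domain. posteriors: (T, V) per-frame label posteriors from the transducer.
    Returns log-scores of the surviving frames (to be passed to the WFST
    decoder) and the keep mask."""
    keep = posteriors[:, blank_id] <= gamma_blank        # blank skipping
    log_scores = np.log(posteriors[keep] + 1e-12)
    log_scores[:, blank_id] -= beta_blank                # blank deweighting
    return log_scores, keep

# Example: 6 frames over a toy 4-label vocabulary (label 0 = blank)
post = np.array([[0.98, 0.01, 0.005, 0.005],
                 [0.10, 0.80, 0.05, 0.05],
                 [0.99, 0.003, 0.004, 0.003],
                 [0.97, 0.01, 0.01, 0.01],
                 [0.05, 0.05, 0.85, 0.05],
                 [0.99, 0.004, 0.003, 0.003]])
log_scores, mask = psd_filter(post)
blank_rate = 1.0 - mask.mean()       # alpha in Eq. (7): fraction of skipped frames
```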
Model Compression with SVD
We further reduce the model parameters using SVD. Since our parameters mainly come from the feed-forward projection layers in the DFSMN encoder, SVD is only used on these projection layers' weight matrices. Following the strategy in [26], we first reduce the model size by SVD then fine-tune the compressed model to reduce the accuracy loss.
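The sketch below shows the basic SVD factorization applied to a single projection weight matrix; the matrix shape and retained rank are illustrative, and in practice the compressed factors are fine-tuned afterwards as described.

```python
import numpy as np

def svd_compress(W, rank):
    """Replace a dense projection W (out_dim x in_dim) with two low-rank
    factors, W ~= A @ B, where A is (out_dim x rank) and B is (rank x in_dim)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]          # fold singular values into the left factor
    B = Vt[:rank]
    return A, B

W = np.random.default_rng(0).standard_normal((1024, 400)).astype(np.float32)
A, B = svd_compress(W, rank=128)
params_before = W.size
params_after = A.size + B.size          # (1024 + 400) * 128, roughly a 2.2x reduction
approx_error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```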
EXPERIMENT
The experiments are conducted on an 18,000-hour in-car Mandarin speech dataset, which includes enquiries, navigations, and conversational speech collected from Tencent in-car speech assistant products. All the data are anonymized and hand-transcribed. The development and test sets consist of 3382 and 6334 utterances, about 4 hours and 7 hours, respectively.
Model and Training Details
Our model takes 40-dimensional power-normalized cepstral coefficient (PNCC) features [27] as input, computed with a 25 ms window and a stride of 10 ms. The Adam optimizer, with an initial learning rate of 0.0005, is used to train the transducer model. SpecAugment [28] with mask parameter (F = 20) and ten time masks with maximum time-mask ratio (pS = 0.05) is used as preprocessing. A 4-gram language model is trained using text data and an additional text-only corpus. We have three different configurations for the large, medium, and small models' encoders. The predictors are all one-layer CNNs with different input dimensions according to the corresponding encoder size. The output units include 210 context-independent (CI) phones and the blank symbol. Transducer models are implemented with the ESPnet [29] toolkit. We first store the predicted posterior probability matrices of CI phones; the EESEN [30] toolkit is then used to process the posterior probabilities and produce the decoding results. Table 1 summarizes the model architecture details. Table 2 shows the word error rate (WER) results of the conventional hybrid system and four RNN-T models with different sizes. They use the same language model. The hybrid system uses a TDNN as the acoustic model with 2.5M parameters, which is comparable with our small transducer model. By combining the end-to-end transducer model with the LG WFST decoder, we surpass the hybrid system's performance while keeping the flexibility of WFST to better customize the ASR system. Furthermore, Table 2 also shows results of the small model after SVD. With only 0.9M parameters, the SVD model with fine-tuning achieves 19.57% CER, still better than the hybrid system.
RTF Results for PSD and FSD Algorithms
In this section, we show the relationship between the blank rate and the corresponding threshold, and then give the real-time factor (RTF) and WER results on our small model. We denote the blank rate α as follows:

$$\alpha = \frac{1}{T}\,\mathrm{size}\big(U(\gamma_{blank})\big) \qquad (7)$$

where T is the sequence length and $U(\gamma_{blank})$ is the set including all blank frames:

$$U(\gamma_{blank}) = \{\,\text{frames} : p_{t,u}(\text{blank}) > \gamma_{blank}\,\} \qquad (8)$$
The number of blank frames is controlled by the blank posterior probability threshold γ blank . In decoding, all frames in the set U would be skipped. When γ blank is larger than 1, no frames would be regarded as blank frames. In this case, the PSD algorithm would degrade to the FSD algorithm. Table 3 gives a comparison of different threshold values and the corresponding results. Since PSD and FSD algorithms only have differences during WFST decoding time, we use RTF to represent the entire computation process, including transducer forward time and decoding time. S-RTF denotes the WFST search time. We conduct the experiments on a server with Intel(R) Xeon(R) E5 CPU for proof-of-concept. The results show that setting γ blank as 0.95 would give a good balance between speed and accuracy.
Blank Frames Deweight
Following the strategy in [31], we do not normalize the phone label posteriors in decoding. We deweight the blank labels' posteriors to add a cost to deletion errors in decoding; the other label posteriors remain unchanged. We subtract a blank deweighting value in the log probability domain, which is equivalent to dividing the blank labels' posterior probability by a constant. Figure 2 shows the deletion, substitution, and insertion errors for our small transducer model on the development set. We can always reduce the deletion errors by subtracting a larger deweighting value; however, too large a deweighting value increases the total WER. We tune the deweighting value on the development set and use a value of two to get the best result.
Performance on edge devices
We also deploy our system on edge devices. Int8 quantization is used to reduce memory consumption and speed up inference. Note that, in order to trade off speech recognition accuracy and inference efficiency, only FSMN layers, which are parameter intensive, are quantized. Because our quantized model obtains similar accuracy as the results in Table 2, we only report mean CPU usage and RTF of our small RNN-T model in Table 4. From Table 4, our proposed PSD method can significantly reduce CPU usage and RTF compared with FSD.
CONCLUSION
This paper introduces the pipeline for designing a highly compact speech recognition system for extremely low-resource edge devices. To fulfill the streaming requirement under low-computation and small-model-size constraints, we choose the transducer lattice with a DFSMN encoder. The LSTM predictor is replaced with a Conv1d layer to further reduce the parameters and computation. To keep contextual and customizable recognition ability, we use CI phones as our modeling unit and bias the language model at the WFST decoding stage. A novel PSD decoding algorithm based on the transducer lattice is first proposed to speed up the decoding process. In addition, blank label deweighting and SVD technologies are adopted to improve recognition performance. The proposed system is a speech recognizer with few parameters that realizes streaming, fast, and accurate speech recognition.
Fig. 1. The architecture of the transducer model.

Algorithm 1: PSD algorithm
Input: features {x_0, ..., x_{T-1}}, blank deweight value β_blank, blank threshold γ_blank, Conv1d look-back M
Output: predicted word sequences w*
1: y_in = Zeros(M), u = 1, Q_posterior = {}, w* = {}
2: h^pred_0 = Predictor(y_in)
3: for time t ← 0 to T − 1 do
...
12:     if p_{t,u}(blank) ≤ γ_blank then
13:         Enqueue(Q_posterior, p_{t,u})
14:         w_{t,u} = WFSTDecoding(LG, Q_posterior)
15:         Enqueue(w*, w_{t,u})
16:     end if
17: end for
18: return w*

Fig. 2. WER for different blank deweight values β_blank.
Table 1. Details for the Large, Medium, and Small models

Model             Large    Medium   Small
# Parameters      11M      4.5M     1.6M
Encoder Dim       1024     512      400
# DFSMN layers    8        8        8
Joint Dim         512      256      100

4.2. WER Results on Models

Table 2. WER results on the dev and test sets

                   # parameters   Dev WER(%)   Test WER(%)
Hybrid System      2.5M           19.77        21.53
Large Model        11M            10.49        14.44
Medium Model       4.5M           11.72        15.17
Small Model        1.6M           14.12        18.11
 + SVD fine-tune   0.9M           15.71        19.57

Table 3. Results with different threshold values

Method   γ_blank   α(%)    RTF     S-RTF   Dev WER(%)   Test WER(%)
FSD      1.0       0       0.069   0.053   14.12        18.11
PSD      0.95      77.08   0.034   0.017   14.12        18.10
PSD      0.85      80.73   0.033   0.017   19.73        23.85
PSD      0.75      82.76   0.032   0.016   36.79        41.28

Table 4. On-device CPU usage and RTF results

Arm CPU           ARMv7 4-core   AArch64 4-core
CPU usage  FSD    48.5%          38.8%
           PSD    21.5%          6.2%
RTF        FSD    2.88           2.66
           PSD    0.55           0.42
| []
|
[
"Entanglement of Formation for a Class of Quantum States",
"Entanglement of Formation for a Class of Quantum States"
]
| [
"Shao-Ming Fei [email protected]:[email protected]:[email protected]:[email protected] \nInstitut für Angewandte Mathematik\nUniversität Bonn\n53115Bonn\n",
"Jürgen †1 \nMax-Planck-Institute for Mathematics in the Sciences\n04103Leipzig\n",
"Jost \nMax-Planck-Institute for Mathematics in the Sciences\n04103Leipzig\n",
"Xianqing Li-Jost \nMax-Planck-Institute for Mathematics in the Sciences\n04103Leipzig\n",
"Guo-Fang Wang \nMax-Planck-Institute for Mathematics in the Sciences\n04103Leipzig\n\nInstitut für Angewandte Mathematik\nUniversität Bonn\n53115Bonn\n",
"\nDepartment of Mathematics\nCapital Normal University\n100037Beijing\n"
]
| [
"Institut für Angewandte Mathematik\nUniversität Bonn\n53115Bonn",
"Max-Planck-Institute for Mathematics in the Sciences\n04103Leipzig",
"Max-Planck-Institute for Mathematics in the Sciences\n04103Leipzig",
"Max-Planck-Institute for Mathematics in the Sciences\n04103Leipzig",
"Max-Planck-Institute for Mathematics in the Sciences\n04103Leipzig",
"Institut für Angewandte Mathematik\nUniversität Bonn\n53115Bonn",
"Department of Mathematics\nCapital Normal University\n100037Beijing"
]
| []
| Entanglement of formation for a class of higher dimensional quantum mixed states is studied in terms of a generalized formula of concurrence for N-dimensional quantum systems. As applications, the entanglement of formation for a class of 16 × 16 density matrices is calculated. Quantum entanglement plays important roles in quantum communication, information processing and quantum computing [1], such as in the investigation of quantum teleportation [2, 3, 4], dense coding [5], decoherence in quantum computers [1] and the evaluation of quantum cryptographic schemes [6]. To quantify entanglement, a well justified and mathematically tractable measure of entanglement is needed. A number of entanglement measures, such as the entanglement of formation and distillation [7,8,9], negativity [10,11], and relative entropy [9,12], have been proposed for bipartite states [6,8,12-17]. Nevertheless, most | 10.1016/s0375-9601(03)00379-7 | [
"https://export.arxiv.org/pdf/quant-ph/0304095v1.pdf"
]
| 12,376,469 | quant-ph/0304095 | b3d49a97af57e03d942237e77355aa0b94e66677 |
Entanglement of Formation for a Class of Quantum States
Apr 2003
Shao-Ming Fei
Institut für Angewandte Mathematik
Universität Bonn
53115Bonn
Jürgen †1
Max-Planck-Institute for Mathematics in the Sciences
04103Leipzig
Jost
Max-Planck-Institute for Mathematics in the Sciences
04103Leipzig
Xianqing Li-Jost
Max-Planck-Institute for Mathematics in the Sciences
04103Leipzig
Guo-Fang Wang
Max-Planck-Institute for Mathematics in the Sciences
04103Leipzig
Institut für Angewandte Mathematik
Universität Bonn
53115Bonn
Department of Mathematics
Capital Normal University
100037Beijing
Entanglement of Formation for a Class of Quantum States
arXiv:quant-ph/0304095v1, 13 Apr 2003. PACS numbers: 03.65.Bz, 89.70.+c. Key words: Entanglement of Formation, Generalized Concurrence
Entanglement of formation for a class of higher dimensional quantum mixed states is studied in terms of a generalized formula of concurrence for N-dimensional quantum systems. As applications, the entanglement of formation for a class of 16 × 16 density matrices is calculated. Quantum entanglement plays important roles in quantum communication, information processing and quantum computing [1], such as in the investigation of quantum teleportation [2, 3, 4], dense coding [5], decoherence in quantum computers [1] and the evaluation of quantum cryptographic schemes [6]. To quantify entanglement, a well justified and mathematically tractable measure of entanglement is needed. A number of entanglement measures, such as the entanglement of formation and distillation [7,8,9], negativity [10,11], and relative entropy [9,12], have been proposed for bipartite states [6,8,12-17]. Nevertheless, most
proposed measures of entanglement involve extremizations which are difficult to handle analytically.
The "entanglement of formation" is intended to quantify the amount of quantum communication required to create a given state [7]. Although it is defined for arbitrary dimensions, so far no explicit analytic formulae for entanglement of formation have been found for systems larger than a pair of qubits, due to the fact that two dimensional bipartite mixed states are special in many ways [16], except for some special symmetric states [17].
In this letter we study the entanglement of formation for a class of higher dimensional quantum mixed states. For certain N-dimensional pure quantum systems, we show that the entanglement of formation is a monotonically increasing function of a kind of generalized concurrence. As applications, the entanglement of formation for a class of 16 × 16 density matrices is calculated in detail. The method applies to a large class of quantum states.
The construction of these states is presented for N-dimensional, N = 2^{k+1}, 2 ≤ k ∈ ℕ, bipartite systems.
Let H be an N-dimensional complex Hilbert space with orthonormal basis e i , i = 1, ..., N.
A pure state on H ⊗ H is generally of the form,
$$|\psi\rangle = \sum_{i,j=1}^{N} a_{ij}\, e_i \otimes e_j, \qquad a_{ij} \in \mathbb{C}, \tag{1}$$
with normalization
$$\sum_{i,j=1}^{N} a_{ij} a_{ij}^{*} = 1. \tag{2}$$
The entanglement of formation E is defined as the entropy of either of the two sub-Hilbert spaces of H ⊗ H [8],
$$E(|\psi\rangle) = -\mathrm{Tr}(\rho_1 \log_2 \rho_1) = -\mathrm{Tr}(\rho_2 \log_2 \rho_2), \tag{3}$$
where ρ_1 (resp. ρ_2) is the partial trace of |ψ⟩⟨ψ| over the first (resp. second) Hilbert space of H ⊗ H.
Let A denote the matrix with entries given by a_{ij} in (1). ρ_1 can be expressed as
$$\rho_1 = AA^{\dagger}. \tag{4}$$
For a given density matrix of a pair of quantum systems on H ⊗ H, consider all possible pure-state decompositions of ρ, i.e., all ensembles of states |ψ_i⟩ of the form (1) with probabilities p_i,
$$\rho = \sum_{i=1}^{M} p_i\, |\psi_i\rangle\langle\psi_i|, \qquad \sum_{i=1}^{M} p_i = 1,$$
for some M ∈ ℕ. The entanglement of formation for the mixed state ρ is defined as the average entanglement of the pure states of the decomposition, minimized over all possible decompositions of ρ,
$$E(\rho) = \min \sum_{i=1}^{M} p_i\, E(|\psi_i\rangle). \tag{5}$$
It is a challenge to calculate (5) for general N. Till now a general explicit formula of E(ρ) has been obtained only for the case N = 2. In this case (3) can be written as
$$E(|\psi\rangle)\big|_{N=2} = h\!\left(\frac{1 + \sqrt{1 - C^2}}{2}\right), \qquad h(x) = -x\log_2 x - (1-x)\log_2(1-x),$$
where C is called the concurrence [15]:
$$C(|\psi\rangle) = |\langle\psi|\tilde{\psi}\rangle| = 2|a_{11}a_{22} - a_{12}a_{21}|,$$
where |ψ̃⟩ = σ_y ⊗ σ_y |ψ*⟩, |ψ*⟩ is the complex conjugate of |ψ⟩, and σ_y is the Pauli matrix
$$\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}.$$
As E is a monotonically increasing function of C, C itself can be taken as a measure of entanglement. Calculating (5) then reduces to calculating the corresponding minimum of
$$C(\rho) = \min \sum_{i=1}^{M} p_i\, C(|\psi_i\rangle),$$
which simplifies the problem.
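As a quick numerical illustration of the N = 2 formulas above (a minimal NumPy sketch, not part of the original paper; the coefficient matrix here is an arbitrary random example):

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A /= np.linalg.norm(A)                          # normalization (2)

C = 2 * abs(A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0])   # concurrence of the pure state

def h(x):
    # binary entropy, with the convention 0*log(0) = 0
    return 0.0 if x in (0.0, 1.0) else -x * np.log2(x) - (1 - x) * np.log2(1 - x)

E_from_C = h((1 + np.sqrt(1 - C**2)) / 2)       # E via the concurrence formula

# direct computation of E as the entropy of the reduced density matrix rho_1 = A A^dagger
w = np.linalg.eigvalsh(A @ A.conj().T)
E_direct = -sum(x * np.log2(x) for x in w if x > 1e-12)

print(C, E_from_C, E_direct)                    # the two entropies agree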
For N ≥ 3, there is no such concurrence C in general. The concurrences discussed in [18] can only be used to judge whether a pure state is separable (or maximally entangled) or not [19,20]. The entanglement of formation is no longer a monotonically increasing function of these concurrences. Nevertheless, for a special class of quantum states, we can find certain quantities (generalized concurrences) that simplify the calculation of the corresponding entanglement of formation.
[Theorem 1]. If AA † has only two non-zero eigenvalues (each of which may be degenerate), the maximal non-zero diagonal determinant D of AA † is a generalized concurrence. The entanglement of formation of the corresponding pure state is a monotonically increasing function of D.
[Proof]. Let λ_1 (resp. λ_2) be the two non-zero eigenvalues of AA† with degeneracy n (resp. m), n + m ≤ N. That is,
$$D = \lambda_1^{n}\,\lambda_2^{m}. \tag{6}$$
From the normalization of |ψ⟩, one has Tr(AA†) = 1, i.e.,
$$n\lambda_1 + m\lambda_2 = 1. \tag{7}$$
λ_1 (resp. λ_2) takes values in (0, 1/n) (resp. (0, 1/m)). In this case the entanglement of formation of |ψ⟩ is given by
$$E(|\psi\rangle) = -n\lambda_1 \log_2 \lambda_1 - m\lambda_2 \log_2 \lambda_2. \tag{8}$$
According to (6) and (7) we get
$$\frac{\partial E}{\partial D} = \frac{m\,\lambda_1^{1-n}}{1 - n\lambda_1 - m\lambda_1}\left(\frac{1 - n\lambda_1}{m}\right)^{1-m}\log_2\frac{1 - n\lambda_1}{m\lambda_1}, \tag{9}$$
which is positive for λ_1 ∈ (0, 1/n). Therefore E(|ψ⟩) is a monotonically increasing function of D. D is a generalized concurrence and can be taken as a kind of measure of entanglement in this case.
Remark:
We have assumed that λ_1, λ_2 ≠ 0 in our theorem. In fact the right-hand side of (9) remains positive even when λ_1 (or equivalently λ_2) goes to zero. Hence E(|ψ⟩) is a monotonically increasing function of D for λ_1 ∈ [0, 1/n] (resp. λ_2 ∈ [0, 1/m]) satisfying the relation (7). Nevertheless, if λ_1 = 0 (or λ_2 = 0), from (6) one gets D = 0, which does not necessarily mean that the corresponding state |ψ⟩ is separable. As E(|ψ⟩) is just a monotonically increasing function of D, D only characterizes the relative degree of entanglement among the class of these states. From (7) and (8), the quantum states with the measure of entanglement characterized by D are generally entangled. They are separable only when n = 1, λ_1 → 1 (λ_2 → 0) or m = 1, λ_2 → 1 (λ_1 → 0). For the case n = m > 1, all the pure states in this class are non-separable.
In this case, we have
$$E(|\psi\rangle) = n\left[-x\log_2 x - \left(\tfrac{1}{n} - x\right)\log_2\left(\tfrac{1}{n} - x\right)\right], \tag{10}$$
where
$$x = \frac{1}{2}\left(\frac{1}{n} + \sqrt{\frac{1}{n^2}\,(1 - d^{2})}\right) \quad\text{and}\quad d \equiv 2nD^{\frac{1}{2n}} = 2n\sqrt{\lambda_1\lambda_2}. \tag{11}$$
In this case we define d to be the generalized concurrence. d takes values from 0 to 1. From (10) one can show that E(d) is a convex function (that is, curving upward):
$$\frac{\partial^{2} E}{\partial d^{2}} = \frac{\log\dfrac{1+\sqrt{1-d^{2}}}{1-\sqrt{1-d^{2}}} - 2\sqrt{1-d^{2}}}{(1-d^{2})^{3/2}\log 4} > 0, \qquad \forall\, d \in [0,1].$$
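The monotonicity and convexity of E(d) can be checked numerically; a small sketch (NumPy, not from the paper) for the case n = m = 2 used below:

import numpy as np

n = 2
d = np.linspace(1e-4, 1 - 1e-4, 2001)
x = 0.5 * (1/n + np.sqrt((1 - d**2) / n**2))            # Eq. (11)
E = n * (-x * np.log2(x) - (1/n - x) * np.log2(1/n - x))  # Eq. (10)
print(np.all(np.diff(E) > 0), np.all(np.diff(E, 2) > 0))  # increasing and convex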
Instead of calculating E(ρ) directly, one may calculate the minimum decomposition of D(ρ) or d(ρ) to simplify the calculations. In the following, as an example, we calculate the entanglement of formation for a class of mixed states with N = 4.
We consider a class of pure states (1) with the matrix A given by
$$A = \begin{pmatrix} 0 & b & a_1 & b_1 \\ -b & 0 & c_1 & d_1 \\ a_1 & c_1 & 0 & -e \\ b_1 & d_1 & e & 0 \end{pmatrix}, \qquad a_1, b_1, c_1, d_1, b, e \in \mathbb{C}. \tag{12}$$
The matrix AA† has two eigenvalues, each with degeneracy two, i.e., n = m = 2, and
$$\det(AA^{\dagger}) = |b_1 c_1 - a_1 d_1 + be|^{4}. \tag{13}$$
According to our theorem, the generalized concurrence
$$d = 4\,|b_1 c_1 - a_1 d_1 + be| \tag{14}$$
is a kind of measure of entanglement for all pure states of the form (12). (14) can be written as
$$d = |\langle\psi|\,p\,\psi^{*}\rangle| \equiv |\langle\psi|\tilde{\psi}\rangle|, \tag{15}$$
where ⟨ψ|ψ̃⟩ = ⟨ψ|p ψ*⟩.
We now calculate the entanglement of formation for a special class of mixed states. Let Ψ denote the set of pure states (1) with A given as the form of (12). We consider all mixed states with density matrix ρ such that its decompositions are of the form
$$\rho = \sum_{i=1}^{M} p_i\, |\psi_i\rangle\langle\psi_i|, \qquad \sum_{i=1}^{M} p_i = 1, \qquad |\psi_i\rangle \in \Psi. \tag{16}$$
Let s ≤ 16 be the rank of ρ and |v_i⟩, i = 1, ..., s, be a complete set of orthogonal eigenvectors corresponding to the nonzero eigenvalues of ρ, normalized such that ⟨v_i|v_i⟩ is equal to the ith eigenvalue. Other decompositions {|w_i⟩} of ρ can then be obtained through unitary transformations:
$$|w_i\rangle = \sum_{j=1}^{s} U^{*}_{ij}\, |v_j\rangle, \tag{17}$$
where U is a t × t unitary matrix, t ≥ s. The states |w_i⟩ are normalized such that ρ = Σ_i |w_i⟩⟨w_i|. It is obvious that for any |ψ_i⟩ ∈ Ψ, a complex linear combination of the |ψ_i⟩ (a unitary transformation) also belongs to Ψ.
The decomposition according to the orthogonal eigenvectors |v_i⟩ of ρ is not, in general, the one satisfying (5). As the generalized concurrence of a pure state can be written in the form (15), we consider the quantity ⟨w_i|w̃_j⟩. From (17) we have
$$\langle w_i|\tilde{w}_j\rangle = (U\tau U^{T})_{ij},$$
where the matrix τ is defined by τ_{ij} ≡ ⟨v_i|ṽ_j⟩. The matrix p in (15) is symmetric, therefore τ is also symmetric and can always be diagonalized by a unitary matrix U such that UτU^T = diag(Λ_1, ..., Λ_s) [21]. The diagonal elements Λ_i, in decreasing order, can always be made real and non-negative. Since Uττ*U† is also diagonal, the Λ_i are just the square roots of the eigenvalues of ττ*. It is straightforward to check that they are also the eigenvalues of the Hermitian matrix R ≡ √(√ρ · p ρ* p · √ρ), or, alternatively, the square roots of the eigenvalues of the non-Hermitian matrix ρ p ρ* p.
Hence there always exists a decomposition consisting of states |w_i⟩, i = 1, ..., s, such that
$$\langle w_i|\tilde{w}_j\rangle = \Lambda_i\, \delta_{ij}. \tag{18}$$
We can now deal with the problem in a way similar to [15]. Set
$$|y_1\rangle = |w_1\rangle, \qquad |y_j\rangle = i\,|w_j\rangle \ \ \text{for } j = 2, ..., s. \tag{19}$$
Any decomposition can be written in terms of the states |y_i⟩ via the equation
$$|z_i\rangle = \sum_{j=1}^{s} V^{*}_{ij}\, |y_j\rangle,$$
where V is a t × s matrix whose s columns are orthonormal vectors.
The average concurrence of a general decomposition is given by
$$\bar{d} = \sum_i \left|(VYV^{T})_{ii}\right| = \sum_i \Big|\sum_j (V_{ij})^{2}\, Y_{jj}\Big|, \tag{20}$$
where Y is the real diagonal matrix defined by Y_{ij} = ⟨y_i|ỹ_j⟩. Using the fact that Σ_i |(V_{ij})²| = 1, one gets
$$\bar{d} \;\ge\; \Big|\sum_{ij} (V_{ij})^{2}\, Y_{jj}\Big| \;\ge\; \Lambda_1 - \sum_{i=2}^{16}\Lambda_i.$$
Therefore the minimum of the generalized concurrence over all decompositions is
$$d(\rho) = \Lambda_1 - \sum_{i=2}^{16}\Lambda_i. \tag{21}$$
Similar to the case N = 2, there are decompositions such that the generalized concurrence of each individual state is equal to d(ρ). Therefore the average entanglement is E(d(ρ)).
Different from the case N = 2, the entanglement of formation of the density matrices (16) cannot be zero in general. As every individual pure state in the decompositions is generally entangled, this class of mixed states is not separable.
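The computation of d(ρ) in (21) is straightforward to carry out numerically. Below is a rough NumPy sketch (not the authors' code): the Λ_i are obtained as the square roots of the eigenvalues of ρ p ρ* p, with p the 16 × 16 matrix of the footnote at the end of the paper (reading its two middle entries as p_{8,9} = p_{9,8} = −1) and assuming row-major ordering of the product basis; the consistency check uses a pure state of the class (12), for which d(ρ) should reproduce (14) up to numerical noise.

import numpy as np

def build_p16():
    p = np.zeros((16, 16))
    plus = [(1, 16), (2, 15), (4, 10), (5, 12), (6, 11), (7, 13),
            (10, 4), (11, 6), (12, 5), (13, 7), (15, 2), (16, 1)]
    minus = [(3, 14), (8, 9), (9, 8), (14, 3)]
    for i, j in plus:
        p[i - 1, j - 1] = 1.0
    for i, j in minus:
        p[i - 1, j - 1] = -1.0
    return p

def generalized_concurrence(rho, p):
    # Lambda_i = sqrt of eigenvalues of rho p rho^* p, in decreasing order; Eq. (21)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ p @ rho.conj() @ p))))[::-1]
    return lam[0] - lam[1:].sum()

rng = np.random.default_rng(1)
a1, b1, c1, d1, b, e = rng.normal(size=6)
A = np.array([[0, b, a1, b1], [-b, 0, c1, d1],
              [a1, c1, 0, -e], [b1, d1, e, 0]], dtype=complex)
nrm = np.linalg.norm(A)
A = A / nrm
a1, b1, c1, d1, b, e = np.array([a1, b1, c1, d1, b, e]) / nrm

psi = A.reshape(-1)                         # coefficients of |psi> in the product basis
rho = np.outer(psi, psi.conj())
print(generalized_concurrence(rho, build_p16()), 4 * abs(b1 * c1 - a1 * d1 + b * e))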
In the following we call an N-dimensional pure state (1) d-computable if A satisfies the following relations:
$$\det(AA^{\dagger}) = ([A][A]^{*})^{N/2}, \qquad \det(AA^{\dagger} - \lambda\,\mathrm{Id}_N) = \big(\lambda^{2} - \|A\|\lambda + [A][A]^{*}\big)^{N/2}, \tag{22}$$
where [A] and ‖A‖ are quadratic forms of the a_{ij} (these quadratic forms may be different for different matrices A) and Id_N is the N × N identity matrix. We denote by 𝒜 the set of matrices satisfying (22), which implies that for A ∈ 𝒜, AA† has at most two different eigenvalues, each of order N/2. Formula (21) can be generalized to general N² × N² density matrices with decompositions on N-dimensional d-computable pure states.
A class of N-dimensional, N = 2^{k}, 2 ≤ k ∈ ℕ, d-computable states has been constructed in [22]. These states give rise to a special class of density matrices with decompositions in these pure states, and the entanglement of formation for these density matrices can be calculated analytically according to the method above.
Let A be an N × N matrix with entries a ij ∈ C, i, j = 1, ..., N, with the following properties:
Set
$$A_2 = \begin{pmatrix} a & -c \\ c & d \end{pmatrix},$$
where a, c, d ∈ ℂ. For any b_1, c_1 ∈ ℂ, a 4 × 4 matrix A_4 ∈ 𝒜 can be constructed in the following way,
$$A_4 = \begin{pmatrix} B_2 & A_2 \\ -A_2^{t} & C_2^{t} \end{pmatrix} = \begin{pmatrix} 0 & b_1 & a & -c \\ -b_1 & 0 & c & d \\ -a & -c & 0 & -c_1 \\ c & -d & c_1 & 0 \end{pmatrix}, \tag{23}$$
where
$$B_2 = b_1 J_2, \qquad C_2 = c_1 J_2, \qquad J_2 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.$$
A_4 satisfies the relations in (22):
$$\det(A_4 A_4^{\dagger}) = \big[(b_1 c_1 + ad + c^{2})(b_1 c_1 + ad + c^{2})^{*}\big]^{2} = ([A_4][A_4]^{*})^{2},$$
$$\det(A_4 A_4^{\dagger} - \lambda\,\mathrm{Id}_4) = \big(\lambda^{2} - (b_1 b_1^{*} + c_1 c_1^{*} + aa^{*} + 2cc^{*} + dd^{*})\lambda + (b_1 c_1 + ad + c^{2})(b_1 c_1 + ad + c^{2})^{*}\big)^{2} = \big(\lambda^{2} - \|A_4\|\lambda + [A_4][A_4]^{*}\big)^{2}, \tag{24}$$
where [A_4] = b_1 c_1 + ad + c², and ‖A_4‖ = b_1 b_1^{*} + c_1 c_1^{*} + aa^{*} + 2cc^{*} + dd^{*}.
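These relations are easy to verify numerically; a quick NumPy check (illustrative, with random complex parameters, not part of the original paper):

import numpy as np

rng = np.random.default_rng(0)
a, c, d, b1, c1 = rng.normal(size=5) + 1j * rng.normal(size=5)

J2 = np.array([[0, 1], [-1, 0]], dtype=complex)
A2 = np.array([[a, -c], [c, d]])
A4 = np.block([[b1 * J2, A2], [-A2.T, (c1 * J2).T]])       # Eq. (23)

bracket = b1 * c1 + a * d + c**2                           # [A_4]
norm = abs(b1)**2 + abs(c1)**2 + abs(a)**2 + 2 * abs(c)**2 + abs(d)**2   # ||A_4||

M = A4 @ A4.conj().T
print(np.isclose(np.linalg.det(M), (bracket * bracket.conjugate())**2))  # first line of Eq. (24)
# the eigenvalues of M are the two roots of lambda^2 - ||A_4|| lambda + |[A_4]|^2, each doubly degenerate
lam = np.roots([1.0, -norm, abs(bracket)**2]).real
print(np.allclose(np.linalg.eigvalsh(M), np.sort(np.concatenate([lam, lam]))))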
A_8 ∈ 𝒜 can be obtained from A_4,
$$A_8 = \begin{pmatrix} B_4 & A_4 \\ -A_4^{t} & C_4^{t} \end{pmatrix}, \tag{25}$$
where
$$B_4 = b_2 J_4, \qquad C_4 = c_2 J_4, \qquad J_4 = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{pmatrix}, \qquad b_2, c_2 \in \mathbb{C}. \tag{26}$$
For the general construction of higher dimensional matrices A_{2^{k+1}} ∈ 𝒜, 2 ≤ k ∈ ℕ, we have
$$A_{2^{k+1}} = \begin{pmatrix} B_{2^{k}} & A_{2^{k}} \\ (-1)^{\frac{k(k+1)}{2}} A_{2^{k}}^{t} & C_{2^{k}}^{t} \end{pmatrix} \equiv \begin{pmatrix} b_k J_{2^{k}} & A_{2^{k}} \\ (-1)^{\frac{k(k+1)}{2}} A_{2^{k}}^{t} & c_k J_{2^{k}}^{t} \end{pmatrix}, \tag{27}$$
$$J_{2^{k+1}} = \begin{pmatrix} 0 & J_{2^{k}} \\ (-1)^{\frac{(k+1)(k+2)}{2}} J_{2^{k}}^{t} & 0 \end{pmatrix}, \tag{28}$$
where b_k, c_k ∈ ℂ, B_{2^{k}} = b_k J_{2^{k}}, and C_{2^{k}} = c_k J_{2^{k}}. It can be verified that A_{2^{k}} satisfies the following relations [22]:
$$\big|A_{2^{k+1}} A_{2^{k+1}}^{\dagger}\big| = \big([A_{2^{k+1}}][A_{2^{k+1}}]^{*}\big)^{2^{k}} = \Big[\big((-1)^{\frac{k(k+1)}{2}} b_k c_k - [A_{2^{k}}]\big)\big((-1)^{\frac{k(k+1)}{2}} b_k^{*} c_k^{*} - [A_{2^{k}}]^{*}\big)\Big]^{2^{k}},$$
$$\big|A_{2^{k+1}} A_{2^{k+1}}^{\dagger} - \lambda\,\mathrm{Id}_{2^{k+1}}\big| = \big(\lambda^{2} - \|A_{2^{k+1}}\|\lambda + [A_{2^{k+1}}][A_{2^{k+1}}]^{*}\big)^{2^{k}}. \tag{29}$$
Therefore the states given by (27) are d-computable. In terms of (11) the generalized concurrence for these states is given by
$$d_{2^{k+1}} = 2^{k+1}\big|[A_{2^{k+1}}]\big| = 2^{k+1}\big|b_k c_k + b_{k-1} c_{k-1} + \cdots + b_1 c_1 + ad + c^{2}\big|.$$
Let p_{2^{k+1}} be a symmetric anti-diagonal 2^{2k+2} × 2^{2k+2} matrix with all anti-diagonal elements equal to 1 except for those at rows 2^{k+1} − 1 + s(2^{k+2} − 2), 2^{k+1} + s(2^{k+2} − 2), 2^{k+2} − 1 + s(2^{k+2} − 2), 2^{k+2} + s(2^{k+2} − 2), s = 0, ..., 2^{k+1} − 1, which are −1. d_{2^{k+1}} can be written as
$$d_{2^{k+1}} = \big|\langle\psi_{2^{k+1}}|\,p_{2^{k+1}}\,\psi^{*}_{2^{k+1}}\rangle\big| \equiv \big|\langle\psi_{2^{k+1}}|\tilde{\psi}_{2^{k+1}}\rangle\big|, \tag{30}$$
where
$$|\psi_{2^{k+1}}\rangle = \sum_{i,j=1}^{2^{k+1}} (A_{2^{k+1}})_{ij}\, e_i \otimes e_j. \tag{31}$$
According to the calculations of the entanglement of formation for d-computable states, for a 2^{2k+2} × 2^{2k+2} density matrix ρ_{2^{2k+2}} with decompositions on pure states of the form (31), the entanglement of formation is given by E(d_{2^{k+1}}(ρ_{2^{2k+2}})), where
$$d_{2^{k+1}}(\rho_{2^{2k+2}}) = \Omega_1 - \sum_{i=2}^{2^{2k+2}} \Omega_i, \tag{32}$$
and the Ω_i, in decreasing order, are the square roots of the eigenvalues of the matrix ρ_{2^{2k+2}} p_{2^{k+1}} ρ*_{2^{2k+2}} p_{2^{k+1}}.
We have studied the entanglement of formation for a class of higher dimensional quantum mixed states. It is shown that for certain N-dimensional pure quantum systems, the entanglement of formation is a monotonically increasing function of a generalized concurrence. From this generalized concurrence the entanglement of formation for a large class of quantum states can be calculated analytically. The physical properties of these states are remained to be studied further.
Let p be the 16 × 16 symmetric matrix with only non-zero entries p_{1,16} = p_{2,15} = −p_{3,14} = p_{4,10} = p_{5,12} = p_{6,11} = p_{7,13} = −p_{8,9} = −p_{9,8} = p_{10,4} = p_{11,6} = p_{12,5} = p_{13,7} = −p_{14,3} = p_{15,2} = p_{16,1} = 1; this is the matrix p entering the definition of d in (15).
See, for example, D.P. DiVincenzo, Science 270, 255 (1995).
C.H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W.K. Wootters, Phys. Rev. Lett. 70, 1895 (1993).
S. Albeverio and S.M. Fei, Phys. Lett. A 276, 8-11 (2000).
S. Albeverio, S.M. Fei and W.L. Yang, Commun. Theor. Phys. 38, 301-304 (2002); Phys. Rev. A 66, 012301 (2002).
C.H. Bennett and S.J. Wiesner, Phys. Rev. Lett. 69, 2881 (1992).
See, for example, C.A. Fuchs, N. Gisin, R.B. Griffiths, C-S. Niu, and A. Peres, Phys. Rev. A 56, 1163 (1997), and references therein.
C.H. Bennett, D.P. DiVincenzo, J.A. Smolin, and W.K. Wootters, Phys. Rev. A 54, 3824 (1996).
C.H. Bennett, H.J. Bernstein, S. Popescu, and B. Schumacher, Phys. Rev. A 53, 2046 (1996).
V. Vedral, M.B. Plenio, M.A. Rippin, and P.L. Knight, Phys. Rev. Lett. 78, 2275 (1997); V. Vedral, M.B. Plenio, K. Jacobs, and P.L. Knight, Phys. Rev. A 56, 4452 (1997); V. Vedral and M.B. Plenio, Phys. Rev. A 57, 1619 (1998).
A. Peres, Phys. Rev. Lett. 77, 1413 (1996).
K. Życzkowski and P. Horodecki, Phys. Rev. A 58, 883 (1998).
B. Schumacher and M.D. Westmoreland, Relative entropy in quantum information theory, quant-ph/0004045.
M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Rev. Lett. 80, 5239 (1998).
E.M. Rains, IEEE Trans. Inform. Theory 47, 2921-2933 (2001).
S. Hill and W.K. Wootters, Phys. Rev. Lett. 78, 5022 (1997).
W.K. Wootters, Phys. Rev. Lett. 80, 2245 (1998).
R.F. Werner and M.M. Wolf, Phys. Rev. A 61, 062102 (2000).
B.M. Terhal and K.G.H. Vollbrecht, Phys. Rev. Lett. 85, 2625 (2000).
A. Uhlmann, Phys. Rev. A 62, 032307 (2000).
S. Albeverio and S.M. Fei, J. Opt. B: Quantum Semiclass. Opt. 3, 1-5 (2001).
P. Rungta, V. Bužek, C.M. Caves, M. Hillery, and G.J. Milburn, Phys. Rev. A 64, 042315 (2001).
S. Albeverio, S.M. Fei and D. Goswami, Phys. Lett. A, 91-96 (2001).
S.M. Fei, X.H. Gao, X.H. Wang, Z.X. Wang and K. Wu, Phys. Lett. A 300, 559-566 (2002).
R.A. Horn and C.R. Johnson, Matrix Analysis, Cambridge University Press, New York, 1985.
S.M. Fei and X.Q. Li, A Special Class of Matrices and Quantum Entanglement, MIS preprint, 2002.
| []
|
[
"Neural Networks Learnable Heterogeneous Convolution: Learning both topology and strength",
"Neural Networks Learnable Heterogeneous Convolution: Learning both topology and strength"
]
| [
"Rongzhen Zhao \nLynxi Technologies\n100097BeijingChina\n",
"Zhenzhi Wu \nLynxi Technologies\n100097BeijingChina\n",
"Qikun Zhang \nLynxi Technologies\n100097BeijingChina\n"
]
| [
"Lynxi Technologies\n100097BeijingChina",
"Lynxi Technologies\n100097BeijingChina",
"Lynxi Technologies\n100097BeijingChina"
]
| []
| a b s t r a c tExisting convolution techniques in artificial neural networks suffer from huge computation complexity, while the biological neural network works in a much more powerful yet efficient way. Inspired by the biological plasticity of dendritic topology and synaptic strength, our method, Learnable Heterogeneous Convolution, realizes joint learning of kernel shape and weights, which unifies existing handcrafted convolution techniques in a data-driven way. A model based on our method can converge with structural sparse weights and then be accelerated by devices of high parallelism. In the experiments, our method either reduces VGG16/19 and ResNet34/50 computation by nearly 5× on CIFAR10 and 2× on ImageNet without harming the performance, where the weights are compressed by 10× and 4× respectively; or improves the accuracy by up to 1.0% on CIFAR10 and 0.5% on ImageNet with slightly higher efficiency. The code will be available on www.github.com/Genera1Z/ LearnableHeterogeneousConvolution. (Q. Zhang).For recipe (1), a large model of high performance must be available firstly, and the pruning-retrain iteration is very timeconsuming; for recipe (2), it is hard to design an excellent structure by hand, and also too costly to search one even with tens of GPUs and days. So we choose to combine recipes (3) and (4), namely, sparsifying convolutions by learning from the brain.A biological neuron connects from multiple predecessor neuron via dendrites, and to a subsequent neuron via an axon. The connections are plastic both in topology and strength, through forming/losing/developing the synapses on dendritic spines(Bhatt, Zhang, & Gan, 2009;Harms & Dunaevsky, 2007). To make an analogy (Beysolow II, 2017), likeFig. 1, for the c o kernels in a standard convolution layer, each kernel of shape (k, k, c i ) belongs to a neuron, which is shared at different spatial positions of feature maps; for the c i slices in a kernel, each kernel slice of shape (k, k) is a dendrite; for the k * k elements in a slice, each element is a synapse. But unlike biological neurons, in a standard convolution layer, only the strength of kernel elements, i.e., weights is learnable, while the topology of kernel slices is not, let alone the number of slices and kernels. Such differences are where the CNN can learn from the brain. | 10.1016/j.neunet.2021.03.038 | [
"https://export.arxiv.org/pdf/2301.05440v1.pdf"
]
| 233,483,827 | 2301.05440 | 1df07c9b0b05f08a9196881226332921adefb5b6 |
Neural Networks Learnable Heterogeneous Convolution: Learning both topology and strength
Available online 20 April 2021
Rongzhen Zhao
Lynxi Technologies
100097BeijingChina
Zhenzhi Wu
Lynxi Technologies
100097BeijingChina
Qikun Zhang
Lynxi Technologies
100097BeijingChina
Neural Networks Learnable Heterogeneous Convolution: Learning both topology and strength
Available online 20 April 2021. Article history: Received 8 August 2020; Received in revised form 10 March 2021; Accepted 29 March 2021. Keywords: Convolution neural network; Efficiency & performance; Learning topology & strength; Fine-grained but structural; Hardware acceleration
Abstract. Existing convolution techniques in artificial neural networks suffer from huge computation complexity, while the biological neural network works in a much more powerful yet efficient way. Inspired by the biological plasticity of dendritic topology and synaptic strength, our method, Learnable Heterogeneous Convolution, realizes joint learning of kernel shape and weights, which unifies existing handcrafted convolution techniques in a data-driven way. A model based on our method can converge with structural sparse weights and then be accelerated by devices of high parallelism. In the experiments, our method either reduces VGG16/19 and ResNet34/50 computation by nearly 5× on CIFAR10 and 2× on ImageNet without harming the performance, where the weights are compressed by 10× and 4× respectively; or improves the accuracy by up to 1.0% on CIFAR10 and 0.5% on ImageNet with slightly higher efficiency. The code will be available on www.github.com/Genera1Z/LearnableHeterogeneousConvolution.
For recipe (1), a large model of high performance must be available firstly, and the pruning-retrain iteration is very time-consuming; for recipe (2), it is hard to design an excellent structure by hand, and also too costly to search one even with tens of GPUs and days. So we choose to combine recipes (3) and (4), namely, sparsifying convolutions by learning from the brain.
A biological neuron connects from multiple predecessor neurons via dendrites, and to a subsequent neuron via an axon. The connections are plastic both in topology and strength, through forming/losing/developing the synapses on dendritic spines (Bhatt, Zhang, & Gan, 2009; Harms & Dunaevsky, 2007). To make an analogy (Beysolow II, 2017), like Fig. 1, for the c_o kernels in a standard convolution layer, each kernel of shape (k, k, c_i) belongs to a neuron, which is shared at different spatial positions of feature maps; for the c_i slices in a kernel, each kernel slice of shape (k, k) is a dendrite; for the k * k elements in a slice, each element is a synapse. But unlike biological neurons, in a standard convolution layer, only the strength of kernel elements, i.e., the weights, is learnable, while the topology of kernel slices is not, let alone the number of slices and kernels. Such differences are where the CNN can learn from the brain.
Introduction
Convolution neural networks (CNNs) are showing their superior performance in vision tasks like classification, detection and segmentation, but their advantages in practice are impeded by their heavy computation.
To improve CNN efficiency, researchers have developed many recipes: (1) compressing existing models with pruning (Zhou, Zhang, Wang, & Tian, 2019), quantization (Cao et al., 2019; Gong et al., 2019) or knowledge distillation (Jin et al., 2019; Peng et al., 2019); (2) designing efficient network structures either by hand (Hu, Shen, & Sun, 2018; Ma, Zhang, Zheng, & Sun, 2018) or by automatic search (Chen, Xie, Wu, & Tian, 2019; Howard et al., 2019); (3) exploiting efficient convolution operators, which are either smaller (Simonyan & Zisserman, 2014), or factorized (Szegedy, Ioffe, Vanhoucke, & Alemi, 2016; Szegedy, Vanhoucke, Ioffe, Shlens, & Wojna, 2016), or sparse (Huang, Liu, M.L.V., & Weinberger, 2018; Sun, Li, Liu, & Wang, 2018). On the other hand, (4) the neural network in a brain, which features sparse activation and high dynamics (Holtmaat et al., 2005; Stettler, Yamahachi, Li, Denk, & Gilbert, 2006), works in a much more complex and powerful yet efficient way (Merolla et al., 2014). Inception and HetConv sparsify the dendritic topology from a square to a row, line or dot, which is also not learnable (Singh, Verma, Rai, & Namboodiri, 2019; Szegedy et al., 2015); works like L0 training hit the mark by a fluke, but the weights learnt are non-structural, which is not friendly for hardware acceleration (Christos, Max, & Diederik, 2018). See Section 2 for a detailed review.
Comprehensive experiments demonstrate the superiority of our method. At parallelism = 512, suppose batch size is 1, we can either reduce VGG/ResNet computation by nearly 5× on CIFAR10 and 2× on ImageNet without harming the performance, where the weights are compressed by 10× and 4× respectively; or improve the accuracy by up to 1.0% on CIFAR10 and 0.5% on ImageNet with slightly higher efficiency. The extra costs are no more than slightly longer training time.
Our contributions are:
(1) LHC is proposed to realize dual-plasticity of strength and topology in convolution, which is fine-grainedly yet structurally sparse thus fits hardware acceleration;
(2) It sparsifies convolutions in full back-propagation, instead of undifferentiable ways used by most pruning methods such as cutting of weights that have least magnitude;
(3) It requires negligible extra costs at training and can greatly improve CNN models' efficiency at inference stage even with some performance gain.
The remaining content is organized as follows: related works are reviewed in Section 2; the proposal is detailed in Section 3; how our method unifies various convolution techniques is analyzed in Section 4; experiments are presented in Section 5; more advanced analyses are discussed in Section 6; the conclusion is drawn in Section 7.
Given a convolution layer, the notations are listed in Table 1.
Related works
Here the sparse convolution techniques for improving CNN efficiency are reviewed, and keypoints where the convolution can imitate the brain are inducted.
Structural Sparse Convolutions: Handcrafted
Convolution can be sparsified in spatial dimensions. A 2D convolution kernel is factorized into two perpendicular 1D ones in Szegedy, Ioffe, Vanhoucke, and Alemi (2016) and Szegedy, Vanhoucke, Ioffe, Shlens, and Wojna (2016). Kernels are designed into incremental sizes in Tan and Le (2019). Slices of a 3*3 kernel are replaced by 1*1 sizes at intervals in Singh et al. (2019) shown in Fig. 8.
Sparsification can also be taken in channels. In a GWC (SIfre & Mallat, 2014), the input channels are grouped and each output channel is correlated with one of the groups. Features among different groups are further exchanged in Sun et al. (2018), Xie et al. (2018) and Zhang, Qi, Xiao, and Wang (2017). Shifting or negation saves kernels/channels either, like in Shang, Sohn, Almeida, and Lee (2016) and Yan, Li, Li, Zuo, and Shan (2018).
Dimensions of space and channel can be considered together. Like in Wang, Xu, Chunjing, Xu, and Tao (2018), standard kernels are turned into secondary kernels, which are degressive in space and partially connected in channel.
Neuron number in a layer, dendrite number in a neuron, and dendritic topology are all handcrafted in such methods; only the synaptic strength is learnable.
Structural Sparse Convolutions: Learnt
Compared with the above, works thinking in this way bring in learnability of sparsity. But the techniques employed are still restricted to GWC, DWC and PWC.
Works like Huang et al. (2018) require too many iterations, where connections between input and output channels of less importance are cut off progressively while training to get a desired group number. Some realize learnability under the guidance of Singular Value Decomposition like Peng et al. (2018), which is built of GWC and PWC in a way similar to Howard et al. (2017). Others like Zhang (2019) realize unevenly grouped GWC layers in a model by fusing network architecture search into training.
Only sparsity in channel dimension is considered by them. Namely, besides the synaptic strength, they do take into account the plasticity of neurons number in a layer or dendrite number in a neuron, but overlook the dendritic topology.
By the way, Verelst and Tuytelaars (2020) is worth learning from, which generates feature masks to skip computations corresponding to zeros, even if it sparsifies activations instead of weights.
Non-Structural Sparse Convolutions: Learnt
Such works are similar to the pruning methods (Zhou et al., 2019), except that the sparsity is gained during rather than after training. Since non-structural sparsity goes against hardware acceleration (Deng, Li, Han, Shi, & Xie, 2020), recent works of this kind are not that many. Kernels of uncommon shapes were designed in SWF (Yin et al., 2019) to address specific image contents, which inspired our work.
L0 regularization is often used to train models full of zeros, like Christos et al. (2018), enabling joint optimization of weights' values and topology via non-negative random gates of hard concrete distribution. Inspired by dynamic connections in the brain (Bhatt et al., 2009; Harms & Dunaevsky, 2007), the rewiring mechanism was proposed by Guillaume et al. (2018) to enable simultaneous learning of connections and weights under constrained hardware resources.
Their implementation of plasticity in dendritic topology, namely, neuron number in a layer or dendrite number in a neuron, are good references.
Our inspiration: making it possible in a convolution layer to (a) learn the dendritic topology and synaptic strength at the same time, and (b) realize fine-grained yet structural sparsity for hardware acceleration.
Proposed method
How LHC integrates the plasticity in dendritic topology and synaptic strength is elaborated here. The structural sparsification and hardware acceleration of LHC-based models are also described.
Prototype
Convolution Kernels of Uncommon Shapes
For either traditional or CNN-based algorithms, it is quite common to use convolution kernels of square shapes. However, SWF (Yin, Gong, & Qiu, 2019), a latest work, broke fresh ground and is equal in force compared with CNN-based algorithms. The key is that kernels of uncommon shapes, shown in Fig. 2, were meticulously designed for different image patterns to gain excellent feature extraction capability.
Enlightened by this, we empirically design 15 rigid kernel shapes, shown in Fig. 3(a). Among them, shape ⟨1⟩1, ⟨2⟩1 and ⟨6⟩1 are common in use; shape ⟨3⟩1 and ⟨3⟩3 are already used in GoogLeNet Inceptions; shape group ⟨4⟩ and ⟨5⟩ inherit from SWF. Shape group ⟨2⟩∼⟨5⟩ are designed to sparsify a convolution layer in spatial dimensions, while group ⟨1⟩ is to reduce redundancy in channel dimension. The theoretical foundation for such a design is that CNNs are able to extract visual patterns of different levels like dot, edge, curve and plane at different layers (Goodfellow, Bengio, & Courville, 2016). Accordingly, very sparse shape groups ⟨2⟩ and ⟨3⟩ are designed to address simple patterns like dots and edges; shape groups ⟨4⟩∼⟨6⟩, not that sparse, are to handle complex patterns like curves and planes.
Further on, we relieve the constraints of the aforementioned rigid shapes, and for a 3*3 kernel, we provide 2 3 * 3 = 512 free shapes for an LHC layer to choose, shown in Fig. 3(b). Obviously, the rigid shapes are a sub-set of the free shapes.
An LHC layer equipped with the rigid shapes is called LHCR, and LHC with the free shapes is LHCF. In LHC, these shapes are formed as masks consisting of 0/1 elements, and are paired with corresponding effect factors to guide the learning of the shapes of kernel slices. Details are presented in Section 3.3. To make a comparison, LHCR implies priori knowledge of those rigid shapes while LHCF has more freedom in sparsification, which is verified by experiments in Section 5.
Learnable Heterogeneous Convolution
For a Learnable Heterogeneous Convolution (LHC) layer, its kernel slices are learnt to be these uncommon shapes, rather than pre-defined.
Suppose a convolution layer with c i input channels and c o output channels. With the aforementioned uncommon shapes, the kernels can be shaped (1) kernel-by-kernel (KbK), where shapes are the same within a kernel and different among kernels, or (2) slice-by-slice (SbS), where shapes are different both within a kernel and among kernels. Refer to Fig. 4 the top two rows.
Obviously, SbS is more flexible than KbK and thus can eliminate the convolution redundancy in both spatial and channel dimensions better; but the computation graph of SbS is too fragmented for the hardware to accelerate. So we split the difference: only one shape combination is allowed for every adjacent c gi slices and every adjacent c go kernels. Then we can learn a structural sparse LHC layer, which supports parallelism up to c gi * c go ≡ 64 * 8. Refer to the bottom two rows of Fig. 4. Besides, these constraints happen to be a kind of regularization, benefiting the performance. See Section 6 for details.
Computation reduction
To facilitate discussion, we take the 3*3 convolution as an example, and use the indexes in Fig. 3 to represent shapes.
Following the aforementioned setting, the computation quantity or the total number of multiplication-addition of a standard convolution layer is:
$$C_{STD} = h_o \times w_o \times c_i \times c_o \times \|s_{511}\|_0 = h_o \times w_o \times c_i \times c_o \times 9, \tag{1}$$
where s_{511} is the 511th shape in Fig. 3 and its L0 norm is 9. Similarly, the computation quantity of an LHC layer is:
$$C_{LHC} = h_o \times w_o \times \left( \sum_{y=1}^{c_o/c_{go}} \left( \sum_{x=1}^{c_i/c_{gi}} \|s_{x,y}\|_0 \times c_{gi} \right) \times c_{go} \right) = h_o \times w_o \times c_{gi} \times c_{go} \times \sum_{s=1}^{c_i/c_{gi} \times c_o/c_{go}} n_s, \tag{2}$$
where c_{gi} and c_{go} are the aforementioned topology constraints; s_{x,y} is the shape of the xth slice in the yth kernel; n_s is the L0 norm of the shape at the sth of the c_i/c_{gi} × c_o/c_{go} positions, i.e., ‖s_{x,y}‖_0 = n_s.
So, the computation reduction is:
$$\Delta C = h_o \times w_o \times \left( c_i \times c_o \times 9 - c_{gi} \times c_{go} \times \sum_{s=1}^{c_i/c_{gi} \times c_o/c_{go}} n_s \right). \tag{3}$$
When all n_s = 9, ΔC = 0: every shape in the LHC layer is s_{511}, i.e., the standard convolution. When all n_s = 0, ΔC = C_{STD}: every shape is s_0, which means all features are redundant and can only be approximated. Besides, existing convolution techniques can be unified by LHC. See Section 4 for details.
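For concreteness, a small helper (plain Python/NumPy, not from the paper) that evaluates Eqs. (1)-(3) for a given, hypothetical shape assignment:

import numpy as np

def conv_macs(h_o, w_o, c_i, c_o, n_s=None, c_gi=64, c_go=8):
    c_std = h_o * w_o * c_i * c_o * 9                        # Eq. (1)
    if n_s is None:                                          # dense case
        return c_std, c_std, 0
    c_lhc = h_o * w_o * c_gi * c_go * int(np.sum(n_s))       # Eq. (2)
    return c_std, c_lhc, c_std - c_lhc                       # Eq. (3)

# example: a 56x56, 128->128 layer whose positions keep only a few of the 9 weights each
rng = np.random.default_rng(0)
n_s = rng.integers(0, 4, size=(128 // 64) * (128 // 8))      # hypothetical L0 norms per position
print(conv_macs(56, 56, 128, 128, n_s))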
Learnability
At the training stage, the only difference between LHC and a standard convolution is the construction of the kernels. As shown in Fig. 5, the masks composed of zeros and ones are constructed first, then multiplied with the kernels, where elements multiplied with zeros in the masks are deactivated during both forward and back-propagation; finally, the masked kernels are convolved with the input features, and the output features are obtained. The masks and kernels realize plasticity in dendritic topology and synaptic strength respectively.
Fig. 5. Left box: (1) every c_gi * c_go kernel slices are equipped with a set of effect factors, each of which points to a shape belonging to the rigid shapes or the free; (2) through a differentiable step function, a mask slice is got, then tiled into a masks partial of shape (k, k, c_gi, c_go). Center box: do (2) for every c_gi * c_go slices in the kernels, then the masks of shape (k, k, c_i, c_o) are constructed. Right box: (3) multiply the masks with the kernels element-wisely; (4) do the convolution with the masked kernels and the input feature maps of shape (b, h_i, w_i, c_i), finally getting the output features of shape (b, h_o, w_o, c_o). (5) In the backward propagation, the shapes get different gradients according to their contributions. (6) As training goes on, the advantage of different shapes is accumulated in the effect factors, and the most suitable shapes gradually win out. Note: in the left box, purple stars are where gradients pass during back propagation; purple stars of the other two boxes are omitted; in the right box, the biases and activation are omitted for simplicity.
Guide the Learning of Topology
For a convolution layer, its computation quantity is proportional to the density of its kernels; for a CNN model, its computation quantity is positively correlated with its global density, of which the maximum is 1, namely, no sparsification, and the minimum limit is 0.
Given a model with L LHC layers, we can set a global density target and calculate mask regularization loss; then by minimizing it, we can guide the model to converge to a state that costs much less computation:
$$l_{mask} = \left| d_t - \frac{\sum_{l=1}^{L} \|M_l\|_1}{\sum_{l=1}^{L} \mathrm{size}(M_l)} \right| \tag{4}$$
where l mask is the mask regularization loss; d t is the density target; M l is the masks in the lth LHC layer.
Here, L1 norm is used instead of L0, because (1) M l only consists of zeros and ones, which means L0 is equal to L1; and (2) L1 norm is differentiable.
Hence, optimizing this model turns into simultaneously minimizing the mask regularization loss and the task loss, e.g., categorical cross-entropy loss:
$$\min_{M_1, \ldots, M_L}\ \alpha \times l_{mask} + l_{task} \tag{5}$$
$$\text{s.t.}\quad c_{gi} = C_1, \ \ c_{go} = C_2, \tag{6}$$
where α is a positive constant; l_{task} is the task loss; c_{gi} and c_{go} are the aforementioned topology constraints for structural sparsity, with values typically C_1 ≡ 64 and C_2 ≡ 8.
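A minimal sketch of this objective (assumed PyTorch, not the paper's released code): since the masks are 0/1 tensors, their L1 norm counts the active kernel elements, and the regularizer simply penalizes the gap to the density target.

import torch

def mask_regularization_loss(masks, d_t):
    # masks: list of 0/1 tensors, one per LHC layer, each of shape (k, k, c_i, c_o)
    active = sum(m.abs().sum() for m in masks)
    total = sum(m.numel() for m in masks)
    return (d_t - active / total).abs()              # Eq. (4)

def total_loss(task_loss, masks, d_t=0.2, alpha=1.0):
    # Eq. (5): weighted mask regularization plus the task loss
    return alpha * mask_regularization_loss(masks, d_t) + task_loss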
Now the point is how to construct the M l . Given the kernels of a layer and the topology constraints c gi and c go , to construct the masks, we have c i /c gi × c o /c go positions to determine. In other words, for each of these positions, we need to calculate a mask slice, a k * k matrix, using the aforementioned shapes that are either rigid or free.
Enroll Shapes into Competition
To calculate a mask slice, we hope those shapes illustrated in Fig. 3 are enrolled in all together so that they compete with one another along with the training steps, letting the fittest eventually win out.
For LHCR, at each of these positions, a 15D vector e called effect factors is used to represent the effects of those 15 rigid shapes. So the mask slice for this position is:
$$m = \sum_{i=1}^{15} \mathrm{step}(e_i, e) \times s_i, \tag{7}$$
$$\mathrm{step}(e_i, e) = \begin{cases} 1 & e_i = \max(e) \\ 0 & e_i < \max(e) \end{cases} \tag{8}$$
$$\nabla \mathrm{step}(e_i, e) \equiv \begin{cases} 1 & |e_i - \mathrm{mean}(e)| < 1 \\ c & \text{otherwise} \end{cases} \tag{9}$$
where e_i is the ith element in e and s_i is the ith of those 15 rigid shapes; c is a small positive constant, empirically set to 0.1; step is differentiable and ensures that exactly one shape is selected, i.e., its outputs over the 15 shapes sum to one. softmax is not used because it is heavy to calculate and can hardly make one of the shapes win out. For LHCF, each of these positions is equipped with a 3*3 matrix e as the effect factors, which represents the effects of those 512 free shapes for this position. So the mask slice for this position is:
$$m = \mathrm{step}(e), \tag{10}$$
$$\mathrm{step}(e_{i,j}) = \begin{cases} 1 & e_{i,j} > 0 \\ 0 & e_{i,j} \le 0 \end{cases} \tag{11}$$
$$\nabla \mathrm{step}(e_{i,j}) \equiv \begin{cases} 1 & |e_{i,j}| < 1 \\ c & \text{otherwise} \end{cases} \tag{12}$$
where i, j indexes the elements in matrix e; c is a small positive constant, empirically set to 0.1.
Both of the ∇step functions are designed to have grad = 1 and grad = 0.1 segments, so that under Xavier-like initialization, which is widely adopted, the shape of each mask slice is actively learnable at the early stage of training and becomes more stable but not absolutely stable at the late stage of training. Refer to Section 6 for more details.
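A hypothetical PyTorch sketch of the LHCF step of Eqs. (10)-(12) (not the authors' released implementation): the forward pass is a hard 0/1 threshold on the effect factors, while the backward pass uses the surrogate gradient 1 inside |e| < 1 and the small constant c = 0.1 outside.

import torch

class LHCFStep(torch.autograd.Function):
    @staticmethod
    def forward(ctx, e, c=0.1):
        ctx.save_for_backward(e)
        ctx.c = c
        return (e > 0).to(e.dtype)           # Eq. (11): mask slice m = step(e)

    @staticmethod
    def backward(ctx, grad_out):
        (e,) = ctx.saved_tensors
        surrogate = torch.where(e.abs() < 1, torch.ones_like(e), torch.full_like(e, ctx.c))
        return grad_out * surrogate, None    # Eq. (12): surrogate gradient

# usage: effect factors for one (c_gi, c_go) position of a 3x3 kernel
e = torch.randn(3, 3, requires_grad=True)
mask_slice = LHCFStep.apply(e)
mask_slice.sum().backward()
print(mask_slice, e.grad)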
Via the aforementioned formulas, a mask slice m is calculated, then tiled/repeated c_gi and c_go times in the input- and output-channel dimensions respectively, to get a masks partial of shape (k, k, c_gi, c_go).
Algorithm 1: mask enabling warm-up. Enables the masks in LHC layers randomly with increasing probability during the beginning epochs, to eliminate non-convergence issues (the last steps of the listing read: 5. conduct this training epoch; 6. turn to step 4 until n_warm is reached).
Algorithm 2: mask regularizing warm-up. Increases the weight of the mask regularization loss gradually over the starting epochs, to let the CNN model learn strength first and then topology.
Extra Cost Analysis
Compared with the standard convolution, the extra weights, i.e., the effect factors, brought in by an LHC layer during training amount to 3*3*c_i*c_o/(c_gi*c_go) / (3*3*c_i*c_o) = 1/(c_gi*c_go) of the kernel weights. The extra computation, i.e., the construction of the masks, brought in by an LHC layer during training is 3*3*c_i*c_o / (h_o*w_o*3*3*c_i*c_o) = 1/(h_o*w_o) of the convolution computation. Suppose a 224*224 input image and a VGG16 model; the computation increases by about 0.3856%. So at the training stage, the cost of our method is nearly the same as that of standard convolution. See Section 5 for more details.
Acceleration
At the inference stage, CNN models built of LHCs can be easily accelerated by hardware of high parallelism, which consumes much less memory and computation.
Given a MAC array of parallelism = 512, i.e., 512 multiplication and addition units. To maximize hardware utilization, two groups of 512 numbers must be fed to them in one clock period, which means such numbers must be stored continuously respectively. This can be achieved under ''structural sparsity'', but cannot in fragmented computation graphs. This is the key point in implementing the optimal hardware acceleration.
Efficient Hardware Implementation
As mentioned above, the calculation of convolution is the only part influenced by LHC, so the following discussion focuses on this part.
To realize high parallel computing of a standard convolution layer, sufficient units of multipliers and adders to form a MAC array are needed. Similarly, to make the most of the structural sparsity of LHC layers, it is needed that (1) In the weight buffer, only valid weights of LHC are loaded from external storage before runtime, and are stored in an intensive way. The memory consumption of an LHC layer is just about 50%∼1% of that of a standard convolution layer, so bandwidth and computation at runtime are saved.
(2) In the I/O buffers, input and output feature maps are stored at runtime, just like that of standard convolution.
Data Management and Parallel Computing
Here explains how all invalid weights and invalid computation are avoided and how high parallelism is achieved.
As shown in Fig. 6, as well as Fig. 7, the input/output features are organized in a c gi -aligned /c go -aligned manner, i.e., the width of the IO buffers is c gi /c go ; yet the weights are organized in a c gi * c go -aligned manner.
Given input features of shape (h i , w i , c i ), for every pixel of shape (c i , ) in it, every c gi adjacent elements are stored as a row in the input buffer. At each convolution step, elements of shape (3, 3, c i ) in current sliding window are copied to the window buffer and are stored in the same way.
Given weights of shape (3, 3, c_i, c_o), for each kernel slice group of shape (3, 3, c_gi, c_go) in it, every c_gi * c_go elements are stored as a row in the weight buffer, with full-zero segments of length c_gi * c_go being skipped. At every convolution step, the feature elements in the window buffer that correspond to these full-zero segments are skipped according to the discontinuous addresses provided by the ALUT. Since the weight buffer and the window buffer employ different storage layouts, they require different fetch addresses: data in the weight buffer are fetched via the incremental addresses counted by the AGU (address generation unit), while data in the window buffer are fetched via the pre-defined addresses saved in the ALUT (address look-up table).
With the addresses from the AGU and the ALUT, every row of length c_gi * c_go in the weight buffer, together with the corresponding row of length c_gi in the window buffer, is sent to the MAC array for parallel multiplication and addition; the c_go results are then sent to c_go output registers, which accumulate 3 * 3 * c_i/c_gi times; finally, the output element at location (x, y) in the corresponding channels of the output features is obtained. By iterating over all the sliding windows and all the kernels, the full output features are obtained. Under topology constraints c_gi and c_go, every c_gi adjacent kernel slices of a kernel have the identical topology, and every c_go adjacent kernels of a layer have the identical topology. So at each clock cycle, c_gi * c_go weight elements can be fed to the MAC array. Suppose c_gi = 64 and c_go = 8; then parallelism = 512 is reached. This means there are 1024 multiplication and addition operations per clock, and the peak compute power reaches 1 TOPS at a 1 GHz clock frequency, which meets the compute requirements of most terminal devices specifically designed for CNN acceleration (Andri, Karunaratne, Cavigelli, & Benini, 2020; Moons, Bankman, Yang, Murmann, & Verhelst, 2018).
Of course, the parallelism can be further improved by taking the batch dimension into account: given batch size b, the parallelism can be increased to b * c_gi * c_go.
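To make the dataflow above concrete, the following is a minimal NumPy sketch (not the actual RTL or the authors' implementation) of one convolution step: weights are packed into non-zero c_gi * c_go rows with an accompanying address table, full-zero segments are skipped, and the result is checked against a dense computation. All sizes, the random sparsity pattern and the variable names are illustrative assumptions.

import numpy as np

# Assumed sizes; c_gi/c_go are the topology constraints.
k, c_i, c_o, c_gi, c_go = 3, 128, 64, 64, 8

rng = np.random.default_rng(0)
weights = rng.standard_normal((k, k, c_i, c_o))
# Impose group-wise structural sparsity: keep only ~30% of the (c_gi, c_go) blocks.
mask = rng.random((k, k, c_i // c_gi, c_o // c_go)) < 0.3
weights *= np.repeat(np.repeat(mask, c_gi, axis=2), c_go, axis=3)

window = rng.standard_normal((k, k, c_i))        # one 3x3xc_i sliding window

# "Weight buffer": keep only non-zero (c_gi*c_go)-length segments, plus an
# address look-up table recording which (dy, dx, gi) position each row came from.
weight_rows, alut = [], []
for gy in range(c_o // c_go):                    # one output-channel group at a time
    rows, addrs = [], []
    for dy in range(k):
        for dx in range(k):
            for gi in range(c_i // c_gi):
                seg = weights[dy, dx, gi*c_gi:(gi+1)*c_gi, gy*c_go:(gy+1)*c_go]
                if np.any(seg):                  # full-zero segments are skipped entirely
                    rows.append(seg)             # a row of c_gi*c_go valid weights
                    addrs.append((dy, dx, gi))   # ALUT entry for the window buffer
    weight_rows.append(rows)
    alut.append(addrs)

# MAC stage: each stored row meets the matching c_gi feature elements,
# and the c_go partial sums accumulate into the output registers.
out = np.zeros(c_o)
for gy in range(c_o // c_go):
    acc = np.zeros(c_go)
    for seg, (dy, dx, gi) in zip(weight_rows[gy], alut[gy]):
        feat = window[dy, dx, gi*c_gi:(gi+1)*c_gi]         # c_gi feature elements
        acc += feat @ seg                                   # c_gi*c_go parallel MACs
    out[gy*c_go:(gy+1)*c_go] = acc

# Cross-check against the dense computation.
dense = np.einsum('yxc,yxco->o', window, weights)
assert np.allclose(out, dense)

In hardware, the inner "feat @ seg" product corresponds to the c_gi * c_go multiply-accumulate operations performed in parallel in one clock.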
Extra Cost Analysis
Compared with the standard convolution, extra components are required to accelerate an LHC layer during inference, i.e., ALUT for indexing sparse weights and AGU for skipping invalid feature elements.
For ALUT, every c_gi * c_go weights need one address index, and the overall density of all LHC layers is no greater than 20%, so the extra memory consumption is at most

\[
\frac{k \cdot k \cdot (c_i/c_{gi}) \cdot (c_o/c_{go})}{k \cdot k \cdot c_i \cdot c_o} \;=\; \frac{1}{c_{gi}\, c_{go}} \;=\; 0.1953\%
\]

of that of the standard convolution, not to mention that more than 80% of the weight storage is already saved by our method. For AGU, the implementation requires only negligible resources compared with the whole design.
So at the inference stage, a large portion of the computational resources can be saved, even though a small extra cost is introduced.
Unification
With the topology plasticity, various convolution techniques discussed in Section 2 can be unified by LHC. As shown in Figs. 8 and 9, representative ones like GWC, DWC, HetConv and Inception are used as examples.
The shape of the xth slice in the yth kernel of a convolution layer is notated as s x,y .
Group-Wise Convolution
Group-Wise Convolution, or GWC (Alex et al., 2012), can be viewed as an LHC using shapes ⟨1⟩1 and ⟨6⟩1 only, as illustrated in the first row of Fig. 8. Under the settings of Section 3.3, an LHC degenerates into a GWC when the following conditions are satisfied:
\[
s_{x,y} =
\begin{cases}
s_{511}, & k\,c_{gi} < x \le (k+1)\,c_{gi} \ \text{and}\ k\,c_{go} < y \le (k+1)\,c_{go} \\
s_{0}, & \text{otherwise}
\end{cases}
\tag{13}
\]
where c gi = c i /n group , c go = c o /n group , k = 0, 1, . . . , n group -1.
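As an illustration of Eq. (13), the following sketch builds the per-slice masks that turn an LHC layer into a group-wise convolution; here s_511 is taken to be the all-ones 3 × 3 shape and s_0 the all-zero shape, and the function name and sizes are ours, not the paper's.

import numpy as np

def gwc_mask(c_i, c_o, n_group):
    """Per-slice masks that make an LHC degenerate into GWC, following Eq. (13).
    s_511 is the all-ones 3x3 shape, s_0 the all-zero shape (free-shape indices)."""
    c_gi, c_go = c_i // n_group, c_o // n_group
    s511, s0 = np.ones((3, 3)), np.zeros((3, 3))
    mask = np.empty((3, 3, c_i, c_o))
    for x in range(c_i):          # input-channel index of the kernel slice
        for y in range(c_o):      # kernel (output-channel) index
            same_group = (x // c_gi) == (y // c_go)
            mask[:, :, x, y] = s511 if same_group else s0
    return mask

# Example: 8 input channels, 8 output channels, 4 groups -> block-diagonal connectivity.
m = gwc_mask(8, 8, 4)
print(m[0, 0])   # 8x8 connectivity pattern: 2x2 blocks of ones on the diagonal

Depth-wise convolution (Eq. (14) below) is the special case of the same construction with c_gi = 1 and c_go = c_o/c_i, i.e. n_group = c_i.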
Depth-Wise Convolution
Depth-Wise Convolution, or DWC (Sandler et al., 2018), illustrated in the second row of Fig. 8, is a special case of GWC and is likewise an LHC using shapes ⟨1⟩1 and ⟨6⟩1 only. An LHC degenerates into a DWC when the following conditions are satisfied:
\[
s_{x,y} =
\begin{cases}
s_{511}, & k\,c_{gi} < x \le (k+1)\,c_{gi} \ \text{and}\ k\,c_{go} < y \le (k+1)\,c_{go} \\
s_{0}, & \text{otherwise}
\end{cases}
\tag{14}
\]
where c gi = 1, c go = c o /c i , and k = 0, 1, . . . , c i -1.
HetConv
HetConv (Singh et al., 2019), illustrated in the third row of Fig. 8, is an LHC using shapes ⟨2⟩1 and ⟨6⟩1 only. Given p as the number of ⟨6⟩1 shapes in a kernel, an LHC degenerates into HetConv when the following conditions are satisfied:
\[
s_{x,y} =
\begin{cases}
s_{511}, & (x + y - 1) \bmod p = 0 \\
s_{1}, & \text{otherwise}
\end{cases}
\tag{15}
\]
where c gi = 1, c go = 1, and p is a hyper parameter selected from {1, 2, . . . , c i }.
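Similarly, a sketch of Eq. (15): every p-th slice of a kernel keeps the full 3 × 3 shape, while the remaining slices keep only the centre element (we read s_1 as the single-centre-element shape, matching HetConv's 1 × 1 kernels); the function name and sizes are illustrative.

import numpy as np

def hetconv_mask(c_i, c_o, p):
    """Per-slice masks that make an LHC degenerate into HetConv, following Eq. (15)."""
    full = np.ones((3, 3))
    centre_only = np.zeros((3, 3))
    centre_only[1, 1] = 1.0                      # assumed reading of s_1: 1x1 centre shape
    mask = np.empty((3, 3, c_i, c_o))
    for x in range(1, c_i + 1):                  # 1-based indices as in Eq. (15)
        for y in range(1, c_o + 1):
            mask[:, :, x - 1, y - 1] = full if (x + y - 1) % p == 0 else centre_only
    return mask

# Example: with p = 4, a quarter of the slices in each kernel stay 3x3,
# and the pattern is staggered across kernels, as in HetConv.
m = hetconv_mask(8, 4, 4)
print(m.sum(axis=(0, 1)))   # 9 where the full shape is kept, 1 elsewhere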
Inception
GoogLeNet Inceptions (Szegedy, Ioffe, Vanhoucke, & Alemi, 2016) can be rebuilt with LHC as shown in Fig. 9. Each dashed purple rectangle is equivalent to an LHC layer. For Inception-A, the 2nd and 3rd rectangles are GWCs whose groups are not evenly divided, and thus can be replaced by LHC. In Inception-B/C, orthogonal 1D convolution pairs are employed to approximate standard 2D convolutions, either serially or in parallel; the serial case amounts to a double non-linearity and therefore cannot be exactly equivalent to one LHC layer. For simplicity, however, they are all replaced by LHC, which yields the unified Inception shown in the bottom right of Fig. 9.
Others

Other convolution techniques designed for efficiency can also be unified. PWC can be viewed as standard convolution with kernel size 1 * 1; MixConv (Tan & Le, 2019) is a GWC variant in which the kernel sizes differ; SeeSaw (Zhang, 2019), IGC (Xie et al., 2018; Zhang et al., 2017), etc., are unevenly grouped GWCs.
Experiments
Here experiments are presented to compare LHC with other convolution techniques that are either widely used or achieve state-of-the-art results.
Experiment settings
Listed in Table 2 are the experiment items.
Networks and Convolution Techniques
The comparison of LHC, standard convolution, HetConv, FPGM (He et al., 2019) and Taylor (Molchanov et al., 2019) is conducted on VGG16/19 and ResNet34/50. Among them, HetConv represents structurally sparse convolution techniques, while FPGM and Taylor represent non-structurally sparse ones.
The comparison of LHC with Inception, GWC+channel-shuffle and DWC+PWC is conducted on GoogLeNet InceptionV4, ShuffleNetV1 and MobileNetV1 respectively.
Switchover of Different Convolution Techniques
For VGG16/19 and ResNet34/50, all convolution layers except the 1st are replaced by LHCR/LHCF and HetConv.
For GoogLeNet, Inceptions are rebuilt by LHCR in the way described in Fig. 9. Shapes used in LHCR are ⟨1⟩1, ⟨2⟩1, ⟨6⟩1, ⟨3⟩1 and ⟨3⟩3.
For ShuffleNet, all GWC+channel-shuffle structures are replaced by LHCR, where the shapes used are ⟨1⟩1 and ⟨6⟩1. For MobileNet, all DWC+PWC structures are replaced by LHCR, where the shapes used are ⟨1⟩1, ⟨2⟩1 and ⟨6⟩1.
Tasks for Making Evaluations
The task used for evaluating the different convolution techniques is image classification, on two widely recognized datasets, CIFAR10 and ImageNet.
The training procedures are designed to be identical: on CIFAR10/ImageNet, models are trained for 200/100 epochs, with early-stopping patience 40/20, using SGD with initial learning rate 1e−2/1e−3 and decay factor 0.1. Data augmentations include random flipping, translation, rotation, hue, saturation, brightness and contrast.
Specifically, for networks built of LHC: (1) the hyper-parameter d_t is set to {invalid, 0.2, 0.1, 0.05, 0.01} for CIFAR10, and {invalid, 0.25, 0.1} for ImageNet. Here invalid means that no density target is set, so the network can explore the most suitable shapes and converge to the best performance.
(2) The hyper-parameters c_gi and c_go are set to 64 and 8 respectively, since parallelism ≥ 512 is a typical value for hardware acceleration (Andri et al., 2020; Moons et al., 2018). For FPGM and Taylor, experiments are carried out by following their official procedures.
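For reference, the shared settings above can be summarised in a small configuration sketch; the helper name and dictionary keys below are ours, and any augmentation or scheduler details beyond those listed are not specified by the paper.

# Minimal sketch of the shared training setup (hypothetical helper; illustrative only).
def make_config(dataset):
    cifar = dataset == "CIFAR10"
    return dict(
        epochs=200 if cifar else 100,
        early_stopping_patience=40 if cifar else 20,
        optimizer="SGD",
        initial_lr=1e-2 if cifar else 1e-3,
        lr_decay_factor=0.1,
        augmentations=["flip", "translate", "rotate", "hue",
                       "saturation", "brightness", "contrast"],
        # LHC-specific hyper-parameters (None plays the role of "invalid"):
        density_targets=[None, 0.2, 0.1, 0.05, 0.01] if cifar else [None, 0.25, 0.1],
        c_gi=64, c_go=8,          # parallelism = c_gi * c_go = 512
    )

print(make_config("CIFAR10"))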
Result analysis
Results are presented as computation reduction vs. classification accuracy, as shown in Fig. 10.

Fig. 11. Hardware resource saving with LHC. A 5× saving in clocks is achieved, which either improves throughput or reduces energy consumption. A 10× saving in memory is also achieved, which dramatically reduces the on-chip memory block resource. This VGG16 model is trained on CIFAR10 under the constraints of parallelism = 512 and no obvious accuracy loss.

LHC vs Standard Conv, HetConv, FPGM & Taylor

Shown in Fig. 10 are the results of LHC and the other convolution techniques. The x-axis is the computation reduction in percentage, and the y-axis is the classification accuracy. The closer a line lies to the top-right corner, the better.
By comparing LHCR and LHCF, LHCR works slightly better if ∆flops is smaller, while LHCF is a bit superior if ∆flops is larger.
The former can be attributed to the prior knowledge implied in the design of the rigid shapes, and the latter is likely due to the full freedom of the free shapes in dropping redundant weights. By comparing LHC with HetConv/FPGM/Taylor, the networks built of LHC always achieve better accuracy at any computation reduction ratio. Given the same computation reduction, our accuracy surpasses the others by 2.0% on CIFAR10 or 1.0% on ImageNet. For example, our method can improve the computation efficiency of VGG16 by nearly 5× on CIFAR10 or 2× on ImageNet without harming its performance.
Interestingly, there is always an appropriate ∆flops value at which networks built of LHC surprisingly outperform the original, i.e., the horizontal dashed blue line in every sub-figure. This rarely happens with existing techniques. Specifically, our method improves top-1 classification accuracy by at most 1.0% on CIFAR10 or 0.5% on ImageNet. Thus, our method is also a powerful training technique.
The parameter reduction is directly determined by the d_t we set, namely d_t ∈ {invalid, 0.2, 0.1, 0.05, 0.01} for CIFAR10 and d_t ∈ {invalid, 0.25, 0.1} for ImageNet. Without harming the accuracy, our method can compress VGG/ResNet weights by about 10× on CIFAR10, or 4× on ImageNet.
We also conducted the hardware acceleration simulation mentioned in Section 3.4 on a VGG16 model, which is trained on CIFAR10 under d_t = 0.1 and achieves ∆flops = 78% and acc = 91.45% (almost the same as the original performance). The numbers of clocks and memory rows (each row contains 3 * 3 * c_gi * c_go weights) required during the inference of an input image of shape 32*32 are drawn in Fig. 11. In the hardware implementation, the numbers of clocks and memory rows are reduced by 5× and 10× respectively. With lower clock and memory requirements, the power dissipation of the hardware is reduced accordingly.
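A back-of-the-envelope estimate (not the authors' simulator) shows where such savings come from: the number of resident weight-buffer rows and of MAC-array clocks both scale with the fraction of non-zero c_gi * c_go segments. The layer sizes below are assumptions.

# Rough estimate of weight-buffer rows and MAC-array clocks for one convolution layer,
# to illustrate where the 5x/10x savings originate.
def layer_cost(h_o, w_o, c_i, c_o, density=1.0, k=3, c_gi=64, c_go=8):
    segments = k * k * (c_i // c_gi) * (c_o // c_go)   # c_gi*c_go-sized weight rows
    kept = max(1, round(segments * density))           # full-zero rows are skipped
    memory_rows = kept                                  # rows resident in the weight buffer
    clocks = h_o * w_o * kept                           # one row -> one MAC-array clock
    return memory_rows, clocks

dense = layer_cost(32, 32, 256, 256)                # standard convolution (density = 1)
lhc   = layer_cost(32, 32, 256, 256, density=0.1)   # an LHC layer trained to d_t = 0.1
print("memory rows: %d -> %d, clocks: %d -> %d" % (dense[0], lhc[0], dense[1], lhc[1]))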
LHC vs GWC+Channel-Shuffle, DWC+PWC & Inception
Here ∆flops are evaluated when ShuffleNet/MobileNet/GoogLeNet and their LHC variants have similar accuracy. The GoogLeNet model, with its Inceptions rebuilt by LHC, achieves computation reduction ∆flops = 24.39% and parameter compression ∆param = 64.56%; the MobileNet model, with its DWC+PWC modules replaced, achieves ∆flops = 15.68% and ∆params = 18.82%; and the ShuffleNet model, with its GWC+channel-shuffle modules replaced, achieves ∆flops = 5.36% and ∆params = 11.73%. Clearly, the efficiency of GoogLeNet/ShuffleNet/MobileNet, whether well-designed or light-weight, can be further improved by LHC, which means the efficiency gain contributed by our layer-level method even beats that of beyond-layer-level methods.
Interestingly, the computation reduction amplitude of our method on ShuffleNet is obviously less than on the other two, which verifies the necessity of information exchange provided by channel-shuffle.
Discussions
Here we discuss why LHC works. The VGG16 models trained on ImageNet with d t = invalid are chosen, since other models have similar results.
What LHC Learns
The core difference between LHC and standard convolution is the dendritic topology plasticity, i.e., learning masks of suitable shapes for current task in a data-driven manner. We count the number of shapes learnt by the 3rd/7th/11th LHC layer, as shown in Fig. 12. The x-axis is the serial indexes of all 512 shapes drawn in Fig. 3, and the y-axis is the ratio a shape takes up in an LHC layer.
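The counting itself is straightforward; the sketch below derives a 9-bit free-shape index for every (c_gi, c_go) block of a weight tensor and histograms it. The bit ordering of the 512 indexes in Fig. 3 is not specified, so row-major ordering is assumed here, and the random weights are only for illustration.

import numpy as np

def shape_histogram(weights, c_gi=64, c_go=8):
    """Count the free-shape indices (0..511) used by an LHC layer, as in Fig. 12.
    `weights` has shape (3, 3, c_i, c_o); every (c_gi, c_go) block shares one shape."""
    _, _, c_i, c_o = weights.shape
    hist = np.zeros(512, dtype=int)
    for gi in range(c_i // c_gi):
        for go in range(c_o // c_go):
            block = weights[:, :, gi*c_gi:(gi+1)*c_gi, go*c_go:(go+1)*c_go]
            bits = (np.abs(block).sum(axis=(2, 3)) > 0).astype(int).reshape(9)
            hist[int("".join(map(str, bits)), 2)] += 1   # assumed row-major 9-bit index
    return hist / hist.sum()                             # ratio each shape takes up

# Example with random structurally sparse weights:
rng = np.random.default_rng(0)
w = rng.standard_normal((3, 3, 128, 64))
w *= rng.random((3, 3, 1, 1)) < 0.5          # knock out some whole 3x3 positions
print(np.nonzero(shape_histogram(w))[0])     # indices of the shapes that occur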
By comparing the shape distribution of different LHCR/LHCF layers, we find that: (1) low layers prefer denser shapes and contain almost no shape ⟨1⟩1, which is totally sparse; (2) high layers have the fewest dense shapes but the largest amount of shape ⟨1⟩1; (3) middle layers take an intermediate shape distribution.
This means that a sufficient number of kernels in the low layers is necessary for extracting enough basic patterns; yet, after the non-linearity of layer upon layer, the patterns in the feature maps become increasingly abstract and sparse, and thus only need sparser kernels to extract them.
How LHC Converges
We penetrate into the training process by inspecting how the shapes/masks in LHC layers evolve along with the training epochs. We calculate the mask correlation of an LHC layer between two adjacent epochs, as in Guillaume et al. (2018), i.e., the mean of the element-wise logical AND. We choose the 3rd/7th/11th layers of VGG16 built of LHCF to visualize in Fig. 13; LHCR gives similar results. The x-axis is the number of epochs and the y-axis is the correlation of masks between two epochs.

The correlation of masks in each layer increases as the training epochs proceed, but is always less than 100%, even when the model has essentially converged. This means our differentiable step function keeps the dendritic topology updating all the time (as, of course, is the synaptic strength), just like the highly dynamic biological neural networks in the brain (Holtmaat et al., 2005; Stettler et al., 2006).

Besides, the correlation of the lower layers is larger than that of the higher ones, which is also as expected, because the lower layers learn patterns that are more task-agnostic and thus require a more stable dendritic topology.

Fig. 13. Evolution of the masks along the training epochs of the 3rd/7th/11th LHCF layer. The x-axis is the epoch number; the y-axis is the mask correlation. The mask correlation grows as training proceeds, which means LHC masks generally evolve from dynamic states to more static states.

Fig. 14. The spectrums of the 3rd/7th/11th LHC layers and the corresponding standard convolution layers from the two models that are also used in Fig. 11. The x-axis is the spectrum components, and the y-axis is their amplitudes. The more the convolution layers' spectrum of a CNN model approximates a uniform distribution, the better the model's performance will be. Apparently, the spectrums of LHC layers are always more uniformly distributed than those of standard convolutions, which intuitively explains why, under appropriate conditions, models based on our method perform better than models based on the standard convolution.
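The mask-correlation metric above can be computed directly; below is a minimal sketch in which the correlation is taken, following the text, as the mean of the element-wise logical AND of the masks at two adjacent epochs (the toy mask evolution is purely illustrative).

import numpy as np

def mask_correlation(mask_prev, mask_curr):
    """Correlation between an LHC layer's binary masks at two adjacent epochs,
    taken here as the mean of the element-wise logical AND."""
    return np.logical_and(mask_prev, mask_curr).mean()

# Toy illustration: a mask that flips a shrinking fraction of its entries each epoch.
rng = np.random.default_rng(0)
mask = rng.random((3, 3, 128, 64)) < 0.5
for epoch in range(1, 6):
    flip = rng.random(mask.shape) < 0.2 / epoch     # fewer flips as training proceeds
    new_mask = np.where(flip, ~mask, mask)
    print(epoch, round(float(mask_correlation(mask, new_mask)), 3))
    mask = new_mask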
Why LHC Excels
As pointed out by OCNN (Wang, Chen, Chakraborty, & Yu, 2020), the spectrum of a convolution layer's DBT (doubly block Toeplitz) matrix reasonably indicates how well the CNN model's capacity is utilized: the more uniformly the spectrum is distributed, the better the model performs. As clearly shown in Fig. 14, our method's spectrums are always closer to a uniform distribution than the original's. Combined with Fig. 13, we can be fairly confident that it is the changing masks that force the model to utilize its capacity better.
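A rough way to inspect such spectra is sketched below: the layer is approximated by circular convolution on a small grid, so the singular values of a doubly block circulant stand-in for the DBT matrix are obtained from the per-frequency 2-D DFT of the kernel. This is an approximation in the spirit of OCNN, not the exact procedure of Wang et al. (2020); the grid size and random weights are arbitrary.

import numpy as np

def conv_spectrum(weights, n=16):
    """Approximate spectrum of a conv layer: treat it as circular convolution on an
    n x n grid, take the 2-D DFT of the kernel per (input, output) channel pair, and
    collect the singular values of the resulting c_o x c_i matrix at every frequency."""
    k, _, c_i, c_o = weights.shape
    padded = np.zeros((n, n, c_i, c_o))
    padded[:k, :k] = weights
    f = np.fft.fft2(padded, axes=(0, 1))          # shape (n, n, c_i, c_o)
    svals = []
    for u in range(n):
        for v in range(n):
            svals.extend(np.linalg.svd(f[u, v].T, compute_uv=False))
    return np.sort(np.asarray(svals))

rng = np.random.default_rng(0)
w = rng.standard_normal((3, 3, 16, 16)) / 3.0
s = conv_spectrum(w)
print(s.min(), np.median(s), s.max())   # a flatter profile indicates better-utilized capacity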
Now we know about the weights in a model built of LHC:
(1) there are a lot of zeros, up to 80+%, which can be taken as a case of L0/L1 regularization training, i.e., strength regularization;
(2) these zeros distribute in a structural way in the kernels, which further provides another kind of regularization, i.e., topology regularization;
(3) the position of these zeros changes along with the training steps, which can be seen as the dropout of the weights.
Hence, it is not difficult to understand why our method surpasses existing methods.
Conclusion
Our method LHC integrates the plasticity of both dendritic topology and synaptic strength, making both the kernel shapes and the weights learnable during training. CNN models built of LHC can be greatly sparsified structurally and can be accelerated in high parallelism. Experiments against existing convolution techniques show that LHC always achieves better performance at any computation reduction ratio; and experiments of rebuilding typical network structures show that LHC can improve their efficiency even further. Our method also achieves better CNN performance if taken as a training technique.
CRediT authorship contribution statement
Rongzhen Zhao: Conceptualization, Methodology, Software. Zhenzhi Wu: Methodology. Qikun Zhang: Software.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 1. A convolution layer (green; the biases and activation are ignored) between input feature maps (orange) and output features (blue). For the standard convolution, (1) the number of kernels in a layer is fixed at c_o, (2) the number of slices in each kernel is fixed at c_i, (3) the shape of each kernel slice (topology) is fixed at (k, k), and only (4) the values of the k * k elements (strength) in each kernel slice can be learnt. Our method makes (1)-(4) all learnable. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 2. Kernels of uncommon shapes hand-crafted by SWF.
Fig. 3. Two sets of uncommon shapes for LHC. (a) 15 rigid shapes: designed empirically in six groups; the digits on the top and right are their indexes, e.g., ⟨3⟩4. (b) 2^(3×3) = 512 free shapes: generated by arbitrarily setting each element to 0 or 1; the digits on every top-right corner are their serial indexes, e.g., 502. The former is a sub-set of the latter.
Fig. 4. Topology of LHC kernels. 1st row: LHC-KbK, where kernel slices have the same shape within a kernel and different shapes among kernels; 2nd row: LHC-SbS, where slices have different shapes both within and among kernels; 3rd row: LHCR with topology constraints c_gi and c_go, where slices are shaped into the rigid shapes and every c_gi * c_go slices share the same shape; 4th row: LHCF, where slices are shaped into the free shapes.
Fig. 5. Training principle of LHC.

…part of shape (3, 3, c_gi, c_go) for this position. By iterating all those c_i/c_gi × c_o/c_go positions, the masks M of shape (3, 3, c_i, c_o) are finally constructed.

Smoothing the Training Process

To smooth the training process, two warm-up tricks are employed.

Algorithm 1 Mask Enabling Warm-Up
Set the number of epochs n_warm for warm-up.
1. Calculate the incremental unit per epoch δ = 1/n_warm.
2. When the ith epoch begins: enable every mask with probability p_i = δ * (i − 1).
3. Conduct this training epoch.
4. Return to step 2 until n_warm is reached.

Algorithm 2 Mask Regularizing Warm-Up
Set the number of epochs n_warm, the weight for mask regularization α_t, and the target density d_t.
1. Calculate the incremental unit per epoch δ = α_t/n_warm.
2. Conduct a training epoch, and keep the task loss l_task.
3. Calculate the upper limit and the scale factor of this loss: l_max = abs(1.0 − d_t), f = l_task/l_max.
4. When epoch i begins: update the weight α = f * δ * (i − 1).

Suppose c_gi = 64 and c_go = 8; then 0.1953% of extra storage is paid for the topology plasticity.
Fig. 6. Parallel processing of LHC inference on hardware: how all invalid weights and computation are avoided, and how high parallelism is reached. Input buffer: input features are stored in a c_gi-aligned manner. Window buffer: elements in the current sliding window are copied from the input buffer. Weight buffer: weights are stored in a c_gi * c_go-aligned manner, with full-zero segments of length c_gi * c_go being skipped. AGU: address generation unit, counts addresses for the weight buffer incrementally. ALUT: address look-up table, stores addresses calculated in advance for the window buffer. At every convolution step, every row of c_gi * c_go weights in the weight buffer, and the corresponding row of length c_gi in the window buffer, are sent to the MAC array; then the c_go results are sent to c_go output registers, where the values accumulate 3 * 3 * c_i/c_gi times to get the final c_go results in the output features. With topology constraints c_gi ≡ 64 and c_go ≡ 8, 512-way parallel processing is achieved. If the batch dimension b (not drawn for simplicity) is taken into account, the parallelism can be further increased by b times.
Fig. 7. Time scheduling of LHC vs. traditional convolution. (a) The timing of an LHC layer, where its structural sparsity is fully utilized and thus all redundant clocks and other computational resources are saved. (b) The timing of a standard convolution layer, where sparsity (gray parts) is not exploited. Each column is c_gi × c_go parallel operations such as read/write, multiply-add and accumulation. Refer to Fig. 6 for the corresponding hardware logic. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 8. The unification of various convolution techniques. GWC, DWC and HetConv can all be taken as LHCs that use only certain shapes.
Fig. 9. Rebuilding GoogLeNet InceptionV4-A/B/C with LHC. The shapes used by the LHCs are the ones used in the original Inceptions. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 10. Curves of computation reduction vs. accuracy for different networks built of different techniques on different datasets. The x-axis ∆flops is the computation reduction in percentage, and the y-axis acc is the classification accuracy in percentage. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 12. Shape distribution of the 3rd/7th/11th layer of VGG16 built of LHCR and LHCF respectively. Left: LHCR; right: LHCF. The x-axis is the serial index of all 512 shapes; the y-axis is the ratio a shape takes up in an LHC layer.
Table 1. Notations used in this article.
Height/width of input/output features: h_i, w_i, h_o, w_o
Spatial size of a kernel: k * k, (k, k)
Shape of the kernels: (k, k, c_i, c_o)
Shape of input feature maps or ''feat in'': (b, h_i, w_i, c_i)
Shape of output feature maps or ''feat out'': (b, h_o, w_o, c_o)
Batch dimension of feature maps: b
Number of input/output channels: c_i, c_o
Topology constraint of input/output channels: c_gi, c_go
Tera Operations Per Second: TOPS
Floating Point Operation(s): FLOP(s)
Multiply Accumulate (unit): MAC
Table 2. Experiment items: candidate convolution techniques on mainstream networks. The techniques compared are Origin (standard convolution), GWC (Alex et al., 2012), DWC (Sandler et al., 2018), Inception (Szegedy, Ioffe, Vanhoucke, & Alemi, 2016), HetConv (Singh et al., 2019), LHC (ours), FPGM (He, Liu, Wang, Hu, & Yang, 2019) and Taylor (Molchanov, Mallya, Tyree, Frosio, & Kautz, 2019).
VGG16 (Simonyan & Zisserman, 2014): Origin, HetConv, LHC, FPGM, Taylor
VGG19 (Simonyan & Zisserman, 2014): Origin, HetConv, LHC, FPGM, Taylor
ResNet34 (He, Zhang, Ren, & Sun, 2016): Origin, HetConv, LHC, FPGM, Taylor
ResNet50 (He et al., 2016): Origin, HetConv, LHC, FPGM, Taylor
GoogLeNet (Szegedy, Ioffe, Vanhoucke, & Alemi, 2016): Inception, LHC
ShuffleNet (Zhang, Zhou, Lin, & Sun, 2018): GWC, LHC
MobileNet (Howard et al., 2017): DWC, LHC
AcknowledgmentsThis work was supported by ''Science and Technology Innovation 2030 -New Generation of Artificial Intelligence'', China project (2020AAA0109100) and Beijing Science and Technology Plan, China (Z191100007519009).
Imagenet classification with deep convolutional neural networks. Alex , K Sutskever, I Hinton, G E , Advances in neural information processing systems 25. Alex, K., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems 25.
R Andri, G Karunaratne, L Cavigelli, L Benini, arXiv:2005.07137ChewBaccaNN: A flexible 223 TOPS/W BNN accelerator. arXiv preprintAndri, R., Karunaratne, G., Cavigelli, L., & Benini, L. (2020). ChewBaccaNN: A flexible 223 TOPS/W BNN accelerator. arXiv preprint arXiv:2005.07137.
I I Beysolow, T , Convolutional neural networks (CNNs/ConvNets. Beysolow II, T. (2017). Convolutional neural networks (CNNs/ConvNets). https: //cs231n.github.io/convolutional-networks.
Dendritic spine dynamics. D H Bhatt, S Zhang, W Gan, Annual Review of Physiology. 711Bhatt, D. H., Zhang, S., & Gan, W. (2009). Dendritic spine dynamics. Annual Review of Physiology, 71(1), 261-282.
SeerNet: Predicting convolutional neural network feature-map sparsity through low-bit quantization. S Cao, L Ma, W Xiao, C Zhang, Y Liu, L Zhang, IEEE conference on computer vision and pattern recognition. Cao, S., Ma, L., Xiao, W., Zhang, C., Liu, Y., Zhang, L., et al. (2019). SeerNet: Pre- dicting convolutional neural network feature-map sparsity through low-bit quantization. In IEEE conference on computer vision and pattern recognition.
Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. X Chen, L Xie, J Wu, Q Tian, IEEE international conference on computer vision. Chen, X., Xie, L., Wu, J., & Tian, Q. (2019). Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. In IEEE international conference on computer vision.
Learning sparse neural networks through L 0 regularization. L Christos, W Max, P K Diederik, International conference on learning representations. Christos, L., Max, W., & Diederik, P. K. (2018). Learning sparse neural net- works through L 0 regularization. In International conference on learning representations.
Model compression and hardware acceleration for neural networks: A comprehensive survey. B L Deng, G Li, S Han, L Shi, Y Xie, Proceedings of the IEEE. 1084Deng, B. L., Li, G., Han, S., Shi, L., & Xie, Y. (2020). Model compression and hardware acceleration for neural networks: A comprehensive survey. Proceedings of the IEEE, 108(4), 485-532.
Differentiable soft quantization: Bridging full-precision and low-bit neural networks. R Gong, X Liu, S Jiang, T Li, P Hu, J Lin, IEEE international conference on computer vision. Gong, R., Liu, X., Jiang, S., Li, T., Hu, P., Lin, J., et al. (2019). Differentiable soft quantization: Bridging full-precision and low-bit neural networks. In IEEE international conference on computer vision.
Deep learning. I Goodfellow, Y Bengio, A Courville, MIT pressGoodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT press.
Deep rewiring: Training very sparse deep networks. B Guillaume, K David, M Wolfgang, L Robert, International conference on learning representations. Guillaume, B., David, K., Wolfgang, M., & Robert, L. (2018). Deep rewiring: Training very sparse deep networks. In International conference on learning representations.
Dendritic spine plasticity: Looking beyond development. K J Harms, A Dunaevsky, Brain Research. 11841Harms, K. J., & Dunaevsky, A. (2007). Dendritic spine plasticity: Looking beyond development. Brain Research, 1184(1), 65-71.
Filter pruning via geometric median for deep convolutional neural networks acceleration. Y He, P Liu, Z Wang, Z Hu, Y Yang, IEEE conference on computer vision and pattern recognition. He, Y., Liu, P., Wang, Z., Hu, Z., & Yang, Y. (2019). Filter pruning via geometric me- dian for deep convolutional neural networks acceleration. In IEEE conference on computer vision and pattern recognition (pp. 4340-4349).
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, IEEE conference on computer vision and pattern recognition. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In IEEE conference on computer vision and pattern recognition.
Transient and persistent dendritic spines in the neocortex in vivo. A Holtmaat, J T Trachtenberg, L Wilbrecht, G M Shepherd, X Zhang, G Knott, Neuron. 452Holtmaat, A., Trachtenberg, J. T., Wilbrecht, L., Shepherd, G. M., Zhang, X., Knott, G., et al. (2005). Transient and persistent dendritic spines in the neocortex in vivo. Neuron, 45(2), 279-291.
Searching for MobileNetV3. A Howard, M Sandler, G Chu, L Chen, B Chen, M Tan, IEEE international conference on computer vision. Howard, A., Sandler, M., Chu, G., Chen, L., Chen, B., Tan, M., et al. (2019). Searching for MobileNetV3. In IEEE international conference on computer vision.
A G Howard, M Zhu, B Chen, D Kalenichenko, W Wang, T Weyand, arXiv:1704.04861Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprintHoward, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., et al. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
Squeeze-and-excitation networks. J Hu, L Shen, G Sun, IEEE conference on computer vision and pattern recognition. Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In IEEE conference on computer vision and pattern recognition.
CondenseNet: An efficient denseNet using learned group convolutions. G Huang, S Liu, M L V Der, K Q Weinberger, IEEE conference on computer vision and pattern recognition. Huang, G., Liu, S., M.L.V., Der, & Weinberger, K. Q. (2018). CondenseNet: An efficient denseNet using learned group convolutions. In IEEE conference on computer vision and pattern recognition.
Knowledge distillation via route constrained optimization. X Jin, B Peng, Y Wu, Y Liu, J Liu, D Liang, IEEE international conference on computer vision. Jin, X., Peng, B., Wu, Y., Liu, Y., Liu, J., Liang, D., et al. (2019). Knowledge distil- lation via route constrained optimization. In IEEE international conference on computer vision.
MetaPruning: Meta learning for automatic neural network channel pruning. Z Liu, H Mu, X Zhang, Z Guo, X Yang, K Cheng, IEEE international conference on computer vision. Liu, Z., Mu, H., Zhang, X., Guo, Z., Yang, X., Cheng, K., et al. (2019). MetaPrun- ing: Meta learning for automatic neural network channel pruning. In IEEE international conference on computer vision.
ShuffleNet V2: Practical guidelines for efficient CNN architecture design. N Ma, X Zhang, H Zheng, J Sun, European conference on computer vision. Ma, N., Zhang, X., Zheng, H., & Sun, J. (2018). ShuffleNet V2: Practical guidelines for efficient CNN architecture design. In European conference on computer vision.
A million spiking-neuron integrated circuit with a scalable communication network and interface. P A Merolla, J V Arthur, R Alvarezicaza, A S Cassidy, J Sawada, F Akopyan, Science. 3456197Merolla, P. A., Arthur, J. V., Alvarezicaza, R., Cassidy, A. S., Sawada, J., Akopyan, F., et al. (2014). A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197), 668-673.
Importance estimation for neural network pruning. P Molchanov, A Mallya, S Tyree, I Frosio, J Kautz, IEEE conference on computer vision and pattern recognition. Molchanov, P., Mallya, A., Tyree, S., Frosio, I., & Kautz, J. (2019). Importance estimation for neural network pruning. In IEEE conference on computer vision and pattern recognition (pp. 11264-11272).
BinarEye: An always-on energy-accuracy-scalable binary CNN processor with all memory on chip in 28 nm CMOS. B Moons, D Bankman, L Yang, B Murmann, M Verhelst, IEEE custom integrated circuits conference. Moons, B., Bankman, D., Yang, L., Murmann, B., & Verhelst, M. (2018). BinarEye: An always-on energy-accuracy-scalable binary CNN processor with all mem- ory on chip in 28 nm CMOS. In IEEE custom integrated circuits conference (pp. 1-4).
Few-Shot image recognition with knowledge transfer. Z Peng, Z Li, J Zhang, Y Li, G Qi, J Tang, IEEE international conference on computer vision. Peng, Z., Li, Z., Zhang, J., Li, Y., Qi, G., & Tang, J. (2019). Few-Shot image recognition with knowledge transfer. In IEEE international conference on computer vision.
Extreme network compression via filter group approximation. B Peng, W Tan, Z Li, S Zhang, D Xie, S Pu, European conference on computer vision. Peng, B., Tan, W., Li, Z., Zhang, S., Xie, D., & Pu, S. (2018). Extreme network compression via filter group approximation. In European conference on computer vision (pp. 300-316).
MobileNetV2: Inverted residuals and linear bottlenecks. M Sandler, A Howard, M Zhu, A Zhmoginov, L Chen, IEEE conference on computer vision and pattern recognition. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In IEEE conference on computer vision and pattern recognition.
Understanding and improving convolutional neural networks via concatenated rectified linear units. W Shang, K Sohn, D Almeida, H Lee, International conference on machine learning. Shang, W., Sohn, K., Almeida, D., & Lee, H. (2016). Understanding and improving convolutional neural networks via concatenated rectified linear units. In International conference on machine learning (pp. 2217-2225).
Rigid-Motion scattering for texture classification. L Sifre, S Mallat, Pennsylvania State UniversitySIfre, L., & Mallat, S. (2014). Rigid-Motion scattering for texture classification. Pennsylvania State University.
Very deep convolutional networks for large-scale image recognition. K Simonyan, A Zisserman, arXiv:1409.1556arXiv preprintSimonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
HetConv: Heterogeneous kernel-based convolutions for deep CNNs. P Singh, V K Verma, P Rai, V P Namboodiri, IEEE conference on computer vision and pattern recognition. Singh, P., Verma, V. K., Rai, P., & Namboodiri, V. P. (2019). HetConv: Hetero- geneous kernel-based convolutions for deep CNNs. In IEEE conference on computer vision and pattern recognition.
Axons and synaptic boutons are highly dynamic in adult visual cortex. D D Stettler, H Yamahachi, W Li, W Denk, C D Gilbert, Neuron. 496Stettler, D. D., Yamahachi, H., Li, W., Denk, W., & Gilbert, C. D. (2006). Axons and synaptic boutons are highly dynamic in adult visual cortex. Neuron, 49(6), 877-887.
IGCV3: Interleaved low-rank group convolutions for efficient deep neural networks. K Sun, M Li, D Liu, J Wang, British machine vision conference. Sun, K., Li, M., Liu, D., & Wang, J. (2018). IGCV3: Interleaved low-rank group convolutions for efficient deep neural networks. In British machine vision conference.
Inception-v4, Inception-ResNet and the impact of residual connections on learning. C Szegedy, S Ioffe, V Vanhoucke, A A Alemi, Szegedy, C., Ioffe, S., Vanhoucke, V., & Alemi, A. A. (2016). Inception-v4, Inception-ResNet and the impact of residual connections on learning.
Going deeper with convolutions. C Szegedy, W Liu, Y Jia, P Sermanet, S Reed, D Anguelov, IEEE conference on computer vision and pattern recognition. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., et al. (2015). Going deeper with convolutions. In IEEE conference on computer vision and pattern recognition.
Rethinking the inception architecture for computer vision. C Szegedy, V Vanhoucke, S Ioffe, J Shlens, Z Wojna, IEEE conference on computer vision and pattern recognition. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the inception architecture for computer vision. In IEEE conference on computer vision and pattern recognition.
M Tan, Q V Le, arXiv:1907.09595Mixconv: Mixed depthwise convolutional kernels. arXiv preprintTan, M., & Le, Q. V. (2019). Mixconv: Mixed depthwise convolutional kernels. arXiv preprint arXiv:1907.09595.
Dynamic convolutions: Exploiting spatial sparsity for faster inference. T Verelst, T Tuytelaars, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognitionVerelst, T., & Tuytelaars, T. (2020). Dynamic convolutions: Exploiting spatial sparsity for faster inference. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2320-2329).
Orthogonal convolutional neural networks. J Wang, Y Chen, R Chakraborty, X Yu, IEEE conference on computer vision and pattern recognition. Wang, J., Chen, Y., Chakraborty, R., & Yu, X. (2020). Orthogonal convolutional neural networks. In IEEE conference on computer vision and pattern recognition (pp. 11505-11515).
Learning versatile filters for efficient convolutional neural networks. Y Wang, C Xu, X Chunjing, C Xu, D Tao, Advances in neural information processing systems. Wang, Y., Xu, C., Chunjing, X., Xu, C., & Tao, D. (2018). Learning versatile filters for efficient convolutional neural networks. In Advances in neural information processing systems (pp. 1608-1618).
Interleaved structured sparse convolutional neural networks. G Xie, J Wang, T Zhang, J Lai, R Hong, G Qi, IEEE conference on computer vision and pattern recognition. Xie, G., Wang, J., Zhang, T., Lai, J., Hong, R., & Qi, G. (2018). Interleaved structured sparse convolutional neural networks. In IEEE conference on computer vision and pattern recognition.
Shift-net: Image inpainting via deep feature rearrangement. Z Yan, X Li, M Li, W Zuo, S Shan, European conference on computer vision. Yan, Z., Li, X., Li, M., Zuo, W., & Shan, S. (2018). Shift-net: Image inpainting via deep feature rearrangement. In European conference on computer vision.
Side window filtering. H Yin, Y Gong, G Qiu, IEEE conference on computer vision and pattern recognition. Yin, H., Gong, Y., & Qiu, G. (2019). Side window filtering. In IEEE conference on computer vision and pattern recognition.
J Zhang, arXiv:1905.03672Seesaw-Net: Convolution neural network with uneven group convolution. arXiv preprintZhang, J. (2019). Seesaw-Net: Convolution neural network with uneven group convolution. arXiv preprint arXiv:1905.03672.
Interleaved group convolutions. T Zhang, G Qi, B Xiao, J Wang, IEEE international conference on computer vision. Zhang, T., Qi, G., Xiao, B., & Wang, J. (2017). Interleaved group convolutions. In IEEE international conference on computer vision.
ShuffleNet: An extremely efficient convolutional neural network for mobile devices. X Zhang, X Zhou, M Lin, J Sun, IEEE conference on computer vision and pattern recognition. Zhang, X., Zhou, X., Lin, M., & Sun, J. (2018). ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In IEEE conference on computer vision and pattern recognition.
Accelerate CNN via recursive Bayesian pruning. Y Zhou, Y Zhang, Y Wang, Q Tian, IEEE international conference on computer vision. Zhou, Y., Zhang, Y., Wang, Y., & Tian, Q. (2019). Accelerate CNN via recursive Bayesian pruning. In IEEE international conference on computer vision.
| []
|
[
"Why is Black Hole Entropy Affected by Rotation?",
"Why is Black Hole Entropy Affected by Rotation?"
]
| [
"Brett Mcinnes [email protected] \nNational University of Singapore\n\n"
]
| [
"National University of Singapore\n"
]
| []
| It is well known that an asymptotically flat four-dimensional Kerr black hole has a smaller (specific) entropy than a Schwarzschild black hole of the same mass. We show here that the same is true if the temperature, rather than the mass, is held fixed; and we also show that an asymptotically AdS 5 -Kerr black hole has a smaller specific entropy than an AdS 5 -Schwarzschild black hole of the same temperature, except in a negligibly small class of special examples. The AdS 5 -Kerr case is particularly interesting, because here the gaugegravity duality applies; if we further accept that there is a useful analogy between the strongly coupled field theories dual to AdS black holes and the best-understood example of a strongly coupled fluid (the Quark-Gluon Plasma), then we can apply QGP theory to predict the behaviour of black hole entropy in this case. The prediction agrees with our study of AdS 5 -Kerr entropy. The hope is that such results might lead ultimately to an identification of black hole microstates. | 10.1007/jhep02(2023)072 | [
"https://export.arxiv.org/pdf/2210.11751v3.pdf"
]
| 253,080,432 | 2210.11751 | 3de6942206d17b5c27362c71722be79784cec57a |
Why is Black Hole Entropy Affected by Rotation?
19 Feb 2023
Brett Mcinnes [email protected]
National University of Singapore
Why is Black Hole Entropy Affected by Rotation?
19 Feb 2023
It is well known that an asymptotically flat four-dimensional Kerr black hole has a smaller (specific) entropy than a Schwarzschild black hole of the same mass. We show here that the same is true if the temperature, rather than the mass, is held fixed; and we also show that an asymptotically AdS 5 -Kerr black hole has a smaller specific entropy than an AdS 5 -Schwarzschild black hole of the same temperature, except in a negligibly small class of special examples. The AdS 5 -Kerr case is particularly interesting, because here the gaugegravity duality applies; if we further accept that there is a useful analogy between the strongly coupled field theories dual to AdS black holes and the best-understood example of a strongly coupled fluid (the Quark-Gluon Plasma), then we can apply QGP theory to predict the behaviour of black hole entropy in this case. The prediction agrees with our study of AdS 5 -Kerr entropy. The hope is that such results might lead ultimately to an identification of black hole microstates.
Rotation and Black Hole Entropy
The celebrated Kerr black hole in the Cygnus X-1 system has many interesting properties. One of the less-discussed of these properties is that the specific entropy (that is, the entropy [1,2] per unit mass) of this black hole has only slightly more than half of the value it would have if it were a Schwarzschild black hole of the same mass. This can be deduced from the formula for the specific entropy of a Kerr black hole (see below) and from the fact that the angular momentum of the Cygnus X-1 black hole is close to the maximum value permitted by Cosmic Censorship: the angular momentum per mass squared (in Planck units), a*, which is unity for an exactly extremal Kerr black hole, is claimed in [3] to be at least 0.9696 for Cygnus X-1 (see footnote 1). This is not a small effect. Astrophysical black holes notoriously have very large entropies [6], even when their large masses are taken into account. (This last point is one reason why, throughout our discussion below, we focus on the specific entropy, and not the entropy itself.) This means that the rapid rotation of black holes like the one in Cygnus X-1 not only reduces the entropy, it reduces it by an amount which is very substantial even by astrophysical standards.
The rapid rotation in itself is not a mystery: there are well-developed detailed theories of the formation of such black holes (see for example [7]), explaining why their angular momenta are so large, and the final total entropy is of course still (much) larger than the entropy of the initial system, in agreement with the Second Law.
What is mysterious is that rotation should be in any way related to the black hole entropy 2 . It is true that the "rotation" of a black hole is actually a rather subtle matter; a black hole is not a physical object, so it does not "rotate" in the usual sense. Even leaving this to one side, a black hole does not rotate as if it were a rigid body -for example, when the two horizons are distinct they have different angular velocities [10]. Even taking these subtleties into account, however, it remains far from clear why this admittedly complex "rotation" should affect the specific entropy.
The reader may protest that this mystery is just one of many resulting from the fact that the microstates associated with black hole entropy are not well understood. We prefer to turn this around: if we could understand (even in special cases) why rotation reduces black hole (specific) entropy, then this might yield insights which could prove useful to the search for those microstates 3 .
The reader might also protest: why make comparisons at fixed mass? Would it not be more natural to compare the specific entropies of rotating and non-rotating black holes at the same temperature? In this work, we will do this; and we find that it does not change the above discussion in its essentials: in the asymptotically flat case, a rotating black hole always has a lower specific entropy than a non-rotating black hole at the same temperature. (The asymptotically AdS case is more complex, but in nearly all cases the conclusion is the same.) The fixed-temperature case is actually the more interesting one for our purposes here, in particular for the (holographic) application to be discussed below (since temperature has a clear interpretation on both sides of the duality, and because the temperature is in fact approximately fixed when one makes comparisons using actual strongly coupled matter), so, after Section 3 below, we always take it as the basis of comparison.

Footnote 1: A more recent analysis [4] suggests slightly smaller, but still very large values, in the 0.86 to 0.92 range. Of course, this black hole is by no means unique: for example, the black hole associated with the GW200129 gravitational waves is thought [5] to have a value of a* which may exceed 0.9.

Footnote 2: We note in passing that this phenomenon plays an important role in applications of the Second Law of thermodynamics to black hole fragmentation [8] (because the fact that extremal black holes have the smallest possible entropy among black holes of a given mass allows one to put a lower bound on the entropy of the fragments), and it is fundamental in studies of the Penrose bound [9].

Footnote 3: For example, one can ask this question in the context of the recent remarkable results discussed in [11,12].
One might try to explain the relation between rotation and entropy through a deep analysis of black hole entropy itself. However, even the most basic questions here have long been a matter of debate [13]; even defining an entropy for an intrinsically gravitational system is already notoriously difficult [14][15][16]. Attempts to understand it (for example) in terms of densities of states lead to all manner of complications [17] (see also [18] for a recent discussion of the issues).
In view of all this, we prefer to focus on trying to understand our problem in a limited but still very instructive context: that of black holes with a holographic interpretation.
Rotation and Holography
The gauge-gravity duality [19][20][21][22][23] offers the possibility of approaching black hole entropy through a study of conformal field theories and related strongly coupled field theories. This approach has enjoyed several notable recent successes [24][25][26] in computing black hole entropies, and there is some hope [27] that it might in future be developed to the point where the precise duals of black hole microstates can be explicitly identified.
Some holographic formulations of the problem of identifying black hole microstates emphasise the possibility that the latter might perhaps be not very different from the microstates of other forms of strongly coupled matter. The concept of "black hole molecules" [28] belongs to this category; see also more recent work along these lines summarised in [29,2].
In this spirit, it might be possible to understand the relation between black hole entropy and rapid rotation by studying another strongly coupled system in which ultrahigh angular momentum to energy density ratios are to be found, namely the Quark-Gluon Plasma (QGP) produced in high energy collisions of nuclei [30]. These plasmas exhibit attractor behaviour [31] and constitute a well-defined thermodynamic system. When the collision is peripheral, that is, not "head-on", one can expect very large specific angular momenta to be generated, and this has been confirmed experimentally in the STAR observations [32] at the RHIC facility (see [33] for up-to-date references). In this system, one can compare the "vortical QGP" with non-vortical plasmas produced in central collisions in the same beam, that is, at the same impact energy (and, approximately, the same temperature).
These systems are relevant here because, as is well known, the real, strongly coupled QGP resembles, in some ways, the field theories appearing in gauge-gravity duality. (See [34][35][36][37] and their references for the application of holography to rotating strongly coupled matter). If we accept this resemblance as a heuristic guide, then the various physical parameters describing the QGP are mapped to analogous parameters of an asymptotically AdS 5 black hole: temperature to Hawking temperature, the plasma angular momen-tum/energy density ratio to the specific angular momentum 4 of an AdS 5 -Kerr black hole, the QGP entropy/energy density ratio to the specific entropy of that black hole, and so on. Now a comparison of the entropy densities of vortical and non-vortical plasmas produced in high-energy collisions can be made rather explicitly, as follows. The vorticities reported [32] for peripheral collisions are thought to be [38,39] averages over small vortices produced in shearing layers. These small vortices tend to align with each other, due to spin-spin coupling of the angular momentum vectors of the underlying particles. This gives rise to two observable phenomena.
First, in the specific case of the decay of Λ/Λ hyperons produced in heavy ion collisions, protons are emitted along the direction of the spin of the parent hyperon, and this polarization effect is the one reported in the relevant STAR experiments.
The spin-spin coupling also gives rise to a magnetic field [40,41], in close analogy to the classical Barnett effect [42]. These (very large) magnetic fields, arising in peripheral but not central collisions, have been studied extensively: see for example [43][44][45][46]. They can give rise to observable effects, and in many circumstances they can act as a proxy for the average vorticity, with which they are correlated.
The key point is this: the Barnett-effect alignments constrict the relevant phase space, reducing the entropy density/energy density ratio (or the entropy per particle) at a given temperature. Thus we expect the plasmas with high angular momentum densities to have smaller specific entropies than their counterparts produced in central collisions at the same impact energy, producing plasmas of approximately the same temperature.
In fact, phenomenological studies [47] strongly suggest that a decline in entropy density with increasing magnetic fields (at fixed temperature) is indeed a quite general property of QCD thermodynamics, though this is only confirmed at low temperatures; however there are indications of similar phenomena occurring in the QGP (see for example [48]). It is reasonable to expect that the same is true when the magnetic field is generated by the Barnett effect. The influence of rotation on the thermodynamics of the QGP remains a topic of great current interest [49], and perhaps this prediction can be confirmed using such methods.
As with any application of holography, one has to be aware of the limitations of as well as the insights gained from this method [50]. In the case of the usual AdS 5 -Kerr black hole (with a topologically spherical event horizon), the dual system is not a simple vortex in a strongly coupled fluid: it is an entire rotating three-sphere. However, the Barnett effect involves the entropy per particle, as we mentioned earlier. We can reasonably hope to capture this behaviour holographically by focusing exclusively on an intensive quantity, the specific entropy, which by construction is insensitive to the sizes of the two systems. (The fact that the boundary space is curved should not be a problem, since [34] this curvature is quite small.) This is another reason for us to focus on this particular quantity: we do not necessarily expect to be able to model the entropy itself in this manner.
To the extent that the real QGP mirrors the properties of field theories that are dual to AdS 5 bulk black holes -or, to put it differently, if strongly coupled matter is always subject to the analogue of the Barnett effect -holographic duality then predicts that rotating AdS 5 black holes should have lower specific entropies than their non-rotating counterparts at the same temperature.
As was mentioned briefly in the preceding section, in this work we will confirm this prediction by showing that rotation does (nearly) always reduce the specific entropy of asymptotically AdS 5 black holes, when the temperature is held fixed as the basis of comparison. To be precise, we find that, for extremely small values of the specific angular momentum, the specific entropy (in the case of so-called "large" AdS 5 black holes) actually rises at first, but only when the temperature is carefully adjusted to a value close to the minimal possible value for an asymptotically AdS 5 -Schwarzschild black hole 5 . Furthermore, even when it occurs at all, this effect is very quickly reversed as the specific angular momentum increases. Generically, then, the tendency is for the specific entropy of an AdS 5 black hole to decrease as the specific angular momentum increases, at fixed temperature.
To summarize this whole discussion: in the concrete example of the QGP produced in heavy ion collisions, one has a rather clear statistical-mechanical picture, in terms of physics similar to that underlying the Barnett effect (and its generalization [40,41] to the QGP), of the decrease of the ratio of the entropy and energy densities as we move from central to peripheral collisions in the same beam (with temperature therefore approximately fixed). If this is a universal property of strongly coupled matter, holography then predicts the decrease of the specific entropy of AdS 5 -Kerr black holes as their specific angular momenta are increased to large values at fixed temperature. The main technical objective of this work is to show that, with minor exceptions, this is in fact the case. In this sense, we have an explanation, admittedly only in the AdS case, of the fact that rotation affects (in fact, reduces) black hole specific entropy.
While our primary concern here is with rotating black holes, electromagnetically charged black holes are also of interest and often throw light on the (much) more intricate rotating case. These Reissner-Nordström and AdS 5 -Reissner-Nordström black holes were long regarded as unphysical, but they have recently undergone a revival of interest: see for example [51,52]. In particular, the magnetically charged case has attracted considerable attention [53,54]. Again, electromagnetic charges reduce the specific entropy (at either fixed mass or fixed temperature). Indeed, these black holes are expected [55] generically to be close to extremality, and an extremal Reissner-Nordström black hole has an even smaller specific entropy than an extremal Kerr black hole of the same mass: in the asymptotically flat case it has only one quarter of the specific entropy of the corresponding Schwarzschild black hole.
We will consider three cases: asymptotically flat, four-dimensional black holes at fixed mass -this because the details are so simple that the issues can be seen clearly and explicitly -and then asymptotically flat, four-dimensional black holes at fixed temperature, and finally the case of real interest to us, asymptotically AdS, five-dimensional black holes at fixed temperature, which are technically considerably more challenging.
Before we begin, note that in all cases we will be computing entropies from the areas of event horizons, which of course means that we assume that those horizons exist: that is, we assume that Cosmic Censorship holds in all cases. In the strict sense, this can be disputed, but it now seems likely [56] that if naked singularities can occur, they are severely constricted in both space and time; that is, they arise in certain very limited regions of highly dynamical spacetimes, for example when black holes collide or bifurcate. This is probably true even in the asymptotically AdS case [57]. Since we are only interested in non-dynamical spacetimes here, we anticipate no difficulties on this score, and so we assume Censorship henceforth.
Four-Dimensional, Asymptotically Flat, Fixed Mass
The First Law of black hole thermodynamics [1] takes the familiar form 6
dM = T dS + ΩdJ + ΦdQ,(1)
where T is the Hawking temperature, S is the black hole entropy, Ω is the angular velocity of a light ray skimming the horizon, Φ is the potential at the horizon, M is the black hole mass, J is the black hole angular momentum, and Q is the charge. It is clear that, if we increase the charge or the angular momentum while fixing the mass, then dS < 0; the entropy (and likewise the specific entropy) has to be smaller. This statement is readily made explicit. For example, an asymptotically flat fourdimensional Reissner-Nordström black hole with mass M, charge Q, and metric
\[
g^{(\mathrm{AFRN}_4)} = -\left(1 - \frac{2M\ell_4^2}{r} + \frac{Q^2\ell_4^2}{4\pi r^2}\right)dt^2
+ \frac{dr^2}{1 - \dfrac{2M\ell_4^2}{r} + \dfrac{Q^2\ell_4^2}{4\pi r^2}}
+ r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right), \tag{2}
\]
has a specific entropy given by
\[
s_{\mathrm{AFRN}_4} = \frac{\pi r_H^2}{\ell_4^2 M}
= \pi M \ell_4^2\left[\,1 + \left(1 - \frac{Q^2}{4\pi \ell_4^2 M^2}\right) + 2\sqrt{1 - \frac{Q^2}{4\pi \ell_4^2 M^2}}\,\right]. \tag{3}
\]
Here r H is the radial coordinate at the event horizon, and ℓ 4 is the four-dimensional Planck length. (The extremal case is Q 2 = 4πℓ 2 4 M 2 , so indeed, as claimed above, the (specific) entropy in that case is one quarter of the corresponding Schwarzschild specific entropy, 4πMℓ 2 4 .) It is obvious in this case that the specific entropy is indeed a monotonically decreasing function of the charge, assuming the mass to be fixed. The point is simply that adding charge causes the event horizon to contract (for fixed mass) and of course this reduces the specific entropy. This does not explain the reduction in statistical-mechanical terms, but at least one has a clear intuitive picture of it.
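A quick numerical check of this behaviour can be made in units where ℓ₄ = 1; the short script below is only an illustration of Eq. (3), not part of the original analysis.

import numpy as np

# Eq. (3) with ell_4 = 1: at fixed mass, the specific entropy falls monotonically
# with Q, reaching 1/4 of the Schwarzschild value 4*pi*M at extremality Q^2 = 4*pi*M^2.
def s_RN_fixed_mass(M, Q):
    x = 1.0 - Q**2 / (4.0 * np.pi * M**2)
    return np.pi * M * (1.0 + x + 2.0 * np.sqrt(x))

M = 1.0
for frac in [0.0, 0.25, 0.5, 0.75, 1.0]:
    Q = frac * np.sqrt(4.0 * np.pi) * M          # fraction of the extremal charge
    print(frac, s_RN_fixed_mass(M, Q) / (4.0 * np.pi * M))
# The printed ratio decreases monotonically from 1.0 to 0.25.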
Let us now consider the asymptotically flat four-dimensional Kerr metric,
\[
g^{(\mathrm{AFK}_4)} = -\frac{\Delta_r}{\rho^2}\left(dt - j\sin^2\theta\, d\phi\right)^2
+ \frac{\rho^2}{\Delta_r}\, dr^2 + \rho^2\, d\theta^2
+ \frac{\sin^2\theta}{\rho^2}\left(j\, dt - \left(r^2 + j^2\right) d\phi\right)^2, \tag{4}
\]
where
\[
\rho^2 = r^2 + j^2\cos^2\theta, \qquad \Delta_r = r^2 + j^2 - 2M\ell_4^2\, r, \tag{5}
\]
and where M is the mass, and j is the specific angular momentum, of the black hole.
(The reason for the slightly unconventional notation here will be explained later.)
The effect of rotation on the entropy is less clear here than in the Reissner-Nordström case, because rotation distorts the event horizon, causing it to become an oblate spheroid; the area is now 4π (r 2 H + j 2 ), where r H denotes the location of the event horizon. If r H remained constant, then clearly rotation would increase the specific entropy, because of the presence of the j 2 term in the formula for the area.
The resolution is that, once again, rotation causes the radius to become smaller for fixed mass, and this effect is larger than and overcomes the effect of the distortion of the event horizon; but at a qualitative level it is not clear that this is so. It can only be seen from the explicit formula for the specific entropy in this case:
$$ s_{\mathrm{AFK}_4} \;=\; \frac{\pi\left(r_H^2 + j^2\right)}{\ell_4^2 M} \;=\; 2\pi M\ell_4^2\left[\,1 \;+\; \sqrt{1 - \frac{j^2}{\ell_4^4 M^2}}\,\right]. \tag{6} $$
Again, the specific entropy decreases with j for fixed mass, as the First Law requires. But we see that the reduction is smaller than in the Reissner-Nordström situation: in the extremal case, $j^2 = \ell_4^4 M^2$, the specific entropy is one half of the value $4\pi M\ell_4^2$ for a Schwarzschild black hole of the same mass, not one quarter.
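The "competition" described above can also be made concrete numerically; the following is a small illustrative sketch (my own, not from the source), again in Planck units with $\ell_4 = 1$ and the arbitrary choice M = 1:

```python
# Check of Eq. (6) at fixed mass (Planck units, l_4 = 1): the horizon radius
# shrinks with j, the area gains a +j^2 term, and the net specific entropy
# decreases, reaching one half (not one quarter) of 4*pi*M at extremality.
import numpy as np

M = 1.0
j = np.linspace(0.0, M, 500)                    # 0 <= j <= j_extremal = M (l_4 = 1)
rH = M + np.sqrt(M**2 - j**2)                   # outer horizon radius, shrinks with j
s = np.pi * (rH**2 + j**2) / M                  # left-hand side of Eq. (6)
s_closed = 2 * np.pi * M * (1 + np.sqrt(1 - j**2 / M**2))   # right-hand side of Eq. (6)

assert np.allclose(s, s_closed)
assert np.all(np.diff(s) < 0)
print(s[-1] / s[0])                             # -> 0.5
```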
In summary: in the Kerr case, rotation has two, opposite, effects on the specific entropy, because it affects both the size and the shape of the black hole. Unless the First Law is invoked, it is not immediately clear which effect will prevail. Clearly, the rotating case is considerably more subtle than its charged counterpart.
Next we turn to the case in which the temperatures are fixed, rather than the masses.
Four-Dimensional, Asymptotically Flat, Fixed Temperature
In this section, we repeat the computations above, but now keeping the Hawking temperature fixed as we vary the charge or angular momentum. This means that we must avoid expressions involving the mass explicitly, since the mass will vary in some way that is hard to control. This in turn means that we have no guarantee from the First Law that the specific entropy will necessarily decrease with increasing charge or angular momentum. For the remainder of this work, we never allow the Hawking temperature to be zero; that is, we exclude extremal black holes. The reasons for this exclusion will be explained later, when we consider our primary interest, AdS 5 -Kerr black holes. Henceforth, therefore, we permit ourselves to divide by the Hawking temperature where necessary.
We begin with the charged case. The Hawking temperature of a four-dimensional, asymptotically flat Reissner-Nordström black hole can be put into the form
$$ T_{\mathrm{AFRN}_4} \;=\; \frac{1}{4\pi r_H} \;-\; \frac{Q^2\ell_4^2}{16\pi^2 r_H^3}\,, \tag{7} $$

where, as explained, we have eliminated the mass, using the definition of $r_H$. Similarly, the specific entropy can be expressed in a way such that the mass does not appear:

$$ s_{\mathrm{AFRN}_4} \;=\; \frac{2\pi r_H}{1 + \dfrac{Q^2\ell_4^2}{4\pi r_H^2}}\,. \tag{8} $$
It is easily shown that, if we fix $T_{\mathrm{AFRN}_4}$ and then use (7) to regard $r_H$ as a function of Q, then it is a decreasing function. Consequently, (8) implies that $s_{\mathrm{AFRN}_4}$ is still a decreasing function of the charge, just as it was in the preceding section.
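Since the explicit formula given below is unwieldy, a quick numerical confirmation may be helpful. The sketch below is my own (Planck units $\ell_4 = 1$, an arbitrary fixed temperature); it inverts (7) for $Q^2$ on the relevant branch and evaluates (8):

```python
# Fixed-temperature Reissner-Nordstrom check: as |Q| grows, r_H obtained from
# Eq. (7) decreases, and the specific entropy of Eq. (8) decreases from 1/(2T)
# at Q = 0 towards its lower bound 1/(4T).  (Planck units, l_4 = 1.)
import numpy as np

T = 0.25
rH = np.linspace(1.0 / (6 * np.pi * T), 1.0 / (4 * np.pi * T), 500)[::-1]  # decreasing
Q2 = 4 * np.pi * rH**2 - 16 * np.pi**2 * T * rH**3        # Eq. (7) solved for Q^2
s = 2 * np.pi * rH / (1.0 + Q2 / (4 * np.pi * rH**2))     # Eq. (8)

assert np.all(np.diff(Q2) > 0) and np.all(np.diff(rH) < 0)  # r_H falls as Q grows
assert np.all(np.diff(s) < 0)                               # and so does s
print(s[0] * T, s[-1] * T)                                  # -> 0.5 and 0.25
```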
More explicitly: solving (7) for $r_H$, and substituting the result into (8), one obtains an expression for $s_{\mathrm{AFRN}_4}$ in terms of $T_{\mathrm{AFRN}_4}$ (which we abbreviate to $T$) and Q:

$$ s_{\mathrm{AFRN}_4} \;=\; \frac{1}{6T}\;\frac{\mathcal{N}}{1 + \dfrac{36\pi T^2Q^2\ell_4^2}{\mathcal{N}^2}}\,, \qquad \mathcal{N} \;\equiv\; \sqrt[3]{\Lambda} + \frac{1}{\sqrt[3]{\Lambda}} + 1\,, \qquad \Lambda \;\equiv\; 1 - 54\pi Q^2T^2\ell_4^2 + 6\sqrt{3}\,Q\ell_4 T\sqrt{\pi\left(27\pi Q^2T^2\ell_4^2 - 1\right)}\,. \tag{9} $$
One sees that the level of complexity here has risen very substantially (compare with (3)). It is by no means clear that the function on the right side of (9) should have any particular monotonicity property. But, as we know, it does: for all fixed $T_{\mathrm{AFRN}_4}$, $s_{\mathrm{AFRN}_4}$ is monotonically decreasing on its domain (which is determined by the fact that there is a (mass-independent) upper bound on the charge when the temperature is fixed). For example, if we (temporarily) use units in which $\ell_4 = 1$ and set $T_{\mathrm{AFRN}_4} = 0.25$, then the graph of $s_{\mathrm{AFRN}_4}$ is shown as Figure 1.
A straightforward calculation shows that $r_H$ is bounded below by $1/(6\pi T_{\mathrm{AFRN}_4})$, and that the specific entropy is bounded below by $1/(4T_{\mathrm{AFRN}_4})$, which is exactly one half of the value when Q = 0. This contrasts sharply with the minimal value (one quarter) in the preceding section. That is, when the temperature is fixed rather than the mass, the specific entropy is still reduced by increasing the charge, but much less effectively. (The upper bound on the charge mentioned above is $|Q| \leq 1/\left(3\sqrt{3\pi}\,\ell_4 T_{\mathrm{AFRN}_4}\right)$.)

Turning now to the Kerr case: the temperature is [58][59][60]

$$ T_{\mathrm{AFK}_4} \;=\; \frac{r_H\left(1 - \dfrac{j^2}{r_H^2}\right)}{4\pi\left(j^2 + r_H^2\right)}\,. \tag{10} $$
One can show that, if we use this to think of r H as a function of j for fixed T AFK 4 , then it is a decreasing function. As before, we wish to express the specific entropy in a manner that does not involve M explicitly. To that end, we set ∆ r = 0 in (5) and use that equation to express M in terms of r H and j. The result is surprisingly simple:
$$ s_{\mathrm{AFK}_4} \;=\; \frac{\pi\left(r_H^2 + j^2\right)}{\ell_4^2 M} \;=\; 2\pi r_H. \tag{11} $$
In this case there is no "competition", and so it is immediate that the effect of increasing the specific angular momentum while fixing the temperature is to cause the specific entropy to decrease. Explicitly, and again abbreviating T AFK 4 to T , we have
$$ s_{\mathrm{AFK}_4} \;=\; \frac{1}{6T}\left[\;\sqrt[3]{\Lambda_j} \;-\; \frac{48\pi^2T^2j^2 - 1}{\sqrt[3]{\Lambda_j}} \;+\; 1\;\right], \qquad \Lambda_j \;\equiv\; 1 - 288\pi^2T^2j^2 + 12\sqrt{3}\,\pi T j\sqrt{256\pi^4T^4j^4 + 176\pi^2T^2j^2 - 1}\,. \tag{12} $$
Similarly to the Reissner-Nordström case, here there is a (mass-independent) upper bound on j for fixed $T_{\mathrm{AFK}_4}$, namely $j \leq \sqrt{2}\left(\sqrt{5}-1\right)^{5/2}\!\big/\!\left(32\pi T_{\mathrm{AFK}_4}\right)$. This corresponds to a lower bound on the specific entropy itself, given by

$$ s_{\mathrm{AFK}_4} \;\geq\; \frac{\sqrt{5}-1}{4T_{\mathrm{AFK}_4}} \;\approx\; \frac{0.309}{T_{\mathrm{AFK}_4}}\,. \tag{13} $$
This means that the specific entropy can in this case never be smaller than approximately 61.8% of the specific entropy of a non-rotating black hole of the same temperature (which is 1/(2T AFK 4 )). As in the Reissner-Nordström case, angular momentum is less effective in reducing the specific entropy when the temperature is fixed than when the mass is fixed.
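A short numerical sketch (mine, not from the source) confirms the bound (13) and the 61.8% figure; it parametrizes the fixed-temperature family by $x = 4\pi T r_H$, which runs from $(\sqrt{5}-1)/2$ (maximal j) to 1 (j = 0):

```python
# Check of Eqs. (10)-(13) at fixed temperature T: r_H, and hence s = 2*pi*r_H,
# decreases as j grows, and s is bounded below by (sqrt(5)-1)/(4T), about
# 61.8% of the j = 0 value 1/(2T).
import numpy as np

T = 0.1
x = np.linspace((np.sqrt(5) - 1) / 2, 1.0, 500)       # x = 4*pi*T*r_H on the physical branch
rH = x / (4 * np.pi * T)
j2 = rH**2 * (1 - x) / (1 + x)                        # Eq. (10) solved for j^2
s = 2 * np.pi * rH                                    # Eq. (11)

assert np.all(np.diff(j2) < 0)                        # larger r_H <-> smaller j
print(s[0] * T, (np.sqrt(5) - 1) / 4)                 # both ~0.309
print(s[0] / s[-1])                                   # ~0.618 of the j = 0 value
```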
For an example illustrating these points, if we use four-dimensional Planck units and set the temperature at 0.1, then the graph of s AFK 4 is shown as Figure 2. With this preparation, we turn to the case of five-dimensional asymptotically AdS 5 black holes, with a fixed temperature.
Five-Dimensional, Asymptotically AdS 5 , Fixed Temperature
With a view to applying the gauge-gravity duality, let us now turn to the asymptotically AdS, five-dimensional case. (For simplicity, we will assume that the topology of the event
horizon, in both cases to be considered, is that of the three-sphere. See for example [61] for a recent discussion of other choices. For other aspects of the thermodynamics of AdS black holes, see for example [62].) For reasons to be explained, it turns out that the AdS 5 -Kerr case is very intricate. We therefore begin with the asymptotically AdS 5 -Reissner-Nordström black hole, which is much simpler and yet surprisingly similar to the rotating case in practice.
AdS 5 -Reissner-Nordström, Fixed Temperature
The metric in this case is
$$ g(\mathrm{AdSRN}_5) \;=\; -\left(1 + \frac{r^2}{L^2} - \frac{8M\ell_5^3}{3\pi r^2} + \frac{k_5Q^2\ell_5^3}{3\pi^3 r^4}\right)dt^2 \;+\; \frac{dr^2}{1 + \dfrac{r^2}{L^2} - \dfrac{8M\ell_5^3}{3\pi r^2} + \dfrac{k_5Q^2\ell_5^3}{3\pi^3 r^4}} \;+\; r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2 + \cos^2\theta\, d\psi^2\right). \tag{14} $$
Here M and Q are the physical mass and charge, L is the asymptotic AdS$_5$ curvature length scale, $\ell_5$ is the AdS$_5$ Planck length, $k_5$ is the five-dimensional Coulomb constant (with units of length, as mentioned earlier), and the coordinates on the three-sphere are Hopf coordinates$^9$. The Hawking temperature can be expressed as

$$ T_{\mathrm{AdSRN}_5} \;=\; \frac{1}{4\pi}\left(\frac{4r_H}{L^2} + \frac{2}{r_H} - \frac{2k_5Q^2\ell_5^3}{3\pi^3 r_H^5}\right), \tag{15} $$

where $r_H$ locates the event horizon as before. The specific entropy is

$$ s_{\mathrm{AdSRN}_5} \;=\; \frac{\pi^2 r_H^3}{2\ell_5^3 M} \;=\; \frac{4\pi r_H}{3\left(1 + \dfrac{r_H^2}{L^2} + \dfrac{k_5Q^2\ell_5^3}{3\pi^3 r_H^4}\right)}\,. \tag{16} $$
One can usefully write this, by using equation (15), as
$$ s_{\mathrm{AdSRN}_5} \;=\; \frac{4\pi r_H}{3\left(2 + \dfrac{3r_H^2}{L^2} - 2\pi T_{\mathrm{AdSRN}_5}\, r_H\right)}\,. \tag{17} $$
Now recall that AdS 5 -Schwarzschild black holes cannot have arbitrarily small temperatures: the minimal temperature is √ 2/ (πL). This is thought [63] to reflect holographically the fact that strongly coupled matter at zero baryonic chemical potential cannot exist below a certain temperature: it confines. For any temperature above √ 2/ (πL), there are two such black holes, one with a larger event horizon than the other; one speaks of "small" and "large" black holes. The "large" black holes are the ones that actually describe strongly coupled matter holographically, since (at these temperatures) they have lower action and more conventional thermodynamic behaviour (notably, positive specific heat) than asymptotically flat black holes. In addition, they are able to attain equilibrium [64]. This reflects the actual thermodynamics of the equivalent boundary matter, which we hope to use to study the behaviour of the dual black hole.
Henceforth, then, we mainly confine attention to AdS 5 black holes which, when they have no charge or angular momentum, are "large". We then study the effects of adding charge or angular momentum to these black holes, while fixing the temperature as usual. This means that (nearly) all of our black holes have temperature greater than or equal to √ 2/ (πL); in particular, it means that we do not consider extremal black holes, as has been our understanding in all of our discussions thus far. (AdS 5 -Kerr black holes with temperatures less than √ 2/ (πL) will however be discussed briefly below, when we discuss the possible superradiant instability of such black holes.)
We begin by observing that we can use equation (15) to regard r H as a function of Q, while fixing the temperature. Ideally we would like to solve (15) for r H and then use the result to express s AdSRN 5 as a function of Q. Unfortunately that is not feasible (since (15) is a sextic in r H ), so we proceed more indirectly, by studying the nature of the relationship between r H and Q.
If we fix the temperature at some value above $\sqrt{2}/(\pi L)$, and graph $r_H$ as a function of Q, then we find that the graph consists of two disconnected parts, corresponding on the vertical axis to "large" and "small" AdS$_5$-Schwarzschild black holes. The branch of the graph corresponding to the "large" black hole of some given temperature is of course the upper one: see for example Figure 3. We see now that adding electric charge to a "large" AdS$_5$-Schwarzschild black hole, while keeping its temperature fixed, actually increases$^{10}$ $r_H$. Note that there is no upper bound on $r_H$; it can attain any value above the minimum if the charge is sufficiently large. (If $r_H$ were bounded above as Q approached infinity, there would be a contradiction with equation (15).) We now turn to the study of the specific entropy (confining attention to the "large" case). To see what happens to it when the charge is increased, note first that, as we have seen, $r_H$ is an increasing, unbounded function of the charge; and so, to study the effect of increasing charge on the specific entropy, we can study instead the effect of increasing $r_H$.
From equation (17), we see that when the specific entropy is regarded as a function of $r_H$, it actually increases, reaching a maximum value at a certain value of $r_H$: see for example Figure 4. A straightforward calculation shows that this value is always (that is, for all temperatures) $\sqrt{2/3}\,L \approx 0.816\,L$. However, this in itself does not mean that the specific entropy can increase with increasing charge: we need to determine whether the physical domain for $r_H$ begins to the left or to the right of that maximum point.
As we have seen, in the "large" case the smallest possible value of r H at any fixed temperature T AdSRN 5 is given by its value when the charge is zero, so the left end of the physical domain for r H is determined by its value for a "large" AdS 5 -Schwarzschild black hole of that temperature. That value, for any temperature T above the minimum, is given by
$$ r_H^{\mathrm{AdS}_5\,\mathrm{Large\;Sch}} \;=\; \frac{L^2}{2}\left(\pi T + \sqrt{\pi^2T^2 - \frac{2}{L^2}}\right). \tag{18} $$
Consider first the case when this temperature takes the lowest possible value for an AdS 5 -Schwarzschild black hole; as was mentioned earlier (and as can be seen by inspecting (18)), that value is √ 2/ (πL). In that case, r H = L/ √ 2 ≈ 0.707L, which is indeed smaller than the value at the maximum. It follows that, at this temperature, the specific entropy actually increases as the charge increases from zero. However, this is only true for very small charges: as soon as r H increases from ≈ 0.707L and reaches ≈ 0.816L, the specific entropy begins to decline: see Figure 4.
Furthermore, this only happens when the temperature is just above the minimal possible temperature of an AdS$_5$-Schwarzschild black hole. To see this, we simply set $r_H = \sqrt{2/3}\,L$, since that is the location of the maximum. One finds easily that $r_H$ in the AdS$_5$-Schwarzschild case attains this value when the temperature is $7\sqrt{3}/12 \approx 1.010$ times the minimal possible temperature. Beyond this temperature, the specific entropy always decreases with $r_H$ on its physical domain: see Figure 5.
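Both numbers quoted here, $\sqrt{2/3}\,L$ and $7\sqrt{3}/12$, can be checked with a few lines of Python (a sketch of mine; the temperature values are arbitrary choices, units L = 1):

```python
# Check of the maximum of Eq. (17) and of the threshold temperature at which the
# zero-charge radius of Eq. (18) reaches it.  Units: L = 1.
import numpy as np

L = 1.0
Tmin = np.sqrt(2) / (np.pi * L)                       # minimal AdS5-Schwarzschild temperature

def s17(rH, T):                                       # Eq. (17)
    return 4 * np.pi * rH / (3 * (2 + 3 * rH**2 / L**2 - 2 * np.pi * T * rH))

# At T = Tmin the physical domain starts at rH = L/sqrt(2); the maximum of s
# sits at rH = sqrt(2/3)*L, slightly to its right.
rH = np.linspace(L / np.sqrt(2), 3 * L, 20001)
print(rH[np.argmax(s17(rH, Tmin))], np.sqrt(2.0 / 3.0) * L)   # both ~0.8165

# Eq. (18) gives rH = sqrt(2/3)*L when T = (7*sqrt(3)/12)*Tmin ~ 1.010*Tmin;
# above that, s decreases on the whole physical domain.
T = 1.02 * Tmin
r0 = L**2 / 2 * (np.pi * T + np.sqrt(np.pi**2 * T**2 - 2 / L**2))   # Eq. (18)
assert np.all(np.diff(s17(np.linspace(r0, 5 * L, 5000), T)) < 0)
```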
In short, unless one specifically chooses the temperature at a one percent level of precision, the specific entropy of an asymptotically AdS 5 -Reissner-Nordström black hole of fixed temperature (above the minimum possible value for an AdS 5 -Schwarzschild black hole) is always a decreasing function of the charge on the physical domain.
We see directly from equation (17) that the specific entropy can be made arbitrarily small by taking r H , that is, the charge, sufficiently large. The physical meaning of this is as follows.
The parameter dual to the black hole charge is essentially the baryonic chemical potential, $\mu_B$, which measures the disparity between particles and antiparticles; one can also think of it as a measure of the mass density of ultra-compact matter, as in neutron star cores. The relation between Q and $\mu_B$ is found by evaluating the asymptotic value of the electromagnetic one-form: see [65]. One finds that
$$ \mu_B \;=\; \frac{k_5\, Q}{4\pi^2 r_H^2}\,. \tag{19} $$
Large values of the charge correspond to large values of both the numerator and the denominator here, if the temperature is fixed as usual. But we can clarify by combining this with equation (15), obtaining
$$ \frac{32\pi\ell_5^3\,\mu_B^2}{3k_5} \;=\; \frac{4r_H^2}{L^2} \;-\; 4\pi T_{\mathrm{AdSRN}_5}\, r_H \;+\; 2\,. \tag{20} $$
The larger root of the quadratic on the right side corresponds to the value of $r_H$ for a "large" AdS$_5$-Schwarzschild black hole; the quadratic increases indefinitely from there. That is, $\mu_B$ is a monotonically increasing function of the charge, and so large charge corresponds to large baryonic chemical potential. The holographic model predicts that matter at extremely high values of the baryonic chemical potential can have an arbitrarily small specific entropy compared to matter with smaller chemical potentials at the same temperature. It might be possible to explain this low specific entropy using the properties of the "colour-flavour locked" state thought to exist at the highest densities [66].

Throughout this discussion, we have assumed that we begin with an electrically neutral black hole, and then studied the consequences of gradually increasing the charge from zero. However, it is of course possible to begin with a black hole which is already charged, and such a black hole can have a temperature below the AdS$_5$-Schwarzschild minimum $\sqrt{2}/(\pi L)$. In the dual system, this would correspond to temperatures such that the matter in question is not strongly coupled at low temperatures and zero baryonic chemical potential, but becomes strongly coupled ("quark matter") at sufficiently high $\mu_B$. This is thought to be what does indeed happen in ultra-dense matter, possibly in the cores of neutron stars; see again [66].
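The monotonicity claim for $\mu_B$ is easily verified from (18) and (20); the following sketch (mine, with an arbitrarily chosen temperature above the minimum, units L = 1) checks that the right side of (20) vanishes at zero charge and then increases with $r_H$:

```python
# Check of Eq. (20): on the "large" branch (r_H at and beyond the value given
# by Eq. (18)), the quantity proportional to mu_B^2 starts at zero and grows
# monotonically with r_H, i.e. with the charge.
import numpy as np

L, T = 1.0, 0.6                                   # any T above sqrt(2)/(pi*L) ~ 0.45
r0 = L**2 / 2 * (np.pi * T + np.sqrt(np.pi**2 * T**2 - 2 / L**2))   # Eq. (18)
rH = np.linspace(r0, 20 * L, 4000)
rhs20 = 4 * rH**2 / L**2 - 4 * np.pi * T * rH + 2         # right side of Eq. (20)

assert abs(rhs20[0]) < 1e-9                       # mu_B = 0 at zero charge
assert np.all(np.diff(rhs20) > 0)                 # mu_B grows monotonically with r_H
```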
Allowing lower temperatures than √ 2/(πL) means that we can extend the range of permitted values for r H below those shown in Figure 4, and this might mean that there is a physically interesting range of electric charges for which the specific entropy is an increasing function of the charge. However, there are two points to be borne in mind here.
The first is that, as we saw above, at any temperature, the specific entropy can only be an increasing function on the domain of $r_H$ values between zero and $\sqrt{2/3}\,L$. However, for such small values of $r_H$, equation (20) requires $\mu_B$ to be likewise small. (An elementary argument shows that, for $0 < r_H < \sqrt{2/3}\,L$ and $0 < T_{\mathrm{AdSRN}_5} < \sqrt{2}/(\pi L)$, the dimensionless quantity on the left in (20) is bounded above by 14/3; in fact, for all but the smallest values of the temperature, it is considerably smaller than that.) But, in reality, low-temperature quark matter only exists at extremely high values of the baryonic chemical potential. Admittedly, one would need a detailed analysis, with explicit data values, to define "small" in a precise way here, but it seems very unlikely that this domain of parameter values is anywhere near the physical domain.
Secondly, to show that this parameter domain is physical, one would need a detailed investigation of the stability of relatively cold AdS 5 -Reissner-Nordström black holes. In particular, one would need to understand the effects of electromagnetic superradiance [67]. On the basis of our findings in the analogous AdS 5 -Kerr situation (see below) we suspect that the upshot would be that the conclusions of this section would not be greatly affected; that is, these unusual black holes are probably unstable. However, a more detailed analysis would be required to prove this.
In any case, we do not have an analogue of the Barnett effect in this case, so we have no reason to expect on holographic grounds that the specific entropy should decrease for all (physical) ranges of values of the baryonic chemical potential. The fact that, in nearly all cases, it actually does, is interesting; it suggests that the charged case might repay further investigation.
Here, however, our primary concern is with rotating black holes and vortical strongly coupled matter, to which we now turn.
AdS 5 -Kerr, Fixed Temperature
The metric for the AdS 5 -Kerr black hole (in the case of rotation around a single axis, the case with a clear holographic interpretation, therefore the only one we shall consider) takes the form [58][59][60]
$$ g(\mathrm{AdSK}_5) \;=\; -\frac{\Delta_r}{\rho^2}\left(dt - \frac{a\sin^2\theta}{\Xi}\,d\phi\right)^2 \;+\; \frac{\rho^2}{\Delta_r}\,dr^2 \;+\; \frac{\rho^2}{\Delta_\theta}\,d\theta^2 \;+\; \frac{\Delta_\theta\sin^2\theta}{\rho^2}\left(a\,dt - \frac{r^2+a^2}{\Xi}\,d\phi\right)^2 \;+\; r^2\cos^2\theta\,d\psi^2, \tag{21} $$
where

$$ \rho^2 \;=\; r^2 + a^2\cos^2\theta, \qquad \Delta_r \;=\; \left(r^2 + a^2\right)\left(1 + \frac{r^2}{L^2}\right) - 2M, \qquad \Delta_\theta \;=\; 1 - \frac{a^2}{L^2}\cos^2\theta, \qquad \Xi \;=\; 1 - \frac{a^2}{L^2}. \tag{22} $$
Here L is the background AdS curvature length scale, and M and a are purely geometric parameters (with units of length squared and length, respectively) which are not equal or even simply related to the physical mass and the angular momentum per unit mass (see below). The angular coordinates on the (topological) r = constant three-spheres are as before.
The quantity Ξ plays an important role in the sequel, so let us explain its origin 12 . Let us choose fixed values t = t * , ψ = 0, r = r * ; then θ and φ can be interpreted as ordinary polar coordinates on a two-dimensional hemisphere with pole at θ = 0; we can think of the hemisphere as rotating about the axis through this pole, in the φ direction. Now on this hemisphere take a circle centred on θ = 0, located at some fixed value of θ.
To lowest order in θ, $\Delta_\theta \approx \Xi$, and, if $\rho^*$ denotes the value of ρ when $r = r^*$, then $\rho^{*2} \approx r^{*2} + a^2$, so if θ is small, the radius of this circle is given approximately by $\sqrt{r^{*2}+a^2}\,\theta/\sqrt{\Xi}$. The presence of Ξ here begins a cascade which affects most aspects of the physics of these black holes.
The circumference of this circle is
$$ C \;=\; \frac{2\pi}{\Xi\,\rho^*}\sqrt{-\,a^2\Delta_r^*\sin^4\theta \;+\; \left(r^{*2} + a^2\right)^2\Delta_\theta\sin^2\theta}\,, \tag{23} $$
where ∆ * r is ∆ r evaluated at r * . Notice that the factor of 1/Ξ on the right is due to its explicit appearance in the metric tensor.
To lowest order in θ, $C = 2\pi\sqrt{r^{*2}+a^2}\,\theta/\sqrt{\Xi}$. The point is that there is a danger that, as θ approaches zero, the ratio of the circumference of the circle to its radius will not approach 2π, meaning that the hemisphere (that is, every hemisphere of this form, throughout the exterior spacetime) develops a conical singularity. We see that this is avoided here, but only because dφ is divided by Ξ in each of its appearances in the metric. (If these factors of Ξ had not been included, the apical angle of the cones would have been 2 arcsin(Ξ).)
The consequence of this, however, is that the area of the event horizon inevitably acquires a factor of 1/Ξ, and hence so does the entropy of the black hole:
$$ S_{\mathrm{AdS}_5\mathrm{K}} \;=\; \frac{\pi^2\left(r_H^2 + a^2\right)r_H}{2\ell_5^3\,\Xi}\,. \tag{24} $$
But now, if the First Law is to hold, the presence of Ξ in the entropy means that it must also be present in the physical mass and the angular momentum [60]. The upshot is that the physical mass M is related to the geometric parameter M in a complex manner:
$$ M \;=\; \frac{\pi M\,(2 + \Xi)}{4\,\ell_5^3\,\Xi^2}\,, \tag{25} $$
while the physical angular momentum is given by
$$ J \;=\; \frac{\pi M a}{2\,\ell_5^3\,\Xi^2}\,. \tag{26} $$
The specific angular momentum j is therefore given by
$$ j \;=\; \frac{2a}{2 + \Xi} \;=\; \frac{2a}{3 - (a^2/L^2)}\,. \tag{27} $$
There are two important observations to be made regarding (27). First, (27) implies that j is a monotonically increasing function of a; this will be useful later.
Secondly, one shows easily that j/L < 1 if and only if a/L < 1, as we are requiring here. (For all a with 0 < a/L < 1, j/L is smaller than a/L.) This means that the asymptotic AdS 5 curvature scale L has a physical interpretation as the upper bound on the possible specific angular momenta attainable by spinning up such a black hole (and therefore, by holography, on the possible specific angular momenta of the strongly coupled matter at infinity). Intuition suggests that this upper bound is actually imposed by the requirement that the vortical motion should never lead to superluminal speeds. Let us confirm this intuition.
Consider the equator of the hemisphere we discussed above; it is located at θ = π/2, and it defines (in the usual way) an equator on the corresponding hemisphere at conformal infinity. Now take a massive particle with angular momentum per unit mass equal to j fixed on this equator: it belongs to the matter which is holographically dual to the bulk black hole (which by definition also has an angular momentum to mass ratio equal to j.) It is possible (though not straightforward) to show [34] that the velocity of this particle, relative to a distinguished observer at infinity with zero angular momentum, is given by
$$ v_j \;=\; \frac{j}{L}\;\frac{1}{1 + \dfrac{j^2}{L^2}\,\Xi}\,. \tag{28} $$
Here Ξ should be regarded as a function of j (see below). Thus we can regard v j as a (surprisingly complicated, but monotonically increasing) function of j. However, it is easy to see that it is bounded above by unity as j tends to L. Clearly j/L < 1 is just the expression of causality for the matter at infinity which is dual to the black hole: a black hole with j/L approximately equal to unity corresponds holographically to matter at infinity which is rotating at a speed close to that of light, and this is the meaning of the fact that L imposes an upper bound on possible specific angular momenta. We will return to this observation later.
For later use, note first that from (27) we have
$$ a \;=\; \frac{L^2}{j}\left(-1 + \sqrt{1 + \frac{3j^2}{L^2}}\,\right). \tag{29} $$
This allows us to express Ξ in terms of j :
$$ \Xi \;=\; \frac{2L^2}{j^2}\left(\sqrt{1 + \frac{3j^2}{L^2}} \;-\; 1 \;-\; \frac{j^2}{L^2}\right). \tag{30} $$
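The relations (27), (29) and (30) can be cross-checked numerically; the sketch below (my own, units L = 1) verifies that j(a) is increasing with j < a < L, that (29) inverts (27), and that (30) reproduces $\Xi = 1 - a^2/L^2$:

```python
# Consistency check of Eqs. (27), (29), (30), with L = 1.
import numpy as np

L = 1.0
a = np.linspace(1e-3, 0.999 * L, 2000)
j = 2 * a / (3 - a**2 / L**2)                                 # Eq. (27)
a_back = (L**2 / j) * (-1 + np.sqrt(1 + 3 * j**2 / L**2))     # Eq. (29)
Xi = (2 * L**2 / j**2) * (np.sqrt(1 + 3 * j**2 / L**2) - 1 - j**2 / L**2)  # Eq. (30)

assert np.all(np.diff(j) > 0) and np.all(j < a)               # j increases with a, and j < a < L
assert np.allclose(a_back, a) and np.allclose(Xi, 1 - a**2 / L**2)
```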
The equation to be solved for the radial coordinate at the horizon, r H , takes a deceptively simple form:
$$ \left(r_H^2 + a^2\right)\left(1 + \frac{r_H^2}{L^2}\right) \;-\; 2M \;=\; 0; \tag{31} $$
but, in using this, we have to bear in mind that a is not the specific angular momentum, and M is not the physical mass, of the black hole. The Hawking temperature of the AdS 5 -Kerr black hole is given [60] by
$$ T_{\mathrm{AdSK}_5} \;=\; \frac{r_H\left(1 + \dfrac{r_H^2}{L^2}\right)}{2\pi\left(r_H^2 + a^2\right)} \;+\; \frac{r_H}{2\pi L^2}\,. \tag{32} $$
This equation has a very remarkable consequence: exactly extremal AdS 5 -Kerr black holes actually do not exist under the assumptions we are making here. For the only way to attain zero temperature is clearly if r H = 0, which means that the ring singularity at r = 0, θ = π/2 is visible at null conformal infinity, violating Cosmic Censorship. This is in fact very reasonable from a thermodynamic point of view: it means that near-extremal black holes of this sort, with extremely low temperatures, have extremely small values of r H , that is, from equation (24), very small entropies, in harmony with the Third Law of thermodynamics. (As we saw earlier, the analogous statements are by no means true of asymptotically flat black holes.)
Thus the temperature cannot vanish for these black holes. Nevertheless, we continue to use the familiar expression, "near-extremal" in the case where the temperature is small but not zero. (We can actually be quite precise about this: see below.)
There are in fact several other reasons to avoid the extremal and even the near-extremal cases here.
Firstly, AdS 5 -Kerr black holes correspond holographically to strongly coupled matter at zero baryonic chemical potential, and such matter necessarily has very high temperatures. From a holographic point of view, then, extremal and near-extremal AdS 5 -Kerr black holes, which of course are "cold", are of limited physical interest.
Secondly, there are general reasons for suspecting [69,70] that exactly extremal spheroidal black holes are pathological in various ways, and also that near-extremal black holes are never stable (this is the Weak Gravity Conjecture [71][72][73]).
Finally, there is strong evidence that all extremal and sufficiently near-extremal AdS 5 -Kerr black holes are actually classically unstable due to superradiance [74,67], and thus of little interest to us here. (We will see later that this is never a problem for "large" AdS 5 -Kerr black holes.)
With all this preparation, we now proceed to our main goal, the analysis of the behaviour of the specific entropy of such a black hole.
The entropy of these black holes was given above, equation (24), and so (using (25)) we find that the specific entropy is
$$ s_{\mathrm{AdSK}_5} \;=\; \frac{2\pi r_H\left(r_H^2 + a^2\right)\Xi}{M\left(2 + \Xi\right)}\,. \tag{33} $$
Eliminating M by means of (31), we have
$$ s_{\mathrm{AdSK}_5} \;=\; \frac{4\pi r_H\,\Xi}{\left(2 + \Xi\right)\left(1 + \dfrac{r_H^2}{L^2}\right)}\,. \tag{34} $$
For fixed non-zero temperature, the equation in (32) is a cubic in r H , and so it can be solved explicitly for r H in terms of T AdSK 5 , a, and L. This can be substituted into (34), and equations (29) and (30) can be used to eliminate a and Ξ, and so we can express s AdSK 5 explicitly as a function of j. An example is presented in Figure 6.
Clearly, in this instance, the specific entropy decreases with the specific angular momentum, as expected.
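The procedure just described is straightforward to implement; the following sketch (mine, not the author's code) solves the cubic form of (32) numerically with numpy.roots rather than writing out the closed-form root, and reproduces the qualitative behaviour of Figure 6 for T = 2, L = 1:

```python
# For fixed T, solve Eq. (32) (rewritten as a cubic in r_H) on the "large"
# branch for each a, then evaluate Eqs. (27) and (34): s falls with j.
import numpy as np

L, T = 1.0, 2.0

def rH_large(a):
    # 2 r^3 - 2 pi T L^2 r^2 + (L^2 + a^2) r - 2 pi T L^2 a^2 = 0   (from Eq. (32))
    roots = np.roots([2.0, -2 * np.pi * T * L**2, L**2 + a**2, -2 * np.pi * T * L**2 * a**2])
    return max(r.real for r in roots if abs(r.imag) < 1e-9)

a = np.linspace(0.05, 0.995 * L, 400)
rH = np.array([rH_large(x) for x in a])
Xi = 1 - a**2 / L**2
j = 2 * a / (2 + Xi)                                          # Eq. (27)
s = 4 * np.pi * rH * Xi / ((2 + Xi) * (1 + rH**2 / L**2))     # Eq. (34)

assert np.all(np.diff(j) > 0) and np.all(np.diff(s) < 0)
print(rH[0], rH[-1])    # from close to the Eq. (18) value towards pi*T*L^2
```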
While the expression for s AdSK 5 as a function of j can be presented explicitly, it is of such length and complexity that it would be pointless to do so. (This extreme complexity is due to the presence of the factors of Ξ, the necessity of which was explained earlier.) It is much more informative to proceed along the lines of our discussion of the AdS 5 -Reissner-Nordström geometry. That is, we use r H as a proxy for j ; this is sufficient to determine whether the specific entropy decreases with increasing specific angular momentum. The procedure is as follows.
First, recall that j and a are monotonically increasing functions of each other: any increase in one faithfully represents an increase in the other. Thus we can take a as our variable when discussing the effects of increasing the specific angular momentum.
Next, equation (32) allows us to regard r H as a simple function of a, when the temperature is fixed. As in the charged case, one finds that, provided that the temperature is at least √ 2/(πL), there are two 13 possible values for r H for small values of a, corresponding as before to "large" and "small" AdS 5 -Schwarzschild black holes. (For larger values of a, only the "large" branch exists.) A typical graph of r H as a function of a is shown in Figure 7.
We see that, if we take a "large" AdS 5 -Schwarzschild black hole and begin to spin it up (that is, to increase j) at fixed temperature, this will increase a, and this in turn will increase r H . Thus, as claimed, we can represent any increase in the specific angular momentum by an increase in r H .
Figure 7: Radial coordinate at horizon, with L = 1, $T_{\mathrm{AdSK}_5} = 0.5$. The domain is the physical one, corresponding to 0 ≤ a < 1, that is, to 0 ≤ j < 1.

We can now confirm the absence of a superradiant instability here. Recall [74] that superradiance is avoided if the angular velocity Ω of the outer horizon satisfies ΩL < 1. This condition is shown in Figure 8 as the dotted-dashed line: superradiance does not occur for parameter values corresponding to points above that line.
The solid lines show, as in Figure 7, r H as a function of a, but now in the case where the temperature is as low as possible for an AdS 5 -Schwarzschild black hole. Clearly the branch describing "large" black holes is always above the dotted-dashed line, and therefore this is true when the temperature is above the minimum, since higher temperatures lift this branch higher. In short, all "large" AdS black holes remain stable against superradiance, for all values of the specific angular momentum and all temperatures for which they exist. (On the other hand, the smallest of the three values of r H which are possible when a is sufficiently small is always ruled out by this criterion; some, but not all, "small" black holes are likewise excluded.)
Let us now examine the specific entropy, confining attention to the "large" case. The range of values for $r_H$ is as follows. Of course, when a = j = 0 we obtain the value for a "large" AdS$_5$-Schwarzschild black hole, as given in equation (18). When a and j are close to L (their upper bound) then, from equation (32), we see by inspection that $r_H$ approaches $\pi T_{\mathrm{AdSK}_5}L^2$. This, then, is the physical range for $r_H$: from $\frac{L^2}{2}\left(\pi T_{\mathrm{AdSK}_5} + \sqrt{\pi^2 T_{\mathrm{AdSK}_5}^2 - 2/L^2}\right)$ to (nearly) $\pi T_{\mathrm{AdSK}_5}L^2$.
The strategy now is to express the specific entropy, as given in equation (34), as a function of r H alone.
Figure 8: AdS$_5$-Kerr black holes are stable against superradiance for all pairs $(a, r_H)$ lying above the dotted-dashed curve. The other curves show the relation between a and $r_H$ for a black hole with temperature as small as possible in the non-rotating asymptotically AdS$_5$ case, with 0 ≤ a < 1, corresponding to 0 ≤ j < 1. L has been set equal to unity throughout.

To do this, we solve (32) for a as a function of $r_H$. The result, after some simplifications, is
$$ a \;=\; \frac{\sqrt{\left(2\pi L^2 T_{\mathrm{AdSK}_5} - r_H\right)\, r_H\left(L^2 + 2r_H^2 - 2\pi L^2 T_{\mathrm{AdSK}_5}\, r_H\right)}}{2\pi L^2 T_{\mathrm{AdSK}_5} - r_H}\,. \tag{35} $$
Substituting this into (34), we find after simplification
$$ s_{\mathrm{AdSK}_5} \;=\; \frac{4\pi r_H\left(\pi L^2 T_{\mathrm{AdSK}_5} - r_H\right)L^2}{3\pi L^4 T_{\mathrm{AdSK}_5} + \pi L^2 T_{\mathrm{AdSK}_5}\, r_H^2 - 2L^2 r_H - r_H^3}\,; \tag{36} $$
this is vastly simpler than the explicit expression for the specific entropy in terms of j. We just have to bear in mind that "increasing j from 0 towards its upper bound, L" corresponds to "increasing $r_H$ from $\frac{L^2}{2}\left(\pi T_{\mathrm{AdSK}_5} + \sqrt{\pi^2 T_{\mathrm{AdSK}_5}^2 - 2/L^2}\right)$ towards $\pi T_{\mathrm{AdSK}_5}L^2$." (Note that the cubic in the denominator in (36) has only one real root, and one can show that that root always lies outside the permitted range for $r_H$.)
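Equation (36) is simple enough to check directly; the sketch below (mine, with the illustrative choice T = 0.5 and L = 1, comfortably above the threshold discussed below) confirms that s is positive, decreasing, and tends to zero at the upper end of the physical domain:

```python
# Check of Eq. (36) on its physical domain, from the Eq. (18) radius up to
# r_H = pi*T*L^2.  Units: L = 1.
import numpy as np

L, T = 1.0, 0.5

def s36(rH):                                                   # Eq. (36)
    num = 4 * np.pi * rH * (np.pi * L**2 * T - rH) * L**2
    den = 3 * np.pi * L**4 * T + np.pi * L**2 * T * rH**2 - 2 * L**2 * rH - rH**3
    return num / den

r0 = L**2 / 2 * (np.pi * T + np.sqrt(np.pi**2 * T**2 - 2 / L**2))   # Eq. (18)
rH = np.linspace(r0, np.pi * T * L**2, 2000, endpoint=False)
s = s36(rH)

assert np.all(s > 0) and np.all(np.diff(s) < 0)
print(s[0], s[-1])        # falls from its j = 0 value towards 0 as r_H -> pi*T*L^2
```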
Let us begin with the lowest possible temperature in the non-rotating case, √ 2/π ≈ 0.45 in units with L = 1. The graph of s AdSK 5 is shown in Figure 9.
As in the AdS 5 -Reissner-Nordström case, we find that it is possible for the specific entropy to increase slightly if the specific angular momentum increases from zero to a small value; but then it decreases for all larger values. However, as before, we find that even a slight increase in the temperature eliminates this odd behaviour. For example, if the temperature is increased by just one percent to ≈ 0.455, the physical domain is such that s AdSK 5 is a decreasing function throughout it: see Figure 10.
We conclude, then, that the specific entropy of these black holes decreases, at fixed temperature, as the specific angular momentum increases from zero, unless the temperature is just slightly (less than 1%) greater than its minimum possible value for a non-rotating AdS 5 black hole. In a holographic picture of quark matter, this translates to the statement that a vortical QGP (nearly) always has a lower specific entropy than the QGP produced by central collisions in the same beam, in agreement with expectations based on the quark matter version of the Barnett effect.
The exceptional cases with temperature less than 1% above the AdS 5 -Schwarzschild minimum are of more mathematical than physical interest. For it is well known that, in the regime of small baryonic chemical potential, the transition to the QGP is not a sharp phase change: it is a continuous "crossover" [30]. That is, the transition is not very precisely defined, so the phenomenon of increasing specific entropy for these very special black holes has no meaningful counterpart in the real QGP. The exceptional cases are however theoretically interesting in that their existence shows that the generic decrease of the specific entropy with increasing specific angular momentum does not have a straightforward explanation in terms of basic black hole thermodynamics.
As in the preceding Section, we have, throughout this discussion, only considered temperatures which are possible for non-rotating black holes. This is justified in the present case, because in the dual theory we want to compare strongly coupled matter produced in central heavy-ion collisions (with negligible angular momentum density) with its counterparts produced in peripheral collisions. However, as before, it is instructive to consider the case where the dual matter is "already" rotating. This means that we allow temperatures below 14 √ 2/(πL). At these temperatures, the two parts of the r H vs. a curve merge, and the curve pulls away from the vertical axis, so that there is no longer a distinction between "large" and "small" black holes; if it is 1/(πL) or lower, then the entire merged curve lies below the dotted-dashed line indicating superradiant instability. In this case (which includes extremal AdS 5 -Kerr black holes, long known [74] to be unstable against superradiance) the black holes are unstable for all non-zero values of the specific angular momentum. Thus, a temperature below 1/(πL) is a good definition of "near-extremal" for these black holes: then we can say that all "near-extremal" AdS 5 -Kerr black holes are classically unstable under all circumstances. We do not consider this case further.
When the temperature lies between 1/(πL) and √ 2/(πL), r H is apparently allowed to go down to values such that the specific entropy is an increasing function of the specific angular momentum. In practice, however, that is not the case, at least not to any significant extent. For the black hole can be stable only if the curve describing r H as a function of a lies above the curve below which superradiance occurs. However, that only happens for a narrow range of r H values. For example, in Figure 11, which portrays the case where the temperature is 80% of the minimal AdS 5 -Schwarzschild value, the black hole is only stable when r H lies in the range from ≈ 0.602 to ≈ 1.131. As before, then, the domain in which the specific entropy is an increasing function is almost completely excluded, as one can see from Figure 12; so in fact this case does not differ very greatly from the case of higher temperatures.
Figure 11: The relation between $r_H$ and a when the temperature is 80% of the minimal AdS$_5$-Schwarzschild value, shown, as in Figure 8, with the line below which the black hole is unstable to a superradiant instability. Units are such that L = 1.

Figure 12: The specific entropy of the black hole, with parameters as in Figure 11. The domain is the one on which the black hole is stable against superradiance.
Conclusion
This work is motivated by the simple suggestion that, if we wish to identify black hole microstates, then we might do well to study the statistical mechanics of some similar system where the microphysics is better understood. In the case of rotating black holes, the natural choice is the "vortical QGP" which has been intensively studied experimentally [32].
In that case, it is possible to compare vortical strongly coupled matter with nonvortical strongly coupled matter at the same temperature by simply comparing central and peripheral collisions in the same beam of colliding heavy ions. Here a very simple argument based on the physics underlying the Barnett effect leads us to expect a decreasing specific entropy with increasing specific angular momentum. We found that the holographically related system, an AdS 5 -Kerr black hole, behaves generically in precisely that manner.
Over-simplifying somewhat: the answer to our question, "why does rotation reduce (AdS) black hole specific entropy?" is, "because rotation has that effect on the dual strongly coupled matter, for a reason which in that case is quite clear."
Whether one should regard this as a satisfactory answer is perhaps a philosophical question, which we may leave to one side. Our point is that (in this particular case) holography provides a route towards a better understanding, at least of black hole entropy and possibly ultimately of black hole microstates.
We conclude with the following rather suggestive observation.
In the asymptotically flat cases (whether mass or temperature was fixed) we always found that increasing the specific angular momentum, or the charge, reduces the specific entropy of the black hole. But there was always a limit to the extent of the reduction: for example, in the four-dimensional fixed-temperature asymptotically flat Kerr case, we found that, no matter what value was chosen for the specific angular momentum, the specific entropy could never fall below about 62% of the value for a Schwarzschild black hole of that temperature. (See Figure 2.)
In the AdS 5 -Kerr case, however, there is no such restriction: one sees from equation (36) that the specific entropy can be reduced to any prescribed positive value, however small, by choosing r H sufficiently close to πT AdSK 5 L 2 ; that is, by choosing j sufficiently close to its upper bound, L. (This is clear in Figures 6, 9, 10, and 12.) This is true at any fixed temperature. This is reminiscent of the situation in quantum statistical mechanics where the energy spacing of the ground state and the first excited state is greater than the temperature: the entropy can be very small even if the temperature is not. Thus it seems that very extreme specific angular momenta can give rise to this situation for rotating AdS black holes. That is, at any given temperature, it is possible, by increasing the specific angular momentum suitably, to drive up the spacing between the black hole ground state and its "first excited state" to a value beyond that temperature, even if the latter is large. Further study of this phenomenon might lead to an identification of this "excited state" and perhaps then to an insight into the nature of black hole microstates.
Unfortunately, heavy ion collisions do not offer an observational guide to this regime of values for j. To see why, recall that, in order for j to be close to L, the rotational speed of the dual matter at conformal infinity would have to be close to that of light (equation (28)); presumably this would correspond, in the actual vortical QGP, to motion close to the speed of light in the vortices. In the experiments reported in [32], however, that is not the case. A typical angular velocity for vortices in those experiments is around 0.03 fm$^{-1}$. This is indeed a gigantic vorticity (about $9 \times 10^{21}\ \mathrm{s}^{-1}$) by normal standards, but, for these systems, with a radius of at most a few femtometres, this does not imply velocities close to that of light [75]; and so L must still be well above the specific angular momenta in these experiments.
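For reference, the unit conversion behind the quoted vorticity is just a factor of c (a trivial sketch, mine):

```python
# An angular velocity of 0.03 fm^-1 in natural units corresponds to 0.03*c/fm in SI.
c = 2.998e8          # m/s
fm = 1.0e-15         # m
omega = 0.03 * c / fm
print(f"{omega:.1e} s^-1")    # ~9.0e+21 s^-1
```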
One might propose to investigate the situation in collisions at higher impact energies, such as those studied by the ALICE collaboration at the LHC; but, for reasons which are only partly understood [76], the polarizations of Λ/Λ hyperons actually decrease with increasing impact energy, and in fact they are quite unobservable at the ALICE energies [77]. If this changes with further observations, it would be very interesting to try to determine whether such high-temperature systems do in fact have very low specific entropies when the specific angular momenta are extremely large.
Figure 1: Specific entropy of a four-dimensional asymptotically flat Reissner-Nordström black hole with fixed temperature 0.25 in Planck units, as a function of the charge.

Figure 2: Specific entropy of a four-dimensional asymptotically flat Kerr black hole with fixed temperature 0.1 in Planck units, as a function of the specific angular momentum.

Figure 3: Radial coordinate at horizon for AdS$_5$-Reissner-Nordström black holes, as a function of the charge. (Parameters, including the temperature, fixed in units with L = 1, at values chosen for illustrative clarity.) The corresponding "large" AdS$_5$-Schwarzschild black hole has $r_H = 2$ in these units.

Figure 4: Specific entropy of a "large" AdS$_5$-Reissner-Nordström black hole with temperature equal to the minimal possible temperature of an AdS$_5$-Schwarzschild black hole, using units in which L = 1. Here $r_H$ is a proxy for the charge, with which it varies as a monotonically increasing function when the temperature is fixed. The physical domain is to the right of $r_H \approx 0.707$, which is slightly smaller than the value at the maximum, $r_H \approx 0.816$.

Figure 5: Specific entropy of a "large" AdS$_5$-Reissner-Nordström black hole with temperature approximately 1% higher than the minimal possible temperature of an AdS$_5$-Schwarzschild black hole, using units in which L = 1. On the physical domain (to the right of the radius of the AdS$_5$-Schwarzschild black hole of that temperature) the function is decreasing.

Figure 6: Specific entropy of a five-dimensional asymptotically AdS$_5$-Kerr black hole with fixed temperature 2 in units with L = 1, as a function of the specific angular momentum.

Figure 9: Specific entropy of an AdS$_5$-Kerr black hole with fixed temperature equal to the smallest possible AdS$_5$-Schwarzschild temperature $\sqrt{2}/\pi \approx 0.450$ in units with L = 1, as a function of $r_H$. The domain is the physical one, $\approx 0.707 \leq r_H \lesssim 1.414$, corresponding to 0 ≤ j < 1.

Figure 10: Specific entropy of an AdS$_5$-Kerr black hole with fixed temperature $\approx 0.455$ in units with L = 1, as a function of $r_H$. The domain is the physical one, $\approx 0.814 \leq r_H \lesssim 1.428$, corresponding to 0 ≤ j < 1.
It is possible for an AdS 5 -Kerr black hole to rotate about two different axes simultaneously, with two different specific angular momenta. Clearly this is of no use to us here, so we always take it that our black holes rotate about a single axis, with one specific angular momentum.
Lower temperatures are possible by beginning with a black hole that is already rotating; but then no comparison can be made with a corresponding non-rotating black hole. Nevertheless we will briefly consider this possibility later, finding that it does not contradict the statements being made here.
Because we will be discussing both four-and five-dimensional spacetimes, we avoid Planck units, and use units in which mass and temperature are inverse lengths, the four-dimensional Coulomb constant is dimensionless and set equal to unity (but the five-dimensional Coulomb constant is not -it has units of length) and entropy, angular momentum, and charge are dimensionless (so specific entropies and angular momenta have units of length). We denote the specific angular momentum in these units by j; we reserve the notation a for the angular momentum parameter which occurs in the various Kerr metrics.
These differ from spherical polar coordinates: φ and ψ run from 0 to 2π, but θ runs from 0 to π/2.
One also sees from Figure 3 that "small" black holes behave, as expected, like their asymptotically flat counterparts: that is, the event horizon contracts with increasing charge. As in the asymptotically flat case, there is a mass-independent upper bound on the charge of "small" black holes of fixed temperature (but not in the "large" case).
Ξ can be either strictly positive or strictly negative. The negative case (a/L > 1) is important[68], but (assuming the validity of Cosmic Censorship) it cannot be attained by continuously spinning up the black hole. As we are interested in a holographic application to a system which is obtained through a process of "spinning up" the QGP, we only consider the strictly positive case, a < L.
Actually, there can be three, but the smallest of the three corresponds to a black hole which is never stable: see below.
This means that we are considering collisions at such low impact energies that a quark-gluon plasma is not formed in central collisions, but is formed in some peripheral collisions. No such effect has been observed, but presumably this is possible.
Acknowledgements

The author is grateful to Dr. Soon Wanmei for useful discussions.
References

[1] Aron C. Wall, A Survey of Black Hole Thermodynamics, arXiv:1804.10610 [gr-qc]
[2] Yen Chin Ong, On Black Hole Thermodynamics, Singularity, and Gravitational Entropy, Gen.Rel.Grav. 54 (2022) 10, 132, arXiv:2210.16856 [gr-qc]
[3] James C.A. Miller-Jones, Arash Bahramian, Jerome A. Orosz, Ilya Mandel, Lijun Gou et al., Cygnus X-1 contains a 21-solar mass black hole — Implications for massive star winds, Science 371 (2021) 6533, 1046-1049, arXiv:2102.09091 [astro-ph.HE]
[4] Henric Krawczynski, Banafsheh Beheshtipour, New Constraints on the Spin of the Black Hole Cygnus X-1 and the Physical Properties of its Accretion Disk Corona, arXiv:2201.07360 [astro-ph.HE]
[5] M. Hannam, C. Hoy, J.E. Thompson et al., General-relativistic precession in a black-hole binary, Nature (2022), arXiv:2112.11300 [gr-qc]
[6] Roger Penrose, The Road to Reality, A. E. Knopf, New York (2005), ch. 27-29
[7] Ying Qin, Xinwen Shu, Shuangxi Yi, Yuan-Zhu Wang, Hypercritical Accretion for Black Hole High Spin in Cygnus X-1, Res.Astron.Astrophys. 22 (2022) 3, 035023, arXiv:2201.05611 [astro-ph.HE]
[8] Brett McInnes, Extremal Bifurcations of Rotating AdS4 Black Holes, JHEP 12 (2021) 155, arXiv:2108.05686 [gr-qc]
[9] H. Khodabakhshi, H. Lu, Run-Qiu Yang, Tightening the Penrose Inequality, Sci.China Phys.Mech.Astron. 65 (2022) 12, 120413, arXiv:2207.08833 [gr-qc]
[10] Matt Visser, The Kerr spacetime: A brief introduction, arXiv:0706.0622 [gr-qc]
[11] Joydeep Chakravarty, Overcounting of interior excitations: A resolution to the bags of gold paradox in AdS, JHEP 02 (2021) 027, arXiv:2010.03575 [hep-th]
[12] Vijay Balasubramanian, Albion Lawrence, Javier M. Magan, Martin Sasieta, Microscopic origin of the entropy of astrophysical black holes, arXiv:2212.08623 [hep-th]
[13] Ted Jacobson, Donald Marolf, Carlo Rovelli, Black hole entropy: inside or out?, Int.J.Theor.Phys. 44 (2005) 1807-1837, arXiv:hep-th/0501103
[14] Timothy Clifton, George F.R. Ellis, Reza Tavakol, A Gravitational Entropy Proposal, Class.Quant.Grav. 30 (2013) 125009, arXiv:1303.5612 [gr-qc]
[15] Daniele Gregoris, Yen Chin Ong, Bin Wang, Thermodynamics of Shearing Massless Scalar Field Spacetimes is Inconsistent With the Weyl Curvature Hypothesis, Phys.Rev.D 102 (2020) 2, 023539, arXiv:2004.10222 [gr-qc]
[16] Daniele Gregoris, Yen Chin Ong, Understanding Gravitational Entropy of Black Holes: A New Proposal via Curvature Invariants, Phys.Rev.D 105 (2022) 104017, arXiv:2109.11968 [gr-qc]
[17] Donald Marolf, Henry Maxfield, Observations of Hawking radiation: the Page curve and baby universes, JHEP 04 (2021) 272, arXiv:2010.06602 [hep-th]
[18] Erik P. Verlinde, Manus R. Visser, Black hole entropy and long strings, Int.J.Mod.Phys.D 31 (2022) 14, 2242006, arXiv:2206.03161 [hep-th]
[19] Jorge Casalderrey-Solana, Hong Liu, David Mateos, Krishna Rajagopal, Urs Achim Wiedemann, Gauge/String Duality, Hot QCD and Heavy Ion Collisions, Cambridge University Press, Cambridge (2014), arXiv:1101.0618 [hep-th]
[20] Makoto Natsuume, AdS/CFT Duality User Guide, Lect.Notes Phys. 903 (2015), arXiv:1409.3575 [hep-th]
[21] Matteo Baggioli, A Practical Mini-Course on Applied Holography, SpringerBriefs in Physics (2019), arXiv:1908.02667 [hep-th]
[22] Joaquin Grefa, Mauricio Hippert, Jorge Noronha, Jacquelyn Noronha-Hostler, Israel Portillo, Claudia Ratti, Romulo Rougemont, QCD Equilibrium and Dynamical Properties from Holographic Black Holes, Rev.Mex.Fis.Suppl. 3 (2022) 4, 040910, arXiv:2207.12564 [nucl-th]
[23] Ayan Mukhopadhyay, Editorial: New Frontiers in Holographic Duality — From quantum complexity and black holes to hydrodynamics and neutron stars, Eur.Phys.J.C 82 (2022) 877, arXiv:2210.03315 [hep-th]
[24] Alejandro Cabo-Bizet, Davide Cassani, Dario Martelli, Sameer Murthy, Microscopic origin of the Bekenstein-Hawking entropy of supersymmetric AdS5 black holes, JHEP 10 (2019) 062, arXiv:1810.11442 [hep-th]
[25] Sunjin Choi, Joonho Kim, Seok Kim, June Nahmgoong, Large AdS black holes from QFT, arXiv:1810.12067 [hep-th]
[26] Francesco Benini, Elisa Milan, Black holes in 4d N=4 Super-Yang-Mills, Phys.Rev.X 10 (2020) 021037, arXiv:1812.09613 [hep-th]
[27] Chi-Ming Chang, Ying-Hsuan Lin, Words to describe a black hole, arXiv:2209.06728 [hep-th]
[28] Shao-Wen Wei, Yu-Xiao Liu, Insight into the Microscopic Structure of an AdS Black Hole from Thermodynamical Phase Transition, Phys.Rev.Lett. 115 (2015) 111302, arXiv:1502.00386 [gr-qc]
[29] Xiangqing Kong, Tao Wang, Liu Zhao, High temperature AdS black holes are low temperature quantum phonon gases, Phys.Lett.B 836 (2023) 137623, arXiv:2209.12230 [hep-th]
[30] Francois Gelis, Some Aspects of the Theory of Heavy Ion Collisions, Rept.Prog.Phys. 84 (2021) 5, 056301, arXiv:2102.07604 [hep-ph]
[31] Alexander Soloviev, Hydrodynamic attractors in heavy ion collisions: a review, Eur.Phys.J.C 82 (2022) 4, 319, arXiv:2109.15081 [hep-th]
[32] STAR Collaboration (L. Adamczyk et al.), Global Λ hyperon polarization in nuclear collisions: evidence for the most vortical fluid, Nature 548, 62 (2017), arXiv:1701.06657 [nucl-ex]
[33] Nils Sass, Marco Müller, Oscar Garcia-Montero, Hannah Elfner, Global angular momentum generation in heavy-ion reactions within a hadronic transport approach, arXiv:2212.14385 [nucl-th]
[34] Brett McInnes, Applied Holography of the AdS5-Kerr Spacetime, Int.J.Mod.Phys.A 34 (2019) 24, 1950138, arXiv:1803.02528 [hep-ph]
[35] Yidian Chen, Danning Li, Mei Huang, Inhomogeneous chiral condensation under rotation in the holographic QCD, Phys.Rev.D 106 (2022) 10, 106002, arXiv:2208.05668 [hep-ph]
[36] Anastasia A. Golubtsova, Nikita S. Tsegel'nik, Probing the holographic model of N=4 SYM rotating quark-gluon plasma, arXiv:2211.11722 [hep-th]
[37] Nelson R.F. Braga, Luiz F. Ferreira, Octavio C. Junqueira, Configuration entropy of a rotating quark-gluon plasma from holography, arXiv:2301.01322 [hep-th]
[38] Yu.B. Ivanov, A.A. Soldatov, Entropy Production and Effective Viscosity in Heavy-Ion Collisions, Eur.Phys.J. A52 (2016) 12, 367, arXiv:1605.02476 [nucl-th]
[39] Yu.B. Ivanov, A.A. Soldatov, Vorticity in heavy-ion collisions at the JINR Nuclotron-based Ion Collider fAcility, Phys.Rev.C 95 (2017) 054915, arXiv:1701.01319 [nucl-th]
[40] M.N. Chernodub, Instantons in rotating finite-temperature Yang-Mills gas, arXiv:2208.04808 [hep-th]
[41] N.S. Tsegelnik, E.E. Kolomeitsev, V. Voronyuk, Helicity and vorticity in heavy-ion collisions at NICA energies, arXiv:2211.09219 [nucl-th]
[42] S.J. Barnett, Magnetization by Rotation, Phys.Rev. 6 (1915) 239
[43] Vladimir A. Miransky, Igor A. Shovkovy, Quantum field theory in a magnetic field: From quantum chromodynamics to graphene and Dirac semimetals, Physics Reports 576 (2015) 1-209, arXiv:1503.00732 [hep-ph]
[44] Kazuya Mameda, Arata Yamamoto, Magnetism and rotation in relativistic field theory, Prog.Theor.Exp.Phys. (2016) 093B05, arXiv:1504.05826 [hep-th]
[45] M.N. Chernodub, Shinya Gongyo, Interacting fermions in rotation: chiral symmetry restoration, moment of inertia and thermodynamics, JHEP 01 (2017) 136, arXiv:1611.02598 [hep-th]
QCD phase structure under rotation. Xu-Guang Hao-Lei Chen, Jinfeng Huang, Liao, arXiv:2108.00586Lecture Notes in Physics. 98711hep-phHao-Lei Chen, Xu-Guang Huang, Jinfeng Liao, QCD phase structure under ro- tation, Lecture Notes in Physics vol. 987, Chapter 11, pages 349-379 (2021), arXiv:2108.00586 [hep-ph]
Thermomagnetic Properties of QCD. Christoph P Hofmann, arXiv:2012.06461Phys. Rev. D. 10414025hep-phChristoph P. Hofmann, Thermomagnetic Properties of QCD, Phys. Rev. D 104, 014025 (2021), arXiv:2012.06461 [hep-ph]
Paramagnetic Squeezing of QCD Matter. G S Bali, F Bruckmann, G Endrődi, A Schäfer, arXiv:1311.2559Phys. Rev. Lett. 11242301hep-latG.S. Bali, F. Bruckmann, G. Endrődi, and A. Schäfer, Paramagnetic Squeezing of QCD Matter, Phys. Rev. Lett. 112, 042301, arXiv:1311.2559 [hep-lat]
M N Chernodub, V A Goy, A V Molochkov, arXiv:2209.15534Inhomogeneity of rotating gluon plasma and Tolman-Ehrenfest law in imaginary time: lattice results for fast imaginary rotation. hep-latM. N. Chernodub, V. A. Goy, A. V. Molochkov, Inhomogeneity of rotating gluon plasma and Tolman-Ehrenfest law in imaginary time: lattice results for fast imaginary rotation, arXiv:2209.15534 [hep-lat]
Gauge/string duality applied to heavy ion collisions: Limitations, insights and prospects. David Mateos, arXiv:1106.3295J.Phys.G. 38124030hep-thDavid Mateos, Gauge/string duality applied to heavy ion collisions: Limitations, insights and prospects, J.Phys.G G38 (2011) 124030, arXiv:1106.3295 [hep-th]
Dan Hooper, Aurora Ireland, Gordan Krnjaic, arXiv:2206.04066Cosmological Magnetic Fields from Primordial Kerr-Newman Black Holes. astro-ph.CODan Hooper, Aurora Ireland, Gordan Krnjaic, Cosmological Magnetic Fields from Primordial Kerr-Newman Black Holes, arXiv:2206.04066 [astro-ph.CO]
I J Araya, N D Padilla, M E Rubio, J Sureda, J Magaña, L Osorio, arXiv:2207.05829Dark matter from primordial black holes would hold charge. astro-ph.COI. J. Araya, N. D. Padilla, M. E. Rubio, J. Sureda, J. Magaña, L. Osorio, Dark matter from primordial black holes would hold charge, arXiv:2207.05829 [astro-ph.CO]
Phenomenology of Magnetic Black Holes with Electroweak-Symmetric Coronas. Yang Bai, Joshua Berger, Mrunal Korwar, Nicholas Orlofsky, arXiv:2007.03703JHEP. 10210hep-phYang Bai, Joshua Berger, Mrunal Korwar, Nicholas Orlofsky, Phenomenology of Magnetic Black Holes with Electroweak-Symmetric Coronas, JHEP 10 (2020) 210, arXiv:2007.03703 [hep-ph]
Astrophysical hints for magnetic black holes. Diptimoy Ghosh, Arun Thalapillil, Farman Ullah, arXiv:2009.03363Phys. Rev. D. 10323006hep-phDiptimoy Ghosh, Arun Thalapillil, Farman Ullah, Astrophysical hints for magnetic black holes, Phys. Rev. D 103, 023006 (2021), arXiv:2009.03363 [hep-ph]
Comments on magnetic black holes. Juan Maldacena, arXiv:2004.06084JHEP. 0479hep-thJuan Maldacena, Comments on magnetic black holes, JHEP 04 (2021) 079, arXiv:2004.06084 [hep-th]
Predictivity lost, predictivity regained: a Miltonian cosmic censorship conjecture. Roberto Emparan, arXiv:2005.07389Int.J.Mod.Phys.D. 292043021hepthRoberto Emparan, Predictivity lost, predictivity regained: a Miltonian cosmic cen- sorship conjecture, Int.J.Mod.Phys.D 29 (2020) 14, 2043021, arXiv:2005.07389 [hep- th]
Black Tsunamis and Naked Singularities in AdS. Roberto Emparan, David Licht, Ryotaku Suzuki, Marija Tomašević, Benson Way, arXiv:2112.07967J. High Energ. Phys. 202290hep-thRoberto Emparan, David Licht, Ryotaku Suzuki, Marija Tomašević, Benson Way, Black Tsunamis and Naked Singularities in AdS, J. High Energ. Phys. 2022, 90 (2022), arXiv:2112.07967 [hep-th]
Rotation and the AdS/CFT correspondence. S W Hawking, C J Hunter, M M Taylor-Robinson, arXiv:hep-th/9811056Phys.Rev. 5964005S.W.Hawking, C.J.Hunter, M.M.Taylor-Robinson, Rotation and the AdS/CFT cor- respondence, Phys.Rev.D59:064005,1999, arXiv:hep-th/9811056
M Marco, Guido Caldarelli, Dietmar Cognola, Klemm, arXiv:hep-th/9908022Thermodynamics of Kerr-Newman-AdS Black Holes and Conformal Field Theories. 17399Marco M. Caldarelli, Guido Cognola, Dietmar Klemm, Thermodynamics of Kerr- Newman-AdS Black Holes and Conformal Field Theories, Class.Quant.Grav. 17 (2000) 399, arXiv:hep-th/9908022
G W Gibbons, M J Perry, C N Pope, arXiv:hep-th/0408217The First Law of Thermodynamics for Kerr-Anti-de Sitter Black Holes. 22G.W. Gibbons, M.J. Perry, C.N. Pope, The First Law of Thermodynam- ics for Kerr-Anti-de Sitter Black Holes, Class.Quant.Grav.22:1503-1526,2005, arXiv:hep-th/0408217
The Special Role of Toroidal Black Holes in Holography. Brett Mcinnes, 2206.00198gr-qcBrett McInnes, The Special Role of Toroidal Black Holes in Holography, 2206.00198 [gr-qc]
Holographic CFT phase transitions and criticality for charged AdS black holes. Wan Cong, David Kubiznak, Robert B Mann, R Manus, Visser, arXiv:2112.14848JHEP. 08174hep-thWan Cong, David Kubiznak, Robert B. Mann, Manus R. Visser, Holographic CFT phase transitions and criticality for charged AdS black holes, JHEP 08 (2022) 174, arXiv: 2112.14848 [hep-th]
Thermal Phase Transition, And Confinement In Gauge Theories. Edward Witten, Anti-De Sitter, Space, arXiv:hep-th/9803131Adv.Theor.Math.Phys. 2Edward Witten, Anti-de Sitter Space, Thermal Phase Transition, And Confinement In Gauge Theories, Adv.Theor.Math.Phys. 2 (1998) 505-532, arXiv:hep-th/9803131
Yen Chin Ong, How Anti-de Sitter Black Holes Reach Thermal Equilibrium. Ru Ling, Hao Xu, arXiv:2107.01556Phys.Lett.B. 826136896gr-qcRu Ling, Hao Xu, Yen Chin Ong, How Anti-de Sitter Black Holes Reach Thermal Equilibrium, Phys.Lett.B 826 (2022) 136896, arXiv:2107.01556 [gr-qc]
. Clifford V Johnson, D -Branes, Cambridge University PressCambridgeClifford V. Johnson, D-Branes, Cambridge University Press, Cambridge, 2002
Color superconductivity in dense quark matter. G Mark, Krishna Alford, Thomas Rajagopal, Andreas Schaefer, Schmitt, arXiv:0709.4635Rev.Mod.Phys. 80hep-phMark G. Alford, Krishna Rajagopal, Thomas Schaefer, Andreas Schmitt, Color superconductivity in dense quark matter, Rev.Mod.Phys. 80 (2008) 1455-1515, arXiv:0709.4635 [hep-ph]
Superradiance -the 2020 Edition. Richard Brito, Vitor Cardoso, Paolo Pani, arXiv:1501.06570Lecture Notes in Physics. 906Springer-Verlaggr-qcRichard Brito, Vitor Cardoso, Paolo Pani, Superradiance -the 2020 Edition, Lecture Notes in Physics volume 906 (Springer-Verlag, 2015), arXiv:1501.06570 [gr-qc]
The Weak Gravity Conjecture Requires the Existence of Exotic AdS Black Holes. Brett Mcinnes, arXiv:2104.07373Nucl.Phys.B. 971115525gr-qcBrett McInnes, The Weak Gravity Conjecture Requires the Existence of Exotic AdS Black Holes, Nucl.Phys.B 971 (2021) 115525 arXiv:2104.07373 [gr-qc]
On the third law of black hole dynamics. K Naresh Dadhich, Narayan, arXiv:gr-qc/9704070Phys.Lett. 231Naresh Dadhich, K. Narayan, On the third law of black hole dynamics, Phys.Lett. A231 (1997) 335-338, arXiv:gr-qc/9704070
T Gary, Maciej Horowitz, Jorge E Kolanowski, Santos, arXiv:2210.02473Almost all extremal black holes in AdS are singular. hep-thGary T. Horowitz, Maciej Kolanowski, Jorge E. Santos, Almost all extremal black holes in AdS are singular, arXiv:2210.02473 [hep-th]
The String Landscape, Black Holes and Gravity as the Weakest Force. Nima Arkani-Hamed, Lubos Motl, Alberto Nicolis, Cumrun Vafa, arXiv:hep-th/0601001JHEP. 070660Nima Arkani-Hamed, Lubos Motl, Alberto Nicolis, Cumrun Vafa, The String Landscape, Black Holes and Gravity as the Weakest Force, JHEP 0706:060,2007, arXiv:hep-th/0601001
Higher-order corrections to mass-charge relation of extremal black holes. Yevgeny Kats, Lubos Motl, Megha Padi, arXiv:hep-th/0606100JHEP. 071268Yevgeny Kats, Lubos Motl, Megha Padi, Higher-order corrections to mass-charge relation of extremal black holes, JHEP 0712:068,2007, arXiv:hep-th/0606100
Unitarity, and the Weak Gravity Conjecture. Nima Arkani-Hamed, Yu-Tin Huang, Jin-Yu Liu, Grant N Remmen, Causality , arXiv:2109.13937JHEP. 0383hep-thNima Arkani-Hamed, Yu-tin Huang, Jin-Yu Liu, Grant N. Remmen, Causality, Uni- tarity, and the Weak Gravity Conjecture, JHEP 03 (2022) 083, arXiv:2109.13937 [hep-th]
Black Holes in Higher Dimensions. Roberto Emparan, Harvey S Reall, arXiv:0801.3471Living Rev.Rel. 116hep-thRoberto Emparan, Harvey S. Reall, Black Holes in Higher Dimensions, Living Rev.Rel.11:6,2008, arXiv:0801.3471 [hep-th]
The Beam Energy Scan at the Relativistic Heavy Ion Collider. Declan Keane, J.Phys.Conf.Ser. 878112015Declan Keane, The Beam Energy Scan at the Relativistic Heavy Ion Collider, J.Phys.Conf.Ser. 878 (2017) no.1, 012015
Rotating quark-gluon plasma in relativistic heavy ion collisions. Y Jiang, Z.-W Lin, J Liao, arXiv:1602.06580Phys. Rev. 9444910hep-phY. Jiang, Z.-W. Lin, J. Liao, Rotating quark-gluon plasma in relativistic heavy ion collisions. Phys. Rev. C94, 044910 (2016), arXiv:1602.06580 [hep-ph]
Measurements of spin alignment of vector mesons and global polarization of hyperons with ALICE at the LHC. arXiv:1711.02018Bedangadas Mohanty for the. 17116008Bedangadas Mohanty for the ALICE Collaboration, Measurements of spin alignment of vector mesons and global polarization of hyperons with ALICE at the LHC, EPJ Web Conf. 171 (2018) 16008, arXiv:1711.02018